Scripted Migration Utility

New in Release 6.1: end-users can now mark content they would like to have migrated.

As of Release 5.6.0, you can migrate content using our Export/Import Migration Scripts. This enables System Admins to set up a content migration pipeline in which, at a specified time each day (scheduled via a tool such as cron), all content is migrated from a staging environment to a production environment.
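For example, such a pipeline could be driven by a crontab entry on the staging server. This is a sketch: the schedule, output path, and Category ID are illustrative, and the export script name (which this article leaves unspecified) must be filled in for your installation.

```shell
# Illustrative crontab entry: export Category 86 nightly at 01:00.
# The export script name under /opt/mi/generator/ is not given in this
# article; substitute the one from your installation.
0 1 * * * /opt/mi/.python/bin/python /opt/mi/generator/<export-script> -c 86 -f /var/backups/mi/nightly.tar.gz -a
```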

The Scripted Migration process includes two main stages: exporting content from the source server and importing it on the target server.

This article details how to move a Category and all included elements from one server to another. The process of exporting individual Objects or Elements is essentially the same.


Migration Capabilities

Note: Root privileges are required to run Migration.

  • MI Elements and Objects: can be migrated.
    BEFORE MIGRATION, on the new instance make sure to:
      1. Recreate Dimensions for all dimensioned Elements that are migrated.
      2. Recreate Dimensions for all Elements with Filters mapped to Dimensions.
      3. Establish connectivity to all BI tools (by creating respective connection profiles) that serve as Data Sources for migrated Elements/Objects.
  • External Reports: can be migrated.
  • Datasets/Users Maps: can be migrated.
  • Folders: can be migrated (6.2.1 and beyond).
  • Plugin Data Sources: cannot be migrated directly.

ALL OBJECTS AND ENTITIES that cannot be migrated directly have to be rebuilt on the new instance.

Key Migration Dependencies

Element/Object IDs
  • Object/Element IDs are preserved unless there are identical IDs on the new server. 
  • In case of existing duplicates, migrated Objects and Elements will be assigned new IDs.
Data Sources
  • If the migrated content was sourced from a Dataset, the Dataset will be imported as well.
  • External Connections to other systems (BI tools) have to be recreated manually on the new server.
Technical/Business Owners
  • Migrated Objects and Elements will retain their Technical/Business Owners if these Users exist on the new instance.
  • Search for User matches is performed first by email and, if there are no hits, by Username.
  • If there are no matching Users, Migrated Objects/Elements will be assigned a new Owner (the first admin that is found on the new server).

1. Exporting Content

Exporting content involves creating a .json file with information on all migrated Elements/Objects that can later be uploaded to a different server.

  • The script is used to export a Category.
  • All associated Documents can be exported in an archive as separate files.

To initiate export, run the following command:

sudo /opt/mi/.python/bin/python /opt/mi/generator/ -c 86 -f /<directory>/<archive name>.tar.gz -a


  1. /opt/mi/.python/bin/python is the Python interpreter (installed during the installation of the MI application)
  2. /opt/mi/generator/ is the path to the script
  3. -c is the Category parameter, followed by the Category ID (86 in this example)
  4. -f is the parameter that allows Users to create a .json file or a .tar.gz archive
  5. <directory> is the directory where the dump file will be created
  6. <archive name>.tar.gz is a user-defined archive name
  7. -a is the archive parameter
    • an archived export allows for migration of Documents that are associated with the migrated Elements/Objects
    • the -a parameter is optional; if Document files do not need to be migrated, a single .json file can be exported

To export a Category:

  1. Run the export script with the desired parameters
  2. Check the response for errors
  3. In the output, review the export details
  4. [Optionally] verify that the Category was saved to the .json file or .tar.gz archive with the name you specified
  5. [Optionally] view the contents of the exported .tar.gz archive
    • In an archive, Documents are stored under numeric IDs (these IDs are relevant only within the archive)
    • The actual Document names are stored inside the .json file
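Steps 2 through 5 above can be scripted. The following is a minimal sketch, assuming the export was written to a known path; the verify_export helper is hypothetical, not part of the utility.

```shell
# Hypothetical helper: confirm the export archive exists and list its
# contents (Documents appear under numeric, archive-relative IDs).
verify_export() {
  archive="$1"
  if [ ! -f "$archive" ]; then
    echo "export archive not found: $archive" >&2
    return 1
  fi
  tar -tzf "$archive"
}
```

Running `verify_export /<directory>/<archive name>.tar.gz` prints the archived file names; the .json file inside maps the numeric Document IDs back to their real names.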

1.1. Optional Parameters for export

The list of arguments available at export:

  • -h, --help: show help message and exit
  • -f: output file name
  • -c: Category ID to dump
  • Element IDs to dump (comma-separated)
  • Dataset IDs to dump (comma-separated)
  • Do not import any dependent access maps
  • -a, --archive: make a tar.gz archive with related documents

2. Importing content

Importing content involves uploading it to the target server.

  • The script is used to import Elements and Objects.


  • Before running the import script, copy the saved .tar.gz archive or .json file to the server where your content needs to be imported.

To initiate import, run the following command:

sudo /opt/mi/.python/bin/python /opt/mi/generator/ -f /<directory>/<archive name>.tar.gz -b ~/.


  1. /opt/mi/.python/bin/python is the Python interpreter (installed during the installation of the MI application)
  2. /opt/mi/generator/ is the path to the script
  3. -f is the parameter that allows Users to specify the .json file or .tar.gz archive that will be uploaded to the new server
  4. <directory> is the directory from which the upload will be run
  5. <archive name>.tar.gz is the user-defined archive name
  6. -b is the backup parameter; existing elements are backed up to the specified location before being replaced

To import a Category:

  1. Run the import script with the desired parameters
  2. Check the response for errors
  3. In the output, review the import details
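The copy-and-import steps can likewise be scripted. The following is a minimal sketch, assuming SSH access from the staging host to the production host; the host name, archive path, and run_and_check wrapper are illustrative, and the import script name is left unspecified as in the commands above.

```shell
# Hypothetical wrapper: run one migration step and stop with a clear
# message if the command reports errors.
run_and_check() {
  "$@" || { echo "migration step failed: $*" >&2; return 1; }
}

# Usage on the staging host (adjust host, paths, and script name):
#   run_and_check scp /var/backups/mi/nightly.tar.gz prod:/tmp/nightly.tar.gz
#   run_and_check ssh prod "sudo /opt/mi/.python/bin/python /opt/mi/generator/<import-script> -f /tmp/nightly.tar.gz -b ~/."
```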

2.1. Optional Parameters for import

The list of arguments available at import:

  • -h, --help: show this help message and exit
  • -m, --match: load matching elements
  • -b BACKUP, --backup BACKUP: backup elements before replacing
  • -f: file to load elements from
  • -n, --no-preserve: do not preserve IDs
  • -s, --strict: delete elements in the target category if they are not in the source category
  • Do not overwrite access map configuration

3. Verify Migration Results

Upon successful Migration, all migrated content will be accessible from the UI.

4. If Migration runs with errors

If Migration runs with errors:

  1. Verify that all the Migration Prerequisites have been met (for details, check the script response and output).
  2. After eliminating the cause of the errors, rerun the import script to update the Migration results.