Full export.json format.

Below is the full list of the export.json file parameters.


Field Data Type Optional/Default Description
orgs Array (ScriptOrg[]) Optional / No default Credentials of the Salesforce orgs you want to process.
This parameter is optional and is needed only if you want to configure a manual connection to any of the processed orgs.
Alternatively, you can let the Plugin take the SFDX connection stored in the local system. In this case, simply omit the orgs section.
ScriptOrg.name String Mandatory Username to connect with.
ScriptOrg.instanceUrl String Mandatory Org instance url.
ScriptOrg.accessToken String Mandatory Access token to connect.
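For illustration, a minimal sketch of the orgs section (the usernames, instance URLs, and access tokens below are placeholders, not real credentials):

```json
{
  "orgs": [
    {
      "name": "source@mycompany.com",
      "instanceUrl": "https://mycompany-source.my.salesforce.com",
      "accessToken": "00Dxx0000000000!AQ..."
    },
    {
      "name": "target@mycompany.com",
      "instanceUrl": "https://mycompany-target.my.salesforce.com",
      "accessToken": "00Dyy0000000000!AR..."
    }
  ]
}
```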
pollingIntervalMs Integer Optional / Default 5000 When the Bulk API is used, this parameter defines the polling interval (in milliseconds) for checking the bulk job status. Decreasing this value may cause additional system load.
concurrencyMode String Optional / Default "Parallel" When the Bulk API V1 is used, this parameter defines the concurrency mode for the bulk operations. The valid values are: "Serial", "Parallel". Make sure that the spelling is correct.
bulkThreshold Integer Optional / Default 200 For better performance the plugin uses both the Collection API for small data sets and the Bulk API for large data processing. The Collection API is a fast way to process records, but it consumes a lot of the API request quota, so for large data volumes it is better to use the Bulk API. This parameter defines the minimum number of records at which processing switches from the Collection API to the Bulk API.
alwaysUseRestApiToUpdateRecords Boolean Optional / Default false A true value will suppress using the Bulk Api and force the plugin to always update records through the Rest Api engine, even when the number of updated records exceeds the bulkThreshold limit.
apiVersion String (in float format) Optional / Default "52.0" API version number to use.
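A hypothetical sketch combining the API and performance parameters above with one object entry:

```json
{
  "pollingIntervalMs": 3000,
  "concurrencyMode": "Serial",
  "bulkThreshold": 500,
  "alwaysUseRestApiToUpdateRecords": false,
  "apiVersion": "52.0",
  "objects": [
    {
      "query": "SELECT Id, Name FROM Account",
      "operation": "Upsert",
      "externalId": "Name"
    }
  ]
}
```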
dataRetrievedAddons Array (AddonManifestDefinition[]) Optional Defines the array of SFDMU Add-On modules that should be triggered AFTER the Plugin has finished retrieving ALL the source and target data for all objects in the script and BEFORE it starts to update the Target.
At this point you already have all source and target records for all objects and can manipulate them, then upload their updated versions.
(See also: Introduction to the SFDMU Add-On API Engine)
objects Array (ScriptObject[]) Optional SObjects you want to process.
excludedObjects Array (String) Optional List of sObject API names to exclude from the job globally across all objectSets.
Basically, it's the same as excluding objects using "excluded": true, but it provides a handy and quick way to exclude objects when you have multiple objectSets and want to control object exclusion globally from one place.
objectSets Array (ScriptObjectSet[]) Optional List of sub-sets of SObjects you want to process.

(See also: Multiple Object Sets)
ScriptObjectSet.objects Array (ScriptObject[]) Mandatory SObjects you want to process within the current sub-set.
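A hypothetical example of multiple object sets combined with a global excludedObjects list (the sObject choices here are illustrative; Case is excluded from both sets even though it is defined in the second one):

```json
{
  "excludedObjects": ["Case"],
  "objectSets": [
    {
      "objects": [
        { "query": "SELECT Id, Name FROM Account", "operation": "Upsert", "externalId": "Name" }
      ]
    },
    {
      "objects": [
        { "query": "SELECT Id, LastName, AccountId FROM Contact", "operation": "Upsert", "externalId": "LastName" },
        { "query": "SELECT Id, Subject FROM Case", "operation": "Upsert", "externalId": "Subject" }
      ]
    }
  ]
}
```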
ScriptObject.query String Mandatory SOQL query string.
Include in the query string all the SObject's fields that you need to export, including referenced fields. It is enough to simply list the fields; the plugin will automatically resolve and process all the references between objects. Data from fields that are not listed in the query will not be exported.
Optionally you can filter the records by using WHERE, LIMIT, OFFSET etc. clauses.
Nested queries and complex fields like Account__r.Name are not supported. However, you can still use a subquery in the WHERE clause, for example: .... WHERE Id IN (SELECT Id FROM .... )
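A sketch of a query entry using a WHERE-clause subquery along with a LIMIT (the object and field names are illustrative):

```json
{
  "query": "SELECT Id, Name, Phone FROM Account WHERE Id IN (SELECT AccountId FROM Contact WHERE Email != null) LIMIT 1000",
  "operation": "Upsert",
  "externalId": "Name"
}
```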
ScriptObject.deleteQuery String Optional, Default none SOQL query string used to delete old records from the target org (see the "Delete" operation and the "deleteOldData" parameter below).
If this parameter is omitted, the same ScriptObject.query will be used to retrieve records both for deletion and for update.
ScriptObject.operation String Mandatory Operation that you want to perform with the current SObject.
Available values are:
"Insert" - creates new records on the target org even if old versions of these records already exist.
"Update" - only updates existing records. The operation overrides all record fields.
"Upsert" - Inserts new and updates old records, overriding all values.
"Readonly" - indicates that you don't want to update this object, only retrieve its records during the export. Useful when you need to include a readonly SObject that is referenced from another SObject.
"Delete" - Only removes old target records from the given sObject. No update performed.
"HardDelete" - Same as "Delete". When using the Bulk API this performs a hard deletion of the records, which does not put the deleted records into the Recycle Bin.
"DeleteSource" - Removes records from the Source, the same as ScriptObject.deleteFromSource.
"DeleteHierarchy" - Removes hierarchical records from the Target, the same as ScriptObject.deleteByHierarchy.
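A hypothetical objects array mixing several operations (object names are illustrative): the parent Account is read-only, Contacts are upserted, and old Tasks are deleted from the target:

```json
{
  "objects": [
    { "query": "SELECT Id, Name FROM Account", "operation": "Readonly", "externalId": "Name" },
    { "query": "SELECT Id, LastName, AccountId FROM Contact", "operation": "Upsert", "externalId": "LastName" },
    { "query": "SELECT Id FROM Task", "operation": "Delete", "externalId": "Id" }
  ]
}
```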
ScriptObject.externalId String Mandatory External Id field for this SObject.
This is the unique identifier field, the field which can map any child records referring to this SObject.
Any field that has unique values across all records can be used as an External Id, even if it's not marked as an External Id in the SObject's metadata. You can also use standard External Id fields defined in the metadata.

This is used to compare source and target records as well as to process the relationships between objects during all operations except "Insert" and "Readonly".
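A sketch of a composite external Id (the custom objects Payment__c and Account__c are hypothetical; per the notes at the end of this document, composite keys join field API names with a semicolon and are case-sensitive):

```json
{
  "query": "SELECT Id, Name, Account__c FROM Payment__c",
  "operation": "Upsert",
  "externalId": "Name;Account__r.Name"
}
```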
ScriptObject.master Boolean Optional, Default true A true value tells the Plugin that this object is a "MASTER".
This means that you want to put this object at the top level of your data model hierarchy. The Plugin will always process all records for this object (still respecting the object's defined limitations or WHERE expression). For example, if LIMIT 100 is defined, the Plugin will process only 100 records.
A false value makes this object a "CHILD" of the MASTER objects in the script.
A false value forces the Plugin to detect and process the minimum possible sub-set of records required to keep the relationships between objects. In this case, even if LIMIT 100 is defined, the Plugin might process a different number of records depending on the current situation.
ScriptObject.allRecords (deprecated) Boolean Optional, Default true (Deprecated) Replaced with ScriptObject.master.
ScriptObject.deleteOldData Boolean Optional, Default false Forces deletion of old target records before performing update.
The difference from the "Delete" operation (see above) is that the "Delete" operation performs only the deletion, without subsequently inserting or updating records in the target environment, whereas deleteOldData performs the deletion prior to whatever main DML operation is specified for the given ScriptObject.
ScriptObject.hardDelete Boolean Optional, Default false Enforces hard deletion of the records instead of regular deletion.

Same as the "HardDelete" operation, but works in conjunction with the other delete operations: "Delete", "DeleteSource" and "DeleteHierarchy".
Also hard deletes old records when ScriptObject.deleteOldData = true and the Bulk Api is currently used.
ScriptObject.useQueryAll Boolean Optional, Default false Enforces using the /queryAll API endpoint instead of /query when querying the source records.
This allows you to include deleted records in the query result.
You can also retrieve only the deleted records by using the following expression: WHERE IsDeleted = true

Note that this parameter does not change the behavior of querying the target records.
ScriptObject.queryAllTarget Boolean Optional, Default false Allows always querying ALL records from the Target org.

When true - automatically ignores WHERE/LIMIT/OFFSET/ORDER BY clauses defined in the object's SOQL query when querying the Target org.
The full query is still applied to pull records from the Source org.
This option is very useful when you have to update the same target records that were selected from the Source using some filter, e.g. your current challenge is to select only the most recently modified records from the Source and to update the same records on the target side (matched by the specified external Id key).
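A sketch of that scenario (the date filter and object are illustrative): only the recently modified source records are pulled, while queryAllTarget makes the Plugin query all target records so the matching by external Id still succeeds:

```json
{
  "query": "SELECT Id, Name FROM Account WHERE LastModifiedDate = LAST_N_DAYS:7",
  "operation": "Upsert",
  "externalId": "Name",
  "queryAllTarget": true
}
```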
ScriptObject.deleteFromSource (deprecated) Boolean Optional, default false (deprecated, replaced with the operation DeleteSource)

When operation="Delete", this option enables deletion from the Source environment instead of the Target.
This feature allows you to delete all records which are selected from the Source based on the export.json object configuration, considering the detected relationships between the objects.

(See the details here: Delete from source feature)
ScriptObject.deleteByHierarchy (deprecated) Boolean Optional, default false (deprecated, replaced with the operation DeleteHierarchy)

When operation="Delete", this option enables deletion of the hierarchical data from the Target, based on the export.json object configuration and on the data retrieved from the Source environment or from the source csv file.

(See the details here: Delete by Hierarchy feature)
ScriptObject.updateWithMockData Boolean Optional, default false Enables data mocking for this SObject.
(See Advanced features for the full explanation)
ScriptObject.mockCSVData Boolean Optional, default false Enables export of data to CSV files with masked values instead of the original.
ScriptObject.targetRecordsFilter String Optional Additional SOQL query you can use to filter out unwanted target data just before it is sent to the Target.

Target data means the records directly provided in the API request (Bulk API request or REST request) used to update the target environment or to generate the target CSV file.
ScriptObject.excluded Boolean Optional, Default false Set to true to exclude corresponding sObject from the migration process.

This parameter is useful when you want to exclude a certain sObject from the process while leaving its definition in the export.json file.
ScriptObject.excludedFields Array (string) Optional Array of field names that should be excluded from the original SOQL query.
Allows you to define an array of certain fields to be excluded when using multiselect keywords in the object query.
(See the Advanced features for the full explanation)
ScriptObject.excludedFromUpdateFields Array (string) Optional Array of field names that should be excluded from the target update.

The listed fields will still be included in the SOQL query and retrieved from both the Source and the Target, but they will not be updated in the Target.

This property is useful if you want to retrieve specific fields (for example, to use with the TransformRecords Core Add-On) but don't need them to be updated in the Target environment.

To completely exclude the fields from the process consider using the excludedFields property.
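A hypothetical sketch combining both exclusion properties (this assumes the multiselect "all" keyword described in the Advanced features documentation; Description is dropped from the query entirely, while Industry is retrieved but never written to the Target):

```json
{
  "query": "SELECT all FROM Account",
  "operation": "Upsert",
  "externalId": "Name",
  "excludedFields": ["Description"],
  "excludedFromUpdateFields": ["Industry"]
}
```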
ScriptObject.useCSVValuesMapping Boolean Optional, Default false When set to true and CSV files are used as the data source, it enables changing the raw values from the CSV file according to the mapping table coming from an additional CSV file. Data will be uploaded into the Target after the transformation is done.
(See the Advanced features for the full explanation)
ScriptObject.useValuesMapping Boolean Optional, Default false When set to true and a Salesforce org is set as the data source, it enables changing the raw source values according to the mapping table coming from an additional CSV file. Data will be uploaded into the Target after the transformation is done.
ScriptObject.useFieldMapping Boolean Optional, Default false When set to true it enables the Field Mapping according to the fieldMapping property.
ScriptObject.mockFields Array (MockField[]) Optional Defines the SObject fields that need to be updated with fake data.
(See the Advanced features for the full explanation)
ScriptObject.parallelBulkJobs Integer Optional See the global parallelBulkJobs parameter
ScriptObject.parallelRestJobs Integer Optional See the global parallelRestJobs parameter
MockField.name String Mandatory The name of the field to mock.
MockField.pattern String Mandatory The pattern to create mock data for this field.
MockField.excludedRegex String Optional The JS regex expression to exclude the values that should not be masked.
MockField.includedRegex String Optional The JS regex expression to include only the values that should be masked. When defined, values that do not match the rule will not be masked.
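A hypothetical mockFields configuration (the pattern values "name" and "email" assume the mock-pattern syntax described in the Advanced features documentation; the regex keeps internal company addresses unmasked):

```json
{
  "query": "SELECT Id, LastName, Email FROM Contact",
  "operation": "Upsert",
  "externalId": "LastName",
  "updateWithMockData": true,
  "mockFields": [
    { "name": "LastName", "pattern": "name" },
    { "name": "Email", "pattern": "email", "excludedRegex": "@mycompany\\.com$" }
  ]
}
```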
ScriptObject.beforeAddons Array (AddonManifestDefinition[]) Optional Defines the array of SFDMU Add-On modules that should be triggered on the given parent object BEFORE the object's records are processed.
(See also: Introduction to the SFDMU Add-On API Engine)
ScriptObject.afterAddons Array (AddonManifestDefinition[]) Optional Defines the array of SFDMU Add-On modules that should be triggered on the given parent object only AFTER the object's records have been fully processed.
(See also: Introduction to the SFDMU Add-On API Engine)
ScriptObject.beforeUpdateAddons Array (AddonManifestDefinition[]) Optional Defines the array of SFDMU Add-On modules, that should be triggered on the given parent object right BEFORE the sObject is updated in the target SF environment. You can access and modify the source records at this moment.
(See also: The Custom SFDMU Add-On Api)
ScriptObject.filterRecordsAddons Array (AddonManifestDefinition[]) Optional Defines the array of SFDMU Add-On modules that should be triggered on the given parent object when the source records are prepared to be uploaded to the target SF environment. You can access and modify the records, and the Tool will then upload the modified version of the records.

(See also: The Custom SFDMU Add-On Api)
ScriptObject.restApiBatchSize Integer Optional The maximal size of each batch while processing the records by the Rest Api.

This property has the same purpose as the global restApiBatchSize property (see below), but it also allows you to set the batch size per object.
ScriptObject.bulkApiV1BatchSize Integer Optional The maximal size of each batch while processing the records by the Bulk Api V1.

This property has the same purpose as the global bulkApiV1BatchSize property (see below), but it also allows you to set the batch size per object.
ScriptObject.fieldMapping Array (MappingItem[]) Optional Defines the custom mapping between the source and the target field and sObject names when they are different.
(See the Advanced features for the full explanation)
MappingItem.targetObject String Optional/Mandatory depending on the configuration The name of the corresponding sObject on the Target side.
MappingItem.sourceField String Optional/Mandatory depending on the configuration The name of the source object field.
MappingItem.targetField String Optional/Mandatory depending on the configuration The name of the corresponding target object field.
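A hypothetical fieldMapping sketch (the object and field names are placeholders; this assumes the mapping item layout described in the Advanced features documentation, where one item remaps the sObject name and another remaps a field):

```json
{
  "query": "SELECT Id, Name, LegacyId__c FROM Source_Object__c",
  "operation": "Upsert",
  "externalId": "LegacyId__c",
  "useFieldMapping": true,
  "fieldMapping": [
    { "targetObject": "Target_Object__c" },
    { "sourceField": "LegacyId__c", "targetField": "External_Key__c" }
  ]
}
```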
promptOnMissingParentObjects Boolean Optional, Default true If a parent lookup or master-detail record was not found for some of the child records, this parameter controls whether the user is prompted to abort or continue the migration.
It allows the user to monitor the job and abort it when some data is missing.
allOrNone Boolean Optional, Default false Abort the job execution on any failed record, or continue working anyway.
If true, the execution will stop, or the user will be prompted to stop, depending on the promptOnUpdateError parameter.
(Note for REST API only: if true, besides aborting the script execution depending on the promptOnUpdateError parameter, any failed records in a non-successful API call cause all changes made within this call to be rolled back. Record changes aren't committed unless all records are processed successfully.)
promptOnUpdateError Boolean Optional, Default true When some records fail or any other error occurs during the data update, prompts the user to stop the execution or to continue.
promptOnIssuesInCSVFiles Boolean Optional, Default true When issues are found in the source CSV files during validation, prompts the user to stop the execution or to continue.
validateCSVFilesOnly Boolean Optional, Default false In general, when you are using CSV files as the data source, the source CSV files are subject to format validation before the migration job itself runs. validateCSVFilesOnly=true runs only the validation process and stops the execution after it is completed.
createTargetCSVFiles Boolean Optional, Default true If true, the Plugin will produce a CSV file containing the target records for each processed sObject, with error information (if errors occurred) per record.
excludeIdsFromCSVFiles Boolean Optional, Default false If true, all record Id and lookup Id columns will be omitted when exporting to the CSV files.
This makes the CSV files created by the Plugin natively usable with any version control system.
The relationships between objects will still be maintained using external Id lookup columns (like Account__r.Name).

In addition, setting this property to true on import from CSV will force verifying and repairing the source CSV files which were previously exported using this option and are therefore missing all Id columns.
restApiBatchSize Integer Optional The maximal size of each batch while processing the records by the Rest Api.
Large jobs can be internally split into small chunks, as is already available for the Bulk Api. This can be very useful to avoid the "maximum request size exceeded" error when uploading large data, e.g. the Attachment object, which is not natively supported by the Bulk Api.
bulkApiV1BatchSize Integer Optional, Default 9500 The maximal size of each batch while processing the records by the Bulk Api V1 (similar to the restApiBatchSize)
bulkApiVersion String Optional, Default "2.0" The version of Salesforce Bulk Api to use. Valid values are: "1.0" and "2.0"
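A hypothetical sketch of the batch-size and Bulk Api version settings, including a per-object override (the small restApiBatchSize on the Attachment object is illustrative, to keep binary payloads under the request size limit):

```json
{
  "bulkApiVersion": "1.0",
  "bulkApiV1BatchSize": 5000,
  "restApiBatchSize": 500,
  "objects": [
    {
      "query": "SELECT Id, Name, Body FROM Attachment",
      "operation": "Insert",
      "externalId": "Id",
      "restApiBatchSize": 20
    }
  ]
}
```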
importCSVFilesAsIs Boolean Optional, Default false Set to true to skip validation and fixing of the source CSV files and use these files as-is to import the data into the Target org. Typically, the standard Salesforce Data Loader uses this behavior, considering the CSV file ready to use.
You are fully responsible for getting these files ready for the import. You should include all necessary columns, and the record Ids should be correctly linked between the files.

If set to false (the default), the Plugin will try to analyze and repair the source CSVs before loading them, which includes verification, linking between the files to find the appropriate relationships between the imported sObjects, etc.
fileLog Boolean Optional, Default true If set to true (default value) the Plugin will create the log file.
This parameter provides the same functionality as the --filelog flag. It gives an additional option to control the logging from the Script, even if --filelog was omitted from the command line.
keepObjectOrderWhileExecute Boolean Optional, Default false If set to true, the objects are executed in the order in which they appear in the objects[] array. All responsibility for arranging the objects in the proper order to avoid collisions is on the user.
If set to false (the default value), the "Smart order" mode is enabled: the Plugin decides the best order in which to execute the objects.
Note that the RecordType object is still executed at the beginning.
allowFieldTruncation Boolean Optional, Default false Select this option to truncate data in the following types of fields when loading that data into Salesforce: Url, Multi-select Picklist, Phone, Picklist, Text, Text (Encrypted), Text Area (Long). The data will be truncated to the length set by the field metadata. The option is similar to the option available in the Salesforce Data Loader.
simulationMode Boolean Optional, Default false This feature allows you to check which records will be affected by the export.json configuration without actually updating, deleting or inserting them.
It produces exactly the same output (target.csv, .log files) as the live mode, but no actual records are affected.

* Since the Plugin will not create any new records (e.g. for the Insert operation), it produces fake record IDs in the _target.csv files instead of the real record IDs that would appear in the live data migration mode.

* Sometimes the simulated output may differ slightly from the live output.
This is because, in each step, the Plugin normally uses all the previous data to compare the target records against the source records and decide which target records should now be processed. So in some cases the simulated data may differ from the real data, and this is normal behavior of the Plugin.
binaryDataCache String Optional, Default 'InMemory' Allows storing large binary data on the local disk instead of holding it in memory when processing binary files like Attachments or Files. This can speed up the job performance and save memory.

The available values are:
- InMemory - the default value; all blob data is stored in memory just like other record fields.
- CleanFileCache - the blob data is stored on the disk in the ./binary_cache/[source_user_name] directory; the cache is cleared each time the job runs.
- FileCache - the blob data is stored on the disk in the ./binary_cache/[source_user_name] directory; the cache is NOT cleared, and all the previous files persist until cleared manually. This gives the benefit of quickly loading the same binary data from the disk instead of querying the remote source, in case the migration job has failed for some reason and you want to re-run it.
sourceRecordsCache String Optional, Default 'InMemory' Allows storing the records retrieved from the Source on the local disk instead of fetching them again on the next run of the same job. This can speed up the job performance.

The available values are:
- InMemory - the default value; all the source records are stored in memory.
- CleanFileCache - the records are stored on the disk in the ./source_records_cache/[source_user_name] directory; the cache is cleared each time the job runs.
- FileCache - the source records are stored on the disk in the ./source_records_cache/[source_user_name] directory; the cache is NOT cleared, and all the previous files persist until cleared manually. This gives the benefit of quickly loading the same source records from the disk instead of querying the remote source, in case the migration job has failed for some reason and you want to re-run it.
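A hypothetical combination of both cache settings for a binary-heavy, restartable job (the object choice is illustrative): binary blobs persist across runs, while source records are re-cached on each run:

```json
{
  "binaryDataCache": "FileCache",
  "sourceRecordsCache": "CleanFileCache",
  "objects": [
    { "query": "SELECT Id, Name, Body FROM Attachment", "operation": "Insert", "externalId": "Id" }
  ]
}
```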
parallelBulkJobs Integer Optional The number of Bulk API jobs that can be opened in parallel at the same time.
The Plugin has the option to split all records to be transferred into a number of small chunks, then process each chunk in a separate Bulk Api job. The jobs run in parallel threads.
When omitted, the records are processed using a single Api job.

This approach can increase the overall performance.
In most cases there is no risk of running into the "record locked" issue, because each bulk job always processes only its own record set, which should prevent unwanted collisions between the parallel jobs.
parallelRestJobs Integer Optional The same as the parallelBulkJobs, but defined for the Rest Api jobs.
proxyUrl String Optional The url (e.g. https://proxy.proxy.com:8080) of the proxy server which should be used to connect to the SF instances instead of a direct connection.
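A hypothetical sketch of the parallelism and proxy settings at the script level (the job counts and proxy URL are placeholders):

```json
{
  "parallelBulkJobs": 4,
  "parallelRestJobs": 2,
  "proxyUrl": "https://proxy.mycompany.com:8080",
  "objects": [
    { "query": "SELECT Id, Name FROM Account", "operation": "Upsert", "externalId": "Name" }
  ]
}
```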

Additional notes:

  • External Id fields are case-sensitive. The API name(s) of field(s) defined as an external Id (both single and composite) are always case-sensitive and must match the source and target org metadata. For instance, you can't set it like externalId: "name" or externalId: "Name;Account__r.name", because the field "name" does not exist in the sObject metadata.

  • "I know what I do".

    When a production SF org is set as the Target, during the job run the Plugin will always prompt the user to manually enter the target DOMAIN-NAME (for example: prod-instance.my.salesforce.com) in order to get the user's confirmation to make modifications, and to prevent critical production data from being destroyed by accident.

    This prompt can also be skipped by adding the flag --canmodify DOMAIN-NAME to the CLI command line, which also indicates the user's full agreement to the changes made by the Plugin. This flag can be useful when the Plugin is used in automated jobs.

    Note that you can't skip this prompt by simply adding the --noprompt flag. If you add --noprompt, the job will be aborted when the Target is set to a production org.

  • The hard delete operation will permanently delete records without putting them into the Recycle Bin. Note that when deleteOldData = true and the deletion is performed using the Bulk Api, this will also hard delete the records.

    See also regarding the hard delete: Hard delete records from Salesforce, Activation of Bulk API Hard Delete on System Administrator Profile


Last updated: October 2022