Full export.json file format.


As documented here, the SFDMU uses a single export.json file to customize the data migration process.

Here you can find the full list of the JSON objects and properties available for configuration.

Script object.

  • The export.json file exposes the root Script object, allowing you to define the most common migration parameters, e.g.:
// The Script object definition (the content of the export.json file)
{
    "bulkApiVersion": "1.0",	// This is a global Script object property

    // ... Other global Script object properties

}
  • All Script object properties are defined in the global parent scope and override the same (or similar) properties defined in the child objects of the configuration hierarchy.

    Below is the list of the available Script object properties:

Field Data Type Optional/Default Description
afterAddons Array (AddonManifestDefinition[]) Optional Add-On Api event definition. See: Supported Add-On Api events
allOrNone Boolean Optional, Default false When any record has failed during the target update, setting this property to true enforces the SFDMU to abort the migration job.

When using the REST API: if any record has failed, setting allOrNone to true will enforce all changes made in the target org within the current Api call to be rolled back before aborting the migration job. For that purpose, the Plugin uses the native feature existing in the Salesforce REST API. Changes aren't committed unless all records are processed successfully.
When using the Bulk Api: this property is ignored since it isn't natively supported by the SF Bulk Api.
allowFieldTruncation Boolean Optional, Default false Select this option to truncate the field value before importing it into the target org, when the field has one of the following types:
• Url,
• Multi-select Picklist,
• Phone,
• Picklist,
• Text,
• Text (Encrypted),
• Text Area (Long).

➦ The field value is truncated to the length defined by the field metadata. This option is similar to the one available in the standard Salesforce Data Loader.
alwaysUseRestApiToUpdateRecords Boolean Optional / Default false True value suppresses using the Bulk Api and enforces the plugin to always update records through the REST Api engine, even when the number of updated records exceeds the bulkThreshold limit.
apiVersion String (in float format) Optional API version number to use.

➦ Example of value: "55.0"
beforeAddons Array (AddonManifestDefinition[]) Optional Add-On Api event definition. See: Supported Add-On Api events
binaryDataCache String Optional, Default 'InMemory' Allows caching large binary data on the local disk instead of keeping it in memory when processing binary files like Attachments or Files. Can speed up the job performance and save memory heap.

The available values are:
• "InMemory" - the default value; all blob data is stored in memory just like other record fields.
• "CleanFileCache" - the blob data is stored on the disk in the ./binary_cache/[source_user_name] directory. This directory is cleaned up before each job starts, so the binary data is not kept between runs and is pulled from the Source anew each time. The advantage is that the files you get are always up-to-date.
• "FileCache" - the blob data is stored on the disk in the ./binary_cache/[source_user_name] directory, but the cache is NOT cleared, so all previously downloaded files are kept for the next run and loaded from the disk instead of being pulled from the remote Source again. This can dramatically save time and resources, but the downside is the risk of using outdated files stored in the cache. If you are sure the files remain the same, this is the best option.
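For example, a job that migrates Attachments could opt into the persistent file cache like this (a minimal sketch; the query and object are illustrative):

```json
{
    // Keep downloaded blob data on disk under ./binary_cache/[source_user_name]
    // and reuse it on subsequent runs instead of re-downloading
    "binaryDataCache": "FileCache",
    "objects": [
        {
            "query": "SELECT Id, Name, Body, ParentId FROM Attachment",
            "operation": "Insert",
            "externalId": "Name",
            "master": false
        }
    ]
}
```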
bulkApiV1BatchSize Integer Optional, Default 9500 The maximum size of each batch while processing the records by the Bulk Api V1 (similar to restApiBatchSize for the REST Api).
bulkApiVersion String Optional, Default "2.0" The version of Salesforce Bulk Api to use.

The valid values are:
• "1.0"
• "2.0"
bulkThreshold Integer Optional / Default 200 For better performance the plugin uses both the Collection API for small data and the Bulk API for large data processing. The Collection API is a fast way to process records but always consumes a lot of the API request quota, so for large data volumes it's better to use the Bulk API. This parameter defines the minimum number of records at which the plugin switches from the Collection API to the Bulk API.
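For instance, the following fragment (a sketch) raises the switch-over point so that jobs of up to 1000 records are processed through the Collection API, with Bulk Api V1 used beyond that:

```json
{
    // Use the Collection (REST) API for up to 1000 records, the Bulk API beyond that
    "bulkThreshold": 1000,
    "bulkApiVersion": "1.0"
}
```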
concurrencyMode String Optional / Default "Parallel" When the Bulk API V1 is used, this parameter defines the concurrency mode of the bulk operations.

The valid values are:
• "Serial"
• "Parallel"
createTargetCSVFiles Boolean Optional, Default true ➦ If this property is set to true (default value), the Plugin will produce a CSV file containing the target records for each processed sObject, with error information (if any occurred) per record.
➦ Setting this property to false will suppress producing these target files.
csvReadFileDelimiter String Optional, Default "," The delimiter symbol used in the source CSV files to split row into fields.

The valid values are:
• "," (comma)
• ";" (semicolon)
csvWriteFileDelimiter String Optional, Default "," The delimiter symbol used by the SFDMU to separate fields in the row when writing to CSV files.

The valid values are:
• "," (comma)
• ";" (semicolon)
dataRetrievedAddons Array (AddonManifestDefinition[]) Optional Add-On Api event definition. See: Supported Add-On Api events
excludeIdsFromCSVFiles Boolean Optional, Default false ➦ Setting this property to true on CSV export prevents all record Id and lookup Id columns from being added.
➦ This makes the CSV files created by the Plugin more compact and better usable with version control systems.
➦ The relationships between objects are still maintained using External ID lookup columns (like Account__r.Name).
➦ Setting this property to true on CSV import enforces virtually repairing the source CSV files by adding the missing Id columns back.
excludedObjects Array (String) Optional List of sObject API names to exclude from the job globally, across all objectSets.
Basically, it's the same as excluding objects using "excluded": true, but it gives you a handy and quick way to exclude objects when you have multiple objectSets and want to control object exclusion globally from one place.
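For example (a sketch; the excluded object names are illustrative):

```json
{
    // Globally skip these sObjects in every objectSet below
    "excludedObjects": ["Product2", "Pricebook2"],
    "objectSets": [
        {
            "objects": [
                {
                    "query": "SELECT Id, Name FROM Account",
                    "operation": "Upsert",
                    "externalId": "Name"
                }
            ]
        }
    ]
}
```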
importCSVFilesAsIs Boolean Optional, Default false ➦ Setting this property to true disables validation and fixing of the source CSV files, so they are taken as is to import the data into the Target org. This is the behavior of the standard Salesforce Data Loader application, which considers the CSV files ready to use as is.
In this case you are responsible for preparing these files for the import: you should include all necessary columns, and the Record Id values and lookups should be correctly populated and linked between the objects.

➦ If this property is set to false (default value), the SFDMU will try to analyze and repair the source CSV files before using them to update the Target org.
keepObjectOrderWhileExecute Boolean Optional, Default false ➦ If this property is set to true, the objects are executed in the order they appear in the objects[] array. In this case you should arrange the objects in the proper order to avoid issues, e.g. parent objects should be placed before child objects.
➦ If this property is set to false (default value), the Smart Order mode is enabled. Smart Order lets the Plugin decide itself what the best order is to execute the objects.
➦ There is a predefined list of objects which are always executed before other objects, regardless of the order defined by the script or calculated by Smart Order, e.g. the RecordType object.
objectSets Array (ScriptObjectSet[]) Optional List of sub-sets of SObjects you want to process.

➦ See: Multiple Object Sets
objects Array (ScriptObject[]) Optional SObjects you want to process.
orgs Array (ScriptOrg[]) Optional / No default Credentials data of the Salesforce orgs you want to process.

➦ This parameter is optional; use it if you need to configure a manual connection to any of the processed orgs.
➦ Alternatively, you can enforce the Plugin to take the SFDX connection stored in the local system. In this case simply omit the orgs section.
parallelBulkJobs Integer Optional, Default 1 The maximal number of Bulk API jobs running in parallel when performing a large size CRUD API operation.

➦ The Plugin always splits records into multiple small pieces (chunks), then processes each chunk independently by creating a dedicated Bulk Api job. All these jobs are running together in parallel threads. This approach can increase the overall Plugin performance but also requires significantly more bandwidth.
➦ Sometimes setting this option to > 1 can cause the well-known "Unable to lock row - Record currently unavailable" error. However, in most cases there is little chance to run into the "record locked" problem, because each bulk job always processes only its own record set, which should prevent unwanted collisions between the parallel jobs.
parallelRestJobs Integer Optional, Default 1 The same as the parallelBulkJobs, but defined for the REST Api jobs.
pollingIntervalMs Integer Optional / Default 5000 When the Bulk API is used, this parameter defines the polling interval for checking the bulk job status. Decreasing this value may cause extra system load.
promptOnIssuesInCSVFiles Boolean Optional, Default true ➦ When it's true (default value) and issues are found in the source CSV files, the Plugin prompts the user to stop the execution or to continue.
➦ Setting this property to false will suppress asking for the user's confirmation and the job will continue despite the issues.
promptOnMissingParentObjects Boolean Optional, Default true ➦ Setting this property to true (default value) enforces the SFDMU to pause the execution and prompt the user to abort or continue when the parent lookup or parent MD records are missing for some of the child records, so the relationship between objects is probably broken.
➦ Setting this property to false will suppress asking for user's confirmation and the execution will continue.
proxyUrl String Optional The url (e.g. https://proxy.proxy.com:8080) of the proxy server which should be used to connect to the SF instances instead of the direct connection.

➦ This option can be used, for instance, to connect to orgs through a secured corporate VPN instead of an unsecured direct connection.
restApiBatchSize Integer Optional The maximum size of each batch while processing the records by the REST Api.

➦ Large jobs are internally split into small chunks, similar to the Bulk Api. This can be useful to avoid the "maximum request size exceeded" error when uploading large binary data, e.g. for the Attachment object, which is not supported by the Bulk Api.
simulationMode Boolean Optional, Default false This feature allows checking which records will be affected by the export.json configuration without actually updating the target org.
In this mode the Plugin produces exactly the same reports (logs and target csv files) as in the live mode, but no actual records are affected.

➦ When inserting new records in simulation mode, since no actual records are created, the Plugin generates dummy record IDs in the _target.csv files instead of the real record IDs as in the live mode.
➦ Sometimes the simulated output might differ from the live output.
This is because normally, in each step, the Plugin uses all previously processed records to decide which target records should be processed next. Such differences between the simulated and the live output are therefore normal behavior of the Plugin.
➦ Since in the simulation mode the records in the target org are not updated (they have to be left untouched because there is no record rollback feature in the Salesforce API), you can't use the simulation mode to test your migration task against target triggers, validation rules and other functionality which requires an actual update of records. But you can still see which records are pulled from the Source and which are about to be pushed to the Target.
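A dry run can therefore be configured by flipping a single flag (a sketch; the object definition is illustrative):

```json
{
    // Produce the full logs and target CSV reports without modifying the Target org
    "simulationMode": true,
    "objects": [
        {
            "query": "SELECT Id, Name FROM Account",
            "operation": "Upsert",
            "externalId": "Name"
        }
    ]
}
```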
sourceRecordsCache String Optional, Default 'InMemory' Allows storing records retrieved from the Source on the local disk instead of fetching them again on the next run of the same job. Can speed up the job performance.

➦ This option has the same available values as the binaryDataCache option, but it controls caching of records instead of binary data.
➦ The subdirectory containing the cache is: ./source_records_cache/[source_user_name]
validateCSVFilesOnly Boolean Optional, Default false ➦ If this property is set to false (default value) and you are using CSV files as the data source, the SFDMU performs validation and smart fixing of the source CSVs before actually running the migration job.
➦ Setting this property to true stops the execution after the CSV validation process is completed, allowing you to detect possible issues in the files without updating the Target.
useSeparatedCSVFiles Boolean Optional, Default false ➦ If this property is set to false (default value) and you are using CSV files as the data source with multiple Object Sets in the export.json, the SFDMU will take the same CSV source files, placed in the root working directory, for each object set.
➦ Setting this property to true enforces the SFDMU to use separate CSV source files for each executed Object Set:
For the first Object Set - it will always take the source files from the root working directory (this is done to avoid backward incompatibility issues).
For the rest of the Object Sets - it will take the source CSV files from the subdirectory with the following pattern:
./objectset_source/object-set-<ObjectSet Index>,
e.g. ./objectset_source/object-set-2/

➦ Make sure you always put the source csv files in the correct path; for example, the Account.csv for object set #1 should go into ./Account.csv
and for the object set #2 into:
./objectset_source/object-set-2/Account.csv
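With two Object Sets enabled, the expected layout of the source files therefore looks like this (Account.csv is just an illustrative file name):

```
./Account.csv                                  <-- source for Object Set #1
./objectset_source/object-set-2/Account.csv    <-- source for Object Set #2
```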

ScriptObjectSet object.

  • Sometimes you may have to split the entire "parent" migration job into multiple "child" jobs. Each such "child" job is declared using the ScriptObjectSet object type. Each ScriptObjectSet can have its own subset of ScriptObjects, which are executed all together as a separate sub-job.

  • Defining multiple ScriptObjectSets can be really helpful when you need to perform multiple operations on the same sObject (e.g. Account). For instance, you can perform a hierarchical Deletion of old Accounts in the first ScriptObjectSet, then Insert new Accounts in the second ScriptObjectSet, etc.

  • The Plugin runs each ScriptObjectSet sequentially, keeping the order in which they appear in the export.json.

  • Use the objectSets property of the Script object to declare the array of the ScriptObjectSets:

    {
      "objectSets": [
        {
    
          // ScriptObjectSet configuration to Delete old Accounts and related Opportunities
          "objects": [
            {
              "query": "SELECT Id FROM Account WHERE Name LIKE '%Account To Delete%'",
              "operation": "DeleteHierarchy"
            },
            {
              "query": "SELECT Id, AccountId FROM Opportunity",
              "operation": "DeleteHierarchy",
              "master": false
            }
          ]
        },
        {
    
          // ScriptObjectSet configuration to Insert new Accounts and related Opportunities
          "objects": [
            {
              "query": "SELECT Id, Name, Phone FROM Account WHERE Name LIKE '%Account To Insert%'",
              "operation": "Insert"
            },
            {
              "query": "SELECT Id, Type, StageName, AccountId FROM Opportunity",
              "operation": "Insert",
              "master": false
            }
          ]
        }
      ]
    }
    

Below is the list of the available ScriptObjectSet properties:

Field Data Type Optional/Default Description
ScriptObjectSet.objects Array (ScriptObject[]) Mandatory The list of ScriptObject objects you want to process within the current ScriptObjectSet.

➦ Each ScriptObject declares only one sObject with a single operation applied to it.
➦ This property has the same meaning as the objects property of the main Script object.

ScriptObject object.

  • The ScriptObject object is used to customize migration parameters for a specific sObject. Properties declared in the ScriptObject's scope will override the respective properties of all nested objects in the configuration hierarchy.

  • Use the objects property of the Script object to declare the array of the ScriptObjects which should be processed:

{
    // Array of ScriptObjects
    "objects": [

        // ScriptObject to upsert Accounts
        {
            "query": "SELECT Id FROM Account",
            "operation": "Upsert",
            "externalId": "Name",

            // ... The rest of this ScriptObject properties

        },

        // ScriptObject to upsert Contacts
        {
            "query": "SELECT Id, AccountId FROM Contact",
            "operation": "Upsert",
            "externalId": "LastName"
        }
    ]
}

Below is the list of the available ScriptObject properties:

Field Data Type Optional/Default Description
ScriptObject.afterAddons Array (AddonManifestDefinition[]) Optional Add-On Api event definition. See: Supported Add-On Api events
ScriptObject.beforeAddons Array (AddonManifestDefinition[]) Optional Add-On Api event definition. See: Supported Add-On Api events
ScriptObject.beforeUpdateAddons Array (AddonManifestDefinition[]) Optional Add-On Api event definition. See: Supported Add-On Api events
ScriptObject.bulkApiV1BatchSize Integer Optional The same as the global bulkApiV1BatchSize parameter but defined in object scope.
ScriptObject.deleteOldData Boolean Optional, Default false True value enforces deletion of the old target records before performing the update.

➦ It acts like the Delete operation, but while the "Delete" operation only performs deletion, without further inserting or updating records in the target environment, deleteOldData can be used in conjunction with any operation to perform a deletion prior to any other DML operation on this object.
ScriptObject.deleteQuery String Optional, Default none SOQL query string used to delete old records from the target org (see the "Delete" operation and the deleteOldData parameter).

➦ If this parameter is omitted, the ScriptObject.query will be used to retrieve the records both for deletion and for update.
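For instance, the following sketch upserts active Accounts while first deleting the inactive ones from the Target (the IsActive__c field is a hypothetical example):

```json
{
    "objects": [
        {
            "query": "SELECT Id, Name FROM Account WHERE IsActive__c = true",
            "operation": "Upsert",
            "externalId": "Name",
            // Delete the target records matched by deleteQuery before running the Upsert
            "deleteOldData": true,
            "deleteQuery": "SELECT Id FROM Account WHERE IsActive__c = false"
        }
    ]
}
```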
ScriptObject.excluded Boolean Optional, Default false Setting this property to true completely excludes the corresponding sObject from the migration process.

➦ This parameter is useful, for instance, when you want to exclude a certain sObject from the process while leaving its definition in the export.json file for documentation purposes.
ScriptObject.excludedFields Array (String) Optional Array of field names which should be excluded from the original SOQL query.

➦ The listed fields are excluded from the SOQL query, and so they are completely excluded from the migration process.
➦ It is useful, for instance, when you are using multiselect keywords and want to exclude certain fields, e.g. because you have no permissions to access them.
ScriptObject.excludedFromUpdateFields Array (String) Optional Array of field names that should be excluded from the target update.

➦ Unlike fields listed in excludedFields, these fields are still included in the SOQL query and retrieved from the Source and the Target orgs, but they are not updated in the Target org.

➦ This property is useful when you only need to retrieve specific fields from the Source org (for example, you need their values to transform other fields using the TransformRecords Core Add-On), but you don't need these fields to be updated in the Target org.

➦ To completely exclude fields from the process, consider using the excludedFields property.
ScriptObject.externalId String Mandatory External ID field for this SObject.

➦ This is the unique identifier field - the field by which any child records referring to this SObject can be mapped.
➦ Any field that has unique values across all records can be used as an External ID, even if it's not marked as an External ID in the SObject's metadata. You can also use the standard External ID fields defined in the metadata.

➦ This field is used to compare the source and target records, as well as to process the relationships between objects, during all operations except "Insert" and "Readonly".
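An External ID can also be composite - several field names joined with a semicolon, as the Notes at the end of this page describe (a sketch; Custom__c and Account__r.Name are illustrative names):

```json
{
    "query": "SELECT Id, Name, Account__c FROM Custom__c",
    "operation": "Upsert",
    // Composite External ID: records are matched by the combination of both values
    "externalId": "Name;Account__r.Name"
}
```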
ScriptObject.fieldMapping Array (MappingItem[]) Optional See: Fields Mapping
ScriptObject.filterRecordsAddons Array (AddonManifestDefinition[]) Optional Add-On Api event definition. See: Supported Add-On Api events
ScriptObject.hardDelete Boolean Optional, Default false True value enforces hard deletion of the records instead of regular deletion. Works on any deletion API request.

➦ It's the same as the "HardDelete" operation but can work in conjunction with the other delete operations: "Delete", "DeleteSource" and "DeleteHierarchy".
➦ It will also hard delete old records when ScriptObject.deleteOldData = true and the Bulk Api is currently used.
ScriptObject.master Boolean Optional, Default true True value tells the Plugin that this object is a "master".

➦ This means that you want to put this object at the top level of your data model hierarchy.
➦ The Plugin will always process all records for this object (still respecting the defined object limits or WHERE expression). For example, if LIMIT 100 is used, the Plugin will process only 100 records.

False value makes this object a "slave" of the "master" objects in the script.

➦ false enforces the Plugin to detect and process the minimum possible subset of records required to keep the relationships between objects. In this case, even if LIMIT 100 is defined, the Plugin might process a different number of records, depending on the current situation.
ScriptObject.mockFields Array (MockField[]) Optional Defines the SObject fields that need to be updated with anonymized (or "mocked") data.
ScriptObject.operation String Mandatory Operation that you want to perform with the current SObject.

The available values are:
• "Insert" - creates new records in the target org, even if old versions of these records already exist.
• "Update" - only updates existing records. The operation overrides all record fields.
• "Upsert" - inserts new and updates old records, overriding all values.
• "Readonly" - declares that you don't want to update this object, only to retrieve its records during the export. Useful when you need to include a readonly SObject that is referenced from another SObject.
• "Delete" - only removes the old target records of the given sObject. No update is performed.
• "HardDelete" - same as Delete, but when using the Bulk API this performs a hard deletion of the records, which does not put the deleted records into the Recycle Bin.
• "DeleteSource" - removes records from the Source, see the DeleteSource operation
• "DeleteHierarchy" - removes hierarchical records from the Target, see the DeleteHierarchy operation
ScriptObject.parallelBulkJobs Integer Optional, Default 1 The same as the global parallelBulkJobs parameter but defined in object scope.
ScriptObject.parallelRestJobs Integer Optional, Default 1 The same as the global parallelRestJobs parameter but defined in object scope.
ScriptObject.query String Mandatory SOQL query string.

➦ Include in this query string all the SObject fields that you need to export, including referenced fields. It is enough to just list the fields; the plugin will automatically resolve and process all the references between objects.
➦ Data from fields that are not listed in the query will not be exported.
➦ Optionally you can filter the records by using WHERE, LIMIT, OFFSET etc. clauses.
➦ Nested queries and complex fields like Account__r.Name are not supported. But you can still use a subquery in the WHERE clause, e.g.: .... WHERE Id IN (SELECT Id FROM .... )
ScriptObject.queryAllTarget Boolean Optional, Default false True value enforces the Plugin to query ALL records from the Target org, regardless of the limitations defined in the query string, i.e. it will ignore the WHERE/LIMIT/OFFSET/ORDER BY clauses defined in the object's SOQL query when querying the Target org.

➦ The full query string is still used to get the records from the Source org.
➦ This option is useful, for instance, when you have to select the recently modified records from the Source org and update the matching records (by External ID key) on the Target side.
➦ When queryAllTarget is set to false (default value), the SFDMU will retrieve the latest records from both sides (as defined in the query string, e.g. WHERE CreatedDate = LAST_N_DAYS:2) and then it might fail to match source<->target records, since probably different records were modified during the last 2 days on the Source and the Target sides.
➦ But after setting queryAllTarget to true, the SFDMU will query the recent records from the Source org and all records from the Target org, which ensures that the latest source records are also present on the target side.
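For example, a sketch that pulls only the Accounts created during the last 2 days from the Source, but matches them against all target Accounts:

```json
{
    "query": "SELECT Id, Name FROM Account WHERE CreatedDate = LAST_N_DAYS:2",
    "operation": "Upsert",
    "externalId": "Name",
    // Ignore the WHERE clause above when querying the Target org
    "queryAllTarget": true
}
```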
ScriptObject.restApiBatchSize Integer Optional The same as the global restApiBatchSize parameter but defined in object scope.
ScriptObject.skipExistingRecords Boolean Optional True value avoids updating records which already exist on the target side.

➦ This is useful, for instance, when you want to insert ONLY NEW records which do not exist in the Target yet, whereas the Upsert operation without this property set to true would still update the existing records.
ScriptObject.targetRecordsFilter String Optional An additional expression (similar to the "normal" SOQL WHERE clause) which you can use to filter out unwanted target data just before sending it to the Target.
ScriptObject.updateWithMockData Boolean Optional, default false True value enables the Data Anonymization feature for this SObject
ScriptObject.useCSVValuesMapping Boolean Optional, Default false When it's true and csvfile is set as the data source, it turns on Values Mapping for this object.
ScriptObject.useFieldMapping Boolean Optional, Default false When set to true it turns on the Fields Mapping for this object.
ScriptObject.useQueryAll Boolean Optional, Default false True value enforces using the /queryAll API endpoint instead of /query when querying the source records. This allows you to include deleted records in the query result.

➦ You can also retrieve only the deleted records by using the following expression: WHERE IsDeleted = true
➦ This parameter does not change the behavior how the target records are queried.
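For example, a sketch retrieving only the deleted source Accounts via /queryAll:

```json
{
    "query": "SELECT Id, Name FROM Account WHERE IsDeleted = true",
    "operation": "Readonly",
    "externalId": "Name",
    // Use /queryAll so deleted records can be returned from the Source
    "useQueryAll": true
}
```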
ScriptObject.useValuesMapping Boolean Optional, Default false When it's true and salesforce org is set as the data source, it turns on Values Mapping for this object.
ScriptObject.useSourceCSVFile Boolean Optional, Default false When set to true and a salesforce org is set as the data source, the Plugin will use the <ObjectName>.csv file as the source instead of the org.

➦ When the --sourceusername flag is set to an ORG, you can override it with "csvfile" (to use CSV as a source) for the specific object by setting ScriptObject.useSourceCSVFile = true.
➦ Make sure you've put the corresponding csv file in the working directory.
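For example (a sketch): while the whole job runs from org to org, the Account object alone is loaded from ./Account.csv in the working directory:

```json
{
    "objects": [
        {
            "query": "SELECT Id, Name FROM Account",
            "operation": "Upsert",
            "externalId": "Name",
            // Read this object from Account.csv instead of the source org
            "useSourceCSVFile": true
        }
    ]
}
```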

MappingItem object.

  • The MappingItem object is used to setup the optional Fields Mapping for the current ScriptObject.

  • If you need to map between fields, use fieldMapping property of the ScriptObject to declare the array of MappingItem objects.

  • Make sure you've set useFieldMapping to true to activate the Fields Mapping feature.

      objects: [
          {
              "query": "SELECT Id, Name, ParentId, TEST__c FROM Account",
              "operation": "Upsert",
              "externalId": "ExternalID__c",
    
              // This should be set to true in the ScriptObject's scope to enable the Fields Mapping feature for this object
              "useFieldMapping": true,
    
              "fieldMapping": [
                  {
                      "targetObject": "TestObject__c"
                  },
                  {
                      "sourceField": "ParentId",
                      "targetField": "ParentTestObject__c"
                  },
                  {
                      "sourceField": "ExternalID__c",
                      "targetField": "External_ID__c"
                  }
              ]
          }
      ]
    

Below is the list of the available MappingItem properties:

Field Data Type Optional/Default Description
MappingItem.sourceField String Optional or mandatory depending on the configuration The API name of the source field, belonging to the current (source) sObject, which is mapped to the targetField.
MappingItem.targetField String Optional or mandatory depending on the configuration The API name of the target field, belonging to the targetObject, to which the sourceField is mapped.
MappingItem.targetObject String Optional or mandatory depending on the configuration The API name of the target sObject to which the current (source) sObject is mapped.
MockField.excludeNames Array (String) Optional String array of field names NOT to anonymize. Useful when the 'all' keyword is used instead of a specific field name. (This property belongs to the MockField object described below.)

MockField object.

  • The MockField object is used to setup the optional Data Anonymization for the current ScriptObject.

  • If you have to anonymize fields of the current sObject, add the mockFields property to the ScriptObject definition, containing the array of MockField objects:

  • Make sure you've set updateWithMockData=true to activate the Data Anonymization feature.

      objects: [
          {
    
              "query": "SELECT Id, Name FROM Account",
              "operation": "Insert",
              "externalId": "Name",
    
              // This is the data anonymization declaration
              "mockFields": [
                    {
                        "name": "Name",
                        "pattern": "name",
                        "excludedRegex": "^DummyAccount$",
                        "includedRegex": "Account\\sTo\\sMask"
                    }
              ],
    
               // This should be set to true in the ScriptObject's scope to enable the Data Anonymization feature for this object
              "updateWithMockData": true
    
          }
      ]
    

    Below is the list of the available MockField properties:

Field Data Type Optional/Default Description
MockField.excludedRegex String Optional The JS regex expression to exclude values that should not be mocked.
MockField.includedRegex String Optional The JS regex expression to include only values that should be mocked.

➦ When defined, values which do not match this regex will not be mocked.
MockField.name String Mandatory The API name of the field belonging to the current sObject to mock (anonymize).
MockField.pattern String Mandatory The pattern to create mock data for this field.
Notes:
  • Either the objects or the objectSets property should be set in the export.json.

    However, it's possible to set them together. In this case the global objects array is considered an initial ScriptObjectSet in the configuration and is executed at the beginning of the migration process; after that, the Plugin executes the rest of the ScriptObjectSets in the script.

  • External ID fields are always case-sensitive.

    The API name(s) of field(s) set as an External ID (single and composite) - are always case-sensitive and must exactly match the fields metadata.

    For instance, you CANNOT SET External ID like below:

    • externalId: "name"

    • externalId: "Name;Account__r.name"

      ... because the field "name" (starting with a lowercase letter) is not present in the metadata.

  • The "I know what I do" option.

    When a production SF org is set as the Target, during the job run the Plugin will ask the user to type the instance name of the target production org (e.g. prod-instance.my.salesforce.com) to make sure that the user really wants to make modifications, preventing critical production data from being destroyed by accident.

    This prompt can also be skipped by adding the flag --canmodify INSTANCE-NAME to the CLI command, which also indicates the user's agreement to the changes being made by the Plugin.

    Be aware, that you can't ignore this prompt by simply adding the --noprompt flag, since in that case --noprompt will abort the job instead of asking the user.
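For example (the org aliases and the instance name are placeholders):

```bash
sfdx sfdmu:run --sourceusername source@example.com --targetusername target@example.com \
    --canmodify prod-instance.my.salesforce.com
```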

  • Setting up the HardDelete operation, as well as setting ScriptObject.hardDelete to true, will permanently delete records (only supported by the Bulk Api). Be aware that when deleteOldData is set to true and the old records are deleted using the Bulk Api, the target records are also hard deleted.

    See also the Salesforce documentation about the HardDelete:
    Hard delete records from Salesforce,
    Activation of Bulk API Hard Delete on System Administrator Profile

Last updated on 29th Jan 2023