Script Object
🡨 Go Back to Export.json File Overview
Table of Contents
- Script Object Overview
- Script Object Structure
- Script Object Properties
- afterAddons (Array of AddonManifestDefinition)
- allOrNone (Boolean)
- allowFieldTruncation (Boolean)
- alwaysUseRestApiToUpdateRecords (Boolean)
- apiVersion (String in float format)
- beforeAddons (Array of AddonManifestDefinition)
- binaryDataCache (String)
- bulkApiV1BatchSize (Integer)
- bulkApiVersion (String)
- bulkThreshold (Integer)
- queryBulkApiThreshold (Integer)
- concurrencyMode (String)
- createTargetCSVFiles (Boolean)
- csvFileDelimiter (String)
- csvReadFileDelimiter (String)
- csvWriteFileDelimiter (String)
- csvFileEncoding (String)
- csvInsertNulls (Boolean)
- csvUseEuropeanDateFormat (Boolean)
- csvWriteUpperCaseHeaders (Boolean)
- csvUseUtf8Bom (Boolean)
- csvAlwaysQuoted (Boolean)
- dataRetrievedAddons (Array of AddonManifestDefinition)
- excludeIdsFromCSVFiles (Boolean)
- excludedObjects (Array of Strings)
- importCSVFilesAsIs (Boolean)
- keepObjectOrderWhileExecute (Boolean)
- objectSets (Array of ScriptObjectSet)
- objects (Array of ScriptObject)
- orgs (Array of ScriptOrg)
- parallelBulkJobs (Integer)
- parallelRestJobs (Integer)
- parallelBinaryDownloads (Integer)
- pollingIntervalMs (Integer)
- pollingQueryTimeoutMs (Integer)
- promptOnIssuesInCSVFiles (Boolean)
- promptOnMissingParentObjects (Boolean)
- groupQuery (String)
- proxyUrl (String)
- restApiBatchSize (Integer)
- simulationMode (Boolean)
- sourceRecordsCache (String)
- validateCSVFilesOnly (Boolean)
- useSeparatedCSVFiles (Boolean)
Script Object Overview ⇧
The export.json file includes a root-level Script object that serves as a central configuration point for defining global migration parameters.
Properties set within this Script object apply universally across the entire migration process and can override similar properties specified at lower levels within the configuration hierarchy.
Script Object Structure ⇧
Here's an example of how the Script object is structured in the export.json file:
{
// Root level of export.json functions as the global Script object for setting migration parameters.
"objects": [
// Array for configuring individual data migration objects; a Script object property.
{
"query": "SELECT Id, Name, Phone FROM Account",
"operation": "Upsert",
"externalId": "AccountExternalId__c"
}
]
// Additional global Script object properties
}
Script Object Properties ⇧
The properties of the Script object are set at the global parent scope, ensuring consistent settings across the migration process.
afterAddons (Array of AddonManifestDefinition) ⇧
Optional. Defines add-ons to be executed after the migration process for specific objects. These add-ons can perform tasks such as data validation or cleanup. Detailed information about the supported events can be found at: Supported Add-On Api events.
Item Type: AddonManifestDefinition
Example of export.json Configuration:
{
"objects": [],
"afterAddons": [
{
"module": "GeneralCleanup",
"args": {
"cleanupAction": "Remove Obsolete Data"
}
}
]
// ... other export.json properties
}
allOrNone (Boolean) ⇧
Optional, Default: false. When any record fails during the target update, setting this property to true will enforce the SFDMU to abort the migration job. When using the REST API, all changes made in the target org within the current API call will be rolled back before aborting the migration job, using the native feature of the Salesforce REST API. Changes aren't committed unless all records are processed successfully. This property is ignored when using the Bulk API since it isn't natively supported by the SF Bulk API.
Example of export.json Configuration:
{
"objects": [],
"allOrNone": true
// ... other export.json properties
}
allowFieldTruncation (Boolean) ⇧
Optional, Default: false. Allows truncation of the field value before importing into the target org for specific field types like Url, Multi-select Picklist, Phone, Picklist, Text, Text (Encrypted), and Text Area (Long). The field value is truncated to the length defined by the field metadata, similar to the feature available in the standard Salesforce Data Loader.
Example of export.json Configuration:
{
"objects": [],
"allowFieldTruncation": true
// ... other export.json properties
}
alwaysUseRestApiToUpdateRecords (Boolean) ⇧
Optional, Default: false. Enforces the plugin to always update records through the REST Api engine, even when the number of updated records exceeds the bulkThreshold limit.
Example of export.json Configuration:
{
"objects": [],
"alwaysUseRestApiToUpdateRecords": true
// ... other export.json properties
}
apiVersion (String in float format) ⇧
Optional. Specifies the API version number to use. Example value: "65.0".
If not set explicitly:
- For org -> org migrations, SFDMU auto-detects the maximum API version supported by both orgs and uses the lower one.
- For org -> csvfile or csvfile -> org migrations, SFDMU uses the maximum API version supported by that org.
If you set apiVersion in export.json or via CLI --apiversion, SFDMU uses your explicit value and does not auto-detect.
Example of export.json Configuration:
{
"objects": [],
"apiVersion": "65.0"
// ... other export.json properties
}
beforeAddons (Array of AddonManifestDefinition) ⇧
Optional. Specifies add-ons to be executed before the migration process begins for each object. These add-ons can prepare data, verify conditions, or set initial parameters. More information on the event types available for these add-ons can be found at: Supported Add-On Api events.
Item Type: AddonManifestDefinition
Example of export.json Configuration:
{
"objects": [],
"beforeAddons": [
{
"module": "InitialSetupModule",
"args": {
"setupAction": "Configure Default Settings"
}
}
]
// ... other export.json properties
}
binaryDataCache (String) ⇧
Optional, Default: 'InMemory'. This configuration allows for caching of large binary data on the local disk instead of keeping it in memory when processing binary files such as Attachments or Files. This can enhance job performance and conserve memory usage. Available options are "InMemory", "CleanFileCache", and "FileCache".
Example of export.json Configuration:
{
"objects": [],
"binaryDataCache": "InMemory"
// ... other export.json properties
}
Details:
"InMemory": This is the default setting where all blob data is stored in memory, similar to how other record fields are stored. It ensures that data is directly and quickly accessible, although it may consume significant memory resources.
"CleanFileCache": With this setting, blob data is stored on the local disk within the ./binary_cache/[source_user_name] directory. This directory is cleared before each job starts, ensuring that no outdated files are used and all data must be retrieved from the source each time. This option is beneficial for ensuring data is always up to date, though it requires data to be pulled repeatedly from the source, which may increase processing time.
"FileCache": In this mode, blob data is also stored on the local disk in the ./binary_cache/[source_user_name] directory. Unlike "CleanFileCache", the cache is not cleared between jobs, meaning that files are preserved and reused. This reduces the need to repeatedly download data from the source, significantly saving time and resources. However, there is a risk of using outdated data if files at the source are updated. This option is most suitable when file updates are infrequent or when consistent data across jobs is crucial.
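For example, a job that repeatedly migrates the same large set of Attachments could opt into the persistent cache (a configuration sketch; the cache values are those listed above):

```json
{
  "objects": [],
  // Keep downloaded binary files in ./binary_cache/[source_user_name]
  // between runs, so unchanged blobs are not re-downloaded each time.
  "binaryDataCache": "FileCache"
  // ... other export.json properties
}
```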
bulkApiV1BatchSize (Integer) ⇧
Optional, Default: 9500. Sets the maximum size of each batch when processing records by the Bulk Api V1, similar to the restApiBatchSize for the REST Api.
Example of export.json Configuration:
{
"objects": [],
"bulkApiV1BatchSize": 9500
// ... other export.json properties
}
bulkApiVersion (String) ⇧
Optional, Default: "2.0". Specifies the version of the Salesforce Bulk Api to use. Valid values are "1.0" and "2.0".
Example of export.json Configuration:
{
"objects": [],
"bulkApiVersion": "2.0"
// ... other export.json properties
}
bulkThreshold (Integer) ⇧
Optional, Default: 200. This configuration parameter sets the minimum size of data required to switch from using the Collection API to the Bulk API for CRUD (Create, Read, Update, Delete) operations on records. It is important to note that this parameter does not influence the use of the Bulk API for querying records; for that, refer to the queryBulkApiThreshold parameter.
Example of export.json Configuration:
{
"objects": [],
"bulkThreshold": 200
// ... other export.json properties
}
Details:
Operational Context: When executing CRUD operations, the plugin optimally utilizes the Collection API for smaller datasets due to its speed and efficiency. However, the Collection API significantly impacts the quota of API requests. Therefore, for handling larger datasets, it is advantageous to switch to the Bulk API, which is designed for high-volume data handling and is more quota-efficient.
Decision Mechanism: The bulkThreshold parameter specifies the threshold data size that necessitates a switch from the Collection API to the Bulk API. This threshold ensures that large data operations are conducted in a manner that optimizes performance and minimizes quota consumption.
Data Size Evaluation: At the beginning of a job, the plugin estimates the total data volume by performing a COUNT() query on all source and target records of the designated sObject. If the result of this query is greater than or equal to bulkThreshold, the plugin opts to use the Bulk API for updates, inserts, and deletions.
Exact Condition: The Bulk API switch is triggered when amountToProcess >= bulkThreshold.
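As a worked illustration of the decision rule (the record counts below are hypothetical):

```json
{
  "objects": [],
  "bulkThreshold": 200
  // Suppose the initial COUNT() estimation yields 210 records:
  // 210 >= 200, so CRUD operations switch to the Bulk API.
  // Had it yielded 150 records, the Collection API would be used instead.
}
```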
queryBulkApiThreshold (Integer) ⇧
Optional, Default: 30000. This parameter sets the minimum size of a data query required to activate the use of the Bulk API Query instead of the standard REST API. This switch is essential for efficiently managing large-scale data queries.
Example of export.json Configuration:
{
"objects": [],
"queryBulkApiThreshold": 30000
// ... other export.json properties
}
More Details:
Operational Use: The queryBulkApiThreshold is a critical parameter within the Salesforce Bulk API framework. It dictates when to switch from the REST API to the Bulk API Query, which is optimized for handling extensive data queries efficiently.
Performance Optimization: This threshold is particularly crucial for operations involving large data sets. When the expected number of records in a query surpasses this threshold, the Bulk API is employed, enhancing the system's performance by reducing the load and processing time associated with large queries.
Exclusion of Update Operations: It is important to note that this threshold applies specifically to querying data. It does not influence the use of the Bulk API for updating records, which is controlled by the separate bulkThreshold parameter.
Initial Data Estimation: At the start of a job, the plugin assesses the total volume of data by executing a COUNT() query on all source and target records of the specified sObject. The outcome of this count is then compared to queryBulkApiThreshold. If the result is greater than or equal to the threshold, the Bulk API Query is used for the operation.
concurrencyMode (String) ⇧
Optional, Default: "Parallel". Defines the concurrency mode to perform the bulk operations when using Bulk API V1. Valid values are "Serial" and "Parallel".
Example of export.json Configuration:
{
"objects": [],
"concurrencyMode": "Parallel"
// ... other export.json properties
}
createTargetCSVFiles (Boolean) ⇧
Optional, Default: true. If set to true, the Plugin produces a CSV file containing the target records for each processed sObject, with per-record error information where errors occurred. The generated target CSV files also include the Old Id column for source-to-target Id mapping when available. Setting this property to false suppresses producing these target files.
Example of export.json Configuration:
{
"objects": [],
"createTargetCSVFiles": true
// ... other export.json properties
}
csvFileDelimiter (String) ⇧
Optional, Default: ",". Specifies the common delimiter for non-service CSV read and write operations.
Supported values:
- comma or ,
- semicolon or ;
- tab or \t
- any custom string delimiter
When this property is provided, it has priority over csvReadFileDelimiter and csvWriteFileDelimiter.
Example of export.json Configuration:
{
"objects": [],
"csvFileDelimiter": "semicolon"
// ... other export.json properties
}
Details:
- This parameter is the preferred delimiter setting for modern configurations.
- It is applied consistently to both reading and writing of non-service CSV files.
- It does not change internal service CSV formatting used by SFDMU reports and runtime files.
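A tab-delimited variant, for instance, could be configured as follows (a sketch; "tab" and "\t" are equivalent per the supported values above):

```json
{
  "objects": [],
  // Applies to both reading and writing of non-service CSV files.
  "csvFileDelimiter": "\t"
  // ... other export.json properties
}
```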
csvReadFileDelimiter (String) ⇧
Optional, Deprecated, Default: ",". Legacy fallback delimiter for reading non-service CSV files.
Use csvFileDelimiter for new configurations.
Example of export.json Configuration:
{
"objects": [],
"csvReadFileDelimiter": ","
// ... other export.json properties
}
Details:
- This property is read only as a fallback when csvFileDelimiter is not explicitly provided.
- Keep it only for older scripts that still rely on separate read/write delimiter settings.
csvWriteFileDelimiter (String) ⇧
Optional, Deprecated, Default: ",". Legacy fallback delimiter for writing non-service CSV files.
Use csvFileDelimiter for new configurations.
Example of export.json Configuration:
{
"objects": [],
"csvWriteFileDelimiter": ","
// ... other export.json properties
}
Details:
- This property is used only when csvFileDelimiter is not set.
- For new configurations, a single common delimiter is easier to maintain and less error-prone.
csvFileEncoding (String) ⇧
Optional, Default: "utf8". Specifies encoding for non-service CSV read/write operations.
Supported values:
- utf8, utf-8
- utf16le, utf-16le
- ucs2, ucs-2
- latin1, ascii, binary, hex, base64, base64url
Example of export.json Configuration:
{
"objects": [],
"csvFileEncoding": "utf8"
// ... other export.json properties
}
Notes:
- Service CSV files (source/*, target/*, reports/*) use internal engine formatting and are not controlled by user csv* settings.
- Internal service CSV format is always:
  - delimiter: ,
  - encoding: utf8
  - BOM: enabled
  - quoting: enabled
  - upper-case headers: disabled
- Unsupported encoding values are automatically normalized to utf8.
csvInsertNulls (Boolean) ⇧
Optional, Default: true. Enables Data Loader-like null handling for CSV import semantics.
When enabled, SFDMU treats special null markers and empty-value rules according to the CSV pipeline behavior.
Example of export.json Configuration:
{
"objects": [],
"csvInsertNulls": true
// ... other export.json properties
}
Details:
- Typical null markers and empty-cell semantics are handled according to SFDMU CSV pipeline rules.
- Keep this setting enabled when business logic expects explicit clearing of target field values.
csvUseEuropeanDateFormat (Boolean) ⇧
Optional, Default: false. Enables parsing of European date formats in CSV import data.
Example of export.json Configuration:
{
"objects": [],
"csvUseEuropeanDateFormat": true
// ... other export.json properties
}
Details:
- Use this option when source CSV date values are represented in day-first notation.
- Enable it consistently across runs to avoid inconsistent date parsing outcomes.
csvWriteUpperCaseHeaders (Boolean) ⇧
Optional, Default: false. Writes non-service CSV header names in uppercase when exporting CSV data.
Example of export.json Configuration:
{
"objects": [],
"csvWriteUpperCaseHeaders": true
// ... other export.json properties
}
Details:
- This is useful when downstream tools require uppercase column names.
- It improves consistency when CSV files are compared or version-controlled across teams.
csvUseUtf8Bom (Boolean) ⇧
Optional, Default: true. Controls UTF-8 BOM behavior for non-service CSV read/write operations when UTF-8 encoding is used.
Example of export.json Configuration:
{
"objects": [],
"csvUseUtf8Bom": true
// ... other export.json properties
}
Details:
- This option matters only for UTF-8 encodings.
- For non-UTF encodings, BOM behavior is not applicable and is ignored.
csvAlwaysQuoted (Boolean) ⇧
Optional, Default: true. Controls whether all non-service CSV values are always written in quotes.
Example of export.json Configuration:
{
"objects": [],
"csvAlwaysQuoted": true
// ... other export.json properties
}
Details:
- When set to true, all non-service CSV values are written in quotes.
- When set to false, non-service CSV values are quoted only when required by CSV escaping rules.
- Internal service CSV files (source/*, target/*, reports/*) always use quoted values regardless of this setting.
dataRetrievedAddons (Array of AddonManifestDefinition) ⇧
Optional. Defines add-ons that are activated after data is retrieved from the source but before it is processed or migrated. This stage allows for preliminary data manipulation, analysis, or logging. For further details on the supported events, refer to: Supported Add-On Api events.
Item Type: AddonManifestDefinition
Example of export.json Configuration:
{
"objects": [],
"dataRetrievedAddons": [
{
"module": "DataAnalysisModule",
"args": {
"analysisType": "Initial Data Quality Check"
}
}
]
// ... other export.json properties
}
excludeIdsFromCSVFiles (Boolean) ⇧
Optional, Default: false. If set to true, record ID and lookup ID columns are omitted on CSV export, making the CSV files more compact and better suited to version control systems. The relationships between objects are still maintained using External ID lookup columns. On CSV import, setting this property to true enforces the SFDMU to virtually repair the source CSV files by adding the missing ID columns back.
Example of export.json Configuration:
{
"objects": [],
"excludeIdsFromCSVFiles": true
// ... other export.json properties
}
excludedObjects (Array of Strings) ⇧
Optional. Specifies a list of sObject API names to exclude from the job globally, across all objectSets. This provides a handy and quick way to exclude objects when multiple objectSets are used and object exclusion needs to be controlled globally from one place.
Example of export.json Configuration:
{
"objects": [],
"excludedObjects": ["Account", "Case"]
// ... other export.json properties
}
importCSVFilesAsIs (Boolean) ⇧
Optional, Default: false. If set to true, validation and fixing of the source CSV files are disabled, and the files are taken as-is for importing data into the Target org. You are responsible for preparing these files for the import. If set to false (the default value), the SFDMU will try to analyze and repair the source CSV files before using them to update the Target org.
Example of export.json Configuration:
{
"objects": [],
"importCSVFilesAsIs": true
// ... other export.json properties
}
keepObjectOrderWhileExecute (Boolean) ⇧
Optional, Default: false. If set to true, objects are executed in the order they appear in the objects[] array. You should arrange objects in the proper order to avoid issues, e.g., parent objects should be placed before child objects. If set to false (default value), the "Smart Order" mode is enabled, where the Plugin decides the best order to execute objects. There is a predefined list of objects which are always executed before other objects, regardless of the order defined by the script or calculated by the Smart Order, e.g., the RecordType object.
Example of export.json Configuration:
{
"objects": [],
"keepObjectOrderWhileExecute": true
// ... other export.json properties
}
objectSets (Array of ScriptObjectSet) ⇧
Optional. List of sub-sets of SObjects you want to process. For more details, see: Multiple Object Sets.
Item Type: ScriptObjectSet
Example of export.json Configuration:
{
"objectSets": [
{
"objects": [
{
"query": "SELECT Id FROM Account",
"operation": "DeleteHierarchy"
},
{
"query": "SELECT Id FROM Opportunity",
"operation": "DeleteHierarchy"
}
]
},
{
"objects": [
{
"query": "SELECT Name FROM Account LIMIT 1",
"operation": "Insert"
}
]
}
]
// ... other export.json properties
}
objects (Array of ScriptObject) ⇧
Optional. Specifies the SObjects you want to process.
Item Type: ScriptObject
Example of export.json Configuration:
{
"objects": [
{
"query": "SELECT Id, Phone, TestObject3__c FROM Account WHERE Name LIKE 'TEST_ACC_%'",
"operation": "Upsert",
"externalId": "Name"
},
{
"query": "SELECT Id, Account__c, TestObject3__c, RecordTypeId FROM TestObject__c",
"operation": "Upsert",
"externalId": "Name"
},
{
"query": "SELECT Id, Account__c, TestObject__c FROM TestObject2__c",
"operation": "Upsert",
"externalId": "Name"
},
{
"query": "SELECT Id, TestObject2__c FROM TestObject3__c",
"operation": "Upsert",
"externalId": "Name"
}
// ... other ScriptObject definitions if available
]
// ... other export.json properties
}
orgs (Array of ScriptOrg) ⇧
Optional. Provides credentials for the Salesforce orgs you want to process. Use it when you need to configure a manual connection to any of the processed orgs. Alternatively, omit the orgs section to enforce the Plugin to use local Salesforce CLI auth data.
Item Type: ScriptOrg
Example of export.json Configuration:
{
"objects": [],
"orgs": [
{
"name": "user@example.com",
"instanceUrl": "https://example.my.salesforce.com",
"accessToken": "XXXXtokenXXXX"
}
]
// ... other export.json properties
}
parallelBulkJobs (Integer) ⇧
Optional, Default: 1. Sets the maximum number of Bulk API jobs running in parallel when performing a large size CRUD API operation. The Plugin splits records into multiple small pieces (chunks), then processes each chunk independently by creating a dedicated Bulk API job. All these jobs run together in parallel threads, which can increase the overall Plugin performance but also requires significantly more bandwidth.
Setting this option to a value > 1 can sometimes cause the well-known "Unable to lock row - Record currently unavailable" issue. In most cases, however, this problem does not occur, because each bulk job always processes only its own record set, which should prevent unwanted collisions between the parallel jobs.
Example of export.json Configuration:
{
"objects": [],
"parallelBulkJobs": 2
// ... other export.json properties
}
parallelRestJobs (Integer) ⇧
Optional, Default: 1. Defines the number of REST API jobs that can run in parallel, similar to parallelBulkJobs for Bulk API jobs.
Example of export.json Configuration:
{
"objects": [],
"parallelRestJobs": 2
// ... other export.json properties
}
parallelBinaryDownloads (Integer) ⇧
Optional, Default: 20. Specifies the number of concurrent download threads used when transferring binary data (such as Salesforce Attachments and Files) between orgs.
Increasing this value can significantly speed up the migration of large volumes of binary data by utilizing more parallel threads, though this should be managed in accordance with the network and Salesforce API limitations to avoid potential throttling.
This setting is particularly beneficial for optimizing data transfer times in large-scale migrations where binary data constitutes a significant portion of the data being migrated.
Example of export.json Configuration:
{
"parallelBinaryDownloads": 10, // Sets the number of parallel download threads for binary data
"objects": [
{
"operation": "Readonly",
"query": "SELECT Id FROM Account WHERE Name = 'ACC_10000'",
"externalId": "Name"
},
{
"operation": "Insert",
"query": "SELECT Id, Body, ParentId$Account FROM FeedItem WHERE Type = 'ContentPost'",
"afterAddons" : [
{
"module": "core:ExportFiles"
}
]
}
]
// ... other export.json properties
}
In this configuration, the parallelBinaryDownloads is set to 10, meaning that up to ten binary files can be downloaded simultaneously using core:ExportFiles Add-On.
pollingIntervalMs (Integer) ⇧
Optional, Default: 5000. Defines the polling interval in milliseconds to check for the bulk job status when using the Bulk API. Decreasing this value may cause extra system load.
Example of export.json Configuration:
{
"objects": [],
"pollingIntervalMs": 5000
// ... other export.json properties
}
pollingQueryTimeoutMs (Integer) ⇧
Optional, Default: 240000. Sets the maximum timeout, in milliseconds, for waiting on a response to a SOQL query. This timeout ensures that migration processes do not linger indefinitely, especially when dealing with large datasets or complex queries.
Example of export.json Configuration:
{
"pollingQueryTimeoutMs": 180000, // Sets the query timeout to 180 seconds (3 minutes)
"objects": [
{
"query": "SELECT Id, Name FROM Account",
"operation": "Upsert",
// Additional ScriptObject properties
}
]
// ... other export.json properties
}
In this configuration, the pollingQueryTimeoutMs is adjusted to 180000 ms (3 minutes) for longer query responses, ensuring smooth migration operations without premature timeouts.
promptOnIssuesInCSVFiles (Boolean) ⇧
Optional, Default: true. If set to true, the Plugin prompts the user to stop the execution or continue when issues are found in the source CSV files. Setting this property to false will suppress asking for the user's confirmation, and the job will continue despite the issues.
Example of export.json Configuration:
{
"objects": [],
"promptOnIssuesInCSVFiles": true
// ... other export.json properties
}
promptOnMissingParentObjects (Boolean) ⇧
Optional, Default: true. If set to true, the SFDMU pauses the execution and prompts the user to decide whether to abort or continue when the parent lookup or parent MD records are missing for some child records, potentially breaking the relationship between objects. Setting this property to false will suppress asking for user's confirmation, and the execution will continue.
Example of export.json Configuration:
{
"objects": [],
"promptOnMissingParentObjects": true
// ... other export.json properties
}
groupQuery (String) ⇧
Optional. Overrides the Group query (or WHERE clause) used in polymorphic lookup scenarios.
Use this option when the default Group retrieval logic must be restricted or expanded for your org setup.
Example of export.json Configuration:
{
"objects": [],
"groupQuery": "SELECT Id, Name, Type FROM Group WHERE Type = 'Queue'"
// ... other export.json properties
}
Details:
- You can provide either a full SOQL query for Group or a WHERE-oriented override, depending on your scenario.
- This is commonly used when default Group retrieval must be narrowed to specific group types.
proxyUrl (String) ⇧
Optional. Specifies the URL of the proxy server to use for connecting to SF instances instead of a direct connection. This option can be used, for instance, to connect to orgs through a secured corporate VPN instead of an unsecured direct connection.
Example of export.json Configuration:
{
"objects": [],
"proxyUrl": "https://proxy.proxy.com:8080"
// ... other export.json properties
}
restApiBatchSize (Integer) ⇧
Optional. Sets the maximum size of each batch when processing records with the REST Api. Large jobs are internally split into small chunks, similar to the Bulk API. This can be useful to avoid the "maximum request size exceeded" error when uploading large binary data, such as the Attachment object, which is not supported by the Bulk API.
Example of export.json Configuration:
{
"objects": [],
"restApiBatchSize": 500
// ... other export.json properties
}
simulationMode (Boolean) ⇧
Optional, Default: false. This feature allows checking which records will be affected by the export.json configuration without actually updating the target org. In this mode, the Plugin produces the same reports (logs and target CSV files) as in live mode, but no actual records are affected.
In simulation mode, since no actual records are created, the Plugin generates dummy record IDs in the _target.csv files instead of the real record IDs produced in live mode. The simulated output may also sometimes differ from the live output, because normally, in each step, the Plugin uses all previously processed records to decide which target records should be processed next.
Such differences between simulated and live output are normal behavior of the Plugin. Because simulation mode leaves the records in the target org untouched (they have to be left untouched, since there is no record-rollback feature in the Salesforce API), you can't use it to test your migration task against target triggers, validation rules, or other functions that require an actual update of records.
However, you can still see which records are pulled from the Source and which are about to be pulled to the Target.
Example of export.json Configuration:
{
"objects": [],
"simulationMode": true
// ... other export.json properties
}
sourceRecordsCache (String) ⇧
Optional, Default: 'InMemory'. Allows storing records retrieved from the Source on the local disk instead of fetching them again on the next run of the same job. This can speed up the job performance. This option has the same available values as the binaryDataCache option but controls caching of records instead of binary data. The subdirectory containing the cache is: ./source_records_cache/[source_user_name].
Example of export.json Configuration:
{
"objects": [],
"sourceRecordsCache": "InMemory"
// ... other export.json properties
}
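For instance, to reuse retrieved source records across repeated runs of the same job, you could enable the persistent cache (a sketch; the option takes the same values as binaryDataCache):

```json
{
  "objects": [],
  // Cached records are stored under ./source_records_cache/[source_user_name]
  // and reused on the next run instead of querying the Source again.
  "sourceRecordsCache": "FileCache"
  // ... other export.json properties
}
```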
validateCSVFilesOnly (Boolean) ⇧
Optional, Default: false. If this property is set to false (default value), when using CSV files as a data source, the SFDMU performs a validation and a smart fixing of the source CSVs before actually running the migration job. Setting this property to true will stop execution after the CSV validation process is completed, allowing you just to detect possible issues in the files without updating the Target.
Example of export.json Configuration:
{
"objects": [],
"validateCSVFilesOnly": true
// ... other export.json properties
}
useSeparatedCSVFiles (Boolean) ⇧
Optional, Default: false. If this property is set to false (default value), when using CSV files as a data source and multiple Object Sets are in the export.json, the SFDMU will take the same CSV source files placed in the root working directory for each object set.
Setting this property to true will enforce the SFDMU to use separated CSV source files for each executed Object Set. For the first Object Set, it will always take the source files from the root working directory (to avoid backward incompatibility issues). For the rest of the Object Sets, it will take the source CSV files from the subdirectory with the following pattern: ./objectset_source/object-set-<ObjectSet Index>, e.g., ./objectset_source/object-set-2/.
Make sure you always put the source CSV files in the correct path: for example, Account.csv for object set #1 goes into ./Account.csv, and for object set #2 into ./objectset_source/object-set-2/Account.csv.
When core:ExportFiles is used with CSV media modes, this flag also affects binary path resolution per object set.
Example of export.json Configuration:
{
"objects": [],
"useSeparatedCSVFiles": true
// ... other export.json properties
}
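With two Object Sets and useSeparatedCSVFiles enabled, the expected source file layout would look like this (illustrative; only Account.csv is taken from the documented example):

```
./Account.csv                                 <- source for Object Set #1 (root working directory)
./objectset_source/object-set-2/Account.csv   <- source for Object Set #2
```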