Exporting FieldKo Data with Bulk API 2.0

Export large volumes of FieldKo records asynchronously using Salesforce’s Bulk API 2.0

When you need to export a large volume of FieldKo records (such as Visits, Tasks, or Surveys), Salesforce’s Bulk API 2.0 is the go-to solution. Bulk API 2.0 enables asynchronous processing of SOQL queries and is designed for queries returning thousands or even millions of records.

Instead of hitting normal query limits, Bulk API 2.0 lets you run a query job in the background and retrieve the results when ready – perfect for big exports without timing out the system.

Why use Bulk API 2.0?

Bulk API 2.0 is more streamlined than the older Bulk API 1.0. It requires less client-side code and automatically handles breaking the job into batches (chunking) and retrying failed chunks. In practice, any export of more than ~2,000 records is a good candidate for Bulk API 2.0. By using Bulk API 2.0, you avoid manual data splitting and gain reliability for large exports.

Preparing the Export Query

First, decide which object and fields you need to export and if you can filter by criteria (for example, exporting only the last 1 year of Visit records). Write a SOQL query selecting the required fields. For instance, to export all Visit__c records you might prepare:

SELECT Id, Name, Date__c, Status__c, ... FROM Visit__c WHERE Date__c = LAST_N_YEARS:1

Include filters to limit scope if possible, as this will reduce the data volume and speed up the export. Ensure the query is Bulk API compatible – for example, Bulk API 2.0 query jobs don’t support aggregate functions such as COUNT(), or GROUP BY and OFFSET clauses – so keep it straightforward.
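As a rough illustration, the export query can be assembled programmatically before submitting it. This is a minimal Python sketch; `Visit__c` and the field names are taken from the example above, and any other object or fields would slot in the same way:

```python
# Sketch: assemble the export SOQL from a field list and an optional filter.
# Object and field names here are just the example values from this article.

def build_export_query(sobject, fields, where=None):
    """Return a SOQL string suitable for a Bulk API 2.0 query job."""
    soql = f"SELECT {', '.join(fields)} FROM {sobject}"
    if where:
        # A WHERE filter keeps the export scoped and faster to run.
        soql += f" WHERE {where}"
    return soql

query = build_export_query(
    "Visit__c",
    ["Id", "Name", "Date__c", "Status__c"],
    where="Date__c = LAST_N_YEARS:1",
)
print(query)
```

Building the string in one place makes it easy to reuse the same query text in the job-creation request later.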

Authenticating with Salesforce

Bulk API 2.0 is a REST API, so you’ll need to authenticate your client. You can use an OAuth access token or a Session ID. Tools like Workbench or Postman make this easier:

  • Workbench: Login with your Salesforce credentials and security token. Workbench runs within your browser and will handle authentication for you once logged in.

  • Postman: You’ll need to set up an OAuth flow or use a saved session token. For example, you can use the OAuth 2.0 JWT Bearer flow or Username-Password flow to obtain a bearer token, then include Authorization: Bearer <token> in your calls.
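To make the Username-Password flow concrete, here is a sketch of the form body such a token request would carry. All credential values below are placeholders; in production, prefer the JWT Bearer flow and never hard-code secrets:

```python
# Sketch: form parameters for the OAuth 2.0 Username-Password token request.
# Every credential value here is a placeholder, not a real secret.
from urllib.parse import urlencode

def token_request_body(client_id, client_secret, username, password, security_token):
    """Build the URL-encoded body POSTed to /services/oauth2/token."""
    return urlencode({
        "grant_type": "password",
        "client_id": client_id,
        "client_secret": client_secret,
        "username": username,
        # Salesforce expects the security token appended to the password.
        "password": password + security_token,
    })

body = token_request_body("myClientId", "myClientSecret",
                          "user@example.com", "hunter2", "SECTOKEN")
```

The JSON response to that POST contains the `access_token` you then pass as `Authorization: Bearer <token>` on every Bulk API call.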

Submitting a Bulk API Query Job

Using Bulk API 2.0 involves creating a query job, waiting for it to complete, and then retrieving results:

  1. Create the Query Job: This is done via an HTTP POST to the Bulk API 2.0 endpoint. For example, you would POST to:
    POST https://yourInstance.salesforce.com/services/data/vXX.X/jobs/query
    Include a JSON body specifying the SOQL query and that it’s a query operation. For example:

    {
      "operation": "query",
      "query": "SELECT Id, Name, Date__c, Status__c FROM Visit__c WHERE Date__c = LAST_N_YEARS:1",
      "contentType": "CSV"
    }

    This creates an asynchronous job on Salesforce. (In Workbench, you can do this under Queries > Bulk CSV Query, by entering your SOQL and starting the job.)

    Salesforce will respond with a Job ID (e.g. 750xx0000000045AAA). You’ll use this ID to check status and fetch results. (Tip: If using Workbench, it will automatically show the job and you can monitor it on the Bulk Data Load Jobs page.)

  2. Monitor Job Status: The job will run in the background. You can check its status with a GET request to /services/data/vXX.X/jobs/query/<JobID>. The status might be Queued, InProgress, or JobComplete (or Failed if something went wrong). A JobComplete status means the query finished and results are ready. (Workbench will periodically poll this for you and show when complete.)

  3. Retrieve Results: Once complete, use a GET request to retrieve the data. For example:
    GET https://yourInstance.salesforce.com/services/data/vXX.X/jobs/query/<JobID>/results (with your auth header). If the dataset is large, the response might not contain all rows at once – Bulk API 2.0 may include a locator value in the response headers (e.g. Sforce-Locator). This indicates there are more result chunks to fetch. You would then call the same URL with a locator=<locator value> query parameter to get the next batch, and repeat until all data is retrieved. Each chunk will be a CSV of up to a certain number of records.

    In Workbench, when the job is complete, you’ll see a “Download Results” link which handles retrieving all parts for you. Click it to download the CSV file of your records.

  4. Handle Large Datasets: Bulk API 2.0 will automatically chunk very large queries behind the scenes if the object supports it (using PK Chunking on objects with millions of records). This means Salesforce might run your query in parallel on different ID ranges to speed it up. The results retrieval with the locator will abstract this. Be aware that extremely large exports (tens of millions of rows) might result in multiple result files – ensure you download all parts. Also note that Bulk API query jobs are subject to Salesforce limits on result size and rows per job; check the current Bulk API limits documentation for exact figures.

  5. Tips and Tools:

    • Testing in Postman/Workbench: It’s wise to test your Bulk API process on a smaller query first (for example, LIMIT 1000) to make sure you understand the flow. Workbench’s Bulk Query interface is user-friendly for this.

    • Handling CSV Format: The results will be in CSV by default. Check for any special formatting needed (e.g., line breaks in text fields will be escaped).

    • Error Handling: If the job fails or partially succeeds, Salesforce will provide error messages. For example, if your SOQL was invalid, the job will fail quickly with a parse error. You can retrieve the failure message via the job status endpoint.
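The retrieve-with-locator loop from step 3 can be sketched as a small function. `fetch_page` here is a stand-in (an assumption, not a real library call) for the HTTP GET to the `/results` endpoint; it returns the CSV text of one page plus the Sforce-Locator value, or None when there are no more pages:

```python
# Sketch of the Sforce-Locator pagination loop from step 3.
# fetch_page stands in for a GET to .../jobs/query/<JobID>/results and
# returns (csv_chunk, locator); locator is None/"null" on the last page.

def download_all_results(fetch_page):
    chunks = []
    locator = None
    while True:
        csv_chunk, locator = fetch_page(locator)
        chunks.append(csv_chunk)
        # No locator (or the literal value "null") means the final page.
        if locator in (None, "null"):
            break
    # Simple concatenation; if each real page repeats the CSV header row,
    # strip the duplicate headers when stitching pages together.
    return "".join(chunks)

# Simulated two-page result set for illustration:
pages = {None: ("Id,Name\n1,A\n", "loc1"), "loc1": ("2,B\n", None)}
data = download_all_results(lambda loc: pages[loc])
```

Injecting `fetch_page` as a parameter keeps the loop testable without a live org; in a real script it would wrap the authenticated GET call.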

Example using cURL

To illustrate, creating a Bulk API query via cURL might look like:

curl https://yourInstance.salesforce.com/services/data/v47.0/jobs/query \
  -H "Authorization: Bearer <YourAccessToken>" \
  -H "Content-Type: application/json" \
  -d @request.json

Where request.json contains the JSON with your query as shown above. Salesforce will return a response JSON containing the job ID. Then you’d use cURL GET calls to the results URL as described. In practice, using a tool (Workbench or a Python script using Salesforce APIs) can simplify this.
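If you go the Python-script route, the job-creation call from step 1 can be built with the standard library alone. This sketch only constructs the request (the instance URL, API version, and token are placeholders); sending it with `urllib.request.urlopen(req)` requires a real org and token:

```python
# Sketch: build the Bulk API 2.0 job-creation request from step 1.
# INSTANCE, API_VERSION, and the token are placeholders; the request is
# constructed here but not sent.
import json
import urllib.request

INSTANCE = "https://yourInstance.salesforce.com"
API_VERSION = "v47.0"

def create_query_job_request(token, soql):
    """Return a prepared POST request for /jobs/query."""
    payload = json.dumps({
        "operation": "query",
        "query": soql,
        "contentType": "CSV",
    }).encode("utf-8")
    return urllib.request.Request(
        url=f"{INSTANCE}/services/data/{API_VERSION}/jobs/query",
        data=payload,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

req = create_query_job_request("<YourAccessToken>", "SELECT Id FROM Visit__c")
# Sending it would be: urllib.request.urlopen(req)
```

The same pattern (swap the URL and method) covers the status GET and results GET from steps 2 and 3.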

Summary

Bulk API 2.0 is ideal for exporting FieldKo data at scale because it’s asynchronous and robust. You prepare a SOQL query, let Salesforce process it in the background, and then download the results. This avoids timeouts and manual effort. Remember to securely store your Salesforce credentials or token when using API tools, and always verify the output (for example, row count) to ensure you got all expected data. By using Bulk API 2.0, you can confidently extract thousands or millions of FieldKo records for analysis or backup without impacting your users.
