Bulk API and Imports
Outreach API supports bulk operations on several models. It also supports importing data from CSV files to bring data from external sources into Outreach.
Bulk actions through API
Making requests
Bulk requests are shaped as follows:
POST https://api.outreach.io/api/v2/batches/actions/<bulkActionName>
Pass actionParams to the query as you would for other actions. Every bulk action supports the skipConfirmation parameter, which is covered in the "Skipping confirmation" section below. In the next section we will see how to apply filters with the filter parameter.
Browse the Batch definition in the API Reference to see the supported bulk actions and their parameters.
If you are using an OAuth token, make sure it has the batches.read and batches.write scopes, as well as the write scope of the targeted resource (e.g. accounts.write, prospects.write...).
Applying filters
As mentioned in the previous section, all bulk actions support the filter parameter. You can chain multiple filters, and the syntax should respect the newFilterSyntax. Here is an example of a bulk action on accounts:
POST https://api.outreach.io/api/v2/batches/actions/accountsBulkModify?actionParams[filter][id]=1&actionParams[filter][id]=2&actionParams[filter][id]=3&actionParams[filter][owner][id]=123
The bulk action will be triggered on the exact same records you would get with the corresponding collection GET request:
GET https://api.outreach.io/api/v2/accounts?filter[id]=1&filter[id]=2&filter[id]=3&filter[owner]=123&newFilterSyntax=true
In the absence of any filter the action would trigger on ALL the records it applies to.
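The filter query string above can be assembled programmatically. Here is a minimal sketch using Python's standard library; the helper name is an assumption, and only flat, single-level filter fields are handled:

```python
from urllib.parse import urlencode

def bulk_action_url(action, filters):
    """Build a bulk action URL, nesting each filter under actionParams[filter].
    Nested filters such as [owner][id] would need an extra level of keys."""
    base = "https://api.outreach.io/api/v2/batches/actions/" + action
    pairs = []
    for field, values in filters.items():
        if not isinstance(values, (list, tuple)):
            values = [values]
        # Repeat the key once per value, as in the example above.
        pairs.extend((f"actionParams[filter][{field}]", v) for v in values)
    return base + ("?" + urlencode(pairs) if pairs else "")
```

With no filters at all, the helper returns the bare action URL, which (as noted above) triggers the action on all records.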
Building the request's body
Make sure to read about "classic" POST requests. The data passed to bulk actions is given in the attributes. Here is an example of bulkModify on accounts:
POST https://api.outreach.io/api/v2/batches/actions/accountsBulkModify?actionParams[filter][id]=1,2,3
{
"data": {
"attributes": {
"field": "custom1",
"value": "edit from bulk"
}
}
}
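A request like the one above can be sketched as follows. The helper name, the injected post callable, and the Bearer/JSON:API headers are assumptions, not part of an official client:

```python
import json

def run_bulk_action(post, action, token, params=None, attributes=None):
    """POST a bulk action. `post` is any requests.post-like callable, injected
    so the sketch stays testable; `attributes` may be None for actions that
    take no payload."""
    url = "https://api.outreach.io/api/v2/batches/actions/" + action
    body = json.dumps({"data": {"attributes": attributes}}) if attributes else None
    headers = {
        "Authorization": "Bearer " + token,          # OAuth token (assumption)
        "Content-Type": "application/vnd.api+json",  # JSON:API media type (assumption)
    }
    return post(url, params=params or {}, headers=headers, data=body)
```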
Some actions, such as bulkDelete, don't take attributes. Those do not require any payload to be passed.
Understanding the result
The result given is a batch. This is the model used to track the bulk requests. It will look like this:
{
"data": {
"type": "batch",
"id": 1156,
"attributes": {
"action": "bulk_modify",
"canceledAt": null,
"confirmCount": 2,
"confirmedAt": null,
"confirmedCountRequired": null,
"contextId": 0,
"contextType": "",
"createdAt": "1701880061",
"failures": 0,
"finishedAt": null,
"pending": 0,
"startedAt": null,
"state": "pending",
"summary": null,
"total": 0,
"updatedAt": "1701880061"
},
"relationships": {
"batchItems": {
"links": {
"related": "https://api.outreach.io/api/v2/batchItems?filter%5Bbatch%5D%5Bid%5D=1156"
}
}
},
"links": {
"self": "https://api.outreach.io/api/v2/batches/1156"
}
}
}
The batch starts in either the pending or confirming state. We will talk about the latter in the next section. When it is pending you can keep track of the progress by querying the /batches/<id> endpoint with the id of the returned batch, e.g:
GET https://api.outreach.io/api/v2/batches/1156
Eventually the batch moves to finished, failed or canceled. In any case the summary might be updated with information relative to failures during the process (some failures are minor and won't cause the whole batch to fail). If no failures happen, the summary stays null. Additionally, the details of each processed item can be found through the batchItems endpoint. Ideally you will want to filter by the batch you are dealing with, e.g:
GET https://api.outreach.io/api/v2/batchItems?filter[batch][id]=1156
If you are interested only in failures you can filter based on the state:
GET https://api.outreach.io/api/v2/batchItems?filter[batch][id]=1156&filter[state]=error
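Polling for a terminal state can be sketched like this. The helper name and the injected fetch callable are assumptions; fetch stands in for any function that GETs a URL and returns the parsed JSON:

```python
import time

TERMINAL_STATES = {"finished", "failed", "canceled"}

def wait_for_batch(fetch, batch_id, poll_seconds=2, max_polls=30):
    """Poll GET /batches/<id> until the batch reaches a terminal state.
    `fetch` returns the parsed batch JSON (for example a thin wrapper
    around requests.get), injected to keep the sketch testable."""
    state = None
    for _ in range(max_polls):
        batch = fetch(f"https://api.outreach.io/api/v2/batches/{batch_id}")
        state = batch["data"]["attributes"]["state"]
        if state in TERMINAL_STATES:
            return batch
        time.sleep(poll_seconds)
    raise TimeoutError(f"batch {batch_id} still {state!r} after {max_polls} polls")
```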
Confirming bulk request
A batch gets into the confirming state when the confirmation hasn't been skipped (see Skipping confirmation). This means that an additional call is required to trigger the action to start. A batch in the confirming state will look like this:
{
"data": {
"type": "batch",
"id": 1156,
"attributes": {
// other attributes ...
"confirmCount": 2,
"confirmedCountRequired": true, // can also be null or false
"state": "confirming",
},
}
// relationships, links ...
}
When confirmedCountRequired is true, you must pass the confirmedCount when confirming:
POST https://api.outreach.io/api/v2/batches/1156/actions/confirm?actionParams[confirmedCount]=2
Otherwise you don't need to pass the confirmedCount (there is no harm in putting it though):
POST https://api.outreach.io/api/v2/batches/1156/actions/confirm
Note that you can also use the /cancel
action on the batch (without params). It will move the batch state to canceled
and stop processing of items if it ever started.
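A small helper can pick the right confirm call based on confirmedCountRequired. The function names are assumptions; the URLs follow the examples above:

```python
def confirm_url(batch):
    """Build the confirm call for a batch in the confirming state; pass
    confirmedCount when confirmedCountRequired is true (harmless otherwise)."""
    attrs = batch["data"]["attributes"]
    base = f"https://api.outreach.io/api/v2/batches/{batch['data']['id']}/actions/confirm"
    if attrs.get("confirmedCountRequired"):
        return f"{base}?actionParams[confirmedCount]={attrs['confirmCount']}"
    return base

def cancel_url(batch):
    """The cancel action takes no params."""
    return f"https://api.outreach.io/api/v2/batches/{batch['data']['id']}/actions/cancel"
```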
Skipping confirmation
When using bulk actions in automation, you may want to avoid the confirmation step. For that, when starting a new bulk, you can pass the skipConfirmation action parameter with a true value:
POST https://api.outreach.io/api/v2/batches/actions/accountsBulkModify?actionParams[filter][id]=1,2,3&actionParams[skipConfirmation]=true
Importing CSV through API
Our system has been designed to allow you to upload a CSV file to our S3 bucket and then use this file through its reference (the so-called storageKey).
This replicates the feature you may have seen in the Outreach Client.
To use the import feature with an OAuth token you will need the imports.read and imports.write scopes.
Uploading the file
First of all you will need to upload your CSV file to our S3 bucket. The first step is to generate the link; we use AWS presigned URLs. Let's take a look at the API call:
POST https://api.outreach.io/api/v2/imports/actions/generateUploadLink
The response body is slightly unconventional as the model doesn't have an id and it will look like this:
{
"data": {
"type": "upload",
"attributes": {
// presigned AWS S3 URL
"uploadUrl": "https://outreachimportsdata-us-east-2.s3.us-east-2.amazonaws.com/bento/folder1/folder2/folder3/data.csv?X-Amz-Algorithm=AWS4-HMAC-SHA256&otherParams...",
"storageKey": "adfaaf30-4514-49ff-ac0f-0c5bafb918b2"
}
}
}
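Extracting both values from the response might look like this (the helper name is an assumption; the upload itself is shown as a comment since it needs network access):

```python
import json

def parse_upload_link(response_body):
    """Extract uploadUrl and storageKey from the generateUploadLink response.
    Save both immediately: they cannot be retrieved again."""
    attrs = json.loads(response_body)["data"]["attributes"]
    return attrs["uploadUrl"], attrs["storageKey"]

# The upload itself is then a plain HTTP PUT of the raw CSV bytes to the
# presigned URL, e.g. with the requests library:
#   requests.put(upload_url, data=open("data.csv", "rb"))
```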
Carefully save the uploadUrl and the storageKey as you won't be able to get them again.
You can then upload your file to the uploadUrl using the PUT method. Make sure to read the AWS documentation.
Validating the file
After the upload is done, the file is in our S3 bucket but is not yet ready to be used. You will need to validate the file to ensure that it isn't corrupted. This is done by calling the following action:
POST https://api.outreach.io/api/v2/imports/actions/validateUpload?actionParams[storageKey]=adfaaf30-4514-49ff-ac0f-0c5bafb918b2&actionParams[hash]=abcde...
The hash is the HMAC-SHA512 hash of the file you uploaded.
The response will look like this:
{
"data": {
"type":"validateUpload",
"attributes": {
"headerValue": [{
"value":"object name",
"header":"name",
}],
"recordCount": "123"
}
}
}
The recordCount is the number of records the Outreach API found in the CSV file. You will need this in the next step.
Starting the import
You will again need the storageKey here. This is how we will find which file to use. You will also need to send us the number of records in the CSV file as recordCount.
You can then call the import action of your choice to process the uploaded CSV. You can find those by browsing the import's API Reference:
POST https://api.outreach.io/api/v2/imports/actions/accountsImport
{
"data": {
"attributes": {
"storageKey": "adfaaf30-4514-49ff-ac0f-0c5bafb918b",
"recordCount": "123",
"dupeMethod": "skip",
"mappings": {
"companyName": "name",
"<your-CSV-field>": "<outreach-field>",
}
}
}
}
Some fields must be unique across records; id is one of them for all models. Other fields may be unique as well for each specific model (e.g. id, email...). This is described in the import action itself (listed under the import's API Reference). The dupeMethod is used to treat those clashing records. It can either be:
skip (default): we will skip duplicates and no data will be stored from that row
missing: we will update empty fields in the record with their imported value
overwrite: we will update all the fields with their imported value
All other, non-duplicated records are created in the DB.
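The request body for an import action can be assembled with a small helper. The function name and the validation of dupeMethod are assumptions; the attribute names come from the example above:

```python
def import_payload(storage_key, record_count, mappings, dupe_method="skip"):
    """Body for an import action. `mappings` maps CSV headers to Outreach
    field names; `dupe_method` must be skip, missing or overwrite."""
    if dupe_method not in {"skip", "missing", "overwrite"}:
        raise ValueError(f"unknown dupeMethod: {dupe_method}")
    return {"data": {"attributes": {
        "storageKey": storage_key,
        "recordCount": record_count,
        "dupeMethod": dupe_method,
        "mappings": mappings,
    }}}
```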
The mappings are used to map the headers of your CSV file to Outreach field names. To ensure consistency, provide mappings for all the fields. You may leverage the headerValue from the previous step.
The response you will receive looks like this:
{
"data": {
"type": "import",
"id": 639,
"attributes": {
"createdAt": "2024-03-07T13:21:47.000Z",
"dupeMethod": "skip",
"dupes": 0,
"errorReason": null,
"externalId": null,
"externalName": null,
"externalType": null,
"failures": 0,
"fileName": "account.csv",
"fileSize": null,
"frequency": "once",
"loadFromPlugin": false,
"mappings": {
"Id": "id",
"Title": "title"
},
"pluginId": null,
"prospectOwnerId": null,
"recurring": false,
"reportInstanceId": null,
"scheduledAt": null,
"source": null,
"stageId": null,
"state": "progressing", // or finished or failed
"stateChangedAt": "2024-03-07T13:21:49.000Z",
"syncedUntil": null,
"timeZone": null,
"total": 1,
"type": "ImportAccount::Csv", // special case: Import::Csv for prospects
"updatedAt": "2024-03-07T13:21:49.000Z"
}
}
}
You can then track the state of the import using:
GET https://api.outreach.io/api/v2/imports/639
NOTE: under the hood, imports run in batches. You can find the corresponding batches using the following:
GET https://api.outreach.io/api/v2/batches?filter[contextType]=Import&filter[contextId]=639
Bulk rate limiting
Per request
For now, bulk requests can be triggered on a maximum of 100,000 items. If a request exceeds this amount it won't start and the batch will be moved to the failed state.
Per day
Also, together with the Outreach Client, there is a limit of 5 million records per day that can be processed through bulk actions.
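To stay under the per-request cap, a large id list can be split into chunks before triggering several bulk requests; a sketch (the helper name is an assumption):

```python
MAX_ITEMS_PER_BULK = 100_000  # per-request limit stated above

def chunk_ids(ids, limit=MAX_ITEMS_PER_BULK):
    """Split a large id list into bulk-request-sized chunks so no single
    request exceeds the per-request cap."""
    return [ids[i:i + limit] for i in range(0, len(ids), limit)]
```

Keep in mind the daily limit still applies across all chunks combined.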