Import files into S3 from Multi FileFeeds
Write processed Multi FileFeed output files into your Amazon S3 bucket using a saved S3 connection.
Availability: Multi FileFeeds with S3 destination connections enabled.
Use Import into S3 when a processed file should be imported by OneSchema into your S3 bucket. OneSchema assumes your configured IAM role and writes output files to the destination bucket and prefix configured on the Multi FileFeed.
Looking for the opposite direction? See Upload from S3 when files should be read from your S3 bucket and uploaded into a Multi FileFeed import.
Terminology
| Term | Direction | Meaning |
|---|---|---|
| Upload from S3 | Your S3 bucket → OneSchema | A file is uploaded into OneSchema by being read from a source S3 bucket. |
| Import into S3 | OneSchema → your S3 bucket | A processed file is imported by OneSchema into S3 by being written to a destination S3 bucket. |
| S3 account | Shared connection object | A saved OneSchema connection containing your IAM role ARN and external ID. |
How it works
```mermaid
sequenceDiagram
    participant Client as API Client
    participant OS as OneSchema
    participant DstS3 as Destination S3 Bucket<br/>(your AWS account)
    Client->>OS: Configure Multi FileFeed destination<br/>{type: "s3", bucket, prefix}
    OS->>OS: Process submitted import
    OS->>OS: Assume OneSchemaS3ConnectionReader
    OS->>OS: Assume your IAM role<br/>(external ID)
    OS->>DstS3: HeadBucket
    DstS3-->>OS: Bucket region
    OS->>DstS3: PutObject<br/>s3://bucket/prefix/file_name
    DstS3-->>OS: 200 OK
```
- You configure an S3 destination on a Multi FileFeed.
- When an import finishes processing, OneSchema prepares the output files.
- OneSchema assumes the intermediate `OneSchemaS3ConnectionReader` role in the OneSchema AWS account.
- From that intermediate role, OneSchema assumes your customer IAM role using the configured external ID.
- OneSchema writes each output file to `s3://DESTINATION_BUCKET/DESTINATION_PREFIX/{file_name}`.
If a file already exists at the same key, the destination object is overwritten. Use unique prefixes if you need to keep historical output files separate.
Prerequisites
Replace these placeholders with your values:
| Placeholder | Description |
|---|---|
| `DESTINATION_ACCOUNT_ID` | Your AWS account ID. |
| `DESTINATION_BUCKET` | The S3 bucket where OneSchema will write output files. |
| `DESTINATION_PREFIX` | Optional key prefix for output files, for example `oneschema/processed/`. |
| `ROLE_ACCOUNT_ID` | The AWS account ID that owns `CUSTOMER_S3_ROLE_NAME`. Usually this is the same as `DESTINATION_ACCOUNT_ID`. |
| `CUSTOMER_S3_ROLE_NAME` | The IAM role name OneSchema will assume, for example `OneSchemaS3AccessRole`. |
| `YOUR_EXTERNAL_ID` | Shared secret used in the IAM role trust policy. |
| `ONESCHEMA_ACCOUNT_ID` | OneSchema's AWS account ID for your deployment region. |
Your OneSchema team can provide `ONESCHEMA_ACCOUNT_ID`.
Step 1 — Create an S3 account in OneSchema
Create an S3 account with the IAM role ARN and external ID that OneSchema should use.
```bash
curl -X POST "https://api.oneschema.co/v0/s3-accounts" \
  -H "X-API-KEY: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Production S3 Destination",
    "role_arn": "arn:aws:iam::ROLE_ACCOUNT_ID:role/CUSTOMER_S3_ROLE_NAME",
    "external_id": "YOUR_EXTERNAL_ID"
  }'
```

Save the returned `id`; this is the `s3_account_id` used when configuring the Multi FileFeed destination.
API reference: Create S3 Account
You can also create or edit S3 accounts from the Connections page. The Test connection account button verifies that OneSchema can assume the configured role. It does not validate access to a specific destination bucket or prefix.
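Before saving the connection, it can help to sanity-check the inputs locally. The helper below is an illustrative sketch, not part of the OneSchema API; `validate_s3_account` is a name chosen here:

```python
import re

# Shape of an IAM role ARN: 12-digit account ID, then a role name/path.
ROLE_ARN_RE = re.compile(r"^arn:aws:iam::\d{12}:role/[\w+=,.@/-]+$")

def validate_s3_account(role_arn: str, external_id: str) -> list[str]:
    """Return a list of problems with the connection inputs (empty = OK)."""
    problems = []
    if not ROLE_ARN_RE.match(role_arn):
        problems.append(f"role_arn does not look like an IAM role ARN: {role_arn!r}")
    if not external_id:
        problems.append("external_id must be a non-empty shared secret")
    return problems

print(validate_s3_account(
    "arn:aws:iam::123456789012:role/OneSchemaS3AccessRole", "my-external-id"
))  # []
```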
Step 2 — Configure the IAM role trust policy
Allow OneSchema's intermediate role to assume your customer IAM role.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOneSchemaS3ConnectionReader",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ONESCHEMA_ACCOUNT_ID:role/OneSchemaS3ConnectionReader"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "YOUR_EXTERNAL_ID"
        }
      }
    }
  ]
}
```

The external ID in this trust policy must exactly match the external ID saved on the OneSchema S3 account.
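A quick way to confirm the trust policy and the saved connection agree is to compare the principal and external ID programmatically. `trust_policy_matches` is a hypothetical helper written for this guide, not a OneSchema or AWS API:

```python
def trust_policy_matches(policy: dict, oneschema_account_id: str, external_id: str) -> bool:
    """Check that some statement lets OneSchemaS3ConnectionReader assume
    the role under the expected sts:ExternalId condition."""
    expected_principal = (
        f"arn:aws:iam::{oneschema_account_id}:role/OneSchemaS3ConnectionReader"
    )
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        # IAM accepts a single action as a string or a list.
        if stmt.get("Action") not in ("sts:AssumeRole", ["sts:AssumeRole"]):
            continue
        if stmt.get("Principal", {}).get("AWS") != expected_principal:
            continue
        cond = stmt.get("Condition", {}).get("StringEquals", {})
        if cond.get("sts:ExternalId") == external_id:
            return True
    return False

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOneSchemaS3ConnectionReader",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:role/OneSchemaS3ConnectionReader"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "YOUR_EXTERNAL_ID"}},
    }],
}
print(trust_policy_matches(trust_policy, "111122223333", "YOUR_EXTERNAL_ID"))  # True
```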
Step 3 — Grant destination bucket write permissions
Attach an IAM policy to CUSTOMER_S3_ROLE_NAME that allows OneSchema to locate the bucket and write output files.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AccessDestinationBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::DESTINATION_BUCKET"
    },
    {
      "Sid": "WriteDestinationObjects",
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::DESTINATION_BUCKET/DESTINATION_PREFIX/*"
    }
  ]
}
```

If you are writing to the bucket root, use:

```json
"Resource": "arn:aws:s3:::DESTINATION_BUCKET/*"
```

If the destination bucket uses a customer-managed KMS key for default encryption, also grant:
```json
{
  "Sid": "EncryptDestinationObjects",
  "Effect": "Allow",
  "Action": ["kms:GenerateDataKey", "kms:Encrypt"],
  "Resource": "arn:aws:kms:REGION:DESTINATION_ACCOUNT_ID:key/YOUR_KMS_KEY_ID"
}
```

Contact your OneSchema team if you need OneSchema to send optional S3 `PutObject` settings such as a storage class, ACL, or explicit server-side encryption parameters.
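The permissions policy above can also be generated rather than hand-edited. `write_policy` is an illustrative sketch that builds the two statements from Step 3 and handles the bucket-root case (empty prefix):

```python
def write_policy(bucket: str, prefix: str = "") -> dict:
    """Build the Step 3 IAM permissions policy for a destination bucket.

    With an empty prefix, PutObject is granted on the whole bucket
    (arn:aws:s3:::bucket/*), matching the bucket-root variant above.
    """
    clean = prefix.strip("/")
    object_arn = (
        f"arn:aws:s3:::{bucket}/{clean}/*" if clean else f"arn:aws:s3:::{bucket}/*"
    )
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AccessDestinationBucket",
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": f"arn:aws:s3:::{bucket}",
            },
            {
                "Sid": "WriteDestinationObjects",
                "Effect": "Allow",
                "Action": ["s3:PutObject"],
                "Resource": object_arn,
            },
        ],
    }
```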
If the destination bucket is in a different AWS account than CUSTOMER_S3_ROLE_NAME, the bucket policy must also allow that exact role principal. Avoid wildcard principals; S3 Block Public Access can reject them.
```json
[
  {
    "Sid": "AllowOneSchemaCustomerRoleListBucket",
    "Effect": "Allow",
    "Principal": {
      "AWS": "arn:aws:iam::ROLE_ACCOUNT_ID:role/CUSTOMER_S3_ROLE_NAME"
    },
    "Action": ["s3:ListBucket"],
    "Resource": "arn:aws:s3:::DESTINATION_BUCKET"
  },
  {
    "Sid": "AllowOneSchemaCustomerRoleWrites",
    "Effect": "Allow",
    "Principal": {
      "AWS": "arn:aws:iam::ROLE_ACCOUNT_ID:role/CUSTOMER_S3_ROLE_NAME"
    },
    "Action": ["s3:PutObject"],
    "Resource": "arn:aws:s3:::DESTINATION_BUCKET/DESTINATION_PREFIX/*"
  }
]
```

Step 4 — Configure the Multi FileFeed S3 destination
Set the Multi FileFeed destination to S3. You can do this in the dashboard or through the API.
`PATCH /v0/multi-file-feeds/{multi_file_feed_id}`
```bash
curl -X PATCH "https://api.oneschema.co/v0/multi-file-feeds/42" \
  -H "X-API-KEY: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "destination": {
      "type": "s3",
      "data": {
        "s3_account_id": 1,
        "bucket": "DESTINATION_BUCKET",
        "prefix": "DESTINATION_PREFIX"
      }
    }
  }'
```

Set `prefix` to an empty string or omit it to write to the bucket root. Otherwise, OneSchema writes each output file under that prefix.
API reference: Update a Multi FileFeed
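The prefix-handling rule can be captured in a small builder for the PATCH body. `s3_destination` is a name chosen for this sketch; it simply omits `prefix` when empty so files land at the bucket root:

```python
def s3_destination(s3_account_id: int, bucket: str, prefix: str = "") -> dict:
    """Build the Multi FileFeed destination body used in the PATCH request.

    An empty prefix is omitted so output files are written to the
    bucket root rather than under an empty path segment.
    """
    data = {"s3_account_id": s3_account_id, "bucket": bucket}
    if prefix:
        data["prefix"] = prefix
    return {"destination": {"type": "s3", "data": data}}

print(s3_destination(1, "my-bucket", "oneschema/processed"))
```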
Step 5 — Run an import
After the destination is configured, run the Multi FileFeed normally:
- Create a Multi FileFeed import.
- Upload source files by direct upload, Upload from S3, an SFTP source, or another supported method.
- Submit the import.
- OneSchema processes the import and writes output files to the configured S3 destination.
Output files are written to:
```
s3://DESTINATION_BUCKET/DESTINATION_PREFIX/{file_name}  # when prefix is set
s3://DESTINATION_BUCKET/{file_name}                     # when prefix is empty or omitted
```

Full example
The following example configures the Multi FileFeed destination. After configuration, create an import, upload source files using any supported source method, and submit the import for processing.
```python
import requests

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.oneschema.co"
MFF_ID = 42
S3_ACCOUNT_ID = 1

HEADERS = {"X-API-KEY": API_KEY}

# Configure the S3 destination on the Multi FileFeed
resp = requests.patch(
    f"{BASE_URL}/v0/multi-file-feeds/{MFF_ID}",
    headers=HEADERS,
    json={
        "destination": {
            "type": "s3",
            "data": {
                "s3_account_id": S3_ACCOUNT_ID,
                "bucket": "DESTINATION_BUCKET",
                "prefix": "DESTINATION_PREFIX",
            },
        }
    },
)
resp.raise_for_status()
```

Troubleshooting
| Symptom | Likely cause | What to check |
|---|---|---|
| Test connection account fails | OneSchema cannot assume your IAM role. | Check the role ARN, trust policy principal, and external ID. |
| Import processing succeeds but no object appears in S3 | The destination may not be configured on the Multi FileFeed or the prefix may differ from expected. | Check the Multi FileFeed destination settings and destination prefix. |
| Destination write fails with access denied | Your role cannot access the bucket or write objects. | Check s3:ListBucket, s3:PutObject, bucket policy restrictions, and destination prefix. |
| Destination write fails with a KMS error | The bucket default KMS key does not allow the assumed role to encrypt. | Grant kms:GenerateDataKey and kms:Encrypt on the KMS key. |
| Objects are overwritten | OneSchema writes to deterministic keys under the configured prefix. | Use a unique destination prefix per run or include uniqueness in output filenames. |
Limits and behavior
| Constraint | Value |
|---|---|
| Destination path | s3://DESTINATION_BUCKET/DESTINATION_PREFIX/{file_name} when prefix is set; s3://DESTINATION_BUCKET/{file_name} when prefix is empty or omitted. |
| Write operation | PutObject |
| Existing object behavior | Existing objects at the same key are overwritten. |
| Recommended region | Destination bucket in the same AWS region as your OneSchema deployment. |