There are four methods for importing validated data once it has been processed by OneSchema.
| Method | Best for |
| --- | --- |
| Frontend passthrough | Smaller files (< 50k rows); the easiest to implement. |
| Paginated JSON GET endpoint | Very large files, or when you'd like to request the data in batches of a certain size. |
| S3 URL | Fetching entire CSV, Excel, or JSON files via our S3 download links; also good for very large files. |
| Webhook JSON | Receiving batches of data via a webhook endpoint rather than requesting it via API. |
Frontend passthrough is useful if you already have a process in place for importing data received from your frontend. It is not recommended for files exceeding 50k rows.

To use frontend passthrough, do not specify a webhook key in your OneSchema configuration. The validated results will then be passed as a parameter to your implementation's success handler.
Data will be passed via a `success` event (or an `onSuccess` event, depending on which SDK you use).
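A minimal sketch of a success handler is below. The payload shape (a `records` array of key/value rows) and the handler name are assumptions for illustration; consult the OneSchema SDK reference for the exact interface.

```typescript
// Hypothetical shape of the passthrough payload: field names here are
// assumptions, not the documented SDK interface.
interface PassthroughPayload {
  records: Array<Record<string, string>>;
}

// Forward validated rows to your existing import pipeline and return
// how many rows were handled.
function handleSuccess(payload: PassthroughPayload): number {
  for (const row of payload.records) {
    // e.g. enqueue `row` for your backend import job
  }
  return payload.records.length;
}
```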
The Get Rows endpoint allows data to be requested with a starting row index and row count. This is best for when you'd like to request batched data from your backend.
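One way to drive a batched fetch is to precompute the (start index, row count) pairs for each request. The helper below is a sketch; map its outputs onto the Get Rows endpoint's actual query parameters.

```typescript
// Compute the (startIndex, rowCount) pairs needed to page through
// `totalRows` rows in batches of at most `batchSize`.
function batchRanges(totalRows: number, batchSize: number): Array<[number, number]> {
  const ranges: Array<[number, number]> = [];
  for (let start = 0; start < totalRows; start += batchSize) {
    // Final batch may be smaller than batchSize.
    ranges.push([start, Math.min(batchSize, totalRows - start)]);
  }
  return ranges;
}
```

Each pair becomes one request to the endpoint, so a 2,500-row file fetched in batches of 1,000 takes three requests.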
The Export a CSV endpoint (also available in Excel and JSON flavors) provides a URL to OneSchema's S3 bucket. This is best for when you'd like to request the entire data file from your backend.
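After downloading the file from the returned URL, you'll need to parse it. The sketch below handles the CSV flavor under simplifying assumptions (a header row, no quoted fields containing commas or newlines); use a real CSV parser for production data.

```typescript
// Naive CSV parsing sketch: assumes a header row and no quoted fields.
function parseCsv(text: string): Array<Record<string, string>> {
  const [header, ...lines] = text.trim().split("\n");
  const columns = header.split(",");
  return lines.map((line) => {
    const values = line.split(",");
    // Pair each header column with the value in the same position.
    return Object.fromEntries(columns.map((c, i) => [c, values[i] ?? ""]));
  });
}
```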
Importing via webhook allows you to process a large number of rows through pagination. OneSchema will send requests to an endpoint you host with metadata and up to 1,000 rows of processed data at a time. The data will come over as a JSON array, with each object representing a row of data and each row containing key-value pairs. See the schema of webhook payload metadata for more information about the format of the data.
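A handler for one webhook batch might look like the sketch below. The body shape (`metadata` plus a `rows` array) is an assumption for illustration; the real field names are defined by the webhook payload metadata schema.

```typescript
// Hypothetical webhook request body; field names are assumptions.
interface WebhookBody {
  metadata: Record<string, unknown>;
  rows: Array<Record<string, string>>; // up to 1,000 rows per request
}

// Process one batch and return how many rows were handled.
function handleWebhookBatch(body: WebhookBody): number {
  if (body.rows.length > 1000) {
    // OneSchema sends at most 1,000 rows per request.
    throw new Error("unexpected batch size");
  }
  for (const row of body.rows) {
    // persist or enqueue each key/value row
  }
  return body.rows.length;
}
```

Your endpoint should respond quickly and do heavy processing asynchronously, since each batch is a separate inbound request.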
Detailed information can be found on the Importer Webhook documentation page.