Document API operational limits #18

Merged
merged 3 commits on Sep 16, 2024

19 changes: 19 additions & 0 deletions api-reference/v2/general/limits.mdx
@@ -0,0 +1,19 @@
---
title: Limits
description: 'Rate and operational limits for the Glide API'
---

## Payload Limits

You should not send more than 15MB of data in a single request. If you need to work with more data, use [stashing](/api-reference/v2/stashing/introduction) to upload the data in 15MB chunks.
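
For illustration, a minimal TypeScript sketch of splitting a large row array into chunks that each stay under this limit could look like the following; the headroom reserved below 15MB is an assumption for request overhead, not an API requirement.

```typescript
// Illustrative sketch: group rows into chunks whose serialized JSON stays
// under the per-request payload limit. The headroom below the 15MB limit
// is an assumption to leave room for request overhead, not an API rule.
const MAX_CHUNK_BYTES = 14 * 1024 * 1024;
const encoder = new TextEncoder();

function chunkRows<T>(rows: T[]): T[][] {
  const chunks: T[][] = [];
  let current: T[] = [];
  let currentBytes = 2; // accounts for the enclosing "[]"

  for (const row of rows) {
    // +1 for the comma separating array elements
    const rowBytes = encoder.encode(JSON.stringify(row)).length + 1;
    if (current.length > 0 && currentBytes + rowBytes > MAX_CHUNK_BYTES) {
      chunks.push(current);
      current = [];
      currentBytes = 2;
    }
    current.push(row);
    currentBytes += rowBytes;
  }
  if (current.length > 0) chunks.push(current);
  return chunks;
}
```

Each resulting chunk can then be uploaded as a separate stash request.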

## Row Limits

Even when using stashing, there are limits to the number of rows you can work with in a single request. These limits are approximate and depend on the size of the rows in your dataset.


| Endpoint | Row Limit |
|---------------------------------------------------------------|------------|
| [Create Table](/api-reference/v2/tables/post-tables) | 8 million |
| [Overwrite Table](/api-reference/v2/tables/put-tables) | 8 million |
| [Add Rows to Table](/api-reference/v2/tables/post-table-rows) | 250,000 |
12 changes: 0 additions & 12 deletions api-reference/v2/general/rate-limits.mdx

This file was deleted.

6 changes: 6 additions & 0 deletions api-reference/v2/resources/changelog.mdx
@@ -3,6 +3,12 @@ title: Glide API Changelog
sidebarTitle: Changelog
---

### September 13, 2024

- Introduced a new "Limits" document that outlines rate and operational limits for the API.
- Updated guidelines for when to use stashing in line with the new doc.
- Fixed the Bulk Import tutorial to use PUT instead of POST for the Stash Data endpoint.

### September 4, 2024

- Removed "json" as a valid data type in column schemas for now.
4 changes: 2 additions & 2 deletions api-reference/v2/stashing/introduction.mdx
@@ -13,9 +13,9 @@ Once all data has been uploaded to the stash, the stash can then be referenced i

## When to Use Stashing

You should use stashing when:
You should use stashing when both of the following conditions are met:

* You have a large dataset that you want to upload to Glide. Anything larger than 5mb should be broken up into smaller chunks and stashed.
* You have a large dataset that you want to upload to Glide. Anything larger than [15MB](/api-reference/v2/general/limits) should be broken up into smaller chunks and stashed.
* You want to perform an atomic operation using a large dataset. For example, you may want to perform an import of data into an existing table but don't want users to see the intermediate state of the import or incremental updates while they're using their application.

## Stash IDs and Serials
14 changes: 7 additions & 7 deletions api-reference/v2/tutorials/bulk-import.mdx
@@ -36,25 +36,25 @@ You are responsible for ensuring that the stash ID is unique and stable across a

## Upload Data

Once you have a stable stash ID, you can use the [stash data endpoint](/api-reference/v2/stashing/post-stashes-serial) to upload the data in stages.
Once you have a stable stash ID, you can use the [stash data endpoint](/api-reference/v2/stashing/put-stashes-serial) to upload the data in chunks.

Upload stages can be run in parallel to speed up the upload of large dataset, just be sure to use the same stash ID across uploads to ensure the final data set is complete.
Chunks can be sent in parallel to speed up the upload of large datasets. Use the same stash ID across uploads to ensure the final data set is complete, and use the serial to control the order of the chunks within the stash.

As an example, the following [stash](/api-reference/v2/stashing/post-stashes-serial) requests will create a final dataset consisting of the two rows identified by the stash ID `20240501-import`.
As an example, the following [stash](/api-reference/v2/stashing/put-stashes-serial) requests will create a final dataset consisting of the two rows identified by the stash ID `20240501-import`. The trailing parameters of `1` and `2` in the request path are the serial IDs. The data in serial `1` will come first in the stash, and the data in serial `2` will come second, even if the requests are processed in a different order.

<Tabs>
<Tab title="POST /stashes/20240501-import/1">
<Tab title="PUT /stashes/20240501-import/1">
```json
[
{
"Name": "Alex",
"Age": 30,
"Birthday": "2024-07-03T10:24:08.285Z"
}
},
]
```
</Tab>
<Tab title="POST /stashes/20240501-import/2">
<Tab title="PUT /stashes/20240501-import/2">
```json
[
{
@@ -67,7 +67,7 @@ As an example, the following [stash](/api-reference/v2/stashing/post-stashes-ser
</Tab>
</Tabs>

<Note>The trailing parameters of `1` and `2` in the request path are the serial IDs, which distinguish and order the two uploads within the stash.</Note>
<Note>The above is just an example. In practice, you should include more than one row per stash chunk, and if your complete dataset is only 2 rows, you do not need to use stashing at all. See [Limits](/api-reference/v2/general/limits) for guidance.</Note>
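
A rough TypeScript sketch of issuing these chunk uploads in parallel is shown below; the base URL and Bearer auth header are placeholders, and the second chunk's row is invented for illustration. The path shape `PUT /stashes/{stashID}/{serial}` follows the examples in this tutorial.

```typescript
// Rough sketch of sending the chunks above in parallel. The base URL and
// Bearer auth header are placeholders, and the second chunk's row is made
// up for illustration; the path shape PUT /stashes/{stashID}/{serial}
// follows the examples in this tutorial.
async function uploadChunks(apiKey: string): Promise<void> {
  const stashId = "20240501-import";
  const chunks = [
    [{ Name: "Alex", Age: 30, Birthday: "2024-07-03T10:24:08.285Z" }],
    [{ Name: "Sam", Age: 25, Birthday: "1999-01-15T09:00:00.000Z" }],
  ];

  await Promise.all(
    chunks.map((chunk, index) =>
      fetch(`https://api.example.com/stashes/${stashId}/${index + 1}`, {
        method: "PUT",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${apiKey}`,
        },
        body: JSON.stringify(chunk),
      })
    )
  );
}
```

Because the serial in the path fixes each chunk's position within the stash, the requests can complete in any order without affecting the final dataset.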

## Finalize Import

3 changes: 2 additions & 1 deletion mint.json
@@ -27,7 +27,8 @@
"pages": [
"api-reference/v2/general/introduction",
"api-reference/v2/general/authentication",
"api-reference/v2/general/errors"
"api-reference/v2/general/errors",
"api-reference/v2/general/limits"
]
},
{