
OpenAPI spec update from glideapps/glide#31344 #35

Merged
merged 3 commits on Dec 16, 2024

41 changes: 40 additions & 1 deletion api-reference/v2/general/errors.mdx
@@ -52,4 +52,43 @@ curl --request PUT \
"message": "Invalid request params: Stash ID must be 256 characters max, alphanumeric with dashes and underscores, no leading dash or underscore"
}
}
```
```

### Invalid Row Data

When adding or updating rows in a table, if the row data does not match the table schema, the API will return a `422` response status.

#### Unknown Column

```json
{
"error": {
"type": "column_id_not_found",
"message": "Unknown column ID 'foo'"
}
}
```

#### Invalid Value for Column

```json
{
"error": {
"type": "column_has_invalid_value",
"message": "Invalid value for column 'foo'"
}
}
```

### Row Not Found

When attempting to update a row that does not exist, the API will return a `404` response status.

```json
{
"error": {
"type": "row_not_found",
"message": "Row with ID 'XHz6kF2XSTGi1ADDbryjqw' not found"
}
}
```
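
For illustration only, a request shaped like the sketch below would trigger the `column_id_not_found` error documented above. The base URL, table ID, token, column name, and bearer-token auth are placeholders and assumptions, not taken from this spec.

```bash
# Hypothetical request: tries to add a row whose column ID ("foo") is not in the table schema.
API_BASE="https://api.example.com"   # placeholder base URL
TABLE_ID="YOUR_TABLE_ID"             # placeholder table ID

curl --request POST \
  --url "$API_BASE/tables/$TABLE_ID/rows" \
  --header "Authorization: Bearer YOUR_API_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{ "rows": [ { "foo": "bar" } ] }'

# Per the docs above, the expected result is HTTP 422 with
# {"error": {"type": "column_id_not_found", "message": "Unknown column ID 'foo'"}}
```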
5 changes: 5 additions & 0 deletions api-reference/v2/resources/changelog.mdx
@@ -3,6 +3,11 @@ title: Glide API Changelog
sidebarTitle: Changelog
---

### December 13, 2024

- Clarified that endpoints return row IDs in the same order as the input rows.
- Clarified the requirements for row data to match the table's schema and what happens if it doesn't.

### November 26, 2024

- Added a warning that using the `PUT /tables` endpoint to overwrite a table will clear user-specific columns.
4 changes: 3 additions & 1 deletion api-reference/v2/tables/delete-table-row.mdx
@@ -3,4 +3,6 @@ title: Delete Row
openapi: delete /tables/{tableID}/rows/{rowID}
---

Deletes a row in a Big Table. No error is returned if the row does not exist.
Deletes a row in a Big Table.

No error is returned if the row does not exist.
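
A minimal sketch of this call, with placeholder base URL, IDs, and token (bearer auth assumed):

```bash
# Hypothetical delete: completes without error even if the row ID does not exist.
API_BASE="https://api.example.com"        # placeholder base URL
TABLE_ID="YOUR_TABLE_ID"                  # placeholder table ID
ROW_ID="zcJWnyI8Tbam21V34K8MNA"           # row ID format used elsewhere in this PR

curl --request DELETE \
  --url "$API_BASE/tables/$TABLE_ID/rows/$ROW_ID" \
  --header "Authorization: Bearer YOUR_API_TOKEN"
```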
4 changes: 3 additions & 1 deletion api-reference/v2/tables/patch-table-row.mdx
@@ -3,4 +3,6 @@ title: Update Row
openapi: patch /tables/{tableID}/rows/{rowID}
---

Updates an existing row in a Big Table.
Updates an existing row in a Big Table.

If a column is not included in the passed row data, it will not be updated. If a column is passed that does not exist in the table schema, or with a value that does not match the column's type, the default behavior is for no update to be made and the API call to [return an error](/api-reference/v2/general/errors#invalid-row-data). However, you can control this behavior with the `onSchemaError` query parameter.
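
As a sketch of this behavior (placeholders throughout; the request-body shape and the `onSchemaError` value are illustrative, so check the endpoint's parameter reference for the accepted options):

```bash
# Hypothetical partial update: only the columns present in the body are changed.
API_BASE="https://api.example.com"        # placeholder base URL
TABLE_ID="YOUR_TABLE_ID"                  # placeholder table ID
ROW_ID="zcJWnyI8Tbam21V34K8MNA"           # row ID format used elsewhere in this PR

# "<option>" stands in for one of the documented onSchemaError values.
curl --request PATCH \
  --url "$API_BASE/tables/$TABLE_ID/rows/$ROW_ID?onSchemaError=<option>" \
  --header "Authorization: Bearer YOUR_API_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{ "someColumn": "new value" }'
```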
8 changes: 5 additions & 3 deletions api-reference/v2/tables/post-table-rows.mdx
@@ -3,9 +3,11 @@ title: Add Rows to Table
openapi: post /tables/{tableID}/rows
---

Add row data to an existing Big Table.
Add one or more rows to an existing Big Table.

Row data may be passed in JSON, CSV, or TSV format.
Row IDs for the added rows are returned in the response in the same order as the input rows in the request. Row data may be passed in JSON, CSV, or TSV format.

If a column is not included in the passed row data, it will be empty in the added row. If a column is passed that does not exist in the table schema, or with a value that does not match the column's type, the default behavior is for no rows to be added and the API call to [return an error](/api-reference/v2/general/errors#invalid-row-data). However, you can control this behavior with the `onSchemaError` query parameter.
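
A sketch of the ordering guarantee, with placeholder URL, IDs, token, and column name, and an abbreviated response:

```bash
# Hypothetical request adding two rows; the response returns the new row IDs
# in the same order, so the first ID belongs to Alice's row and the second to Bob's.
API_BASE="https://api.example.com"        # placeholder base URL
TABLE_ID="YOUR_TABLE_ID"                  # placeholder table ID

curl --request POST \
  --url "$API_BASE/tables/$TABLE_ID/rows" \
  --header "Authorization: Bearer YOUR_API_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{ "rows": [ { "name": "Alice" }, { "name": "Bob" } ] }'

# Abbreviated response: an array of row IDs such as
# ["zcJWnyI8Tbam21V34K8MNA", "..."], where index 0 is Alice's row and index 1 is Bob's.
```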

## Examples

@@ -31,7 +33,7 @@
</Accordion>

<Accordion title="Add Rows from Stash">
[Stashing](/api-reference/v2/stashing/introduction) is our process for handling the upload of large datasets. Break down your dataset into smaller, more manageable, pieces and [upload them to a single stash ID](/api-reference/v2/stashing/post-stashes-serial).
[Stashing](/api-reference/v2/stashing/introduction) is our process for handling the upload of large datasets. Break down your dataset into smaller, more manageable pieces and [upload them to a single stash ID](/api-reference/v2/stashing/put-stashes-serial).

Then, to add all the row data in a stash to the table in a single atomic operation, use the `$stashID` reference in the `rows` field instead of providing the data inline:

8 changes: 6 additions & 2 deletions api-reference/v2/tables/post-tables.mdx
@@ -5,7 +5,11 @@ openapi: post /tables

Create a new Big Table, define its structure, and (optionally) populate it with data.

When using a CSV or TSV request body, the name of the table must be passed as a query parameter and the schema of the table is inferred from the content. Alternatively, the CSV/TSV content may be [stashed](/api-reference/v2/stashing/introduction), and then the schema and name may be passed in the regular JSON payload.
Row IDs for any added rows are returned in the response in the same order as the input rows in the request. Row data may be passed in JSON, CSV, or TSV format.

When using a CSV or TSV request body, the name of the table must be passed as a query parameter and the schema of the table is always inferred from the content. Alternatively, the CSV/TSV content may be [stashed](/api-reference/v2/stashing/introduction), and then the schema and name may be passed in the regular JSON payload.

If a schema is passed in the payload, any passed row data must match that schema. If a column is not included in the passed row data, it will be empty in the added row. If a column is passed that does not exist in the schema, or with a value that does not match the column's type, the default behavior is for the table to not be created and the API call to [return an error](/api-reference/v2/general/errors#invalid-row-data). However, you can control this behavior with the `onSchemaError` query parameter.
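
As a rough sketch of creating a table from CSV content; the base URL, token, CSV content type, and the query-parameter name used for the table name are assumptions here, not taken from this spec:

```bash
# Hypothetical table creation from CSV: the table name travels as a query parameter
# (the parameter name below is a guess) and the schema is inferred from the content.
API_BASE="https://api.example.com"        # placeholder base URL

curl --request POST \
  --url "$API_BASE/tables?name=Invoices" \
  --header "Authorization: Bearer YOUR_API_TOKEN" \
  --header "Content-Type: text/csv" \
  --data-binary $'name,amount\nAlice,100\nBob,200'
```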

## Examples

@@ -30,7 +34,7 @@ When using a CSV or TSV request body, the name of the table must be passed as a
However, this is only appropriate for relatively small initial datasets (around a few hundred rows or less, depending on schema complexity). If you need to work with a larger dataset you should utilize stashing.
</Accordion>
<Accordion title="Create Table from Stash">
[Stashing](/api-reference/v2/stashing/introduction) is our process for handling the upload of large datasets. Break down your dataset into smaller, more manageable, pieces and [upload them to a single stash ID](/api-reference/v2/stashing/post-stashes-serial).
[Stashing](/api-reference/v2/stashing/introduction) is our process for handling the upload of large datasets. Break down your dataset into smaller, more manageable pieces and [upload them to a single stash ID](/api-reference/v2/stashing/put-stashes-serial).

Then, to create a table from a stash, you can use the `$stashID` reference in the `rows` field instead of providing the data inline:

8 changes: 6 additions & 2 deletions api-reference/v2/tables/put-tables.mdx
@@ -3,7 +3,11 @@ title: Overwrite Table
openapi: put /tables/{tableID}
---

Overwrite an existing Big Table by clearing all rows and adding new data.
Overwrite an existing Big Table by clearing all rows and (optionally) adding new data.

Row IDs for any added rows are returned in the response in the same order as the input rows in the request. Row data may be passed in JSON, CSV, or TSV format.

If a column is not included in the passed row data, it will be empty in the added row. If a column is passed that does not exist in the updated table schema, or with a value that does not match the column's type, the default behavior is for no action to be taken and the API call to [return an error](/api-reference/v2/general/errors#invalid-row-data). However, you can control this behavior with the `onSchemaError` query parameter.
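
A sketch of the overwrite, with placeholders throughout (base URL, table ID, token, column name); the JSON body here carries only rows, though the endpoint's JSON payload may accept other fields:

```bash
# Hypothetical overwrite: clears every existing row, then adds the two rows below.
# User-specific columns are cleared regardless (see the warning that follows).
API_BASE="https://api.example.com"        # placeholder base URL
TABLE_ID="YOUR_TABLE_ID"                  # placeholder table ID

curl --request PUT \
  --url "$API_BASE/tables/$TABLE_ID" \
  --header "Authorization: Bearer YOUR_API_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{ "rows": [ { "name": "Alice" }, { "name": "Bob" } ] }'
```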

<Warning>
There is currently no way to supply values for user-specific columns in the API. Those columns will be cleared when using this endpoint.
@@ -44,7 +48,7 @@ When using a CSV or TSV request body, you cannot pass a schema. If you need to u
</Accordion>

<Accordion title="Reset table data from Stash">
[Stashing](/api-reference/v2/stashing/introduction) is our process for handling the upload of large datasets. Break down your dataset into smaller, more manageable, pieces and [upload them to a single stash ID](/api-reference/v2/stashing/post-stashes-serial).
[Stashing](/api-reference/v2/stashing/introduction) is our process for handling the upload of large datasets. Break down your dataset into smaller, more manageable pieces and [upload them to a single stash ID](/api-reference/v2/stashing/put-stashes-serial).

Then, to reset a table's data from the stash, use the `$stashID` reference in the `rows` field instead of providing the data inline:

32 changes: 16 additions & 16 deletions openapi/swagger.json
@@ -151,12 +151,12 @@
"type": "array",
"items": {
"type": "string",
"description": "ID of the row, e.g., `2a1bad8b-cf7c-44437-b8c1-e3782df6`",
"example": "2a1bad8b-cf7c-44437-b8c1-e3782df6"
"description": "ID of the row, e.g., `zcJWnyI8Tbam21V34K8MNA`",
"example": "zcJWnyI8Tbam21V34K8MNA"
},
"description": "Row IDs of added rows, e.g., \n\n```json\n[\n\t\"2a1bad8b-cf7c-44437-b8c1-e3782df6\",\n\t\"93a19-cf7c-44437-b8c1-e9acbbb\"\n]\n```",
"description": "Row IDs of added rows, returned in the same order as the input rows, e.g., \n\n```json\n[\n\t\"zcJWnyI8Tbam21V34K8MNA\",\n\t\"93a19-cf7c-44437-b8c1-e9acbbb\"\n]\n```",
"example": [
"2a1bad8b-cf7c-44437-b8c1-e3782df6",
"zcJWnyI8Tbam21V34K8MNA",
"93a19-cf7c-44437-b8c1-e9acbbb"
]
}
@@ -517,12 +517,12 @@
"type": "array",
"items": {
"type": "string",
"description": "ID of the row, e.g., `2a1bad8b-cf7c-44437-b8c1-e3782df6`",
"example": "2a1bad8b-cf7c-44437-b8c1-e3782df6"
"description": "ID of the row, e.g., `zcJWnyI8Tbam21V34K8MNA`",
"example": "zcJWnyI8Tbam21V34K8MNA"
},
"description": "Row IDs of added rows, e.g., \n\n```json\n[\n\t\"2a1bad8b-cf7c-44437-b8c1-e3782df6\",\n\t\"93a19-cf7c-44437-b8c1-e9acbbb\"\n]\n```",
"description": "Row IDs of added rows, returned in the same order as the input rows, e.g., \n\n```json\n[\n\t\"zcJWnyI8Tbam21V34K8MNA\",\n\t\"93a19-cf7c-44437-b8c1-e9acbbb\"\n]\n```",
"example": [
"2a1bad8b-cf7c-44437-b8c1-e3782df6",
"zcJWnyI8Tbam21V34K8MNA",
"93a19-cf7c-44437-b8c1-e9acbbb"
]
}
@@ -1135,12 +1135,12 @@
"type": "array",
"items": {
"type": "string",
"description": "ID of the row, e.g., `2a1bad8b-cf7c-44437-b8c1-e3782df6`",
"example": "2a1bad8b-cf7c-44437-b8c1-e3782df6"
"description": "ID of the row, e.g., `zcJWnyI8Tbam21V34K8MNA`",
"example": "zcJWnyI8Tbam21V34K8MNA"
},
"description": "Row IDs of added rows, e.g., \n\n```json\n[\n\t\"2a1bad8b-cf7c-44437-b8c1-e3782df6\",\n\t\"93a19-cf7c-44437-b8c1-e9acbbb\"\n]\n```",
"description": "Row IDs of added rows, returned in the same order as the input rows, e.g., \n\n```json\n[\n\t\"zcJWnyI8Tbam21V34K8MNA\",\n\t\"93a19-cf7c-44437-b8c1-e9acbbb\"\n]\n```",
"example": [
"2a1bad8b-cf7c-44437-b8c1-e3782df6",
"zcJWnyI8Tbam21V34K8MNA",
"93a19-cf7c-44437-b8c1-e9acbbb"
]
}
@@ -1513,8 +1513,8 @@
"in": "path",
"schema": {
"type": "string",
"description": "ID of the row, e.g., `2a1bad8b-cf7c-44437-b8c1-e3782df6`",
"example": "2a1bad8b-cf7c-44437-b8c1-e3782df6"
"description": "ID of the row, e.g., `zcJWnyI8Tbam21V34K8MNA`",
"example": "zcJWnyI8Tbam21V34K8MNA"
},
"required": true
},
@@ -1653,8 +1653,8 @@
"in": "path",
"schema": {
"type": "string",
"description": "ID of the row, e.g., `2a1bad8b-cf7c-44437-b8c1-e3782df6`",
"example": "2a1bad8b-cf7c-44437-b8c1-e3782df6"
"description": "ID of the row, e.g., `zcJWnyI8Tbam21V34K8MNA`",
"example": "zcJWnyI8Tbam21V34K8MNA"
},
"required": true
}