This repository was archived by the owner on Jan 30, 2025. It is now read-only.
🤖 GPT Limitations
Aabol, Simen Tvete edited this page Dec 1, 2023
This wiki page lists all known limitations the project has encountered.
- When data from roughly 60 or more cells is graphed by the GPT in Python, the GPT spends too long generating the chart and ends up crashing.
  - This error is most likely due to a time limit OpenAI enforces on a single GPT response.
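A simple mitigation we can ask the GPT to apply is to downsample the data before plotting. The sketch below is a hypothetical example (the `downsample` helper and the ~60-cell threshold are our own observations, not documented OpenAI figures); the idea is just to cap the number of points the Python run has to draw so it finishes within the time limit.

```python
def downsample(values, max_points=50):
    """Return at most max_points evenly spaced samples from values.

    Reducing the number of cells before asking the GPT to graph them
    keeps the Python run short enough to finish within OpenAI's
    per-response time limit (the ~60-cell threshold is an observation
    from this project, not a documented limit).
    """
    if len(values) <= max_points:
        return values
    step = len(values) / max_points
    return [values[int(i * step)] for i in range(max_points)]

# A table column of 240 cells shrinks to 50 points before plotting.
column = list(range(240))
points = downsample(column)
print(len(points))  # → 50
```

The downsampled list can then be handed to the plotting code as usual; for line charts the visual difference is usually negligible.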
- When the GPT requests data from an API endpoint specified in its "Actions", it throws a 'ResponseTooLargeError' if the response data is too large.
  - The limit currently appears to sit at around 70 KB of data, and will likely be raised in later OpenAI GPT updates.
  - The current workaround is to instruct the GPT to adjust its query to reduce the response size. This works when querying for tables, but is not possible when retrieving metadata for a specific table that exceeds the limit, as metadata cannot be reduced in size.
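To make the "adjust the query" workaround concrete, here is a hypothetical sketch (the `trim_rows` helper and payload shape are invented for illustration, and the 70 KB ceiling is our observed figure, not a documented one). It halves the requested row set until the serialized response fits under the limit; real code would tighten the upstream query parameters rather than truncate a response that was already fetched.

```python
import json

MAX_BYTES = 70_000  # observed ceiling for GPT Actions responses (~70 KB)

def trim_rows(payload, max_bytes=MAX_BYTES):
    """Halve the row set until the JSON-encoded payload fits under
    max_bytes. Hypothetical sketch of the 'reduce the response size'
    workaround; metadata responses have no rows to trim, which is why
    the workaround fails for oversized metadata.
    """
    rows = list(payload["rows"])
    while rows and len(json.dumps({"rows": rows})) > max_bytes:
        rows = rows[: max(1, len(rows) // 2)]  # drop the back half
    return {"rows": rows, "truncated": len(rows) < len(payload["rows"])}

# ~210 KB of rows gets cut down until it fits under the limit.
big = {"rows": [{"id": i, "value": "x" * 50} for i in range(3000)]}
small = trim_rows(big)
```

When the result is truncated, the GPT can be instructed to issue follow-up queries for the remaining rows instead of requesting everything at once.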
- OpenAI's GPT model frequently changes its behaviour over time, making it challenging to create a consistent and stable GPT.
  - Because the tasks this specific GPT is given are extensive and complicated, its instructions grow larger, and as a consequence its behaviour becomes more inconsistent.