-
Hi all, I'm building a LangChain agent that helps law students ask questions about resolutions and other topics, but these SQL databases contain large texts, and sometimes the agent executor builds prompts that exceed the OpenAI max token limit, which produces this error.
So, is there any way to have LangChain create embeddings of the text to avoid this error during prompt construction? Or is there anything else you could recommend to avoid this? :3
-
Bumping this question, I've hit the same issue.
-
Oh wow. If the data MUST be in a SQL database, have your SQL chain just return the query result directly instead of sending it back through the LLM to interpret it. Then experiment with something like this on the result: https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_qa.html
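
A minimal sketch of what that could look like, not a drop-in solution: the database URI, table, and questions below are placeholders, and depending on your LangChain version `SQLDatabaseChain` may need to be imported from `langchain_experimental.sql` instead. It assumes an OpenAI API key is set in the environment.

```python
from langchain.llms import OpenAI
from langchain.chains import SQLDatabaseChain, RetrievalQA
from langchain.sql_database import SQLDatabase
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma

db = SQLDatabase.from_uri("sqlite:///resolutions.db")  # placeholder URI
llm = OpenAI(temperature=0)

# Step 1: return_direct=True makes the chain hand back the raw SQL
# result instead of feeding it through the LLM again, which is the
# step where large texts blow past the token limit.
sql_chain = SQLDatabaseChain.from_llm(llm, db, return_direct=True)
raw_result = sql_chain.run("Fetch the full text of resolution 123")  # placeholder question

# Step 2: split the large result into chunks, embed them, and answer
# the question over only the relevant chunks, as in the linked
# vector DB QA example.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
docs = splitter.create_documents([raw_result])
vectorstore = Chroma.from_documents(docs, OpenAIEmbeddings())

qa = RetrievalQA.from_chain_type(llm=llm, retriever=vectorstore.as_retriever())
print(qa.run("What does resolution 123 say about appeals?"))  # placeholder question
```

The point of the split is that only the handful of chunks most similar to the question get stuffed into the final prompt, so the prompt size stays bounded regardless of how large the SQL result is.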