If you're using OpenAI, you should probably be using its tool-calling capabilities: https://python.langchain.com/docs/modules/model_io/chat/function_calling/ or https://python.langchain.com/docs/modules/model_io/chat/structured_output/
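For example, a minimal sketch using `with_structured_output` (assuming a recent `langchain-openai` is installed and `OPENAI_API_KEY` is set; the `Joke` schema and model name are just illustrations):

```python
# Minimal sketch: structured output via OpenAI tool calling in LangChain.
# The Joke schema and model name are illustrative, not from this discussion.
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field

class Joke(BaseModel):
    """A joke plus an explanation of its punchline."""
    joke: str = Field(description="The joke text")
    punchline: str = Field(description="Why the joke is funny")

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
structured_llm = llm.with_structured_output(Joke)

result = structured_llm.invoke("Tell me a joke about ice cream")
# `result` is a Joke instance, so no manual JSON parsing or validation is needed.
print(result.joke, "--", result.punchline)
```

`with_structured_output` binds the schema as an OpenAI tool and parses the response back into it, so the "is the output valid JSON?" problem largely disappears.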
While working on an LCEL chain for a simple task, this question came to mind.
Imagine I have a straightforward LCEL chain containing 2 prompts and 2 output parsers that "force" the output to be JSON; a sketch of what I mean follows the example output below.
Example output I got from a simple test:
{'joke': "Why did the ice cream truck break down? It had too many 'scoops'!", 'punchline': "It's a pun on the word 'scoops', which can mean both the amount of ice cream served and a problem or mishap.", 'theme': 'Ice cream', 'type': 'Pun'}
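Here is a minimal sketch of the kind of chain I mean (prompt wording, keys, and model choice are illustrative only):

```python
# Sketch of the chain described above: two prompts, two JSON output parsers.
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0)

joke_prompt = ChatPromptTemplate.from_template(
    "Tell a joke about {topic}. Answer ONLY with a JSON object "
    "with keys 'joke', 'theme' and 'type'."
)
explain_prompt = ChatPromptTemplate.from_template(
    "Given this joke JSON: {joke_json}\n"
    "Return the same JSON with an added 'punchline' key explaining the joke. "
    "Answer ONLY with the JSON object."
)

chain = (
    joke_prompt
    | llm
    | JsonOutputParser()                     # first place JSON is "forced"
    | (lambda joke: {"joke_json": joke})     # route the dict into the second prompt
    | explain_prompt
    | llm
    | JsonOutputParser()                     # second place JSON is "forced"
)

result = chain.invoke({"topic": "ice cream"})  # `result` is a plain Python dict
```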
Now, I can see 3 main ways to ensure that the output is a JSON object that I can use in other parts of my code:

1. An output parser (e.g. JsonOutputParser), which can be encapsulated into a RetryOutputParser (or similar) to ensure that the output will be well-formatted JSON (a sketch follows this list);
2. A pure Python function that tests/validates the output;
3. An Evaluator that checks the result.

I know these 3 options are not equivalent, but I'd like to know which one is recommended by people working on production LLM apps. Is it better to use an output parser or a pure Python function to test the output? And if I'm using one of those 2, is it worth also using an Evaluator?
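For option 1, I'm picturing something like this (a sketch only; the retry wiring follows the pattern in the LangChain docs and details may vary by version):

```python
# Sketch of option 1: JsonOutputParser wrapped by RetryOutputParser.
# On a parse failure, the retry parser re-prompts the LLM with the original
# prompt plus the bad completion and tries to parse the new output.
from langchain.output_parsers import RetryOutputParser
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0)
retry_parser = RetryOutputParser.from_llm(parser=JsonOutputParser(), llm=llm)

prompt = PromptTemplate.from_template(
    "Tell a joke about {topic}. Answer ONLY with a JSON object."
)
prompt_value = prompt.format_prompt(topic="ice cream")

completion = llm.invoke(prompt_value).content
# If `completion` is not valid JSON, the LLM is asked to try again,
# using the original prompt for context.
parsed = retry_parser.parse_with_prompt(completion, prompt_value)
```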
This is more of an opinion-based discussion, but it would be nice to have some good explanations.
Note: I'm using JSON as a simple example throughout, but the same question applies to more complicated structured output.
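For completeness, the "pure Python function" in option 2 could be as simple as this (the required keys and error handling are hypothetical choices):

```python
# Sketch of option 2: a plain Python check on the chain's output.
import json

REQUIRED_KEYS = {"joke", "punchline", "theme", "type"}  # hypothetical schema

def validate_output(raw: str | dict) -> dict:
    """Parse the model output if it is still a string, then check the keys."""
    data = json.loads(raw) if isinstance(raw, str) else raw
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"Output is missing keys: {missing}")
    return data
```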