Inconsistent output_json from Agents Using Ollama #1246
Replies: 4 comments
-
What happens with a different model, such as gpt-4 mini, or a different Ollama one?
-
I haven't tried other models, but from looking online and at previously raised issues, it appears that there is no problem when using GPT models; the issues come from using other models. I was hoping someone had a different experience while using something other than GPT.
-
Try using a more powerful model, e.g. Llama 3.1 70B on Groq, to see if you get more consistent output. In my experience, agents powered by Ollama models need a lot of massaging and prompt engineering to be accurate and consistent.
-
I understand this model might not be as powerful; however, I would expect the JSON output to be handled by the CrewAI framework rather than depend on the model's power.
-
Description:
Hello, I'm trying to get consistent JSON output from agents by setting a Pydantic model with fields to fill in. Although the content of my output is correct, the JSON output is not working as expected. I've summarized my code and the "unwanted" results below:
Unwanted Output:
Steps Taken to Resolve:
Issue:
Does anyone have an idea why this is happening? Could it be specific to Llama3? What recommendations are there for resolving this?