InjectedToolArg works with OpenAI gpt-4o-mini but no longer with gpt-4o #29883
Unanswered
lorerave85 asked this question in Q&A
Replies: 1 comment
-
I'll add a detail: while debugging I found this behavior. In agent.py, the AgentExecutor class has a method _iter_next_step containing this step:

    # Call the LLM to see what to do.
    output = self._action_agent.plan(
        intermediate_steps,
        callbacks=run_manager.get_child() if run_manager else None,
        **inputs,
    )

With gpt-4o-mini the output is: … With gpt-4o the output is: …
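If the plan output produced by gpt-4o omits the injected argument from the tool call and nothing injects it before execution, tool execution fails pydantic validation with exactly the error shape reported in this question. A minimal sketch with plain pydantic, no LLM needed (ExecDeviceUserArgs and run_tool are hypothetical stand-ins for the real tool's argument schema):

```python
# Hypothetical stand-in for the exec_device_user tool's argument schema;
# it mirrors only the fields visible in the reported traceback.
from pydantic import BaseModel, ValidationError


class ExecDeviceUserArgs(BaseModel):
    device: str   # hypothetical ordinary argument
    user_id: str  # the parameter annotated with InjectedToolArg in the real tool


def run_tool(tool_call_args: dict) -> str:
    """Validate tool-call args the way a LangChain tool would, then 'call' the API."""
    args = ExecDeviceUserArgs(**tool_call_args)  # raises ValidationError if user_id is missing
    return f"called API for {args.user_id} on {args.device}"


# Injected case (the gpt-4o-mini behavior in the report): user_id is present.
print(run_tool({"device": "router-1", "user_id": "u42"}))

# Missing case (the gpt-4o symptom): pydantic reports a
# "1 validation error for ExecDeviceUserArgs" on the user_id field.
try:
    run_tool({"device": "router-1"})
except ValidationError as err:
    print(type(err).__name__)
```

This is only a reconstruction of the failure mode, not the questioner's actual code; the point is that the reported ValidationError means the tool was executed with user_id absent from its arguments.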
-
Example Code
Description
For a few weeks my agent has shown strange behavior; I have included a simplified version that reproduces it.
My agent executes an API call, passing a parameter that arrives as input on invoke.
If I use the OpenAI gpt-4o-mini model the parameter is read (and printed), while if I use the gpt-4o model (which has always worked) I now get a validation error:

    pydantic_core._pydantic_core.ValidationError: 1 validation error for exec_device_user
    user_id

as if the parameter were lost.
System Info
langchain==0.3.19
langchain-community==0.3.17
langchain-core==0.3.35
langchain-openai==0.3.6
Platform: macOS; Python 3.12.8