Replies: 1 comment
-
Ok, it turns out the issue was that I had a decorator on my call that was forcing the return value to None. This is not an issue with langchain, my bad.
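For anyone hitting the same symptom: the classic way a decorator "forces a return to None" is a wrapper that calls the wrapped function but never returns its result. A minimal sketch (the decorator and function names here are made up for illustration, not from the original code):

```python
import functools

def log_calls_buggy(func):
    """Decorator whose wrapper discards the wrapped function's result."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        func(*args, **kwargs)  # BUG: result discarded, wrapper returns None
    return wrapper

def log_calls_fixed(func):
    """Same decorator with the missing return added."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)  # propagate the wrapped result
    return wrapper

@log_calls_buggy
def ask_buggy(prompt):
    return f"answer to {prompt!r}"

@log_calls_fixed
def ask_fixed(prompt):
    return f"answer to {prompt!r}"

print(ask_buggy("hi"))  # None
print(ask_fixed("hi"))  # answer to 'hi'
```

Every call through the buggy decorator silently returns None, which looks exactly like the LLM client swallowing the response.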
-
Description
I'm currently building an agentic workflow that uses AzureChatOpenAI. One of my agents builds a prompt and hands it to another agent, but llm.invoke is returning None rather than raising an error. I'm getting a log message from the moderation system claiming I'm trying to jailbreak it, which I am not.
What I want is to get that log back as an error in an exception message that I can pass back to my prompt-building agent; as of now the invoke call just returns None.
Is there a way to get llm.invoke to raise this error rather than returning None?
System Info
langchain==0.3.10
langchain-community==0.3.10
langchain-core==0.3.49
langchain-experimental==0.3.3
langchain-openai==0.3.11
langchain-postgres==0.0.13
langchain-text-splitters==0.3.2
langchainhub==0.1.14
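While debugging, one workaround is to wrap the call site and turn a None result into an exception that can be handed back to the prompt-building agent. This is a sketch, not a langchain feature: invoke_or_raise and EmptyLLMResponseError are hypothetical names introduced here.

```python
class EmptyLLMResponseError(RuntimeError):
    """Raised when the model call yields no result (hypothetical helper)."""

def invoke_or_raise(llm, prompt):
    """Call llm.invoke and raise instead of silently returning None."""
    result = llm.invoke(prompt)
    if result is None:
        raise EmptyLLMResponseError(
            f"llm.invoke returned None for prompt {prompt!r}; "
            "check the logs for content-filter/moderation messages"
        )
    return result
```

The exception message can then carry whatever context the calling agent needs, instead of the None propagating until something else fails.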