Checked other resources
I added a very descriptive title to this question.
I searched the LangChain documentation with the integrated search.
I used the GitHub search to find a similar question and didn't find it.
Commit to Help
I commit to help with one of those options 👆
Example Code
from langchain_core.messages import (
    AIMessage,
    AIMessageChunk,
    BaseMessage,
    BaseMessageChunk,
    ChatMessage,
    ChatMessageChunk,
    FunctionMessage,
    FunctionMessageChunk,
    HumanMessage,
    HumanMessageChunk,
    InvalidToolCall,
    SystemMessage,
    SystemMessageChunk,
    ToolCall,
    ToolMessage,
    ToolMessageChunk,
)
# the previous tool_call_chunk generated an invalid_tool_calls entry, which is as expected:
# invalid_tool_calls=[{'name': 'get_current_weather', 'args': None, 'id': 'call_cvjtfa42c3m2t75bs2jg', 'error': None, 'type': 'invalid_tool_call'}]

# the new tool_call_chunk parsed from the llm response
tool_call_chunks = [{'name': None, 'args': '{"location": "北京"}', 'id': 'call_cvjtbns2c3m5i17c9fr0', 'type': 'function', 'index': 0}]

# try to yield the new tool_call_chunk and generate a full tool_call response
yield AIMessageChunk(
    content='',
    # invalid_tool_calls=invalid_tool_calls,
    tool_call_chunks=tool_call_chunks,
)
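The accumulation the snippet relies on can be sketched in plain Python. This is an assumption about how streamed tool-call chunks typically combine (fragments sharing the same index are folded into one call), and merge_tool_call_chunks is a hypothetical helper for illustration, not a langchain_core API:

```python
import json

def merge_tool_call_chunks(chunks):
    # hypothetical sketch: chunks that share the same "index" are folded
    # into one call; string args fragments concatenate, None fields fill in
    merged = {}
    for c in chunks:
        slot = merged.setdefault(c["index"], {"name": None, "id": None, "args": ""})
        slot["name"] = slot["name"] or c.get("name")
        slot["id"] = slot["id"] or c.get("id")
        slot["args"] += c.get("args") or ""
    calls = []
    for slot in merged.values():
        try:
            args = json.loads(slot["args"]) if slot["args"] else None
        except json.JSONDecodeError:
            args = None  # an incomplete fragment stays unparsed until more arrives
        calls.append({"name": slot["name"], "id": slot["id"], "args": args})
    return calls

chunks = [
    {"name": "get_current_weather", "args": None, "id": "call_1", "index": 0},
    {"name": None, "args": '{"location": "北京"}', "id": None, "index": 0},
]
print(merge_tool_call_chunks(chunks))
# both name and args survive because the two chunks share index 0
```

Under this model, the name-only first chunk and the args-only second chunk only produce a complete call if they are joined under the same index.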
Description
I'm trying to implement a customized chat model that supports tool calls under the LangChain framework. Batch mode works as expected, but streaming mode doesn't.
First step: I wrote a function to parse the LLM response; the first tool_call_chunk is shown below:
and then yield the AIMessageChunk. The chat_model responds as expected.
Second step: I yield the second tool_call_chunk:
and yield the AIMessageChunk. The tool_call info is generated, but the name parameter is missing from the result; I don't know what the problem is.
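The symptom described here (name missing from the final tool call) is consistent with the name-bearing first chunk and the args-bearing second chunk never being merged, so the args fragment is parsed on its own. A small illustrative sketch of that failure mode; classify is a hypothetical function, not langchain_core internals:

```python
import json

def classify(name, args_str):
    """Illustrative split: a call whose args string parses as JSON is a
    complete tool call; anything else (None args, partial JSON) is invalid."""
    try:
        return {"valid": True, "name": name, "args": json.loads(args_str)}
    except (TypeError, json.JSONDecodeError):
        return {"valid": False, "name": name, "args": args_str}

# first chunk alone: name present but args=None -> invalid_tool_call,
# matching what the post observed for the first chunk
print(classify("get_current_weather", None))

# first and second chunk merged: valid, with both name and args
print(classify("get_current_weather", '{"location": "北京"}'))

# second chunk parsed without ever merging with the first:
# valid JSON args, but name is None -- the reported symptom
print(classify(None, '{"location": "北京"}'))
```

If this hypothesis applies, checking that both yielded chunks carry the same index value (so the framework can pair them up) would be a first thing to verify.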
System Info
platform - win10 64bit
conda list | findstr "langchain"