Passing Tool Output back to the Deprecated OllamaFunctions #25546
Replies: 1 comment
Yes, there is a "non-hacky" way to pass back tool output to the deprecated `OllamaFunctions`. Here is the relevant part of its source that ensures a conversational response is returned:

```python
DEFAULT_RESPONSE_FUNCTION = {
"name": "__conversational_response",
"description": (
"Respond conversationally if no other tools should be called for a given query."
),
"parameters": {
"type": "object",
"properties": {
"response": {
"type": "string",
"description": "Conversational response to the user.",
},
},
"required": ["response"],
},
}
# In the _generate method
functions.append(DEFAULT_RESPONSE_FUNCTION)

# Later in the _generate method
if (
    called_tool is None
    or called_tool["name"] == DEFAULT_RESPONSE_FUNCTION["name"]
):
    if (
        "tool_input" in parsed_chat_result
        and "response" in parsed_chat_result["tool_input"]
    ):
        response = parsed_chat_result["tool_input"]["response"]
    elif "response" in parsed_chat_result:
        response = parsed_chat_result["response"]
    else:
        raise ValueError(
            f"Failed to parse a response from {self.model} output: "
            f"{chat_generation_content}"
        )
    return ChatResult(
        generations=[
            ChatGeneration(
                message=AIMessage(
                    content=response,
                )
            )
        ]
    )
```

This code ensures that if no specific tool is called, or if the `__conversational_response` function is the one selected, the model's answer is extracted and returned as a plain conversational `AIMessage`.
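As a minimal sketch of what this fallback looks like from the caller's side (the model name and prompt are illustrative assumptions; it presumes `langchain-experimental` is installed and a local Ollama server is running):

```python
from langchain_experimental.llms.ollama_functions import OllamaFunctions

# Illustrative assumption: any model pulled into a local Ollama
# instance works here, e.g. `ollama pull tinyllama`.
llm = OllamaFunctions(model="tinyllama", format="json")

# With no other tools bound and no tool matching the query, the model
# is steered to __conversational_response, and invoke() returns a
# plain AIMessage rather than a tool call.
result = llm.invoke("Hi! How are you today?")
print(result.content)
```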
Additionally, LangChain provides a structured way to handle tool outputs and pass them back to the model using `ToolMessage`:

```python
from langchain_core.messages import HumanMessage, ToolMessage

messages = [HumanMessage(query)]
ai_msg = llm_with_tools.invoke(messages)
messages.append(ai_msg)

for tool_call in ai_msg.tool_calls:
    selected_tool = {"add": add, "multiply": multiply}[tool_call["name"].lower()]
    tool_output = selected_tool.invoke(tool_call["args"])
    messages.append(ToolMessage(tool_output, tool_call_id=tool_call["id"]))
messages
```

This approach ensures that the tool outputs are properly passed back to the model, allowing it to generate a final conversational response [1][2].
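To make that loop runnable end to end, here is a hedged, self-contained sketch. The `add`/`multiply` tools, the model name, and the query are assumptions filled in for illustration; it is written against `ChatOllama` from `langchain-ollama`, where this documented flow works with a natively tool-calling model. Whether the deprecated `OllamaFunctions` accepts `ToolMessage` inputs the same way is exactly what this discussion asks.

```python
from langchain_core.messages import HumanMessage, ToolMessage
from langchain_core.tools import tool
from langchain_ollama import ChatOllama


# Assumed example tools, matching the names used in the loop above.
@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


# Assumes a local Ollama server with a tool-capable model pulled,
# e.g. `ollama pull llama3.1`.
llm = ChatOllama(model="llama3.1")
llm_with_tools = llm.bind_tools([add, multiply])

query = "What is 3 * 12? Also, what is 11 + 49?"
messages = [HumanMessage(query)]

# First pass: the model decides which tools to call.
ai_msg = llm_with_tools.invoke(messages)
messages.append(ai_msg)

# Run each requested tool and feed its output back as a ToolMessage.
for tool_call in ai_msg.tool_calls:
    selected_tool = {"add": add, "multiply": multiply}[tool_call["name"].lower()]
    tool_output = selected_tool.invoke(tool_call["args"])
    messages.append(ToolMessage(str(tool_output), tool_call_id=tool_call["id"]))

# Second pass: the model turns the tool results into a final answer.
final_response = llm_with_tools.invoke(messages)
print(final_response.content)
```

The second `invoke` over the full message history is what turns the `ToolMessage` contents into the final conversational reply.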
---
Checked other resources
Commit to Help
Example Code
Description
Using the deprecated `OllamaFunctions` is necessary for extremely small models like Gemma or tinyllama, since they do not support function calling natively.
Is there a "non-hacky" way (i.e., not forging a system message like "answer conversationally to this sequence") to pass tool output back to the deprecated `OllamaFunctions`?
System Info
System Information
Package Information
Optional packages not installed
Other Dependencies