ChatOllama does not invoke tools but ChatOpenAI does #29376
-
Hi there everyone! I have been building an invoice image reader for the last couple of days. The workflow works with ChatOpenAI, but the same agent backed by ChatOllama returns an empty message and never invokes a tool. These are the logs:

```
2025-01-23 10:41:33,131 - httpx - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-01-23 10:41:33,161 - flow.backend.process_images - INFO - Processing event: {'retrieval_facturas_agent': {'messages': [AIMessage(content='{"fila 1": {"CIF_CLIENTE": "my-client", "CLIENTE": "my-client-name", "FICHERO": "", "NUMERO_FACTURA": "", "FECHA_FACTURA": "22/06/2023", "PROVEEDOR": "", "BASE_IMPONIBLE": "", "CIF_PROVEEDOR": "", "IRPF": "", "IVA": "", "TOTAL_FACTURA": ""}}', additional_kwargs={}, response_metadata={})], 'sender': 'retrieval_invoice_agent', 'filename': 'my-file.png'}}
2025-01-23 10:41:33,800 - httpx - INFO - HTTP Request: POST http://127.0.0.1:11434/api/chat "HTTP/1.1 200 OK"
2025-01-23 10:41:33,811 - flow.backend.process_images - INFO - Processing event: {'textract_agent': {'messages': [AIMessage(content='', additional_kwargs={}, response_metadata={'model': 'llama3.1', 'created_at': '2025-01-23T09:41:33.7996878Z', 'done': True, 'done_reason': 'stop', 'total_duration': 623341000, 'load_duration': 78495500, 'prompt_eval_count': 311, 'prompt_eval_duration': 402000000, 'eval_count': 1, 'eval_duration': 130000000, 'message': {'role': 'assistant', 'content': '', 'images': None, 'tool_calls': None}}, name='textract_agent', id='run-476a1373-a2cd-40e0-8fba-dcbc43252ef6-0', usage_metadata={'input_tokens': 311, 'output_tokens': 1, 'total_tokens': 312})], 'sender': 'textract_agent', 'filename': 'my-file.png'}}
```

Note that my tool is Amazon Textract OCR, but it could be any tool. You can see in the second event that the Ollama-backed agent invokes no tool (`'tool_calls': None`) and returns empty `content`. This is how the agent is created:

```python
textract_agent = create_agent(
    llm=ChatOllama(model="llama3.1"),
    tools=[generate_textract],
    system_message="""
You are a Textract OCR agent. You read invoice documents and extract the relevant table information in a structured format using the TextractProcessor class.
You only make a tool call and extract the relevant tables in the same format in which they were extracted.
Do NOT add any other text.""",
)
```

With:

```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder


def create_agent(llm, tools, system_message: str):
    """Create an agent."""
    base_message = (
        "You are a helpful AI assistant, collaborating with other assistants."
        " If you are unable to fully answer, that's OK, another assistant with different tools"
        " will help where you left off. Execute what you can to make progress."
    )
    if tools:
        prompt = ChatPromptTemplate.from_messages([
            (
                "system",
                base_message +
                " You have access to the following tools: {tool_names}.\n{system_message}",
            ),
            MessagesPlaceholder(variable_name="messages"),
        ])
        prompt = prompt.partial(system_message=system_message)
        prompt = prompt.partial(tool_names=", ".join([tool.name for tool in tools]))
        return prompt | llm.bind_tools(tools)
    else:
        prompt = ChatPromptTemplate.from_messages([
            (
                "system",
                base_message + "\n{system_message}",
            ),
            MessagesPlaceholder(variable_name="messages"),
        ])
        prompt = prompt.partial(system_message=system_message)
        return prompt | llm
```

The router for the workflow has been modified to force the agent to use tools:

```python
from langgraph.graph import END

from ..models import AgentState


def router(state: AgentState) -> str:
    """Route to the appropriate next step in the workflow."""
    last_message = state["messages"][-1]
    sender = state["sender"]
    # Force the textract agent through the tools node
    if sender == "textract_agent":
        return "tools"
    # If there are tool calls, route to tools
    if hasattr(last_message, "tool_calls") and last_message.tool_calls:
        return "tools"
    # Safety check - if the supervisor has produced a final answer, end
    if sender == "supervisor_agent" and "FINAL ANSWER" in last_message.content:
        return END
    return "continue"
```

But I always get the same empty, tool-call-free response shown in the logs above.
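As an aside on the forced route: sending the flow to the tools node does not by itself make the model emit tool calls. A minimal stand-in for a router like the one above (plain Python objects instead of LangChain messages; `MockMessage` and the sender names are hypothetical, for illustration only) shows that an empty model reply is still routed to `"tools"`, where a tool-executing node would then find no `tool_calls` to run:

```python
from dataclasses import dataclass, field

END = "__end__"  # stand-in for langgraph.graph.END


@dataclass
class MockMessage:
    """Hypothetical stand-in for an AIMessage."""
    content: str = ""
    tool_calls: list = field(default_factory=list)


def router(state: dict) -> str:
    """Same routing logic as above, on plain dicts/objects."""
    last_message = state["messages"][-1]
    sender = state["sender"]
    if sender == "textract_agent":  # forced route
        return "tools"
    if getattr(last_message, "tool_calls", None):
        return "tools"
    if sender == "supervisor_agent" and "FINAL ANSWER" in last_message.content:
        return END
    return "continue"


# An empty, tool-call-free message (like the Ollama log above) is still
# forced into the tools node, which then has nothing to execute.
empty = MockMessage(content="", tool_calls=[])
print(router({"messages": [empty], "sender": "textract_agent"}))  # tools
print(router({"messages": [empty], "sender": "other_agent"}))     # continue
```

In other words, the router can only act on tool calls the model already produced; the forced branch masks, rather than fixes, the fact that `llama3.1` is returning an empty message.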
-
After seeing some posts, I have also tried with …
This discussion would probably get more support over on langchain, since it seems unrelated to langgraph; going to transfer.