docs/how_to/tools_few_shot/ #27360
Replies: 5 comments
-
It would be great if this page included an example of using few-shot prompting with structured output, both with and without messages.
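A minimal sketch of what such an example might look like, assuming ChatOpenAI and a made-up Person schema (neither is taken from the docs page). The "with messages" variant places example human/ai turns in the prompt before the real input; the "without messages" variant would instead describe the same worked examples inline in the system prompt text.

from pydantic import BaseModel, Field
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI


class Person(BaseModel):
    """Information about a person."""
    name: str = Field(description="The person's name")
    age: int = Field(description="The person's age")


llm = ChatOpenAI(model="gpt-4o-mini")
structured_llm = llm.with_structured_output(Person)

# Few-shot "with messages": example human/ai turns placed before the real input.
# Double braces keep the example JSON out of the template's variable substitution.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "Extract the person described in the text."),
        ("human", "Alice turned thirty last week."),
        ("ai", '{{"name": "Alice", "age": 30}}'),
        ("human", "{text}"),
    ]
)

chain = prompt | structured_llm
chain.invoke({"text": "Bob is a 25 year old engineer."})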
-
From the above tutorial, what I understand is that the "chain" in chain = {"query": RunnablePassthrough()} | few_shot_prompt | llm_with_tools is like a custom-prompted LLM with tool-calling capabilities. Can anyone confirm whether my understanding is correct? If it is, then I can plug this into LangGraph as well.
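A minimal sketch of that reading, assuming the tutorial's tools, few_shot_prompt and llm are already defined: the chain maps a plain query string to an AIMessage that may carry tool_calls, but it does not execute the tools itself, so plugging it into LangGraph would still need a tool-executing node or agent loop around it.

from langchain_core.runnables import RunnablePassthrough

# few_shot_prompt, llm and tools are assumed to be defined as in the tutorial.
llm_with_tools = llm.bind_tools(tools)
chain = {"query": RunnablePassthrough()} | few_shot_prompt | llm_with_tools

# The chain behaves like a custom-prompted, tool-calling model call: it returns
# an AIMessage whose tool_calls still have to be executed by something else
# (e.g. a LangGraph ToolNode or an agent loop).
ai_msg = chain.invoke("Whats 119 times 8 minus 20")
print(ai_msg.tool_calls)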
-
Love it. Thanks, and keep it comin'.
-
At the end of this page, why didn't it call the add tool? It didn't give the final result, right? I didn't get this.
-
This is the remaining code for a complete answer:

from langgraph.prebuilt import create_react_agent

# Disable parallel tool calls so the agent resolves the expression one step at a time:
# https://langchain-ai.github.io/langgraph/agents/tools/#disable-parallel-tool-calling
agent = create_react_agent(
    llm.bind_tools(tools, parallel_tool_calls=False), tools=tools, prompt=system
)

# The few-shot example messages are prepended to the real question.
agent.invoke(
    {
        "messages": [
            *examples,
            {"role": "user", "content": "Whats 119 times 8 minus 20"},
        ]
    }
)

And the messages are below:

{'messages':
[HumanMessage(content="What's the product of 317253 and 128472 plus four", additional_kwargs={}, response_metadata={}, name='example_user', id='693be3e6-dea5-4b82-84bf-a69d3b54fc9f'),
AIMessage(content='', additional_kwargs={}, response_metadata={}, name='example_assistant', id='216d1990-8fe9-4a91-8684-3696b3f6831e', tool_calls=[{'name': 'Multiply', 'args': {'x': 317253, 'y': 128472}, 'id': '1', 'type': 'tool_call'}]),
ToolMessage(content='16505054784', id='228a4721-4d37-431d-a610-259f9070b50a', tool_call_id='1'),
AIMessage(content='', additional_kwargs={}, response_metadata={}, name='example_assistant', id='3adc5198-0532-4029-95c0-b7018e713c03', tool_calls=[{'name': 'Add', 'args': {'x': 16505054784, 'y': 4}, 'id': '2', 'type': 'tool_call'}]),
ToolMessage(content='16505054788', id='ed5efd90-c3e1-46aa-b144-3af8e9a3cfb3', tool_call_id='2'),
AIMessage(content='The product of 317253 and 128472 plus four is 16505054788', additional_kwargs={}, response_metadata={}, name='example_assistant', id='a75d3029-672e-416d-9dc9-b5d557c5655c'),
HumanMessage(content='Whats 119 times 8 minus 20', additional_kwargs={}, response_metadata={}, id='4bd3f287-c01e-4662-981e-12678025b453'),
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_lWUr723dB2fqrEHDco86emfi', 'function': {'arguments': '{"a":119,"b":8}', 'name': 'multiply'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 224, 'total_tokens': 241, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_34a54ae93c', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-f466247a-38e8-4daf-aab7-b622822fcab7-0', tool_calls=[{'name': 'multiply', 'args': {'a': 119, 'b': 8}, 'id': 'call_lWUr723dB2fqrEHDco86emfi', 'type': 'tool_call'}], usage_metadata={'input_tokens': 224, 'output_tokens': 17, 'total_tokens': 241, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}),
ToolMessage(content='952', name='multiply', id='0bf893e5-7a58-4eb0-a3eb-1b83290033d3', tool_call_id='call_lWUr723dB2fqrEHDco86emfi'),
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_KQr4l86vQOJev3HtnWa2G8q5', 'function': {'arguments': '{"a":952,"b":-20}', 'name': 'add'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 249, 'total_tokens': 267, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_34a54ae93c', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-1e95ed3a-5c97-482d-9c34-70856e5ae687-0', tool_calls=[{'name': 'add', 'args': {'a': 952, 'b': -20}, 'id': 'call_KQr4l86vQOJev3HtnWa2G8q5', 'type': 'tool_call'}], usage_metadata={'input_tokens': 249, 'output_tokens': 18, 'total_tokens': 267, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}),
ToolMessage(content='932', name='add', id='a6f9f82e-cf9a-486c-952e-eed62ff1c98a', tool_call_id='call_KQr4l86vQOJev3HtnWa2G8q5'),
AIMessage(content='119 times 8 minus 20 is 932.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 12, 'prompt_tokens': 275, 'total_tokens': 287, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_34a54ae93c', 'finish_reason': 'stop', 'logprobs': None}, id='run-768bcf7a-f38c-4320-915d-02adf9622b9a-0', usage_metadata={'input_tokens': 275, 'output_tokens': 12, 'total_tokens': 287, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})]}
-
docs/how_to/tools_few_shot/
For more complex tool use, it's very useful to add few-shot examples to the prompt. We can do this by adding AIMessages with ToolCalls and corresponding ToolMessages to our prompt.
https://python.langchain.com/docs/how_to/tools_few_shot/
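A condensed sketch of that pattern, assuming a Multiply tool is bound to the model; the names, ids, and the way the examples get into the prompt are illustrative, not copied from the page.

from langchain_core.messages import AIMessage, HumanMessage, ToolMessage

# One worked tool-call turn used as a few-shot example.
examples = [
    HumanMessage("What's 119 times 8?", name="example_user"),
    AIMessage(
        "",
        name="example_assistant",
        tool_calls=[
            {"name": "Multiply", "args": {"x": 119, "y": 8}, "id": "1", "type": "tool_call"}
        ],
    ),
    ToolMessage("952", tool_call_id="1"),
    AIMessage("119 times 8 is 952.", name="example_assistant"),
]

# These examples are then placed ahead of the real question, for instance via a
# MessagesPlaceholder("examples") in the prompt, so the model sees worked
# tool-call turns before answering.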