In RAG tutorial part 2, how does graph_builder decide whether to call a tool or not? #30243
-
Example Code

```python
from langgraph.graph import END
from langgraph.prebuilt import ToolNode, tools_condition

graph_builder.add_node(query_or_respond)
graph_builder.add_node(tools)
graph_builder.add_node(generate)
graph_builder.set_entry_point("query_or_respond")
graph_builder.add_conditional_edges(
    "query_or_respond",
    tools_condition,
    {END: END, "tools": "tools"},
)
graph_builder.add_edge("tools", "generate")
graph_builder.add_edge("generate", END)
graph = graph_builder.compile()
```

Description

In this tutorial: https://python.langchain.com/docs/tutorials/qa_chat_history/, I don't see any lines of code implementing the logic that decides whether to run the tool (retrieving documents relevant to a query) or not. So how does this RAG app decide whether to run the tool, and based on what logic? I am a newbie to LLMs, so please explain it to me. The lines above are example code from this tutorial.

System Info

Windows 11, langchain 0.3.44
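The routing in the conditional edge above is done by `tools_condition`. As a rough sketch of the decision it makes (plain Python, no LangGraph install required; the function and state shapes here are illustrative stand-ins, not the library's actual implementation):

```python
# Minimal sketch of the decision `tools_condition` performs: look at the
# last message the LLM produced and route to the "tools" node only if that
# message requested a tool call; otherwise end the graph run.

END = "__end__"  # stand-in for langgraph.graph.END

def tools_condition_sketch(state: dict) -> str:
    """Route to 'tools' if the last AI message contains tool calls, else END."""
    last_message = state["messages"][-1]
    # An AI message that wants a tool carries a non-empty `tool_calls` list.
    if last_message.get("tool_calls"):
        return "tools"
    return END

# The LLM answered directly -> no retrieval needed.
chat_state = {"messages": [{"role": "ai", "content": "Hi!", "tool_calls": []}]}
print(tools_condition_sketch(chat_state))  # -> __end__

# The LLM asked to call the `retrieve` tool -> run the tools node.
rag_state = {"messages": [{"role": "ai", "content": "",
                           "tool_calls": [{"name": "retrieve",
                                           "args": {"query": "what is RAG?"},
                                           "id": "call_1"}]}]}
print(tools_condition_sketch(rag_state))  # -> tools
```

In other words, the decision is made by the LLM itself when it generates the message; the graph merely inspects that message and routes accordingly.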
-
hello, can anyone help me?
-
Let me help you🐣...

Starting with the query_or_respond node, which includes the function `query_or_respond`: you provide a list of tools to the llm by binding them to the llm.

And how does the llm decide whether to use a particular tool or not❓

Here, we need to add a docstring, which acts as a tool prompt for the llm. This prompt/docstring is essential because it tells the llm what the tool does. By understanding these tools and their usages, the llm makes its decisions.

As for the workflow: if the query_or_respond node doesn't generate an AIMessage containing a tool_call, the agent jumps to the END node; otherwise, it continues to the next node, i.e., tools. NOTE 🚨: the prebuilt function `tools_condition` performs exactly this check, routing on whether the last message contains tool calls. Next, from the tools node, it proceeds to the generate node, and then to END 🛌.

I hope this helps clarify your doubts 🫠 about the llm functionality and how langgraph works. Since you're a beginner, I would highly recommend not starting by creating agents with RAG integration, because there are many terms you may not yet understand. Instead, begin by creating simpler langgraph agents without RAG that only use tool-calling functionality. The langgraph documentation contains many examples of building agents. Additionally, if you struggle with langgraph agents and have never created any agents through langchain (simpler agents), I recommend checking that out first.

bye,
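To make the docstring point concrete, here is a rough sketch of how a tool's docstring ends up as the description the LLM sees. LangChain's `@tool` decorator and `bind_tools` do something along these lines internally; this version is built by hand with plain Python, and the function names and schema shape are illustrative assumptions, not the library's API:

```python
# Sketch: turn a function plus its docstring into a tool schema. The
# "description" field (taken from the docstring) is what the LLM reads
# when deciding whether this tool fits the user's query.

import inspect

def retrieve(query: str) -> str:
    """Retrieve documents relevant to a user query."""
    return f"docs for: {query}"

def tool_schema(fn) -> dict:
    """Build a simplified tool schema from a function and its docstring."""
    params = list(inspect.signature(fn).parameters)
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),  # <- the docstring is the tool prompt
        "parameters": {p: "string" for p in params},
    }

schema = tool_schema(retrieve)
print(schema["name"])         # -> retrieve
print(schema["description"])  # -> Retrieve documents relevant to a user query.
```

A vague or missing docstring gives the model nothing to reason about, which is why the tool prompt matters so much for the call/no-call decision.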
Let me help you🐣...

Starting with the query_or_respond node that includes the function `query_or_respond`. This function generates an AIMessage, which is created by your llm and contains a tool-call dict holding the tool_name, id, and args. The tool name and ID indicate the specific tool the model intends to use. You provide a list of such tools to the llm by binding them to the llm.

And how does the llm decide whether to use a particular tool or not❓

Well 🎈, if you've observed the code while defining the `retrieve` tool, that is: