Replies: 2 comments 1 reply
-
Hey @smunx! I'm here to help you with any bugs, questions, or contributions you might have. Let's tackle this together!

Yes, you can call custom tools with the OpenAI Assistant using LangChain. The function call to the tool is not executed automatically: to actually call the tool, you need to extract the arguments from the tool call and then pass them to the tool. Here is the correct way to do it:

```python
chain = llm_with_tools | (lambda x: x.tool_calls[0]["args"]) | multiply
chain.invoke("What's four times 23")
```

This code extracts the arguments from the first tool call and passes them to the `multiply` tool (it assumes `llm_with_tools` and `multiply` are already defined).

Additionally, you can use `OpenAIAssistantRunnable` together with `AgentExecutor`, which handles the tool-execution loop for you:

```python
from langchain.agents import AgentExecutor
from langchain.agents.openai_assistant import OpenAIAssistantRunnable

# Define your custom tools
tools = [multiply]

# Create an assistant with custom tools
agent = OpenAIAssistantRunnable.create_assistant(
    name="custom assistant",
    instructions="You are a personal assistant. Use the provided tools to answer questions.",
    tools=tools,
    model="gpt-4-1106-preview",
    as_agent=True,
)

# Use AgentExecutor to handle the execution loop
agent_executor = AgentExecutor(agent=agent, tools=tools)
response = agent_executor.invoke({"content": "What's four times 23"})
print(response)
```

This setup ensures that the assistant can correctly call and execute the custom tools.

For a more detailed approach, you can follow the guide on how to add ad-hoc tool-calling capability to LLMs and chat models using LangChain. This involves creating a prompt that specifies the tools the model has access to, the arguments to those tools, and the desired output format. Here is an example of how to set it up:

```python
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.tools import render_text_description, tool

# Define your custom tools
@tool
def multiply(x: float, y: float) -> float:
    """Multiply two numbers together."""
    return x * y

tools = [multiply]

# Create a prompt that tells the model which tools exist and how to respond
rendered_tools = render_text_description(tools)
system_prompt = f"""\
You are an assistant that has access to the following set of tools.
Here are the names and descriptions for each tool:

{rendered_tools}

Given the user input, return the name and input of the tool to use.
Return your response as a JSON blob with 'name' and 'arguments' keys.
The `arguments` should be a dictionary, with keys corresponding
to the argument names and the values corresponding to the requested values.
"""
prompt = ChatPromptTemplate.from_messages(
    [("system", system_prompt), ("user", "{input}")]
)

# Helper from the ad-hoc tool-calling guide: look up the requested tool
# by name and invoke it with the parsed arguments
def invoke_tool(tool_call_request, config=None):
    tool_name_to_tool = {t.name: t for t in tools}
    requested_tool = tool_name_to_tool[tool_call_request["name"]]
    return requested_tool.invoke(tool_call_request["arguments"], config=config)

# Create a chain (`model` is any chat model, e.g. an instance of ChatOpenAI)
chain = prompt | model | JsonOutputParser() | RunnablePassthrough.assign(output=invoke_tool)
response = chain.invoke({"input": "what's four times 23"})
print(response)
```

This approach ensures that the model can request the invocation of the tool and that the tool is correctly executed with the provided arguments [1][2][3][4][5].
-
Can we create a ReAct agent with `OpenAIAssistantRunnable` to recursively call tools?
-
I want to do the following with the OpenAI Assistant, but the function call is not happening. Is this possible?