Example Code
https://python.langchain.com/v0.2/docs/integrations/retrievers/tavily/
The code I am using...
import datetime
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableConfig, chain
today = datetime.datetime.today().strftime("%D")
prompt = ChatPromptTemplate(
    [
        ("system", f"You are a helpful assistant. The date today is {today}."),
        ("human", "{user_input}"),
        ("placeholder", "{messages}"),
    ]
)
# llm and tool are assumed to be defined earlier (a chat model and the Tavily search tool).
# Passing tool_choice to bind_tools would force the model to call this tool.
llm_with_tools = llm.bind_tools([tool])
llm_chain = prompt | llm_with_tools
@chain
def tool_chain(user_input: str, config: RunnableConfig):
    input_ = {"user_input": user_input}
    ai_msg = llm_chain.invoke(input_, config=config)
    tool_msgs = tool.batch(ai_msg.tool_calls, config=config)
    return llm_chain.invoke({**input_, "messages": [ai_msg, *tool_msgs]}, config=config)
response = tool_chain.invoke("""Give me one latest news on Apple Stock""")

Description
I need to send 25+ separate messages to Tavily and collect a response for each one. Is it possible to send these messages sequentially? While the examples provided are simplified, real-life scenarios are more complex. Could you provide a sample code to illustrate how this can be done?

System Info
Name: langchain
To send multiple messages sequentially and collect responses for each one, you can iterate over your list of messages and invoke the tool_chain for each one, collecting the results as you go:
import datetime
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableConfig, chain
# Define the current date
today = datetime.datetime.today().strftime("%D")
# Define the prompt template
prompt = ChatPromptTemplate(
    [
        ("system", f"You are a helpful assistant. The date today is {today}."),
        ("human", "{user_input}"),
        ("placeholder", "{messages}"),
    ]
)
# Bind tools to the language model
llm_with_tools = llm.bind_tools([tool])
# Create the LLM chain
llm_chain = prompt | llm_with_tools
# Define the tool chain
@chain
def tool_chain(user_input: str, config: RunnableConfig):
    input_ = {"user_input": user_input}
    ai_msg = llm_chain.invoke(input_, config=config)
    tool_msgs = tool.batch(ai_msg.tool_calls, config=config)
    return llm_chain.invoke({**input_, "messages": [ai_msg, *tool_msgs]}, config=config)
# List of messages to send
messages = [
    "Give me one latest news on Apple Stock",
    "What is the weather like in New York?",
    "Tell me a joke",
    # Add more messages as needed
]
# Configuration for the Runnable
config = RunnableConfig(
    tags=["example"],
    metadata={"source": "user_request"},
    callbacks=None,
    run_name="sequential_message_run",
    max_concurrency=1,
    recursion_limit=25,
    configurable={},
    run_id=None
)
# Collect responses for each message
responses = []
for message in messages:
    response = tool_chain.invoke(message, config=config)
    responses.append(response)
# Print the collected responses
for i, response in enumerate(responses):
    print(f"Response {i+1}: {response}")

In this example:
- messages holds the inputs you want to send; you can extend it to 25 or more entries.
- max_concurrency=1 in the RunnableConfig keeps the calls sequential.
- tool_chain.invoke is called once per message, and the responses are collected in order.
This approach ensures that each message is processed sequentially, and you can handle more complex scenarios by adjusting the messages list and the RunnableConfig as needed.
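If you prefer not to write the loop yourself, a sketch of the same idea uses the runnable's built-in batch method, which accepts a list of inputs; because max_concurrency=1 is set in the config above, the calls are still made one at a time (this reuses the tool_chain, messages, and config already defined):
# batch() runs the chain over every message; max_concurrency=1 keeps the calls sequential.
responses = tool_chain.batch(messages, config=config)
for i, response in enumerate(responses):
    print(f"Response {i+1}: {response}")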
To integrate the TavilySearchResults tool into the previously suggested code for sending multiple messages sequentially and collecting responses, follow these steps:
1. Install the required packages:
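A typical install command for this setup; treat the exact package list and versions as an assumption that depends on your environment:
pip install -U langchain-community tavily-python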
2. Set up your Tavily API key:
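One common way is to export it as an environment variable; the Tavily integration reads TAVILY_API_KEY:
import getpass
import os

# Prompt for the key only if it is not already set in the environment.
if "TAVILY_API_KEY" not in os.environ:
    os.environ["TAVILY_API_KEY"] = getpass.getpass("Tavily API key: ")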
3. Instantiate the TavilySearchResults tool:
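A minimal sketch of the instantiation, which provides the tool object referenced in the code above; max_results=1 is an illustrative choice matching the "one latest news" request:
from langchain_community.tools.tavily_search import TavilySearchResults

# The Tavily search tool; increase max_results if you want more than one hit per query.
tool = TavilySearchResults(max_results=1)

With tool (and your chat model llm) defined this way, the sequential-message code above should work unchanged.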