Yes, you can use ChatOllama or other Ollama models in a React agent. To resolve the output parsing error, you can pass a custom error handler through the `handle_parsing_errors` parameter of `AgentExecutor`:

```python
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_community.chat_models import ChatOllama
import os

os.environ["TAVILY_API_KEY"] = "<key>"

tools = [TavilySearchResults(max_results=1)]
prompt = hub.pull("hwchase17/react")
llm = ChatOllama(model="llava", temperature=0)
agent = create_react_agent(llm, tools, prompt)

# Define a custom error handler function
def custom_error_handler(exception):
    return f"Error occurred: {str(exception)}"

# Create an agent executor with the custom error handler
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    handle_parsing_errors=custom_error_handler,
)

agent_executor.invoke({"input": "what is LangChain?"})
```

In this example, the `custom_error_handler` function is passed to `handle_parsing_errors`, so when the LLM output cannot be parsed, the error message is sent back to the agent as an observation instead of raising a `ValueError`.
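Note that `handle_parsing_errors` also accepts `True` or a fixed string instead of a callable. A minimal sketch, reusing the `agent` and `tools` defined above; the message text here is just an example:

```python
# Reuses `agent` and `tools` from the snippet above.
# With a string value, that exact message is sent back to the agent as the
# observation whenever its output cannot be parsed, prompting it to retry.
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    handle_parsing_errors="Check your output and make sure it conforms to the ReAct format.",
)
```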
Hello, adding `handle_parsing_errors=True` fixes this; it worked for me:

```python
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, handle_parsing_errors=True)
```

Please try it and let me know.
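For a quick end-to-end check (a sketch that assumes the `agent_executor` built above with `handle_parsing_errors` enabled), parsing failures should now be retried instead of raising `ValueError`:

```python
# Assumes `agent_executor` from the setup in the first reply,
# constructed with handle_parsing_errors enabled.
result = agent_executor.invoke({"input": "what is LangChain?"})
print(result["output"])
```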
Description
Is there a way to use ChatOllama or other Ollama models in a ReAct agent?
Error Message:
```
python3.9/site-packages/langchain/agents/agent.py", line 1178, in _iter_next_step
    raise ValueError(
ValueError: An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass `handle_parsing_errors=True` to the AgentExecutor. This is the error: Could not parse LLM output: The search results indicate that LangChain is a platform for building applications using large language models (LLMs). It provides tools and resources for developers to create custom chains, evaluate LLMs, and build context-aware reasoning applications. The platform also offers a set of powerful building blocks for global corporations, startups, and tinkerers to build with LangChain.
```
System Info
```
langchain==0.2.1
langchain-community==0.2.1
langchain-core==0.2.3
langchain-experimental==0.0.59
langchain-openai==0.1.8
langchain-text-splitters==0.2.0
langchainhub==0.1.17
python==3.9.19
```