Ollama Functions is one way to bind tools; the alternative is ReAct. Here's an example of a ReAct agent running locally with Ollama (llama3):

```python
from langchain_core.tools import tool
from langchain.pydantic_v1 import BaseModel, Field

# ============================================================
# Define a custom but dummy tool
# ============================================================
class SearchInput(BaseModel):
    location: str = Field(description="location to search for")

@tool(args_schema=SearchInput)
def weather_forecast(location: str):
    """Weather forecast tool."""
    print(f"Weather for {location}")
    return f"A dummy forecast for {location}"

# ============================================================
# Define the agent
# ============================================================
from langchain_community.chat_models import ChatOllama
from langchain_core.tools import render_text_description
from langchain.agents import AgentExecutor, create_react_agent
from langchain import hub

llm = ChatOllama(model="llama3")
tools = [weather_forecast]

# a very special prompt embodying the essence of ReAct
prompt = hub.pull("hwchase17/react-json")
prompt = prompt.partial(
    tools=render_text_description(tools),
    tool_names=", ".join([t.name for t in tools]),
)
agent = create_react_agent(llm, tools, prompt)

# ============================================================
# Test the agent
# note:
# - handle_parsing_errors=True feeds parsing errors back to the
#   agent, which leads to it trying again
# - if you set verbose=True you can see the agent's internal
#   reflection flow
# ============================================================
agent_executor = AgentExecutor(
    agent=agent, tools=tools, handle_parsing_errors=True, verbose=False
)
print(agent_executor.invoke({"input": "What is the weather in Paris?"}))
```
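For comparison, here's a minimal sketch of the Ollama Functions route mentioned above, assuming `langchain_experimental` is installed; the function schema mirrors the dummy tool and is purely illustrative:

```python
from langchain_experimental.llms.ollama_functions import OllamaFunctions

# Bind an OpenAI-style function schema to a local Ollama model.
llm = OllamaFunctions(model="llama3")
llm = llm.bind(
    functions=[
        {
            # illustrative schema matching the dummy tool above
            "name": "weather_forecast",
            "description": "Weather forecast tool.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "location to search for",
                    },
                },
                "required": ["location"],
            },
        }
    ],
    function_call={"name": "weather_forecast"},
)

# The model replies with a function call (name + JSON arguments)
# instead of free-form text; executing the tool is up to you.
print(llm.invoke("What is the weather in Paris?"))
```

Unlike ReAct, which steers the model through a prompt and parses its text output, this approach has the model emit a structured function call directly, so there's no reflection loop unless you build one.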
So are LangChain, CrewAI, and LlamaIndex (formerly GPT Index) the only ways to build an agent?
I am using macOS and have installed Ollama locally. My objective is to develop an agent using LangChain that can take actions based on LLM conversation output and execute various scripts or one-off shell commands.
I asked https://chat.langchain.com about this, and it responded with the following:
https://python.langchain.com/docs/integrations/chat/ollama_functions
Is this the best way to develop a custom agent without using a managed LLM service, or is there another mechanism I should be considering?
Under the Agent Types documentation, I don't see any references to Ollama. Perhaps that's because Ollama Functions is still experimental and hasn't been incorporated into the release documentation?
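To make the objective concrete, this is the kind of tool I'd want the agent to call — a dummy sketch using LangChain's `@tool` decorator and `subprocess`; `run_shell` is an illustrative name, not an existing helper:

```python
import subprocess
from langchain_core.tools import tool

@tool
def run_shell(command: str) -> str:
    """Run a one-off shell command and return its output."""
    # NOTE: executing model-generated shell commands is dangerous;
    # a real implementation should sandbox or whitelist commands.
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=30
    )
    return result.stdout or result.stderr
```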