
Commit 99d0ad0

chore: move to correct readme (#198)

1 parent 87e21ed · commit 99d0ad0

File tree

2 files changed: +178 −175 lines


packages/toolbox-langchain/README.md

Lines changed: 87 additions & 69 deletions
````diff
@@ -1,8 +1,8 @@
 ![MCP Toolbox Logo](https://raw.githubusercontent.com/googleapis/genai-toolbox/main/logo.png)
-# MCP Toolbox LlamaIndex SDK
+# MCP Toolbox LangChain SDK

 This SDK allows you to seamlessly integrate the functionalities of
-[Toolbox](https://github.com/googleapis/genai-toolbox) into your LlamaIndex LLM
+[Toolbox](https://github.com/googleapis/genai-toolbox) into your LangChain LLM
 applications, enabling advanced orchestration and interaction with GenAI models.

 <!-- TOC ignore:true -->
````
````diff
@@ -15,7 +15,10 @@ applications, enabling advanced orchestration and interaction with GenAI models.
 - [Loading Tools](#loading-tools)
   - [Load a toolset](#load-a-toolset)
   - [Load a single tool](#load-a-single-tool)
-- [Use with LlamaIndex](#use-with-llamaindex)
+- [Use with LangChain](#use-with-langchain)
+- [Use with LangGraph](#use-with-langgraph)
+  - [Represent Tools as Nodes](#represent-tools-as-nodes)
+  - [Connect Tools with LLM](#connect-tools-with-llm)
 - [Manual usage](#manual-usage)
 - [Authenticating Tools](#authenticating-tools)
   - [Supported Authentication Mechanisms](#supported-authentication-mechanisms)
````
````diff
@@ -35,48 +38,41 @@ applications, enabling advanced orchestration and interaction with GenAI models.
 ## Installation

 ```bash
-pip install toolbox-llamaindex
+pip install toolbox-langchain
 ```

 ## Quickstart

 Here's a minimal example to get you started using
-# TODO: add link
-[LlamaIndex]():
+[LangGraph](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent):

 ```py
-import asyncio
-
-from llama_index.llms.google_genai import GoogleGenAI
-from llama_index.core.agent.workflow import AgentWorkflow
+from toolbox_langchain import ToolboxClient
+from langchain_google_vertexai import ChatVertexAI
+from langgraph.prebuilt import create_react_agent

-from toolbox_llamaindex import ToolboxClient
+toolbox = ToolboxClient("http://127.0.0.1:5000")
+tools = toolbox.load_toolset()

-async def run_agent():
-    toolbox = ToolboxClient("http://127.0.0.1:5000")
-    tools = toolbox.load_toolset()
+model = ChatVertexAI(model="gemini-1.5-pro-002")
+agent = create_react_agent(model, tools)

-    vertex_model = GoogleGenAI(
-        model="gemini-1.5-pro",
-        vertexai_config={"project": "project-id", "location": "us-central1"},
-    )
-    agent = AgentWorkflow.from_tools_or_functions(
-        tools,
-        llm=vertex_model,
-        system_prompt="You are a helpful assistant.",
-    )
-    response = await agent.run(user_msg="Get some response from the agent.")
-    print(response)
+prompt = "How's the weather today?"

-asyncio.run(run_agent())
+for s in agent.stream({"messages": [("user", prompt)]}, stream_mode="values"):
+    message = s["messages"][-1]
+    if isinstance(message, tuple):
+        print(message)
+    else:
+        message.pretty_print()
 ```

 ## Usage

 Import and initialize the toolbox client.

 ```py
-from toolbox_llamaindex import ToolboxClient
+from toolbox_langchain import ToolboxClient

 # Replace with your Toolbox service's URL
 toolbox = ToolboxClient("http://127.0.0.1:5000")
````
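The new Quickstart's streaming loop is worth unpacking: with `stream_mode="values"`, each yielded state carries the full message list, so the newest message is always the last element. Here is a minimal sketch of that consumption pattern, using a hypothetical `FakeAgent` stub in place of the object returned by `create_react_agent`, so it runs without a model or a Toolbox server:

```python
# Hypothetical stand-in for the agent returned by create_react_agent.
# Its stream(..., stream_mode="values") mimics LangGraph's behavior of
# yielding the full message list after each step, not a delta.
class FakeAgent:
    def stream(self, inputs, stream_mode="values"):
        messages = list(inputs["messages"])
        for reply in ["Checking the weather...", "It is sunny today."]:
            messages = messages + [("ai", reply)]
            yield {"messages": messages}

agent = FakeAgent()
seen = []
for s in agent.stream({"messages": [("user", "How's the weather today?")]},
                      stream_mode="values"):
    message = s["messages"][-1]  # newest message is last in the list
    seen.append(message)

print(seen[-1])  # → ('ai', 'It is sunny today.')
```

Note that the real agent yields LangChain message objects rather than tuples, which is why the README's loop falls back to `message.pretty_print()`.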
````diff
@@ -106,63 +102,85 @@
 Loading individual tools gives you finer-grained control over which tools are
 available to your LLM agent.

-## Use with LlamaIndex
+## Use with LangChain

 LangChain's agents can dynamically choose and execute tools based on the user
 input. Include tools loaded from the Toolbox SDK in the agent's toolkit:

 ```py
-from llama_index.llms.google_genai import GoogleGenAI
-from llama_index.core.agent.workflow import AgentWorkflow
+from langchain_google_vertexai import ChatVertexAI

-vertex_model = GoogleGenAI(
-    model="gemini-1.5-pro",
-    vertexai_config={"project": "project-id", "location": "us-central1"},
-)
+model = ChatVertexAI(model="gemini-1.5-pro-002")

 # Initialize agent with tools
-agent = AgentWorkflow.from_tools_or_functions(
-    tools,
-    llm=vertex_model,
-    system_prompt="You are a helpful assistant.",
-)
-
-# Query the agent
-response = await agent.run(user_msg="Get some response from the agent.")
-print(response)
+agent = model.bind_tools(tools)
+
+# Run the agent
+result = agent.invoke("Do something with the tools")
 ```

-### Maintain state
+## Use with LangGraph

-To maintain state for the agent, add context as follows:
+Integrate the Toolbox SDK with LangGraph to use Toolbox service tools within a
+graph-based workflow. Follow the [official
+guide](https://langchain-ai.github.io/langgraph/) with minimal changes.
+
+### Represent Tools as Nodes
+
+Represent each tool as a LangGraph node, encapsulating the tool's execution within the node's functionality:

 ```py
-from llama_index.core.agent.workflow import AgentWorkflow
-from llama_index.core.workflow import Context
-from llama_index.llms.google_genai import GoogleGenAI
-
-vertex_model = GoogleGenAI(
-    model="gemini-1.5-pro",
-    vertexai_config={"project": "twisha-dev", "location": "us-central1"},
-)
-agent = AgentWorkflow.from_tools_or_functions(
-    tools,
-    llm=vertex_model,
-    system_prompt="You are a helpful assistant",
-)
-
-# Save memory in agent context
-ctx = Context(agent)
-response = await agent.run(user_msg="Give me some response.", ctx=ctx)
-print(response)
+from toolbox_langchain import ToolboxClient
+from langgraph.graph import StateGraph, MessagesState
+from langgraph.prebuilt import ToolNode
+
+# Define the function that calls the model
+def call_model(state: MessagesState):
+    messages = state['messages']
+    response = model.invoke(messages)
+    return {"messages": [response]}  # Return a list to add to existing messages
+
+model = ChatVertexAI(model="gemini-1.5-pro-002")
+builder = StateGraph(MessagesState)
+tool_node = ToolNode(tools)
+
+builder.add_node("agent", call_model)
+builder.add_node("tools", tool_node)
+```
+
+### Connect Tools with LLM
+
+Connect tool nodes with LLM nodes. The LLM decides which tool to use based on
+input or context. Tool output can be fed back into the LLM:
+
+```py
+from typing import Literal
+from langgraph.graph import END, START
+from langchain_core.messages import HumanMessage
+
+# Define the function that determines whether to continue or not
+def should_continue(state: MessagesState) -> Literal["tools", END]:
+    messages = state['messages']
+    last_message = messages[-1]
+    if last_message.tool_calls:
+        return "tools"  # Route to "tools" node if LLM makes a tool call
+    return END  # Otherwise, stop
+
+builder.add_edge(START, "agent")
+builder.add_conditional_edges("agent", should_continue)
+builder.add_edge("tools", 'agent')
+
+graph = builder.compile()
+
+graph.invoke({"messages": [HumanMessage(content="Do something with the tools")]})
 ```

 ## Manual usage

-Execute a tool manually using the `call` method:
+Execute a tool manually using the `invoke` method:

 ```py
-result = tools[0].call({"name": "Alice", "age": 30})
+result = tools[0].invoke({"name": "Alice", "age": 30})
 ```

 This is useful for testing tools or when you need precise control over tool
````
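The `should_continue` conditional edge added in the hunk above is plain routing logic, so it can be exercised in isolation. A minimal sketch, assuming a hypothetical `StubMessage` in place of a real LangChain `AIMessage` and a local `END` constant standing in for `langgraph.graph.END`:

```python
END = "__end__"  # stands in for langgraph.graph.END (hypothetical local constant)

class StubMessage:
    """Hypothetical stand-in for an AIMessage: models only the one
    attribute that should_continue inspects, tool_calls."""
    def __init__(self, tool_calls):
        self.tool_calls = tool_calls

def should_continue(state):
    # Same rule as the diff: route to the "tools" node when the last
    # message carries tool calls, otherwise end the graph run.
    last_message = state["messages"][-1]
    if last_message.tool_calls:
        return "tools"
    return END

route_with_call = should_continue({"messages": [StubMessage([{"name": "my-tool"}])]})
route_without = should_continue({"messages": [StubMessage([])]})
print(route_with_call, route_without)  # → tools __end__
```

Because `add_conditional_edges("agent", should_continue)` is given no path map, the function's return value is used directly as the next node name, which is why it must return either `"tools"` or `END`.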
````diff
@@ -232,7 +250,7 @@ auth_tools = toolbox.load_toolset(auth_tokens={"my_auth": get_auth_token})

 ```py
 import asyncio
-from toolbox_llamaindex import ToolboxClient
+from toolbox_langchain import ToolboxClient

 async def get_auth_token():
     # ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
````
````diff
@@ -243,7 +261,7 @@ toolbox = ToolboxClient("http://127.0.0.1:5000")
 tool = toolbox.load_tool("my-tool")

 auth_tool = tool.add_auth_token("my_auth", get_auth_token)
-result = auth_tool.call({"input": "some input"})
+result = auth_tool.invoke({"input": "some input"})
 print(result)
 ```
````
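The point of passing `get_auth_token` as a function rather than a token string is that the token is resolved when the tool is invoked, not when it is registered, so short-lived ID tokens stay fresh. A hypothetical sketch of that deferred pattern; `SketchTool` is not the real SDK class, only an illustration of the behavior:

```python
# Hypothetical sketch of the deferred-token pattern: the getter is
# stored at registration time and only called on invoke().
class SketchTool:
    def __init__(self):
        self._token_getters = {}

    def add_auth_token(self, name, get_token):
        self._token_getters[name] = get_token  # stored, not called yet
        return self

    def invoke(self, inputs):
        # Resolve every registered token at call time.
        headers = {name: get() for name, get in self._token_getters.items()}
        return {"inputs": inputs, "headers": headers}

calls = []
def get_auth_token():
    calls.append(1)  # count how often the token is actually fetched
    return "id-token-123"

auth_tool = SketchTool().add_auth_token("my_auth", get_auth_token)
assert calls == []  # nothing fetched at registration time
result = auth_tool.invoke({"input": "some input"})
print(result["headers"])  # → {'my_auth': 'id-token-123'}
```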

````diff
@@ -311,7 +329,7 @@ use the asynchronous interfaces of the `ToolboxClient`.

 ```py
 import asyncio
-from toolbox_llamaindex import ToolboxClient
+from toolbox_langchain import ToolboxClient

 async def main():
     toolbox = ToolboxClient("http://127.0.0.1:5000")
````
````diff
@@ -321,4 +339,4 @@ async def main():

 if __name__ == "__main__":
     asyncio.run(main())
-```
+```
````
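The async pattern in the last two hunks, an `async def main()` driven by `asyncio.run`, can be exercised with a stub in place of `ToolboxClient`; the stub and its `aload_toolset` method are hypothetical and only simulate an awaitable loading interface:

```python
import asyncio

# Hypothetical stand-in for ToolboxClient's async interface, so the
# asyncio.run() pattern can be shown without a running Toolbox server.
class StubAsyncClient:
    def __init__(self, url):
        self.url = url

    async def aload_toolset(self):
        await asyncio.sleep(0)  # simulate network I/O
        return ["tool-a", "tool-b"]

async def main():
    toolbox = StubAsyncClient("http://127.0.0.1:5000")
    return await toolbox.aload_toolset()

tools = asyncio.run(main())
print(tools)  # → ['tool-a', 'tool-b']
```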
