![MCP Toolbox Logo](https://raw.githubusercontent.com/googleapis/genai-toolbox/main/logo.png)
- # MCP Toolbox LangChain SDK
+ # MCP Toolbox LlamaIndex SDK

This SDK allows you to seamlessly integrate the functionalities of
- [Toolbox](https://github.com/googleapis/genai-toolbox) into your LangChain LLM
+ [Toolbox](https://github.com/googleapis/genai-toolbox) into your LlamaIndex LLM
applications, enabling advanced orchestration and interaction with GenAI models.

<!-- TOC ignore:true -->
@@ -15,10 +15,7 @@ applications, enabling advanced orchestration and interaction with GenAI models.
- [Loading Tools](#loading-tools)
  - [Load a toolset](#load-a-toolset)
  - [Load a single tool](#load-a-single-tool)
- - [Use with LangChain](#use-with-langchain)
- - [Use with LangGraph](#use-with-langgraph)
-   - [Represent Tools as Nodes](#represent-tools-as-nodes)
-   - [Connect Tools with LLM](#connect-tools-with-llm)
+ - [Use with LlamaIndex](#use-with-llamaindex)
- [Manual usage](#manual-usage)
- [Authenticating Tools](#authenticating-tools)
  - [Supported Authentication Mechanisms](#supported-authentication-mechanisms)
@@ -38,41 +35,48 @@ applications, enabling advanced orchestration and interaction with GenAI models.
## Installation

```bash
- pip install toolbox-langchain
+ pip install toolbox-llamaindex
```

## Quickstart

Here's a minimal example to get you started using
- [LangGraph](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent):
+ <!-- TODO: add link -->
+ [LlamaIndex]():

```py
- from toolbox_langchain import ToolboxClient
- from langchain_google_vertexai import ChatVertexAI
- from langgraph.prebuilt import create_react_agent
+ import asyncio

- toolbox = ToolboxClient("http://127.0.0.1:5000")
- tools = toolbox.load_toolset()
+ from llama_index.llms.google_genai import GoogleGenAI
+ from llama_index.core.agent.workflow import AgentWorkflow
+
+ from toolbox_llamaindex import ToolboxClient

- model = ChatVertexAI(model="gemini-1.5-pro-002")
- agent = create_react_agent(model, tools)
+ async def run_agent():
+     toolbox = ToolboxClient("http://127.0.0.1:5000")
+     tools = toolbox.load_toolset()

- prompt = "How's the weather today?"
+     vertex_model = GoogleGenAI(
+         model="gemini-1.5-pro",
+         vertexai_config={"project": "project-id", "location": "us-central1"},
+     )
+     agent = AgentWorkflow.from_tools_or_functions(
+         tools,
+         llm=vertex_model,
+         system_prompt="You are a helpful assistant.",
+     )
+     response = await agent.run(user_msg="Get some response from the agent.")
+     print(response)

- for s in agent.stream({"messages": [("user", prompt)]}, stream_mode="values"):
-     message = s["messages"][-1]
-     if isinstance(message, tuple):
-         print(message)
-     else:
-         message.pretty_print()
+ asyncio.run(run_agent())
```
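
The tools returned by `load_toolset()` follow llama_index's tool interface, so you can inspect what was loaded before handing them to an agent. A minimal sketch, assuming each tool exposes the usual `metadata` fields:

```py
from toolbox_llamaindex import ToolboxClient

toolbox = ToolboxClient("http://127.0.0.1:5000")
tools = toolbox.load_toolset()

# Print each loaded tool's name and description (assumes the llama_index
# tool interface, where every tool carries a .metadata attribute).
for tool in tools:
    print(tool.metadata.name, "-", tool.metadata.description)
```
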
## Usage

Import and initialize the toolbox client.

```py
- from toolbox_langchain import ToolboxClient
+ from toolbox_llamaindex import ToolboxClient

# Replace with your Toolbox service's URL
toolbox = ToolboxClient("http://127.0.0.1:5000")
@@ -102,85 +106,63 @@ tool = toolbox.load_tool("my-tool")
Loading individual tools gives you finer-grained control over which tools are
available to your LLM agent, as in the sketch below.
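
For example, a minimal sketch of restricting an agent to a hand-picked subset (the tool names `search-hotels` and `book-hotel` are hypothetical placeholders for tools defined in your Toolbox configuration):

```py
from toolbox_llamaindex import ToolboxClient

toolbox = ToolboxClient("http://127.0.0.1:5000")

# Load only the tools this agent is allowed to use.
# The tool names below are hypothetical placeholders.
tools = [
    toolbox.load_tool("search-hotels"),
    toolbox.load_tool("book-hotel"),
]
```
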
- ## Use with LangChain
+ ## Use with LlamaIndex

- LangChain's agents can dynamically choose and execute tools based on the user
+ LlamaIndex's agents can dynamically choose and execute tools based on the user
input. Include tools loaded from the Toolbox SDK in the agent's toolkit:

```py
- from langchain_google_vertexai import ChatVertexAI
+ from llama_index.llms.google_genai import GoogleGenAI
+ from llama_index.core.agent.workflow import AgentWorkflow

- model = ChatVertexAI(model="gemini-1.5-pro-002")
+ vertex_model = GoogleGenAI(
+     model="gemini-1.5-pro",
+     vertexai_config={"project": "project-id", "location": "us-central1"},
+ )

# Initialize agent with tools
- agent = model.bind_tools(tools)
-
- # Run the agent
- result = agent.invoke("Do something with the tools")
- ```
-
- ## Use with LangGraph
-
- Integrate the Toolbox SDK with LangGraph to use Toolbox service tools within a
- graph-based workflow. Follow the [official
- guide](https://langchain-ai.github.io/langgraph/) with minimal changes.
-
- ### Represent Tools as Nodes
-
- Represent each tool as a LangGraph node, encapsulating the tool's execution within the node's functionality:
-
- ```py
- from toolbox_langchain import ToolboxClient
- from langgraph.graph import StateGraph, MessagesState
- from langgraph.prebuilt import ToolNode
-
- # Define the function that calls the model
- def call_model(state: MessagesState):
-     messages = state['messages']
-     response = model.invoke(messages)
-     return {"messages": [response]}  # Return a list to add to existing messages
-
- model = ChatVertexAI(model="gemini-1.5-pro-002")
- builder = StateGraph(MessagesState)
- tool_node = ToolNode(tools)
-
- builder.add_node("agent", call_model)
- builder.add_node("tools", tool_node)
+ agent = AgentWorkflow.from_tools_or_functions(
+     tools,
+     llm=vertex_model,
+     system_prompt="You are a helpful assistant.",
+ )
+
+ # Query the agent
+ response = await agent.run(user_msg="Get some response from the agent.")
+ print(response)
```
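
Note that `agent.run(...)` is awaited, so this snippet assumes it executes inside a coroutine. A minimal wrapper, mirroring the Quickstart's pattern (the `main` function here is illustrative, not part of the SDK):

```py
import asyncio

from llama_index.core.agent.workflow import AgentWorkflow

async def main(agent: AgentWorkflow) -> None:
    # `agent` is the AgentWorkflow built in the snippet above.
    response = await agent.run(user_msg="Get some response from the agent.")
    print(response)

# asyncio.run(main(agent))  # call with the agent constructed above
```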

- ### Connect Tools with LLM
+ ### Maintain state

- Connect tool nodes with LLM nodes. The LLM decides which tool to use based on
- input or context. Tool output can be fed back into the LLM:
+ To maintain state for the agent, add context as follows:

```py
- from typing import Literal
- from langgraph.graph import END, START
- from langchain_core.messages import HumanMessage
-
- # Define the function that determines whether to continue or not
- def should_continue(state: MessagesState) -> Literal["tools", END]:
-     messages = state['messages']
-     last_message = messages[-1]
-     if last_message.tool_calls:
-         return "tools"  # Route to "tools" node if LLM makes a tool call
-     return END  # Otherwise, stop
-
- builder.add_edge(START, "agent")
- builder.add_conditional_edges("agent", should_continue)
- builder.add_edge("tools", 'agent')
-
- graph = builder.compile()
-
- graph.invoke({"messages": [HumanMessage(content="Do something with the tools")]})
+ from llama_index.core.agent.workflow import AgentWorkflow
+ from llama_index.core.workflow import Context
+ from llama_index.llms.google_genai import GoogleGenAI
+
+ vertex_model = GoogleGenAI(
+     model="gemini-1.5-pro",
+     vertexai_config={"project": "project-id", "location": "us-central1"},
+ )
+ agent = AgentWorkflow.from_tools_or_functions(
+     tools,
+     llm=vertex_model,
+     system_prompt="You are a helpful assistant.",
+ )
+
+ # Save memory in agent context
+ ctx = Context(agent)
+ response = await agent.run(user_msg="Give me some response.", ctx=ctx)
+ print(response)
```
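
Because the `Context` object carries the conversation state, reusing the same `ctx` on a later call lets the agent refer back to earlier turns. A brief illustrative continuation of the snippet above (the user message is just an example):

```py
# Reusing the same ctx preserves the chat history from the previous run.
followup = await agent.run(user_msg="Summarize your previous answer.", ctx=ctx)
print(followup)
```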

## Manual usage

- Execute a tool manually using the `invoke` method:
+ Execute a tool manually using the `call` method:

```py
- result = tools[0].invoke({"name": "Alice", "age": 30})
+ result = tools[0].call({"name": "Alice", "age": 30})
```

This is useful for testing tools or when you need precise control over tool
@@ -250,7 +232,7 @@ auth_tools = toolbox.load_toolset(auth_tokens={"my_auth": get_auth_token})
```py
import asyncio
- from toolbox_langchain import ToolboxClient
+ from toolbox_llamaindex import ToolboxClient

async def get_auth_token():
    # ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
@@ -261,7 +243,7 @@ toolbox = ToolboxClient("http://127.0.0.1:5000")
tool = toolbox.load_tool("my-tool")

auth_tool = tool.add_auth_token("my_auth", get_auth_token)
- result = auth_tool.invoke({"input": "some input"})
+ result = auth_tool.call({"input": "some input"})
print(result)
```

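For illustration, one way `get_auth_token` might be implemented when the service expects Google-issued ID tokens. This is an assumption, not part of this SDK; `YOUR_AUDIENCE` is a placeholder for your configured audience or client ID:

```py
import google.auth.transport.requests
import google.oauth2.id_token

async def get_auth_token():
    # Fetch a Google ID token (assumes google-auth is installed and
    # application-default credentials are available; substitute whatever
    # flow your "my_auth" auth source actually expects).
    request = google.auth.transport.requests.Request()
    return google.oauth2.id_token.fetch_id_token(request, "YOUR_AUDIENCE")
```
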
@@ -329,7 +311,7 @@ use the asynchronous interfaces of the `ToolboxClient`.
```py
import asyncio
- from toolbox_langchain import ToolboxClient
+ from toolbox_llamaindex import ToolboxClient

async def main():
    toolbox = ToolboxClient("http://127.0.0.1:5000")