MCP Client is the open-source way to connect any LLM to any MCP server in TypeScript/Node.js, letting you build custom agents with tool access without closed-source dependencies.

Let developers easily connect any LLM via LangChain.js to tools like web browsing, file operations, 3D modeling, and more.
| Feature | Description |
| --- | --- |
| Ease of use | Create an MCP-capable agent in just a few lines of TypeScript. |
| LLM Flexibility | Works with any LangChain.js-supported LLM that supports tool calling. |
| HTTP Support | Direct SSE/HTTP connection to MCP servers. |
| Dynamic Server Selection | Agents select the right MCP server from a pool on the fly. |
| Multi-Server Support | Use multiple MCP servers in one agent. |
| Tool Restrictions | Restrict unsafe tools like filesystem or network access. |
| Custom Agents | Build your own agents with the LangChain.js adapter or implement new adapters. |
- Node.js 22.0.0 or higher
- npm, yarn, or pnpm (examples use pnpm)
# Install from npm
npm install mcp-use
# LangChain.js and your LLM provider (e.g., OpenAI)
npm install langchain @langchain/openai dotenv
Create a `.env` file:
OPENAI_API_KEY=your_api_key
import { ChatOpenAI } from '@langchain/openai'
import { MCPAgent, MCPClient } from 'mcp-use'
import 'dotenv/config'
async function main() {
// 1. Configure MCP servers
const config = {
mcpServers: {
playwright: { command: 'npx', args: ['@playwright/mcp@latest'] }
}
}
const client = MCPClient.fromDict(config)
// 2. Create LLM
const llm = new ChatOpenAI({ modelName: 'gpt-4o' })
// 3. Instantiate agent
const agent = new MCPAgent({ llm, client, maxSteps: 20 })
// 4. Run query
const result = await agent.run('Find the best restaurant in Tokyo using Google Search')
console.log('Result:', result)
}
main().catch(console.error)
The `MCPAgent` class provides several methods for executing queries with different output formats:
`run()` executes a query and returns the final result as a string.
const result = await agent.run('What tools are available?')
console.log(result)
`stream()` yields intermediate steps during execution, giving visibility into the agent's reasoning process.
const stream = agent.stream('Search for restaurants in Tokyo')
for await (const step of stream) {
console.log(`Tool: ${step.action.tool}, Input: ${step.action.toolInput}`)
console.log(`Result: ${step.observation}`)
}
`streamEvents()` yields fine-grained LangChain `StreamEvent` objects, enabling token-by-token streaming and detailed event tracking.
const eventStream = agent.streamEvents('What is the weather today?')
for await (const event of eventStream) {
// Handle different event types
switch (event.event) {
case 'on_chat_model_stream':
// Token-by-token streaming from the LLM
if (event.data?.chunk?.content) {
process.stdout.write(event.data.chunk.content)
}
break
case 'on_tool_start':
console.log(`\nTool started: ${event.name}`)
break
case 'on_tool_end':
console.log(`Tool completed: ${event.name}`)
break
}
}
- `run()`: best for simple queries where you only need the final result
- `stream()`: best for debugging and understanding the agent's tool usage
- `streamEvents()`: best for real-time UI updates with token-level streaming
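As an illustration of the `streamEvents()` consumption pattern, here is a minimal sketch that collects streamed tokens into a single string. `fakeEvents` is a stand-in for `agent.streamEvents(...)` so the snippet runs without a live LLM; the event names and shapes follow LangChain's streamEvents convention.

```typescript
// Minimal sketch: accumulate token chunks from a streamEvents-style
// async iterable. `fakeEvents` is a hypothetical stand-in for
// `agent.streamEvents(...)`.
type StreamEvent = {
  event: string
  name?: string
  data?: { chunk?: { content?: string } }
}

async function* fakeEvents(): AsyncGenerator<StreamEvent> {
  yield { event: 'on_tool_start', name: 'search' }
  yield { event: 'on_tool_end', name: 'search' }
  yield { event: 'on_chat_model_stream', data: { chunk: { content: 'Hello, ' } } }
  yield { event: 'on_chat_model_stream', data: { chunk: { content: 'world' } } }
}

async function collectTokens(events: AsyncIterable<StreamEvent>): Promise<string> {
  let text = ''
  for await (const event of events) {
    // Keep only token chunks; ignore tool lifecycle events
    if (event.event === 'on_chat_model_stream' && event.data?.chunk?.content) {
      text += event.data.chunk.content
    }
  }
  return text
}

const answer = await collectTokens(fakeEvents())
console.log(answer) // Hello, world
```

The same `collectTokens` helper works unchanged against a real `agent.streamEvents(...)` iterable, since it only touches the `event` and `data.chunk.content` fields.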
The library provides built-in utilities for integrating with the Vercel AI SDK, making it easy to build streaming UIs with React hooks like `useCompletion` and `useChat`.
npm install ai @langchain/anthropic
import { ChatAnthropic } from '@langchain/anthropic'
import { LangChainAdapter } from 'ai'
import { createReadableStreamFromGenerator, MCPAgent, MCPClient, streamEventsToAISDK } from 'mcp-use'
async function createApiHandler() {
const config = {
mcpServers: {
everything: { command: 'npx', args: ['-y', '@modelcontextprotocol/server-everything'] }
}
}
const client = new MCPClient(config)
const llm = new ChatAnthropic({ model: 'claude-sonnet-4-20250514' })
const agent = new MCPAgent({ llm, client, maxSteps: 5 })
return async (request: { prompt: string }) => {
const streamEvents = agent.streamEvents(request.prompt)
const aiSDKStream = streamEventsToAISDK(streamEvents)
const readableStream = createReadableStreamFromGenerator(aiSDKStream)
return LangChainAdapter.toDataStreamResponse(readableStream)
}
}
import { streamEventsToAISDKWithTools } from 'mcp-use'
async function createEnhancedApiHandler() {
const config = {
mcpServers: {
everything: { command: 'npx', args: ['-y', '@modelcontextprotocol/server-everything'] }
}
}
const client = new MCPClient(config)
const llm = new ChatAnthropic({ model: 'claude-sonnet-4-20250514' })
const agent = new MCPAgent({ llm, client, maxSteps: 8 })
return async (request: { prompt: string }) => {
const streamEvents = agent.streamEvents(request.prompt)
// Enhanced stream includes tool usage notifications
const enhancedStream = streamEventsToAISDKWithTools(streamEvents)
const readableStream = createReadableStreamFromGenerator(enhancedStream)
return LangChainAdapter.toDataStreamResponse(readableStream)
}
}
// pages/api/chat.ts or app/api/chat/route.ts
import { ChatAnthropic } from '@langchain/anthropic'
import { LangChainAdapter } from 'ai'
import { createReadableStreamFromGenerator, MCPAgent, MCPClient, streamEventsToAISDK } from 'mcp-use'
export async function POST(req: Request) {
const { prompt } = await req.json()
const config = {
mcpServers: {
everything: { command: 'npx', args: ['-y', '@modelcontextprotocol/server-everything'] }
}
}
const client = new MCPClient(config)
const llm = new ChatAnthropic({ model: 'claude-sonnet-4-20250514' })
const agent = new MCPAgent({ llm, client, maxSteps: 10 })
try {
const streamEvents = agent.streamEvents(prompt)
const aiSDKStream = streamEventsToAISDK(streamEvents)
const readableStream = createReadableStreamFromGenerator(aiSDKStream)
return LangChainAdapter.toDataStreamResponse(readableStream)
}
finally {
await client.closeAllSessions()
}
}
// components/Chat.tsx
import { useCompletion } from 'ai/react'
export function Chat() {
const { completion, input, handleInputChange, handleSubmit } = useCompletion({
api: '/api/chat',
})
return (
<div>
<div>{completion}</div>
<form onSubmit={handleSubmit}>
<input
value={input}
onChange={handleInputChange}
placeholder="Ask me anything..."
/>
</form>
</div>
)
}
- `streamEventsToAISDK()`: converts streamEvents to a basic text stream
- `streamEventsToAISDKWithTools()`: enhanced stream that includes tool usage notifications
- `createReadableStreamFromGenerator()`: converts an async generator to a `ReadableStream`
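To illustrate what a helper in the style of `createReadableStreamFromGenerator()` does conceptually, here is a sketch (not mcp-use's actual implementation) that bridges an async generator to a web `ReadableStream`, pulling one chunk per consumer read:

```typescript
// Illustrative generator-to-ReadableStream bridge. `chunks` is a
// hypothetical stand-in for an AI SDK text stream.
async function* chunks(): AsyncGenerator<string> {
  yield 'partial '
  yield 'answer'
}

function toReadableStream(gen: AsyncGenerator<string>): ReadableStream<string> {
  return new ReadableStream<string>({
    // Called each time the consumer wants another chunk
    async pull(controller) {
      const { value, done } = await gen.next()
      if (done) controller.close()
      else controller.enqueue(value)
    },
    // Stop the generator if the consumer cancels the stream
    cancel() {
      void gen.return(undefined)
    }
  })
}

// Drain the stream back into a string to show the round trip
const reader = toReadableStream(chunks()).getReader()
let out = ''
for (;;) {
  const { value, done } = await reader.read()
  if (done) break
  out += value
}
console.log(out) // partial answer
```

Because `pull` is only invoked on demand, the generator is consumed lazily, which is what lets `LangChainAdapter.toDataStreamResponse` stream tokens to the browser as they arrive.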
You can store servers in a JSON file:
{
"mcpServers": {
"playwright": {
"command": "npx",
"args": ["@playwright/mcp@latest"]
}
}
}
Load it:
import { MCPClient } from 'mcp-use'
const client = MCPClient.fromConfigFile('./mcp-config.json')
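For the SSE/HTTP support mentioned in the feature list, a server entry can point at a URL instead of a command. This is a sketch: the endpoint address is an assumption, so check your MCP server's documentation for the real one.

```typescript
// Hypothetical HTTP/SSE server entry — the endpoint address below is
// an assumption; substitute your server's actual URL.
const httpConfig = {
  mcpServers: {
    remote: { url: 'http://localhost:3000/sse' }
  }
}

// Loaded the same way as a stdio config:
// const client = MCPClient.fromDict(httpConfig)
```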
We provide a comprehensive set of examples demonstrating various use cases. All examples are located in the `examples/` directory with a dedicated README.
# Install dependencies
npm install
# Run any example
npm run example:airbnb # Search accommodations with Airbnb
npm run example:browser # Browser automation with Playwright
npm run example:chat # Interactive chat with memory
npm run example:stream # Demonstrate streaming methods (stream & streamEvents)
npm run example:stream_events # Comprehensive streamEvents() examples
npm run example:ai_sdk # AI SDK integration with streaming
npm run example:filesystem # File system operations
npm run example:http # HTTP server connection
npm run example:everything # Test MCP functionalities
npm run example:multi # Multiple servers in one session
- Browser Automation: Control browsers to navigate websites and extract information
- File Operations: Read, write, and manipulate files through MCP
- Multi-Server: Combine multiple MCP servers (Airbnb + Browser) in a single task
- Sandboxed Execution: Run MCP servers in isolated E2B containers
- OAuth Flows: Authenticate with services like Linear using OAuth2
- Streaming Methods: Demonstrate both step-by-step and token-level streaming
- AI SDK Integration: Build streaming UIs with Vercel AI SDK and React hooks
See the examples README for detailed documentation and prerequisites.
import { MCPAgent, MCPClient } from 'mcp-use'

const config = {
  mcpServers: {
    airbnb: { command: 'npx', args: ['@openbnb/mcp-server-airbnb'] },
    playwright: { command: 'npx', args: ['@playwright/mcp@latest'] }
  }
}
const client = MCPClient.fromDict(config)

// `llm` is any tool-calling LangChain.js chat model, as in the quickstart
const agent = new MCPAgent({ llm, client, useServerManager: true })
await agent.run('Search Airbnb in Barcelona, then Google restaurants nearby')
const agent = new MCPAgent({
llm,
client,
disallowedTools: ['file_system', 'network']
})
Pietro Zullo | Zane | Luigi Pederzani

MIT © Zane