A Python demonstration project showcasing Claude's advanced capabilities through interleaved thinking, fine-grained tool streaming, and dynamic tool discovery with MCP (Model Context Protocol) integration.
Created by Martin Bowling • GitHub • Twitter/X
ThinkChain demonstrates the power of Claude's streaming interface with advanced features like:
- Interleaved and extended thinking - Claude thinks through problems step-by-step in real-time
- Fine-grained tool streaming - Watch tools execute with live progress updates
- Early interception of tool_use blocks - Tool results are injected back into Claude's thinking process
- Multiple tool calls per turn - Execute multiple tools simultaneously for complex workflows
- Pydantic-validated inputs - Robust type checking and validation for all tool interactions
The system combines local Python tools with MCP servers to create a unified, extensible tool ecosystem that works seamlessly with Claude's streaming capabilities.
# Clone the repository
git clone https://github.com/martinbowling/ThinkChain.git
cd ThinkChain
# Set up your API key
echo "ANTHROPIC_API_KEY=your_api_key_here" > .env
# Run immediately - uv handles all dependencies automatically!
uv run thinkchain.py # Enhanced UI with rich formatting
uv run thinkchain_cli.py # Minimal CLI version
uv run run.py # Smart launcher (auto-detects best UI)
# Clone and set up
git clone https://github.com/martinbowling/ThinkChain.git
cd ThinkChain
# Install dependencies
uv pip install -r requirements.txt
# or: pip install -r requirements.txt
# Set up your API key
echo "ANTHROPIC_API_KEY=your_api_key_here" > .env
# Run the application
python run.py # Smart launcher
python thinkchain.py # Enhanced UI
python thinkchain_cli.py # CLI version
The core innovation of ThinkChain is how tool execution results are injected back into Claude's thinking stream. When Claude calls a tool:
- The tool executes and returns results
- Results are immediately fed back into Claude's thinking process
- Claude can reason about the results before responding to the user
- This creates a natural thinking → tool → thinking → response flow
- Local Tools: Automatically discovers Python tools from the /tools directory (see the discovery sketch after this list)
- MCP Integration: Connects to Model Context Protocol servers for extended functionality
- Hot Reloading: Use the refresh command to reload tools during development
- Unified Registry: All tools (local + MCP) work identically in the streaming interface
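A rough sketch of how this discovery step can work (the discover_local_tools helper is illustrative, not ThinkChain's actual code): scan the tools package, import each module, and register every BaseTool subclass it defines.
import importlib
import inspect
import pkgutil
from tools.base import BaseTool

def discover_local_tools(package="tools"):
    """Import every module under tools/ and register BaseTool subclasses by name."""
    registry = {}
    pkg = importlib.import_module(package)
    for module_info in pkgutil.iter_modules(pkg.__path__):
        module = importlib.import_module(f"{package}.{module_info.name}")
        for _, obj in inspect.getmembers(module, inspect.isclass):
            if issubclass(obj, BaseTool) and obj is not BaseTool:
                tool = obj()
                registry[tool.name] = tool
    return registry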
- Rich formatting with colors, borders, and progress indicators
- Interactive tool browser with categorized displays
- Command autocomplete and history
- Real-time thinking visualization with syntax highlighting
- Graceful degradation to standard text interface
- Server-sent events (SSE) for real-time communication
- Fine-grained streaming of tool execution progress
- Concurrent tool execution when possible (see the sketch after this list)
- Robust error handling and recovery
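For the concurrent execution mentioned above, one plausible pattern (a sketch only; run_tools_concurrently and the shape of calls are assumptions, not ThinkChain's actual code) is to fan the independent tool calls from a single assistant turn out with asyncio:
import asyncio

async def run_tools_concurrently(calls, registry):
    # calls: list of (tool_name, tool_input) pairs emitted in one assistant turn
    async def run_one(name, tool_input):
        tool = registry[name]
        # Tools expose a synchronous execute(), so off-load each call to a worker thread
        return await asyncio.to_thread(tool.execute, **tool_input)
    return await asyncio.gather(*(run_one(name, tool_input) for name, tool_input in calls))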
- Zero-setup execution with uv run - no virtual environments or dependency installation needed
- Automatic tool discovery from the /tools directory
- Hot reloading with the /refresh command during development
- Rich error messages and graceful degradation when dependencies are missing
The key technical innovation is the tool result injection mechanism:
# Tool results are injected back into Claude's thinking process
async def stream_once(messages, tools):
    # Start streaming with thinking enabled
    async with client.messages.stream(
        model="claude-sonnet-4-20250514",
        messages=messages,
        tools=tools,
        betas=["interleaved-thinking-2025-05-14", "fine-grained-tool-streaming-2025-05-14"],
        thinking_budget=1024
    ) as stream:
        async for event in stream:
            if event.type == "tool_use":
                # Execute the tool and inject its result back into the stream
                result = await execute_tool(event.name, event.input)
                # This result becomes part of Claude's thinking context
                # for the remainder of the response
                yield {"type": "tool_result", "content": result}
This creates a feedback loop (see the sketch after this list) where:
- Claude's initial thinking leads to tool use
- Tool results inform continued thinking
- Final response incorporates both reasoning and tool outcomes
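For illustration, here is a minimal, hypothetical sketch of the message bookkeeping behind that loop, assuming the Anthropic Messages API's tool_use/tool_result content-block format (append_tool_result is an invented helper, not ThinkChain's actual code):
def append_tool_result(messages, assistant_blocks, tool_use_id, result_text):
    # The assistant turn (thinking/text/tool_use blocks streamed so far) goes first...
    messages.append({"role": "assistant", "content": assistant_blocks})
    # ...followed by a user turn carrying the matching tool_result block
    messages.append({
        "role": "user",
        "content": [{
            "type": "tool_result",
            "tool_use_id": tool_use_id,  # must match the id of the tool_use block
            "content": result_text,
        }],
    })
    return messages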
Local Tools (/tools/*.py) → Validation → Registry
                                             ↓
MCP Servers (config.json) → Connection → Registry → Unified Tool List → Claude API
Each tool must implement the BaseTool interface:
from typing import Any, Dict

class BaseTool:
    @property
    def name(self) -> str: ...
    @property
    def description(self) -> str: ...
    @property
    def input_schema(self) -> Dict[str, Any]: ...
    def execute(self, **kwargs) -> str: ...
User Input → Claude API → Thinking Stream → Tool Detection → Tool Execution
    ↑                                                              ↓
Response ← Thinking Integration ← Tool Result Injection ← Tool Output
🌐 Web & Data Tools:
- weathertool: Real weather data from wttr.in API for any location worldwide
- duckduckgotool: Live DuckDuckGo search results for web queries and restaurant searches
- webscrapertool: Enhanced web scraper that extracts main content from any webpage
📁 File & Development Tools:
- filecreatortool: Creates new files with specified content and directory structure
- fileedittool: Advanced file editing with full/partial content replacement and search/replace
- filecontentreadertool: Reads and returns content from multiple files simultaneously
- createfolderstool: Creates directories and nested folder structures
- diffeditortool: Precise text snippet replacement in files
⚙️ Development & Package Management:
- uvpackagemanager: Complete interface to uv package manager for Python projects
- lintingtool: Runs Ruff linter on Python files to detect and fix code issues
- toolcreator: Dynamically creates new tools based on natural language descriptions
Configure in mcp_config.json:
- SQLite: Database operations and queries
- Puppeteer: Web browser automation
- Filesystem: Advanced file system operations
- Brave Search: Real web search integration
While chatting with Claude, you can use these slash commands:
- /refresh or /reload - Refresh tool discovery (picks up new tools)
- /tools - Browse all available tools by category
- /status - Show comprehensive system status
- /clear - Clear screen while preserving the status bar
- /config - Show current configuration
- /config model <model_name> - Switch between Claude models (sonnet/opus)
- /config thinking <1024-16000> - Adjust the thinking token budget
- /help - Show help information
- /exit or /quit - End the conversation
Legacy Support: All commands work without the / prefix for backward compatibility.
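A minimal sketch of how that dual form can be supported (parse_command and the command set below are hypothetical, not ThinkChain's actual code): strip a leading slash before matching the command name.
COMMANDS = {"refresh", "reload", "tools", "status", "clear", "config", "help", "exit", "quit"}

def parse_command(user_input):
    # Return (command, args) if the input is a command, otherwise None
    text = user_input.strip()
    if text.startswith("/"):
        text = text[1:]  # legacy support: commands also work without '/'
    parts = text.split()
    if parts and parts[0].lower() in COMMANDS:
        return parts[0].lower(), parts[1:]
    return None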
Create a .env file:
ANTHROPIC_API_KEY=your_api_key_here
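The python-dotenv dependency listed later is the usual way this file gets loaded; a minimal sketch, assuming the key name above:
import os
from dotenv import load_dotenv

load_dotenv()  # reads ANTHROPIC_API_KEY from .env in the working directory
api_key = os.environ["ANTHROPIC_API_KEY"]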
Edit mcp_config.json:
{
  "mcpServers": {
    "sqlite": {
      "command": "uvx",
      "args": ["mcp-server-sqlite", "--db-path", "./database.db"],
      "enabled": true
    },
    "puppeteer": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"],
      "enabled": false
    }
  }
}
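For reference, reading this file and keeping only the enabled servers can be as simple as the following sketch (load_enabled_servers is illustrative, not ThinkChain's actual code):
import json

def load_enabled_servers(path="mcp_config.json"):
    with open(path) as f:
        config = json.load(f)
    return {
        name: spec
        for name, spec in config.get("mcpServers", {}).items()
        if spec.get("enabled", False)
    }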
The system supports both Claude 4 models with configurable settings:
Available Models:
- claude-sonnet-4-20250514 (default) - Fast and efficient
- claude-opus-4-20250514 - Most capable, slower
Configurable Settings:
- Thinking budget: 1024-16000 tokens (default: 1024)
- Max response tokens: 1024
- Beta features: interleaved-thinking-2025-05-14, fine-grained-tool-streaming-2025-05-14
Runtime Configuration:
# Change model during conversation
/config model claude-opus-4-20250514
# Increase thinking budget for complex problems
/config thinking 8192
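One plausible way to back these commands at runtime (a sketch with invented names, not ThinkChain's actual code) is a small settings dictionary that /config updates, clamping the thinking budget to the 1024-16000 range:
SETTINGS = {"model": "claude-sonnet-4-20250514", "thinking_budget": 1024}

def handle_config_command(args):
    # args: the words after '/config', e.g. ['model', 'claude-opus-4-20250514']
    if not args:
        return str(SETTINGS)
    if args[0] == "model" and len(args) > 1:
        SETTINGS["model"] = args[1]
    elif args[0] == "thinking" and len(args) > 1:
        SETTINGS["thinking_budget"] = max(1024, min(16000, int(args[1])))
    return f"Updated settings: {SETTINGS}"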
Creating a new tool is straightforward - just follow these steps:
Create a new Python file in the /tools/ directory (e.g., /tools/mytool.py):
from tools.base import BaseTool
class MyTool(BaseTool):
    name = "mytool"
    description = """
    A detailed description of what your tool does.
    Use this tool when users ask about:
    - Specific use case 1
    - Specific use case 2
    - "Keywords that should trigger this tool"
    """
    input_schema = {
        "type": "object",
        "properties": {
            "input_param": {
                "type": "string",
                "description": "Description of this parameter"
            },
            "optional_param": {
                "type": "integer",
                "description": "Optional parameter with default",
                "default": 10
            }
        },
        "required": ["input_param"]
    }

    def execute(self, **kwargs) -> str:
        input_param = kwargs.get("input_param")
        optional_param = kwargs.get("optional_param", 10)
        # Your tool logic here
        result = f"Processed: {input_param} with {optional_param}"
        return result
- Class name: Must match the filename (e.g., MyTool for mytool.py)
- Inherit from BaseTool: Import from tools.base
- Four required attributes: name, description, input_schema, and an execute() method
- Return strings: The execute() method must return a string result
- Tools are automatically discovered on startup
- Use the /refresh command to reload tools without restarting
- Check with the /tools command to see your new tool listed
- Descriptive names: Use clear, action-oriented names
- Rich descriptions: Include use cases and keywords that should trigger the tool
- Input validation: Use comprehensive JSON schemas
- Error handling: Wrap risky operations in try/except blocks (see the example after this list)
- Helpful output: Return formatted, readable results
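Putting the error-handling and output advice together, a hypothetical tool might look like this (UrlFetchTool is an invented example, assuming the requests dependency mentioned later):
import requests
from tools.base import BaseTool

class UrlFetchTool(BaseTool):
    name = "urlfetchtool"
    description = "Fetches a URL and reports how much content was returned."
    input_schema = {
        "type": "object",
        "properties": {"url": {"type": "string", "description": "URL to fetch"}},
        "required": ["url"]
    }

    def execute(self, **kwargs) -> str:
        url = kwargs.get("url")
        try:
            response = requests.get(url, timeout=10)
            response.raise_for_status()
        except requests.RequestException as exc:
            # Return the error as a readable string so Claude can reason about it
            return f"Error fetching {url}: {exc}"
        return f"Fetched {len(response.text)} characters from {url}"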
MCP allows integration with external servers that provide additional tools:
Most MCP servers can be installed via uvx or npx:
# Install SQLite MCP server
uv tool install mcp-server-sqlite
# Install Puppeteer MCP server (requires Node.js)
npm install -g @modelcontextprotocol/server-puppeteer
# Install Brave Search MCP server
npm install -g @modelcontextprotocol/server-brave-search
Edit mcp_config.json to add your server:
{
  "mcpServers": {
    "my-server": {
      "command": "uvx",
      "args": ["my-mcp-server", "--custom-arg", "value"],
      "description": "Description of what this server provides",
      "enabled": true,
      "env": {
        "API_KEY": "your_api_key_if_needed"
      }
    }
  }
}
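Behind the scenes, an entry like the one above maps onto the MCP Python client roughly as follows (a sketch using the mcp package's stdio client; ThinkChain's own wrapper may differ):
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def list_server_tools(spec):
    # spec: one entry from mcpServers, e.g. {"command": "uvx", "args": [...], "env": {...}}
    params = StdioServerParameters(
        command=spec["command"],
        args=spec.get("args", []),
        env=spec.get("env"),
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            return [tool.name for tool in tools.tools]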
Popular MCP servers you can integrate:
Data & Storage:
- mcp-server-sqlite - Database operations
- mcp-server-postgres - PostgreSQL integration
- mcp-server-redis - Redis cache operations
Web & Automation:
- @modelcontextprotocol/server-puppeteer - Browser automation
- @modelcontextprotocol/server-brave-search - Web search
- @modelcontextprotocol/server-filesystem - File operations
APIs & Services:
- @modelcontextprotocol/server-github - GitHub integration
- @modelcontextprotocol/server-slack - Slack integration
- mcp-server-aws - AWS operations
After adding a server, test it:
# Test MCP functionality
python test_mcp.py
# Start ThinkChain and check tools
python thinkchain.py
/tools # Should show both local and MCP tools
# Create tool
vim tools/newtool.py
# Test tool
python thinkchain.py
/refresh # Reload tools
"Use my new tool for X" # Test with Claude
# Iterate and improve
vim tools/newtool.py
/refresh # Reload again
- Use print() statements in your execute() method - they'll show in the console
- Return error messages as strings for Claude to see
- Check the tool discovery logs on startup
# Traditional Python execution
python run.py # Smart launcher
python thinkchain.py # Full-featured UI
python thinkchain_cli.py # Minimal dependencies
# Using uv run (handles dependencies automatically)
uv run run.py # Smart launcher
uv run thinkchain.py # Full-featured UI
uv run thinkchain_cli.py # Minimal dependencies
Core:
- anthropic>=0.25.0 - Claude API client
- sseclient-py - Server-sent events handling
- pydantic - Data validation and schemas
- python-dotenv - Environment variable management
Enhanced UI:
- rich - Terminal formatting and colors
- prompt_toolkit - Interactive command line features
MCP Integration:
- mcp - Model Context Protocol client
- MCP server packages (installed via uvx or npx)
Some tools require additional packages that are installed automatically:
Web Tools (weathertool, duckduckgotool, webscrapertool):
- requests - HTTP requests
- beautifulsoup4 - HTML parsing
Note: Missing dependencies are handled gracefully - tools that can't import will be skipped during discovery with informative error messages.
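That graceful skipping can be implemented with a guarded import during discovery; a minimal sketch (the helper name and message format are illustrative, not ThinkChain's actual code):
import importlib

def safe_import_tool_module(module_name):
    # Import a tool module, skipping it with a message if a dependency is missing
    try:
        return importlib.import_module(module_name)
    except ImportError as exc:
        print(f"Skipping {module_name}: missing dependency ({exc})")
        return None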
All main scripts include inline dependency declarations that make them compatible with uv run:
#!/usr/bin/env python3
# /// script
# dependencies = [
# "anthropic>=0.25.0",
# "sseclient-py",
# "pydantic",
# "python-dotenv",
# "rich>=13.0.0",
# "requests",
# "beautifulsoup4",
# "mcp",
# "httpx",
# ]
# ///
Benefits of uv run:
- ✅ Zero setup - No need to create virtual environments or install dependencies
- ✅ Automatic dependency management - uv handles everything
- ✅ Isolated execution - Each run gets a clean environment
- ✅ Cross-platform - Works identically on macOS, Linux, and Windows
# All these work immediately after cloning:
uv run thinkchain.py # Enhanced UI with all features
uv run thinkchain_cli.py # Minimal CLI version
uv run run.py # Smart launcher
uv run test_mcp.py # Test MCP integration
This project is designed to be forked and extended! Here are some ideas:
- Add new local tools for your specific use cases
- Integrate additional MCP servers from the growing ecosystem
- Enhance the UI with new visualization features
- Extend the streaming architecture for custom event types
- Build domain-specific tool collections (data science, web dev, etc.)
The process is straightforward with uv run:
# Fork and clone
git clone https://github.com/yourusername/your-thinkchain-fork.git
cd your-thinkchain-fork
# Set up API key
echo "ANTHROPIC_API_KEY=your_key" > .env
# Create your first tool
vim tools/yourtool.py
# Test immediately with uv run (no setup needed!)
uv run thinkchain.py
/refresh # Loads your new tool
"Use my new tool for X" # Test with Claude
- Data Science ThinkChain: Add pandas, matplotlib, jupyter tools
- Web Development ThinkChain: Add React, npm, git, deployment tools
- DevOps ThinkChain: Add Docker, Kubernetes, AWS/GCP tools
- Research ThinkChain: Add academic paper search, citation tools
MIT License - see the LICENSE file for details.
- Inspired by Pietro Schirano's chain of tools concept - we loved the idea and decided to crank out our own version!
- Built with Anthropic's Claude API
- MCP integration powered by the Model Context Protocol
- Enhanced UI built with Rich and Prompt Toolkit
Ready to think differently about AI tool integration? Fork ThinkChain and start building! 🚀