A Model Context Protocol (MCP) server implementation that creates a quantum-inspired swarm of Claude 3.7 Sonnet instances with field coherence optimization. This server enables enriched reasoning through multiple specialized LLM instances that work together with emergent properties.
- Quantum-Inspired Field Computing: Uses a field-based model to maintain coherence between Claude instances
- WebContainer Integration: Full stack sandboxed environment for execution
- PGLite with Vector Storage: Efficient vector database with pgvector extension
- Multiple Claude Specializations: Instances focus on pattern recognition, information synthesis, and reasoning
- Coherence Optimization: Selects the most coherent outputs across instances
- Extended Thinking Support: Optional 128k token thinking capability
- Live Query Updates: Real-time coherence notifications through PGLite live extension
- VoyageAI Embeddings: High-quality embeddings using VoyageAI's state-of-the-art models (voyage-3-large)
- Node.js 18.x or higher
- Anthropic API key with access to Claude 3.7 Sonnet
- VoyageAI API key (optional but recommended for better embeddings)
1. Clone this repository:

   ```bash
   git clone https://github.com/wheattoast11/mcp-mindmesh.git
   cd mcp-mindmesh
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Create a `.env` file by copying the template:

   ```bash
   cp .env.template .env
   ```

4. Edit `.env`, add your Anthropic API key and (optionally) your VoyageAI API key, and adjust other settings as needed.
Build and start the server:

```bash
npm run build
npm start
```

For development with auto-reload:

```bash
npm run dev
```
You can connect to this MCP server using any MCP client, such as:
- Claude Desktop Application for Windows (official Anthropic client)
- Cursor IDE's agent capabilities
- Cline VSCode extension
- Any other MCP-compatible client
The server will be available at `http://localhost:3000` by default (or whichever port you specified in the `.env` file).
The main tool provided by this server is `reason_with_swarm`. It takes a prompt, processes it through multiple specialized Claude instances, and returns the most coherent result.
Example usage in Claude Desktop:
Please use the swarm to analyze the relationship between quantum field theory and consciousness.
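For programmatic access, the sketch below shows how a client built on the official MCP TypeScript SDK could call the tool. The stdio launch of `dist/index.js` and the `prompt` argument name are assumptions for illustration; check the server's actual tool schema (e.g. via `tools/list`) before relying on them.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Launch the built server over stdio (assumes STDIO_TRANSPORT=true in .env).
  const transport = new StdioClientTransport({
    command: "node",
    args: ["dist/index.js"],
  });

  const client = new Client({ name: "mindmesh-example-client", version: "0.1.0" });
  await client.connect(transport);

  // Invoke the swarm tool; the `prompt` field name is an assumption.
  const result = await client.callTool({
    name: "reason_with_swarm",
    arguments: {
      prompt: "Analyze the relationship between quantum field theory and consciousness.",
    },
  });

  console.log(result);
  await client.close();
}

main().catch(console.error);
```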
All configuration options can be set in the `.env` file:

| Environment Variable | Description | Default |
|---|---|---|
| `ANTHROPIC_API_KEY` | Your Anthropic API key | (required) |
| `VOYAGE_API_KEY` | Your VoyageAI API key | (optional) |
| `PORT` | HTTP server port | 3000 |
| `STDIO_TRANSPORT` | Use stdio transport instead of HTTP | false |
| `CLAUDE_INSTANCES` | Number of Claude instances in the swarm | 8 |
| `USE_EXTENDED_THINKING` | Enable 128k extended thinking | true |
| `COHERENCE_THRESHOLD` | Minimum coherence threshold | 0.7 |
| `EMBEDDING_MODEL` | VoyageAI embedding model to use | voyage-3-large |
| `DB_PATH` | Path for the PGLite database | `idb://mindmesh.db` |
| `DEBUG` | Enable debug logging | false |
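For reference, a filled-in `.env` might look like the following (the API key values are placeholders; the remaining values are the defaults from the table above):

```
ANTHROPIC_API_KEY=sk-ant-your-key-here
VOYAGE_API_KEY=your-voyage-key-here
PORT=3000
STDIO_TRANSPORT=false
CLAUDE_INSTANCES=8
USE_EXTENDED_THINKING=true
COHERENCE_THRESHOLD=0.7
EMBEDDING_MODEL=voyage-3-large
DB_PATH=idb://mindmesh.db
DEBUG=false
```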
The server architecture consists of:
- MCP Server Layer: Implements the Model Context Protocol (2025-03-26 specification)
- WebContainer Layer: Provides sandboxed environment for execution
- PGLite Vector Database: Stores state vectors with pgvector extension
- Claude Swarm Layer: Manages multiple specialized Claude instances
- Quantum Field Layer: Handles field coherence and optimization
- Embedding Layer: Generates high-quality embeddings using VoyageAI models
Requests flow through these layers as follows:
Client Request → MCP Server → Swarm Processing → Claude API → Coherence Optimization → Response
The server uses WebContainer technology for a fully sandboxed environment, providing:
- Isolated execution environment
- Full stack capabilities
- File system access
- Network communication
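As a rough illustration of what this sandbox layer provides, here is a minimal, standalone sketch of the `@webcontainer/api` package. It is not this server's actual wiring; note that WebContainer can only boot from a cross-origin-isolated browser context.

```typescript
import { WebContainer } from "@webcontainer/api";

// Boot the sandboxed Node.js environment (browser-only, cross-origin-isolated page).
const container = await WebContainer.boot();

// Mount a tiny in-memory file tree and execute a script inside the sandbox.
await container.mount({
  "hello.js": { file: { contents: "console.log('hello from the sandbox');" } },
});

const proc = await container.spawn("node", ["hello.js"]);
proc.output.pipeTo(
  new WritableStream({ write: (chunk) => console.log(chunk) })
);
console.log("exit code:", await proc.exit);
```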
PGLite provides:
- Client-side PostgreSQL database compiled to WebAssembly
- Vector operations through pgvector extension
- Live query notifications for real-time updates
- Persistent storage across sessions
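A standalone sketch of how these pieces fit together with `@electric-sql/pglite` is shown below. The table name, vector dimension, and queries are illustrative only, not the server's actual schema.

```typescript
import { PGlite } from "@electric-sql/pglite";
import { vector } from "@electric-sql/pglite/vector";
import { live } from "@electric-sql/pglite/live";

// Open a persistent database (IndexedDB-backed in the browser) with both extensions loaded.
const db = await PGlite.create("idb://mindmesh.db", { extensions: { vector, live } });

await db.exec(`
  CREATE EXTENSION IF NOT EXISTS vector;
  CREATE TABLE IF NOT EXISTS state_vectors (
    id SERIAL PRIMARY KEY,
    label TEXT,
    embedding vector(1024)
  );
`);

// Insert a state vector (pgvector accepts the '[...]' literal produced by JSON.stringify).
const v = Array.from({ length: 1024 }, () => Math.random());
await db.query("INSERT INTO state_vectors (label, embedding) VALUES ($1, $2)", [
  "example",
  JSON.stringify(v),
]);

// Live query: the callback re-fires whenever the matched rows change.
await db.live.query("SELECT id, label FROM state_vectors ORDER BY id DESC LIMIT 5", [], (res) => {
  console.log("latest state vectors:", res.rows);
});
```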
The coherence optimization system:
- Processes a query through multiple specialized Claude instances
- Generates state vectors for each response
- Calculates coherence metrics between instances
- Selects the most coherent output
- Maintains a dynamic field state in the vector database
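A simplified, illustrative sketch of the selection step follows. The centrality-based scoring and `cosineSimilarity` helper here are stand-ins for the project's actual coherence metric, assuming each instance's response has already been embedded into a numeric state vector.

```typescript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Score each instance by its average similarity to every other instance,
// then return the response whose state vector is most "central" to the field.
function selectMostCoherent(
  responses: string[],
  vectors: number[][],
  threshold = 0.7
): string | null {
  if (vectors.length < 2) return responses[0] ?? null;
  const scores = vectors.map((v, i) =>
    vectors
      .filter((_, j) => j !== i)
      .reduce((sum, other) => sum + cosineSimilarity(v, other), 0) /
    (vectors.length - 1)
  );
  const best = scores.indexOf(Math.max(...scores));
  return scores[best] >= threshold ? responses[best] : null;
}
```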
The server uses VoyageAI's state-of-the-art embedding models for:
- High-quality state vector generation
- More accurate coherence calculations
- Better field modeling and optimization
When no VoyageAI API key is available, the server falls back to a simpler, deterministic embedding method.
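When a key is available, a single embedding call against VoyageAI's REST endpoint can be sketched as follows (error handling and batching omitted; consult VoyageAI's API documentation for the authoritative request shape):

```typescript
// Minimal sketch of generating one embedding via VoyageAI's REST API.
async function embed(text: string): Promise<number[]> {
  const res = await fetch("https://api.voyageai.com/v1/embeddings", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.VOYAGE_API_KEY}`,
    },
    body: JSON.stringify({ input: [text], model: "voyage-3-large" }),
  });
  const json = await res.json();
  return json.data[0].embedding;
}
```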
- `src/index.ts`: Main entry point
- `src/server.ts`: Core server implementation
- `.env`: Configuration file
- `package.json`: Dependencies and scripts
```bash
npm run build
```

This compiles the TypeScript sources to JavaScript in the `dist` directory.
```bash
npm test
```
MIT
This project uses the following technologies:
- Model Context Protocol (2025-03-26 specification)
- Anthropic Claude 3.7 Sonnet
- WebContainer
- PGLite with the pgvector extension
- VoyageAI embeddings (voyage-3-large)