Converse MCP Server


An MCP (Model Context Protocol) server that lets Claude talk to other AI models. Use it to chat with models from OpenAI, Google, Anthropic, X.AI, Mistral, DeepSeek, or OpenRouter. You can either talk to one model at a time or get multiple models to weigh in on complex decisions.

📋 Requirements

  • Node.js: Version 20 or higher
  • Package Manager: npm (or pnpm/yarn)
  • API Keys: At least one from any supported provider

🚀 Quick Start

Step 1: Get Your API Keys

You need at least one API key from these providers:

Provider        Where to Get                        Example Format
OpenAI          platform.openai.com/api-keys        sk-proj-...
Google/Gemini   makersuite.google.com/app/apikey    AIzaSy...
X.AI            console.x.ai                        xai-...
Anthropic       console.anthropic.com               sk-ant-...
Mistral         console.mistral.ai                  wfBMkWL0...
DeepSeek        platform.deepseek.com               sk-...
OpenRouter      openrouter.ai/keys                  sk-or-...

Step 2: Add to Claude Code or Claude Desktop

For Claude Code (Recommended)

# Add the server with your API keys
claude mcp add converse \
  -e OPENAI_API_KEY=your_key_here \
  -e GEMINI_API_KEY=your_key_here \
  -e XAI_API_KEY=your_key_here \
  -e ANTHROPIC_API_KEY=your_key_here \
  -e MISTRAL_API_KEY=your_key_here \
  -e DEEPSEEK_API_KEY=your_key_here \
  -e OPENROUTER_API_KEY=your_key_here \
  -s user \
  npx converse-mcp-server
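
You only need to pass keys for the providers you plan to use. For example, a minimal registration with a single provider might look like this (same command as above, with one key):

# Minimal setup: only the providers whose keys you pass are enabled
claude mcp add converse \
  -e GEMINI_API_KEY=your_key_here \
  -s user \
  npx converse-mcp-server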

For Claude Desktop

Add this configuration to your Claude Desktop settings:

{
  "mcpServers": {
    "converse": {
      "command": "npx",
      "args": ["converse-mcp-server"],
      "env": {
        "OPENAI_API_KEY": "your_key_here",
        "GEMINI_API_KEY": "your_key_here",
        "XAI_API_KEY": "your_key_here",
        "ANTHROPIC_API_KEY": "your_key_here",
        "MISTRAL_API_KEY": "your_key_here",
        "DEEPSEEK_API_KEY": "your_key_here",
        "OPENROUTER_API_KEY": "your_key_here"
      }
    }
  }
}

Windows Troubleshooting: If npx converse-mcp-server doesn't work on Windows, try:

{
  "command": "cmd",
  "args": ["/c", "npx", "converse-mcp-server"]
}

Step 3: Start Using Converse

Once installed, you can:

  • Chat with a specific model: Ask Claude to use the chat tool with your preferred model
  • Get consensus: Ask Claude to use the consensus tool when you need multiple perspectives
  • Get help: Type /converse:help in Claude
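
For example, once installed you might prompt Claude with something like:

"Use the chat tool to ask gemini-2.5-flash how I should structure this Express.js API."
"Use the consensus tool with gpt-5, gemini-2.5-pro, and grok to get multiple perspectives on an architecture decision."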

πŸ› οΈ Available Tools

1. Chat Tool

Talk to any AI model with support for files, images, and conversation history. The tool automatically routes your request to the right provider based on the model name.

// Example usage
{
  "prompt": "How should I structure the authentication module for this Express.js API?",
  "model": "gemini-2.5-flash",         // Routes to Google
  // "model": "anthropic/claude-3.5-sonnet", // Routes to OpenRouter (if enabled)
  // "model": "openrouter/auto",          // Auto-select best model
  "files": ["/path/to/src/auth.js", "/path/to/config.json"],
  "images": ["/path/to/architecture.png"],
  "temperature": 0.5,
  "reasoning_effort": "medium",
  "use_websearch": false
}
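
Under the hood the server routes by model name. A simplified, hypothetical sketch of that dispatch (the real logic lives in src/router.js and src/providers/index.js; the rules below are illustrative, not the actual implementation):

// Hypothetical name-based routing; actual rules live in src/router.js
function resolveProvider(model) {
  if (model.includes('/')) return 'openrouter';     // e.g. "anthropic/claude-3.5-sonnet"
  if (/^(gpt-|o\d)/.test(model)) return 'openai';   // gpt-5, o3, o4-mini, ...
  if (/^(gemini|flash|pro)/.test(model)) return 'google';
  if (model.startsWith('grok')) return 'xai';
  if (model.startsWith('claude')) return 'anthropic';
  if (/^(magistral|mistral)/.test(model)) return 'mistral';
  if (model.startsWith('deepseek')) return 'deepseek';
  throw new Error(`No provider for model: ${model}`);
}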

2. Consensus Tool

Get multiple AI models to analyze the same question simultaneously. Each model can see and respond to the others' answers, creating a rich discussion.

// Example usage
{
  "prompt": "Should we use microservices or monolith architecture for our e-commerce platform?",
  "models": [
    {"model": "gpt-5"},
    {"model": "gemini-2.5-flash"},
    {"model": "grok-4-0709"}
  ],
  "files": ["/path/to/requirements.md"],
  "enable_cross_feedback": true,
  "temperature": 0.2
}
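
Conceptually, with enable_cross_feedback the tool runs in two phases: every model first answers in parallel, then each model sees the others' answers and can refine its own. A rough sketch under those assumptions (ask is a hypothetical helper, not the actual implementation in src/tools/consensus.js):

// Rough sketch of the two-phase consensus flow; "ask" is hypothetical
async function consensus(prompt, models, enableCrossFeedback) {
  // Phase 1: all models answer the prompt in parallel
  const initial = await Promise.all(models.map((m) => ask(m, prompt)));
  if (!enableCrossFeedback) return initial;

  // Phase 2: each model reviews the others' answers and may revise its own
  return Promise.all(
    models.map((m, i) => {
      const others = initial.filter((_, j) => j !== i).join('\n---\n');
      return ask(m, `${prompt}\n\nOther models answered:\n${others}\n\nRefine your answer.`);
    })
  );
}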

📊 Supported Models

OpenAI Models

  • gpt-5: Latest flagship model (400K context, 128K output) - Superior reasoning, code generation, and analysis
  • gpt-5-mini: Faster, cost-efficient GPT-5 (400K context, 128K output) - Well-defined tasks, precise prompts
  • gpt-5-nano: Fastest, most cost-efficient GPT-5 (400K context, 128K output) - Summarization, classification
  • o3: Strong reasoning (200K context)
  • o3-mini: Fast O3 variant (200K context)
  • o3-pro: Professional-grade reasoning (200K context) - EXTREMELY EXPENSIVE
  • o3-deep-research: Deep research model (200K context) - 30-90 min runtime
  • o4-mini: Latest reasoning model (200K context)
  • o4-mini-deep-research: Fast deep research model (200K context) - 15-60 min runtime
  • gpt-4.1: Advanced reasoning (1M context)
  • gpt-4o: Multimodal flagship (128K context)
  • gpt-4o-mini: Fast multimodal (128K context)

Google/Gemini Models

API Key Options:

  • GEMINI_API_KEY: For the Gemini Developer API (recommended)
  • GOOGLE_API_KEY: Alternative name (GEMINI_API_KEY takes priority)
  • Vertex AI: Set GOOGLE_GENAI_USE_VERTEXAI=true with project/location settings (see the example after the model list)

Models:

  • gemini-2.5-flash (alias: flash): Ultra-fast (1M context)
  • gemini-2.5-pro (alias: pro): Deep reasoning (1M context)
  • gemini-2.0-flash: Latest with experimental thinking
  • gemini-2.0-flash-lite: Lightweight fast model, text-only
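
For the Vertex AI option, the settings might look like this (GOOGLE_GENAI_USE_VERTEXAI is documented above; the project and location variable names follow the Google Gen AI SDK convention and are assumptions here):

# Assumed Vertex AI settings in .env; verify variable names against your SDK version
GOOGLE_GENAI_USE_VERTEXAI=true
GOOGLE_CLOUD_PROJECT=your-gcp-project-id
GOOGLE_CLOUD_LOCATION=us-central1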

X.AI/Grok Models

  • grok-4-0709 (alias: grok): Latest advanced model (256K context)
  • grok-3: Previous generation (131K context)
  • grok-3-fast: Higher performance variant

Anthropic Models

  • claude-opus-4.1: Highest intelligence with extended thinking (200K context)
  • claude-sonnet-4: Balanced performance with extended thinking (200K context)
  • claude-3.7-sonnet: Enhanced 3.x generation with thinking (200K context)
  • claude-3.5-sonnet: Fast and intelligent (200K context)
  • claude-3.5-haiku: Fastest model for simple queries (200K context)

Mistral Models

  • magistral-medium: Frontier-class reasoning model (40K context)
  • magistral-small: Small reasoning model (40K context)
  • mistral-medium-3: Frontier-class multimodal model (128K context)

DeepSeek Models

  • deepseek-chat: Strong MoE model with 671B total / 37B active parameters (64K context)
  • deepseek-reasoner: Advanced reasoning model with CoT (64K context)

OpenRouter Models

  • qwen3-235b-thinking: Qwen3 with enhanced reasoning (32K context)
  • qwen3-coder: Specialized for programming tasks (32K context)
  • kimi-k2: Moonshot AI Kimi K2 with extended context (200K context)

📚 Help & Documentation

Built-in Help

Type these commands directly in Claude:

  • /converse:help - Full documentation
  • /converse:help tools - Tool-specific help
  • /converse:help models - Model information
  • /converse:help parameters - Configuration details
  • /converse:help examples - Usage examples

βš™οΈ Configuration

Environment Variables

Create a .env file in your project root:

# Required: At least one API key
OPENAI_API_KEY=sk-proj-your_openai_key_here
GEMINI_API_KEY=your_gemini_api_key_here  # Or GOOGLE_API_KEY (GEMINI_API_KEY takes priority)  
XAI_API_KEY=xai-your_xai_key_here
ANTHROPIC_API_KEY=sk-ant-your_anthropic_key_here
MISTRAL_API_KEY=your_mistral_key_here
DEEPSEEK_API_KEY=your_deepseek_key_here
OPENROUTER_API_KEY=sk-or-your_openrouter_key_here

# Optional: Server configuration
PORT=3157
LOG_LEVEL=info

# Optional: OpenRouter configuration
OPENROUTER_REFERER=https://github.com/FallDownTheSystem/converse
OPENROUTER_TITLE=Converse
OPENROUTER_DYNAMIC_MODELS=true

Configuration Options

Server Environment Variables (.env file)

Variable    Description     Default   Example
PORT        Server port     3157      3157
LOG_LEVEL   Logging level   info      debug, info, error

Claude Code Environment Variables (System/Global)

These must be set in your system environment or when launching Claude Code, NOT in the project .env file:

Variable                Description                   Default   Example
MAX_MCP_OUTPUT_TOKENS   Token response limit          25000     200000
MCP_TOOL_TIMEOUT        Tool execution timeout (ms)   120000    5400000 (90 min for deep research)

# Example: Set globally before starting Claude Code
export MAX_MCP_OUTPUT_TOKENS=200000
export MCP_TOOL_TIMEOUT=5400000  # 90 minutes for deep research models
claude  # Then start Claude Code

Model Selection

Use "auto" for automatic model selection, or specify exact models:

// Auto-selection (recommended)
{ "model": "auto" }

// Specific models
{ "model": "gemini-2.5-flash" }
{ "model": "gpt-5" }
{ "model": "grok-4-0709" }

// Using aliases
{ "model": "flash" }    // -> gemini-2.5-flash
{ "model": "pro" }      // -> gemini-2.5-pro
{ "model": "grok" }     // -> grok-4-0709

Auto Model Behavior:

  • Chat Tool: Selects the first available provider and uses its default model
  • Consensus Tool: When using [{"model": "auto"}], automatically expands to the first 3 available providers

Provider priority order (requires corresponding API key):

  1. OpenAI (gpt-5)
  2. Google (gemini-2.5-pro)
  3. XAI (grok-4-0709)
  4. Anthropic (claude-sonnet-4-20250514)
  5. Mistral (magistral-medium-2506)
  6. DeepSeek (deepseek-reasoner)
  7. OpenRouter (qwen/qwen3-coder)

The system will use the first 3 providers that have valid API keys configured. This enables automatic multi-model consensus without manually specifying models.
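
A simplified sketch of how "auto" resolves against this priority list (illustrative only, not the actual implementation):

// Illustrative "auto" resolution against the priority order above
const PRIORITY = [
  { model: 'gpt-5',                    key: 'OPENAI_API_KEY' },
  { model: 'gemini-2.5-pro',           key: 'GEMINI_API_KEY' },
  { model: 'grok-4-0709',              key: 'XAI_API_KEY' },
  { model: 'claude-sonnet-4-20250514', key: 'ANTHROPIC_API_KEY' },
  { model: 'magistral-medium-2506',    key: 'MISTRAL_API_KEY' },
  { model: 'deepseek-reasoner',        key: 'DEEPSEEK_API_KEY' },
  { model: 'qwen/qwen3-coder',         key: 'OPENROUTER_API_KEY' },
];

const available = PRIORITY.filter((p) => process.env[p.key]);

// Chat tool: "auto" -> default model of the first available provider
const chatModel = available[0].model;

// Consensus tool: [{"model": "auto"}] -> first three available providers
const consensusModels = available.slice(0, 3).map((p) => ({ model: p.model }));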

Advanced Configuration

Manual Installation Options

Option A: Direct Node.js execution

If you've cloned the repository locally:

{
  "mcpServers": {
    "converse": {
      "command": "node",
      "args": [
        "C:\\Users\\YourUsername\\Documents\\Projects\\converse\\src\\index.js"
      ],
      "env": {
        "OPENAI_API_KEY": "your_key_here",
        "GEMINI_API_KEY": "your_key_here",
        "XAI_API_KEY": "your_key_here",
        "ANTHROPIC_API_KEY": "your_key_here",
        "MISTRAL_API_KEY": "your_key_here",
        "DEEPSEEK_API_KEY": "your_key_here",
        "OPENROUTER_API_KEY": "your_key_here"
      }
    }
  }
}

Option B: Local HTTP Development (Advanced)

For local development with HTTP transport (optional, for debugging):

  1. First, start the server manually with HTTP transport:

    # In a terminal, navigate to the project directory
    cd converse
    MCP_TRANSPORT=http npm run dev  # Starts server on http://localhost:3157/mcp
  2. Then configure Claude to connect to it:

    {
      "mcpServers": {
        "converse-local": {
          "url": "http://localhost:3157/mcp"
        }
      }
    }

Important: HTTP transport requires the server to be running before Claude can connect to it. Keep the terminal with the server open while using Claude.
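
To confirm the endpoint is up, you can probe it with a minimal JSON-RPC initialize request (a sketch; exact header and protocol-version requirements depend on the transport implementation):

# Hedged example: probe the local MCP HTTP endpoint
curl -X POST http://localhost:3157/mcp \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json, text/event-stream' \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"curl","version":"0.0.0"}}}'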

Configuration File Locations

The Claude configuration file is typically located at:

  • Windows: %APPDATA%\Claude\claude_desktop_config.json
  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Linux: ~/.config/Claude/claude_desktop_config.json

For more detailed instructions, see the official MCP configuration guide.

💻 Running Standalone (Without Claude)

You can run the server directly without Claude for testing or development:

# Quick run (no installation needed)
npx converse-mcp-server

# Alternative package managers
pnpm dlx converse-mcp-server
yarn dlx converse-mcp-server

For development setup, see the Development section below.

πŸ› Troubleshooting

Common Issues

Server won't start:

  • Check Node.js version: node --version (needs v20+)
  • Try a different port: PORT=3001 npm start

API key errors:

  • Verify your .env file has the correct format
  • Test with: npm run test:real-api

Module import errors:

  • Clear cache and reinstall: npm run clean

Debug Mode

# Enable debug logging
LOG_LEVEL=debug npm run dev

# Start with debugger
npm run debug

# Trace all operations
LOG_LEVEL=trace npm run dev

🔧 Development

Getting Started

# Clone the repository
git clone https://github.com/FallDownTheSystem/converse.git
cd converse
npm install

# Copy environment file and add your API keys
cp .env.example .env

# Start development server
npm run dev

Available Scripts

# Server management
npm start              # Start server (auto-kills existing server on port 3157)
npm run start:clean    # Start server without killing existing processes
npm run start:port     # Start server on port 3001 (avoids port conflicts)
npm run dev            # Development with hot reload (auto-kills existing server)
npm run dev:clean      # Development without killing existing processes
npm run dev:port       # Development on port 3001 (avoids port conflicts)
npm run dev:quiet      # Development with minimal logging
npm run kill-server    # Kill any server running on port 3157

# Testing
npm test               # Run all tests
npm run test:unit      # Unit tests only
npm run test:integration # Integration tests
npm run test:e2e       # End-to-end tests (requires API keys)

# Integration test subcategories
npm run test:integration:mcp        # MCP protocol tests
npm run test:integration:tools      # Tool integration tests
npm run test:integration:providers  # Provider integration tests
npm run test:integration:performance # Performance tests
npm run test:integration:general    # General integration tests

# Other test categories
npm run test:mcp-client # MCP client tests (HTTP-based)
npm run test:providers # Provider unit tests
npm run test:tools     # Tool tests
npm run test:coverage  # Coverage report
npm run test:watch     # Run tests in watch mode

# Code quality
npm run lint           # Check code style
npm run lint:fix       # Fix code style issues
npm run format         # Format code with Prettier
npm run validate       # Full validation (lint + test)

# Utilities
npm run build          # Build for production
npm run debug          # Start with debugger
npm run check-deps     # Check for outdated dependencies

Development Notes

Port conflicts: The server uses port 3157 by default. If you get an "EADDRINUSE" error:

  • Run npm run kill-server to free the port
  • Or use a different port: PORT=3001 npm start

Transport Modes:

  • Stdio (default): Works automatically with Claude
  • HTTP: Better for debugging, requires manual start (MCP_TRANSPORT=http npm run dev)

Testing with Real APIs

After setting up your API keys in .env:

# Run end-to-end tests
npm run test:e2e

# Test specific providers
npm run test:integration:providers

# Full validation
npm run validate

Validation Steps

After installation, run these tests to verify everything works:

npm start           # Should show startup message
npm test            # Should pass all unit tests
npm run validate    # Full validation suite

Project Structure

converse/
β”œβ”€β”€ src/
β”‚   β”œβ”€β”€ index.js              # Main server entry point
β”‚   β”œβ”€β”€ config.js             # Configuration management
β”‚   β”œβ”€β”€ router.js             # Central request dispatcher
β”‚   β”œβ”€β”€ continuationStore.js  # State management
β”‚   β”œβ”€β”€ systemPrompts.js      # Tool system prompts
β”‚   β”œβ”€β”€ providers/            # AI provider implementations
β”‚   β”‚   β”œβ”€β”€ index.js          # Provider registry
β”‚   β”‚   β”œβ”€β”€ interface.js      # Unified provider interface
β”‚   β”‚   β”œβ”€β”€ openai.js         # OpenAI provider
β”‚   β”‚   β”œβ”€β”€ xai.js            # XAI provider
β”‚   β”‚   β”œβ”€β”€ google.js         # Google provider
β”‚   β”‚   β”œβ”€β”€ anthropic.js      # Anthropic provider
β”‚   β”‚   β”œβ”€β”€ mistral.js        # Mistral AI provider
β”‚   β”‚   β”œβ”€β”€ deepseek.js       # DeepSeek provider
β”‚   β”‚   β”œβ”€β”€ openrouter.js     # OpenRouter provider
β”‚   β”‚   └── openai-compatible.js # Base for OpenAI-compatible APIs
β”‚   β”œβ”€β”€ tools/                # MCP tool implementations
β”‚   β”‚   β”œβ”€β”€ index.js          # Tool registry
β”‚   β”‚   β”œβ”€β”€ chat.js           # Chat tool
β”‚   β”‚   └── consensus.js      # Consensus tool
β”‚   └── utils/                # Utility modules
β”‚       β”œβ”€β”€ contextProcessor.js # File/image processing
β”‚       β”œβ”€β”€ errorHandler.js   # Error handling
β”‚       └── logger.js         # Logging utilities
β”œβ”€β”€ tests/                    # Comprehensive test suite
β”œβ”€β”€ docs/                     # API and architecture docs
└── package.json              # Dependencies and scripts
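
The providers all implement the unified contract defined in src/providers/interface.js. A hypothetical sketch of its shape (names are illustrative, not the actual exports):

// Hypothetical shape of the unified provider interface; actual names may differ
export class Provider {
  constructor({ apiKey }) {
    this.apiKey = apiKey;
  }

  // Models this provider can serve; used for name-based dispatch
  supportedModels() {
    throw new Error('not implemented');
  }

  // Send a chat request and return the reply text
  async chat({ model, prompt, files = [], images = [], temperature }) {
    throw new Error('not implemented');
  }
}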

📦 Publishing to NPM

Note: This section is for maintainers. The package is already published as converse-mcp-server.

Quick Publishing Checklist

# 1. Ensure clean working directory
git status

# 2. Run full validation
npm run validate

# 3. Test package contents
npm pack --dry-run

# 4. Test bin script
node bin/converse.js --help

# 5. Bump version (choose one)
npm version patch    # Bug fixes: 1.0.1 → 1.0.2
npm version minor    # New features: 1.0.1 → 1.1.0
npm version major    # Breaking changes: 1.0.1 → 2.0.0

# 6. Test publish (dry run)
npm publish --dry-run

# 7. Publish to npm
npm publish

# 8. Verify publication
npm view converse-mcp-server
npx converse-mcp-server --help

Version Guidelines

  • Patch (npm version patch): Bug fixes, documentation updates, minor improvements
  • Minor (npm version minor): New features, new model support, new tool capabilities
  • Major (npm version major): Breaking API changes, major architecture changes

Post-Publication

After publishing, update installation instructions if needed and verify:

# Test direct execution
npx converse-mcp-server
npx converse

# Test MCP client integration
# Update Claude Desktop config to use: "npx converse-mcp-server"

Troubleshooting Publication

  • Git not clean: Commit all changes first
  • Tests failing: Fix issues before publishing
  • Version conflicts: Check existing versions with npm view converse-mcp-server versions
  • Permission issues: Ensure you're logged in with npm whoami

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Make your changes
  4. Run tests: npm run validate
  5. Commit changes: git commit -m 'Add amazing feature'
  6. Push to branch: git push origin feature/amazing-feature
  7. Open a Pull Request

Development Setup

# Fork and clone your fork
git clone https://github.com/yourusername/converse.git
cd converse

# Install dependencies
npm install

# Create feature branch
git checkout -b feature/your-feature

# Make changes and test
npm run validate

# Commit and push
git add .
git commit -m "Description of changes"
git push origin feature/your-feature

πŸ™ Acknowledgments

This MCP Server was inspired by and builds upon the excellent work from BeehiveInnovations/zen-mcp-server.

📄 License

MIT License - see LICENSE file for details.
