MultiSynq MCP Server is a customized MCP (Model Context Protocol) server that provides seamless access to MultiSynq documentation and capabilities. Built on MetaMCP, it extends the platform with pre-configured MultiSynq integration, allowing AI tools to understand and work with MultiSynq's activity-based architecture.
- MultiSynq Documentation Access: Pre-configured Context7 integration for instant access to MultiSynq docs
- Zero Configuration: Works out of the box with public endpoints at `/sse`, `/mcp`, and `/api` (see the quick check after this list)
- Flexible Authentication: Supports both public access and API key authentication
- Developer Friendly: Built-in MCP Inspector for testing and debugging
- Production Ready: Includes rate limiting, security headers, and health checks
- Railway Deployable: One-click deployment to Railway with PostgreSQL
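As a quick sanity check of the public endpoints, you can hit them with curl. This is a minimal sketch assuming the production host `https://mcp.multisynq.io` referenced later in this README and the `/api/health` route from the local setup section:

```bash
# Health check (no authentication required)
curl https://mcp.multisynq.io/api/health

# The SSE endpoint holds a stream open; print only the response headers
curl -s -D - -o /dev/null --max-time 5 https://mcp.multisynq.io/sse
```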
Note: This is a customized fork of MetaMCP specifically tailored for MultiSynq's needs.
- Use Cases
- Concepts
- Quick Start
- MCP Protocol Compatibility
- Connect to MetaMCP
- Cold Start Problem and Custom Dockerfile
- Authentication
- OpenID Connect (OIDC) Provider Support
- Custom Deployment and SSE conf for Nginx
- Architecture
- Roadmap
- i18n
- Contributing
- License
- Credits
The MultiSynq MCP Server enables AI tools like Claude, Cursor, and Cline to understand and work with MultiSynq's activity-based architecture. It provides:
- Instant Documentation Access: AI tools can search and retrieve MultiSynq documentation
- Activity Patterns: Understand how to implement activities, timelines, and sync
- Architecture Guidance: Get best practices for building with MultiSynq
- Code Examples: Access real-world examples and implementation patterns
```mermaid
graph LR
    A[AI Tool] -->|MCP Protocol| B[MultiSynq MCP Server]
    B -->|Context7| C[MultiSynq Docs]
    B -->|Returns| D[Relevant Information]
    D --> A
```
An MCP server configuration that tells MetaMCP how to start an MCP server.
"HackerNews": {
"type": "STDIO",
"command": "uvx",
"args": ["mcp-hn"]
}
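The pre-configured MultiSynq documentation server in this fork is Context7-based. The exact pre-installed entry ships in the Docker image; the snippet below is only a hedged sketch of what such a STDIO configuration might look like, assuming Context7's published `@upstash/context7-mcp` npm package:

```json
"MultiSynq-Docs": {
  "type": "STDIO",
  "command": "npx",
  "args": ["-y", "@upstash/context7-mcp"]
}
```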
- Group one or more MCP servers into a namespace
- Enable/disable MCP servers at the server or individual tool level
- Apply middlewares to MCP requests and responses
- Create endpoints and assign namespace to endpoints
- Multiple MCP servers in the namespace will be aggregated and emitted as a MetaMCP endpoint
- Choose auth level and strategy
- Host endpoints through SSE or Streamable HTTP transports (MCP), or as OpenAPI endpoints for clients like Open WebUI
- Intercepts and transforms MCP requests and responses at the namespace level
- Built-in example: "Filter inactive tools" - optimizes tool context for LLMs (a conceptual sketch follows this list)
- Future ideas: tool logging, error traces, validation, scanning
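To make the middleware idea concrete, here is a conceptual TypeScript sketch of what a "filter inactive tools" transformation does. It is not MetaMCP's actual middleware interface, just the underlying logic: the aggregated tool list is pruned before it reaches the client.

```typescript
// Conceptual sketch only - this is not MetaMCP's actual middleware API.
// The idea: intercept the aggregated tools/list response for a namespace
// and drop tools that have been disabled, so the LLM sees a smaller,
// more relevant tool context.
type Tool = { name: string; description?: string };

function filterInactiveTools(
  tools: Tool[],
  inactiveToolNames: Set<string>, // hypothetical per-namespace disable list
): Tool[] {
  return tools.filter((tool) => !inactiveToolNames.has(tool.name));
}

// Example: "legacy_tool" is disabled in the namespace, so it is removed
// before the response is forwarded to the client.
const visibleTools = filterInactiveTools(
  [{ name: "search_docs" }, { name: "legacy_tool" }],
  new Set(["legacy_tool"]),
);
console.log(visibleTools.map((t) => t.name)); // ["search_docs"]
```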
Similar to the official MCP inspector, but with saved server configs - MetaMCP automatically creates configurations so you can debug MetaMCP endpoints immediately.
For local development with MultiSynq integration:
```bash
# 1. Clone the repository
git clone https://github.com/multisynq/multimcp.git
cd multimcp

# 2. Install dependencies
pnpm install

# 3. Set up local PostgreSQL database
# Option A: Install PostgreSQL locally
sudo apt install postgresql   # Ubuntu/Debian
brew install postgresql       # macOS

# Option B: Use Docker for PostgreSQL only
docker run -d --name metamcp-postgres \
  -e POSTGRES_USER=metamcp_user \
  -e POSTGRES_PASSWORD=m3t4mcp \
  -e POSTGRES_DB=metamcp_db \
  -p 5432:5432 \
  postgres:16-alpine

# 4. Set up environment variables
cat > .env.local << EOF
DATABASE_URL=postgresql://metamcp_user:m3t4mcp@localhost:5432/metamcp_db
BETTER_AUTH_SECRET=dev-secret-key-at-least-32-chars
APP_URL=http://localhost:12008
NEXT_PUBLIC_APP_URL=http://localhost:12008
NODE_ENV=development
EOF

# 5. Build packages and initialize database
pnpm build
pnpm db:push:dev

# 6. Start development servers
pnpm dev:backend   # Terminal 1
pnpm dev:frontend  # Terminal 2

# 7. Test MultiSynq integration
curl http://localhost:12008/api/health
# Open http://localhost:12008/mcp-inspector
```
For detailed instructions, see LOCAL_TESTING_GUIDE.md
Clone the repo, prepare `.env`, and start with docker compose:
```bash
git clone https://github.com/multisynq/multimcp.git
cd multimcp
cp example.env .env
# Edit .env with your configuration
docker compose up -d
```
If you modify the APP_URL environment variables, make sure you access the app only from that APP_URL: MetaMCP enforces a CORS policy on that URL, so requests from any other origin are rejected.
Note that the Postgres volume name is global and may collide with your other Postgres containers; consider renaming it in `docker-compose.yml`:
```yaml
volumes:
  metamcp_postgres_data:
    driver: local
```
We still recommend running Postgres through Docker for an easy setup:
```bash
pnpm install
pnpm dev
```
- Tools, Resources, and Prompts supported
- OAuth-enabled MCP servers tested against the 2025-03-26 spec version
If you have questions, feel free to leave GitHub issues or PRs.
```json
{
  "mcpServers": {
    "multisynq": {
      "url": "http://localhost:12008/sse"
    }
  }
}
```
Or for production:
```json
{
  "mcpServers": {
    "multisynq": {
      "url": "https://mcp.multisynq.io/sse"
    }
  }
}
```
Since the MultiSynq MCP Server uses SSE (Server-Sent Events), Claude Desktop needs the MCP SSE client:
```json
{
  "mcpServers": {
    "multisynq": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-sse", "http://localhost:12008/sse"]
    }
  }
}
```
Or for production:
```json
{
  "mcpServers": {
    "multisynq": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-sse", "https://mcp.multisynq.io/sse"]
    }
  }
}
```
Add to your Cline settings:
```json
{
  "multisynq": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-sse", "http://localhost:12008/sse"]
  }
}
```
The MultiSynq MCP Server's public endpoints (`/sse`, `/mcp`, `/api`) are configured for public access by default - no authentication required! This makes it easy to get started.
For production deployments, you can enable API key authentication through the dashboard.
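For example, you can exercise the Streamable HTTP endpoint directly with curl. This is a minimal sketch of a standard MCP `initialize` request against the production `/mcp` endpoint, assuming the 2025-03-26 protocol revision noted earlier:

```bash
curl -s https://mcp.multisynq.io/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
      "protocolVersion": "2025-03-26",
      "capabilities": {},
      "clientInfo": { "name": "curl-smoke-test", "version": "0.0.1" }
    }
  }'
```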
The MultiSynq MCP Server is optimized for production deployment on Railway:
- Pre-configured: Context7 MCP server is pre-installed in the Docker image
- Fast startup: Idle sessions pre-allocated for instant response
- Health checks: Built-in health endpoints for monitoring
- Auto-scaling: Works seamlessly with Railway's scaling features
See RAILWAY_DEPLOYMENT.md for detailed deployment instructions.
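If you prefer the Railway CLI to the dashboard, the flow is roughly the sketch below; these are standard Railway CLI commands, while the project-specific service and environment setup is documented in RAILWAY_DEPLOYMENT.md.

```bash
railway login   # authenticate the CLI
railway link    # link this repo to your Railway project
railway up      # build and deploy from the repo's Dockerfile
```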
- Better Auth for frontend & backend (tRPC procedures)
- Session cookies enforce secure internal MCP proxy connections
- API key authentication for external access via the `Authorization: Bearer <api-key>` header (see the curl example after this list)
- Multi-tenancy: Designed for organizations to deploy on their own machines. Supports both private and public access scopes. Users can create MCPs, namespaces, endpoints, and API keys for themselves or for everyone. Public API keys cannot access private MetaMCPs.
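When API key authentication is enabled, external clients pass the key in the `Authorization` header. A minimal sketch with a placeholder key, using a `tools/list` request against the `/mcp` endpoint (depending on server settings, an `initialize` handshake as shown earlier may be required first):

```bash
curl -s https://mcp.multisynq.io/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -H "Authorization: Bearer <api-key>" \
  -d '{ "jsonrpc": "2.0", "id": 2, "method": "tools/list" }'
```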
MetaMCP supports OpenID Connect authentication for enterprise SSO integration. This allows organizations to use their existing identity providers (Auth0, Keycloak, Azure AD, etc.) for authentication.
Add the following environment variables to your `.env` file:
```env
# Required
OIDC_CLIENT_ID=your-oidc-client-id
OIDC_CLIENT_SECRET=your-oidc-client-secret
OIDC_DISCOVERY_URL=https://your-provider.com/.well-known/openid-configuration

# Optional customization
OIDC_PROVIDER_ID=oidc
OIDC_SCOPES=openid email profile
OIDC_PKCE=true
```
MetaMCP has been tested with popular OIDC providers:
- Auth0: `https://your-domain.auth0.com/.well-known/openid-configuration`
- Keycloak: `https://your-keycloak.com/realms/your-realm/.well-known/openid-configuration`
- Azure AD: `https://login.microsoftonline.com/your-tenant-id/v2.0/.well-known/openid-configuration`
- Google: `https://accounts.google.com/.well-known/openid-configuration`
- Okta: `https://your-domain.okta.com/.well-known/openid-configuration`
- PKCE (Proof Key for Code Exchange) enabled by default
- Authorization Code Flow with automatic user creation
- Auto-discovery of OIDC endpoints
- Seamless session management with the existing auth system
Once configured, users will see a "Sign in with OIDC" button on the login page alongside the email/password form. The authentication flow automatically creates new users on first login.
For more detailed configuration examples and troubleshooting, see CONTRIBUTING.md.
If you want to deploy it to an online service or a VPS, an instance with at least 2-4 GB of memory is required; the larger the instance, the better the performance.
Since MCP relies on SSE for long-lived connections, if you are using a reverse proxy like Nginx, please refer to the example setup in `nginx.conf.example`. The key points are sketched below.
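The essentials for SSE behind Nginx are disabling response buffering and allowing long-lived upstream connections. A hedged sketch (treat `nginx.conf.example` in the repo as authoritative; port 12008 matches the local setup above):

```nginx
location / {
    proxy_pass http://127.0.0.1:12008;
    proxy_http_version 1.1;

    # Stream SSE events to the client immediately
    proxy_buffering off;
    proxy_cache off;
    proxy_set_header Connection '';
    chunked_transfer_encoding off;

    # Keep long-lived SSE connections open
    proxy_read_timeout 86400s;
    proxy_send_timeout 86400s;

    # Forward original host and client information
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```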
The MultiSynq MCP Server is built with:
- Frontend: Next.js with MCP Inspector for testing
- Backend: Express.js with tRPC and MultiSynq integration
- Database: PostgreSQL for configuration and state
- MCP Integration: Context7 for MultiSynq documentation access
- Deployment: Docker + Railway for production
```mermaid
sequenceDiagram
    participant AI as AI Tool (Claude/Cursor)
    participant MCP as MultiSynq MCP Server
    participant C7 as Context7
    participant Docs as MultiSynq Docs

    AI ->> MCP: search("how to create activity")
    MCP ->> C7: Query MultiSynq documentation
    C7 ->> Docs: Retrieve relevant sections
    Docs ->> C7: Return documentation
    C7 ->> MCP: Formatted results
    MCP ->> AI: Activity creation guide + examples
```
Current Status: Production Ready with MultiSynq Integration
Completed Features:
- MultiSynq documentation access via Context7
- Public endpoints at `/sse`, `/mcp`, and `/api`
- Rate limiting and security headers
- Railway deployment configuration
- Comprehensive test coverage
- MCP Inspector integration
Future Enhancements:
- Enhanced search capabilities for MultiSynq patterns
- Additional MultiSynq-specific tools
- Integration with MultiSynq playground
- Usage analytics and insights
We welcome contributions! This is an open-source project maintained by MultiSynq.
- Report Issues: GitHub Issues
- Submit PRs: GitHub Pull Requests
- Documentation: Improvements to docs always welcome!
MIT - See LICENSE file for details.
This project is built on top of the excellent MetaMCP platform. Special thanks to:
- The MetaMCP team for creating the foundational MCP aggregation platform
- The Model Context Protocol team for the MCP specification
- Context7 for enabling seamless documentation access
Built with ❤️ by MultiSynq