The plugged.in App is a comprehensive web application for managing Model Context Protocol (MCP) servers with integrated RAG (Retrieval-Augmented Generation) capabilities. It works in conjunction with the plugged.in MCP Proxy to provide a unified interface for discovering, configuring, and utilizing AI tools across multiple MCP servers while leveraging your own documents as context.
This application enables seamless integration with any MCP client (Claude, Cline, Cursor, etc.) while providing advanced management capabilities, document-based knowledge augmentation, and real-time notifications through an intuitive web interface.
⭐ If you find this project useful, please consider giving it a star on GitHub! It helps us reach more developers and motivates us to keep improving.
- MCP Registry Integration: Modified fork of the official MCP Registry - users can now claim MCP servers with GitHub credentials
- Completely Rewritten Discovery Process: Enhanced server detection and management with improved performance and reliability
- Full Streamable HTTP Support: Complete implementation of Streamable HTTP transport protocol
- OAuth for MCP Servers: OAuth authentication is handled by plugged.in with state-of-the-art encryption, so no client-side authentication is needed
- Trending Servers with Analytics: Every MCP tool call via pluggedin-mcp is tracked and displayed in trending servers
- Bidirectional Notifications: MCP proxy can now send, receive, mark as read, and delete notifications
- Smart Server Wizard: Multi-step wizard with GitHub verification, environment detection, and registry submission
- Enhanced Security: Comprehensive input validation with Zod schemas and XSS/SSRF protection
- Multi-Workspace Support: Switch between different sets of MCP configurations to prevent context pollution
- Interactive Playground: Test and experiment with your MCP tools directly in the browser
- Tool Management: Discover, organize, and manage AI tools from multiple sources
- Resource & Template Discovery: View available resources and resource templates for connected MCP servers
- Custom Instructions: Add server-specific instructions that can be used as MCP prompts
- Prompt Management: Discover and manage prompts from connected MCP servers
- End-to-End Encryption: All sensitive MCP server configuration data (commands, arguments, environment variables, URLs) is now encrypted at rest using AES-256-GCM
- Per-Profile Encryption: Each profile has its own derived encryption key, ensuring complete isolation between workspaces
- Secure Server Sharing: Shared servers use sanitized templates that don't expose sensitive credentials
- Transparent Operation: Encryption and decryption happen automatically without affecting the user experience
- AI-Generated Documents: MCP servers can create and manage documents in your library with full attribution
- Document Preview Modal: View PDFs, images, and text files directly in the browser with zoom controls
- Enhanced Document Viewer: Navigate between documents, fullscreen mode, and metadata display
- Multi-Format Support: Native rendering for PDFs, images, markdown, and various code file formats
- Model Attribution Tracking: Complete history of which AI models created or updated each document
- Advanced Document Search: Semantic search with filtering by AI model, date, tags, and source type
- Document Versioning: Track changes and maintain version history for AI-generated content
- Multi-Source Support: Documents from uploads, AI generation, or API integrations
- Document Library with RAG: Upload and manage documents that serve as knowledge context for AI interactions
- Real-Time Notifications: Get instant notifications for MCP activities with optional email delivery
- Progressive Server Initialization: Faster startup with resilient server connections
- Enhanced Security: Industry-standard sanitization and secure environment variable handling
- Improved UI/UX: Redesigned playground, better responsive design, and theme customization
- Server Notes: Add custom notes to each configured MCP server
- Extensive Logging: Detailed logging capabilities for MCP interactions in the Playground
- Expanded Discovery: Search for MCP servers across GitHub, Smithery, and npmjs.com
- Email Verification: Secure account registration with email verification
- Self-Hostable: Run your own instance with full control over your data
The easiest way to get started with the plugged.in App is using Docker Compose:
# Clone the repository
git clone https://github.com/VeriTeknik/pluggedin-app.git
cd pluggedin-app
# Set up environment variables
cp .env.example .env
# Edit .env with your specific configuration
# Start the application with Docker Compose
docker compose up --build -d
Then open http://localhost:12005 in your browser to access the plugged.in App.
For existing installations upgrading to v2.2.0:
# Quick upgrade for Docker users
docker pull ghcr.io/veriteknik/pluggedin-app:v2.2.0
docker-compose down && docker-compose up -d
# The encryption will be applied automatically to existing servers
Note: Ensure you have the NEXT_SERVER_ACTIONS_ENCRYPTION_KEY environment variable set. If not present, generate one:
pnpm generate-encryption-key
- The plugged.in App running (either self-hosted or at https://plugged.in)
- An API key from the plugged.in App (available in the API Keys page)
- The plugged.in MCP Proxy installed
{
"mcpServers": {
"pluggedin": {
"command": "npx",
"args": ["-y", "@pluggedin/pluggedin-mcp-proxy@latest"],
"env": {
"PLUGGEDIN_API_KEY": "YOUR_API_KEY",
"PLUGGEDIN_API_BASE_URL": "http://localhost:12005" // For self-hosted instances
}
}
}
}
For Cursor, you can use command-line arguments:
npx -y @pluggedin/pluggedin-mcp-proxy@latest --pluggedin-api-key YOUR_API_KEY --pluggedin-api-base-url http://localhost:12005
The plugged.in ecosystem consists of integrated components working together to provide a comprehensive MCP management solution with RAG capabilities:
sequenceDiagram
participant MCPClient as MCP Client (e.g., Claude Desktop)
participant PluggedinMCP as plugged.in MCP Proxy
participant PluggedinApp as plugged.in App
participant MCPServers as Installed MCP Servers
MCPClient ->> PluggedinMCP: Request list tools/resources/prompts
PluggedinMCP ->> PluggedinApp: Get capabilities via API
PluggedinApp ->> PluggedinMCP: Return capabilities (prefixed)
MCPClient ->> PluggedinMCP: Call tool/read resource/get prompt
alt Standard capability
PluggedinMCP ->> PluggedinApp: Resolve capability to server
PluggedinApp ->> PluggedinMCP: Return server details
PluggedinMCP ->> MCPServers: Forward request to target server
MCPServers ->> PluggedinMCP: Return response
else Custom instruction
PluggedinMCP ->> PluggedinApp: Get custom instruction
PluggedinApp ->> PluggedinMCP: Return formatted messages
end
PluggedinMCP ->> MCPClient: Return response
alt Discovery tool
MCPClient ->> PluggedinMCP: Call pluggedin_discover_tools
PluggedinMCP ->> PluggedinApp: Trigger discovery action
PluggedinApp ->> MCPServers: Connect and discover capabilities
MCPServers ->> PluggedinApp: Return capabilities
PluggedinApp ->> PluggedinMCP: Confirm discovery complete
PluggedinMCP ->> MCPClient: Return discovery result
end
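To make the "prefixed" capabilities in the diagram concrete, here is a minimal, hypothetical sketch of how a proxy can map a prefixed tool name back to its source server. The `github__create_issue` naming and the double-underscore separator are assumptions for illustration; the actual plugged.in MCP Proxy scheme may differ.

```typescript
// Hypothetical sketch of prefix-based routing as shown in the diagram above.
// The prefix format is an assumption, not the proxy's documented behavior.
interface DownstreamServer {
  name: string;
  callTool(name: string, args: unknown): Promise<unknown>;
}

const servers = new Map<string, DownstreamServer>();

// e.g. "github__create_issue" -> server "github", tool "create_issue"
function routeToolCall(prefixedName: string, args: unknown) {
  const separatorIndex = prefixedName.indexOf("__");
  if (separatorIndex === -1) {
    throw new Error(`Unrecognized tool name: ${prefixedName}`);
  }
  const serverName = prefixedName.slice(0, separatorIndex);
  const toolName = prefixedName.slice(separatorIndex + 2);

  const server = servers.get(serverName);
  if (!server) {
    throw new Error(`No server registered for prefix: ${serverName}`);
  }
  return server.callTool(toolName, args);
}
```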
The plugged.in App supports various configuration options through environment variables:
# Core Configuration
DATABASE_URL=postgresql://user:password@localhost:5432/pluggedin
NEXTAUTH_URL=http://localhost:12005
NEXTAUTH_SECRET=your-secret-key
# Feature Flags (New in v2.1.0)
ENABLE_RAG=true # Enable RAG features
ENABLE_NOTIFICATIONS=true # Enable notification system
ENABLE_EMAIL_VERIFICATION=true # Enable email verification
# Email Configuration (for notifications)
EMAIL_SERVER_HOST=smtp.example.com
EMAIL_SERVER_PORT=587
EMAIL_SERVER_USER=your-email@example.com
EMAIL_SERVER_PASSWORD=your-password
EMAIL_FROM=noreply@example.com
# RAG Configuration (optional)
RAG_API_URL=http://localhost:8000 # Your RAG service endpoint
RAG_CHUNK_SIZE=1000 # Document chunk size
RAG_CHUNK_OVERLAP=200 # Chunk overlap for context
# MCP Server Sandboxing (Linux)
FIREJAIL_USER_HOME=/home/pluggedin
FIREJAIL_LOCAL_BIN=/home/pluggedin/.local/bin
FIREJAIL_APP_PATH=/home/pluggedin/pluggedin-app
FIREJAIL_MCP_WORKSPACE=/home/pluggedin/mcp-workspace
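The application validates inputs with Zod schemas (see the security features below). As an illustration, the feature flags above could be validated at startup along these lines; the schema shape and defaults here are assumptions, not the app's actual configuration code.

```typescript
import { z } from "zod";

// Illustrative sketch: validating the environment variables above with Zod.
// The app's real schema, defaults, and variable set may differ.
const envSchema = z.object({
  DATABASE_URL: z.string().url(),
  NEXTAUTH_URL: z.string().url(),
  NEXTAUTH_SECRET: z.string().min(1),
  ENABLE_RAG: z.enum(["true", "false"]).default("false").transform((v) => v === "true"),
  ENABLE_NOTIFICATIONS: z.enum(["true", "false"]).default("false").transform((v) => v === "true"),
  ENABLE_EMAIL_VERIFICATION: z.enum(["true", "false"]).default("false").transform((v) => v === "true"),
});

// Fails fast with a readable error if a required variable is missing or malformed.
export const env = envSchema.parse(process.env);
```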
The plugged.in platform now features advanced RAG v2 capabilities with AI document exchange:
Core RAG Features:
- Enable RAG in playground settings
- Upload documents through the Library page
- Documents are automatically indexed for context retrieval
- Supported formats: PDF, TXT, MD, DOCX, JSON, HTML, and more
New RAG v2 Features:
- AI Document Generation: MCP servers can create documents directly in your library
  - Full model attribution tracking (which AI created/updated the document)
  - Version history with change tracking
  - Content deduplication via SHA-256 hashing (sketched after the example below)
- Advanced Document Sources:
  - upload: Traditional file uploads
  - ai_generated: Documents created by AI models via MCP
  - api: Documents created via API integrations
- Smart Document Search:
  - Semantic search with relevance scoring
  - Filter by AI model, provider, date range, tags, and source
  - Automatic snippet generation with keyword highlighting
- Document Management:
  - Visibility levels: private, workspace, or public
  - Parent-child relationships for document versions
  - Profile-based organization alongside project-based scoping
Example: AI Creating a Document via MCP
POST /api/documents/ai
{
"title": "Analysis Report",
"content": "# Market Analysis\n\nDetailed findings...",
"format": "md",
"tags": ["analysis", "market"],
"category": "report",
"metadata": {
"model": {
"name": "Claude",
"provider": "Anthropic",
"version": "3"
},
"visibility": "workspace"
}
}
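The content deduplication mentioned in the feature list relies on SHA-256 hashing of document content. A minimal sketch of the idea (illustrative only, not the app's actual implementation):

```typescript
import { createHash } from "node:crypto";

// Compute a stable fingerprint of the document body.
function contentHash(content: string): string {
  return createHash("sha256").update(content, "utf8").digest("hex");
}

// Example: skip creating a document whose content hash already exists.
// In practice the hashes would live in the database, not in memory.
const existingHashes = new Set<string>();

function isDuplicate(content: string): boolean {
  const hash = contentHash(content);
  if (existingHashes.has(hash)) return true;
  existingHashes.add(hash);
  return false;
}
```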
- Real-time notifications for MCP activities
- Optional email delivery for important alerts
- Configurable notification preferences per profile
- Activity logging for debugging and monitoring
# AI model creates a document
curl -X POST https://your-domain.com/api/documents/ai \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"title": "Technical Analysis Report",
"content": "# Technical Analysis\n\n## Summary\n\nThis document contains...",
"format": "md",
"tags": ["technical", "analysis", "ai-generated"],
"category": "report",
"metadata": {
"model": {
"name": "Claude 3 Opus",
"provider": "Anthropic",
"version": "20240229"
},
"context": "User requested technical analysis of system architecture",
"visibility": "private"
}
}'
# Response
{
"success": true,
"documentId": "550e8400-e29b-41d4-a716-446655440000",
"message": "Document successfully created",
"url": "/library/550e8400-e29b-41d4-a716-446655440000"
}
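The same request can be issued programmatically. Below is a minimal TypeScript sketch (Node 18+ for global fetch) against the documented /api/documents/ai endpoint; the base URL, API key handling, and error handling are placeholders.

```typescript
// Minimal sketch: create an AI-generated document via POST /api/documents/ai.
// Base URL and API key are placeholders supplied by the caller.
async function createAiDocument(baseUrl: string, apiKey: string): Promise<string> {
  const response = await fetch(`${baseUrl}/api/documents/ai`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      title: "Technical Analysis Report",
      content: "# Technical Analysis\n\n## Summary\n\nThis document contains...",
      format: "md",
      tags: ["technical", "analysis", "ai-generated"],
      category: "report",
      metadata: {
        model: { name: "Claude 3 Opus", provider: "Anthropic", version: "20240229" },
        visibility: "private",
      },
    }),
  });

  if (!response.ok) {
    throw new Error(`Document creation failed: ${response.status}`);
  }

  // The documented response includes { success, documentId, message, url }.
  const result = (await response.json()) as { documentId: string };
  return result.documentId;
}
```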
# Search for documents created by specific AI models
curl -X POST https://your-domain.com/api/documents/search \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"query": "system architecture",
"filters": {
"modelName": "Claude 3 Opus",
"modelProvider": "Anthropic",
"source": "ai_generated",
"dateFrom": "2024-01-01T00:00:00Z",
"tags": ["technical"]
},
"limit": 10,
"offset": 0
}'
# Response
{
"results": [
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"title": "Technical Analysis Report",
"description": "AI-generated report by Claude 3 Opus",
"snippet": "...system architecture analysis shows that...",
"relevanceScore": 0.85,
"source": "ai_generated",
"aiMetadata": {
"model": {
"name": "Claude 3 Opus",
"provider": "Anthropic",
"version": "20240229"
}
},
"tags": ["technical", "analysis", "ai-generated"],
"modelAttributions": [
{
"model_name": "Claude 3 Opus",
"model_provider": "Anthropic",
"contribution_type": "created"
}
]
}
],
"total": 1,
"limit": 10,
"offset": 0,
"hasMore": false
}
When uploading large documents, track the RAG processing progress:
# Check upload status
curl -X GET https://your-domain.com/api/documents/upload-status/UPLOAD_ID \
-H "Authorization: Bearer YOUR_API_KEY"
# Response
{
"success": true,
"progress": {
"status": "processing",
"progress": {
"step": "embeddings",
"current": 45,
"total": 100,
"step_progress": {
"chunks_processed": 45,
"total_chunks": 100,
"percentage": 45,
"estimated_remaining_time": "2m 30s"
}
},
"message": "Generating embeddings for document chunks..."
}
}
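A client can poll this endpoint until processing finishes. The sketch below follows the response shape shown above; the terminal status values ("completed" and "failed") and the polling interval are assumptions.

```typescript
// Minimal sketch: poll the upload-status endpoint until RAG processing ends.
// Terminal status names are assumed; check the API response in your deployment.
async function waitForRagProcessing(
  baseUrl: string,
  apiKey: string,
  uploadId: string,
  pollIntervalMs = 5000,
): Promise<void> {
  while (true) {
    const res = await fetch(`${baseUrl}/api/documents/upload-status/${uploadId}`, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    if (!res.ok) throw new Error(`Status check failed: ${res.status}`);

    const { progress } = (await res.json()) as {
      progress: { status: string; message?: string };
    };

    if (progress.status === "completed") return; // assumed terminal value
    if (progress.status === "failed") {
      throw new Error(progress.message ?? "Upload processing failed");
    }

    await new Promise((resolve) => setTimeout(resolve, pollIntervalMs));
  }
}
```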
- Node.js v18+ (recommended v20+)
- PostgreSQL 15+
- PNPM package manager
- Nginx web server (for production deployments)
- Systemd (for service management)
- Clone the repository:
  git clone https://github.com/VeriTeknik/pluggedin-app.git /home/pluggedin/pluggedin-app
  cd /home/pluggedin/pluggedin-app
- Install dependencies:
  pnpm install
- Set up environment variables:
  cp .env.example .env
  # Edit .env with your specific configuration
- Run database migrations:
  pnpm db:migrate:auth
  pnpm db:generate
  pnpm db:migrate
- Build the application for production:
  NODE_ENV=production pnpm build
- Create a systemd service file at /etc/systemd/system/pluggedin.service:
  [Unit]
  Description=plugged.in Application Service
  After=network.target postgresql.service
  Wants=postgresql.service

  [Service]
  User=pluggedin
  Group=pluggedin
  WorkingDirectory=/home/pluggedin/pluggedin-app
  ExecStart=/usr/bin/pnpm start
  Restart=always
  RestartSec=10
  StandardOutput=append:/var/log/pluggedin/pluggedin_app.log
  StandardError=append:/var/log/pluggedin/pluggedin_app.log
  Environment=PATH=/usr/bin:/usr/local/bin
  Environment=NODE_ENV=production
  Environment=PORT=12005

  [Install]
  WantedBy=multi-user.target
- Set up Nginx as a reverse proxy:
  # HTTPS Server
  server {
      listen 443 ssl;
      server_name your-domain.com;

      # SSL configuration
      ssl_certificate /etc/letsencrypt/live/your-domain.com/fullchain.pem;
      ssl_certificate_key /etc/letsencrypt/live/your-domain.com/privkey.pem;

      # Next.js static files
      location /_next/static/ {
          alias /home/pluggedin/pluggedin-app/.next/static/;
          expires 365d;
          add_header Cache-Control "public, max-age=31536000, immutable";
      }

      # Proxy settings for Node.js application
      location / {
          proxy_pass http://localhost:12005;
          proxy_http_version 1.1;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection 'upgrade';
          proxy_set_header Host $host;
          proxy_cache_bypass $http_upgrade;
      }
  }

  # HTTP redirect to HTTPS
  server {
      listen 80;
      server_name your-domain.com;
      return 301 https://$host$request_uri;
  }
- Enable and start the service:
  sudo systemctl daemon-reload
  sudo systemctl enable pluggedin.service
  sudo systemctl start pluggedin.service
Enhanced Security Features
The plugged.in App implements comprehensive security measures to protect your data and prevent common vulnerabilities:
- Input Validation & Sanitization
  - URL Validation: SSRF protection blocks private IPs, localhost, and dangerous ports
  - Command Allowlisting: Only approved commands (node, npx, python, python3, uv, uvx, uvenv)
  - Header Validation: RFC 7230 compliant with injection prevention
  - HTML Sanitization: All user inputs sanitized with sanitize-html
  - Environment Variables: Secure parsing with proper quote handling
- MCP Server Security
  - Sandboxing (Linux/Ubuntu): STDIO servers wrapped with firejail --quiet
  - Transport Validation: Security checks for STDIO, SSE, and Streamable HTTP
  - Session Management: Secure session handling for Streamable HTTP
  - Error Sanitization: Prevents information disclosure
- API Security
  - Rate Limiting: Tiered limits for different endpoint types
  - Authentication: JWT-based with 30-day session expiry
  - CORS Protection: Properly configured for all endpoints
  - Audit Logging: Comprehensive activity tracking
- Data Protection
  - Encryption at Rest: AES-256-GCM for sensitive server data (see the sketch below)
  - Per-Profile Keys: Isolated encryption per workspace
  - Secure Sharing: Sanitized templates without credentials
  - HTTPS Enforcement: Required in production
To enable sandboxing, install Firejail:
sudo apt update && sudo apt install firejail
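As an illustration of the encryption-at-rest and per-profile key design described in the Data Protection list above, here is a minimal Node.js sketch using AES-256-GCM with an HKDF-derived per-profile key. This is not the app's actual implementation; the key-derivation inputs and storage format are assumptions.

```typescript
import { createCipheriv, createDecipheriv, hkdfSync, randomBytes } from "node:crypto";

// Illustrative sketch only. Shows how a per-profile key could be derived from a
// master key (e.g. NEXT_SERVER_ACTIONS_ENCRYPTION_KEY) and used with AES-256-GCM.
function deriveProfileKey(masterKey: Buffer, profileUuid: string): Buffer {
  // HKDF binds the derived key to the profile, isolating each workspace.
  return Buffer.from(hkdfSync("sha256", masterKey, profileUuid, "mcp-server-config", 32));
}

function encryptField(key: Buffer, plaintext: string): string {
  const iv = randomBytes(12); // 96-bit nonce for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  // Store IV + auth tag + ciphertext together, base64-encoded (storage format assumed).
  return Buffer.concat([iv, tag, ciphertext]).toString("base64");
}

function decryptField(key: Buffer, payload: string): string {
  const raw = Buffer.from(payload, "base64");
  const iv = raw.subarray(0, 12);
  const tag = raw.subarray(12, 28);
  const ciphertext = raw.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```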
| Feature | Self-Hosted | Cloud (plugged.in) |
|---------|-------------|--------------------|
| Cost | Free | Free tier available |
| Data Privacy | Full control | Server-side encryption |
| Authentication | Optional | Built-in |
| Session Context | Basic | Enhanced |
| Hosting | Your infrastructure | Managed service |
| Updates | Manual | Automatic |
| Latency | Depends on your setup | Optimized global CDN |
The plugged.in App is designed to work seamlessly with the plugged.in MCP Proxy, which provides:
- A unified interface for all MCP clients
- Tool discovery and reporting
- Request routing to the appropriate MCP servers
- Support for the latest MCP specification including Streamable HTTP transport
- Compatible with STDIO, SSE, and Streamable HTTP server types
- plugged.in MCP Proxy Repository
- Model Context Protocol (MCP) Specification
- Claude Desktop Documentation
- Cline Documentation
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.
The plugged.in project is actively developing several exciting features:
- Testing Infrastructure: Comprehensive test coverage for core functionality
- Playground Optimizations: Improved performance for log handling
- Embedded Chat (Phase 2): Generate revenue through embeddable AI chat interfaces
- AI Assistant Platform (Phase 3): Create a social network of specialized AI assistants
- Privacy-Focused Infrastructure (Phase 4): Dedicated RAG servers and distributed GPU services
- Retrieval-Augmented Generation (RAG): Integration with vector databases like Milvus
- Collaboration & Sharing: Multi-user sessions and embeddable chat widgets
- Full MCP Streamable HTTP Support: Added support for the new MCP Streamable HTTP transport protocol
- OAuth 2.1 Integration: Support for OAuth-based authentication flows
- Enhanced Configuration: Custom headers and session management for Streamable HTTP servers
- Multi-Language Support: Updated translations for all supported languages
See CHANGELOG.md for the latest updates.
- Document Library with RAG Integration: Upload and manage documents that enhance AI context
- Real-Time Notification System: Get instant updates on MCP activities with email support
- Progressive Server Initialization: Faster, more resilient MCP server connections
- Enhanced Playground UI: Redesigned layout with better responsiveness and streaming indicators
- Improved RAG Query Security: Replaced custom sanitization with the sanitize-html library for robust XSS protection
- Secure Environment Variable Parsing: Implemented the dotenv library for proper handling of quotes, multiline values, and special characters
- Enhanced Input Validation: Added comprehensive validation for all user inputs across the application
- Strengthened API Security: Implemented rate limiting and improved authentication checks
- Fixed JSON-RPC protocol interference in MCP proxy
- Resolved memory leaks in long-running playground sessions
- Corrected streaming message handling
- Fixed localhost URL validation for development environments
See Release Notes for complete details.
The app now includes intelligent throttling mechanisms to prevent redundant discovery calls:
- Tools API (/api/tools): Implements 5-minute throttling to avoid repeated discovery attempts
- Discovery API (/api/discover): Uses 2-minute throttling for explicit discovery requests
- In-memory caching: Tracks recent discovery attempts to prevent duplicate calls
- Failure recovery: Clears throttle cache on discovery failures to allow faster retries
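A minimal sketch of how such an in-memory throttle can work, using the 2-minute and 5-minute windows described above. The cache key and function names are assumptions, not the app's actual code:

```typescript
// Sketch of the in-memory discovery throttle described above (illustrative;
// the real implementation may differ). Window lengths follow the list above.
const TOOLS_WINDOW_MS = 5 * 60 * 1000;     // /api/tools
const DISCOVERY_WINDOW_MS = 2 * 60 * 1000; // /api/discover

const lastAttempt = new Map<string, number>();

function shouldThrottle(serverUuid: string, windowMs: number): boolean {
  const last = lastAttempt.get(serverUuid);
  return last !== undefined && Date.now() - last < windowMs;
}

async function discoverWithThrottle(
  serverUuid: string,
  runDiscovery: () => Promise<void>,
  windowMs: number = DISCOVERY_WINDOW_MS,
): Promise<{ throttled: boolean }> {
  if (shouldThrottle(serverUuid, windowMs)) {
    return { throttled: true };
  }
  lastAttempt.set(serverUuid, Date.now());
  try {
    await runDiscovery();
    return { throttled: false };
  } catch (err) {
    // Failure recovery: clear the throttle entry so retries can happen sooner.
    lastAttempt.delete(serverUuid);
    throw err;
  }
}

// A tools-API caller would pass the longer window instead:
// await discoverWithThrottle(serverUuid, runDiscovery, TOOLS_WINDOW_MS);
```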
- Single query optimization: Fetches server data and tool counts in one query using LEFT JOIN
- Reduced database load: Eliminates redundant tool count queries
- Indexed lookups: Uses existing database indexes for faster server and tool queries
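As an illustration of the single-query approach, the sketch below fetches servers and their tool counts in one LEFT JOIN using node-postgres. The table and column names are hypothetical, not the app's actual schema:

```typescript
import { Pool } from "pg";

// Illustrative only: one LEFT JOIN returns each server with its tool count,
// instead of issuing a separate count query per server.
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

export async function getServersWithToolCounts(profileUuid: string) {
  const { rows } = await pool.query(
    `SELECT s.uuid, s.name, COUNT(t.uuid) AS tool_count
       FROM mcp_servers s
       LEFT JOIN tools t ON t.mcp_server_uuid = s.uuid
      WHERE s.profile_uuid = $1
      GROUP BY s.uuid, s.name`,
    [profileUuid],
  );
  return rows;
}
```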
- Asynchronous discovery: All discovery processes run in background without blocking API responses
- Error handling: Comprehensive error handling with automatic retry mechanisms
- Status tracking: Provides clear feedback on discovery progress and throttling status
- Reduced API latency: Faster response times for tools API calls
- Lower database load: Fewer redundant queries and optimized data fetching
- Better user experience: Prevents duplicate work and provides instant feedback
- Scalable architecture: Can handle multiple concurrent discovery requests efficiently