Custom Mem0 MCP Server

A production-ready custom Mem0 implementation with Model Context Protocol (MCP) support, allowing AI agents and applications to maintain persistent memories.
Table of Contents

- Quick Navigation
- What This Project Does
- Architecture
- Quick Start
- Production Deployment
- Backup & Recovery
- Available Commands
- Configuration
- MCP Integration
- Testing & Development
- Production Deployment (Additional Info)
- Security
- API Documentation
- Contributing
- License
- Troubleshooting
- Quick Links
Get Started Quickly
# Development setup
git clone <your-repo>
cd custom-mem0
make dev-setup
make up-dev
# VS Code MCP Integration
# Add to settings.json:
"mcp": {
"servers": {
"memory-mcp": {
"url": "http://localhost:8888/memory/mcp/sse"
}
}
}
Access Points:
- API: http://localhost:8888
- Health: http://localhost:8888/health
- Neo4j: http://localhost:8474
Most Common Commands
make up-dev # Start development
make health # Check status
make logs # View logs
make backup # Backup data
make mcp-inspect # Debug MCP
make test # Run tests
This project provides a custom memory service that:
- Persistent Memory Management: Store, retrieve, update, and delete memories for users and AI agents
- MCP Integration: Exposes memory operations as MCP tools and resources for seamless integration with AI agents
- Multiple Backend Support: Choose between Neo4j (graph-based) or Qdrant (vector-based) for memory storage
- Production Ready: Containerized with Docker, health checks, proper logging, and graceful shutdown
- Development Friendly: Hot reload, comprehensive testing, and debugging tools
- Memory Operations: Add, search, update, delete memories
- Graph Relationships: Neo4j backend for complex memory relationships
- Vector Search: Qdrant backend for semantic similarity search
- MCP Protocol: Standardized interface for AI agent integration
- Containerized: Docker setup for development and production
- Health Monitoring: Built-in health checks and status endpoints
- Security: Non-root containers, proper error handling
- Observability: Structured logging and monitoring
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   MCP Client    │     │   FastAPI App   │     │ Memory Backend  │
│   (AI Agent)    │◄───►│  (MCP Server)   │◄───►│ (Neo4j/Qdrant)  │
└─────────────────┘     └─────────────────┘     └─────────────────┘
                                │
                                ▼
                        ┌─────────────────┐
                        │  Vector Store   │
                        │   (pgvector)    │
                        └─────────────────┘
- Docker & Docker Compose: For containerized deployment
- uv: Fast Python package manager (install guide)
- Python 3.13+: Required version specified in pyproject.toml
- Node.js: For MCP inspector tool (optional)
1. Clone and Setup

   git clone <your-repo>
   cd custom-mem0
   make dev-setup

2. Configure Environment

   cp .env.example .env
   # Edit .env with your configuration

3. Start Development Environment

   make up-dev

4. Access the Service

   - API: http://localhost:8888
   - Health Check: http://localhost:8888/health
   - Neo4j Browser: http://localhost:8474 (user: neo4j, password: mem0graph)
   - PostgreSQL: localhost:8432 (user: postgres, password: postgres)
1. Full Production Setup

   make deploy-prod

   This command:

   - Creates pre-deployment backups
   - Builds production images
   - Deploys services with health checks
   - Validates deployment
   - Sets up monitoring cron jobs

2. Manual Production Setup

   make prod-setup
   make up
   make health

3. Monitor Health

   make health
   make status
- Use strong passwords for databases
- Set proper OpenAI API keys
- Configure appropriate resource limits
- Set up monitoring and alerting
- Regular backups with make backup
- Health endpoint: /health
- Container health checks included
- Graceful shutdown handling
- Structured logging for observability
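Structured logs are usually emitted as JSON lines so they can be parsed by log aggregators. A minimal stdlib sketch of such a formatter (an illustration of the idea; the project's actual logging setup may differ):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("mem0")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("memory added")  # emits: {"level": "INFO", "logger": "mem0", "message": "memory added"}
```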
Production Backup Strategy
The system includes comprehensive backup functionality for production environments:
1. Application-Aware Backups

   - PostgreSQL: Uses pg_dump for consistent database snapshots
   - Neo4j: Database dumps using Neo4j admin tools
   - History: File-level backup of SQLite history database

2. Automated Backup Process

   make backup-automated # Full backup with validation and cleanup
   make backup           # Manual backup
   make backup-validate  # Verify backup integrity
   make backup-monitor   # Check backup health
Backup Commands
# Create backups
make backup # All databases
make backup-postgres # PostgreSQL only
make backup-neo4j # Neo4j only
make backup-history # History database only
# Manage backups
make backup-list # List all backups
make backup-validate # Check backup integrity
make backup-cleanup # Remove old backups (30+ days)
make backup-monitor # Health monitoring
# Restore from backups
make restore-postgres BACKUP_FILE=postgres_20241225_120000.sql.gz
make restore-neo4j BACKUP_FILE=neo4j_20241225_120000.tar.gz
Backup Monitoring
The system includes automated backup monitoring:
- Health Checks: Validates backup age, size, and integrity
- Alerting: Email and webhook notifications for backup issues
- Disk Space: Monitors available storage for backups
- Automated Cleanup: Removes backups older than 30 days
Set up automated backups with cron:
# Daily backup at 2 AM
0 2 * * * cd /path/to/custom-mem0 && make backup-automated >> logs/backup.log 2>&1
# Backup monitoring every 6 hours
0 */6 * * * cd /path/to/custom-mem0 && make backup-monitor >> logs/monitor.log 2>&1
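The 30-day retention cleanup that runs inside the automated backup job can be sketched in a few lines of Python (a hypothetical stand-in for what make backup-cleanup does, not the project's actual script):

```python
import time
from pathlib import Path

RETENTION_DAYS = 30

def cleanup_old_backups(backup_dir, retention_days=RETENTION_DAYS, now=None):
    """Delete backup files older than the retention window; return removed paths."""
    now = time.time() if now is None else now
    cutoff = now - retention_days * 86400
    removed = []
    for path in Path(backup_dir).rglob("*"):
        # Compare each file's modification time against the cutoff.
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(path)
    return removed
```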
Cloud Backup Integration
Upload backups to cloud storage:
make backup-to-cloud # Requires AWS CLI configuration
Configure AWS CLI:
aws configure
# Enter your AWS credentials and region
backups/
├── postgres/
│   ├── postgres_20241225_120000.sql.gz
│   └── postgres_20241225_140000.sql.gz
├── neo4j/
│   ├── neo4j_20241225_120000.tar.gz
│   └── neo4j_20241225_140000.tar.gz
└── history/
    ├── history_20241225_120000.tar.gz
    └── history_20241225_140000.tar.gz
Disaster Recovery
1. Full System Recovery

   # Stop services
   make down
   # List available backups
   make backup-list
   # Restore databases
   make restore-postgres BACKUP_FILE=postgres_YYYYMMDD_HHMMSS.sql.gz
   make restore-neo4j BACKUP_FILE=neo4j_YYYYMMDD_HHMMSS.tar.gz
   # Start services
   make up
   make health

2. Point-in-Time Recovery

   - Backups are timestamped for specific recovery points
   - Choose the backup closest to your desired recovery time
   - PostgreSQL dumps include complete schema and data
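Because backup filenames embed a YYYYMMDD_HHMMSS timestamp, picking the right recovery point can be automated. A small sketch (hypothetical helper, assuming the filename pattern shown above) that selects the latest backup at or before a target time:

```python
from datetime import datetime

def closest_backup(filenames, target, prefix="postgres_", suffix=".sql.gz"):
    """Return the backup with the latest timestamp at or before `target`."""
    candidates = []
    for name in filenames:
        stamp = name[len(prefix):-len(suffix)]  # e.g. "20241225_120000"
        ts = datetime.strptime(stamp, "%Y%m%d_%H%M%S")
        if ts <= target:
            candidates.append((ts, name))
    if not candidates:
        raise ValueError("no backup at or before the target time")
    return max(candidates)[1]
```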
- Regular Testing: Regularly test backup restoration procedures
- Multiple Locations: Store backups in multiple locations (local + cloud)
- Monitoring: Use backup monitoring to catch issues early
- Documentation: Keep recovery procedures documented and accessible
- Security: Encrypt backups containing sensitive data
Run make help to see all available commands:
make help # Show all commands
make up # Start production environment (default backend)
make up-pgvector # Start with PostgreSQL/pgvector backend
make up-qdrant # Start with Qdrant backend
make up-dev # Start development with hot reload
make down # Stop all services
make logs # View container logs
make health # Check service health
make test # Run tests
make mcp-inspect # Debug MCP protocol
make backup # Backup data volumes
Environment Variables
Key configuration options in .env:
# Backend Selection
BACKEND="pgvector" # or "qdrant"
# OpenAI Configuration
OPENAI_API_KEY="your-api-key"
OPENAI_MODEL="gpt-4o-mini"
OPENAI_EMBEDDING_MODEL="text-embedding-3-small"
# Neo4j Configuration
NEO4J_IP="neo4j:7687"
NEO4J_USERNAME="neo4j"
NEO4J_PASSWORD="mem0graph"
# PostgreSQL (Vector Store)
POSTGRES_HOST="postgres"
POSTGRES_PORT=5432
POSTGRES_USER="postgres"
POSTGRES_PASSWORD="password"
# FastAPI Configuration
FASTAPI_HOST="localhost"
FASTAPI_PORT=8000
MEMORY_LOG_LEVEL="info"
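A sketch of how a service might read and validate the BACKEND variable at startup (a hypothetical helper for illustration, not the project's actual code):

```python
import os

SUPPORTED_BACKENDS = {"pgvector", "qdrant"}

def select_backend(env=os.environ):
    """Read BACKEND from the environment, defaulting to pgvector."""
    # Tolerate quoted values like BACKEND="pgvector" from .env files.
    backend = env.get("BACKEND", "pgvector").strip().strip('"').lower()
    if backend not in SUPPORTED_BACKENDS:
        raise ValueError(
            f"Unsupported BACKEND: {backend!r}; expected one of {sorted(SUPPORTED_BACKENDS)}"
        )
    return backend
```

Failing fast on an unknown backend keeps a typo in .env from silently falling through to a half-configured service.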
Backend Options
PostgreSQL + pgvector

- Best for: Traditional SQL with vector search, ACID transactions
- Features: Familiar SQL interface, rich ecosystem, structured data
- Vector Store: PostgreSQL with pgvector extension
- Graph Store: Neo4j (shared)

Qdrant

- Best for: Purpose-built vector search, high performance
- Features: Advanced filtering, clustering, optimized for vectors
- Vector Store: Qdrant native vectors
- Graph Store: Neo4j (shared)
Multi-Backend Setup
Choose your vector store backend with simple commands:
# Start with PostgreSQL/pgvector (default)
make up-pgvector # Production
make up-dev-pgvector # Development
# Start with Qdrant
make up-qdrant # Production
make up-dev-qdrant # Development
Quick Setup:
# Use pre-configured environments
cp .env.pgvector .env # For PostgreSQL backend
cp .env.qdrant .env # For Qdrant backend
make up # Start with selected backend
Switching Backends:
make down # Stop current services
cp .env.qdrant .env # Switch configuration
make up # Start with new backend
Both backends share the same Neo4j graph store and provide identical MCP tools and APIs.
Available Tools
- add_memory: Store new memories
- search_memories: Find memories by similarity
- update_memory: Modify existing memories
- delete_memory: Remove specific memories
- delete_all_memories: Clear all memories for a user/agent
Available Resources
- memories://{user_id}/{agent_id}/{limit}: Retrieve all memories
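Client-side, a URI for this resource template can be assembled with simple string formatting (an illustrative helper; the template itself is the contract):

```python
def memories_uri(user_id, agent_id, limit=10):
    """Build a memories:// resource URI matching the template above."""
    return f"memories://{user_id}/{agent_id}/{limit}"
```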
VS Code Integration
To use this MCP server with VS Code Copilot, add the following configuration to your VS Code settings.json:
"mcp": {
"servers": {
"memory-mcp": {
"url": "http://localhost:8888/memory/mcp/sse"
}
}
}
Once configured, you can:

- Reference tools: Use # to access memory tools directly in VS Code
- Custom instructions: Write natural language instructions to efficiently interact with the memory system
- Seamless integration: The memory tools will be available alongside other Copilot features

Make sure your MCP server is running (make up-dev or make up) before using it in VS Code.
Example Usage
# Add a memory
await memory_client.add_memory(
data="User prefers dark mode interface",
user_id="user123",
agent_id="assistant"
)
# Search memories
results = await memory_client.search_memories(
query="interface preferences",
user_id="user123"
)
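Under the hood, search_memories ranks stored memories by embedding similarity. A toy cosine-similarity ranking conveys the idea (illustrative only; real embeddings come from the configured OpenAI embedding model, and the ranking runs inside pgvector or Qdrant):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_memories(query_vec, memories):
    """memories: list of (text, embedding); return texts sorted by similarity."""
    return [t for t, v in sorted(memories, key=lambda m: cosine(query_vec, m[1]), reverse=True)]
```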
Running Tests
make test # Run all tests
make lint # Check code style
make format # Format code
make check # Run all checks
Debugging
make logs SERVICE=mem0 # View specific service logs
make shell # Access container shell
make db-shell # Access PostgreSQL
make neo4j-shell # Access Neo4j
make mcp-inspect # Debug MCP protocol
Development Features
- Hot Reload: Code changes automatically restart the server
- Volume Mounting: Live code editing without rebuilds
- Debug Logging: Detailed logs for development
- MCP Inspector: Visual debugging of MCP protocol
Docker Production
make prod-setup
make up
make health
Environment Considerations
- Use strong passwords for databases
- Set proper OpenAI API keys
- Configure appropriate resource limits
- Set up monitoring and alerting
- Regular backups with make backup
Health Monitoring
- Health endpoint: /health
- Container health checks included
- Graceful shutdown handling
- Structured logging for observability
- Non-root containers: All services run as non-root users
- Environment isolation: Proper Docker networking
- Secret management: Environment-based configuration
- Input validation: Pydantic models for API validation
- Error handling: Graceful error responses
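The API validates request payloads before they reach storage (the project uses Pydantic models for this). A simplified stdlib stand-in showing the same validation idea, with a hypothetical request shape:

```python
from dataclasses import dataclass

@dataclass
class AddMemoryRequest:
    """Illustrative payload shape for add_memory; field names are assumptions."""
    data: str
    user_id: str
    agent_id: str = "default"

    def __post_init__(self):
        # Reject empty or whitespace-only memory content up front.
        if not self.data.strip():
            raise ValueError("data must be non-empty")
        if not self.user_id:
            raise ValueError("user_id is required")
```

Validating at the boundary means downstream code never has to re-check that a memory has content or an owner.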
When running, visit:
- Swagger UI: http://localhost:8888/docs
- ReDoc: http://localhost:8888/redoc
- OpenAPI JSON: http://localhost:8888/openapi.json
- Fork the repository
- Create a feature branch
- Make your changes
- Run tests: make check
- Submit a pull request
This project is released under the AGPL-3.0 License.
Common Issues
make logs # Check logs
make health # Check health status
make status # Check container status
make db-shell # Test database access
make mcp-inspect # Debug MCP protocol
curl http://localhost:8888/health # Check API health
Getting Help
- Check logs with make logs
- Use MCP inspector with make mcp-inspect
- Review health status with make health
- Access container shell with make shell