Custom Mem0 MCP Server

A production-ready custom Mem0 implementation with Model Context Protocol (MCP) support, allowing AI agents and applications to maintain persistent memories.


⚡ Quick Navigation

🚀 Get Started Quickly
# Development setup
git clone <your-repo>
cd custom-mem0
make dev-setup
make up-dev

# VS Code MCP Integration
# Add to settings.json:
"mcp": {
    "servers": {
        "memory-mcp": {
            "url": "http://localhost:8888/memory/mcp/sse"
        }
    }
}

Access Points:

  • MCP endpoint: http://localhost:8888/memory/mcp/sse
  • Health check: http://localhost:8888/health

🔧 Most Common Commands
make up-dev              # Start development
make health              # Check status
make logs                # View logs
make backup              # Backup data
make mcp-inspect         # Debug MCP
make test                # Run tests

🚀 What This Project Does

This project provides a custom memory service that:

  • Persistent Memory Management: Store, retrieve, update, and delete memories for users and AI agents
  • MCP Integration: Exposes memory operations as MCP tools and resources for seamless integration with AI agents
  • Multiple Backend Support: Choose between Neo4j (graph-based) or Qdrant (vector-based) for memory storage
  • Production Ready: Containerized with Docker, health checks, proper logging, and graceful shutdown
  • Development Friendly: Hot reload, comprehensive testing, and debugging tools

Core Features

  • 🧠 Memory Operations: Add, search, update, delete memories
  • 🔗 Graph Relationships: Neo4j backend for complex memory relationships
  • 🎯 Vector Search: Qdrant backend for semantic similarity search
  • 🤖 MCP Protocol: Standardized interface for AI agent integration
  • 🐳 Containerized: Docker setup for development and production
  • 🔍 Health Monitoring: Built-in health checks and status endpoints
  • 🛡️ Security: Non-root containers, proper error handling
  • 📊 Observability: Structured logging and monitoring
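The vector-search feature ranks stored memories by embedding similarity. As a rough illustration of the idea (not the project's actual code; the real backends delegate this to pgvector or Qdrant), cosine similarity over embedding vectors looks like:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_memories(query_vec: list[float], memories: list[tuple[str, list[float]]]):
    """Return (text, vector) pairs sorted by similarity to the query, best first.
    The (text, vector) shape is hypothetical, for illustration only."""
    return sorted(memories,
                  key=lambda m: cosine_similarity(query_vec, m[1]),
                  reverse=True)
```

A query embedding close to a memory's embedding ranks that memory first, which is what `search_memories` relies on under the hood.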

πŸ—οΈ Architecture

┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   MCP Client    │    │  FastAPI App    │    │  Memory Backend │
│   (AI Agent)    │◄──►│  (MCP Server)   │◄──►│ (Neo4j/Qdrant)  │
└─────────────────┘    └─────────────────┘    └─────────────────┘
                              │
                              ▼
                       ┌─────────────────┐
                       │  Vector Store   │
                       │   (pgvector)    │
                       └─────────────────┘

πŸ› οΈ Quick Start

Prerequisites

  • Docker & Docker Compose: For containerized deployment
  • uv: Fast Python package manager (install guide)
  • Python 3.13+: Required version specified in pyproject.toml
  • Node.js: For MCP inspector tool (optional)

Development Setup

  1. Clone and Setup

    git clone <your-repo>
    cd custom-mem0
    make dev-setup
  2. Configure Environment

    cp .env.example .env
    # Edit .env with your configuration
  3. Start Development Environment

    make up-dev
  4. Access the Service

     Once running, verify the service at http://localhost:8888/health and point MCP clients at http://localhost:8888/memory/mcp/sse.

🚀 Production Deployment

Automated Production Deployment

  1. Full Production Setup

    make deploy-prod

    This command:

    • Creates pre-deployment backups
    • Builds production images
    • Deploys services with health checks
    • Validates deployment
    • Sets up monitoring cron jobs
  2. Manual Production Setup

    make prod-setup
    make up
    make health
  3. Monitor Health

    make health
    make status

Environment Considerations

  • Use strong passwords for databases
  • Set proper OpenAI API keys
  • Configure appropriate resource limits
  • Set up monitoring and alerting
  • Regular backups with make backup

Health Monitoring

  • Health endpoint: /health
  • Container health checks included
  • Graceful shutdown handling
  • Structured logging for observability
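Deployment scripts typically wait on the /health endpoint before sending traffic. A minimal sketch of such a wait loop, with the probe function injected so the logic stays testable (the actual `make deploy-prod` validation may work differently):

```python
import time
from typing import Callable

def wait_healthy(probe: Callable[[], bool],
                 attempts: int = 10,
                 delay: float = 1.0) -> bool:
    """Poll `probe()` (e.g. an HTTP GET against /health that returns
    True on a 200 response) until it succeeds or attempts run out."""
    for i in range(attempts):
        if probe():
            return True
        if i < attempts - 1:
            time.sleep(delay)  # back off before retrying
    return False
```

In practice `probe` would wrap something like `urllib.request.urlopen("http://localhost:8888/health")`; injecting it keeps the retry logic independent of the network.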

💾 Backup & Recovery

🔒 Production Backup Strategy

The system includes comprehensive backup functionality for production environments:

Backup Types

  1. Application-Aware Backups

    • PostgreSQL: Uses pg_dump for consistent database snapshots
    • Neo4j: Database dumps using Neo4j admin tools
    • History: File-level backup of the SQLite history database
  2. Automated Backup Process

    make backup-automated    # Full backup with validation and cleanup
    make backup              # Manual backup
    make backup-validate     # Verify backup integrity
    make backup-monitor      # Check backup health
💻 Backup Commands
# Create backups
make backup                         # All databases
make backup-postgres                # PostgreSQL only
make backup-neo4j                   # Neo4j only
make backup-history                 # History database only

# Manage backups
make backup-list                    # List all backups
make backup-validate                # Check backup integrity
make backup-cleanup                 # Remove old backups (30+ days)
make backup-monitor                 # Health monitoring

# Restore from backups
make restore-postgres BACKUP_FILE=postgres_20241225_120000.sql.gz
make restore-neo4j BACKUP_FILE=neo4j_20241225_120000.tar.gz
πŸ” Backup Monitoring

The system includes automated backup monitoring:

  • Health Checks: Validates backup age, size, and integrity
  • Alerting: Email and webhook notifications for backup issues
  • Disk Space: Monitors available storage for backups
  • Automated Cleanup: Removes backups older than 30 days

Production Backup Schedule

Set up automated backups with cron:

# Daily backup at 2 AM
0 2 * * * cd /path/to/custom-mem0 && make backup-automated >> logs/backup.log 2>&1

# Backup monitoring every 6 hours
0 */6 * * * cd /path/to/custom-mem0 && make backup-monitor >> logs/monitor.log 2>&1
☁️ Cloud Backup Integration

Upload backups to cloud storage:

make backup-to-cloud    # Requires AWS CLI configuration

Configure AWS CLI:

aws configure
# Enter your AWS credentials and region

Backup Storage Structure

backups/
├── postgres/
│   ├── postgres_20241225_120000.sql.gz
│   └── postgres_20241225_140000.sql.gz
├── neo4j/
│   ├── neo4j_20241225_120000.tar.gz
│   └── neo4j_20241225_140000.tar.gz
└── history/
    ├── history_20241225_120000.tar.gz
    └── history_20241225_140000.tar.gz
🚨 Disaster Recovery
  1. Full System Recovery

    # Stop services
    make down
    
    # List available backups
    make backup-list
    
    # Restore databases
    make restore-postgres BACKUP_FILE=postgres_YYYYMMDD_HHMMSS.sql.gz
    make restore-neo4j BACKUP_FILE=neo4j_YYYYMMDD_HHMMSS.tar.gz
    
    # Start services
    make up
    make health
  2. Point-in-Time Recovery

    • Backups are timestamped for specific recovery points
    • Choose the backup closest to your desired recovery time
    • PostgreSQL dumps include complete schema and data
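Picking "the backup closest to your desired recovery time" is just a matter of parsing the YYYYMMDD_HHMMSS stamp out of the filenames above. A small illustrative helper (the backup scripts themselves may select differently):

```python
from datetime import datetime

def backup_time(filename: str) -> datetime:
    """Parse the timestamp out of names like neo4j_20241225_120000.tar.gz."""
    stem = filename.split(".", 1)[0]               # neo4j_20241225_120000
    _, date_part, time_part = stem.rsplit("_", 2)  # 20241225, 120000
    return datetime.strptime(date_part + time_part, "%Y%m%d%H%M%S")

def pick_recovery_backup(filenames: list[str], target: datetime) -> str:
    """Latest backup taken at or before the desired recovery time."""
    candidates = [f for f in filenames if backup_time(f) <= target]
    if not candidates:
        raise ValueError("no backup precedes the requested recovery time")
    return max(candidates, key=backup_time)
```

The chosen filename then goes straight into `make restore-postgres BACKUP_FILE=...` or `make restore-neo4j BACKUP_FILE=...`.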

Backup Best Practices

  • Regular Testing: Regularly test backup restoration procedures
  • Multiple Locations: Store backups in multiple locations (local + cloud)
  • Monitoring: Use backup monitoring to catch issues early
  • Documentation: Keep recovery procedures documented and accessible
  • Security: Encrypt backups containing sensitive data

📋 Available Commands

Run make help to see all available commands:

make help                # Show all commands
make up                  # Start production environment (default backend)
make up-pgvector         # Start with PostgreSQL/pgvector backend
make up-qdrant           # Start with Qdrant backend
make up-dev              # Start development with hot reload
make down                # Stop all services
make logs                # View container logs
make health              # Check service health
make test                # Run tests
make mcp-inspect         # Debug MCP protocol
make backup              # Backup data volumes

🔧 Configuration

🌍 Environment Variables

Key configuration options in .env:

# Backend Selection
BACKEND="pgvector"  # or "qdrant"

# OpenAI Configuration
OPENAI_API_KEY="your-api-key"
OPENAI_MODEL="gpt-4o-mini"
OPENAI_EMBEDDING_MODEL="text-embedding-3-small"

# Neo4j Configuration
NEO4J_IP="neo4j:7687"
NEO4J_USERNAME="neo4j"
NEO4J_PASSWORD="mem0graph"

# PostgreSQL (Vector Store)
POSTGRES_HOST="postgres"
POSTGRES_PORT=5432
POSTGRES_USER="postgres"
POSTGRES_PASSWORD="password"

# FastAPI Configuration
FASTAPI_HOST="localhost"
FASTAPI_PORT=8000
MEMORY_LOG_LEVEL="info"
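The BACKEND variable drives which vector store gets wired up at startup. A simplified sketch of that dispatch, assuming the variable names from the .env above (the `QDRANT_HOST` name and the config dict shape are illustrative, not the service's actual internals):

```python
def vector_store_config(env: dict[str, str]) -> dict:
    """Build a vector-store config from .env-style settings.
    Only BACKEND and the relevant host/port variables are consulted."""
    backend = env.get("BACKEND", "pgvector")  # pgvector is the default
    if backend == "pgvector":
        return {
            "provider": "pgvector",
            "host": env.get("POSTGRES_HOST", "postgres"),
            "port": int(env.get("POSTGRES_PORT", "5432")),
        }
    if backend == "qdrant":
        return {
            "provider": "qdrant",
            "host": env.get("QDRANT_HOST", "qdrant"),  # hypothetical variable name
        }
    raise ValueError(f"unsupported BACKEND: {backend}")
```

Swapping `.env.pgvector` for `.env.qdrant` (see Multi-Backend Setup below) simply changes which branch this kind of selection takes.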
πŸ—„οΈ Backend Options

PostgreSQL/pgvector Backend (Default)

  • Best for: Traditional SQL with vector search, ACID transactions
  • Features: Familiar SQL interface, rich ecosystem, structured data
  • Vector Store: PostgreSQL with pgvector extension
  • Graph Store: Neo4j (shared)

Qdrant Backend

  • Best for: Purpose-built vector search, high performance
  • Features: Advanced filtering, clustering, optimized for vectors
  • Vector Store: Qdrant native vectors
  • Graph Store: Neo4j (shared)
🔄 Multi-Backend Setup

Choose your vector store backend with simple commands:

# Start with PostgreSQL/pgvector (default)
make up-pgvector          # Production
make up-dev-pgvector      # Development

# Start with Qdrant
make up-qdrant            # Production  
make up-dev-qdrant        # Development

Quick Setup:

# Use pre-configured environments
cp .env.pgvector .env     # For PostgreSQL backend
cp .env.qdrant .env       # For Qdrant backend
make up                   # Start with selected backend

Switching Backends:

make down                 # Stop current services
cp .env.qdrant .env       # Switch configuration
make up                   # Start with new backend

Both backends share the same Neo4j graph store and provide identical MCP tools and APIs.

🤖 MCP Integration

πŸ› οΈ Available Tools
  • add_memory: Store new memories
  • search_memories: Find memories by similarity
  • update_memory: Modify existing memories
  • delete_memory: Remove specific memories
  • delete_all_memories: Clear all memories for a user/agent
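On the wire, MCP tool invocations are JSON-RPC 2.0 messages with the `tools/call` method, per the MCP specification. A sketch of the request an MCP client would send to invoke add_memory (argument names follow the usage example later in this README; the transport framing over SSE is omitted):

```python
import json
from itertools import count

_ids = count(1)  # JSON-RPC request ids must be unique per session

def tool_call_request(tool: str, arguments: dict) -> str:
    """Serialize an MCP `tools/call` JSON-RPC 2.0 request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

req = tool_call_request("add_memory", {
    "data": "User prefers dark mode interface",
    "user_id": "user123",
    "agent_id": "assistant",
})
```

In practice an MCP client library builds these messages for you; `make mcp-inspect` is a convenient way to watch them flow.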
📦 Available Resources
  • memories://{user_id}/{agent_id}/{limit}: Retrieve all memories
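The memories:// resource URI packs all three parameters into the path. Small helpers to build and unpack such URIs (illustrative only, not the server's own routing code):

```python
def memories_uri(user_id: str, agent_id: str, limit: int) -> str:
    """Fill the memories://{user_id}/{agent_id}/{limit} template."""
    return f"memories://{user_id}/{agent_id}/{limit}"

def parse_memories_uri(uri: str) -> tuple[str, str, int]:
    """Recover (user_id, agent_id, limit) from a memories:// URI."""
    prefix = "memories://"
    if not uri.startswith(prefix):
        raise ValueError("not a memories resource URI")
    user_id, agent_id, limit = uri[len(prefix):].split("/")
    return user_id, agent_id, int(limit)
```

So `memories://user123/assistant/10` asks for up to 10 memories scoped to that user/agent pair.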
💻 VS Code Integration

To use this MCP server with VS Code Copilot, add the following configuration to your VS Code settings.json:

"mcp": {
    "servers": {
        "memory-mcp": {
            "url": "http://localhost:8888/memory/mcp/sse"
        }
    }
}

Once configured, you can:

  • Reference tools: Use # to access memory tools directly in VS Code
  • Custom instructions: Write natural language instructions to efficiently interact with the memory system
  • Seamless integration: The memory tools will be available alongside other Copilot features

Make sure your MCP server is running (make up-dev or make up) before using it in VS Code.

💡 Example Usage
# Add a memory
await memory_client.add_memory(
    data="User prefers dark mode interface",
    user_id="user123",
    agent_id="assistant"
)

# Search memories
results = await memory_client.search_memories(
    query="interface preferences",
    user_id="user123"
)

🧪 Testing & Development

🧪 Running Tests
make test                # Run all tests
make lint                # Check code style
make format              # Format code
make check               # Run all checks
πŸ› Debugging
make logs SERVICE=mem0   # View specific service logs
make shell               # Access container shell
make db-shell            # Access PostgreSQL
make neo4j-shell         # Access Neo4j
make mcp-inspect         # Debug MCP protocol
⚡ Development Features
  • Hot Reload: Code changes automatically restart the server
  • Volume Mounting: Live code editing without rebuilds
  • Debug Logging: Detailed logs for development
  • MCP Inspector: Visual debugging of MCP protocol


🔒 Security

  • Non-root containers: All services run as non-root users
  • Environment isolation: Proper Docker networking
  • Secret management: Environment-based configuration
  • Input validation: Pydantic models for API validation
  • Error handling: Graceful error responses

📚 API Documentation

When running, visit the interactive API documentation that FastAPI serves by default at /docs (Swagger UI) and /redoc (ReDoc).

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Run tests: make check
  5. Submit a pull request

📄 License

This project is released under the AGPL-3.0 License.

🆘 Troubleshooting

🔧 Common Issues

Service won't start

make logs                # Check logs
make health              # Check health status

Database connection issues

make status              # Check container status
make db-shell            # Test database access

Memory operations failing

make mcp-inspect         # Debug MCP protocol
curl http://localhost:8888/health  # Check API health
🆘 Getting Help
  • Check logs with make logs
  • Use MCP inspector with make mcp-inspect
  • Review health status with make health
  • Access container shell with make shell
