# Maxa

Maxa is an AI assistant with persistent memory and theory-of-mind capabilities, enabling more natural, context-aware interactions over time.
## Features

- **Persistent Memory**: Maintains context across conversations
- **Theory of Mind**: Models user preferences and mental states
- **Eternal Inference**: Continuously learns and adapts to user needs
- **Modular Architecture**: Easy to extend and customize
## Installation

1. **Clone the repository**

   ```bash
   git clone https://github.com/yourusername/maxa-ai.git
   cd maxa-ai
   ```

2. **Set up environment variables**

   Copy `.env.example` to `.env` and update it with your API keys:

   ```bash
   cp .env.example .env
   # Edit .env with your API keys
   ```

3. **Install dependencies**

   ```bash
   pip install -r requirements.txt
   ```

4. **Run the application**

   ```bash
   uvicorn app.main:app --reload
   ```

5. **Access the API**

   - API docs: http://localhost:8000/docs
   - ReDoc: http://localhost:8000/redoc
## Docker Deployment

1. **Build and run with Docker Compose**

   ```bash
   docker-compose up --build
   ```

2. **Access the services**

   - API: http://localhost:8000
   - Qdrant UI: http://localhost:6333
   - Prometheus: http://localhost:9090
   - Grafana: http://localhost:3000 (default login: admin/admin)
## API Endpoints

All endpoints require authentication using JWT tokens.
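For illustration, here is a minimal sketch of minting an HS256 JWT using only the Python standard library. The secret, claim names, and expiry below are assumptions for the example, not Maxa's actual auth configuration; a real client would typically use a library such as PyJWT instead.

```python
import base64
import hashlib
import hmac
import json
import time


def b64url(data: bytes) -> str:
    """Base64url-encode without padding, per the JWT spec."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def make_jwt(secret: str, sub: str, ttl: int = 3600) -> str:
    """Build a signed HS256 token: header.payload.signature."""
    header = {"alg": "HS256", "typ": "JWT"}
    # "sub" and "exp" are standard claims; adjust to the server's expectations.
    payload = {"sub": sub, "exp": int(time.time()) + ttl}
    signing_input = (
        f"{b64url(json.dumps(header).encode())}."
        f"{b64url(json.dumps(payload).encode())}"
    )
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"


# Hypothetical secret and subject, for demonstration only.
token = make_jwt("change-me", sub="user-123")
```

The resulting token goes into the `Authorization: Bearer <token>` header of each request.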
- `POST /api/v1/chat`: Send a message and get a response
- `GET /api/v1/chat/ws/{conversation_id}`: WebSocket endpoint for real-time chat
- `POST /api/v1/memory`: Store a memory
- `GET /api/v1/memory`: Search memories
- `DELETE /api/v1/memory/{memory_id}`: Delete a memory
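As a sketch, a chat request could be issued like this with the standard library. The JSON field names (`conversation_id`, `message`) are assumptions about the request schema, not the documented contract; check the generated docs at `/docs` for the real shape.

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"
TOKEN = "<your-jwt>"  # placeholder; obtain a real token first

# Assumed request body shape for POST /api/v1/chat.
body = json.dumps({"conversation_id": "demo", "message": "Hello, Maxa!"}).encode()

req = urllib.request.Request(
    f"{BASE_URL}/api/v1/chat",
    data=body,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# With the server running, this sends the request and returns the reply:
# response = urllib.request.urlopen(req)
```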
## Development

We use `black` for code formatting and `isort` for import sorting:

```bash
# Format code
black .

# Sort imports
isort .
```
## Testing

Run the test suite with pytest:

```bash
pytest tests/
```
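As an illustration of the layout, a test module under `tests/` might look like the following. The helper under test is hypothetical and exists only to show the pytest conventions; it is not part of Maxa's codebase.

```python
# tests/test_example.py
# Illustrative only: normalize_message is a made-up helper so the
# test has something to assert against.


def normalize_message(text: str) -> str:
    """Collapse runs of whitespace into single spaces."""
    return " ".join(text.split())


def test_normalize_message():
    # pytest discovers any function whose name starts with "test_".
    assert normalize_message("  hello \n  world ") == "hello world"
```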
## License

This project is licensed under the MIT License. See the LICENSE file for details.
## Citation

If you use Maxa in your research, please cite:

```bibtex
@software{maxa2025,
  author = {Kossiso Royce},
  title  = {Maxa: Eternal Inference AI with Theory of Mind},
  year   = {2025},
}
```