A complete, self-hosted AI infrastructure stack featuring workflow automation, experiment tracking, and a ChatGPT-like interface.
- n8n: Powerful workflow automation with 200+ integrations
- MLflow: End-to-end machine learning lifecycle management
- Open WebUI: Beautiful, responsive web interface for LLMs
- Ollama: Local LLM runner supporting multiple models
- Nginx: Secure reverse proxy with Let's Encrypt SSL
- Docker Compose: Easy deployment and scaling
- Docker and Docker Compose installed
- Minimum 8GB RAM (16GB+ recommended for LLMs)
- Linux-based system (Ubuntu 20.04+ recommended)
- Domain name (or subdomains) pointing to your server
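Before installing anything, you can sanity-check the host with a short script (a sketch: the command names and the `/proc/meminfo` path assume a standard Linux install):

```bash
# Check for Docker tooling without failing if it is absent
for cmd in docker docker-compose; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: MISSING"
  fi
done

# Report total memory so you can compare against the 8GB minimum
awk '/MemTotal/ {printf "RAM: %.1f GB\n", $2 / 1024 / 1024}' /proc/meminfo
```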
```
ai-stack/
├── docker-compose.yml           # Main Docker Compose configuration
├── docker-compose.override.yml  # Development overrides
├── docker-compose.prod.yml      # Production overrides
├── nginx/                       # Nginx configurations
│   ├── nginx.conf               # Main Nginx config
│   └── conf.d/                  # Server blocks
│       ├── default.conf         # Default catch-all
│       ├── n8n.conf             # n8n proxy config
│       ├── mlflow.conf          # MLflow proxy config
│       └── openwebui.conf       # Open WebUI proxy config
├── n8n/                         # n8n data
│   └── data/                    # Workflows and credentials
├── mlflow/                      # MLflow data
│   └── artifacts/               # Model artifacts
├── openwebui/                   # Open WebUI data
│   └── data/                    # Chat history and models
├── certs/                       # SSL certificates
├── .env.example                 # Example environment variables
├── setup.sh                     # Setup script
├── LICENSE                      # MIT License
└── README.md                    # This file
```
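The `docker-compose.yml` wires these directories into the containers as bind mounts. A minimal sketch of what one service entry might look like (the image tag, mount target, and volume paths here are illustrative assumptions; the actual file in the repo may differ):

```yaml
services:
  n8n:
    image: n8nio/n8n              # official n8n image
    restart: unless-stopped
    environment:
      - N8N_HOST=${N8N_HOST}      # populated from .env via Compose substitution
    volumes:
      - ./n8n/data:/home/node/.n8n  # workflows and credentials persist here
    expose:
      - "5678"                    # internal only; Nginx is the public entry point
```

Using `expose` rather than `ports` keeps the service off the host network, so it is reachable only by other containers such as the Nginx proxy.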
- Clone the repository:

  ```bash
  git clone https://github.com/jomasego/ai-stack.git
  cd ai-stack
  ```

- Copy the example environment file:

  ```bash
  cp .env.example .env
  ```

- Edit the `.env` file with your configuration:

  ```bash
  nano .env
  ```

  - Set your domain (e.g., `DOMAIN=example.com`)
  - Set your email for Let's Encrypt
  - Update passwords and secrets

- Run the setup script:

  ```bash
  chmod +x setup.sh
  ./setup.sh
  ```

  This will:
  - Install Docker and Docker Compose if needed
  - Set up SSL certificates
  - Configure Nginx
  - Start all services

- Access the services:
  - n8n: https://n8n.yourdomain.com
  - MLflow: https://mlflow.yourdomain.com
  - Open WebUI: https://chat.yourdomain.com
Edit the `.env` file to configure the stack:

```env
# Domain configuration
DOMAIN=yourdomain.com

# n8n
N8N_HOST=n8n.yourdomain.com
N8N_USER_EMAIL=admin@yourdomain.com
N8N_USER_PASSWORD=your-secure-password

# Open WebUI
WEBUI_SECRET_KEY=your-secure-secret-key

# Let's Encrypt
EMAIL=your-email@example.com
STAGING=0  # Set to 1 for testing
```
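For the password and secret values, something long and random is safer than anything hand-typed. One way to generate them (assuming `openssl` is installed, as it is on most distributions):

```bash
# Random hex secret, e.g. for WEBUI_SECRET_KEY
openssl rand -hex 32

# Random base64 password, e.g. for N8N_USER_PASSWORD
openssl rand -base64 24
```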
By default, the following ports are used:
- 80/443: Nginx (HTTP/HTTPS)
- 5678: n8n (internal)
- 5000: MLflow (internal)
- 8080: Open WebUI (internal)
- 11434: Ollama (internal)
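The internal ports are reachable only from the Docker network; Nginx is the sole public entry point. A minimal sketch of one of the server blocks in `nginx/conf.d/` (hostnames, certificate paths, and the upstream name are assumptions here; the shipped configs may differ):

```nginx
server {
    listen 443 ssl;
    server_name n8n.yourdomain.com;

    ssl_certificate     /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    location / {
        proxy_pass http://n8n:5678;          # Docker service name + internal port
        proxy_set_header Host $host;

        # n8n's editor uses WebSockets; forward the upgrade headers
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```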
Start services:

```bash
docker-compose up -d
```

Stop services:

```bash
docker-compose down
```

View logs:

```bash
# All services
docker-compose logs -f

# Specific service
docker-compose logs -f n8n
```

Update services:

```bash
docker-compose pull
docker-compose up -d --force-recreate
```
- Change all default passwords in the `.env` file
- Use strong passwords for all services
- Enable authentication for all services
- Keep Docker and images updated
- Back up your data regularly
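For the last point, the bind-mounted directories from the project layout are all that need to be archived. A minimal backup sketch, run from the `ai-stack/` root (the `mkdir -p` is a no-op on a live deployment and only keeps the sketch self-contained):

```bash
# Archive the persistent data directories into a dated tarball
BACKUP="ai-stack-backup-$(date +%F).tar.gz"
mkdir -p n8n/data mlflow/artifacts openwebui/data   # no-op if the stack already ran
tar czf "$BACKUP" n8n/data mlflow/artifacts openwebui/data
echo "Wrote $BACKUP"
```

Consider copying the tarball off the host, and remember that `.env` holds your secrets, so back it up separately and keep it out of shared storage.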
Contributions are welcome! Please read our Contributing Guide for details on how to contribute.
This project is licensed under the MIT License - see the LICENSE file for details.
- n8n - Workflow automation
- MLflow - Machine learning lifecycle
- Open WebUI - ChatGPT-like interface
- Ollama - Local LLM runner
- Nginx - Reverse proxy