- Ubuntu 24.04 LTS
- Minimum 4GB RAM (8GB+ recommended)
- 20GB+ free disk space
- CPU with AVX2 support for optimal performance
ssh root@your-vps-ip
sudo apt update && sudo apt install -y nano git python3 python3-pip
# Install Docker
sudo apt install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Add current user to docker group
sudo usermod -aG docker $USER
newgrp docker
# Start and enable Docker service
sudo systemctl start docker
sudo systemctl enable docker
sudo systemctl status docker
# If you see permission issues, you can try this quick (but insecure) workaround;
# prefer the docker group membership above:
sudo chmod 666 /var/run/docker.sock
# Verify Docker is working
docker --version
docker ps
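As an end-to-end sanity check, you can run a throwaway container (this pulls a tiny image from Docker Hub and confirms the daemon, networking, and your permissions all work):

```bash
# Should print a greeting and exit cleanly.
docker run --rm hello-world
```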
- Configure the firewall
sudo apt install -y ufw
sudo ufw enable
sudo ufw allow ssh
sudo ufw allow 80
sudo ufw allow 443
sudo ufw allow 3000   # WebUI
sudo ufw allow 5678   # n8n workflow
sudo ufw allow 8080   # (if you want to expose SearXNG)
sudo ufw allow 11434  # (if you want to expose Ollama)
sudo ufw allow 8501   # Archon Streamlit UI (if using Archon)
sudo ufw allow 5001   # DocLing Serve (if using DocLing)
sudo ufw reload
- Clone the repository and configure the environment
git clone https://github.com/ThijsdeZeeuw/small_kwintes_cloud.git
cd small_kwintes_cloud
nano .env
- Start services
python3 start_services.py --profile cpu
Note: If you have a GPU available, you can use:
# For NVIDIA GPUs with proper drivers installed
python3 start_services.py --profile cuda
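Before choosing the cuda profile, it is worth confirming the driver can actually see the GPU:

```bash
# Lists detected NVIDIA GPUs and the driver version; if this fails,
# fix the drivers before using --profile cuda.
nvidia-smi
```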
- Open the n8n workflow editor
- Navigate to http://YOUR_SERVER_IP:5678 in your browser (for example, http://46.202.155.155:5678)
- Use the following service endpoints when configuring credentials in n8n:
  - Qdrant: http://qdrant:6333
  - Ollama:
    - Docker version: http://ollama:11434
    - Local installation: http://host.docker.internal:11434/ (for a local Ollama setup)
  - Supabase: use the credentials from the .env file
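A quick way to confirm Qdrant and Ollama are up before wiring credentials is to hit their HTTP APIs from the VPS itself (this assumes the compose file publishes these ports to the host, as in the default setup):

```bash
# Qdrant: should return a JSON list of collections (empty on a fresh install).
curl -s http://localhost:6333/collections
# Ollama: should return the locally available models.
curl -s http://localhost:11434/api/tags
```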
If you prefer to run Ollama locally instead of using the Docker version included in the package:
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# Pull required models
ollama pull nomic-embed-text
ollama pull qwen2.5:7b-instruct-q4_K_M
# The install script registers a system-wide systemd service; make sure it is enabled and running
sudo systemctl enable ollama
sudo systemctl start ollama
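To verify the local install, you can list the pulled models and request a test embedding from Ollama's standard API:

```bash
ollama list
# Request an embedding from the nomic-embed-text model pulled above.
curl -s http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "hello world"}'
```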
- Access the WebUI
- Open http://YOUR_SERVER_IP:3000/ in your browser
- Configure Workspace Functions
- Go to Admin Settings -> Workspace -> Functions -> Add Function
- Give it a name and description
- URL: http://YOUR_SERVER_IP:3000/admin/functions (for example, http://46.202.155.155:3000/admin/functions)
- Add the n8n_pipe Function Code
- Copy the code from https://openwebui.com/f/coleam/n8n_pipe
- Change the webhook URL to: http://host.docker.internal:5678/webhook/invoke_n8n_agent
- Enable the function
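Once the function is enabled, you can sanity-check the webhook directly from the shell; a sketch, assuming the workflow is active and reads a JSON chat payload (the exact field names depend on your n8n workflow):

```bash
# Hypothetical payload; adjust the fields to match what your workflow expects.
curl -s -X POST http://YOUR_SERVER_IP:5678/webhook/invoke_n8n_agent \
  -H 'Content-Type: application/json' \
  -d '{"chatInput": "Hello from the WebUI pipe"}'
```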
- Add Tools
- Example: Add Wikipedia Tool
To make your n8n webhooks accessible from the internet, we've integrated ngrok:
- Set up an ngrok account
- Get a free account at https://ngrok.com/
- Copy your auth token from the dashboard
- Configure the environment
- Update the .env file with your ngrok auth token: NGROK_AUTHTOKEN=your-ngrok-auth-token
- Start services with ngrok
- Run python3 start_services.py --profile cpu
- The script will automatically configure ngrok and update your webhook URLs
- Using the ngrok URL
- When creating webhooks in n8n, they will automatically use your ngrok URL
- If using Telegram or other external services, the webhook URL will look like: https://xxxx-xxxx-xxxx.ngrok-free.app/webhook/YOUR-WEBHOOK-ID
For more details, see the NGROK_SETUP.md file.
Kwintes Cloud includes a Telegram login system for easy and secure authentication:
- Set up the Telegram bot
- Create a bot via BotFather on Telegram
- Get your bot token and set it in the .env file:
TELEGRAM_BOT_TOKEN=your-telegram-bot-token
TELEGRAM_BOT_USERNAME=Kwintes_cloud
- Configure your domain in BotFather under the Login Widget settings
- Access the login page
- Navigate to https://login.kwintes.cloud or your configured LOGIN_HOSTNAME
- The login page uses Telegram's secure widget for authentication
- Technical integration
- Frontend: HTML page with Telegram widget
- Backend: n8n workflow for authentication processing
- Database: Users stored in Supabase (PostgreSQL)
- Security features
- HMAC-SHA256 signature verification
- Row-level security in the database
- Secure token handling
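As an illustration of the HMAC-SHA256 verification Telegram's login widget uses: the secret key is the SHA-256 of the bot token, and the hash is an HMAC-SHA256 over the alphabetically sorted key=value fields. A sketch with openssl (all values below are placeholders):

```bash
BOT_TOKEN='your-telegram-bot-token'
# Fields from the widget callback, sorted alphabetically, joined with newlines (hash excluded).
DATA_CHECK_STRING=$'auth_date=1712345678\nfirst_name=Alice\nid=123456789\nusername=alice'
RECEIVED_HASH='hash-from-the-widget-callback'

# Secret key = SHA-256 of the bot token, as raw bytes (hex-encoded for openssl).
SECRET_KEY=$(printf '%s' "$BOT_TOKEN" | openssl dgst -sha256 -binary | xxd -p -c 256)

# HMAC-SHA256 of the data-check string, keyed with the secret key.
COMPUTED_HASH=$(printf '%s' "$DATA_CHECK_STRING" \
  | openssl dgst -sha256 -mac HMAC -macopt hexkey:"$SECRET_KEY" \
  | awk '{print $2}')

[ "$COMPUTED_HASH" = "$RECEIVED_HASH" ] && echo "signature valid" || echo "signature invalid"
```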
For detailed setup instructions, see TELEGRAM_LOGIN.md and README_TELEGRAM_INTEGRATION.md.
DocLing is a computational linguistics platform that can be integrated with your Local AI Package for natural language processing tasks.
- Pull the DocLing CPU-optimized image
docker pull ghcr.io/docling-project/docling-serve-cpu
# OR
docker pull quay.io/docling-project/docling-serve-cpu
- Run the DocLing container with the UI enabled
docker run -p 5001:5001 -e DOCLING_SERVE_ENABLE_UI=true quay.io/docling-project/docling-serve-cpu
- Access the DocLing UI
- Navigate to http://YOUR_SERVER_IP:5001 in your browser
- Install the Python package with UI dependencies
pip install "docling-serve[ui]"
- Run DocLing with the UI enabled
docling-serve run --enable-ui
You can add DocLing to your existing docker-compose.yml file to have it start with your other services:
services:
# ... existing services
docling:
image: quay.io/docling-project/docling-serve-cpu
ports:
- "5001:5001"
environment:
- DOCLING_SERVE_ENABLE_UI=true
restart: unless-stopped
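With the service defined, a minimal smoke test (assuming the compose project from this guide; the exact UI path may differ by image version):

```bash
docker compose up -d docling
# Expect an HTTP status line once the container is up.
curl -sI http://localhost:5001 | head -n 1
```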
You can create workflows in n8n that utilize DocLing's NLP capabilities:
- Add an HTTP Request node in n8n
- Method: POST
- URL: http://docling:5001/api/analyze
- Body:
{
  "text": "{{ $node['Previous Node'].json.text }}",
  "tasks": ["pos", "ner", "sentiment"]
}
- Process the results in subsequent nodes
- Parse the linguistic analysis results
- Use the structured data for further processing or decision-making
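For reference, the same request the HTTP Request node sends can be issued from the shell; a sketch using the /api/analyze endpoint and task names described above (verify the endpoint against your DocLing deployment, and run from the host using the published port):

```bash
curl -s -X POST http://localhost:5001/api/analyze \
  -H 'Content-Type: application/json' \
  -d '{"text": "Kwintes Cloud runs on Ubuntu 24.04.", "tasks": ["pos", "ner", "sentiment"]}'
```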
Archon is an AI orchestration framework that can be integrated with Local AI Package for enhanced capabilities.
- Python 3.11+
- Supabase account (for vector database)
- OpenAI/Anthropic/OpenRouter API key or Ollama for local LLMs
- Note: Only OpenAI supports streaming in the Streamlit UI currently
- Clone the Archon repository
git clone https://github.com/coleam00/archon.git
cd archon
- Set up with Docker (recommended)
# This will build both containers and start Archon
python3 run_docker.py
This script automatically:
- Builds the MCP server container
- Builds the main Archon container
- Runs Archon with appropriate port mappings
- Uses environment variables from .env file if it exists
- Access the Archon UI
- Navigate to http://YOUR_SERVER_IP:8501 in your browser
- Integration with the Local AI Package
- In your Archon configuration, you can point to the Local AI Package's services:
  - For vector databases, use Qdrant at http://localhost:6333
  - For local LLMs, use Ollama at http://localhost:11434
When using Model Context Protocol (MCP) servers with n8n in Docker deployments, you can configure them using environment variables. This enables AI agents to utilize various capabilities like search engines, weather data, and more.
Environment variables for MCP servers should be prefixed with MCP_ in your docker-compose file:
version: '3'
services:
n8n:
image: n8nio/n8n
environment:
# MCP server environment variables
- MCP_BRAVE_API_KEY=your-brave-api-key
- MCP_OPENAI_API_KEY=your-openai-key
- MCP_SERPER_API_KEY=your-serper-key
- MCP_WEATHER_API_KEY=your-weather-api-key
# Enable community nodes as tools
- N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true
ports:
- "5678:5678"
volumes:
- ~/.n8n:/home/node/.n8n
These environment variables will be automatically passed to your MCP servers when they are executed.
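To confirm the variables actually reached the container, you can inspect its environment (assuming the container is named n8n; adjust to your compose project's container name):

```bash
docker exec n8n printenv | grep '^MCP_'
```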
- Install the MCP server packages
For Brave Search:
npm install -g @modelcontextprotocol/server-brave-search
For other services, install the appropriate package:
npm install -g @modelcontextprotocol/server-openai
npm install -g @modelcontextprotocol/server-serper
npm install -g @modelcontextprotocol/server-weather
- Configure MCP Client credentials in n8n
For each MCP server:
- Open n8n workflow editor
- Go to Credentials > Add Credentials
- Select MCP Client
- Set Command: npx
- Set Arguments: -y @modelcontextprotocol/server-[service-name]
- Add any required environment variables
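Before wiring a credential into a workflow, you can smoke-test a server from the shell; for example, the Brave Search server should start and then wait for MCP messages on stdin (Ctrl+C to exit):

```bash
# The server reads its API key from the environment, as configured in the credential above.
BRAVE_API_KEY=your-api-key npx -y @modelcontextprotocol/server-brave-search
```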
- Using MCP servers with the AI Agent
- Add an AI Agent node to your workflow
- Enable MCP Client as a tool
- Configure different MCP Client nodes with different credentials
- Create prompts that leverage multiple data sources
- Configure credentials:
- Command: npx
- Arguments: -y @modelcontextprotocol/server-brave-search
- Environment Variables: BRAVE_API_KEY=your-api-key
- Create a workflow:
- Add an MCP Client node
- Select "List Tools" operation to see available search tools
- Add another MCP Client node
- Select "Execute Tool" operation
- Choose the "brave_search" tool
- Set Parameters to: {"query": "latest AI news"}
If you're running a local MCP server that supports Server-Sent Events (SSE):
- Start the local MCP server:
npx @modelcontextprotocol/server-example-sse
- Configure MCP Client credentials in n8n:
- Select Connection Type: Server-Sent Events (SSE)
- Create new credentials of type MCP Client (SSE) API
- Set SSE URL: http://localhost:3001/sse
- Add any required headers for authentication
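You can check that the SSE endpoint is reachable and streaming before pointing n8n at it; -N disables curl's buffering so events print as they arrive:

```bash
curl -N -H 'Accept: text/event-stream' http://localhost:3001/sse
```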
To use MCP clients as tools in AI Agent nodes, set the environment variable:
export N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true
In Docker, add this to your environment configuration:
environment:
- N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true
To ensure the services start on system boot:
sudo nano /etc/systemd/system/localai.service
Add the following content:
[Unit]
Description=Local AI Package
After=docker.service
Requires=docker.service
[Service]
Type=simple
User=YOUR_USERNAME
WorkingDirectory=/path/to/local-ai-packaged
ExecStart=/usr/bin/python3 start_services.py --profile cpu
Restart=on-failure
RestartSec=10
[Install]
WantedBy=multi-user.target
Enable and start the service:
sudo systemctl daemon-reload
sudo systemctl enable localai.service
sudo systemctl start localai.service
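After starting the unit, verify it is active and watch its output:

```bash
sudo systemctl status localai.service
# Follow the service logs live:
sudo journalctl -u localai.service -f
```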
# Stop all services
docker compose -p localai -f docker-compose.yml -f supabase/docker/docker-compose.yml down
# Pull latest versions of all containers
docker compose -p localai -f docker-compose.yml -f supabase/docker/docker-compose.yml pull
# Start services again with your desired profile
python3 start_services.py --profile <your-profile>
- Check the GitHub README
- Visit the Community Forum
- Check the service logs: docker compose -p localai logs
- Check system resources with htop (install with sudo apt install htop if needed)
If you see an error like:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Fix it with these commands:
# Start the Docker service
sudo systemctl start docker
# Check if it's running
sudo systemctl status docker
# Enable it to start automatically on boot
sudo systemctl enable docker
# If you have permission issues
sudo chmod 666 /var/run/docker.sock
Then try running the start script again:
python3 start_services.py --profile cpu