# Beta - not currently working
A security-first, production-ready autoscaling solution for n8n workflow automation. Deploy n8n securely with zero open ports using Cloudflare tunnels, automatic scaling, and enterprise-grade features.
Traditional n8n deployments expose your server directly to the internet. This is dangerous.
This solution provides:
- Zero Attack Surface: Cloudflare tunnels mean no open ports on your server
- Global DDoS Protection: Cloudflare's network shields your instance
- Automatic HTTPS: Valid SSL certificates without manual configuration
- Auto-scaling: Handle any workload, from 1 to hundreds of workflows
- Production Grade: Comprehensive backup/restore, monitoring, and security
- Battle Tested: Handles hundreds of simultaneous executions on an 8-core, 16 GB VPS
- One-Click Deploy: Interactive setup wizard configures everything
- Multi-Platform: Auto-detects Docker/Podman and ARM64/AMD64
- Enterprise Ready: Backup/restore, monitoring, Tailscale VPN support
This project extends the excellent original setup by @conor-is-my-name, who details the full build here:
- Detailed Guide: https://www.reddit.com/r/n8n/comments/1l9mi6k/major_update_to_n8nautoscaling_build_step_by_step/
When I get around to it I will update the how-to in this repo to include more of that information.
```mermaid
graph TD
    A[n8n Main] -->|Queues jobs| B[Redis]
    B -->|Monitors queue| C[Autoscaler]
    C -->|Scales| D[n8n Workers]
    B -->|Monitors queue| E[Redis Monitor]
    F[PostgreSQL] -->|Stores data| A
    A -->|Webhooks| G[n8n Webhook]
```
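The services in the diagram map onto a Compose stack roughly like the sketch below. This is an illustration only, not the repository's actual `docker-compose.yml` — the service names come from the diagram, while the images and settings shown are assumptions:

```yaml
# Illustrative sketch; the repository ships its own docker-compose.yml.
services:
  n8n:
    image: n8nio/n8n
    environment:
      - EXECUTIONS_MODE=queue          # push jobs to Redis instead of running inline
      - QUEUE_BULL_REDIS_HOST=redis
    depends_on: [postgres, redis]
  n8n-worker:
    image: n8nio/n8n
    command: worker                    # consumes jobs from the Redis queue
    depends_on: [redis]
  redis:
    image: redis:7-alpine
  postgres:
    image: postgres:16-alpine
  autoscaler:
    build: ./autoscaler                # monitors queue length, scales n8n-worker
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```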
- Cloudflare tunnel integration (recommended) - Zero open ports, DDoS protection, automatic HTTPS
- Tailscale VPN support - Private network access for teams
- Secure password generation - Cryptographically secure secrets with optional salt
- Environment isolation - Separate dev/test/production configurations
- Dynamic worker scaling based on Redis queue length
- Configurable thresholds for scale-up/down decisions
- Multi-architecture support (ARM64/AMD64) with auto-detection
- Container runtime flexibility (Docker/Podman) with auto-detection
- Performance tuning variables for all components
- One-script installation with interactive setup wizard
- Comprehensive backup system with smart PostgreSQL backups (full/incremental)
- Point-in-time restore with interactive recovery tool
- Rclone cloud storage integration (70+ providers)
- Health checks for all services
- Systemd integration for production deployments
- Docker or Podman, with the matching Compose plugin
- Cloudflare account with a domain (free account works fine)
- Linux/macOS/WSL2 environment for setup scripts
- Get Docker: For new users, we recommend Docker Desktop, or the Docker convenience script on Ubuntu
- Cloudflare Domain: Set up a domain in Cloudflare (can be transferred from another provider)
- Cloudflare Tunnel: Create a tunnel token at Cloudflare Zero Trust → Access → Tunnels
Why Cloudflare Tunnels?
- No open ports: Your server stays completely private
- Built-in DDoS protection: Cloudflare's global network protects your instance
- Free SSL/TLS: Automatic HTTPS with valid certificates
- Access control: Optional authentication and access policies
- Better performance: Cloudflare's CDN speeds up your workflows
- Create a Cloudflare account and add your domain
- Navigate to Zero Trust → Access → Tunnels → Create tunnel
- Copy your tunnel token (starts with `eyJ...`)
- Configure ingress rules in the tunnel dashboard:
  - Add `n8n.yourdomain.com` → `http://n8n:5678`
  - Add `webhook.yourdomain.com` → `http://n8n-worker:5679`
- Save the configuration - DNS records are created automatically
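For reference, the same routing rules can be expressed as a local `config.yml` if you run cloudflared with a locally managed tunnel instead of the dashboard; the tunnel ID, credentials path, and hostnames below are placeholders:

```yaml
# Locally-managed cloudflared tunnel config (illustrative placeholders)
tunnel: <your-tunnel-id>
credentials-file: /etc/cloudflared/credentials.json
ingress:
  - hostname: n8n.yourdomain.com
    service: http://n8n:5678
  - hostname: webhook.yourdomain.com
    service: http://n8n-worker:5679
  - service: http_status:404   # catch-all rule required by cloudflared
```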
- Clone this repository:

```shell
git clone <repository-url>
cd n8n-autoscaling
```

- Run the interactive setup wizard:

```shell
./n8n-setup.sh
```

The wizard will guide you through:
- Secure password generation
- Cloudflare tunnel configuration (recommended)
- Data directory setup
- Database initialization
- Health checks and testing

- Start your secure n8n instance:

```shell
docker compose up -d
```

That's it! Your n8n instance is now running securely through Cloudflare tunnels.
The setup wizard automatically chooses the optimal architecture:
Cloudflare Tunnels (Recommended)
- Auto-detected: When a tunnel token is configured
- No Traefik: Direct tunnel → n8n connection
- Zero ports: Maximum security
- Command:

```shell
docker compose -f docker-compose.yml -f docker-compose.cloudflare.yml up -d
```
Traditional Setup (Fallback)
- Used when: No tunnel token is configured
- Includes Traefik: Reverse proxy for port exposure
- Requires: Firewall configuration and SSL setup
- Command:

```shell
docker compose -f docker-compose.yml up -d
```
- Tailscale VPN: Private network access for teams
- Custom reverse proxy: Integration with existing infrastructure
The `n8n-setup.sh` script provides:
- Interactive Configuration: Step-by-step guided setup
- Automatic Path Resolution: Converts relative paths to absolute for Docker compatibility
- Environment Management: Create dev/test/production environments
- Security: Generates secure random passwords with optional salt
- Security Features: Cloudflare tunnel integration (recommended), Tailscale VPN support, secure password generation
- Optional Features: Rclone mounts (any cloud storage), external networks, custom reverse proxy
- Database Setup: Automatic PostgreSQL and Redis initialization
- Health Checks: Verifies services are running correctly
- Reset Options: Clean slate functionality if you need to start over
- Systemd Integration: Make your installation persistent across reboots
If you need to start fresh or have issues with credentials:
```shell
./n8n-setup.sh
# Select option 3: Reset environment
```
Reset options include:
- Everything: Removes all data, .env file, and Docker resources
- Just Data: Keeps configuration but removes all database/app data
- Just .env: Removes configuration file (warning: existing data won't be accessible)
Optional cloud storage support for data and backups using rclone (70+ providers supported).
- Install rclone: `curl https://rclone.org/install.sh | sudo bash`
- Configure a provider: `rclone config` (set up your Google Drive, S3, etc.)
- Create mounts: Set up automatic mounting for data and backups
- Enable in n8n: Uncomment the rclone variables in `.env` and use the rclone compose override
Supported Providers: Google Drive, OneDrive, Dropbox, AWS S3, Azure Blob, Backblaze B2, SFTP, and many more.
Use rootless containers (Podman/Docker) for best security and simplified permissions with rclone mounts.
Detailed Setup Guide: docs/rclone-mounts.md
- Provider-specific examples (Google Drive, S3, OneDrive, SFTP)
- Docker vs Podman considerations
- Performance tuning for media files
- Systemd service configuration
- Troubleshooting common issues
- Make sure you set your own passwords and encryption keys in the `.env` file if you don't use the setup script
- By default each worker handles 10 tasks at a time; you can modify this in the docker-compose file under:
  - `N8N_CONCURRENCY_PRODUCTION_LIMIT=10`
- Adjust these to be greater than your longest expected workflow execution time, measured in seconds:
  - `N8N_QUEUE_BULL_GRACEFULSHUTDOWNTIMEOUT=300`
  - `N8N_GRACEFUL_SHUTDOWN_TIMEOUT=300`
| Variable | Description | Default |
|---|---|---|
| `MIN_REPLICAS` | Minimum number of worker containers | 1 |
| `MAX_REPLICAS` | Maximum number of worker containers | 5 |
| `SCALE_UP_QUEUE_THRESHOLD` | Queue length to trigger scale-up | 5 |
| `SCALE_DOWN_QUEUE_THRESHOLD` | Queue length to trigger scale-down | 2 |
| `POLLING_INTERVAL_SECONDS` | How often to check queue length | 30 |
| `COOLDOWN_PERIOD_SECONDS` | Time between scaling actions | 180 |
| `QUEUE_NAME_PREFIX` | Redis queue prefix | bull |
| `QUEUE_NAME` | Redis queue name | jobs |
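In `.env`, these settings look like the fragment below; the values simply restate the defaults from the table above:

```shell
# Autoscaler settings (documented defaults)
MIN_REPLICAS=1
MAX_REPLICAS=5
SCALE_UP_QUEUE_THRESHOLD=5
SCALE_DOWN_QUEUE_THRESHOLD=2
POLLING_INTERVAL_SECONDS=30
COOLDOWN_PERIOD_SECONDS=180
QUEUE_NAME_PREFIX=bull
QUEUE_NAME=jobs
```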
Ensure these n8n environment variables are set:

```shell
EXECUTIONS_MODE=queue
QUEUE_BULL_REDIS_HOST=redis
QUEUE_HEALTH_CHECK_ACTIVE=true
```
The autoscaler:
- Monitors the Redis queue length every `POLLING_INTERVAL_SECONDS`
- Scales up when:
  - Queue length > `SCALE_UP_QUEUE_THRESHOLD`
  - Current replicas < `MAX_REPLICAS`
- Scales down when:
  - Queue length < `SCALE_DOWN_QUEUE_THRESHOLD`
  - Current replicas > `MIN_REPLICAS`
- Respects the cooldown period between scaling actions
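The decision rule above can be sketched in Python. This is an illustration of the documented thresholds, not the repository's actual autoscaler code:

```python
# Sketch of one polling cycle's scaling decision, using the documented defaults.
def decide_scaling(queue_length: int, replicas: int,
                   scale_up_threshold: int = 5,
                   scale_down_threshold: int = 2,
                   min_replicas: int = 1,
                   max_replicas: int = 5) -> int:
    """Return the desired replica count for this polling cycle."""
    if queue_length > scale_up_threshold and replicas < max_replicas:
        return replicas + 1   # queue backed up and room to grow: scale up
    if queue_length < scale_down_threshold and replicas > min_replicas:
        return replicas - 1   # queue nearly empty: scale down
    return replicas           # within thresholds or at a limit: no change

# Example: 8 queued jobs with 2 workers -> add a worker
print(decide_scaling(queue_length=8, replicas=2))  # -> 3
```

The cooldown period would wrap this logic: after any change, the autoscaler skips further scaling decisions for `COOLDOWN_PERIOD_SECONDS`.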
The system includes:
- Redis queue monitor service (`redis-monitor`)
- Docker health checks for all services
- Detailed logging from the autoscaler
To enable automatic container updates, use Watchtower:
```shell
docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower \
  --schedule "0 0 2 * * *" \
  --cleanup
```
Podman auto-update is configured automatically when using the systemd service generator (`./generate-systemd.sh`). The system checks for updates daily and restarts containers with newer images.
The system includes automated backup functionality with incremental PostgreSQL backups to minimize storage and improve performance.
- PostgreSQL: Smart backup system (full every 12h, incremental hourly)
- Redis: Database snapshots using BGSAVE (compressed)
- n8n Data: Complete data directories including webhook data (compressed)
- Encryption: All backups are automatically encrypted using your `N8N_ENCRYPTION_KEY`
The system uses a sophisticated backup approach:
- Full Backups: Complete database dump (larger, standalone restore)
- Incremental Backups: WAL (Write-Ahead Log) files only (smaller, faster)
- Smart Backup: Automatically chooses full (every 12h) or incremental (hourly)
This approach reduces backup time and storage space while maintaining complete recovery capability.
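As a sketch, the full-vs-incremental choice might look like the following; the 12-hour interval comes from the description above, but the exact logic inside `backup.sh` may differ:

```python
# Assumed "smart backup" rule: full if no full backup in the last 12 hours,
# otherwise incremental (WAL files only).
from datetime import datetime, timedelta
from typing import Optional

FULL_BACKUP_INTERVAL = timedelta(hours=12)

def choose_backup_type(last_full: Optional[datetime], now: datetime) -> str:
    """Return 'full' or 'incremental' for this hourly backup run."""
    if last_full is None or now - last_full >= FULL_BACKUP_INTERVAL:
        return "full"          # standalone dump, restorable on its own
    return "incremental"       # ship only WAL files since the last full

now = datetime(2024, 1, 15, 14, 0)
print(choose_backup_type(datetime(2024, 1, 15, 2, 0), now))   # -> full
print(choose_backup_type(datetime(2024, 1, 15, 13, 0), now))  # -> incremental
```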
# Smart backup (recommended - automatically chooses full or incremental)
./backup.sh
# Force specific backup types
./backup.sh postgres-full # Force full PostgreSQL backup
./backup.sh postgres-incremental # Force incremental PostgreSQL backup
./backup.sh postgres # Smart PostgreSQL backup
./backup.sh redis # Redis backup only
./backup.sh n8n # n8n data backup only
# View help and cron examples
./backup.sh --help
All backups are automatically encrypted using AES-256-CBC with your `N8N_ENCRYPTION_KEY`. This provides enterprise-grade security for your backup data.
Encryption Features:
- Automatic: All backups (PostgreSQL, Redis, n8n data) are encrypted by default
- Secure: Uses AES-256-CBC encryption with salt
- Key Management: Uses your existing `N8N_ENCRYPTION_KEY` (the same key n8n uses)
- Transparent: The restore script automatically handles encrypted backups
Key Requirements:
- Your `N8N_ENCRYPTION_KEY` must be at least 16 characters
- The same key is required for backup and restore operations
- Keep your encryption key secure and backed up separately
File Extensions:
- Encrypted backups: `.gz.enc` (e.g., `postgres_full_20240115_143022.sql.gz.enc`)
- Unencrypted backups: `.gz` (when no encryption key is available)
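The scheme can be illustrated with a stand-alone `openssl` round trip. The exact flags used by `backup.sh` are not documented here, so treat this as a demonstration of passphrase-based AES-256-CBC, not the script's literal invocation:

```shell
# Demonstration only: encrypt and decrypt a gzipped "backup" with AES-256-CBC,
# deriving the key from N8N_ENCRYPTION_KEY (flags are assumptions).
export N8N_ENCRYPTION_KEY="example-key-at-least-16-chars"
echo "backup data" > backup.sql
gzip -f backup.sql                               # -> backup.sql.gz
openssl enc -aes-256-cbc -salt -pbkdf2 \
  -pass env:N8N_ENCRYPTION_KEY \
  -in backup.sql.gz -out backup.sql.gz.enc       # encrypted .gz.enc artifact
openssl enc -d -aes-256-cbc -pbkdf2 \
  -pass env:N8N_ENCRYPTION_KEY \
  -in backup.sql.gz.enc | gunzip > restored.sql  # decrypt, then decompress
cat restored.sql                                 # -> backup data
```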
Add to crontab for automated backups:
Option 1: Simple Smart Backups (Recommended)

```shell
# Smart backups - full every 12h, incremental hourly
0 * * * * /path/to/n8n-autoscaling/backup.sh >/dev/null 2>&1
```

Option 2: Separate Service Schedules

```shell
# PostgreSQL full backup twice daily
0 0,12 * * * /path/to/n8n-autoscaling/backup.sh postgres-full >/dev/null 2>&1

# PostgreSQL incremental hourly (skip full backup hours)
0 1-11,13-23 * * * /path/to/n8n-autoscaling/backup.sh postgres-incremental >/dev/null 2>&1

# Other services hourly
30 * * * * /path/to/n8n-autoscaling/backup.sh redis >/dev/null 2>&1
45 * * * * /path/to/n8n-autoscaling/backup.sh n8n >/dev/null 2>&1
```
```shell
# Edit your crontab
crontab -e

# Add the recommended line (Option 1)
0 * * * * /path/to/n8n-autoscaling/backup.sh >/dev/null 2>&1

# Save and verify
crontab -l
```
To enable automatic cloud storage sync:
- Uncomment `RCLONE_BACKUP_MOUNT` in `.env`
- Ensure your rclone remote is mounted at the specified path
- Backups will automatically sync to cloud storage, and local copies will be removed
- 7-day retention is maintained on cloud storage
- PostgreSQL Full: ~50-500MB (depends on data size)
- PostgreSQL Incremental: ~1-50MB (depends on activity)
- Redis: ~1-100MB (depends on queue size)
- n8n Data: ~10-200MB (depends on workflows and executions)
Storage Locations:
- Local: `./backups/{postgres,redis,n8n}/` (if not using rclone cloud storage)
- Rclone Cloud Storage: Configured path with automatic cleanup
- Retention: 7 days for all backup types
The system includes an interactive restore script that automates recovery:
```shell
# Interactive restore (recommended)
./restore.sh

# List available backups
./restore.sh --list

# Dry run (see what would be restored)
./restore.sh --dry-run

# Help and options
./restore.sh --help
```
Restore Script Features:
- Interactive Menu: Choose service and backup point
- Multi-Source Discovery: Finds backups from both local and rclone cloud storage
- Automatic Decryption: Handles encrypted backups transparently using `N8N_ENCRYPTION_KEY`
- Safety Backup: Creates backup of current data before restore
- Integrity Validation: Verifies backup files before restore
- Smart Container Management: Safely stops/starts containers
- Point-in-Time Recovery: Shows backups with timestamps
- Dry Run Mode: Preview restore without making changes
Safety Features:
- Multiple confirmation prompts
- Automatic safety backup creation
- Backup integrity validation
- Container state management
- Detailed progress reporting
```shell
# Manual PostgreSQL restore from a full backup
gunzip < postgres_full_20240115_143022.sql.gz | docker compose exec -T postgres psql -U postgres -d n8n

# For incremental recovery:
# 1. Restore the latest full backup
# 2. Apply WAL files in chronological order
# (Use restore.sh for automated incremental recovery)
```
The system includes extensive performance tuning options in `.env.example`. Uncomment and adjust these variables as needed:

- `N8N_CONCURRENCY_PRODUCTION_LIMIT`: Tasks per worker (default: 10)
- `N8N_EXECUTIONS_DATA_PRUNE`: Enable automatic execution data cleanup
- `N8N_EXECUTIONS_DATA_MAX_AGE`: Keep executions for X hours (default: 336 = 2 weeks)
- `NODE_OPTIONS`: Node.js memory limits and optimization flags
- `UV_THREADPOOL_SIZE`: Node.js thread pool size for I/O operations
- `POSTGRES_SHARED_BUFFERS`: Memory for caching data (default: 256MB)
- `POSTGRES_EFFECTIVE_CACHE_SIZE`: Total memory available for caching (default: 1GB)
- `POSTGRES_WORK_MEM`: Memory per query operation (default: 4MB)
- `POSTGRES_MAX_WORKER_PROCESSES`: Background worker processes
- Parallel query settings for improved performance on multi-core systems
- `REDIS_MAXMEMORY`: Maximum memory usage (default: 512mb)
- `REDIS_MAXMEMORY_POLICY`: Eviction policy when the memory limit is reached
- `REDIS_SAVE_*`: Persistence configuration for snapshots
- Connection and networking optimizations
- `AUTOSCALER_CPU_LIMIT`: CPU limit for the autoscaler container
- `AUTOSCALER_MEMORY_LIMIT`: Memory limit for the autoscaler container
- `AUTOSCALER_REDIS_POOL_SIZE`: Connection pool size for Redis
- `AUTOSCALER_DOCKER_TIMEOUT`: Timeout for Docker operations
All performance variables are commented out by default. Uncomment and adjust based on your system resources and workload requirements.
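For example, an `.env` with a few of these uncommented might look like the fragment below. The values restate the documented defaults, except `N8N_EXECUTIONS_DATA_PRUNE=true`, which is an assumed value for enabling cleanup:

```shell
# Example performance tuning (defaults shown; adjust for your hardware)
N8N_CONCURRENCY_PRODUCTION_LIMIT=10
N8N_EXECUTIONS_DATA_PRUNE=true     # assumed value to enable pruning
N8N_EXECUTIONS_DATA_MAX_AGE=336    # hours (2 weeks)
POSTGRES_SHARED_BUFFERS=256MB
POSTGRES_EFFECTIVE_CACHE_SIZE=1GB
POSTGRES_WORK_MEM=4MB
REDIS_MAXMEMORY=512mb
```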
- Zero Attack Surface: No open ports on your server
- DDoS Protection: Cloudflare's global network shields your instance
- Automatic HTTPS: Valid SSL certificates without configuration
- Access Control: Optional authentication and IP restrictions
- Audit Logs: Track all access attempts
Container Runtime Security: The autoscaler requires container runtime socket access. Security varies significantly:
- Rootless Podman - Maximum security; containers run as a regular user
- Rootless Docker - Good security with user namespaces
- Rootful Podman - Limited security; some root access
- Rootful Docker - Poor security; equivalent to full root access
Automatic Security Features:
- Smart Detection: Setup script automatically detects and ranks available container runtimes
- Security Warnings: Clear warnings displayed for rootful modes with migration guidance
- Migration Instructions: Step-by-step commands to upgrade to more secure configurations
- User Confirmation: Explicit acknowledgment required to proceed with less secure setups
Database Security:
- PostgreSQL defaults to localhost-only binding for security
- Use strong passwords (automatically generated by setup script)
- Consider enabling Tailscale for secure remote database access
Backup Security:
- Automatic Encryption: All backups encrypted with AES-256-CBC using `N8N_ENCRYPTION_KEY`
- Secure by Default: No configuration needed - encryption happens automatically
- Key Management: Keep your `N8N_ENCRYPTION_KEY` secure and backed up separately
Before deploying to production, ensure:
Authentication & Access
- Change all default passwords (setup script enforces this)
- Enable Cloudflare tunnels (recommended) or configure firewall rules
- Set up Tailscale VPN for team access (optional)
- Configure proper user management in n8n
Network Security
- PostgreSQL bound to localhost only (default)
- Redis password authentication enabled (default)
- No unnecessary ports exposed to internet
- External network configuration reviewed
Data Protection
- Backup encryption configured
- SSL/TLS certificates valid
- Environment variables secured
- Log files protected from unauthorized access
System Hardening
- Docker socket access reviewed and understood
- Container user permissions verified
- System updates applied
- Monitoring and alerting configured
Webhook URLs: When using Cloudflare tunnels, webhooks automatically use your secure subdomain:
https://webhook.yourdomain.com/webhook/d7e73b77-6cfb-4add-b454-41e4c91461d8
| Method | Security | Setup Complexity | Public Access | DDoS Protection | Certificate Management |
|---|---|---|---|---|---|
| Cloudflare Tunnel | Excellent | Easy | Yes | Built-in | Automatic |
| Tailscale VPN | Excellent | Medium | No | None | Automatic |
| Direct Exposure | Poor | Hard | Yes | None | Manual |
Recommendation: Use Cloudflare tunnels for production deployments. Only consider alternatives for specific use cases like private team access (Tailscale) or development environments (direct).
- Tunnel not connecting: Verify your tunnel token is correct and active
- DNS not resolving: Check that Cloudflare DNS records were created automatically
- 502 Bad Gateway: Ensure n8n services are running: `docker compose ps`
- Tunnel status: Check tunnel logs: `docker compose logs cloudflared`
- Check container logs: `docker compose logs [service]`
- Verify the Redis connection: `docker compose exec redis redis-cli -a "${REDIS_PASSWORD}" ping`
- Check the queue length: `docker compose exec redis redis-cli -a "${REDIS_PASSWORD}" LLEN bull:jobs:wait`
- Database connection: `docker compose exec postgres pg_isready`
- Workers not scaling: Check the Redis connection and queue monitoring
- Slow responses: Review the `N8N_CONCURRENCY_PRODUCTION_LIMIT` setting
- Memory issues: Monitor container resource usage: `docker stats`
MIT License - See LICENSE for details.