This comprehensive guide helps you set up and manage a Gensyn testnet node on a GPU server. The Gensyn network is a decentralized compute network for AI training, and running a node allows you to participate in the network and earn rewards.
| Component | Minimum Requirement | Recommended |
|---|---|---|
| CPU | arm64 or amd64 | amd64 |
| RAM | 25 GB | 32 GB+ |
| GPU | RTX 3090, RTX 4090, A100, or H100 | RTX 4090 or A100 |
| Storage | 50 GB | 100 GB+ |
| Python | 3.10 or higher | 3.11+ |
| Network | Stable internet connection | High-speed connection |
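Before renting or provisioning anything, you can sketch a quick pre-flight check against the table above. This is an illustrative script, not part of the official setup: it assumes a Linux host with `/proc/meminfo`, and the GPU check is skipped when `nvidia-smi` (installed with the NVIDIA drivers) is absent.

```bash
#!/usr/bin/env bash
# Pre-flight check against the minimum requirements above (illustrative).

# Python 3.10+
python3 -c 'import sys; print("Python:", "OK" if sys.version_info >= (3, 10) else "too old")'

# RAM (need 25 GB+); MemTotal is reported in kB
ram_gb=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
echo "RAM: ${ram_gb} GB (need 25+)"

# Free disk space in $HOME (need 50 GB+)
disk_gb=$(df -BG --output=avail "$HOME" | tail -n 1 | tr -dc '0-9')
echo "Disk: ${disk_gb} GB free (need 50+)"

# GPU (need RTX 3090/4090, A100, or H100)
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
else
  echo "GPU: nvidia-smi not found (install NVIDIA drivers)"
fi
```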
- Visit Quick Pod: Go to Quick Pod Console
- Create Account: Sign up with your email and verify your account
- Add Funds: Deposit funds using crypto or credit card
- Select Template: Go to Templates → select CUDA 12.6
- Configure Docker: Clone the template and edit the Docker options:

  ```
  -p 8888:8888 -p 3000:3000
  ```
- Choose GPU: Select RTX 4090 (recommended) or RTX 3090
- Create POD: Click Create POD and wait for initialization
- Access Terminal: Click Connect → Web Terminal
- Install System Dependencies:

  ```bash
  # Update package lists
  apt update && apt install -y sudo
  sudo apt update

  # Install essential packages
  sudo apt install -y python3 python3-venv python3-pip curl wget screen git lsof htop

  # Install Yarn for Node.js dependencies
  curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
  echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
  sudo apt update && sudo apt install -y yarn
  ```
- Install Node.js:

  ```bash
  curl -sSL https://raw.githubusercontent.com/zunxbt/installation/main/node.sh | bash
  ```
- Clone Repository:

  ```bash
  cd $HOME
  [ -d rl-swarm ] && rm -rf rl-swarm
  git clone https://github.com/gensyn-ai/rl-swarm.git
  cd rl-swarm
  ```
- Create Virtual Environment:

  ```bash
  python3 -m venv .venv
  source .venv/bin/activate
  ```
- Start tmux Session:

  ```bash
  tmux new-session -d -s gensyn
  tmux attach-session -t gensyn
  ```
- Run the Swarm:

  ```bash
  ./run_rl_swarm.sh
  ```
- Complete Setup Prompts:
  - Connect to Testnet? → Type `Y` and press Enter
  - Push models to Hugging Face? → Type `Y` and press Enter
  - Hugging Face token → Paste your write token (get it from Hugging Face Settings)
- Detach from tmux: Press `Ctrl + B`, then `D`
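If any step above fails, a quick way to see which dependency is missing from PATH is a small helper like this (the tool list mirrors the install steps above; `check_tool` is an illustrative name, not part of the repository):

```bash
# Report whether each required tool from the steps above is on PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: MISSING"
  fi
}

for tool in git python3 node yarn tmux; do
  check_tool "$tool"
done
```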
For better reliability, use the included auto-restart script:
- Make Script Executable:

  ```bash
  chmod +x script.sh
  ```

- Start Auto-Restart Monitoring:

  ```bash
  ./script.sh
  ```

- Useful Commands:

  ```bash
  ./script.sh status   # Check node status
  ./script.sh logs     # View logs
  ./script.sh clean    # Clean up processes
  ./script.sh stop     # Stop auto-restart
  ```
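The contents of `script.sh` are not reproduced here, but the core of such a watchdog is a check-and-relaunch loop. The sketch below is hypothetical, not the actual script: the session name, log path, and function names are illustrative.

```bash
#!/usr/bin/env bash
# Hypothetical watchdog sketch -- NOT the repository's script.sh, just the
# idea: if the swarm process disappears, relaunch it inside tmux.
SESSION="gensyn"
LOG="$HOME/gensyn_auto_restart.log"

node_is_running() {
  pgrep -f "python.*run_rl_swarm" >/dev/null 2>&1
}

restart_node() {
  echo "$(date '+%F %T') node down, restarting" >> "$LOG"
  tmux kill-session -t "$SESSION" 2>/dev/null || true
  tmux new-session -d -s "$SESSION" "cd \$HOME/rl-swarm && ./run_rl_swarm.sh"
}

# Check once a minute; run this loop under nohup or its own tmux session:
# while true; do node_is_running || restart_node; sleep 60; done
```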
```bash
# List sessions
tmux list-sessions

# Attach to session
tmux attach-session -t gensyn

# Detach from session (while inside): Ctrl + B, then D

# Kill session
tmux kill-session -t gensyn
```
- NEVER lose your `swarm.pem` file - it's your node's unique identity
- Always use the same email for your node to maintain consistency
- Keep backups of your `swarm.pem` in multiple secure locations
- Never share your private keys with anyone
- Access Jupyter Lab: Open `http://your-server-ip:8888` in your browser
- Upload/Download: Use the file browser to manage your `swarm.pem`
- Backup: Download and store in a secure location
```bash
# Download from server
scp username@server-ip:~/rl-swarm/swarm.pem ./swarm.pem

# Upload to server
scp ./swarm.pem username@server-ip:~/rl-swarm/swarm.pem

# Upload to cloud storage (example with rclone)
rclone copy swarm.pem remote:backups/

# Download from cloud storage
rclone copy remote:backups/swarm.pem ./
```
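After copying, it is worth verifying the backup byte-for-byte before trusting it. A minimal helper might look like the following sketch (`verify_backup` and the example paths are illustrative, not part of the repository):

```bash
# Compare an original key file against a backup copy; cmp -s is silent
# and returns non-zero on any byte difference.
verify_backup() {
  local original="$1" backup="$2"
  if cmp -s "$original" "$backup"; then
    echo "backup OK"
  else
    echo "backup MISMATCH" >&2
    return 1
  fi
}

# Example: verify_backup ~/rl-swarm/swarm.pem ./swarm.pem
```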
- Install Cloudflare Tunnel:

  ```bash
  wget https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb
  sudo dpkg -i cloudflared-linux-amd64.deb
  ```

- Create Tunnel:

  ```bash
  cloudflared tunnel --url http://localhost:3000
  ```

- Follow Authentication: Complete the Cloudflare authentication process
- Access: Use the provided public URL to access your node
```bash
# Forward local port to remote server
ssh -L 3000:localhost:3000 username@server-ip
# Then access via http://localhost:3000
```
```bash
# Using the script
./script.sh status

# Manual checks
tmux list-sessions
pgrep -af "python.*run_rl_swarm"
nvidia-smi   # Check GPU usage

# Real-time logs
./script.sh logs

# Manual log viewing
tail -f ~/gensyn_auto_restart.log

# GPU usage
nvidia-smi -l 1

# System resources
htop

# Disk usage
df -h
```
```bash
# Check if tmux session exists
tmux list-sessions

# Kill existing sessions
tmux kill-session -t gensyn

# Clean up processes
pkill -f "python.*run_rl_swarm"
pkill -f "python.*main"

# Restart
./script.sh start

# Remove cached auth data
rm -rf modal-login/temp-data/*.json

# Re-authenticate
./run_rl_swarm.sh

# Check GPU memory
nvidia-smi

# Restart if memory is stuck
sudo systemctl restart nvidia-persistenced

# Check connectivity
ping 8.8.8.8

# Check DNS
nslookup google.com

# Restart network (if needed)
sudo systemctl restart networking
```
- GPU Memory: Ensure you have at least 2GB free VRAM
- CPU Priority: Run with nice priority for better performance
- Network: Use stable, high-speed internet connection
- Storage: Use SSD storage for better I/O performance
- Participation: Earn rewards by participating in training rounds
- Performance: Better hardware = higher potential rewards
- Uptime: More uptime = more opportunities to earn
- Network: Rewards distributed based on contribution to the network
- Monitor your node's participation in the Gensyn dashboard
- Check logs for successful round completions
- Track your Hugging Face model uploads
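To check logs for round completions, a small counting helper can be sketched as follows. The match pattern is a guess; inspect your own logs for the exact wording the node prints, and note that `count_rounds` is an illustrative name, not a repository command.

```bash
# Count log lines matching a round-related pattern in a given log file;
# prints 0 when the file does not exist.
count_rounds() {
  [ -f "$1" ] || { echo 0; return 0; }
  grep -cE "round" "$1" || true   # grep -c still prints 0 when nothing matches
}

# Example: count_rounds ~/gensyn_auto_restart.log
```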
- Documentation: Gensyn Docs
- Discord: Gensyn Community
- Twitter: @0xemir_
- GitHub: rl-swarm Repository
```bash
# Node management
./script.sh start    # Start node
./script.sh stop     # Stop auto-restart
./script.sh status   # Check status
./script.sh logs     # View logs
./script.sh clean    # Clean processes

# tmux management
tmux list-sessions            # List sessions
tmux attach -t gensyn         # Attach to session
tmux kill-session -t gensyn   # Kill session

# System monitoring
nvidia-smi   # GPU status
htop         # System resources
df -h        # Disk usage
free -h      # Memory usage
```
Video tutorial: `Gensyn.Tutorial.mp4`
Happy mining!
Remember to star the repository: https://github.com/gensyn-ai/rl-swarm