A Nix flake for installing and running ComfyUI with Python 3.12. Supports both macOS (Intel/Apple Silicon) and Linux with automatic GPU detection.
Note: Pull requests are more than welcome! Contributions to this open project are appreciated.
nix run github:utensils/nix-comfyui -- --open
- Provides ComfyUI packaged with Python 3.12
- Reproducible environment through Nix flakes
- Hybrid approach: Nix for environment management, pip for Python dependencies
- Cross-platform support: macOS (Intel/Apple Silicon) and Linux
- Automatic GPU detection: CUDA on Linux, MPS on Apple Silicon
- Persistent user data directory
- Includes ComfyUI-Manager for easy extension installation
- Improved model download experience with automatic backend downloads
# Run a specific version using a commit hash
nix run github:utensils/nix-comfyui/[commit-hash] -- --open
- `--open`: Automatically opens ComfyUI in your browser when the server is ready
- `--port=XXXX`: Run ComfyUI on a specific port (default: 8188)
- `--debug` or `--verbose`: Enable detailed debug logging
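For example, to launch on a custom port with debug logging and open the browser automatically:

```bash
# Start ComfyUI on port 8080, open the browser when ready, and log verbosely
nix run github:utensils/nix-comfyui -- --open --port=8080 --debug
```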
# Enter a development shell with all dependencies
nix develop
You can install ComfyUI to your profile:
nix profile install github:utensils/nix-comfyui
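The profile entry can then be managed with the standard `nix profile` commands (argument syntax varies slightly between Nix versions, so check `nix profile --help` if these forms do not match yours):

```bash
# Show installed profile entries (note the name or index of the ComfyUI entry)
nix profile list

# Upgrade all profile entries, including ComfyUI, to their latest versions
nix profile upgrade '.*'

# Remove the ComfyUI entry, using the name or index reported by `nix profile list`
nix profile remove comfy-ui
```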
The flake is designed to be simple and extensible. You can customize it by:
- Adding Python packages in the `pythonEnv` definition
- Modifying the launcher script in `scripts/launcher.sh`
- Pinning to a specific ComfyUI version by changing the `rev` in `fetchFromGitHub`
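The simplest way to experiment with these customizations is from a local checkout, running the modified flake directly:

```bash
# Clone the flake and work on a local copy
git clone https://github.com/utensils/nix-comfyui.git
cd nix-comfyui

# Edit flake.nix (e.g. the pythonEnv definition or the fetchFromGitHub rev)
# and scripts/launcher.sh as needed. Note that flakes only see files tracked
# by git, so stage any new files with `git add` before running.
nix run . -- --open
```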
This flake uses a modular, multi-file approach for better maintainability:
- `flake.nix` - Main flake definition and package configuration
- `scripts/` - Modular launcher scripts:
  - `launcher.sh` - Main entry point that orchestrates the launching process
  - `config.sh` - Configuration variables and settings
  - `logger.sh` - Logging utilities with support for different verbosity levels
  - `install.sh` - Installation and setup procedures
  - `persistence.sh` - Symlink creation and data persistence management
  - `runtime.sh` - Runtime execution and process management
This modular structure makes the codebase much easier to maintain, debug, and extend as features are added. Each script has a single responsibility, improving code organization and readability.
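As an illustration of how these modules fit together, a launcher in this style typically just sources each module and calls into them in order. The function names below are hypothetical placeholders for illustration, not the actual script:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Resolve the directory containing the launcher so its sibling modules can be sourced
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

source "$SCRIPT_DIR/config.sh"       # configuration variables and defaults
source "$SCRIPT_DIR/logger.sh"       # logging helpers
source "$SCRIPT_DIR/install.sh"      # installation and setup procedures
source "$SCRIPT_DIR/persistence.sh"  # symlink and data persistence management
source "$SCRIPT_DIR/runtime.sh"      # runtime execution and process management

# Hypothetical function names, for illustration only
log_info "Preparing ComfyUI environment"
run_installation
setup_persistence
start_comfyui "$@"   # forwards flags such as --open and --port
```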
User data is stored in `~/.config/comfy-ui` with the following structure:
- `app/` - ComfyUI application code (auto-updated when flake changes)
- `models/` - Stable Diffusion models and other model files
- `output/` - Generated images and other outputs
- `user/` - User configuration and custom nodes
- `input/` - Input files for processing
This structure ensures your models, outputs, and custom nodes persist between application updates.
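Conceptually, persistence works by keeping these directories under `~/.config/comfy-ui` and symlinking them into the application directory, so an updated `app/` still sees the same data. A minimal sketch of that idea (the real logic lives in `scripts/persistence.sh` and `src/persistence/persistence.py`):

```bash
# Sketch of the persistence idea, not the actual implementation
BASE_DIR="$HOME/.config/comfy-ui"
APP_DIR="$BASE_DIR/app"

# User-owned directories that must survive application updates
mkdir -p "$BASE_DIR"/{models,output,user,input}

# Point the application's expected folders at the persistent copies
for dir in models output user input; do
  rm -rf "$APP_DIR/$dir"
  ln -sfn "$BASE_DIR/$dir" "$APP_DIR/$dir"
done
```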
macOS:
- macOS 10.15+ (Intel or Apple Silicon)
- Nix package manager

Linux:
- x86_64 Linux distribution
- Nix package manager
- NVIDIA GPU with drivers (optional, for CUDA acceleration)
- glibc 2.27+
On macOS (Apple Silicon):
- Uses PyTorch nightly builds with improved MPS (Metal Performance Shaders) support
- Enables FP16 precision mode for better performance
- Sets optimal memory management parameters for macOS

On Linux:
- Automatic NVIDIA GPU detection and CUDA setup
- Supports CUDA 12.4 for maximum compatibility
- Automatic library path configuration for system libraries
- Falls back to CPU-only mode if no GPU is detected
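A simplified sketch of this kind of platform and GPU detection (the exact checks and environment variables used by the launcher may differ):

```bash
# Illustrative platform/GPU detection, not the launcher's exact logic
if [[ "$(uname -s)" == "Darwin" ]]; then
  # Apple Silicon: let PyTorch fall back to CPU for ops MPS does not support
  export PYTORCH_ENABLE_MPS_FALLBACK=1
  echo "macOS detected; using MPS acceleration"
elif command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1; then
  echo "NVIDIA GPU detected; using CUDA"
else
  echo "No GPU detected; falling back to CPU-only mode"
fi
```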
The flake automatically detects your hardware and installs the appropriate PyTorch version:
- Linux with NVIDIA GPU: PyTorch with CUDA 12.4 support
- macOS with Apple Silicon: PyTorch with MPS acceleration
- Other systems: CPU-only PyTorch
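In pip terms, this roughly corresponds to installing different PyTorch wheels per platform; the index URLs below are the standard PyTorch package indexes and may differ from what the flake actually pins:

```bash
# Linux with an NVIDIA GPU: CUDA 12.4 wheels
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124

# macOS on Apple Silicon: nightly wheels, which include current MPS improvements
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu

# Other systems: CPU-only wheels
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
```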
This flake currently provides:
- ComfyUI v0.3.28
- Python 3.12.9
- PyTorch nightly builds with Apple Silicon optimizations
- ComfyUI Frontend Package 1.17.0
- ComfyUI-Manager for extension management
This flake includes a custom patch that improves the model downloading experience. Unlike the default ComfyUI implementation, the patch downloads models automatically in the background when they are selected in the UI, with no manual intervention required. This removes the need to obtain and place model files by hand, which is especially helpful for new users unfamiliar with that process.
The codebase follows a modular structure under the `src` directory to improve maintainability and organization:
src/
├── custom_nodes/ # Custom node implementations
│ ├── model_downloader/ # Automatic model downloading functionality
│ │ ├── js/ # Frontend JavaScript components
│ │ └── ... # Backend implementation files
│ └── main.py # Entry point for custom nodes
├── patches/ # Runtime patches for ComfyUI
│ ├── custom_node_init.py # Custom node initialization
│ └── main.py # Entry point for patches
└── persistence/ # Data persistence implementation
├── persistence.py # Core persistence logic
└── main.py # Persistence entry point
- `custom_nodes`: Contains custom node implementations that extend ComfyUI's functionality
  - `model_downloader`: Provides automatic downloading of models when selected in the UI
    - `js`: Frontend components for download status and progress reporting
    - `model_downloader_patch.py`: Backend API endpoints for model downloading
- `patches`: Contains runtime patches that modify ComfyUI's behavior
  - `custom_node_init.py`: Initializes custom nodes and registers their API endpoints
  - `main.py`: Coordinates the loading and application of patches
- `persistence`: Manages data persistence across ComfyUI runs
  - `persistence.py`: Creates and maintains the directory structure and symlinks
  - `main.py`: Handles the persistence setup before launching ComfyUI
This structure ensures clear separation of concerns and makes the codebase easier to maintain and extend.
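To make the flow concrete, the pieces are applied in roughly this order at launch time. This is a conceptual sketch only; the actual wiring is handled by the launcher scripts described above:

```bash
# Conceptual launch order (illustrative; see scripts/ for the real wiring)
python src/persistence/main.py    # prepare ~/.config/comfy-ui and its symlinks
python src/patches/main.py        # apply runtime patches and register custom nodes
python main.py --port 8188        # start ComfyUI itself from the app directory
```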
⚠️ WARNING: Docker support is partially implemented but not yet functional. Cross-compilation for Linux is needed to complete the implementation. This will be addressed in a future update.
This flake includes preliminary Docker support, aimed at running ComfyUI in a containerized environment while preserving all functionality.
Use the included `buildDocker` command to create a Docker image:
# Build the Docker image
nix run .#buildDocker
# Or from remote
nix run github:utensils/nix-comfyui#buildDocker
This creates a Docker image named `comfy-ui:latest` in your local Docker daemon.
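You can confirm that the image was loaded into the daemon with:

```bash
# Verify the image is available locally
docker images comfy-ui
```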
For Linux systems with NVIDIA GPUs, build the CUDA-enabled image:
# Build the CUDA-enabled Docker image
nix run .#buildDockerCuda
# Or from remote
nix run github:utensils/nix-comfyui#buildDockerCuda
This creates a Docker image named `comfy-ui:cuda` with GPU acceleration support.
Run the container with:
# Create a data directory for persistence
mkdir -p ./data
# Run the container
docker run -p 8188:8188 -v "$PWD/data:/data" comfy-ui:latest
For GPU-accelerated execution:
# Create a data directory for persistence
mkdir -p ./data
# Run with GPU support
docker run --gpus all -p 8188:8188 -v "$PWD/data:/data" comfy-ui:cuda
Requirements for CUDA support:
- NVIDIA GPU with CUDA support
- NVIDIA drivers installed on the host system
- `nvidia-container-toolkit` package installed
- Docker configured for GPU support
To install nvidia-container-toolkit on Ubuntu/Debian:
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
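After restarting Docker, you can verify that containers can see the GPU before launching ComfyUI (the CUDA image tag below is only an example):

```bash
# Quick sanity check that Docker can access the GPU
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```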
- Full functionality: Includes all the features of the regular ComfyUI installation
- Persistence: Data is stored in a mounted volume at `/data`
- Port exposure: Web UI available on port 8188
- Essential utilities: Includes bash, coreutils, git, and other necessary tools
- Proper environment: All environment variables set correctly for containerized operation
- GPU support: CUDA version includes proper environment variables for NVIDIA GPU access
The Docker image follows the same modular structure as the regular installation, ensuring consistency across deployment methods.
This flake is provided under the MIT license. ComfyUI itself is licensed under GPL-3.0.