Train LoRAs with guided notebooks instead of confusing command lines
This is a user-friendly LoRA training system based on proven methods from popular Colab notebooks. Instead of typing scary commands, you get helpful widgets that walk you through each step. Works on your own computer or rented GPU servers.
This project builds upon and integrates the excellent work of:
- Jelosus2's LoRA Easy Training Colab - Original Colab notebook that inspired this adaptation
- Derrian-Distro's LoRA Easy Training Backend - Core training backend and scripts
- HoloStrawberry's Training Methods - Community wisdom and proven training techniques
- Kohya-ss SD Scripts - Foundational training scripts and infrastructure
Special thanks to these creators for making LoRA training accessible to everyone!
- About
- What You Get
- Quick Start
- How to Use
- Quick Training Calculator
- Architecture
- Troubleshooting
- Credits
- Security
- License
- Contributing
- Widget-based interface designed for both beginners and advanced users
- Please note this is STILL a work in progress.
- Testing has only been done on a single RTX 4090 in a Vast.ai Docker container with SD WebUI Forge pre-installed.
- Results MAY vary; please report any issues you run into.
- The system has been recently streamlined with improved widget organization and calculator accuracy.
- Helpful explanations for every setting (no more guessing!)
- Step calculator shows you exactly how long training will take
- Warnings when settings don't work together
- Memory-efficient optimizers (CAME, Prodigy Plus)
- Special LoRA types (DoRA, LoKr, LoHa, IA³, BOFT, GLoRA)
- Memory-saving options for smaller GPUs
- Prerequisites: This installation assumes you already have Jupyter Lab or Jupyter Notebook running
- Two simple notebooks: one for datasets, one for training
- Works with VastAI and other GPU rental services
- Checks your system automatically
- Auto-tag your images (WD14 for anime, BLIP for photos)
- Add/remove tags easily
- Upload ZIP files or folders
- Computer: Windows, macOS, or Linux
- Python: Version 3.10 or newer
- GPU: NVIDIA GPU with 8GB+ VRAM recommended (can work with less)
- Git: For downloading this project (explained below)
1. Get Git (if you don't have it)
Git is a tool for downloading code projects. Don't worry - you just need to install it once and you're done!
Check if you already have Git: open your terminal/command prompt and type `git --version`. If you see a version number, you're good to go!

If you need to install Git:
- Windows: Download from git-scm.com and run the installer
- Mac: Open Terminal and type `xcode-select --install`
- Linux: Run `sudo apt-get install git` (Ubuntu/Debian) or use your system's package manager
2. Download This Project
Open your terminal/command prompt and navigate to where you want the project folder. Then run:
```bash
git clone https://github.com/Ktiseos-Nyx/Lora_Easy_Training_Jupyter.git
cd Lora_Easy_Training_Jupyter
```
3. Run Setup
This automatically installs everything you need:
Mac/Linux:

```bash
chmod +x ./jupyter.sh
./jupyter.sh
```

Windows (or if the above doesn't work):

```bash
python ./installer.py
```
Just wait for it to finish - it downloads the training tools and sets everything up.
If using VastAI or similar: Jupyter is probably already running - just open the notebooks in your browser.
If on your own computer: Start Jupyter like this:
```bash
jupyter notebook
```
Then open these notebooks:
- `Dataset_Maker_Widget.ipynb` - Prepare your images and captions
- `Lora_Trainer_Widget.ipynb` - Set up and run training
- `LoRA_Calculator_Widget.ipynb` - Calculate training steps (optional)
Open `Dataset_Maker_Widget.ipynb` and run the first cell:
```python
# This starts the dataset preparation tool
from widgets.dataset_widget import DatasetWidget

dataset_widget = DatasetWidget()
dataset_widget.display()
```
Upload your images (ZIP files work great!) and the system will auto-tag them for you.
To use custom models or VAEs, you need to provide a direct download link. Here's how to find one on Civitai and Hugging Face:
Civitai
Method 1: Using the Model Version ID
This is the easiest method if a model has multiple versions.
- Navigate to the model or VAE page.
- Look at the URL in your browser's address bar. If it includes `?modelVersionId=XXXXXX`, you can copy the entire URL and paste it directly into the widget (the direct download URL it corresponds to is sketched below).
- If you don't see this ID, try switching to a different version of the model and then back to your desired version. The ID should then appear in the URL.
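If you ever need the direct file URL itself, it can be built from that version ID. A minimal sketch, assuming Civitai's public download endpoint (the ID below is hypothetical):

```python
# Assumption: Civitai serves model files at /api/download/models/<versionId>.
version_id = "123456"  # hypothetical modelVersionId copied from the page URL
url = f"https://civitai.com/api/download/models/{version_id}"
print(url)  # paste this into the model/VAE download field
```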
Method 2: Copying the Download Link
Use this method if the model has only one version or if a version has multiple files.
- On the model or VAE page, scroll down to the "Files" section.
- Right-click the Download button for the file you want.
- Select "Copy Link Address" (or similar text) from the context menu.
Hugging Face
Method 1: Using the Repository URL
- Go to the main page of the model or VAE repository you want to use.
- Copy the URL directly from your browser's address bar.
Method 2: Copying the Direct File Link
- Navigate to the "Files and versions" tab of the repository.
- Find the specific file you want to download.
- Click the "..." menu to the right of the file size, then right-click the "Download" link and copy the link address.
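You can also construct the direct link by hand: Hugging Face serves raw files under `/resolve/<revision>/<path>`. A small sketch; the repo and filename are just examples, substitute your own:

```python
# Build a direct file URL for a file hosted in a Hugging Face repo.
repo_id = "stabilityai/sd-vae-ft-mse-original"         # example repository
filename = "vae-ft-mse-840000-ema-pruned.safetensors"  # example file
url = f"https://huggingface.co/{repo_id}/resolve/main/{filename}"
print(url)  # the same URL the "Download" link points to
```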
Open `Lora_Trainer_Widget.ipynb` and run the cells to start training:
```python
# First, set up your environment
from widgets.setup_widget import SetupWidget

setup_widget = SetupWidget()
setup_widget.display()
```

```python
# Then configure training
from widgets.training_widget import TrainingWidget

training_widget = TrainingWidget()
training_widget.display()
```
Good Starting Settings:
- Learning Rate: UNet `5e-4`, Text Encoder `1e-4`
- LoRA: `8 dim / 4 alpha` (works for most characters)
- Target: 250-1000 training steps (the calculator helps you figure this out; a sketch of how these settings reach the backend follows)
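For context, here is roughly how those settings map onto the kohya-ss backend this project builds on. An illustrative sketch only; the flag names come from sd-scripts' `train_network.py`, but the exact command the widget assembles may differ, and the paths and step count are placeholders:

```python
# Placeholder command showing where the suggested values land.
cmd = [
    "accelerate", "launch", "train_network.py",
    "--unet_lr", "5e-4",          # UNet learning rate
    "--text_encoder_lr", "1e-4",  # Text Encoder learning rate
    "--network_dim", "8",         # LoRA dim
    "--network_alpha", "4",       # LoRA alpha
    "--max_train_steps", "1000",  # top of the suggested 250-1000 range
]
print(" ".join(cmd))
```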
Not sure about your dataset size or settings? Use our personal calculator:
```bash
python3 personal_lora_calculator.py
```
This tool helps you:
- Calculate optimal repeats and epochs for your dataset size
- Get personalized learning rate recommendations
- Estimate total training steps (see the worked example below)
- Build confidence for any dataset size (no more guesswork!)
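Under the hood the arithmetic is simple. A worked example, assuming the usual kohya-style formula of images x repeats x epochs, divided by batch size:

```python
# Worked example of the step math the calculator automates (assumed formula).
images, repeats, epochs, batch_size = 20, 5, 10, 2

steps_per_epoch = (images * repeats) // batch_size  # 100 // 2 = 50
total_steps = steps_per_epoch * epochs              # 50 * 10 = 500
print(total_steps)  # 500 -- comfortably inside the 250-1000 target range
```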
Core managers:
- `core/managers.py`: SetupManager, ModelManager for environment setup
- `core/dataset_manager.py`: Dataset processing and image tagging
- `core/training_manager.py`: Hybrid training manager with advanced features
- `core/utilities_manager.py`: Post-training utilities and optimization

Widgets (thin interfaces over the managers; see the sketch below):
- `widgets/setup_widget.py`: Environment setup and model downloads
- `widgets/dataset_widget.py`: Dataset preparation interface
- `widgets/training_widget.py`: Training configuration with advanced mode
- `widgets/utilities_widget.py`: Post-training tools
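Because each widget sits on top of a manager, the managers can in principle be scripted without the notebook UI. A hypothetical sketch; the class names come from the files above, but the constructors and any methods are assumptions to verify against the source:

```python
# Hypothetical, non-widget usage -- check core/managers.py for the real API.
from core.managers import SetupManager, ModelManager

setup = SetupManager()   # assumed no-argument constructor
models = ModelManager()  # assumed no-argument constructor
```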
- The `Flux_SD3_Training/` folder contains work-in-progress Flux and SD3.5 LoRA training
- May not function correctly; still under active development
- Use at your own risk, for testing purposes only
- Docker/VastAI users: Triton compiler may fail with AdamW8bit optimizer
- Symptoms: "TRITON NOT FOUND" or "triton not compatible" errors
- Solution: The system will automatically fall back to AdamW (uses more VRAM but stable); the pattern is sketched below
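That fallback boils down to a pattern like this. A minimal sketch, assuming `bitsandbytes` provides the 8-bit optimizer; this is not the project's literal code:

```python
import torch

def make_optimizer(params, lr):
    """Prefer memory-efficient 8-bit AdamW; fall back to plain AdamW
    when bitsandbytes/Triton is missing or fails to initialize."""
    try:
        import bitsandbytes as bnb
        return bnb.optim.AdamW8bit(params, lr=lr)
    except (ImportError, RuntimeError):
        # "TRITON NOT FOUND" errors land here; AdamW uses more VRAM but is stable.
        return torch.optim.AdamW(params, lr=lr)
```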
- ONNX Runtime: Dependency conflicts are possible between `onnxruntime-gpu` and `open-clip-torch`
- DoRA, GLoRA, BOFT (Butterfly): May not function correctly yet
- Status: Currently under testing and validation
- Recommendation: Use standard LoRA or LoCon for stable results
- More testing: Additional compatibility testing is ongoing
- GitHub Issues: Report bugs and feature requests
- Documentation: Check tooltips and explanations in widgets
- Community: Share your LoRAs and experiences!
This project is built on the work of many awesome people:
Training Methods:
- Holostrawberry - Training guides and recommended settings
- Kohya-ss - Core training scripts
- LyCORIS Team - Advanced LoRA methods (DoRA, LoKr, etc.)
- Derrian Distro - Custom optimizers
Notebook Inspirations:
- AndroidXXL, Jelosus2, Linaqruf - Original Colab notebooks that made LoRA training accessible
Community:
"Either gonna work or blow up!" - Made with curiosity! ๐
Found a security issue? Check our Security Policy for responsible disclosure guidelines.
MIT License - Feel free to use, modify, and distribute. See LICENSE for details.
We welcome contributions! Check out our Contributing Guide for details on how to get involved. Feel free to open issues or submit pull requests on GitHub.
Made with ❤️ by the community, for the community