
LoRA Easy Training - Jupyter Widget Edition 🚀

Train LoRAs with guided notebooks instead of confusing command lines

This is a user-friendly LoRA training system based on proven methods from popular Colab notebooks. Instead of typing scary commands, you get helpful widgets that walk you through each step. Works on your own computer or rented GPU servers.

๐Ÿ™ Built on the Shoulders of Giants

This project builds upon and integrates the excellent work of Derrian Distro, AndroidXL, OneTrainer, KohakuBlueleaf, Kohya-SS, Holostrawberry, and Jelosus2.

Special thanks to these creators for making LoRA training accessible to everyone!


About

  • Widget-based interface designed for both beginners and advanced users
  • Please note this is STILL a work in progress.
  • Testing has so far only been done on a single RTX 4090, in a Vast.ai Docker container with SD WebUI Forge pre-installed.
  • Results MAY vary - please report issues as you see fit.
  • The system has recently been streamlined, with improved widget organization and calculator accuracy.

✨ What You Get

🎓 Beginner-Friendly

  • Helpful explanations for every setting (no more guessing!)
  • Step calculator shows you exactly how long training will take
  • Warnings when settings don't work together

🧪 Advanced Options (If You Want Them)

  • Memory-efficient optimizers (CAME, Prodigy Plus)
  • Special LoRA types (DoRA, LoKr, LoHa, IA³, BOFT, GLoRA)
  • Memory-saving options for smaller GPUs

๐Ÿ› ๏ธ Easy Setup

  • Prerequisites: This installation assumes you already have Jupyter Lab or Jupyter Notebook running
  • Two simple notebooks: one for datasets, one for training
  • Works with VastAI and other GPU rental services
  • Checks your system automatically

📊 Dataset Tools

  • Auto-tag your images (WD14 for anime, BLIP for photos)
  • Add/remove tags easily
  • Upload ZIP files or folders

🚀 Quick Start

What You Need

  • Computer: Windows, macOS, or Linux
  • Python: Version 3.10 or newer
  • GPU: NVIDIA GPU with 8GB+ VRAM recommended (can work with less)
  • Git: For downloading this project (explained below)

Installation

  1. Get Git (if you don't have it)

    Git is a tool for downloading code projects. Don't worry - you just need to install it once and you're done!

    Check if you already have Git: Open your terminal/command prompt and type git --version. If you see a version number, you're good to go!

    If you need to install Git:

    • Windows: Download from git-scm.com and run the installer
    • Mac: Open Terminal and type xcode-select --install
    • Linux: Type sudo apt-get install git (Ubuntu/Debian) or use your system's package manager
  2. Download This Project

    Open your terminal/command prompt and navigate to where you want the project folder. Then run:

    git clone https://github.com/Ktiseos-Nyx/Lora_Easy_Training_Jupyter.git
    cd Lora_Easy_Training_Jupyter
  3. Run Setup

    This automatically installs everything you need:

    Mac/Linux:

    chmod +x ./jupyter.sh
    ./jupyter.sh

    Windows (or if the above doesn't work):

    python ./installer.py

    Just wait for it to finish - it downloads the training tools and sets everything up.

Start Training

If using VastAI or similar: Jupyter is probably already running - just open the notebooks in your browser.

If on your own computer: Start Jupyter like this:

jupyter notebook

Then open these notebooks:

  1. Dataset_Maker_Widget.ipynb - Prepare your images and captions
  2. Lora_Trainer_Widget.ipynb - Set up and run training
  3. LoRA_Calculator_Widget.ipynb - Calculate training steps (optional)

📖 How to Use

Step 1: Prepare Your Images

Open Dataset_Maker_Widget.ipynb and run the first cell:

# This starts the dataset preparation tool
from widgets.dataset_widget import DatasetWidget
dataset_widget = DatasetWidget()
dataset_widget.display()

Upload your images (ZIP files work great!) and the system will auto-tag them for you.

How to Get Model/VAE Links

To use custom models or VAEs, you need to provide a direct download link. Here's how to find them on popular platforms:

From Civitai

Method 1: Using the Model Version ID

This is the easiest method if a model has multiple versions.

  1. Navigate to the model or VAE page.
  2. Look at the URL in your browser's address bar. If it includes ?modelVersionId=XXXXXX, you can copy the entire URL and paste it directly into the widget.
  3. If you don't see this ID, try switching to a different version of the model and then back to your desired version. The ID should then appear in the URL.

(Screenshot: getting a link from Civitai using the version ID)

Method 2: Copying the Download Link

Use this method if the model has only one version or if a version has multiple files.

  1. On the model or VAE page, scroll down to the "Files" section.
  2. Right-click the Download button for the file you want.
  3. Select "Copy Link Address" (or similar text) from the context menu.

(Screenshot: getting a link from Civitai by copying the download address)
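Either method should leave you with a direct link into Civitai's download API. As a rough sketch (this assumes Civitai's current URL convention, which may change - when in doubt, copy the link address from the Download button instead), a link built from a version ID looks like this:

```python
def civitai_download_url(model_version_id: int) -> str:
    """Build a direct download link from a Civitai model version ID.

    Follows Civitai's current download-API convention; the widget accepts
    either this form or a full model-page URL with ?modelVersionId=...
    """
    return f"https://civitai.com/api/download/models/{model_version_id}"

# 12345 is a made-up ID for illustration only
print(civitai_download_url(12345))
```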

From Hugging Face

Method 1: Using the Repository URL

  1. Go to the main page of the model or VAE repository you want to use.
  2. Copy the URL directly from your browser's address bar.

(Screenshot: getting a link from Hugging Face using the repository URL)

Method 2: Copying the Direct File Link

  1. Navigate to the "Files and versions" tab of the repository.
  2. Find the specific file you want to download.
  3. Click the "..." menu to the right of the file size, then right-click the "Download" link and copy the link address.

(Screenshot: getting a link from Hugging Face by copying the direct file address)
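Direct file links copied this way follow Hugging Face's standard `resolve` URL pattern. A minimal sketch, using a hypothetical repository and file name:

```python
def hf_file_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Direct download link for a file in a Hugging Face repository.

    Uses the standard `resolve` URL pattern; repo_id and filename here
    are placeholders - substitute the repository and file you want.
    """
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

# Hypothetical example values, not a real recommendation
print(hf_file_url("some-user/some-model", "model.safetensors"))
```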

Step 2: Train Your LoRA

Open Lora_Trainer_Widget.ipynb and run the cells to start training:

# First, set up your environment
from widgets.setup_widget import SetupWidget
setup_widget = SetupWidget()
setup_widget.display()

# Then configure training
from widgets.training_widget import TrainingWidget  
training_widget = TrainingWidget()
training_widget.display()

Good Starting Settings:

  • Learning Rate: UNet 5e-4, Text Encoder 1e-4
  • LoRA: 8 dim / 4 alpha (works for most characters)
  • Target: 250-1000 training steps (the calculator helps you figure this out)
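If you're curious what these settings look like under the hood, they correspond roughly to a kohya-ss-style config fragment like the one below. The key names follow sd-scripts conventions and are illustrative only - the training widget generates the real config for you:

```toml
# Illustrative sd-scripts-style fragment; values match the
# "Good Starting Settings" above.
network_dim     = 8      # LoRA dim
network_alpha   = 4      # LoRA alpha
unet_lr         = 5e-4   # UNet learning rate
text_encoder_lr = 1e-4   # Text encoder learning rate
```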

Step 3: Extras

🧮 Quick Training Calculator

Not sure about your dataset size or settings? Use our personal calculator:

python3 personal_lora_calculator.py

This tool helps you:

  • Calculate optimal repeats and epochs for your dataset size
  • Get personalized learning rate recommendations
  • Estimate total training steps
  • Build confidence for any dataset size (no more guesswork!) 🎯
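The arithmetic behind the step estimate is straightforward. A minimal sketch, assuming the common kohya-style formula (the actual calculator script may differ in details):

```python
import math

def total_steps(num_images: int, repeats: int, epochs: int, batch_size: int) -> int:
    """Estimate total optimizer steps for a training run.

    Common kohya-style formula: each epoch sees every image `repeats`
    times, processed `batch_size` images at a time.
    """
    steps_per_epoch = math.ceil(num_images * repeats / batch_size)
    return steps_per_epoch * epochs

# e.g. 20 images x 10 repeats x 10 epochs at batch size 2 -> 1000 steps,
# comfortably inside the suggested 250-1000 range
print(total_steps(20, 10, 10, 2))
```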

🔧 Architecture

Core Components

  • core/managers.py: SetupManager, ModelManager for environment setup
  • core/dataset_manager.py: Dataset processing and image tagging
  • core/training_manager.py: Hybrid training manager with advanced features
  • core/utilities_manager.py: Post-training utilities and optimization

Widget Interface

  • widgets/setup_widget.py: Environment setup and model downloads
  • widgets/dataset_widget.py: Dataset preparation interface
  • widgets/training_widget.py: Training configuration with advanced mode
  • widgets/utilities_widget.py: Post-training tools

๐Ÿ› Troubleshooting

Known Issues & Compatibility

โš ๏ธ Flux/SD3.5 Training (EXPERIMENTAL)

  • The Flux_SD3_Training/ folder contains work-in-progress Flux and SD3.5 LoRA training
  • May not function correctly - still under active development
  • Use at your own risk for testing purposes only

โš ๏ธ Triton/ONNX Compatibility Warnings

  • Docker/VastAI users: the Triton compiler may fail with the AdamW8bit optimizer
  • Symptoms: "TRITON NOT FOUND" or "triton not compatible" errors
  • Solution: the system will automatically fall back to AdamW (uses more VRAM, but stable)
  • ONNX Runtime: dependency conflicts are possible between onnxruntime-gpu and open-clip-torch
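If you want to check up front whether Triton is importable in your environment, a quick sketch like this mirrors the fallback behavior described above (`pick_optimizer` is a hypothetical helper for illustration, not part of the project):

```python
def pick_optimizer() -> str:
    """Return an optimizer name based on whether Triton imports cleanly.

    Mirrors the fallback described above: AdamW8bit needs a working
    Triton compiler; plain AdamW uses more VRAM but is stable everywhere.
    """
    try:
        import triton  # noqa: F401  # just probing availability
        return "AdamW8bit"
    except Exception:
        return "AdamW"

print(pick_optimizer())
```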

โš ๏ธ Advanced LoRA Methods (EXPERIMENTAL)

  • DoRA, GLoRA, BOFT (Butterfly): may not yet function correctly
  • Status: Currently under testing and validation
  • Recommendation: Use standard LoRA or LoCon for stable results
  • More testing: Additional compatibility testing is ongoing

Support

  • GitHub Issues: Report bugs and feature requests
  • Documentation: Check tooltips and explanations in widgets
  • Community: Share your LoRAs and experiences!

๐Ÿ† Credits

This project is built on the work of many awesome people:

Training Methods:

  • Derrian Distro, Kohya-SS, KohakuBlueleaf, OneTrainer, Holostrawberry - Training scripts and methods this project builds on

Notebook Inspirations:

  • AndroidXXL, Jelosus2, Linaqruf - Original Colab notebooks that made LoRA training accessible

"Either gonna work or blow up!" - Made with curiosity! ๐Ÿ˜„

🔒 Security

Found a security issue? Check our Security Policy for responsible disclosure guidelines.

📄 License

MIT License - Feel free to use, modify, and distribute. See LICENSE for details.

๐Ÿค Contributing

We welcome contributions! Check out our Contributing Guide for details on how to get involved. Feel free to open issues or submit pull requests on GitHub.


Made with โค๏ธ by the community, for the community

About

Jupyter notebooks for Datasets & Training Loras based on Derrian Distro, AndroidXL, One Trainer, KohakuBluleaf, KohyaSS, Holostrawberry, Jelosus2's work.
