LLaMA Factory is an easy-to-use and efficient platform for training and fine-tuning large language models. With LLaMA Factory, you can fine-tune hundreds of pre-trained models locally without writing any code (a sample configuration is sketched after the feature list). Its main features include:
- Supported models: LLaMA, LLaVA, Mistral, Mixtral-MoE, Qwen, Yi, Gemma, Baichuan, ChatGLM, Phi, etc.
- Training methods: (continual) pre-training, (multimodal) supervised fine-tuning, reward model training, PPO training, DPO training, KTO training, ORPO training, etc.
- Training precision: 16-bit full-parameter fine-tuning, freeze fine-tuning, LoRA fine-tuning, and 2/3/4/5/6/8-bit QLoRA fine-tuning via AQLM/AWQ/GPTQ/LLM.int8/HQQ/EETQ.
- Advanced algorithms: GaLore, BAdam, DoRA, LongLoRA, LLaMA Pro, Mixture-of-Depths, LoRA+, LoftQ, and PiSSA.
- Acceleration kernels: FlashAttention-2 and Unsloth.
- Inference engines: Transformers and vLLM.
- Experiment trackers: LlamaBoard, TensorBoard, Wandb, MLflow, etc.
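To make the no-code workflow concrete, below is a minimal sketch of a LoRA supervised fine-tuning config in the style of the project's bundled examples. The specific keys, model name, dataset, and hyperparameter values shown here are illustrative assumptions; they may differ across LLaMA Factory versions, so treat the linked docs as authoritative.

```yaml
# llama3_lora_sft.yaml -- illustrative sketch, not an official config;
# key names follow LLaMA Factory's example configs but may vary by version.
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct  # base model to fine-tune

stage: sft               # supervised fine-tuning (one of the training methods above)
do_train: true
finetuning_type: lora    # parameter-efficient fine-tuning via LoRA
lora_target: all         # attach LoRA adapters to all linear layers

dataset: alpaca_en_demo  # demo dataset shipped with the repository
template: llama3         # chat template matching the base model
cutoff_len: 1024         # truncate sequences longer than this

output_dir: saves/llama3-8b/lora/sft
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
bf16: true               # 16-bit training, per the precision options above
```

Such a config is typically launched with the packaged CLI, e.g. `llamafactory-cli train llama3_lora_sft.yaml`, or driven entirely from the browser via `llamafactory-cli webui`. Switching to 4-bit QLoRA should, under the same assumptions, only require adding a quantization option such as `quantization_bit: 4` to the config.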
GitHub: https://github.com/hiyouga/LLaMA-Factory/tree/main
Docs: https://llamafactory.readthedocs.io/zh-cn/latest/