# Integrations

bitsandbytes is widely integrated with many of the libraries in the Hugging Face and wider PyTorch ecosystem. This guide provides a brief overview of the integrations and how to use bitsandbytes with them. For more details, you should refer to the linked documentation for each library.

## Transformers

> [!TIP]
> Learn more in the bitsandbytes Transformers integration [guide](https://huggingface.co/docs/transformers/quantization#bitsandbytes).

With Transformers, it's very easy to load any model in 4 or 8-bit and quantize it on the fly. To configure the quantization parameters, specify them in the [`~transformers.BitsAndBytesConfig`] class.

For example, to load and quantize a model to 4-bits and use the bfloat16 data type for compute:

> [!WARNING]
> bfloat16 is the optimal compute data type if your hardware supports it. The default is float32 for backward compatibility and numerical stability, but float16 often leads to numerical instabilities. bfloat16 provides the best of both worlds: numerical stability equivalent to float32 combined with the memory footprint and significant computation speedup of a 16-bit data type. Make sure to check if your hardware supports bfloat16 and, if it does, configure it using the `bnb_4bit_compute_dtype` parameter in [`~transformers.BitsAndBytesConfig`]!

```py
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model_4bit = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-1b7",
    device_map="auto",
    quantization_config=quantization_config,
)
```

### 8-bit optimizers

You can use any of the 8-bit or paged optimizers with the Transformers [`~transformers.Trainer`] class. All bitsandbytes optimizers are supported by passing the corresponding string to the `optim` parameter of [`~transformers.TrainingArguments`]. For example, to use the [`~bitsandbytes.optim.PagedAdamW32bit`] optimizer:

```py
from transformers import TrainingArguments, Trainer

training_args = TrainingArguments(
    ...,
    optim="paged_adamw_32bit",
)
trainer = Trainer(model, training_args, ...)
trainer.train()
```
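
The same paged optimizer can also be constructed directly from bitsandbytes if you manage your own training loop. This is a minimal sketch (with a toy linear layer standing in for a real model), not part of the Trainer integration itself:

```py
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(1024, 1024)  # placeholder for any PyTorch model
optimizer = bnb.optim.PagedAdamW32bit(model.parameters(), lr=2e-5)
```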

## PEFT

> [!TIP]
> Learn more in the bitsandbytes PEFT integration [guide](https://huggingface.co/docs/peft/developer_guides/quantization#quantization).

PEFT builds on the bitsandbytes Transformers integration and extends it for training with a few more steps. Let's prepare the 4-bit model from the section above for training.

Call the [`~peft.prepare_model_for_kbit_training`] method to prepare the model for training. This only works for Transformers models!

```py
from peft import prepare_model_for_kbit_training

model_4bit = prepare_model_for_kbit_training(model_4bit)
```

Set up a [`~peft.LoraConfig`] to use QLoRA:

```py
from peft import LoraConfig

config = LoraConfig(
    r=16,
    lora_alpha=8,
    target_modules="all-linear",
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```

Now call the [`~peft.get_peft_model`] function on your model and config to create a trainable [`PeftModel`].

```py
from peft import get_peft_model

model = get_peft_model(model_4bit, config)
```
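
As an optional sanity check, you can confirm that only the LoRA adapter weights are trainable with PEFT's `print_trainable_parameters` method:

```py
model.print_trainable_parameters()
```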

## Accelerate

> [!TIP]
> Learn more in the bitsandbytes Accelerate integration [guide](https://huggingface.co/docs/accelerate/usage_guides/quantization).

bitsandbytes is also easily usable from Accelerate: you can quantize any PyTorch model by creating a [`~accelerate.utils.BnbQuantizationConfig`] with your desired settings and then calling the [`~accelerate.utils.load_and_quantize_model`] function to quantize it.

```py
import torch
from accelerate import init_empty_weights
from accelerate.utils import BnbQuantizationConfig, load_and_quantize_model

# Instantiate the model skeleton without allocating memory for its weights.
# `MyModel` and `weights_location` are placeholders for your own model class
# and checkpoint path.
with init_empty_weights():
    empty_model = MyModel(...)

bnb_quantization_config = BnbQuantizationConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

quantized_model = load_and_quantize_model(
    empty_model,
    weights_location=weights_location,
    bnb_quantization_config=bnb_quantization_config,
    device_map="auto",
)
```

## PyTorch Lightning and Lightning Fabric

bitsandbytes is available from:

- [PyTorch Lightning](https://lightning.ai/docs/pytorch/stable/), a deep learning framework for professional AI researchers and machine learning engineers who need maximal flexibility without sacrificing performance at scale.
- [Lightning Fabric](https://lightning.ai/docs/fabric/stable/), a fast and lightweight way to scale PyTorch models without boilerplate.

Learn more in the bitsandbytes PyTorch Lightning integration [guide](https://lightning.ai/docs/pytorch/stable/common/precision_intermediate.html#quantization-via-bitsandbytes).
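
Both libraries expose bitsandbytes through a precision plugin. As a rough sketch based on the Lightning docs linked above (the `BitsandbytesPrecision` plugin and its `mode`/`dtype` arguments are assumed from there), enabling 4-bit NF4 quantization looks like this:

```py
import torch
from lightning.pytorch import Trainer
from lightning.pytorch.plugins import BitsandbytesPrecision

# Quantize the model's Linear layers to NF4 and compute in bfloat16.
precision = BitsandbytesPrecision(mode="nf4", dtype=torch.bfloat16)
trainer = Trainer(plugins=precision)
```

The same plugin is also available for Lightning Fabric from `lightning.fabric.plugins`.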

## Lit-GPT

bitsandbytes is integrated with [Lit-GPT](https://github.com/Lightning-AI/lit-gpt), a hackable implementation of state-of-the-art open-source large language models. Lit-GPT is based on Lightning Fabric, and it can be used for quantization during training, finetuning, and inference.

Learn more in the bitsandbytes Lit-GPT integration [guide](https://github.com/Lightning-AI/lit-gpt/blob/main/tutorials/quantize.md).

## Blog posts

To learn in more detail about some of the bitsandbytes integrations, take a look at the following blog posts:

- [Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes)
- [A Gentle Introduction to 8-bit Matrix Multiplication for transformers at scale using Hugging Face Transformers, Accelerate and bitsandbytes](https://huggingface.co/blog/hf-bitsandbytes-integration)