diff --git a/docs/source/en/_toctree.yml b/docs/source/en/_toctree.yml
index f13b7d54aec4..8ee46dd6cb61 100644
--- a/docs/source/en/_toctree.yml
+++ b/docs/source/en/_toctree.yml
@@ -180,6 +180,8 @@
      title: Caching
    - local: optimization/memory
      title: Reduce memory usage
+   - local: optimization/speed-memory-optims
+     title: Compile and offloading
    - local: optimization/xformers
      title: xFormers
    - local: optimization/tome
diff --git a/docs/source/en/optimization/memory.md b/docs/source/en/optimization/memory.md
index 6b853a7a084b..1d8fe0bed96f 100644
--- a/docs/source/en/optimization/memory.md
+++ b/docs/source/en/optimization/memory.md
@@ -17,7 +17,7 @@ Modern diffusion models like [Flux](../api/pipelines/flux) and [Wan](../api/pipe
This guide will show you how to reduce your memory usage.

> [!TIP]
-> Keep in mind these techniques may need to be adjusted depending on the model! For example, a transformer-based diffusion model may not benefit equally from these inference speed optimizations as a UNet-based model.
+> Keep in mind these techniques may need to be adjusted depending on the model. For example, a transformer-based diffusion model may not benefit equally from these memory optimizations as a UNet-based model.

## Multiple GPUs

@@ -145,7 +145,7 @@ print(f"Max memory reserved: {torch.cuda.max_memory_allocated() / 1024**3:.2f} G
```

> [!WARNING]
-> [`AutoencoderKLWan`] and [`AsymmetricAutoencoderKL`] don't support slicing.
+> The [`AutoencoderKLWan`] and [`AsymmetricAutoencoderKL`] classes don't support slicing.

## VAE tiling

@@ -219,7 +219,7 @@ from diffusers import DiffusionPipeline
pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
-pipline.enable_model_cpu_offload()
+pipeline.enable_model_cpu_offload()

pipeline(
    prompt="An astronaut riding a horse on Mars",

@@ -493,7 +493,7 @@ with torch.inference_mode():

## Memory-efficient attention

> [!TIP]
-> Memory-efficient attention optimizes for memory usage *and* [inference speed](./fp16#scaled-dot-product-attention!
+> Memory-efficient attention optimizes for memory usage *and* [inference speed](./fp16#scaled-dot-product-attention)!

The Transformers attention mechanism is memory-intensive, especially for long sequences, so you can try using different and more memory-efficient attention types.

diff --git a/docs/source/en/optimization/speed-memory-optims.md b/docs/source/en/optimization/speed-memory-optims.md
new file mode 100644
index 000000000000..e15ca3d7ea5c
--- /dev/null
+++ b/docs/source/en/optimization/speed-memory-optims.md
@@ -0,0 +1,143 @@

# Compile and offloading

When optimizing models, you often face trade-offs between [inference speed](./fp16) and [memory usage](./memory). For instance, while [caching](./cache) can boost inference speed, it comes at the cost of increased memory consumption since it needs to store intermediate attention layer outputs.

A more balanced optimization strategy combines [torch.compile](./fp16#torchcompile) with various offloading methods. This approach not only accelerates inference but also helps lower memory usage.

The table below compares combinations of optimization strategies and their impact on latency and memory usage.

| combination | latency | memory usage |
|---|---|---|
| quantization, torch.compile | | |
| quantization, torch.compile, model CPU offloading | | |
| quantization, torch.compile, group offloading | | |

This guide will show you how to compile and offload a model.
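
Latency and memory usage depend heavily on your GPU, the model, and the generation settings, so it's worth measuring each combination on your own setup. The sketch below is one minimal way to do that, assuming a CUDA device and an already configured `pipeline`; the `benchmark` helper and its `warmup`/`runs` arguments are illustrative and not part of the Diffusers API.

```py
import time
import torch

def benchmark(pipeline, prompt, warmup=1, runs=3):
    # warmup iterations let torch.compile finish compiling before timing
    for _ in range(warmup):
        pipeline(prompt)

    torch.cuda.reset_peak_memory_stats()
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(runs):
        pipeline(prompt)
    torch.cuda.synchronize()

    latency = (time.perf_counter() - start) / runs
    peak_memory_gb = torch.cuda.max_memory_allocated() / 1024**3
    print(f"average latency: {latency:.2f}s | peak memory: {peak_memory_gb:.2f} GB")
```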

## Quantization and torch.compile

> [!TIP]
> The quantization backend, such as [bitsandbytes](../quantization/bitsandbytes#torchcompile), must be compatible with torch.compile. Refer to the quantization [overview](https://huggingface.co/docs/transformers/quantization/overview#overview) table to see which backends support torch.compile.

Start by [quantizing](../quantization/overview) a model to reduce the memory required for storage and [compiling](./fp16#torchcompile) it to accelerate inference.

```py
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

# quantize
pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={"load_in_4bit": True, "bnb_4bit_quant_type": "nf4", "bnb_4bit_compute_dtype": torch.bfloat16},
    components_to_quantize=["transformer", "text_encoder_2"],
)
pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
).to("cuda")

# compile
pipeline.transformer.to(memory_format=torch.channels_last)
pipeline.transformer.compile(mode="max-autotune", fullgraph=True)
pipeline("""
    cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California,
    highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain
""").images[0]
```

## Quantization, torch.compile, and offloading

In addition to quantization and torch.compile, try offloading if you need to reduce memory usage further. Offloading keeps model components or layers on the CPU and only moves them to the GPU when they're needed for computation.

[Model CPU offloading](./memory#model-offloading) moves an individual pipeline component, like the transformer model, to the GPU when it is needed for computation. Otherwise, it is offloaded to the CPU.

```py
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

# quantize
pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={"load_in_4bit": True, "bnb_4bit_quant_type": "nf4", "bnb_4bit_compute_dtype": torch.bfloat16},
    components_to_quantize=["transformer", "text_encoder_2"],
)
# don't move the pipeline to CUDA; enable_model_cpu_offload handles device placement
pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
)

# model CPU offloading
pipeline.enable_model_cpu_offload()

# compile
pipeline.transformer.to(memory_format=torch.channels_last)
pipeline.transformer.compile(mode="max-autotune", fullgraph=True)
pipeline(
    "cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California, highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain"
).images[0]
```

[Group offloading](./memory#group-offloading) moves the internal layers of an individual pipeline component, like the transformer model, to the GPU for computation and offloads them when they're not required. At the same time, it uses the [CUDA stream](./memory#cuda-stream) feature to prefetch the next layer for execution.

By overlapping computation and data transfer, group offloading is faster than model CPU offloading while also saving memory.

```py
import torch
from diffusers import DiffusionPipeline
from diffusers.hooks import apply_group_offloading
from diffusers.quantizers import PipelineQuantizationConfig

# quantize
pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={"load_in_4bit": True, "bnb_4bit_quant_type": "nf4", "bnb_4bit_compute_dtype": torch.bfloat16},
    components_to_quantize=["transformer", "text_encoder_2"],
)
# don't move the pipeline to CUDA; group offloading handles device placement
pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
)

# group offloading
onload_device = torch.device("cuda")
offload_device = torch.device("cpu")

pipeline.transformer.enable_group_offload(onload_device=onload_device, offload_device=offload_device, offload_type="leaf_level", use_stream=True)
pipeline.vae.enable_group_offload(onload_device=onload_device, offload_device=offload_device, offload_type="leaf_level", use_stream=True)
apply_group_offloading(pipeline.text_encoder, onload_device=onload_device, offload_device=offload_device, offload_type="leaf_level", use_stream=True)
apply_group_offloading(pipeline.text_encoder_2, onload_device=onload_device, offload_device=offload_device, offload_type="leaf_level", use_stream=True)

# compile
pipeline.transformer.to(memory_format=torch.channels_last)
pipeline.transformer.compile(mode="max-autotune", fullgraph=True)
pipeline(
    "cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California, highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain"
).images[0]
```
\ No newline at end of file