Regional compilation
Instead of compiling the entire model at once, regional compilation targets repeated blocks (such as decoder layers) first. This allows the compiler to cache and reuse optimized code for subsequent blocks, significantly reducing the cold-start compilation time typically seen during the first inference. Thanks @IlyasMoutawwakil for the feature! You can view the full benchmark here, and check out our updated compilation guide for more details!
To enable this feature, set use_regional_compilation=True in the TorchDynamoPlugin configuration.
from accelerate import Accelerator
from accelerate.utils import TorchDynamoPlugin

# Configure the compilation backend
dynamo_plugin = TorchDynamoPlugin(
    use_regional_compilation=True,
    # ... other parameters
)

# Initialize the accelerator with the plugin
accelerator = Accelerator(dynamo_plugin=dynamo_plugin)

# This will apply compile_regions to your model
model = accelerator.prepare(model)
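The comment above references the compile_regions utility, which can also be applied to a model directly if you are not going through Accelerator. A minimal sketch, assuming compile_regions forwards extra keyword arguments to torch.compile:

from accelerate.utils import compile_regions

# Sketch: compile each repeated block (e.g. a decoder layer) separately, so
# the optimized code for the first block is cached and reused for the rest.
model = ...  # any torch.nn.Module with repeated sub-blocks
compiled_model = compile_regions(model, mode="reduce-overhead")  # kwargs assumed to reach torch.compile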
Layerwise casting hook
We've introduced a new hook that enables per-layer upcasting and downcasting (e.g., for Linear layers) during inference. This allows users to run models with separate storage and compute dtypes, resulting in memory savings. The concept was first implemented in diffusers, where downcasting models to FP8 proved effective without major quality degradation. Contributed by @sayakpaul in #3427.
import torch
from accelerate.hooks import attach_layerwise_casting_hooks

model = ...  # your model

# Weights are stored in FP8 and upcast to bfloat16 for computation
storage_dtype = torch.float8_e4m3fn
compute_dtype = torch.bfloat16

attach_layerwise_casting_hooks(
    model,
    storage_dtype=storage_dtype,
    compute_dtype=compute_dtype,
)
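Conceptually, the hook upcasts a layer's weights to the compute dtype just before its forward pass and downcasts them back to the storage dtype afterwards. A simplified sketch of the idea (not Accelerate's actual implementation) using plain PyTorch hooks:

import torch
import torch.nn as nn

def add_casting_hooks(layer: nn.Module, storage_dtype: torch.dtype, compute_dtype: torch.dtype):
    # Upcast just before the layer computes ...
    def upcast(module, args):
        module.to(compute_dtype)

    # ... and downcast right after, so weights rest in low precision.
    def downcast(module, args, output):
        module.to(storage_dtype)

    layer.register_forward_pre_hook(upcast)
    layer.register_forward_hook(downcast)

# Example: attach to every Linear layer, storing FP8 and computing in bf16
# for linear in model.modules():
#     if isinstance(linear, nn.Linear):
#         add_casting_hooks(linear, torch.float8_e4m3fn, torch.bfloat16)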
Better FSDP2 support
This release includes numerous new features and bug fixes. Notably, we've added support for FULL_STATE_DICT, a widely used option in FSDP, which enables .save_pretrained() in transformers to work with FSDP2-wrapped models. QLoRA training is now supported as well, though more testing is needed. We have also resolved a backend issue related to parameter offloading to CPU, and fixed a significant memory spike that occurred when cpu_ram_efficient_loading=True was enabled. Several other minor improvements and fixes are also included; see the What's Changed section for full details.
- FULL_STATE_DICT enabled by @S1ro1 in #3527
- QLoRA support by @winglian in #3546
- Set backend correctly for CUDA+FSDP2+cpu-offload by @SunMarc in #3574
- Memory spike fixed when using cpu_ram_efficient_loading=True by @S1ro1 in #3482
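For illustration, here is a minimal sketch of saving an FSDP2-wrapped transformers model with a full state dict; the plugin fields shown are assumptions, so check the FSDP usage guide for the canonical configuration:

from accelerate import Accelerator, FullyShardedDataParallelPlugin

# Sketch: FSDP2 with a full (gathered) state dict, so that a transformers
# model can be saved with complete, unsharded weights.
fsdp_plugin = FullyShardedDataParallelPlugin(
    fsdp_version=2,                       # assumed field selecting FSDP2
    state_dict_type="FULL_STATE_DICT",
)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
model = accelerator.prepare(model)

# ... training ...
accelerator.unwrap_model(model).save_pretrained(
    "my-checkpoint",
    is_main_process=accelerator.is_main_process,
    save_function=accelerator.save,
    state_dict=accelerator.get_state_dict(model),
)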
Better HPU support
We have added documentation for Intel Gaudi hardware! Support has been available since v1.5.0 through this PR.
- Add the HPU into accelerate config by @yuanwu2017 in #3495
- Add Gaudi doc by @regisss in #3537
torch.compile breaking change for the dynamic argument
We've updated the logic for setting self.dynamic to explicitly preserve None rather than defaulting to False when the USE_DYNAMIC environment variable is unset. This change aligns the behavior with the PyTorch documentation for torch.compile. Thanks to @yafshar for contributing this improvement in #3567.
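Concretely, unset and explicitly disabled are no longer conflated. A small sketch of the new parsing behavior (names are illustrative, not the actual internals):

import os

# None when USE_DYNAMIC is unset: torch.compile then decides dynamism itself,
# instead of being forced to dynamic=False as before.
raw = os.environ.get("USE_DYNAMIC")
dynamic = None if raw is None else raw.lower() in ("1", "true", "yes")

# Later passed through as: torch.compile(model, dynamic=dynamic)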
What's Changed
- use device agnostic torch.OutOfMemoryError from pytorch 2.5.0 by @yao-matrix in #3475
- Adds style bot by @zach-huggingface in #3478
- Fix a tiny typo in low_precision_training guide by @sadra-barikbin in #3488
- Fix check_tied_parameters_in_config for multimodal models by @SunMarc in #3479
- Don't create new param for TorchAO sequential offloading due to weak BC guarantees by @a-r-r-o-w in #3444
- add support for custom function for reducing the batch size by @winglian in #3071
- Fix fp8 deepspeed config by @SunMarc in #3492
- fix warning error by @faaany in #3491
- [bug] unsafe_serialization option in "merge-weights" doesn't work by @cyr0930 in #3496
- Add the HPU into accelerate config by @yuanwu2017 in #3495
- Use torch.distributed.checkpoint.state_dict.set_model_state_dict in load_checkpoint_in_model by @ringohoffman in #3432
- nit: needed sanity checks for fsdp2 by @kmehant in #3499
- (Part 1) fix: make TP training compatible with new transformers by @kmehant in #3457
- Fix deepspeed tests by @S1ro1 in #3503
- Add FP8 runners + tweak building FP8 image by @zach-huggingface in #3493
- fix: apply torchfix to set weights_only=True by @bzhong-solink in #3497
- Fix: require transformers version for tp tests by @S1ro1 in #3504
- Remove deprecated PyTorch/XLA APIs by @zpcore in #3484
- Fix cache issue by upgrading github actions version by @SunMarc in #3513
- [Feat] Layerwise casting hook by @sayakpaul in #3427
- Add torchao to FP8 error message by @jphme in #3514
- Fix unwanted cuda init due to torchao by @SunMarc in #3530
- Solve link error in internal_mechanism documentation (#3506) by @alvaro-mazcu in #3507
- [FSDP2] Enable FULL_STATE_DICT by @S1ro1 in #3527
- [FSDP2] Fix memory spike with cpu_ram_efficient_loading=True by @S1ro1 in #3482
- [FSDP2] Issues in Wrap Policy and Mixed Precision by @jhliu17 in #3528
- Fix logic in accelerator.prepare + IPEX for 2+ nn.Models and/or optim.Optimizers by @mariusarvinte in #3517
- Update Docker builds to align with CI requirements by @matthewdouglas in #3532
- Fix CI due to missing package by @SunMarc in #3535
- Update big_modeling.md for layerwise casting by @sayakpaul in #3548
- [FSDP2] Fix: "..." is not a buffer or a parameter by @S1ro1 in
- fix notebook_launcher for Colab TPU compatibility. by @BogdanDidenko in #3541
- Fix typos by @omahs in #3549
- Dynamo regional compilation by @IlyasMoutawwakil in #3529
- add support for port 0 auto-selection in multi-GPU environments by @hellobiondi in #3501
- Fix the issue where set_epoch does not take effect by @hongjx175 in #3556
- [FSDP2] Fix casting in _cast_and_contiguous by @dlvp in #3559
- [FSDP] Make env var and dataclass flag consistent for cpu_ram_efficient_loading by @SumanthRH in #3307
- canonicalize optimized names before fixing optimizer in fsdp2 by @pstjohn in #3560
- [docs] update deepspeed config path by @faaany in #3561
- preserve parameter keys when removing prefix by @mjkvaak-amd in #3564
- Add Gaudi doc by @regisss in #3537
- Update dynamic env handling to preserve None when USE_DYNAMIC is unset by @yafshar in #3567
- add a synchronize call for xpu in _gpu_gather by @faaany in #3563
- simplify model.to logic by @yao-matrix in #3562
- tune env command output by @yao-matrix in #3570
- Add regional compilation to cli tools and env vars by @IlyasMoutawwakil in #3572
- reenable FSDP2+qlora support by @winglian in #3546
- Fix prevent duplicate GPU usage in distributed processing by @ved1beta in #3526
- set backend correctly for CUDA+FSDP2+cpu-offload by @SunMarc in #3574
- enable test_dispatch_model_tied_weights_memory_with_nested_offload_cpu on xpu by @yao-matrix in #3569
New Contributors
- @zach-huggingface made their first contribution in #3478
- @sadra-barikbin made their first contribution in #3488
- @ringohoffman made their first contribution in #3432
- @bzhong-solink made their first contribution in #3497
- @zpcore made their first contribution in #3484
- @jphme made their first contribution in #3514
- @alvaro-mazcu made their first contribution in #3507
- @jhliu17 made their first contribution in #3528
- @BogdanDidenko made their first contribution in #3541
- @hellobiondi made their first contribution in #3501
- @hongjx175 made their first contribution in #3556
- @dlvp made their first contribution in #3559
- @pstjohn made their first contribution in #3560
- @mjkvaak-amd made their first contribution in #3564
- @yafshar made their first contribution in #3567
- @ved1beta made their first contribution in #3526
Full Changelog: v1.6.0...v1.7.0