
v1.7.0 : Regional compilation, Layerwise casting hook, FSDPv2 + QLoRA

Released by @SunMarc on 15 May

Regional compilation

Instead of compiling the entire model at once, regional compilation targets repeated blocks (such as decoder layers) first. This allows the compiler to cache and reuse optimized code for subsequent blocks, significantly reducing the cold-start compilation time typically seen during the first inference. Thanks to @IlyasMoutawwakil for the feature! You can view the full benchmark here, and check out our updated compilation guide for more details!

[Figure: compilation time benchmark]

To enable this feature, set use_regional_compilation=True in the TorchDynamoPlugin configuration.

from accelerate import Accelerator
from accelerate.utils import TorchDynamoPlugin

# Configure the compilation backend
dynamo_plugin = TorchDynamoPlugin(
    use_regional_compilation=True,
    # ... other parameters
)
# Initialize the accelerator with the plugin
accelerator = Accelerator(dynamo_plugin=dynamo_plugin)
# This will apply compile_regions to your model
model = accelerator.prepare(model)
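A minimal usage sketch of where the savings show up (the model and example_batch names are hypothetical, and timings depend on your hardware): the first forward pass compiles one repeated block and reuses the cached artifact for the remaining blocks, while later calls skip compilation entirely.

import time
import torch

# `model` was prepared with the regional compilation plugin above;
# `example_batch` is a hypothetical input with the expected shape and dtype.
with torch.no_grad():
    start = time.perf_counter()
    model(example_batch)  # first call: compiles one repeated block, reuses it for the rest
    print(f"cold start: {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    model(example_batch)  # later calls: hit the compile cache, no recompilation
    print(f"warm call: {time.perf_counter() - start:.2f}s")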

Layerwise casting hook

We've introduced a new hook that enables per-layer upcasting and downcasting (e.g., for Linear layers) during inference. This allows users to run models with separate storage and compute dtypes, resulting in memory savings. The concept was first implemented in diffusers, where downcasting models to FP8 proved effective without major quality degradation. Contributed by @sayakpaul in #3427.

import torch
from accelerate.hooks import attach_layerwise_casting_hooks

model = ...  # your model
storage_dtype = torch.float8_e4m3fn
compute_dtype = torch.bfloat16
attach_layerwise_casting_hooks(
    model,
    storage_dtype=storage_dtype,
    compute_dtype=compute_dtype,
)
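For intuition, here is a simplified sketch of the idea behind such a hook for a single Linear layer; this is an illustration only, not Accelerate's actual implementation. Weights stay in the low-precision storage dtype between calls and are temporarily cast to the compute dtype around each forward pass.

import torch

# Illustration only: keep weights in FP8 storage, compute each forward in bfloat16.
linear = torch.nn.Linear(1024, 1024)
storage_dtype, compute_dtype = torch.float8_e4m3fn, torch.bfloat16
linear.to(storage_dtype)  # parameters now take ~1 byte each

def upcast_before_forward(module, args):
    module.to(compute_dtype)
    return tuple(a.to(compute_dtype) for a in args)

def downcast_after_forward(module, args, output):
    module.to(storage_dtype)
    return output

linear.register_forward_pre_hook(upcast_before_forward)
linear.register_forward_hook(downcast_after_forward)

out = linear(torch.randn(1, 1024))  # runs in bfloat16, weights return to FP8 afterwards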

Better FSDP2 support

This release includes numerous new features and bug fixes. Notably, we've added support for FULL_STATE_DICT, a widely used option in FSDP, which now enables .save_pretrained() in transformers to work with FSDP2-wrapped models (a configuration sketch follows the list below). QLoRA training is now supported as well, though more testing is needed. We have also resolved a backend issue related to parameter offloading to CPU. Additionally, a significant memory spike that occurred when cpu_ram_efficient_loading=True was enabled has been fixed. Several other minor improvements and fixes are also included; see the What's Changed section for full details.

  • FULL_STATE_DICT has been enabled by @S1ro1 in #3527
  • QLoRA support by @winglian in #3546
  • set backend correctly for CUDA+FSDP2+cpu-offload in #3574
  • memory spike fixed when using cpu_ram_efficient_loading=True by @S1ro1 in #3482
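As a hedged configuration sketch (the plugin and method names below follow the public Accelerate and transformers APIs as we understand them; check the FSDP docs for the exact spelling), this requests a full state dict for an FSDP2-wrapped model so that .save_pretrained() can write unsharded weights:

from accelerate import Accelerator
from accelerate.utils import FullyShardedDataParallelPlugin

# Sketch: fsdp_version=2 selects FSDP2, FULL_STATE_DICT gathers unsharded weights for saving.
fsdp_plugin = FullyShardedDataParallelPlugin(
    fsdp_version=2,
    state_dict_type="FULL_STATE_DICT",
)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
model = accelerator.prepare(model)  # `model` is a hypothetical transformers model

# ... training loop ...

accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained(
    "checkpoint-dir",
    state_dict=accelerator.get_state_dict(model),
)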

Better HPU support

We have added documentation for Intel Gaudi hardware! The support has been available since v1.5.0 through this PR.

torch.compile breaking change for the dynamic argument

We've updated the logic for setting self.dynamic to explicitly preserve None rather than defaulting to False when the USE_DYNAMIC environment variable is unset. This change aligns the behavior with the PyTorch documentation for torch.compile. Thanks to @yafshar for contributing this improvement in #3567.
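Concretely, the three possible values now map straight onto torch.compile's semantics. A short sketch, assuming the TorchDynamoPlugin field names:

from accelerate.utils import TorchDynamoPlugin

# dynamic=None (the default when USE_DYNAMIC is unset): torch.compile decides, turning on
# dynamic shapes automatically after a shape change triggers a recompilation.
plugin_auto = TorchDynamoPlugin(backend="inductor", dynamic=None)

# dynamic=True: compile with dynamic shapes up front; dynamic=False: specialize on static shapes.
plugin_dynamic = TorchDynamoPlugin(backend="inductor", dynamic=True)
plugin_static = TorchDynamoPlugin(backend="inductor", dynamic=False)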

What's Changed

New Contributors

Full Changelog: v1.6.0...v1.7.0