
Fix GradScaler import on torch >= 2.3.0 for benchmark_optimizer.py #620


Merged: 4 commits from the fix-grad-scaler branch were merged into learning-at-home:master on Mar 16, 2025

Conversation

Vectorrent (Contributor)

No description provided.

codecov bot commented Jul 13, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 86.07%. Comparing base (d20e810) to head (f34f525).
Report is 4 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master     #620      +/-   ##
==========================================
+ Coverage   85.39%   86.07%   +0.67%     
==========================================
  Files          81       81              
  Lines        8006     8014       +8     
==========================================
+ Hits         6837     6898      +61     
+ Misses       1169     1116      -53     

see 3 files with indirect coverage changes

@mryab (Member) left a comment:

Thanks for the PR! I have one small concern about compatibility, but apart from that we should be good to go

@@ -98,7 +98,7 @@ def run_trainer(batch_size: int, batch_time: float, client_mode: bool, verbose:
         grad_scaler = hivemind.GradScaler()
     else:
         # check that hivemind.Optimizer supports regular PyTorch grad scaler as well
-        grad_scaler = torch.cuda.amp.GradScaler(enabled=args.use_amp)
+        grad_scaler = torch.amp.GradScaler(enabled=args.use_amp)
@mryab (Member) commented on this change:
One small question just to make sure: if the user has torch==1.9.0 (earliest supported version in requirements.txt), does this version already have torch.amp? Maybe we need to bump the version in requirements to make sure it won't break existing code?

@Vectorrent (Contributor, Author) replied:

This part used to exist, but got lost somewhere while splitting this work into multiple PRs. I've restored the logic that imports GradScaler differently depending on the installed Torch version.

To answer your question, no, torch.amp does not exist in 1.9.
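
For context, here is a minimal sketch of this kind of version gate. It is an illustration, not the exact code from the PR; it assumes `packaging` is installed, and the benchmark itself reads the flag from `args.use_amp` rather than a literal:

```python
import torch
from packaging import version  # assumed available; a plain feature check would also work

# torch.amp.GradScaler (device-agnostic) was introduced in torch 2.3.0; older
# releases, down to the torch 1.9 minimum in requirements.txt, only provide the
# CUDA-specific scaler under torch.cuda.amp.
if version.parse(torch.__version__) >= version.parse("2.3.0"):
    GradScaler = torch.amp.GradScaler
else:
    GradScaler = torch.cuda.amp.GradScaler

grad_scaler = GradScaler(enabled=True)  # in the benchmark this flag would come from args.use_amp
```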

@Vectorrent changed the title from "fix grad scalar import on torch > 2.3.0" to "fix grad scalar import on torch >= 2.3.0" on Jul 13, 2024
@mryab force-pushed the fix-grad-scaler branch from f34f525 to c6ef1f7 on July 13, 2024 at 12:50
@mryab changed the title from "fix grad scalar import on torch >= 2.3.0" to "Fix grad scaler import on torch >= 2.3.0" on Jul 13, 2024
@mryab changed the title from "Fix grad scaler import on torch >= 2.3.0" to "Fix GradScaler import on torch >= 2.3.0" on Jul 13, 2024
@mryab force-pushed the fix-grad-scaler branch from 2197f84 to 507dd45 on March 15, 2025 at 12:16
@mryab changed the title from "Fix GradScaler import on torch >= 2.3.0" to "Fix GradScaler import on torch >= 2.3.0 for benchmark_optimizer.py" on Mar 15, 2025
@mryab force-pushed the fix-grad-scaler branch from 507dd45 to 71e9d66 on March 15, 2025 at 14:34
@mryab force-pushed the fix-grad-scaler branch from 71e9d66 to 5ba5abc on March 15, 2025 at 23:16
@mryab (Member) left a comment:

Thank you!

@mryab merged commit a19b61d into learning-at-home:master on Mar 16, 2025
20 of 30 checks passed
mryab added a commit that referenced this pull request on Apr 20, 2025, with the message:

* Fix GradScaler import on torch >= 2.3.0 for benchmark_optimizer.py

* Use autocast depending on the version

---------

Co-authored-by: Max Ryabinin <mryabinin0@gmail.com>
(cherry picked from commit a19b61d)
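
The "Use autocast depending on the version" commit applies the same idea to the autocast context manager. A minimal sketch, assuming the same 2.3.0 cutoff as for GradScaler (the actual commit may gate on a different version):

```python
import functools
import torch
from packaging import version  # assumed available

# torch.amp.autocast takes an explicit device_type, while the older
# torch.cuda.amp.autocast spelling is CUDA-only and deprecated in recent releases.
if version.parse(torch.__version__) >= version.parse("2.3.0"):
    autocast = functools.partial(torch.amp.autocast, "cuda")
else:
    autocast = torch.cuda.amp.autocast

with autocast(enabled=True):
    ...  # the benchmark's forward pass would run here under mixed precision
```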