Make models amenable to scan #157
Merged
Conversation
We replace the `for` loop in both Llama and Mixtral with an equivalent `HomogenousSequential` layer, which can either run a for loop or use `torch_xla`'s scan operator. This is a clean-ish way to turn scan on/off without cluttering the modeling code. I also adjusted Mixtral slightly so that we can even run `scan` in Mixtral with its static MoE implementation. Scanning over GMM, on the other hand, won't work until GMM forward/backward is wrapped in a custom op similar to pytorch/xla#8654. Test: added unit test. Next PR will change the trainer to apply scan.
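A minimal, framework-free sketch of the idea behind `HomogenousSequential` (an assumption: the real class wraps identical PyTorch decoder layers and dispatches to `torch_xla`'s scan; here layers are plain callables and a pure-Python scan with the same `fn(carry, x) -> (carry, y)` contract stands in):

```python
def scan(fn, init, xs):
    """Reference semantics of a scan: thread a carry through fn over xs."""
    carry = init
    ys = []
    for x in xs:
        carry, y = fn(carry, x)
        ys.append(y)
    return carry, ys

class HomogenousSequential:
    """Runs a stack of same-signature layers, either by loop or by scan."""

    def __init__(self, layers, use_scan=False):
        self.layers = list(layers)
        self.use_scan = use_scan

    def __call__(self, hidden):
        if not self.use_scan:
            for layer in self.layers:
                hidden = layer(hidden)
            return hidden
        # The scan step must return (new_carry, per-step output); the
        # activation is the carry, and we also emit it as the output.
        def step(carry, layer):
            out = layer(carry)
            return out, out
        final, _ = scan(step, hidden, self.layers)
        return final
```

Both code paths produce the same result, which is the point: the modeling code builds one `HomogenousSequential` and a single flag decides whether the layer stack is unrolled or scanned. (With the real `torch_xla` scan you would scan over stacked layer weights rather than Python objects.)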
bhavya01
reviewed
Mar 17, 2025
zpcore
reviewed
Mar 17, 2025
zpcore
reviewed
Mar 17, 2025
zpcore
reviewed
Mar 17, 2025
zpcore
reviewed
Mar 17, 2025
bhavya01
approved these changes
Mar 17, 2025
zpcore
reviewed
Mar 18, 2025
@zpcore I saw you added a number of comments but didn't press "Request Changes" or "Approve" -- let me know if you would like to request changes or approve.
zpcore
approved these changes
Mar 18, 2025
LGTM!
We replace the `for` loop in both Llama and Mixtral with an equivalent `HomogenousSequential` layer, which can either run a for loop or use `torch_xla`'s scan operator. This is a clean-ish way to turn scan on/off without cluttering the modeling code. I also adjusted Mixtral slightly so that we can even run `scan` in Mixtral with its static MoE implementation. In order to integrate with scan, we need to refactor the Mixtral decoder for loop into a format where results from the previous iteration feed into the next iteration. Scanning over GMM, on the other hand, won't work until GMM forward/backward is wrapped in a custom op similar to pytorch/xla#8654. Clean up the README that got jumbled in #111 while I'm here.

Test: added unit test. Next PR will change the trainer to apply scan.
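A sketch of the decoder refactor described above (the names `decoder_step` and the aux-loss carry are illustrative, not the PR's actual code). A loop that accumulates state outside its body can't be scanned directly; rewriting it so every iteration consumes and returns an explicit carry gives it the shape a scan operator needs:

```python
# Before: per-layer aux losses are accumulated outside the loop body.
def decoder_loop(hidden, layers):
    total_aux = 0.0
    for layer in layers:
        hidden, aux = layer(hidden)  # each layer returns (activations, aux loss)
        total_aux += aux
    return hidden, total_aux

# After: everything an iteration produces flows into the next one as a
# carry, matching the fn(carry, x) -> (new_carry, y) shape scan expects.
def decoder_step(carry, layer):
    hidden, total_aux = carry
    hidden, aux = layer(hidden)
    return (hidden, total_aux + aux), None

def decoder_scan(hidden, layers):
    carry = (hidden, 0.0)
    for layer in layers:  # a real scan operator replaces this driver loop
        carry, _ = decoder_step(carry, layer)
    return carry  # (final hidden states, summed aux loss)
```

The two formulations compute the same thing; the second simply makes the inter-iteration dataflow explicit so scan can thread it.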