[ET-VK] Allow specifying multiple storage types/memory layouts for an operator + register group norm operator #11828
base: gh/SS-JIA/248/base
Conversation
## Changes

* Handle cases where an operator needs to specify a separate storage type / memory layout for each individual output.

## Motivation

Required for the group norm operator.

## Future Work

Currently, the `tag_memory_meta_pass` graph pass assumes that all tensors participating in a computation (aside from weights) will have the same storage type and memory layout. As more operators are added, there are more exceptions to this rule. The pass may need an update in the near future to make it possible to specify required storage types and memory layouts at a more granular level.

Differential Revision: [D77038781](https://our.internmc.facebook.com/intern/diff/D77038781/)
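To illustrate why a single per-operator storage/layout choice is not enough, here is a minimal sketch of a registry that records one (storage type, memory layout) pair per output. The enum names, registry, and the `register_op` helper are all hypothetical, invented for illustration; they are not the actual ExecuTorch Vulkan API. Group norm is a natural example because it produces multiple outputs (the normalized tensor plus per-group statistics) that may want different representations.

```python
from enum import Enum, auto

# Hypothetical enums mirroring the kinds of tensor representations a
# Vulkan delegate might choose between (illustrative names only).
class StorageType(Enum):
    BUFFER = auto()
    TEXTURE_3D = auto()

class MemoryLayout(Enum):
    WIDTH_PACKED = auto()
    CHANNELS_PACKED = auto()

# Hypothetical registry: operator name -> one (storage, layout) spec
# per output, instead of a single spec for the whole operator.
OP_OUTPUT_SPECS: dict[str, list[tuple[StorageType, MemoryLayout]]] = {}

def register_op(name: str,
                output_specs: list[tuple[StorageType, MemoryLayout]]) -> None:
    """Register an operator with one (storage, layout) pair per output."""
    OP_OUTPUT_SPECS[name] = output_specs

# Group norm produces three outputs (out, mean, rstd); the main output
# might live in a texture while the per-group statistics stay in buffers.
register_op(
    "native_group_norm",
    [
        (StorageType.TEXTURE_3D, MemoryLayout.CHANNELS_PACKED),  # out
        (StorageType.BUFFER, MemoryLayout.WIDTH_PACKED),         # mean
        (StorageType.BUFFER, MemoryLayout.WIDTH_PACKED),         # rstd
    ],
)
```

A memory-tagging pass could then consult the per-output list when annotating graph nodes, rather than assuming every tensor in the computation shares one representation.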
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/11828. Note: links to docs will display an error until the docs builds have completed. ✅ No failures as of commit 3284023 with merge base 608a745. (This comment was automatically generated by Dr. CI and updates every 15 minutes.)
This pull request was exported from Phabricator. Differential Revision: D77038781
Stack from ghstack (oldest at bottom):