
[ET-VK] Allow specifying multiple storage types/memory layouts for an operator + register group norm operator #11828


Open · wants to merge 3 commits into base: gh/SS-JIA/248/base

Conversation

@SS-JIA (Contributor) commented Jun 20, 2025

Stack from ghstack (oldest at bottom):

Changes

  • Handle cases where an operator needs to specify a separate storage type / memory layout for each individual output.

Motivation

Required for the group norm operator.
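
For context, the standard `native_group_norm` operator produces three outputs with very different shapes, which is why a single uniform storage type / memory layout cannot cover all of them. A quick demonstration using the stock PyTorch operator (this shows output shapes only and says nothing about the Vulkan storage choices this PR actually makes):

```python
import torch

# aten.native_group_norm returns (out, mean, rstd):
#   out:  same shape as the input (the normalized activations)
#   mean: (N, groups) per-group means
#   rstd: (N, groups) per-group reciprocal standard deviations
N, C, H, W, groups = 2, 8, 4, 4, 4
x = torch.randn(N, C, H, W)
weight, bias = torch.ones(C), torch.zeros(C)

out, mean, rstd = torch.ops.aten.native_group_norm(
    x, weight, bias, N, C, H * W, groups, 1e-5
)
print(out.shape, mean.shape, rstd.shape)
# torch.Size([2, 8, 4, 4]) torch.Size([2, 4]) torch.Size([2, 4])
```

Since the main output is a large image-like tensor while the statistics are small per-group vectors, a GPU backend may plausibly want, for example, texture storage for `out` but buffer storage for `mean` and `rstd`; that is the kind of per-output flexibility this change enables.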

Future Work

Currently, the tag_memory_meta_pass graph pass assumes that all tensors participating in a computation (aside from weights) share the same storage type and memory layout. As more operators are added, exceptions to this rule are becoming more common.

The pass will likely need an update in the near future so that required storage types and memory layouts can be specified at a more granular level, e.g. per individual input or output rather than for the computation as a whole.
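
Below is a minimal sketch of what such a granular specification could look like. This is an illustrative design only, not the actual `tag_memory_meta_pass` API; all names in it (`OP_OUTPUT_SPECS`, `output_specs`, the storage/layout strings) are hypothetical:

```python
# Hypothetical sketch; not the actual tag_memory_meta_pass implementation.
# Instead of assuming one uniform (storage type, memory layout) for every
# non-weight tensor in a computation, the pass could consult a per-op table
# keyed by output index.
from typing import Dict, List, Tuple

Spec = Tuple[str, str]  # (storage type, memory layout)

OP_OUTPUT_SPECS: Dict[str, List[Spec]] = {
    # One spec per output; the concrete choices below are assumptions made
    # for illustration, not what the PR necessarily selects.
    "aten.native_group_norm.default": [
        ("texture_3d", "channels_packed"),  # out: image-like activations
        ("buffer", "width_packed"),         # mean: small per-group statistics
        ("buffer", "width_packed"),         # rstd: same shape as mean
    ],
}

DEFAULT_SPEC: Spec = ("texture_3d", "channels_packed")

def output_specs(op_target: str, num_outputs: int) -> List[Spec]:
    """Return per-output requirements for an op, falling back to the
    current uniform behavior for ops without an entry."""
    return OP_OUTPUT_SPECS.get(op_target, [DEFAULT_SPEC] * num_outputs)
```

An extended version of the same idea could key the table by input index as well, which would let the pass drop the all-tensors-match assumption entirely.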

Differential Revision: D77038781


pytorch-bot bot commented Jun 20, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/11828

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 3284023 with merge base 608a745:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

SS-JIA added a commit that referenced this pull request Jun 20, 2025
… operator + register group norm operator

ghstack-source-id: 291701330
Pull Request resolved: #11828
@facebook-github-bot added the CLA Signed label Jun 20, 2025
@facebook-github-bot (Contributor) commented

This pull request was exported from Phabricator. Differential Revision: D77038781


This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

SS-JIA added a commit that referenced this pull request Jun 23, 2025
… operator + register group norm operator

Pull Request resolved: #11828
ghstack-source-id: 292082369

SS-JIA added a commit that referenced this pull request Jun 23, 2025
… operator + register group norm operator

Pull Request resolved: #11828
ghstack-source-id: 292141398

Labels: CLA Signed, fb-exported