
Conversation

pytorchbot (Collaborator)

This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #6635
^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/SS-JIA/136/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/136/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/main
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/136/orig
@diff-train-skip-merge

Pull Request resolved: #6635

## Context

There are a variety of ways that tensors can be represented in Vulkan. The two main descriptors for how a tensor is laid out in memory are:

1. Storage Type (buffer or texture)
2. Memory Layout (which dim is packed along a texel, which dim has a stride of 1, etc.)

Due to the differences between buffers and textures, and the differences between different memory layouts, an implementation for an operator may only support a specific set of (storage type, memory layout) combinations.

Furthermore, if an operator implementation supports multiple (storage type, memory layout) combinations, there may be a "preferred" setting which results in optimal performance.
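
As a rough illustration of these two descriptors (the names below are hypothetical and do not necessarily match the actual ExecuTorch Vulkan schema), a memory setting can be modeled as a (storage type, memory layout) pair, with each operator implementation advertising the pairs it accepts and the one it prefers:

```python
# Illustrative only: these enum members and names are assumptions for the
# sake of the example, not the actual ExecuTorch Vulkan schema.
from enum import Enum, auto
from typing import NamedTuple


class StorageType(Enum):
    BUFFER = auto()
    TEXTURE_3D = auto()


class MemoryLayout(Enum):
    WIDTH_PACKED = auto()
    HEIGHT_PACKED = auto()
    CHANNELS_PACKED = auto()


class MemorySetting(NamedTuple):
    storage_type: StorageType
    memory_layout: MemoryLayout


# An operator implementation can advertise which combinations it accepts and
# which one it prefers for performance.
mm_supported = {
    MemorySetting(StorageType.TEXTURE_3D, MemoryLayout.CHANNELS_PACKED),
    MemorySetting(StorageType.BUFFER, MemoryLayout.WIDTH_PACKED),
}
mm_preferred = MemorySetting(StorageType.TEXTURE_3D, MemoryLayout.CHANNELS_PACKED)
```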

These changes lay the foundation for a memory metadata tagging graph transform, which will ensure that every tensor participating in an operator call has a valid/optimal (storage type, memory layout) setting, and will insert transition operators to move input tensors to the required memory settings when necessary.
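
For intuition, the core decision such a pass makes for each input tensor could look roughly like the following (a minimal sketch under assumed types, not the code added in this PR):

```python
# Hedged sketch of the tagging idea only, not the actual graph transform.
def resolve_memory_setting(producer_setting, consumer_supported, consumer_preferred):
    """Pick the memory setting a consumer op should use for one of its inputs.

    Returns (setting, needs_transition). A transition operator (e.g. a
    buffer<->texture copy or a repacking) must be inserted before the
    consumer when the producer's setting is not supported by it.
    """
    if producer_setting in consumer_supported:
        # The tensor can be consumed as-is; no data movement needed.
        return producer_setting, False
    # Fall back to the consumer's preferred setting and request a transition.
    return consumer_preferred, True
```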

An additional required change arises from the fact that Vulkan imposes limits on texture and buffer sizes. The partitioner therefore needs to account for the storage types and memory layouts supported by an operator implementation, and check whether all tensors participating in a computation can be represented with some (storage type, memory layout) combination that the implementation supports.
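
The kind of check this implies might look roughly like this (a simplified sketch; the limit value and the extent computation are assumptions for illustration, not the backend's real logic):

```python
# Rough sketch of the size check; the per-dimension limit is a placeholder
# value, not an actual queried Vulkan device limit.
ASSUMED_MAX_IMAGE_EXTENT = 16384


def fits_in_texture(sizes, packed_dim):
    """Check whether a tensor with the given sizes can be stored as a texture
    when `packed_dim` is the dimension packed along each texel (4 elements
    per texel, so that extent is divided by 4, rounded up)."""
    extents = list(sizes)
    extents[packed_dim] = (extents[packed_dim] + 3) // 4
    return all(e <= ASSUMED_MAX_IMAGE_EXTENT for e in extents)


# Example: a [1, 512, 70000] tensor cannot be packed along its last dim into
# a texture under the assumed limit, so the partitioner would have to reject
# the node or pick a different (storage type, memory layout) combination.
print(fits_in_texture([1, 512, 70000], packed_dim=2))  # False
```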


## Changes

Improvements to the operator registry:

* Introduce utility functions to check the optimal and enabled storage types and memory layouts for an operator (a rough sketch of the idea follows below)
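
A hypothetical shape for such utilities is sketched below; the function names and the registry layout are illustrative and do not necessarily match the actual op registry API.

```python
# Illustrative registry table: op name -> (enabled settings, optimal setting).
OP_REGISTRY = {
    "aten.mm.default": (
        {"texture3d/channels_packed", "buffer/width_packed"},
        "texture3d/channels_packed",
    ),
}


def enabled_memory_settings(op_name):
    """All (storage type, memory layout) combinations the op's implementation accepts."""
    return OP_REGISTRY[op_name][0]


def optimal_memory_setting(op_name):
    """The preferred combination, used when the tagging pass is free to choose."""
    return OP_REGISTRY[op_name][1]
```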

Improvements to the Partitioner:

* Account for the storage types and memory layouts supported by an operator when deciding whether a node should be partitioned
* Improve the logic for fusable ops (e.g. the permute/transpose before a mm that can be fused into linear): check whether the final target op is supported in Vulkan and only partition those nodes if it is; otherwise, leave them unpartitioned so that another backend can fuse them (a rough sketch follows below)
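
The fusion-aware check can be thought of roughly as follows (a simplified sketch; the helper and its arguments are placeholders, not the partitioner's real interface):

```python
# Simplified sketch of the fusion-aware partitioning decision.
def should_partition_fusable_node(user_targets, vulkan_supported):
    """Claim a permute/transpose only if every op that consumes it (e.g. the
    mm it would be fused into as a linear) is itself supported in Vulkan;
    otherwise leave it for another backend to fuse."""
    return all(target in vulkan_supported for target in user_targets)


# Example: the permute feeding a supported mm is claimed; one feeding an
# unsupported op is left for another backend.
print(should_partition_fusable_node(["aten.mm.default"], {"aten.mm.default"}))      # True
print(should_partition_fusable_node(["some.unsupported.op"], {"aten.mm.default"}))  # False
```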
ghstack-source-id: 251883705
@exported-using-ghexport

Differential Revision: [D65428843](https://our.internmc.facebook.com/intern/diff/D65428843/)

pytorch-bot bot commented Nov 5, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/6668

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures

As of commit 615afb8 with merge base cd565b5:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label Nov 5, 2024. (This label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed.)
@SS-JIA merged commit cefe515 into main Nov 5, 2024
33 of 35 checks passed
@SS-JIA deleted the gh/SS-JIA/136/orig branch November 5, 2024 20:18