[ET-VK][BE] Remove usage of vTensorPtr and get_tensor #13156
Conversation
Note that although the volume of changes in this diff is very high, the changes themselves are extremely mechanical. This diff was written almost entirely with an LLM, but I have looked through each file and validated the changes.

## Changes

This diff updates callsites that use `graph->get_tensor(value_ref)` to instead operate on the `ValueRef` directly. A simple example (and the vast majority of changes in this diff) is a change such as:

```
vTensorPtr tensor = graph->get_tensor(tensor_ref);
some_fn(tensor->sizes());
```

to instead be

```
std::vector<int64_t> tensor_sizes = graph->sizes_of(tensor_ref);
some_fn(tensor_sizes);
```

or

```
some_fn(graph->sizes_of(tensor_ref));
```

## Motivation

Overall, the goal is to make the `get_tensor()` API protected so that it can only be used in specific situations.

In addition to the primary motivation of improving the consistency of API usage throughout the codebase, there is a practical benefit as well. `get_tensor` has the limitation that no values can be added to the graph while the returned `vTensorPtr` is in scope. Also, forcing tensor modifications via functions like `virtual_resize()` to go through the `ComputeGraph` will allow the graph to track changes for the purposes of determining when a command buffer re-encode or resize propagation is necessary, which will result in performance benefits.

Differential Revision: [D79564594](https://our.internmc.facebook.com/intern/diff/D79564594/)

[ghstack-poisoned]
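To make the motivation concrete, below is a minimal, self-contained C++ sketch of the intended pattern: `get_tensor()` kept protected, reads going through per-value accessors like `sizes_of()`, and mutations like `virtual_resize()` routed through the graph so it can record that a re-encode may be needed. This is not the real ExecuTorch `ComputeGraph`; `MockComputeGraph`, `MockTensor`, `add_tensor`, and the `needs_reencode` flag are simplified stand-ins for illustration only.

```cpp
// Minimal sketch (assumed names, not the real ExecuTorch API): a ComputeGraph-like
// wrapper that keeps get_tensor() protected and exposes per-value accessors, so the
// graph observes mutations (e.g. virtual_resize) and can flag a needed re-encode.
#include <cstdint>
#include <iostream>
#include <memory>
#include <vector>

using ValueRef = int32_t;

// Stand-in for vTensor: only the members needed for the illustration.
struct MockTensor {
  std::vector<int64_t> sizes;
  void virtual_resize(const std::vector<int64_t>& new_sizes) { sizes = new_sizes; }
};

class MockComputeGraph {
 public:
  ValueRef add_tensor(std::vector<int64_t> sizes) {
    tensors_.push_back(std::make_unique<MockTensor>());
    tensors_.back()->sizes = std::move(sizes);
    return static_cast<ValueRef>(tensors_.size() - 1);
  }

  // Read-only accessor used by callsites instead of get_tensor()->sizes().
  std::vector<int64_t> sizes_of(ValueRef ref) const { return tensors_[ref]->sizes; }

  // Mutations go through the graph, so it can track that a resize happened.
  void virtual_resize(ValueRef ref, const std::vector<int64_t>& new_sizes) {
    tensors_[ref]->virtual_resize(new_sizes);
    needs_reencode_ = true;  // hypothetical flag: a command buffer re-encode may be needed
  }

  bool needs_reencode() const { return needs_reencode_; }

 protected:
  // Kept protected: only the graph itself may hand out raw tensor pointers.
  MockTensor* get_tensor(ValueRef ref) { return tensors_[ref].get(); }

 private:
  std::vector<std::unique_ptr<MockTensor>> tensors_;
  bool needs_reencode_ = false;
};

int main() {
  MockComputeGraph graph;
  const ValueRef t = graph.add_tensor({2, 3});
  graph.virtual_resize(t, {4, 3});
  for (int64_t s : graph.sizes_of(t)) {
    std::cout << s << " ";
  }
  std::cout << "\nneeds_reencode: " << graph.needs_reencode() << "\n";
  return 0;
}
```

The key design point is simply that the graph sits in the loop for every read and mutation; that is what enables the re-encode and resize-propagation tracking described in the Motivation section above.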
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/13156
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 Active SEV: There is 1 currently active SEV. If your PR is affected, please view it below.
❌ 1 New Failure, 1 Unrelated Failure as of commit 7c32276 with merge base 6bc312a.
NEW FAILURE: The following job has failed.
BROKEN TRUNK: The following job failed but was present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D79564594
Merged commit eb4a1ec into gh/SS-JIA/267/base
Stack from ghstack (oldest at bottom):
- DynamicDispatchNode #13159
- DynamicDispatchNode #13158
- get_tensor() API protected #13157
- vTensorPtr and get_tensor #13156 (this PR)