[executorch] Add TorchAO wrapper config to allow filter_fn for quantize_ #13386
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/13386. Note: links to docs will display an error until the docs builds have completed.

❌ 2 New Failures, 1 Pending, 5 Unrelated Failures as of commit c82c8bf with merge base 46dd51a. The unrelated failures were either flaky (likely due to flakiness present on trunk) or broken trunk (already failing on the merge base); 👉 rebase onto the `viable/strict` branch to avoid the broken-trunk failures.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D80206543
Merged commit 93e616d into gh/abhinaykukkadapu/6/base.
Follow-up: Fixing typo introduced in #13386 (pytorch#13386).
Stack from ghstack (oldest at bottom):
Fixing tests for the stack that got reverted: #13264

Changes:
- Support a filter function (`filter_fn`) in the `quantize_` call when using torchao quantization (see the sketch after this list).
- Update unit tests accordingly.
- Use `ComposableQuantizer` when there are multiple quantizers of the torchao type; legacy quantizers are used directly with `prepare_pt2e` (see the PT2E sketch below).
- The source transform modifies the model in place, so deep-copy it first to avoid modifying the user-provided model (also shown in the PT2E sketch below).
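For reference, a minimal sketch of how a `filter_fn` restricts torchao's `quantize_` to selected modules. This is illustrative, not this PR's wrapper config: `TinyModel` and the skip rule are made up, and the config name varies across torchao releases (older versions expose `int8_weight_only()` instead of `Int8WeightOnlyConfig`).

```python
import torch
import torch.nn as nn
from torchao.quantization import quantize_, Int8WeightOnlyConfig

class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(16, 32)
        self.decoder = nn.Linear(32, 16)

    def forward(self, x):
        return self.decoder(torch.relu(self.encoder(x)))

model = TinyModel()

# filter_fn receives (module, fully_qualified_name) and returns True for
# modules that should be quantized; here every Linear except the decoder.
def skip_decoder(module: nn.Module, fqn: str) -> bool:
    return isinstance(module, nn.Linear) and not fqn.startswith("decoder")

quantize_(model, Int8WeightOnlyConfig(), filter_fn=skip_decoder)
```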
Differential Revision: D80206543
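And a PT2E sketch covering the last two bullets: deep-copying before the in-place source transform, and combining multiple quantizers with `ComposableQuantizer`. The helper name `quantize_pt2e_flow` is hypothetical and the import paths are assumptions (quantizer locations have moved between `torch.ao`, torchao, and ExecuTorch across releases), so treat this as an outline rather than the PR's actual code.

```python
import copy
import torch
from torch.ao.quantization.quantize_pt2e import prepare_pt2e, convert_pt2e
from torch.ao.quantization.quantizer.composable_quantizer import ComposableQuantizer

# Hypothetical helper, not from this PR: illustrates the deep-copy and
# quantizer-selection behavior described in the Changes list.
def quantize_pt2e_flow(user_model, example_inputs, quantizers):
    # The source transform mutates the module in place, so work on a copy
    # to leave the user-provided model untouched.
    model = copy.deepcopy(user_model)

    # Export to an ATen graph for PT2E quantization.
    graph_module = torch.export.export(model, example_inputs).module()

    # Multiple (torchao-style) quantizers are combined into a single
    # ComposableQuantizer; a lone legacy quantizer goes straight to
    # prepare_pt2e.
    if len(quantizers) > 1:
        quantizer = ComposableQuantizer(list(quantizers))
    else:
        quantizer = quantizers[0]

    prepared = prepare_pt2e(graph_module, quantizer)
    prepared(*example_inputs)  # calibration pass
    return convert_pt2e(prepared)
```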