
Conversation

@helunwencser (Contributor) commented on Oct 30, 2024

Stack from ghstack (oldest at bottom):


This PR adds `KVCacheWithAttentionSink`, which is required for `AttentionSink`. It keeps the first `sink_size` tokens as attention sinks and maintains a sliding window with `window_size` for new tokens.

Note: I am trying to implement and verify `AttentionSink` in eager mode first, so the current implementation may still have some errors or performance issues. For example, it does not support the case where dynamic shape is disabled. I will resolve these problems when we are ready to deploy `AttentionSink` to edge.
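
To make the eviction policy concrete, here is a minimal, hypothetical sketch in eager PyTorch. The class name, tensor shapes, and `update` signature below are illustrative assumptions, not the API this PR adds; a real implementation also needs to re-adjust positional embeddings after shifting the window, which this sketch omits.

```python
import torch


class KVCacheWithAttentionSinkSketch:
    """Hypothetical sketch: keep the first `sink_size` tokens forever and
    at most `window_size` of the most recent tokens after them."""

    def __init__(self, sink_size: int, window_size: int, n_heads: int, head_dim: int):
        self.sink_size = sink_size
        self.window_size = window_size
        max_len = sink_size + window_size
        # Preallocated [batch=1, heads, seq, head_dim] caches.
        self.k_cache = torch.zeros(1, n_heads, max_len, head_dim)
        self.v_cache = torch.zeros(1, n_heads, max_len, head_dim)
        self.cache_len = 0  # number of valid entries currently stored

    def update(self, k: torch.Tensor, v: torch.Tensor):
        """Append k/v of shape [1, heads, new_len, head_dim], evicting the
        oldest non-sink tokens when the cache would overflow."""
        new_len = k.size(2)
        assert new_len <= self.window_size, "new tokens exceed the window"
        max_len = self.sink_size + self.window_size
        overflow = self.cache_len + new_len - max_len
        if overflow > 0:
            # Slide the window left by `overflow`, leaving the first
            # `sink_size` entries (the attention sinks) untouched.
            keep = self.cache_len - self.sink_size - overflow
            src = slice(self.sink_size + overflow, self.cache_len)
            dst = slice(self.sink_size, self.sink_size + keep)
            self.k_cache[:, :, dst] = self.k_cache[:, :, src].clone()
            self.v_cache[:, :, dst] = self.v_cache[:, :, src].clone()
            self.cache_len -= overflow
        self.k_cache[:, :, self.cache_len : self.cache_len + new_len] = k
        self.v_cache[:, :, self.cache_len : self.cache_len + new_len] = v
        self.cache_len += new_len
        return (
            self.k_cache[:, :, : self.cache_len],
            self.v_cache[:, :, : self.cache_len],
        )


# Usage: decode one token at a time; the cache never grows past
# sink_size + window_size = 4 + 124 = 128 entries.
cache = KVCacheWithAttentionSinkSketch(sink_size=4, window_size=124, n_heads=8, head_dim=64)
for _ in range(200):
    k = torch.randn(1, 8, 1, 64)
    v = torch.randn(1, 8, 1, 64)
    keys, values = cache.update(k, v)
```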

Differential Revision: [D65235798](https://our.internmc.facebook.com/intern/diff/D65235798/)

[ghstack-poisoned]
pytorch-bot (bot) commented on Oct 30, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/6579

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 9105c8f with merge base c726a9b:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label on Oct 30, 2024.
@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D65235798

helunwencser added a commit that referenced this pull request on Nov 28, 2024

Pull Request resolved: #6579
ghstack-source-id: 255715047
@exported-using-ghexport

Differential Revision: [D65235798](https://our.internmc.facebook.com/intern/diff/D65235798/)

@facebook-github-bot merged commit 8d30fc1 into gh/helunwencser/70/base on Nov 28, 2024 (41 of 43 checks passed).
@facebook-github-bot deleted the gh/helunwencser/70/head branch on November 28, 2024 at 01:54.
kirklandsign pushed a commit that referenced this pull request on Dec 2, 2024

add KVCacheWithAttentionSink

Pull Request resolved: #6579
ghstack-source-id: 255715047
@exported-using-ghexport

Differential Revision: [D65235798](https://our.internmc.facebook.com/intern/diff/D65235798/)

Co-authored-by: Lunwen He <lwhecser@gmail.com>
kedarnath03 pushed a commit to kedarnath03/executorch that referenced this pull request on Jun 25, 2025

Pull Request resolved: pytorch/executorch#6579
ghstack-source-id: 254019779
@exported-using-ghexport

Differential Revision: [D65235798](https://our.internmc.facebook.com/intern/diff/D65235798/)

Labels: CLA Signed, fb-exported
