Enable interleaved sliding_window for gemma3 #1344
base: habana_main
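This PR enables Gemma 3's interleaved attention layout on the HPU backend: most layers use local sliding-window attention, with a full (global) attention layer inserted at a fixed interval. Below is a minimal sketch of how a per-layer window can be derived; the 5-local:1-global pattern and the 1024-token window are assumptions based on the published Gemma 3 architecture, not values taken from this diff.

```python
# Sketch: per-layer attention window for an interleaved local/global layout.
# ASSUMPTIONS (not taken from this PR): Gemma 3 interleaves five
# sliding-window layers with one global layer and uses a 1024-token window.
from typing import Optional


def layer_sliding_window(layer_idx: int,
                         window: int = 1024,
                         pattern: int = 6) -> Optional[int]:
    """Return the window size for `layer_idx`, or None for a global layer.

    Every `pattern`-th layer (1-based) attends globally; all other layers
    attend only to the previous `window` tokens.
    """
    is_global = (layer_idx + 1) % pattern == 0
    return None if is_global else window


# Example: print the layout for the first twelve layers.
for i in range(12):
    w = layer_sliding_window(i)
    kind = "global" if w is None else f"local(window={w})"
    print(f"layer {i:2d}: {kind}")
```

Interleaving bounds KV-cache growth on the local layers while the periodic global layers preserve long-range information flow.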
Changes from all commits
e92d432
bd75109
90a3ae1
9434197
0364786
8caf102
bff7983
dccd67e
45aaede
7df1811
e4b0397
6088039
8b13980
a9e5a7d
f783955
1297154
be41114
74e4cfb
347e965
a29d537
c9c5757
5af6870
000b4e0
658442d
61a3e2f
39e0f52
661f59a
affc7a7
994de89
ad492f8
1ceca57
805df55
d531412
f99d76a
Ruff check failures:
vllm/attention/backends/hpu_attn.py: E501 (line too long) on lines 556, 577, 578, 579, 580, 820, 822, and 823
vllm/model_executor/models/gemma3_mm.py: B011 (do not call `assert False`) on line 614
vllm/model_executor/models/utils.py: E501 (line too long) on line 397
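For reference, both flagged rules are straightforward to address: E501 means a line exceeds the configured length limit, and B011 flags `assert False`, which `python -O` strips. A hypothetical sketch of both fixes follows; the function and messages are illustrative stand-ins, not the actual statements in this PR's diff.

```python
# Hypothetical examples of fixing the two flagged Ruff rules; these are
# stand-ins, not the real lines from hpu_attn.py or gemma3_mm.py.


def select_attention_branch(use_sliding_window: bool) -> str:
    if use_sliding_window:
        # E501: wrap long expressions across lines instead of exceeding
        # the project's line-length limit.
        return ("local sliding-window attention "
                "with an interleaved layer pattern")
    # B011: `assert False, msg` disappears under `python -O`, so the guard
    # would silently vanish. Raise explicitly instead.
    raise AssertionError("unreachable attention branch")
```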