
Commit 75d05ee

[Core] Fix block table shape to make Prefix cache work with Ascend scheduler (#1446)
### What this PR does / why we need it?
This fixes the shape of `block_table`, which was changed by the hybrid KV groups work several weeks ago. An error is raised when prefix caching (eager mode or not) and the Ascend Scheduler are enabled at the same time; sending two identical requests reproduces it.

v0.9.1: #1297

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Tested manually.

Signed-off-by: Yizhou Liu <liu_yizhou@outlook.com>
1 parent b308a7a commit 75d05ee

File tree

1 file changed: +3 −1 lines changed


vllm_ascend/attention/attention_v1.py

Lines changed: 3 additions & 1 deletion
```diff
@@ -355,11 +355,13 @@ def forward(
         assert attn_metadata is not None
         assert attn_metadata.attn_mask is not None
         compress_mask = attn_metadata.attn_mask
+        batch_size = attn_metadata.query_lens.shape[0]
+        block_table = attn_metadata.block_tables[:batch_size, :]
         torch_npu._npu_flash_attention_qlens(
             query=query,
             key_cache=self.key_cache,
             value_cache=self.value_cache,
-            block_table=attn_metadata.block_tables,
+            block_table=block_table,
             mask=compress_mask,
             seq_len=attn_metadata.query_lens,
             context_lens=attn_metadata.seq_lens,
```
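The idea behind the fix can be sketched in plain Python (the shapes and values below are hypothetical, not taken from vllm-ascend): the scheduler's block table may carry rows for more sequence slots than there are requests in the current batch, so it must be sliced to the batch size before being handed to the attention kernel, mirroring `attn_metadata.block_tables[:batch_size, :]` in the diff above.

```python
# Hypothetical block table with 4 pre-allocated sequence slots,
# while only 2 requests are actually in the current batch.
block_tables = [
    [0, 1, 2],   # request 0: its KV-cache block IDs
    [3, 4, 5],   # request 1: its KV-cache block IDs
    [0, 0, 0],   # unused slot
    [0, 0, 0],   # unused slot
]
query_lens = [16, 8]           # one entry per request in the batch
batch_size = len(query_lens)   # mirrors attn_metadata.query_lens.shape[0]

# The fix: keep only the rows belonging to the active batch, so the
# kernel sees a block table whose first dimension matches batch_size.
block_table = block_tables[:batch_size]

assert len(block_table) == batch_size
```

Without the slice, the kernel would receive a table whose leading dimension disagrees with `query_lens`, which is the shape mismatch this commit works around.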
