
Commit 15592c0

[bugfix] fix accuracy problem for deepseek V3/R1 models with torchair graph in long sequence predictions (#1331)
### What this PR does / why we need it?
Fixes the insufficient cached cosine/sine length in MLA's TorchAir graph mode, which caused accuracy deviation during long-sequence inference.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
We tested the accuracy of this patch with DeepSeek R1 e2e benchmark serving, and got a score of 83.33 on the AIME2024 dataset with a DP4TP4EP16 setting.

Signed-off-by: linfeng-yuan <1102311262@qq.com>
Parent: f04c676

File tree

1 file changed: +2 −2 lines


vllm_ascend/attention/mla_v1.py

Lines changed: 2 additions & 2 deletions
```diff
@@ -1077,7 +1077,7 @@ def forward(
         decode_k_nope = None
         assert attn_metadata.decode is not None
         if self.running_in_graph:
-            seq_len = self.rotary_emb.max_position_embeddings
+            seq_len = self.rotary_emb.max_position_embeddings * self.rotary_emb.scaling_factor
             cos = self.rotary_emb.cos_cached[:seq_len].to(
                 dtype=decode_hs_or_q_c.dtype)
             sin = self.rotary_emb.sin_cached[:seq_len].to(
@@ -1122,7 +1122,7 @@ def forward(
         prefill_q_nope = prefill_q[..., :self.qk_nope_head_dim]
         if self.torchair_graph_enabled:
             num_tokens = prefill_hs_or_q_c.shape[0]
-            seq_len = self.rotary_emb.max_position_embeddings
+            seq_len = self.rotary_emb.max_position_embeddings * self.rotary_emb.scaling_factor
             cos = self.rotary_emb.cos_cached[:seq_len].to(
                 dtype=prefill_q_pe.dtype)
             sin = self.rotary_emb.sin_cached[:seq_len].to(
```
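For background, the bug pattern is easiest to see in isolation. Below is a minimal, self-contained sketch, not vLLM's actual `RotaryEmbedding`: the `ScaledRotaryEmbedding` class, its linear-scaling math, and the parameter values are all illustrative assumptions. It shows why a rotary embedding with position scaling caches cos/sin for `max_position_embeddings * scaling_factor` rows, so slicing with the unscaled length truncates the tables that long sequences need:

```python
import torch

# Illustrative sketch only (not vLLM's RotaryEmbedding): assumes linear
# RoPE scaling, where the cos/sin caches are built for
# max_position_embeddings * scaling_factor positions.
class ScaledRotaryEmbedding:

    def __init__(self, head_dim, max_position_embeddings, scaling_factor,
                 base=10000.0):
        self.max_position_embeddings = max_position_embeddings
        self.scaling_factor = scaling_factor
        max_len = int(max_position_embeddings * scaling_factor)
        inv_freq = 1.0 / (base**(
            torch.arange(0, head_dim, 2, dtype=torch.float32) / head_dim))
        # Linear scaling divides positions by scaling_factor, but the
        # cache still needs one row per physical position.
        t = torch.arange(max_len, dtype=torch.float32) / scaling_factor
        freqs = torch.outer(t, inv_freq)
        self.cos_cached = freqs.cos()
        self.sin_cached = freqs.sin()


rope = ScaledRotaryEmbedding(head_dim=64,
                             max_position_embeddings=4096,
                             scaling_factor=4.0)

# Before the fix: slicing with the unscaled length keeps only the first
# 4096 rows, so positions >= 4096 fall outside the truncated table.
truncated = rope.cos_cached[:rope.max_position_embeddings]
# After the fix: the slice covers the full scaled context window.
seq_len = int(rope.max_position_embeddings * rope.scaling_factor)
full = rope.cos_cached[:seq_len]
print(truncated.shape[0], full.shape[0])  # 4096 16384
```

Under these assumptions, the pre-fix slice silently drops the last three quarters of the cache, which is consistent with an accuracy deviation that appears only on long-sequence inputs.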
