Commit 2cd036e

[Bugfix] fix accuracy problem for quantized deepseek models (#768)
### What this PR does / why we need it?
The root cause of the bug is that numerical computations involving NaNs cannot eliminate them: once a NaN enters a tensor, multiplying it by a 0/1 mask leaves the NaN in place. We address this by using `masked_fill_` to zero out the invalid entries, while avoiding the memory-wasting `torch.where` approach.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
This patch was tested with vllm v0.8.5 and vllm-ascend master. I ran the deepseek_v3 model with the offline inference scripts (examples/dp_offline/run_dp.sh and data_parallel.py).

Signed-off-by: linfeng-yuan <1102311262@qq.com>
1 parent d6e9417 · commit 2cd036e
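For background, here is a minimal PyTorch sketch (not from the PR; the tensors are illustrative) of why multiplying by a 0/1 mask cannot eliminate NaNs, while `masked_fill_` overwrites them:

```python
import torch

# NaNs propagate through arithmetic: nan * 0 == nan, so zeroing
# invalid rows by multiplying with a 0/1 mask leaves NaNs in place.
x = torch.tensor([[1.0, 2.0], [float("nan"), float("nan")]])
mask = torch.tensor([[True], [False]])  # second row is invalid

multiplied = x * mask             # NaN row survives the multiply
filled = x.masked_fill(~mask, 0)  # NaN row is overwritten with 0

print(multiplied)  # tensor([[1., 2.], [nan, nan]])
print(filled)      # tensor([[1., 2.], [0., 0.]])
```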

File tree

1 file changed: +2 −1


vllm_ascend/quantization/w8a8_dynamic.py

Lines changed: 2 additions & 1 deletion
```diff
@@ -285,7 +285,8 @@ def fused_experts(hidden_states: torch.Tensor,
         valid_token_mask = torch.arange(
             0, sorted_token_indices.shape[0],
             device=device).unsqueeze(1) < num_valid_tokens
-        down_out_list.mul_(valid_token_mask)
+        down_out_list = down_out_list.masked_fill_(~valid_token_mask,
+                                                   0).to(dtype)
         final_hidden_states.index_add_(0, sorted_token_indices, down_out_list)
     else:
         # TODO: Reorder device memory 2 times here, replace the current
```
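As a usage note, the same masking could be written with `torch.where`, but that allocates new tensors, whereas the PR's `masked_fill_` works in place. A self-contained sketch of the trade-off (tensor contents are illustrative stand-ins for the variables in the diff):

```python
import torch

# Stand-ins for the tensors in the diff (shapes are illustrative).
down_out_list = torch.full((4, 3), float("nan"))
down_out_list[:2] = 1.0
valid_token_mask = torch.tensor([[True], [True], [False], [False]])

# Allocating alternative: torch.where materializes a zeros tensor and
# a new output tensor for this buffer.
zeroed = torch.where(valid_token_mask, down_out_list,
                     torch.zeros_like(down_out_list))

# In-place approach from the PR: overwrite invalid rows directly,
# without allocating a second activation-sized buffer.
down_out_list.masked_fill_(~valid_token_mask, 0)

assert torch.equal(zeroed, down_out_list)
```

Avoiding the extra activation-sized buffer is the memory saving the commit message refers to.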
