
Commit d634daa

more linter fixes
Signed-off-by: Bill Nell <bnell@redhat.com>
1 parent 47d5eb8 commit d634daa

File tree

  • vllm/model_executor/layers/fused_moe

1 file changed: +2 -1 lines changed

vllm/model_executor/layers/fused_moe/layer.py

Lines changed: 2 additions & 1 deletion
@@ -156,7 +156,8 @@ def init_prepare_finalize(self, moe: FusedMoEConfig,
         # Note : We may want to use FP8 dispatch even otherwise just to
         # reduce datamovement
-        assert moe.quant_config.block_shape is not None
+        assert (moe.quant_config is not None
+                and moe.quant_config.block_shape is not None)
         use_fp8_dispatch = (
             moe.quant_config.quant_dtype == current_platform.fp8_dtype()
             and moe.quant_config.block_shape[1] == DEEPEP_QUANT_BLOCK_SIZE)
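
The change widens the assertion so that the Optional quant_config is itself checked before its block_shape is read; asserting only on the attribute leaves the enclosing object possibly None as far as a linter or type checker is concerned. A minimal, self-contained sketch of that narrowing pattern (illustrative only, not vLLM code: QuantConfig, use_fp8_dispatch, and the block size 128 are assumed names and values):

    # Sketch of the Optional-narrowing the commit relies on. Both the config
    # object and its block_shape field must be asserted non-None before the
    # attribute accesses below, or a type checker flags them.
    from dataclasses import dataclass
    from typing import List, Optional


    @dataclass
    class QuantConfig:
        quant_dtype: str
        block_shape: Optional[List[int]] = None


    def use_fp8_dispatch(quant_config: Optional[QuantConfig],
                         fp8_dtype: str = "fp8",
                         quant_block_size: int = 128) -> bool:
        # Asserting only `quant_config.block_shape is not None` would still
        # leave `quant_config` possibly None; combining both checks narrows
        # both types for the lines that follow.
        assert (quant_config is not None
                and quant_config.block_shape is not None)
        return (quant_config.quant_dtype == fp8_dtype
                and quant_config.block_shape[1] == quant_block_size)


    if __name__ == "__main__":
        cfg = QuantConfig(quant_dtype="fp8", block_shape=[128, 128])
        print(use_fp8_dispatch(cfg))  # True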
