Commit c799845

more linter fixes
Signed-off-by: Bill Nell <bnell@redhat.com>
1 parent ba884d6

File tree

1 file changed: +2 -1 lines changed

  • vllm/model_executor/layers/fused_moe/layer.py


vllm/model_executor/layers/fused_moe/layer.py

Lines changed: 2 additions & 1 deletion
@@ -159,7 +159,8 @@ def init_prepare_finalize(self, moe: FusedMoEConfig,
 
         # Note : We may want to use FP8 dispatch even otherwise just to
         # reduce datamovement
-        assert moe.quant_config.block_shape is not None
+        assert (moe.quant_config is not None
+                and moe.quant_config.block_shape is not None)
         use_fp8_dispatch = (
             moe.quant_config.quant_dtype == current_platform.fp8_dtype()
             and moe.quant_config.block_shape[1] == DEEPEP_QUANT_BLOCK_SIZE)
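The fix is small but illustrative of the "linter fixes" named in the commit message: the removed assert dereferenced moe.quant_config.block_shape while moe.quant_config itself may be None, which is exactly the kind of Optional access a static type checker flags. Below is a minimal, self-contained sketch of the pattern, using hypothetical stand-in classes (QuantConfig, MoEConfig) rather than vLLM's real FusedMoEConfig; it shows how a single assert that checks the outer Optional before the inner attribute narrows both types for a checker such as mypy, so later reads like the use_fp8_dispatch expression type-check cleanly.

# Minimal sketch of the narrowing pattern behind this commit. QuantConfig
# and MoEConfig are hypothetical stand-ins, not vLLM's actual classes.
from dataclasses import dataclass
from typing import Optional


@dataclass
class QuantConfig:
    block_shape: Optional[list[int]] = None


@dataclass
class MoEConfig:
    quant_config: Optional[QuantConfig] = None


def block_cols(moe: MoEConfig) -> int:
    # Asserting only `moe.quant_config.block_shape is not None` would itself
    # dereference a possibly-None `quant_config`. Checking both conditions in
    # one assert narrows both Optionals for the type checker and fails fast
    # at runtime, with a clear location, if the config is missing.
    assert (moe.quant_config is not None
            and moe.quant_config.block_shape is not None)
    return moe.quant_config.block_shape[1]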
