Commit 82d4d88

try again with the linter
Signed-off-by: Bill Nell <bnell@redhat.com>
1 parent: ff5fe55

File tree

1 file changed: +4 −1 lines

  • vllm/model_executor/layers/fused_moe

vllm/model_executor/layers/fused_moe/config.py

Lines changed: 4 additions & 1 deletion

@@ -341,7 +341,10 @@ def make(
 
         if quant_config is not None and isinstance(quant_config,
                                                    QuantizationConfig):
-            block_shape = quant_config.get("weight_block_size", None)
+            if hasattr(quant_config, 'weight_block_size'):
+                block_shape = quant_config.weight_block_size
+            else:
+                block_shape = None
         per_act_token_quant = False
         per_out_ch_quant = False
         quant_dtype: Optional[torch.dtype] = None
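The fix replaces dict-style `.get()` access with `hasattr()` plus attribute access. A minimal sketch of why this matters, using a stand-in class (the real `QuantizationConfig` lives in vLLM and may differ; the constructor and field here are illustrative assumptions):

```python
class QuantizationConfig:
    """Stand-in for vLLM's QuantizationConfig (assumption: a plain object,
    not a dict, so it has no .get() method)."""

    def __init__(self, weight_block_size=None):
        # Only set the attribute when a block size is provided, so that
        # hasattr() can distinguish configs without one.
        if weight_block_size is not None:
            self.weight_block_size = weight_block_size


def get_block_shape(quant_config):
    # Old code called quant_config.get("weight_block_size", None), which
    # raises AttributeError because the config object is not a dict.
    # The patched logic probes the attribute instead:
    if hasattr(quant_config, 'weight_block_size'):
        return quant_config.weight_block_size
    return None


print(get_block_shape(QuantizationConfig([128, 128])))  # [128, 128]
print(get_block_shape(QuantizationConfig()))            # None
```

The same effect could be had with `getattr(quant_config, 'weight_block_size', None)` in one line; the explicit `if`/`else` in the commit reads closer to the original branch structure.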
