
Commit 9bba089

fix bug

Signed-off-by: xin3he <xin3.he@intel.com>
1 parent: c8f7ac7

File tree: 1 file changed, 1 addition, 1 deletion

neural_compressor/torch/quantization/algorithm_entry.py

@@ -120,7 +120,7 @@ def gptq_entry(
         kwargs.pop("example_inputs")
     logger.warning("lm_head in transformer model is skipped by GPTQ")

-    if CurrentQuantizer.quantizer is None or mode == [Mode.PREPARE, Mode.QUANTIZE]:
+    if CurrentQuantizer.quantizer is None or mode in [Mode.PREPARE, Mode.QUANTIZE]:
         CurrentQuantizer.quantizer = INCGPTQQuantizer(quant_config=weight_config)
     model = CurrentQuantizer.quantizer.execute(model, mode=mode, *args, **kwargs)
     return model
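The one-character change matters because an enum member is never *equal* to a list, so the original `==` check was always False and the quantizer branch could not be reached via the mode check. A minimal sketch of the difference, using a stand-in `Mode` enum (the real `Mode` lives in neural_compressor and may have other members):

```python
from enum import Enum


class Mode(Enum):
    # Stand-in for neural_compressor's Mode; member values are assumed here.
    PREPARE = "prepare"
    QUANTIZE = "quantize"
    CONVERT = "convert"


mode = Mode.PREPARE

# Buggy form: compares the enum member to the whole list -> always False.
buggy = mode == [Mode.PREPARE, Mode.QUANTIZE]

# Fixed form: membership test -> True for PREPARE and QUANTIZE.
fixed = mode in [Mode.PREPARE, Mode.QUANTIZE]

print(buggy)  # False
print(fixed)  # True
```

With `==`, the condition collapsed to `CurrentQuantizer.quantizer is None` alone; `in` restores the intended "prepare or quantize" gate.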
