Commit d5aefd7

only compress modules with weight quantization (#387)

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>

1 parent: 40ec65b

File tree: 1 file changed (+6, -2 lines)

src/compressed_tensors/compressors/model_compressors/model_compressor.py

Lines changed: 6 additions & 2 deletions
@@ -747,12 +747,16 @@ def _replace_weights(self, dense_weight_generator, model: Module):
 
 
 def map_module_to_scheme(model: Module) -> Dict[str, QuantizationScheme]:
     """
-    Returns a dictionary which maps quantized module names to their quantization schemes
+    Returns a dictionary which maps quantized module names to their quantization
+    schemes. Only includes modules with weight quantization
     """
     return {
         fix_fsdp_module_name(name): module.quantization_scheme
         for name, module in model.named_modules()
-        if is_module_quantized(module)
+        if (
+            hasattr(module, "quantization_scheme") and
+            module.quantization_scheme.weights is not None
+        )
     }
 
 
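The filter change above excludes modules that carry a quantization scheme with no weight config (e.g. activation-only quantization), so they are no longer picked up for compression. A minimal sketch of that filter condition, using plain-Python stand-ins rather than real torch modules or compressed-tensors classes (the module names and schemes below are illustrative assumptions, not from the commit):

```python
# Sketch of the patched map_module_to_scheme filter. SimpleNamespace objects
# stand in for torch modules and QuantizationScheme (assumption for brevity).
from types import SimpleNamespace

# Hypothetical modules: weight-quantized, activation-only, and unquantized.
modules = {
    "layers.0.q_proj": SimpleNamespace(
        quantization_scheme=SimpleNamespace(weights="int4", input_activations=None)
    ),
    "layers.0.gate_proj": SimpleNamespace(
        quantization_scheme=SimpleNamespace(weights=None, input_activations="fp8")
    ),
    "embed_tokens": SimpleNamespace(),  # no quantization_scheme attribute at all
}

# Same condition as the new code: keep only modules whose scheme has a
# non-None weights config; hasattr guards unquantized modules.
mapped = {
    name: module.quantization_scheme
    for name, module in modules.items()
    if (
        hasattr(module, "quantization_scheme")
        and module.quantization_scheme.weights is not None
    )
}

print(sorted(mapped))  # only the weight-quantized module survives the filter
```

Under the old `is_module_quantized(module)` check, a module quantizing only activations would also have been returned and then compressed, which this commit avoids.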

0 commit comments