
Commit e8c6c8f — "fix typo" (1 parent: b11b96a)

File tree: 1 file changed (+1, -1 lines)
  • src/compressed_tensors/quantization/utils
src/compressed_tensors/quantization/utils/helpers.py

Lines changed: 1 addition & 1 deletion
@@ -92,7 +92,7 @@ def calculate_qparams(
         scales = scales.to(FP8_E4M3_DATA.dtype)
     else:
         # Divide over bit range over max value?
-        scales = max_val_pos / (float(bit_radnge) / 2)
+        scales = max_val_pos / (float(bit_range) / 2)

     # TODO: clamp not implemented for FP8 '
     # scales = torch.clamp(scales, min=torch.finfo(torch.float32).eps)
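The corrected line divides the largest observed positive value by half the bit range to produce a symmetric quantization scale. A minimal sketch of that arithmetic, with the caveat that `calculate_scales` is a hypothetical standalone helper (the real `calculate_qparams` lives in `helpers.py` and takes tensors), and the assumption that `bit_range` is `2 ** num_bits`:

```python
def calculate_scales(max_val_pos: float, num_bits: int) -> float:
    """Hypothetical helper mirroring the fixed line in the diff."""
    # Assumption: bit_range = 2 ** num_bits (e.g. 256 for 8-bit);
    # the library may derive it differently for asymmetric schemes.
    bit_range = 2 ** num_bits
    # The corrected expression: divide the max value by half the bit range
    return max_val_pos / (float(bit_range) / 2)

# Example: a tensor whose largest magnitude is 4.0, quantized to 8 bits,
# gets scale 4.0 / 128 = 0.03125
scale = calculate_scales(4.0, num_bits=8)
```

Before the fix, the misspelled `bit_radnge` would raise a `NameError` at runtime whenever this `else` branch was taken, so the change is a correctness fix rather than a cosmetic one.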
