
Commit e4eb3fb

Add torch.float64 as a viable dtype (#379)
1 parent 8f67b97 commit e4eb3fb

File tree

1 file changed: 1 addition, 1 deletion

src/compressed_tensors/quantization/lifecycle/initialize.py

Lines changed: 1 addition & 1 deletion
@@ -189,7 +189,7 @@ def _initialize_scale_zero_point(
     else:
         # TODO: consider erroring out in the future as if the dtype if not one of these,
         # there is likely bug
-        if scale_dtype not in [torch.float16, torch.bfloat16, torch.float32]:
+        if scale_dtype not in [torch.float16, torch.bfloat16, torch.float32, torch.float64]:
             scale_dtype = torch.float16
     zp_dtype = quantization_args.pytorch_dtype()
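
A minimal sketch of the behavior this commit changes, assuming the fallback logic shown in the hunk above (the standalone helper resolve_scale_dtype below is a hypothetical illustration, not part of the library's API): float64 scales are now accepted as-is instead of being silently downcast to float16.

    import torch

    # Hypothetical stand-in for the fallback check inside
    # _initialize_scale_zero_point: unrecognized scale dtypes fall back to float16.
    def resolve_scale_dtype(scale_dtype: torch.dtype) -> torch.dtype:
        # After this commit, torch.float64 is part of the allowed list,
        # so double-precision scales are no longer downcast.
        if scale_dtype not in [torch.float16, torch.bfloat16, torch.float32, torch.float64]:
            scale_dtype = torch.float16
        return scale_dtype

    print(resolve_scale_dtype(torch.float64))  # torch.float64 (was torch.float16 before this change)
    print(resolve_scale_dtype(torch.int8))     # torch.float16 fallback still applies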

0 commit comments