
Commit ce5a8eb (1 parent: 00c6bbb)

Commit message:
Update
[ghstack-poisoned]

File tree: 1 file changed (+1, −2 lines)


torchao/quantization/quantize_/workflows/float8/float8_tensor.py

Lines changed: 1 addition & 2 deletions
@@ -360,9 +360,8 @@ def _(func, types, args, kwargs):
     # TODO(future PR): add testing for torch._scaled_mm with
     # blockwise scaling on CUDA 12.9
     # TODO(future PR): add fbgemm_gpu_genai path if available
-    # TODO(before land): proper out_dtype handling
+    # TODO(future PR): proper out_dtype handling
     assert _is_1_128_scaled(input_tensor), "unsupported"
-    # breakpoint()
     res = blockwise_fp8_gemm(
         inpt_data,
         input_scale,
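The `_is_1_128_scaled` assert in the hunk suggests the kernel requires one scale per 1×128 block of the input. As a minimal sketch of what 1×128 blockwise scaling means (all names here are illustrative assumptions, not torchao's actual implementation): each row is split into 128-element blocks, and each block gets its own scale derived from the block's absmax and the float8 max representable value.

```python
# Hypothetical sketch of 1x128 blockwise scale computation.
# FP8_E4M3_MAX and compute_1x128_scales are illustrative names,
# not torchao's real API.

FP8_E4M3_MAX = 448.0  # max finite value of float8_e4m3fn
BLOCK = 128

def compute_1x128_scales(row):
    # One scale per contiguous 128-element block along the row.
    assert len(row) % BLOCK == 0, "row length must be a multiple of 128"
    scales = []
    for start in range(0, len(row), BLOCK):
        block = row[start:start + BLOCK]
        absmax = max(abs(x) for x in block)
        # Scale maps the block's absmax onto the fp8 dynamic range.
        scales.append(absmax / FP8_E4M3_MAX if absmax > 0 else 1.0)
    return scales

row = [0.5] * 128 + [2.0] * 128   # two blocks with different magnitudes
scales = compute_1x128_scales(row)
print(len(scales))            # 2 scales: one per 1x128 block
print(scales[1] > scales[0])  # True: larger-magnitude block, larger scale
```

A blockwise gemm like `blockwise_fp8_gemm` would then consume the quantized data together with these per-block scales, dequantizing each block's partial products independently.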

Comments: 0