Commit be5a9bb

Update
[ghstack-poisoned]
2 parents d40ec7c + ce5a8eb commit be5a9bb

File tree: 1 file changed, +1 −1 lines changed


torchao/quantization/quantize_/workflows/float8/float8_tensor.py

Lines changed: 1 addition & 1 deletion
@@ -360,7 +360,7 @@ def _(func, types, args, kwargs):
     # TODO(future PR): add testing for torch._scaled_mm with
     # blockwise scaling on CUDA 12.9
     # TODO(future PR): add fbgemm_gpu_genai path if available
-    # TODO(before land): proper out_dtype handling
+    # TODO(future PR): proper out_dtype handling
     assert _is_1_128_scaled(input_tensor), "unsupported"
     res = blockwise_fp8_gemm(
         inpt_data,
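For context, the "proper out_dtype handling" the new TODO defers would typically mean reading the caller-requested output dtype and casting the GEMM result before returning, rather than always returning the accumulator's dtype. The sketch below is a framework-free illustration of that pattern, not the torchao implementation: `blockwise_fp8_gemm`, `cast_to`, `scaled_mm_override`, and the string dtype names are all hypothetical stand-ins.

```python
# Hypothetical sketch of out_dtype handling in a matmul override.
# All names and the string-based "dtypes" are illustrative stand-ins;
# real code would accumulate in high precision and call res.to(out_dtype).

def blockwise_fp8_gemm(a, b):
    # Stand-in for the real fp8 GEMM: plain matmul on nested lists,
    # accumulating in Python floats ("high precision").
    n = len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(n)]
            for i in range(len(a))]

def cast_to(res, out_dtype):
    # Stand-in cast: maps the accumulator result to the requested dtype.
    if out_dtype == "int32":
        return [[int(x) for x in row] for row in res]
    return res  # default: leave as float ("float32")

def scaled_mm_override(a, b, *, out_dtype="float32"):
    res = blockwise_fp8_gemm(a, b)
    # "Proper out_dtype handling": honor the requested output dtype
    # instead of silently returning the accumulator dtype.
    return cast_to(res, out_dtype)
```

The real fix would live in the `torch._scaled_mm` override shown in the diff above, where `out_dtype` arrives via the op's kwargs.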

0 commit comments
