
Commit 81f0bf2

[float8] Fix link markdown in readme (#1881)
1 parent: c376285

1 file changed: +1 −1


torchao/float8/README.md

Lines changed: 1 addition & 1 deletion
@@ -226,7 +226,7 @@ and tensorwise scaling. The training benchmarks were all run using:
 | Llama3-8b | rowwise with bfloat16 all-gather | per op SAC | 47.79 | 6768 | 10.05%

 **Important notes**:
-- E2E speedups increase as M,K,N (GEMM dimensions) increase. Speedups as high as 1.5x have been measured with larger shapes ((example)[https://pytorch.org/blog/training-using-float8-fsdp2/]).
+- E2E speedups increase as M,K,N (GEMM dimensions) increase. Speedups as high as 1.5x have been measured with larger shapes ([example](https://pytorch.org/blog/training-using-float8-fsdp2/)).
 - Rowwise scaling is better at handling outliers than tensorwise scaling, so these recipes are different points on the accuracy vs performance curve.

 **Reproducing training benchmarks**
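The fix swaps the reversed `(text)[url]` form for correct Markdown inline-link syntax, `[text](url)`. A minimal sketch of the difference, using a hypothetical stdlib-only regex (a simplification of CommonMark's inline-link rule, not a full parser):

```python
import re

# Well-formed Markdown inline link: [link text](url)
LINK = re.compile(r"\[([^\]]+)\]\(([^)\s]+)\)")

def find_links(markdown: str):
    """Return (text, url) pairs for well-formed inline links."""
    return LINK.findall(markdown)

broken = "shapes ((example)[https://pytorch.org/blog/training-using-float8-fsdp2/])."
fixed = "shapes ([example](https://pytorch.org/blog/training-using-float8-fsdp2/))."

print(find_links(broken))  # [] -- reversed brackets do not form a link
print(find_links(fixed))   # [('example', 'https://pytorch.org/blog/training-using-float8-fsdp2/')]
```

With the brackets reversed, renderers emit the literal text `(example)[…]` instead of a hyperlink, which is why the one-character reordering in this commit matters.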
