
Commit d7d8836

Commit message: Updated on 2024-08-18

Parent: 991fd3e

2 files changed: +2 −2 lines changed


index.html

Lines changed: 1 addition & 1 deletion

@@ -39,7 +39,7 @@ <h3>
   When?
 </h3>
 <p>
-  Last time this was edited was 2024-08-13 (YYYY/MM/DD).
+  Last time this was edited was 2024-08-18 (YYYY/MM/DD).
 </p>
 <small><a href="misc.html">misc</a></small>
 </body>

papers/list.json

Lines changed: 1 addition & 1 deletion

@@ -5,7 +5,7 @@
   "year": "2018",
   "topic": "quantization, floating-point, precision",
   "venue": "NeurIps",
-  "description": "The authors show that it is possible to train DNNs with 8-bit fp values while maintaining decent accuracy. To do this, they make a new FP8 format, develope a technique \"chunk-based computations\" that allow matrix and convolution ops to be computed using 8-bit multiplications and 16 bit additions, and use fp stochastic rounding in weight updates.",
+  "description": "The authors show that it is possible to train DNNs with 8-bit fp values while maintaining decent accuracy. To do this, they make a new FP8 format, develope a technique \"chunk-based computations\" that allow matrix and convolution ops to be computed using 8-bit multiplications and 16 bit additions, and use fp stochastic rounding in weight updates. One interesting point they make is that swamping (the issue of truncation in large-to-small number addition) is a serious problem in DNN bit-precision reduction.",
   "link": "https://arxiv.org/pdf/1812.08011"
 },
 {
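The added description mentions chunk-based computation and swamping (small addends getting truncated when added to a much larger running sum). As a minimal illustrative sketch, assuming NumPy is available and unrelated to this repository or the paper's actual implementation, the snippet below shows swamping in a naive float16 sum and how chunked accumulation with a higher-precision combine mitigates it:

import numpy as np

# Illustrative sketch only (not from this commit or the paper's code): why
# "swamping" -- truncation when adding small numbers to a large running sum --
# hurts low-precision accumulation, and how chunk-based accumulation helps.
# Exact answer: 4096 * 1e-4 = 0.4096.
vals = np.full(4096, 1e-4, dtype=np.float16)

# Naive float16 accumulation: once the running sum is large enough, each tiny
# addend falls below half the float16 spacing and is rounded away.
naive = np.float16(0.0)
for v in vals:
    naive = np.float16(naive + v)

# Chunk-based accumulation: sum short chunks in float16 (addends stay close in
# magnitude to the partial sum), then combine the chunk sums in float32.
CHUNK = 64
partials = [vals[i:i + CHUNK].sum(dtype=np.float16)
            for i in range(0, len(vals), CHUNK)]
chunked = np.float32(sum(np.float32(p) for p in partials))

print("naive float16 sum:   ", naive)    # well below 0.4096 due to swamping
print("chunked accumulation:", chunked)  # much closer to 0.4096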
