Commit 40d7196
Committed on 2024-12-31
1 parent de27135

2 files changed (+21, -2 lines)

papers/list.json

Lines changed: 9 additions & 0 deletions
@@ -1,4 +1,13 @@
 [
+  {
+    "title": "Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention",
+    "author": "Angelos Katharopoulos et al",
+    "year": "2020",
+    "topic": "attention, transformer",
+    "venue": "ICML",
+    "description": "This paper rephrases transformers as RNNs (title). They express the self-attention mechanism as a linear dot-product of kernel feature maps to make the complexity go from O(N^2) to O(N). Personal note: this is the 200th paper recorded on here, and the last of 2024! Summer of 2024 was when I began studying machine learning. Let's keep it up!",
+    "link": "https://arxiv.org/pdf/2006.16236"
+  },
   {
     "title": "Prefix-Tuning: Optimizing Continuous Prompts for Generation",
     "author": "Xiang Lisa Li et al",

papers_read.html

Lines changed: 12 additions & 2 deletions
@@ -16,10 +16,10 @@ <h1>Here's where I keep a list of papers I have read.</h1>
   I typically use this to organize papers I found interesting. Please feel free to do whatever you want with it. Note that this is not every single paper I have ever read, just a collection of ones that I remember to put down.
   </p>
   <p id="paperCount">
-  So far, we have read 199 papers. Let's keep it up!
+  So far, we have read 200 papers. Let's keep it up!
   </p>
   <small id="searchCount">
-  Your search returned 199 papers. Nice!
+  Your search returned 200 papers. Nice!
   </small>

   <div class="search-inputs">
@@ -46,6 +46,16 @@ <h1>Here's where I keep a list of papers I have read.</h1>
   </thead>
   <tbody>

+  <tr>
+    <td>Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention</td>
+    <td>Angelos Katharopoulos et al</td>
+    <td>2020</td>
+    <td>attention, transformer</td>
+    <td>ICML</td>
+    <td>This paper rephrases transformers as RNNs (title). They express the self-attention mechanism as a linear dot-product of kernel feature maps to make the complexity go from O(N^2) to O(N). Personal note: this is the 200th paper recorded on here, and the last of 2024! Summer of 2024 was when I began studying machine learning. Let&#x27;s keep it up!</td>
+    <td><a href="https://arxiv.org/pdf/2006.16236" target="_blank">Link</a></td>
+  </tr>
+
   <tr>
     <td>Prefix-Tuning: Optimizing Continuous Prompts for Generation</td>
     <td>Xiang Lisa Li et al</td>
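
The new entry's description summarizes the paper's core trick: replacing softmax attention with a dot product of kernel feature maps so attention can be computed in O(N) rather than O(N^2). As a rough illustration only (not code from this repo), here is a minimal NumPy sketch of the non-causal linearized attention, assuming the elu(x) + 1 feature map used in the paper; all names and shapes are illustrative.

    # Minimal sketch (illustrative, not part of this repo): non-causal linear attention
    # with the elu(x) + 1 feature map described in the paper.
    import numpy as np

    def feature_map(x):
        # phi(x) = elu(x) + 1, which keeps the features positive
        return np.where(x > 0, x + 1.0, np.exp(x))

    def linear_attention(Q, K, V):
        # Softmax attention costs O(N^2): softmax(Q K^T) V.
        # Linearized attention regroups the product as phi(Q) (phi(K)^T V),
        # which is O(N) in the sequence length N.
        Qp = feature_map(Q)                  # (N, d)
        Kp = feature_map(K)                  # (N, d)
        KV = Kp.T @ V                        # (d, d_v), summed over the sequence once
        Z = Qp @ Kp.sum(axis=0)              # (N,) normalizer
        return (Qp @ KV) / Z[:, None]        # (N, d_v)

    # Tiny usage example with random data
    rng = np.random.default_rng(0)
    Q, K, V = rng.normal(size=(3, 8, 4))     # N = 8 tokens, d = d_v = 4
    print(linear_attention(Q, K, V).shape)   # (8, 4)

For the causal (autoregressive) case, the paper keeps running sums of phi(K)^T V and phi(K) across positions, which is what lets the transformer be read as an RNN at inference time.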
