papers/list.json: 9 additions, 0 deletions
@@ -1,4 +1,13 @@
 [
+  {
+    "title": "Rho-1: Not All Tokens Are What You Need",
+    "author": "Zhenghao Lin et al",
+    "year": "2024",
+    "topic": "tokens, reference model",
+    "venue": "NeurIPS",
+    "description": "This paper scores tokens using a reference model and then trains a language model to focus on the tokens with higher scores. They find that they can improve performance while training on fewer tokens.",
+    "link": "https://arxiv.org/pdf/2404.07965"
+  },
   {
     "title": "QuaRot: Outlier-Free 4-Bit Inference in Rotated LLMs",
papers_read.html: 12 additions, 2 deletions
@@ -75,10 +75,10 @@ <h1>Here's where I keep a list of papers I have read.</h1>
       I typically use this to organize papers I found interesting. Please feel free to do whatever you want with it. Note that this is not every single paper I have ever read, just a collection of ones that I remember to put down.
     </p>
     <p id="paperCount">
-      So far, we have read 189 papers. Let's keep it up!
+      So far, we have read 190 papers. Let's keep it up!
     </p>
     <small id="searchCount">
-      Your search returned 189 papers. Nice!
+      Your search returned 190 papers. Nice!
     </small>

     <div class="search-inputs">
@@ -105,6 +105,16 @@ <h1>Here's where I keep a list of papers I have read.</h1>
       </thead>
       <tbody>

+      <tr>
+        <td>Rho-1: Not All Tokens Are What You Need</td>
+        <td>Zhenghao Lin et al</td>
+        <td>2024</td>
+        <td>tokens, reference model</td>
+        <td>NeurIPS</td>
+        <td>This paper scores tokens using a reference model and then trains a language model to focus on the tokens with higher scores. They find that they can improve performance while training on fewer tokens.</td>