papers/list.json (10 additions, 1 deletion)

@@ -1,11 +1,20 @@
 [
+  {
+    "title": "Prefix-Tuning: Optimizing Continuous Prompts for Generation",
+    "author": "Xiang Lisa Li et al",
+    "year": "2021",
+    "topic": "prefix-tuning, prompting, llm",
+    "venue": "Arxiv",
+    "description": "This paper proposes prefix-tuning, which keeps language model params frozen but optimizes a continuous task-specific vector (prefix).",
+    "link": "https://arxiv.org/pdf/2101.00190"
+  },
   {
     "title": "The Power of Scale for Parameter-Efficient Prompt Tuning",
     "author": "Brian Lester et al",
     "year": "2021",
     "topic": "prompting, llm",
     "venue": "Arxiv",
-    "description": "This paper explores adding soft prompts to condition forzen language models. Basically, soft prompts are learned through back-propagation and can be used to finetune language models without fully retraining. They also introduce the idea of \"prompt ensembling\" which is basically using multiple soft prompts on a model and ensembling their outputs.",
+    "description": "This paper explores adding soft prompts to condition frozen language models. Basically, soft prompts are learned through back-propagation and can be used to finetune language models without fully retraining. They also introduce the idea of \"prompt ensembling\" which is basically using multiple soft prompts on a model and ensembling their outputs.",
papers_read.html (13 additions, 3 deletions)

@@ -16,10 +16,10 @@ <h1>Here's where I keep a list of papers I have read.</h1>
 I typically use this to organize papers I found interesting. Please feel free to do whatever you want with it. Note that this is not every single paper I have ever read, just a collection of ones that I remember to put down.
 </p>
 <p id="paperCount">
-So far, we have read 198 papers. Let's keep it up!
+So far, we have read 199 papers. Let's keep it up!
 </p>
 <small id="searchCount">
-Your search returned 198 papers. Nice!
+Your search returned 199 papers. Nice!
 </small>
 
 <div class="search-inputs">
@@ -46,13 +46,23 @@ <h1>Here's where I keep a list of papers I have read.</h1>
 </thead>
 <tbody>
 
+<tr>
+<td>Prefix-Tuning: Optimizing Continuous Prompts for Generation</td>
+<td>Xiang Lisa Li et al</td>
+<td>2021</td>
+<td>prefix-tuning, prompting, llm</td>
+<td>Arxiv</td>
+<td>This paper proposes prefix-tuning, which keeps language model params frozen but optimizes a continuous task-specific vector (prefix).</td>
+</tr>
 <tr>
 <td>The Power of Scale for Parameter-Efficient Prompt Tuning</td>
 <td>Brian Lester et al</td>
 <td>2021</td>
 <td>prompting, llm</td>
 <td>Arxiv</td>
-<td>This paper explores adding soft prompts to condition forzen language models. Basically, soft prompts are learned through back-propagation and can be used to finetune language models without fully retraining. They also introduce the idea of "prompt ensembling" which is basically using multiple soft prompts on a model and ensembling their outputs.</td>
+<td>This paper explores adding soft prompts to condition frozen language models. Basically, soft prompts are learned through back-propagation and can be used to finetune language models without fully retraining. They also introduce the idea of "prompt ensembling" which is basically using multiple soft prompts on a model and ensembling their outputs.</td>