
Commit b920057
committed 2024-12-31
1 parent 1e448e0

3 files changed: +24 -5 lines


index.html
Lines changed: 1 addition & 1 deletion

@@ -35,7 +35,7 @@ <h1>Where?</h1>
   <!-- Last Update Section -->
   <section>
     <h1>When?</h1>
-    Last time this was edited was 2024-12-30 (YYYY/MM/DD).
+    Last time this was edited was 2024-12-31 (YYYY/MM/DD).
   </section>

   <!-- Footer -->

papers/list.json
Lines changed: 10 additions & 1 deletion

@@ -1,11 +1,20 @@
 [
+  {
+    "title": "Prefix-Tuning: Optimizing Continuous Prompts for Generation",
+    "author": "Xiang Lisa Li et al",
+    "year": "2021",
+    "topic": "prefix-tuning, prompting, llm",
+    "venue": "Arxiv",
+    "description": "This paper proposes prefix-tuning, which keeps language model params frozen but optimizes a continuous task-specific vector (prefix).",
+    "link": "https://arxiv.org/pdf/2101.00190"
+  },
   {
     "title": "The Power of Scale for Parameter-Efficient Prompt Tuning",
     "author": "Brian Lester et al",
     "year": "2021",
     "topic": "prompting, llm",
     "venue": "Arxiv",
-    "description": "This paper explores adding soft prompts to condition forzen language models. Basically, soft prompts are learned through back-propagation and can be used to finetune language models without fully retraining. They also introduce the idea of \"prompt ensembling\" which is basically using multiple soft prompts on a model and ensembling their outputs.",
+    "description": "This paper explores adding soft prompts to condition frozen language models. Basically, soft prompts are learned through back-propagation and can be used to finetune language models without fully retraining. They also introduce the idea of \"prompt ensembling\" which is basically using multiple soft prompts on a model and ensembling their outputs.",
     "link": "https://arxiv.org/pdf/2104.08691"
   },
   {
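
For anyone skimming the new list entry, here is a rough, self-contained sketch of the idea its description summarizes. This is my own toy PyTorch illustration, not the paper's implementation: every base-model weight is frozen, and only small per-layer prefix vectors, prepended to each attention layer's keys and values, receive gradient updates. All class names, sizes, and the fake batch are hypothetical.

# Toy prefix-tuning sketch (illustrative only; hypothetical names and sizes).
import torch
import torch.nn as nn


class ToyAttentionBlock(nn.Module):
    """A single frozen self-attention + feed-forward block."""

    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                nn.Linear(d_model, d_model))

    def forward(self, x, prefix):
        # Prefix-tuning: prepend trainable vectors to the keys/values only;
        # the queries are still just the input positions.
        kv = torch.cat([prefix.expand(x.size(0), -1, -1), x], dim=1)
        attn_out, _ = self.attn(x, kv, kv)
        return x + self.ff(attn_out)


class PrefixTunedToyLM(nn.Module):
    def __init__(self, vocab: int = 1000, d_model: int = 64,
                 n_layers: int = 2, prefix_len: int = 8):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.layers = nn.ModuleList(
            [ToyAttentionBlock(d_model) for _ in range(n_layers)])
        self.head = nn.Linear(d_model, vocab)
        # Freeze every "pretrained" parameter first ...
        for p in self.parameters():
            p.requires_grad_(False)
        # ... then create the only trainable parameters: per-layer prefixes.
        self.prefixes = nn.ParameterList(
            [nn.Parameter(0.02 * torch.randn(1, prefix_len, d_model))
             for _ in range(n_layers)])

    def forward(self, tokens):
        h = self.embed(tokens)
        for layer, prefix in zip(self.layers, self.prefixes):
            h = layer(h, prefix)
        return self.head(h)


model = PrefixTunedToyLM()
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)

tokens = torch.randint(0, 1000, (2, 16))          # fake batch of token ids
logits = model(tokens)
loss = nn.functional.cross_entropy(logits.reshape(-1, 1000), tokens.reshape(-1))
loss.backward()                                   # gradients reach only the prefixes
optimizer.step()

The point of the sketch is the ordering: the backbone is frozen before the prefixes are created, so the optimizer only ever touches the continuous task-specific prefix vectors.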

papers_read.html
Lines changed: 13 additions & 3 deletions

@@ -16,10 +16,10 @@ <h1>Here's where I keep a list of papers I have read.</h1>
   I typically use this to organize papers I found interesting. Please feel free to do whatever you want with it. Note that this is not every single paper I have ever read, just a collection of ones that I remember to put down.
 </p>
 <p id="paperCount">
-  So far, we have read 198 papers. Let's keep it up!
+  So far, we have read 199 papers. Let's keep it up!
 </p>
 <small id="searchCount">
-  Your search returned 198 papers. Nice!
+  Your search returned 199 papers. Nice!
 </small>

 <div class="search-inputs">
@@ -46,13 +46,23 @@ <h1>Here's where I keep a list of papers I have read.</h1>
 </thead>
 <tbody>

+  <tr>
+    <td>Prefix-Tuning: Optimizing Continuous Prompts for Generation</td>
+    <td>Xiang Lisa Li et al</td>
+    <td>2021</td>
+    <td>prefix-tuning, prompting, llm</td>
+    <td>Arxiv</td>
+    <td>This paper proposes prefix-tuning, which keeps language model params frozen but optimizes a continuous task-specific vector (prefix).</td>
+    <td><a href="https://arxiv.org/pdf/2101.00190" target="_blank">Link</a></td>
+  </tr>
+
   <tr>
     <td>The Power of Scale for Parameter-Efficient Prompt Tuning</td>
     <td>Brian Lester et al</td>
     <td>2021</td>
     <td>prompting, llm</td>
     <td>Arxiv</td>
-    <td>This paper explores adding soft prompts to condition forzen language models. Basically, soft prompts are learned through back-propagation and can be used to finetune language models without fully retraining. They also introduce the idea of &quot;prompt ensembling&quot; which is basically using multiple soft prompts on a model and ensembling their outputs.</td>
+    <td>This paper explores adding soft prompts to condition frozen language models. Basically, soft prompts are learned through back-propagation and can be used to finetune language models without fully retraining. They also introduce the idea of &quot;prompt ensembling&quot; which is basically using multiple soft prompts on a model and ensembling their outputs.</td>
    <td><a href="https://arxiv.org/pdf/2104.08691" target="_blank">Link</a></td>
   </tr>

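
And for the soft-prompt entry whose description was fixed above, a minimal sketch in the same spirit: my own toy code under assumed shapes, not Lester et al.'s implementation. A frozen model sees k trainable prompt embeddings prepended to its input embeddings, and only those embeddings are learned through ordinary back-propagation.

# Toy soft prompt tuning sketch (illustrative only; hypothetical sizes).
import torch
import torch.nn as nn

d_model, vocab, prompt_len = 64, 1000, 10

# Stand-in for a frozen pretrained LM: embedding, encoder body, LM head.
lm_embed = nn.Embedding(vocab, d_model)
lm_body = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
    num_layers=2)
lm_head = nn.Linear(d_model, vocab)
for module in (lm_embed, lm_body, lm_head):
    for p in module.parameters():
        p.requires_grad_(False)

# The only trainable parameters: a continuous "soft prompt".
soft_prompt = nn.Parameter(0.02 * torch.randn(1, prompt_len, d_model))
optimizer = torch.optim.Adam([soft_prompt], lr=1e-3)


def forward(tokens: torch.Tensor) -> torch.Tensor:
    x = lm_embed(tokens)                                    # (B, T, d)
    # Prepend the learned prompt embeddings to the input embeddings.
    x = torch.cat([soft_prompt.expand(x.size(0), -1, -1), x], dim=1)
    h = lm_body(x)[:, prompt_len:]                          # drop prompt positions
    return lm_head(h)                                       # (B, T, vocab)


tokens = torch.randint(0, vocab, (2, 16))
logits = forward(tokens)
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab), tokens.reshape(-1))
loss.backward()            # gradients flow only to the soft prompt
optimizer.step()

Prompt ensembling, as the description notes, would then amount to training several such soft prompts for the same task and combining their outputs at inference.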
