Commit 20b7cd2

committed on 2025-01-11
1 parent 488a3b7 commit 20b7cd2

2 files changed: 23 additions & 4 deletions


papers/list.json

Lines changed: 10 additions & 1 deletion
@@ -1,11 +1,20 @@
 [
+    {
+        "title": "Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads",
+        "author": "Tianle Cai et al",
+        "year": "2024",
+        "topic": "speculative decoding, drafting, llm",
+        "venue": "ICML",
+        "description": "This paper presents Medusa, which augments LLM inference by adding extra decoding heads to predict multiple subsequent tokens in parallel. They also introduce a form of tree-based attention to process candidates. Through the Medusa heads, they obtain probability predictions for the subsequent K+1 tokens, which they use to create length-K+1 continuations as the candidates. To process multiple candidates concurrently, they structure their attention such that only tokens from the same continuation are regarded as historical data. For instance, Figure 2 shows an example where the first Medusa head generates its top two predictions while the second Medusa head generates a top three for each of the top two from the first head. Instead of filling the entire attention mask, they only consider the mask from these 2*3 = 6 tokens, plus the standard identity line.",
+        "link": "https://arxiv.org/pdf/2401.10774"
+    },
     {
         "title": "Recurrent Drafter for Fast Speculative Decoding in Large Language Models",
         "author": "Yunfei Cheng et al",
         "year": "2024",
         "topic": "speculative decoding, drafting, llm",
         "venue": "Arxiv",
-        "description": "This paper indroduces ReDrafter (Recurrent Drafter) that uses an RNN as the draft model and conditions on the LLM's hidden states. They use a beam search to explore the candidate seqeunces and then apply a dynamic tree attention alg to remove duplicated prefixes among the candidates to improve the speedup. They also train via knowledge distillation from LLMs to improve the alignment of the draft model's predictions with those of the LLM.",
+        "description": "This paper introduces ReDrafter (Recurrent Drafter), which uses an RNN as the draft model and conditions on the LLM's hidden states. They use a beam search to explore the candidate sequences and then apply a dynamic tree attention algorithm to remove duplicated prefixes among the candidates to improve the speedup. They also train via knowledge distillation from LLMs to improve the alignment of the draft model's predictions with those of the LLM.",
         "link": "https://arxiv.org/pdf/2403.09919"
     },
     {
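
The tree-based attention in the Medusa entry above is easy to picture as a mask over the candidate tree. Below is a minimal sketch, in Python/PyTorch, of how such a mask could be built for the Figure 2 example (top two tokens from the first head, a top three from the second for each, i.e. 2*3 = 6 leaf tokens); the `parents` layout and node ordering are assumptions for illustration, not the paper's actual implementation.

import torch

# Hypothetical node layout for the Figure 2 example:
# index 0 is the current token (root), 1-2 are the first head's top-2,
# 3-8 are the second head's top-3 children under each of nodes 1 and 2.
parents = [-1, 0, 0, 1, 1, 1, 2, 2, 2]
n = len(parents)

# A token may attend only to itself (the identity line) and to its
# ancestors in its own continuation, never to sibling branches.
mask = torch.zeros(n, n, dtype=torch.bool)
for i in range(n):
    j = i
    while j != -1:  # walk up to the root, marking each ancestor
        mask[i, j] = True
        j = parents[j]

print(mask.int())  # rows: queries, columns: allowed keys

Each row ends up attending to exactly one root-to-node path, which is what lets all six continuations be scored in a single forward pass.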

papers_read.html

Lines changed: 13 additions & 3 deletions
@@ -16,10 +16,10 @@ <h1>Here's where I keep a list of papers I have read.</h1>
     I typically use this to organize papers I found interesting. Please feel free to do whatever you want with it. Note that this is not every single paper I have ever read, just a collection of ones that I remember to put down.
     </p>
     <p id="paperCount">
-        So far, we have read 205 papers. Let's keep it up!
+        So far, we have read 206 papers. Let's keep it up!
     </p>
     <small id="searchCount">
-        Your search returned 205 papers. Nice!
+        Your search returned 206 papers. Nice!
     </small>

     <div class="search-inputs">
@@ -46,13 +46,23 @@ <h1>Here's where I keep a list of papers I have read.</h1>
     </thead>
     <tbody>

+        <tr>
+            <td>Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads</td>
+            <td>Tianle Cai et al</td>
+            <td>2024</td>
+            <td>speculative decoding, drafting, llm</td>
+            <td>ICML</td>
+            <td>This paper presents Medusa, which augments LLM inference by adding extra decoding heads to predict multiple subsequent tokens in parallel. They also introduce a form of tree-based attention to process candidates. Through the Medusa heads, they obtain probability predictions for the subsequent K+1 tokens, which they use to create length-K+1 continuations as the candidates. To process multiple candidates concurrently, they structure their attention such that only tokens from the same continuation are regarded as historical data. For instance, Figure 2 shows an example where the first Medusa head generates its top two predictions while the second Medusa head generates a top three for each of the top two from the first head. Instead of filling the entire attention mask, they only consider the mask from these 2*3 = 6 tokens, plus the standard identity line.</td>
+            <td><a href="https://arxiv.org/pdf/2401.10774" target="_blank">Link</a></td>
+        </tr>
+
         <tr>
             <td>Recurrent Drafter for Fast Speculative Decoding in Large Language Models</td>
             <td>Yunfei Cheng et al</td>
             <td>2024</td>
             <td>speculative decoding, drafting, llm</td>
             <td>Arxiv</td>
-            <td>This paper indroduces ReDrafter (Recurrent Drafter) that uses an RNN as the draft model and conditions on the LLM&#x27;s hidden states. They use a beam search to explore the candidate seqeunces and then apply a dynamic tree attention alg to remove duplicated prefixes among the candidates to improve the speedup. They also train via knowledge distillation from LLMs to improve the alignment of the draft model&#x27;s predictions with those of the LLM.</td>
+            <td>This paper introduces ReDrafter (Recurrent Drafter), which uses an RNN as the draft model and conditions on the LLM&#x27;s hidden states. They use a beam search to explore the candidate sequences and then apply a dynamic tree attention algorithm to remove duplicated prefixes among the candidates to improve the speedup. They also train via knowledge distillation from LLMs to improve the alignment of the draft model&#x27;s predictions with those of the LLM.</td>
             <td><a href="https://arxiv.org/pdf/2403.09919" target="_blank">Link</a></td>
         </tr>

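The ReDrafter entry mentions a dynamic tree attention algorithm that removes duplicated prefixes among beam-search candidates. As a rough sketch of the idea (the beams and token IDs below are invented for illustration, and this is not the paper's code), shared prefixes can be collapsed into a trie so each unique token position is processed once:

# Hypothetical beam-search output: three candidate continuations that
# share prefixes ([5] and [5, 9]).
beams = [
    [5, 9, 2],
    [5, 9, 7],
    [5, 1, 4],
]

# Pack the beams into a trie: each unique (parent, token) pair becomes
# one node, so duplicated prefixes collapse into a single path.
nodes = {}      # (parent_id, token) -> node_id
parents = [-1]  # node 0 is the root (the current context)
for beam in beams:
    cur = 0
    for tok in beam:
        key = (cur, tok)
        if key not in nodes:
            nodes[key] = len(parents)
            parents.append(cur)
        cur = nodes[key]

# 3 beams x 3 tokens = 9 positions naively, but only 6 unique nodes here.
print(len(parents) - 1, "unique tokens instead of", sum(map(len, beams)))

The resulting `parents` array has the same shape as the input consumed by the tree-attention mask sketch earlier, which is one way the deduplicated candidates could then be verified in a single pass.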