Commit 4a2cbd6

committed on 2025-03-04
1 parent ebed873 commit 4a2cbd6

2 files changed: +40 -2 lines changed

papers/list.json

Lines changed: 18 additions & 0 deletions
@@ -1,4 +1,22 @@
 [
+    {
+        "title": "Sequence-Level Knowledge Distillation",
+        "author": "Yoon Kim et al",
+        "year": "2016",
+        "topic": "knowledge distillation",
+        "venue": "Arxiv",
+        "description": "This paper introduces sequence-level knowledge distillation for neural machine translation, allowing smaller student models to achieve performance comparable to larger teacher models. The authors demonstrate that their approach works better than standard word-level knowledge distillation by having students learn from complete translations generated by the teacher rather than just matching word-level probabilities. Remarkably, their method enables student models to perform well even with greedy decoding, eliminating the need for computationally expensive beam search at inference time. Combining their distillation techniques with weight pruning, they produce models with 13× fewer parameters than the original teacher model while maintaining strong translation performance, making efficient NMT deployment possible even on mobile devices.",
+        "link": "https://arxiv.org/pdf/1606.07947"
+    },
+    {
+        "title": "The Mamba in the Llama: Distilling and Accelerating Hybrid Models",
+        "author": "Junxiong Wang et al",
+        "year": "2025",
+        "topic": "knowledge distillation, llm",
+        "venue": "Arxiv",
+        "description": "This paper demonstrates how large Transformer models can be effectively distilled into hybrid models that incorporate linear RNNs like Mamba while maintaining much of their generation quality, notably by reusing the weights from attention layers. The researchers developed a multistage distillation approach combining progressive distillation, supervised fine-tuning, and direct preference optimization, which outperforms models trained from scratch with trillions of tokens. They also introduced a hardware-aware speculative decoding algorithm that significantly accelerates inference speed for both Mamba and hybrid architectures, achieving impressive throughput for large language models. The resulting hybrid models show comparable performance to the original Transformers on chat benchmarks while requiring fewer computational resources for deployment, highlighting how transformer knowledge can be effectively transferred to other architectures with customized inference profiles.",
+        "link": "https://arxiv.org/pdf/2408.15237"
+    },
     {
         "title": "Compact Language Models via Pruning and Knowledge Distillation",
         "author": "Saurav Muralidharan et al",

papers_read.html

Lines changed: 22 additions & 2 deletions
@@ -16,10 +16,10 @@ <h1>Here's where I keep a list of papers I have read.</h1>
       I typically use this to organize papers I found interesting. Please feel free to do whatever you want with it. Note that this is not every single paper I have ever read, just a collection of ones that I remember to put down.
     </p>
     <p id="paperCount">
-      So far, we have read 229 papers. Let's keep it up!
+      So far, we have read 231 papers. Let's keep it up!
     </p>
     <small id="searchCount">
-      Your search returned 229 papers. Nice!
+      Your search returned 231 papers. Nice!
     </small>

     <div class="search-inputs">
@@ -46,6 +46,26 @@ <h1>Here's where I keep a list of papers I have read.</h1>
     </thead>
     <tbody>

+      <tr>
+        <td>Sequence-Level Knowledge Distillation</td>
+        <td>Yoon Kim et al</td>
+        <td>2016</td>
+        <td>knowledge distillation</td>
+        <td>Arxiv</td>
+        <td>This paper introduces sequence-level knowledge distillation for neural machine translation, allowing smaller student models to achieve performance comparable to larger teacher models. The authors demonstrate that their approach works better than standard word-level knowledge distillation by having students learn from complete translations generated by the teacher rather than just matching word-level probabilities. Remarkably, their method enables student models to perform well even with greedy decoding, eliminating the need for computationally expensive beam search at inference time. Combining their distillation techniques with weight pruning, they produce models with 13× fewer parameters than the original teacher model while maintaining strong translation performance, making efficient NMT deployment possible even on mobile devices.</td>
+        <td><a href="https://arxiv.org/pdf/1606.07947" target="_blank">Link</a></td>
+      </tr>
+
+      <tr>
+        <td>The Mamba in the Llama: Distilling and Accelerating Hybrid Models</td>
+        <td>Junxiong Wang et al</td>
+        <td>2025</td>
+        <td>knowledge distillation, llm</td>
+        <td>Arxiv</td>
+        <td>This paper demonstrates how large Transformer models can be effectively distilled into hybrid models that incorporate linear RNNs like Mamba while maintaining much of their generation quality, notably by reusing the weights from attention layers. The researchers developed a multistage distillation approach combining progressive distillation, supervised fine-tuning, and direct preference optimization, which outperforms models trained from scratch with trillions of tokens. They also introduced a hardware-aware speculative decoding algorithm that significantly accelerates inference speed for both Mamba and hybrid architectures, achieving impressive throughput for large language models. The resulting hybrid models show comparable performance to the original Transformers on chat benchmarks while requiring fewer computational resources for deployment, highlighting how transformer knowledge can be effectively transferred to other architectures with customized inference profiles.</td>
+        <td><a href="https://arxiv.org/pdf/2408.15237" target="_blank">Link</a></td>
+      </tr>
+
       <tr>
         <td>Compact Language Models via Pruning and Knowledge Distillation</td>
         <td>Saurav Muralidharan et al</td>
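
The second new entry attributes much of its inference speedup to speculative decoding. The loop below is a rough, generic greedy-verification sketch of that idea, not the paper's hardware-aware algorithm; draft_model, target_model, and their methods are hypothetical stand-ins.

    # Generic speculative decoding (greedy verification), illustration only.
    # draft_model.propose and target_model.verify are assumed interfaces:
    # verify() returns the target model's greedy token at each of the
    # n_draft + 1 positions covered by the proposed block.
    def speculative_decode(draft_model, target_model, prompt_tokens, n_draft=4, max_len=128):
        tokens = list(prompt_tokens)
        while len(tokens) < max_len:
            # A cheap draft model proposes a short block of candidate tokens.
            proposed = draft_model.propose(tokens, n_tokens=n_draft)
            # The large target model checks the whole block in one forward pass.
            verified = target_model.verify(tokens, proposed)
            # Keep the longest agreeing prefix, then take the target model's token
            # at the first disagreement (or its bonus token if everything matched).
            n_accept = 0
            while n_accept < n_draft and proposed[n_accept] == verified[n_accept]:
                n_accept += 1
            tokens.extend(proposed[:n_accept])
            tokens.append(verified[n_accept])
        return tokens[:max_len]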
