
Commit d9634b1

committed on 2024-11-09
1 parent 3eaefad commit d9634b1

File tree

2 files changed: 21 additions, 2 deletions


papers/list.json

Lines changed: 9 additions & 0 deletions
@@ -1,4 +1,13 @@
 [
+  {
+    "title": "One Step Diffusion via ShortCut Models",
+    "author": "Kevin Frans et al",
+    "year": "2024",
+    "topic": "diffusion, ode, flow-matching",
+    "venue": "Arxiv",
+    "description": "This paper introduces shortcut models, a new type of diffusion model that enables high-quality image generation in a single forward pass by conditioning the model not only on the timestep but also on the desired step size, allowing it to learn larger jumps during the denoising process. Unlike previous approaches that require multiple training phases or complex scheduling, shortcut models can be trained end-to-end in a single phase by leveraging a self-consistency property where one large step should equal two consecutive smaller steps, combined with flow-matching loss as a base case. The key insight is that by conditioning on step size, the model can account for future curvature in the denoising path and jump directly to the correct next point rather than following the curved path naively, which would lead to errors with large steps. The approach simplifies the training pipeline while maintaining flexibility in inference budget, as the same model can generate samples using either single or multiple steps after training.",
+    "link": "https://arxiv.org/abs/2410.12557"
+  },
   {
     "title": "Attention-Driven Training-Free Efficiency Enhancement of Diffusion Models",
     "author": "Hongjie Wang et al",

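The description in the new entry above outlines the training recipe: a flow-matching loss serves as the base case (step size zero), and a self-consistency term ties one large denoising step to two consecutive half-size steps. Purely as a rough sketch of that idea, not the authors' implementation, the objective might look like the code below. The model signature model(x_t, t, d), the interpolation convention x_t = (1 - t) * x0 + t * x1 (x0 noise, x1 data), and the step-size sampling are assumptions made here for illustration.

import torch

def shortcut_loss(model, x0, x1, k=7):
    # Hypothetical sketch of the shortcut-model objective summarized above.
    # x0: noise samples, x1: data samples, both of shape (batch, dim).
    # model(x_t, t, d) is assumed to predict the average velocity for a jump
    # of size d starting from x_t at time t.
    b = x0.shape[0]
    t = torch.rand(b, device=x0.device)
    x_t = (1 - t)[:, None] * x0 + t[:, None] * x1

    # Base case (d = 0): plain flow-matching target, the velocity x1 - x0.
    v_pred = model(x_t, t, torch.zeros_like(t))
    loss_fm = ((v_pred - (x1 - x0)) ** 2).mean()

    # Self-consistency: one step of size 2d should match the average of two
    # consecutive steps of size d (boundary handling for t + 2d > 1 omitted).
    d = 2.0 ** -torch.randint(1, k, (b,), device=x0.device).float()
    with torch.no_grad():
        s1 = model(x_t, t, d)
        x_mid = x_t + d[:, None] * s1
        s2 = model(x_mid, t + d, d)
        target = (s1 + s2) / 2
    pred = model(x_t, t, 2 * d)
    loss_sc = ((pred - target) ** 2).mean()

    # How the two terms are mixed across the batch is left out of this sketch;
    # summing them keeps the example short.
    return loss_fm + loss_sc

Training this way lets the same network be queried with a large step size for one-step generation or with smaller step sizes for a multi-step schedule, which is the inference-budget flexibility the description mentions.
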
papers_read.html

Lines changed: 12 additions & 2 deletions
@@ -75,10 +75,10 @@ <h1>Here's where I keep a list of papers I have read.</h1>
       I typically use this to organize papers I found interesting. Please feel free to do whatever you want with it. Note that this is not every single paper I have ever read, just a collection of ones that I remember to put down.
     </p>
     <p id="paperCount">
-      So far, we have read 161 papers. Let's keep it up!
+      So far, we have read 162 papers. Let's keep it up!
     </p>
     <small id="searchCount">
-      Your search returned 161 papers. Nice!
+      Your search returned 162 papers. Nice!
     </small>

     <div class="search-inputs">
@@ -105,6 +105,16 @@ <h1>Here's where I keep a list of papers I have read.</h1>
     </thead>
     <tbody>

+    <tr>
+      <td>One Step Diffusion via ShortCut Models</td>
+      <td>Kevin Frans et al</td>
+      <td>2024</td>
+      <td>diffusion, ode, flow-matching</td>
+      <td>Arxiv</td>
+      <td>This paper introduces shortcut models, a new type of diffusion model that enables high-quality image generation in a single forward pass by conditioning the model not only on the timestep but also on the desired step size, allowing it to learn larger jumps during the denoising process. Unlike previous approaches that require multiple training phases or complex scheduling, shortcut models can be trained end-to-end in a single phase by leveraging a self-consistency property where one large step should equal two consecutive smaller steps, combined with flow-matching loss as a base case. The key insight is that by conditioning on step size, the model can account for future curvature in the denoising path and jump directly to the correct next point rather than following the curved path naively, which would lead to errors with large steps. The approach simplifies the training pipeline while maintaining flexibility in inference budget, as the same model can generate samples using either single or multiple steps after training.</td>
+      <td><a href="https://arxiv.org/abs/2410.12557" target="_blank">Link</a></td>
+    </tr>
+
     <tr>
       <td>Attention-Driven Training-Free Efficiency Enhancement of Diffusion Models</td>
       <td>Hongjie Wang et al</td>

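The new table row above repeats the same summary, including the note that one model can sample in either a single step or several. As a companion to the training sketch earlier, and again using the same assumed (hypothetical) model signature rather than the authors' code, single- or multi-step sampling could look like this:

import torch

@torch.no_grad()
def sample(model, x0, num_steps=1):
    # Integrate from noise x0 at t = 0 toward data at t = 1 in num_steps
    # equal shortcut jumps; num_steps=1 gives the one-step generation the
    # description refers to. model(x, t, d) is the assumed signature from
    # the training sketch above.
    b = x0.shape[0]
    d = 1.0 / num_steps
    x = x0
    for i in range(num_steps):
        t = torch.full((b,), i * d, device=x0.device)
        step = torch.full((b,), d, device=x0.device)
        x = x + d * model(x, t, step)
    return x

For example, sample(model, torch.randn(batch, dim), num_steps=1) would draw one-step samples, while a larger num_steps trades compute for quality under the same trained weights.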