
Commit edf963b

committed
Updated on 2024-11-23
1 parent 4c07ddb commit edf963b

File tree

2 files changed: +21 -2 lines changed


papers/list.json

Lines changed: 9 additions & 0 deletions
@@ -1,4 +1,13 @@
 [
+  {
+    "title": "AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising",
+    "author": "Zigeng Chen, et al",
+    "year": "2024",
+    "topic": "diffusion, parallelization, denoising",
+    "venue": "Arxiv",
+    "description": "This paper introduces AsyncDiff, a novel approach to accelerate diffusion models through parallel processing across multiple devices. The key insight is that hidden states between consecutive diffusion steps are highly similar, which allows them to break the traditional sequential dependency chain of the denoising process by transforming it into an asynchronous one. They execute this by dividing the denoising model into multiple components distributed across different devices, where each component uses the output from the previous component's prior step as an approximation of its input, enabling parallel computation. To further enhance efficiency, they introduce stride denoising, which completes multiple denoising steps simultaneously through a single parallel computation batch and reduces the frequency of communication between devices. This solution is particularly elegant because it's universal and plug-and-play, requiring no model retraining or architectural changes to achieve significant speedups while maintaining generation quality.",
+    "link": "https://arxiv.org/pdf/2406.06911"
+  },
   {
     "title": "DoRA: Weight-Decomposed Low-Rank Adaptation",
     "author": "Shih-Yang Liu et al",

papers_read.html

Lines changed: 12 additions & 2 deletions
@@ -75,10 +75,10 @@ <h1>Here's where I keep a list of papers I have read.</h1>
   I typically use this to organize papers I found interesting. Please feel free to do whatever you want with it. Note that this is not every single paper I have ever read, just a collection of ones that I remember to put down.
   </p>
   <p id="paperCount">
-    So far, we have read 175 papers. Let's keep it up!
+    So far, we have read 176 papers. Let's keep it up!
   </p>
   <small id="searchCount">
-    Your search returned 175 papers. Nice!
+    Your search returned 176 papers. Nice!
   </small>

   <div class="search-inputs">
@@ -105,6 +105,16 @@ <h1>Here's where I keep a list of papers I have read.</h1>
   </thead>
   <tbody>

+  <tr>
+    <td>AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising</td>
+    <td>Zigeng Chen, et al</td>
+    <td>2024</td>
+    <td>diffusion, parallelization, denoising</td>
+    <td>Arxiv</td>
+    <td>This paper introduces AsyncDiff, a novel approach to accelerate diffusion models through parallel processing across multiple devices. The key insight is that hidden states between consecutive diffusion steps are highly similar, which allows them to break the traditional sequential dependency chain of the denoising process by transforming it into an asynchronous one. They execute this by dividing the denoising model into multiple components distributed across different devices, where each component uses the output from the previous component&#x27;s prior step as an approximation of its input, enabling parallel computation. To further enhance efficiency, they introduce stride denoising, which completes multiple denoising steps simultaneously through a single parallel computation batch and reduces the frequency of communication between devices. This solution is particularly elegant because it&#x27;s universal and plug-and-play, requiring no model retraining or architectural changes to achieve significant speedups while maintaining generation quality.</td>
+    <td><a href="https://arxiv.org/pdf/2406.06911" target="_blank">Link</a></td>
+  </tr>
+
   <tr>
     <td>DoRA: Weight-Decomposed Low-Rank Adaptation</td>
     <td>Shih-Yang Liu et al</td>

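The committed description above explains AsyncDiff's scheduling trick in prose. Below is a minimal, hypothetical Python sketch of that dependency pattern, not the paper's implementation: names such as Component and denoise_async are invented for illustration, the numpy arithmetic merely stands in for real network blocks, and a real system would launch each component on its own device. It shows the point the description makes: at step t, component k reads the output component k-1 produced at step t-1, so the components within a step no longer wait on each other.

import numpy as np

class Component:
    """Stand-in for one contiguous chunk of the denoising network."""
    def __init__(self, scale):
        self.scale = scale

    def __call__(self, hidden):
        # Toy transformation in place of a real block of layers.
        return hidden * self.scale + 0.01

def denoise_sequential(components, hidden, steps):
    """Baseline: within each denoising step, components run one after another."""
    for _ in range(steps):
        for comp in components:
            hidden = comp(hidden)
    return hidden

def denoise_async(components, hidden, steps):
    """Asynchronous variant: at step t, component k consumes the output that
    component k-1 produced at step t-1, so within a step no component waits
    on another and each could run on its own device."""
    n = len(components)
    # cache[k] holds the (approximate) input component k will use this step.
    cache = [hidden.copy() for _ in range(n)]
    outputs = cache
    for _ in range(steps):
        # In a real system these n calls are launched concurrently, one per
        # device; listing them here only spells out the data dependencies.
        outputs = [components[k](cache[k]) for k in range(n)]
        # Next step: component 0 reads the full model's latest output, and
        # component k > 0 reads what component k-1 just produced.
        cache = [outputs[-1]] + outputs[:-1]
    return outputs[-1]

if __name__ == "__main__":
    comps = [Component(0.9), Component(0.95), Component(0.85)]
    x = np.ones(4)
    print("sequential:", denoise_sequential(comps, x, steps=8))
    print("async     :", denoise_async(comps, x, steps=8))

Stride denoising, also mentioned in the description, would additionally batch several such steps into a single parallel launch to reduce device-to-device communication; that bookkeeping is omitted from the sketch.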