Commit 3eaefad

Updated on 2024-11-09
1 parent 0ddea49 commit 3eaefad

3 files changed: +22 −3 lines changed

index.html

Lines changed: 1 addition & 1 deletion
@@ -74,7 +74,7 @@ <h1>Where?</h1>
   </p>
   <h1>When?</h1>
   <p>
-    Last time this was edited was 2024-11-08 (YYYY/MM/DD).
+    Last time this was edited was 2024-11-09 (YYYY/MM/DD).
   </p>
   <small><a href="misc.html">misc</a></small>
 </div>

papers/list.json

Lines changed: 9 additions & 0 deletions
@@ -1,4 +1,13 @@
 [
+  {
+    "title": "Attention-Driven Training-Free Efficiency Enhancement of Diffusion Models",
+    "author": "Hongjie Wang et al",
+    "year": "2024",
+    "topic": "diffusion, training-free, attention, token pruning",
+    "venue": "CVPR",
+    "description": "This paper introduces AT-EDM, a training-free framework to accelerate diffusion models by pruning redundant tokens during inference without requiring model retraining. The key innovation is a Generalized Weighted PageRank (G-WPR) algorithm that uses attention maps to identify and prune less important tokens, along with a novel similarity-based token recovery method that fills in pruned tokens based on attention patterns to maintain compatibility with convolutional layers. The authors also propose a Denoising-Steps-Aware Pruning (DSAP) schedule that prunes fewer tokens in early denoising steps when attention maps are more chaotic and less informative, and more tokens in later steps when attention patterns are better established. The overall approach focuses on making diffusion models more efficient by leveraging the rich information contained in attention maps to guide token pruning decisions while maintaining image generation quality.",
+    "link": "https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_Attention-Driven_Training-Free_Efficiency_Enhancement_of_Diffusion_Models_CVPR_2024_paper.pdf"
+  },
   {
     "title": "Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks",
     "author": "Tim Salimans et al",

papers_read.html

Lines changed: 12 additions & 2 deletions
@@ -75,10 +75,10 @@ <h1>Here's where I keep a list of papers I have read.</h1>
   I typically use this to organize papers I found interesting. Please feel free to do whatever you want with it. Note that this is not every single paper I have ever read, just a collection of ones that I remember to put down.
 </p>
 <p id="paperCount">
-  So far, we have read 160 papers. Let's keep it up!
+  So far, we have read 161 papers. Let's keep it up!
 </p>
 <small id="searchCount">
-  Your search returned 160 papers. Nice!
+  Your search returned 161 papers. Nice!
 </small>

 <div class="search-inputs">
@@ -105,6 +105,16 @@ <h1>Here's where I keep a list of papers I have read.</h1>
 </thead>
 <tbody>

+  <tr>
+    <td>Attention-Driven Training-Free Efficiency Enhancement of Diffusion Models</td>
+    <td>Hongjie Wang et al</td>
+    <td>2024</td>
+    <td>diffusion, training-free, attention, token pruning</td>
+    <td>CVPR</td>
+    <td>This paper introduces AT-EDM, a training-free framework to accelerate diffusion models by pruning redundant tokens during inference without requiring model retraining. The key innovation is a Generalized Weighted PageRank (G-WPR) algorithm that uses attention maps to identify and prune less important tokens, along with a novel similarity-based token recovery method that fills in pruned tokens based on attention patterns to maintain compatibility with convolutional layers. The authors also propose a Denoising-Steps-Aware Pruning (DSAP) schedule that prunes fewer tokens in early denoising steps when attention maps are more chaotic and less informative, and more tokens in later steps when attention patterns are better established. The overall approach focuses on making diffusion models more efficient by leveraging the rich information contained in attention maps to guide token pruning decisions while maintaining image generation quality.</td>
+    <td><a href="https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_Attention-Driven_Training-Free_Efficiency_Enhancement_of_Diffusion_Models_CVPR_2024_paper.pdf" target="_blank">Link</a></td>
+  </tr>
+
 <tr>
   <td>Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks</td>
   <td>Tim Salimans et al</td>
