papers/list.json: 14 additions & 5 deletions
@@ -1,7 +1,16 @@
 [
+  {
+    "title": "A Simple Early Exiting Framework for Accelerating Sampling in Diffusion Models",
+    "author": "Taehong Moon et al",
+    "year": "2024",
+    "topic": "diffusion, early exit",
+    "venue": "ICML",
+    "description": "This paper presents Adaptive Score Estimation (ASE), a novel framework that accelerates diffusion model sampling by adaptively allocating computational resources based on the time step being processed. The authors observe that score estimation near the noise distribution (t→1) requires less computational power than estimation near the data distribution (t→0), leading them to develop a time-dependent early-exiting scheme where more neural network blocks are skipped during the noise-phase sampling steps. Their approach differs between architectures: for DiT models they skip entire blocks, while for U-ViT models they preserve the linear layers connected to skip connections and drop other block components to maintain the residual pathway information. The authors fine-tune their models using a specially designed training procedure that employs exponential moving averages and weighted coefficients to ensure minimal information updates near t→0 while allowing more updates near t→1.",
+    "link": "https://arxiv.org/pdf/2408.05927"
+  },
   {
     "title": "Active Prompting with Chain-of-Thought for Large Language Models",
-    "author": "Shizhe Diao, et al",
+    "author": "Shizhe Diao et al",
     "year": "2023",
     "topic": "prompting, cot",
     "venue": "Arxiv",
@@ -10,7 +19,7 @@
   },
   {
     "title": "RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment",
-    "author": "Hanze Dong, et al",
+    "author": "Hanze Dong et al",
     "year": "2023",
     "topic": "watermark, offset learning",
     "venue": "TMLR",
@@ -19,7 +28,7 @@
   },
   {
     "title": "Finding needles in a haystack: A Black-Box Approach to Invisible Watermark Detection",
-    "author": "Minzhou Pan, et al",
+    "author": "Minzhou Pan et al",
     "year": "2024",
     "topic": "watermark, offset learning",
     "venue": "Arxiv",
@@ -28,7 +37,7 @@
   },
   {
     "title": "Mitigating the Alignment Tax of RLHF",
-    "author": "Yong Lin, et al",
+    "author": "Yong Lin et al",
     "year": "2024",
     "topic": "rlhf, alignment",
     "venue": "Arxiv",
@@ -37,7 +46,7 @@
   },
   {
     "title": "AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising",
papers_read.html: 17 additions & 7 deletions
@@ -75,10 +75,10 @@ <h1>Here's where I keep a list of papers I have read.</h1>
     I typically use this to organize papers I found interesting. Please feel free to do whatever you want with it. Note that this is not every single paper I have ever read, just a collection of ones that I remember to put down.
   </p>
   <p id="paperCount">
-    So far, we have read 180 papers. Let's keep it up!
+    So far, we have read 181 papers. Let's keep it up!
   </p>
   <small id="searchCount">
-    Your search returned 180 papers. Nice!
+    Your search returned 181 papers. Nice!
   </small>

   <div class="search-inputs">
@@ -105,9 +105,19 @@ <h1>Here's where I keep a list of papers I have read.</h1>
   </thead>
   <tbody>

+  <tr>
+    <td>A Simple Early Exiting Framework for Accelerating Sampling in Diffusion Models</td>
+    <td>Taehong Moon et al</td>
+    <td>2024</td>
+    <td>diffusion, early exit</td>
+    <td>ICML</td>
+    <td>This paper presents Adaptive Score Estimation (ASE), a novel framework that accelerates diffusion model sampling by adaptively allocating computational resources based on the time step being processed. The authors observe that score estimation near the noise distribution (t→1) requires less computational power than estimation near the data distribution (t→0), leading them to develop a time-dependent early-exiting scheme where more neural network blocks are skipped during the noise-phase sampling steps. Their approach differs between architectures: for DiT models they skip entire blocks, while for U-ViT models they preserve the linear layers connected to skip connections and drop other block components to maintain the residual pathway information. The authors fine-tune their models using a specially designed training procedure that employs exponential moving averages and weighted coefficients to ensure minimal information updates near t→0 while allowing more updates near t→1.</td>
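The ASE description added above is the one algorithmic piece in this change, so here is a rough sketch of the core idea it describes: a denoiser whose effective depth depends on the diffusion timestep, running fewer blocks near the noise end of sampling. Everything here is an illustrative assumption, not the authors' implementation: the class name `TinyDiT`, the linear exit schedule in `exit_depth`, and the whole-block skipping (the DiT-style case) are made up for clarity, and the U-ViT handling of skip-connection linears and the EMA-based fine-tuning procedure are omitted.

```python
import torch
import torch.nn as nn


class TinyDiT(nn.Module):
    """Toy transformer denoiser whose depth depends on the timestep (hypothetical sketch)."""

    def __init__(self, dim: int = 256, n_blocks: int = 12):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
             for _ in range(n_blocks)]
        )
        self.head = nn.Linear(dim, dim)  # stand-in for the score / noise-prediction head

    def exit_depth(self, t: float) -> int:
        # Run all blocks near the data distribution (t -> 0) and roughly half
        # near the noise distribution (t -> 1). This linear schedule is an
        # assumption; the paper tunes its own time-dependent exit rule.
        frac = 1.0 - 0.5 * t
        return max(1, round(frac * len(self.blocks)))

    def forward(self, x: torch.Tensor, t: float) -> torch.Tensor:
        depth = self.exit_depth(t)
        for block in self.blocks[:depth]:  # later blocks are skipped ("early exit")
            x = block(x)
        return self.head(x)


model = TinyDiT()
tokens = torch.randn(2, 16, 256)   # (batch, tokens, hidden dim)
noisy_step = model(tokens, t=0.9)  # ~7 of 12 blocks: cheap estimate near the noise distribution
data_step = model(tokens, t=0.1)   # ~11 of 12 blocks: fuller estimate near the data distribution
```

Over a full sampling trajectory, the cheaper near-noise steps are where this scheme saves compute, which is the intuition the description above attributes to ASE.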