papers/list.json: 18 additions & 0 deletions
@@ -1,4 +1,22 @@
 [
+  {
+    "title": "Finding needles in a haystack: A Black-Box Approach to Invisible Watermark Detection",
+    "author": "Minzhou Pan, et al",
+    "year": "2024",
+    "topic": "watermark, offset learning",
+    "venue": "Arxiv",
+    "description": "The key insight of this paper centers on using \"offset learning\" to detect invisible watermarks in images. The intuition is that with a clean reference dataset of similar images, you can effectively \"cancel out\" the normal image features shared between clean and watermarked images, leaving only the watermark perturbations. The authors design an asymmetric loss function in which clean images use an exponential/softmax loss (to focus on hard examples) while the detection dataset uses a linear loss (to give equal weight to all examples), helping isolate the watermark signal. This is combined with an iterative pruning strategy that gradually removes likely-clean images from the detection set, allowing the model to focus on and learn the watermark patterns. Formulating watermark detection this way avoids needing any prior knowledge of watermarking techniques or labeled data, making it a truly black-box approach.",
+    "link": "https://arxiv.org/pdf/2403.15955"
+  },
+  {
+    "title": "Mitigating the Alignment Tax of RLHF",
+    "author": "Yong Lin, et al",
+    "year": "2024",
+    "topic": "rlhf, alignment",
+    "venue": "Arxiv",
+    "description": "This paper investigates the \"alignment tax\" problem, where large language models lose some of their pre-trained abilities when aligned with human preferences through RLHF. The key insight is that model averaging (interpolating between pre-RLHF and post-RLHF model weights) is surprisingly effective at mitigating this trade-off because tasks share overlapping feature spaces, particularly in the lower layers of the model. Building on this understanding, the authors propose Heterogeneous Model Averaging (HMA), which applies different averaging ratios to different layers of the transformer, allowing the alignment-forgetting trade-off to be optimized. The intuition is that since different layers capture different levels of features and task similarities, they should not be averaged equally, and finding optimal layer-specific averaging ratios better preserves both alignment and pre-trained capabilities.",
+    "link": "https://arxiv.org/pdf/2309.06256"
+  },
   {
     "title": "AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising",
papers_read.html: 22 additions & 2 deletions
@@ -75,10 +75,10 @@ <h1>Here's where I keep a list of papers I have read.</h1>
         I typically use this to organize papers I found interesting. Please feel free to do whatever you want with it. Note that this is not every single paper I have ever read, just a collection of ones that I remember to put down.
       </p>
       <p id="paperCount">
-        So far, we have read 176 papers. Let's keep it up!
+        So far, we have read 178 papers. Let's keep it up!
       </p>
       <small id="searchCount">
-        Your search returned 176 papers. Nice!
+        Your search returned 178 papers. Nice!
       </small>

       <div class="search-inputs">
@@ -105,6 +105,26 @@ <h1>Here's where I keep a list of papers I have read.</h1>
         </thead>
         <tbody>

+          <tr>
+            <td>Finding needles in a haystack: A Black-Box Approach to Invisible Watermark Detection</td>
+            <td>Minzhou Pan, et al</td>
+            <td>2024</td>
+            <td>watermark, offset learning</td>
+            <td>Arxiv</td>
+            <td>The key insight of this paper centers on using "offset learning" to detect invisible watermarks in images. The intuition is that with a clean reference dataset of similar images, you can effectively "cancel out" the normal image features shared between clean and watermarked images, leaving only the watermark perturbations. The authors design an asymmetric loss function in which clean images use an exponential/softmax loss (to focus on hard examples) while the detection dataset uses a linear loss (to give equal weight to all examples), helping isolate the watermark signal. This is combined with an iterative pruning strategy that gradually removes likely-clean images from the detection set, allowing the model to focus on and learn the watermark patterns. Formulating watermark detection this way avoids needing any prior knowledge of watermarking techniques or labeled data, making it a truly black-box approach.</td>
+          </tr>
+          <tr>
+            <td>Mitigating the Alignment Tax of RLHF</td>
+            <td>Yong Lin, et al</td>
+            <td>2024</td>
+            <td>rlhf, alignment</td>
+            <td>Arxiv</td>
+            <td>This paper investigates the "alignment tax" problem, where large language models lose some of their pre-trained abilities when aligned with human preferences through RLHF. The key insight is that model averaging (interpolating between pre-RLHF and post-RLHF model weights) is surprisingly effective at mitigating this trade-off because tasks share overlapping feature spaces, particularly in the lower layers of the model. Building on this understanding, the authors propose Heterogeneous Model Averaging (HMA), which applies different averaging ratios to different layers of the transformer, allowing the alignment-forgetting trade-off to be optimized. The intuition is that since different layers capture different levels of features and task similarities, they should not be averaged equally, and finding optimal layer-specific averaging ratios better preserves both alignment and pre-trained capabilities.</td>
+          </tr>