
Commit 3951a18

committed on 2024-10-28
1 parent 8839c06 commit 3951a18

File tree

3 files changed: 41 additions, 3 deletions

  index.html
  papers/list.json
  papers_read.html

index.html

Lines changed: 1 addition & 1 deletion
@@ -74,7 +74,7 @@ <h1>Where?</h1>
       </p>
       <h1>When?</h1>
       <p>
-        Last time this was edited was 2024-10-26 (YYYY/MM/DD).
+        Last time this was edited was 2024-10-28 (YYYY/MM/DD).
       </p>
       <small><a href="misc.html">misc</a></small>
     </div>

papers/list.json

Lines changed: 18 additions & 0 deletions
@@ -1,4 +1,22 @@
 [
+  {
+    "title": "MaskGIT: Masked Generative Image Transformer",
+    "author": "Huiwen Chang et al",
+    "year": "2022",
+    "topic": "image transformer, generation",
+    "venue": "Arxiv",
+    "description": "The MaskGIT paper introduces a novel bidirectional transformer architecture for image generation that can predict multiple image tokens in parallel, rather than generating them sequentially like previous methods. They develop a new iterative decoding strategy where the model predicts all masked tokens simultaneously at each step, keeps the most confident predictions, and refines the remaining tokens over multiple iterations using a decreasing mask scheduling function. The approach significantly outperforms previous transformer-based methods in both generation quality and speed on ImageNet, while maintaining good diversity in the generated samples. The bidirectional nature of their model enables flexible image editing applications like inpainting, outpainting, and class-conditional object manipulation without requiring any architectural changes or task-specific training.",
+    "link": "https://arxiv.org/pdf/2202.04200"
+  },
+  {
+    "title": "Generative Pretraining from Pixels",
+    "author": "Mark Chen et al",
+    "year": "2020",
+    "topic": "pretraining, gpt",
+    "venue": "PMLR",
+    "description": "The paper demonstrates that transformer models can learn high-quality image representations by simply predicting pixels in a generative way, without incorporating any knowledge of the 2D structure of images. They show that as the generative models get better at predicting pixels (measured by log probability), they also learn better representations that can be used for downstream image classification tasks. The authors discover that, unlike in supervised learning where the best representations are in the final layers, their generative models learn the best representations in the middle layers - suggesting the model first builds up representations before using them to predict pixels. Finally, while their approach requires significant compute and works best at lower resolutions, it achieves competitive results with other self-supervised methods and shows that generative pre-training can be a promising direction for learning visual representations without labels.",
+    "link": "https://cdn.openai.com/papers/Generative_Pretraining_from_Pixels_V2.pdf"
+  },
   {
     "title": "Why Does Unsupervised Pre-Training Help Deep Learning?",
     "author": "Dumitru Erhan et al",

papers_read.html

Lines changed: 22 additions & 2 deletions
@@ -75,10 +75,10 @@ <h1>Here's where I keep a list of papers I have read.</h1>
       I typically use this to organize papers I found interesting. Please feel free to do whatever you want with it. Note that this is not every single paper I have ever read, just a collection of ones that I remember to put down.
     </p>
     <p id="paperCount">
-      So far, we have read 145 papers. Let's keep it up!
+      So far, we have read 147 papers. Let's keep it up!
     </p>
     <small id="searchCount">
-      Your search returned 145 papers. Nice!
+      Your search returned 147 papers. Nice!
     </small>

     <div class="search-inputs">
@@ -105,6 +105,26 @@ <h1>Here's where I keep a list of papers I have read.</h1>
     </thead>
     <tbody>

+      <tr>
+        <td>MaskGIT: Masked Generative Image Transformer</td>
+        <td>Huiwen Chang et al</td>
+        <td>2022</td>
+        <td>image transformer, generation</td>
+        <td>Arxiv</td>
+        <td>The MaskGIT paper introduces a novel bidirectional transformer architecture for image generation that can predict multiple image tokens in parallel, rather than generating them sequentially like previous methods. They develop a new iterative decoding strategy where the model predicts all masked tokens simultaneously at each step, keeps the most confident predictions, and refines the remaining tokens over multiple iterations using a decreasing mask scheduling function. The approach significantly outperforms previous transformer-based methods in both generation quality and speed on ImageNet, while maintaining good diversity in the generated samples. The bidirectional nature of their model enables flexible image editing applications like inpainting, outpainting, and class-conditional object manipulation without requiring any architectural changes or task-specific training.</td>
+        <td><a href="https://arxiv.org/pdf/2202.04200" target="_blank">Link</a></td>
+      </tr>
+
+      <tr>
+        <td>Generative Pretraining from Pixels</td>
+        <td>Mark Chen et al</td>
+        <td>2020</td>
+        <td>pretraining, gpt</td>
+        <td>PMLR</td>
+        <td>The paper demonstrates that transformer models can learn high-quality image representations by simply predicting pixels in a generative way, without incorporating any knowledge of the 2D structure of images. They show that as the generative models get better at predicting pixels (measured by log probability), they also learn better representations that can be used for downstream image classification tasks. The authors discover that, unlike in supervised learning where the best representations are in the final layers, their generative models learn the best representations in the middle layers - suggesting the model first builds up representations before using them to predict pixels. Finally, while their approach requires significant compute and works best at lower resolutions, it achieves competitive results with other self-supervised methods and shows that generative pre-training can be a promising direction for learning visual representations without labels.</td>
+        <td><a href="https://cdn.openai.com/papers/Generative_Pretraining_from_Pixels_V2.pdf" target="_blank">Link</a></td>
+      </tr>
+
       <tr>
         <td>Why Does Unsupervised Pre-Training Help Deep Learning?</td>
         <td>Dumitru Erhan et al</td>
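
The Generative Pretraining from Pixels entry above boils down to: quantize pixels into a small color palette, flatten the image into a 1D sequence, and train a transformer to predict each pixel from the ones before it, giving the model no knowledge of the 2D structure. A rough sketch of one training step under those assumptions; quantize and model are hypothetical stand-ins, not the paper's code.

import torch
import torch.nn.functional as F

def igpt_pretrain_step(model, images, quantize, num_clusters=512):
    # Toy sketch of iGPT-style generative pretraining on raw pixels (assumed API:
    # quantize(images) -> (B, H, W) long cluster ids,
    # model(seq) -> (B, L, num_clusters) next-token logits).
    tokens = quantize(images)            # each pixel becomes one of num_clusters color ids
    seq = tokens.flatten(start_dim=1)    # raster-order 1D sequence; no 2D bias is provided

    # Standard autoregressive objective: predict pixel t from pixels < t.
    logits = model(seq[:, :-1])
    loss = F.cross_entropy(logits.reshape(-1, num_clusters), seq[:, 1:].reshape(-1))
    return loss

The notable empirical point from the summary is that the most useful features for downstream classification come from the middle layers of such a model, not the final ones.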
