
Commit d55e19f
Committed on 2024-10-28
1 parent 4b18605

File tree: 2 files changed, +25 −6 lines


papers/list.json

Lines changed: 11 additions & 2 deletions
@@ -1,12 +1,21 @@
 [
+  {
+    "title": "MaskGIT: Masked Generative Image Transformer",
+    "author": "Huiwen Chang et al",
+    "year": "2022",
+    "topic": "generative models, masking, image transformer",
+    "venue": "Arxiv",
+    "description": "The MaskGIT paper introduces a novel bidirectional transformer architecture for image generation that can predict multiple image tokens in parallel, rather than generating them sequentially like previous methods. They develop a new iterative decoding strategy where the model predicts all masked tokens simultaneously at each step, keeps the most confident predictions, and refines the remaining tokens over multiple iterations using a decreasing mask scheduling function. The approach significantly outperforms previous transformer-based methods in both generation quality and speed on ImageNet, while maintaining good diversity in the generated samples. The bidirectional nature of their model enables flexible image editing applications like inpainting, outpainting, and class-conditional object manipulation without requiring any architectural changes or task-specific training.",
+    "link": "https://arxiv.org/pdf/2202.04200"
+  },
   {
     "title": "Improved Precision and Recall Metric for Assessing Generative Models",
     "author": "Tuomas Kynkaanniemi et al",
     "year": "2019",
     "topic": "generative models, precision, recall",
-    "venue": "NeurIPS 2019",
+    "venue": "NeurIPS",
     "description": "This paper introduces an improved metric for evaluating generative models by separately measuring precision (quality of generated samples) and recall (coverage/diversity of generated distribution) using k-nearest neighbors to construct non-parametric manifold approximations of real and generated data distributions. The authors demonstrate their metric's effectiveness using StyleGAN and BigGAN, showing how it provides more nuanced insights than existing metrics like FID, particularly in revealing tradeoffs between image quality and variation that other metrics obscure. They use their metric to analyze and improve StyleGAN's architecture and training configurations, identifying new variants that achieve state-of-the-art results, and perform the first principled analysis of truncation methods. Finally, they extend their metric to evaluate individual sample quality, enabling quality assessment of interpolations and providing insights into the shape of the latent space that produces realistic images.",
-    "link": ""
+    "link": "https://arxiv.org/pdf/1904.06991"
   },
   {
     "title": "Generative Pretraining from Pixels",
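The MaskGIT description added above outlines a concrete decoding loop: predict every masked token in parallel, keep the most confident predictions, and re-mask the rest according to a decreasing schedule. As a rough illustration only, here is a minimal sketch of that loop; the `model(tokens, mask)` callable, the cosine schedule, and all names and defaults are our own assumptions, not the paper's released code.

```python
import math
import torch

def maskgit_decode(model, num_tokens=256, codebook_size=1024, steps=8):
    """Parallel iterative decoding in the style of MaskGIT (a sketch).

    `model(tokens, mask)` stands in for a bidirectional transformer and must
    return logits of shape (num_tokens, codebook_size); it is a hypothetical
    interface, not part of any released MaskGIT API.
    """
    MASK_ID = codebook_size  # reserve one extra id for masked positions
    tokens = torch.full((num_tokens,), MASK_ID, dtype=torch.long)

    for t in range(steps):
        mask = tokens == MASK_ID                      # positions still unknown
        probs = model(tokens, mask).softmax(dim=-1)
        sampled = torch.multinomial(probs, 1).squeeze(-1)  # predict all at once
        conf = probs.gather(-1, sampled[:, None]).squeeze(-1)
        # already-fixed tokens must never be re-masked: infinite confidence
        conf = torch.where(mask, conf, torch.full_like(conf, float("inf")))

        tokens = torch.where(mask, sampled, tokens)   # tentatively accept all

        # cosine schedule: fraction of positions to re-mask for the next step
        n_mask = int(num_tokens * math.cos(math.pi / 2 * (t + 1) / steps))
        if n_mask > 0:
            # keep the most confident predictions, re-mask the rest
            remask = conf.topk(n_mask, largest=False).indices
            tokens[remask] = MASK_ID
    return tokens
```

Because the schedule reaches zero at the final step, every position ends up holding a real codebook id, and each step refines only the least confident predictions from the previous one.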

papers_read.html

Lines changed: 14 additions & 4 deletions
@@ -75,10 +75,10 @@ <h1>Here's where I keep a list of papers I have read.</h1>
   I typically use this to organize papers I found interesting. Please feel free to do whatever you want with it. Note that this is not every single paper I have ever read, just a collection of ones that I remember to put down.
 </p>
 <p id="paperCount">
-  So far, we have read 147 papers. Let's keep it up!
+  So far, we have read 148 papers. Let's keep it up!
 </p>
 <small id="searchCount">
-  Your search returned 147 papers. Nice!
+  Your search returned 148 papers. Nice!
 </small>

 <div class="search-inputs">
@@ -105,14 +105,24 @@ <h1>Here's where I keep a list of papers I have read.</h1>
 </thead>
 <tbody>

+  <tr>
+    <td>MaskGIT: Masked Generative Image Transformer</td>
+    <td>Huiwen Chang et al</td>
+    <td>2022</td>
+    <td>generative models, masking, image transformer</td>
+    <td>Arxiv</td>
+    <td>The MaskGIT paper introduces a novel bidirectional transformer architecture for image generation that can predict multiple image tokens in parallel, rather than generating them sequentially like previous methods. They develop a new iterative decoding strategy where the model predicts all masked tokens simultaneously at each step, keeps the most confident predictions, and refines the remaining tokens over multiple iterations using a decreasing mask scheduling function. The approach significantly outperforms previous transformer-based methods in both generation quality and speed on ImageNet, while maintaining good diversity in the generated samples. The bidirectional nature of their model enables flexible image editing applications like inpainting, outpainting, and class-conditional object manipulation without requiring any architectural changes or task-specific training.</td>
+    <td><a href="https://arxiv.org/pdf/2202.04200" target="_blank">Link</a></td>
+  </tr>
+
   <tr>
     <td>Improved Precision and Recall Metric for Assessing Generative Models</td>
     <td>Tuomas Kynkaanniemi et al</td>
     <td>2019</td>
     <td>generative models, precision, recall</td>
-    <td>NeurIPS 2019</td>
+    <td>NeurIPS</td>
     <td>This paper introduces an improved metric for evaluating generative models by separately measuring precision (quality of generated samples) and recall (coverage/diversity of generated distribution) using k-nearest neighbors to construct non-parametric manifold approximations of real and generated data distributions. The authors demonstrate their metric&#x27;s effectiveness using StyleGAN and BigGAN, showing how it provides more nuanced insights than existing metrics like FID, particularly in revealing tradeoffs between image quality and variation that other metrics obscure. They use their metric to analyze and improve StyleGAN&#x27;s architecture and training configurations, identifying new variants that achieve state-of-the-art results, and perform the first principled analysis of truncation methods. Finally, they extend their metric to evaluate individual sample quality, enabling quality assessment of interpolations and providing insights into the shape of the latent space that produces realistic images.</td>
-    <td>N/A</td>
+    <td><a href="https://arxiv.org/pdf/1904.06991" target="_blank">Link</a></td>
   </tr>

   <tr>
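The precision/recall entry updated in this commit describes measuring precision and recall with k-nearest-neighbour manifold approximations: a sample counts as covered if it falls inside the ball around some reference point with radius equal to that point's k-th NN distance. A toy sketch of that idea, under our own assumptions (raw vectors instead of the paper's VGG feature space; names and the default `k` are illustrative, not the authors' code):

```python
import numpy as np

def knn_precision_recall(real, fake, k=3):
    """Sketch of k-NN manifold precision/recall (after Kynkaanniemi et al.).

    Operates on raw 2D arrays of shape (n_samples, dim); the paper instead
    embeds images with a pretrained VGG network first.
    """
    def dists(a, b):
        # pairwise Euclidean distances, shape (len(a), len(b))
        return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)

    def in_manifold(candidates, reference):
        d_ref = dists(reference, reference)
        d_ref.sort(axis=1)
        radii = d_ref[:, k]              # k-th NN distance (index 0 is self)
        d = dists(candidates, reference)
        # a candidate is covered if it lies inside any reference ball
        return np.mean((d <= radii[None, :]).any(axis=1))

    precision = in_manifold(fake, real)  # fakes that land on the real manifold
    recall = in_manifold(real, fake)     # reals the generated set reproduces
    return float(precision), float(recall)
```

Separating the two directions is what exposes the quality/diversity tradeoff the description mentions: a mode-collapsed generator can score high precision while recall drops toward zero.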

0 commit comments
