## Reproducing Neural Discrete Representation Learning
### Course Project for [IFT 6135 - Representation Learning](https://ift6135h18.wordpress.com/)
Project Report link: [final_project.pdf](final_project.pdf)
### Instructions
1. To train the VQ-VAE with default arguments as discussed in the report, execute:

```
python vqvae.py --data-folder /tmp/miniimagenet --output-folder models/vqvae
```

2. To train the PixelCNN prior on the latents, execute:

```
python pixelcnn_prior.py --data-folder /tmp/miniimagenet --model models/vqvae --output-folder models/pixelcnn_prior
```
### Reconstructions from VQ-VAE
The top 4 rows are original images; the bottom 4 rows are their reconstructions.

#### MNIST
![png](samples/vqvae_reconstructions_MNIST.png)

#### Fashion MNIST
![png](samples/vqvae_reconstructions_FashionMNIST.png)
### Class-conditional samples from VQ-VAE with PixelCNN prior on the latents

#### MNIST
![png](samples/samples_MNIST.png)

#### Fashion MNIST
![png](samples/samples_FashionMNIST.png)
### Comments

1. We noticed that implementing our own VectorQuantization PyTorch function sped up training of the VQ-VAE by nearly 3x. The slower but simpler code is in this [commit](https://github.com/ritheshkumar95/pytorch-vqvae/tree/cde142670f701e783f29e9c815f390fc502532e8).

2. We added some basic tests for the vector quantization functions (based on `pytest`). To run these tests, execute:

```
py.test . -vv
```
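The core operation behind such a custom function is nearest-codebook lookup with a straight-through gradient. The sketch below is not this repository's implementation, just a minimal illustration assuming 2-D encoder outputs of shape `(batch, dim)` and a hypothetical `vector_quantize` helper:

```python
import torch

def vector_quantize(z_e, codebook):
    # z_e: (batch, dim) encoder outputs; codebook: (K, dim) embeddings.
    # Find the index of the nearest codebook entry for each encoder output.
    distances = torch.cdist(z_e, codebook)  # (batch, K) pairwise L2 distances
    indices = distances.argmin(dim=1)       # (batch,)
    z_q = codebook[indices]                 # (batch, dim) quantized vectors
    # Straight-through estimator: the forward pass returns z_q,
    # while gradients flow to z_e as if quantization were the identity.
    z_q_st = z_e + (z_q - z_e).detach()
    return z_q_st, indices
```

Wrapping this logic in a dedicated `torch.autograd.Function` (rather than composing it from many small tensor ops inside the model's forward pass) is one way such a speed-up can be achieved.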
### Authors

1. Rithesh Kumar
2. Tristan Deleu
3. Evan Racah