This repository contains a Keras implementation of a Generative Adversarial Network (GAN) that generates MNIST digits.
GANs are a class of neural networks that learn to generate new data by training a generator network to produce fake data that is difficult for a discriminator network to distinguish from real data. During training, the generator and discriminator networks play a game where the generator tries to produce more realistic data while the discriminator tries to correctly classify the data as real or fake.
This project uses Keras, a popular deep learning library, to implement a GAN that generates realistic MNIST digits. The generator and discriminator are both fully connected neural networks with several layers, and both are trained with the Adam optimizer. In this repository this model is referred to as the DCGAN (Deep Convolutional GAN).
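As a rough illustration of how the pieces described above fit together, the sketch below builds a small fully connected generator and discriminator in Keras, compiles them with Adam, and runs one round of the adversarial game. The layer sizes, latent dimension, learning rate, and batch size here are illustrative assumptions, not the exact hyperparameters used in this repository.

```python
# Illustrative sketch only: layer sizes, latent_dim, and batch_size are assumptions,
# not the exact hyperparameters used in this repository.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 100

# Generator: noise vector -> flattened 28x28 image in [-1, 1].
generator = keras.Sequential([
    layers.Dense(256, activation="relu", input_shape=(latent_dim,)),
    layers.Dense(512, activation="relu"),
    layers.Dense(28 * 28, activation="tanh"),
])

# Discriminator: flattened image -> probability that the image is real.
discriminator = keras.Sequential([
    layers.Dense(512, activation="relu", input_shape=(28 * 28,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer=keras.optimizers.Adam(1e-4), loss="binary_crossentropy")

# Stacked model used to train the generator: the discriminator is frozen here,
# so only the generator's weights are updated through this model.
discriminator.trainable = False
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer=keras.optimizers.Adam(1e-4), loss="binary_crossentropy")

def train_step(real_images, batch_size=64):
    """One round of the generator/discriminator game on a batch of flattened real images."""
    # Train the discriminator on real images (label 1) and generated images (label 0).
    noise = np.random.normal(0.0, 1.0, (batch_size, latent_dim))
    fake_images = generator.predict(noise, verbose=0)
    d_loss_real = discriminator.train_on_batch(real_images, np.ones((batch_size, 1)))
    d_loss_fake = discriminator.train_on_batch(fake_images, np.zeros((batch_size, 1)))

    # Train the generator to push the discriminator's output towards "real".
    noise = np.random.normal(0.0, 1.0, (batch_size, latent_dim))
    g_loss = gan.train_on_batch(noise, np.ones((batch_size, 1)))
    return d_loss_real, d_loss_fake, g_loss
```

Freezing the discriminator before compiling the stacked model is the standard Keras pattern for alternating the two updates: the discriminator still learns through its own `train_on_batch` calls, while the stacked model only adjusts the generator.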
This project also contains an implementation of a Class Conditional GAN (CCGAN) that generates MNIST digits conditioned on a class label. Here the generator and discriminator are both convolutional neural networks with several layers, again trained with the Adam optimizer. This network was trained for significantly longer than the DCGAN, and its results are much more impressive.
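The sketch below shows one common way to condition a generator on a class label in Keras: embed the label, concatenate it with the noise vector, and upsample to an image. The embedding size, layer widths, and upsampling stack are assumptions for illustration, not the exact architecture of the CCGAN in this repository.

```python
# Hypothetical conditional generator: the Embedding size, layer widths, and
# upsampling stack are illustrative, not copied from this repository's CCGAN.
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 100
num_classes = 10

noise_in = keras.Input(shape=(latent_dim,))
label_in = keras.Input(shape=(1,), dtype="int32")

# Map the class label to a dense vector and merge it with the noise vector,
# so the generator can produce a digit of the requested class.
label_embedding = layers.Flatten()(layers.Embedding(num_classes, 50)(label_in))
merged = layers.Concatenate()([noise_in, label_embedding])

# Project and reshape, then upsample with transposed convolutions to a 28x28x1 image.
x = layers.Dense(7 * 7 * 128, activation="relu")(merged)
x = layers.Reshape((7, 7, 128))(x)
x = layers.Conv2DTranspose(64, kernel_size=4, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(1, kernel_size=4, strides=2, padding="same", activation="tanh")(x)

cond_generator = keras.Model([noise_in, label_in], x)
```

In this setup the discriminator would be conditioned in the same way, combining the embedded label with the image before classification, and sampling a digit then requires both a noise vector and the desired class as inputs.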
These are samples of the digits generated by the DCGAN after 50 epochs of training:
It generates images that resemble real MNIST digits, but they are not very sharp: the digits are blurry and the pixel intensities are noisy. It can also produce images that don't look like digits at all, such as the image below:
Compared to the DCGAN, the CCGAN generates much more realistic images, but it takes longer to train. While the DCGAN can create recognizable images after just 50 epochs of training, with the current parameters the CCGAN takes about 3000 epochs to generate images that look like real MNIST digits. Here is a sample of the digits generated by the CCGAN after 3000 epochs of training:
After training for 5000 epochs:
The loss curves for the CCGAN are shown below:
This shows that the generator and discriminator losses stabilize after about 1000 epochs of training. However, the images keep improving beyond 1000 epochs, so the network can be trained for longer to generate even better images.