
Artificial Face Generator (DCGAN)

This project is a Deep Convolutional Generative Adversarial Network (DCGAN) implemented in TensorFlow and Keras. It is trained to generate unique, artificial human faces. The project includes the full training pipeline in a Jupyter Notebook and an interactive web interface built with Streamlit for easy inference and visualization.

(Screenshot: the Streamlit web interface)


Project Context

This work was developed as part of a Winter Internship Project at the Indian Institute of Technology (IIT), Guwahati under the supervision of Dr. Anirban Dasgupta.

  • Institution: IIT Guwahati
  • Supervisor: Dr. Anirban Dasgupta
  • Timeline: December 2023 - January 2024

Generated Samples

Below are some sample images produced by the trained generator after 60 epochs. Further training on a larger dataset like CelebA would yield higher-fidelity results.

(Sample outputs: three faces generated by the trained model)

Technical Architecture

The project is a classic DCGAN, which consists of two neural networks, a Generator and a Discriminator, competing against each other in a zero-sum game.

Generator

The Generator's role is to create realistic images from a random noise vector. It is essentially a transposed-convolutional network that progressively upsamples a low-dimensional latent vector into a full-resolution image; a minimal Keras sketch follows the architecture list below.

  • Input: A 100-dimensional latent vector (random noise).
  • Architecture:
    1. A Dense layer projects the latent vector and reshapes it into a small feature map (16x16x1024).
    2. A series of five Conv2DTranspose blocks, each using BatchNormalization and a LeakyReLU activation, progressively upsamples the feature maps (16x16 -> 32x32 -> 64x64 -> 128x128 -> 256x256).
    3. The final layer uses a tanh activation function to scale the output pixel values to the [-1, 1] range.
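
A minimal sketch of such a generator is shown below. It follows the layer sequence described above, but the function name, kernel sizes, and filter counts are illustrative assumptions rather than the exact values used in train.ipynb.

import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM = 100  # size of the random noise vector

def build_generator():
    model = tf.keras.Sequential(name="generator")
    model.add(tf.keras.Input(shape=(LATENT_DIM,)))
    # Project the latent vector and reshape it into a 16x16x1024 feature map.
    model.add(layers.Dense(16 * 16 * 1024, use_bias=False))
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())
    model.add(layers.Reshape((16, 16, 1024)))
    # Stride-2 transposed convolutions double the spatial size at each block:
    # 16x16 -> 32x32 -> 64x64 -> 128x128 -> 256x256.
    for filters in (512, 256, 128, 64):
        model.add(layers.Conv2DTranspose(filters, kernel_size=5, strides=2,
                                         padding="same", use_bias=False))
        model.add(layers.BatchNormalization())
        model.add(layers.LeakyReLU())
    # Final block maps to 3 channels; tanh scales pixel values to [-1, 1].
    model.add(layers.Conv2DTranspose(3, kernel_size=5, strides=1,
                                     padding="same", activation="tanh"))
    return model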

Discriminator

The Discriminator acts as a binary classifier, determining whether a given image is a real image from the training dataset or a fake one created by the Generator; a matching Keras sketch follows the list below.

  • Input: A 256x256x3 image.
  • Architecture:
    1. A standard convolutional neural network that downsamples the input image.
    2. It consists of multiple Conv2D layers with a stride of 2, BatchNormalization (to stabilize training), and LeakyReLU activations.
    3. The network ends with a Flatten layer and a single Dense neuron with sigmoid activation, which outputs the probability that the image is real (close to 0 for fake, close to 1 for real).
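
A matching sketch for the discriminator, reusing the imports from the generator sketch above; the filter counts are again illustrative assumptions.

def build_discriminator():
    model = tf.keras.Sequential(name="discriminator")
    model.add(tf.keras.Input(shape=(256, 256, 3)))
    # Stride-2 convolutions halve the spatial size at each block.
    for filters in (64, 128, 256, 512):
        model.add(layers.Conv2D(filters, kernel_size=5, strides=2, padding="same"))
        model.add(layers.BatchNormalization())
        model.add(layers.LeakyReLU())
    # A single sigmoid neuron outputs the probability that the input is real.
    model.add(layers.Flatten())
    model.add(layers.Dense(1, activation="sigmoid"))
    return model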

Training Details

  • Framework: TensorFlow / Keras
  • Optimizer: RMSprop was used for both models, with carefully tuned learning rates to balance the adversarial training.
  • Loss Function: BinaryCrossentropy
  • Multi-GPU Training: The training script is configured to use tf.distribute.MirroredStrategy for efficient data parallelism on multiple GPUs; a simplified training step is sketched after this list.
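
To show how these settings fit together, here is a simplified training step. It assumes the build_generator and build_discriminator sketches above, the learning rates are placeholder values, and for brevity the step is written for a single replica; a full multi-GPU run would invoke it through strategy.run inside the MirroredStrategy scope.

strategy = tf.distribute.MirroredStrategy()   # data parallelism across available GPUs
with strategy.scope():
    generator = build_generator()
    discriminator = build_discriminator()
    gen_opt = tf.keras.optimizers.RMSprop(learning_rate=1e-4)   # illustrative rates
    disc_opt = tf.keras.optimizers.RMSprop(learning_rate=1e-4)
    bce = tf.keras.losses.BinaryCrossentropy()

@tf.function
def train_step(real_images):
    noise = tf.random.normal([tf.shape(real_images)[0], LATENT_DIM])
    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        fake_images = generator(noise, training=True)
        real_out = discriminator(real_images, training=True)
        fake_out = discriminator(fake_images, training=True)
        # Discriminator pushes real -> 1 and fake -> 0; the generator tries to
        # make the discriminator score its fakes as 1.
        disc_loss = (bce(tf.ones_like(real_out), real_out)
                     + bce(tf.zeros_like(fake_out), fake_out))
        gen_loss = bce(tf.ones_like(fake_out), fake_out)
    gen_grads = gen_tape.gradient(gen_loss, generator.trainable_variables)
    disc_grads = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
    gen_opt.apply_gradients(zip(gen_grads, generator.trainable_variables))
    disc_opt.apply_gradients(zip(disc_grads, discriminator.trainable_variables))
    return gen_loss, disc_loss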

Dataset

The model included in this repository was trained on a dataset provided during the internship; that dataset is private and therefore not included here.

However, the entire pipeline is configured to work with any standard facial dataset. It is highly recommended to use the CelebA Dataset for training a high-quality model. With over 200,000 celebrity images, it is an excellent public resource for this task. The model will likely produce even better results with more training epochs on this dataset.
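
As a rough sketch, loading a folder of face images into the expected 256x256, [-1, 1] format could look like the following; the directory name is an assumption and should point at wherever you unpack the dataset.

# Hypothetical path; point this at the folder containing the downloaded images.
DATA_DIR = "img_align_celeba"

dataset = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR,
    labels=None,              # the GAN only needs images, no labels
    image_size=(256, 256),
    batch_size=64,
)
# Scale pixels from [0, 255] to [-1, 1] to match the generator's tanh output.
dataset = dataset.map(lambda x: x / 127.5 - 1.0).prefetch(tf.data.AUTOTUNE)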


Setup and Installation

Follow these steps to set up the project environment on your local machine.

1. Clone the Repository

git clone https://github.com/Suyashkb/Artificial-Face-Generator-DCGAN-.git
cd Artificial-Face-Generator-DCGAN-

2. Create a Virtual Environment (Recommended)

python -m venv venv
source venv/bin/activate  # On Windows, use `venv\Scripts\activate`

3. Install Dependencies

Install all the required packages using the provided requirements.txt file.

pip install -r requirements.txt

How to Use This Project

There are two main ways to use this repository: deploying the pre-trained model with the web interface or training a new model from scratch.

1. Deploy the Interactive Interface

The pre-trained generator weights are included, so you can use the Streamlit interface to generate new faces immediately. A simplified sketch of such an app follows the steps below.

Instructions:

  1. Make sure you have installed all the requirements.
  2. Place the generator_final.weights.h5 file in the main project directory alongside interface.py.
  3. Run the following command in your terminal:
    streamlit run interface.py
  4. Your web browser will open with the user interface. Use the sidebar controls to generate new faces.
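
For reference, a stripped-down Streamlit app for this task might look like the sketch below. It assumes the build_generator function from the architecture section above; the repository's actual interface.py may differ in layout, controls, and helper names.

import streamlit as st
import tensorflow as tf

LATENT_DIM = 100

@st.cache_resource
def load_generator():
    gen = build_generator()                          # generator definition sketched earlier
    gen.load_weights("generator_final.weights.h5")   # pre-trained weights from this repo
    return gen

generator = load_generator()
n_faces = st.sidebar.slider("Number of faces", min_value=1, max_value=8, value=4)
if st.sidebar.button("Generate"):
    noise = tf.random.normal([n_faces, LATENT_DIM])
    images = generator(noise, training=False).numpy()
    # Map tanh output from [-1, 1] back to [0, 1] for display.
    images = (images + 1.0) / 2.0
    st.image(list(images), width=200)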

2. Train a New Model from Scratch

The train.ipynb Jupyter Notebook contains the complete end-to-end pipeline for data loading, preprocessing, model definition, and training.

Instructions:

  1. Download the Dataset: Download the recommended CelebA Dataset and place the image folder in your project directory.
  2. Open the Notebook: Run jupyter notebook from your terminal and open the train.ipynb file.
  3. Configure Paths: In the notebook, update the path to the dataset directory to where you saved the images.
  4. Run the Cells: Execute the cells in order to preprocess the data and start the training loop. The script is configured for multi-GPU training if available.
  5. Save Weights: The training loop periodically saves the generator and discriminator weights to a training_checkpoints folder; an illustrative version of this logic is shown below.
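
An illustrative version of that checkpointing logic, assuming the models, optimizers, dataset, and train_step from the earlier sketches; the epoch count and save interval in the notebook may differ.

EPOCHS = 60                     # matches the run that produced the samples above
checkpoint = tf.train.Checkpoint(generator=generator, discriminator=discriminator,
                                 gen_opt=gen_opt, disc_opt=disc_opt)
manager = tf.train.CheckpointManager(checkpoint, "./training_checkpoints", max_to_keep=3)

for epoch in range(EPOCHS):
    for batch in dataset:
        train_step(batch)
    if (epoch + 1) % 10 == 0:   # illustrative save interval
        manager.save()

# Standalone weight files used by the Streamlit interface.
generator.save_weights("generator_final.weights.h5")
discriminator.save_weights("discriminator_final.weights.h5")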

File Structure

.
├── images/                   # Sample images and screenshots
├── generator_final.weights.h5 # Pre-trained weights for the generator
├── discriminator_final.weights.h5 # Pre-trained weights for the discriminator
├── interface.py              # The Streamlit application for inference
├── requirements.txt          # Required Python packages
├── train.ipynb               # Jupyter Notebook for the full training pipeline
└── README.md                 # This file

Acknowledgments

I would like to express my sincere gratitude to my mentor, Dr. Anirban Dasgupta, for his invaluable guidance and support throughout this internship project at IIT Guwahati. His expertise was instrumental in the development of this work.
