Generate adversarial images using DCGANs and evaluate their effectiveness by assessing fooling rate and perceptual similarity.
A Study on the Effectiveness and Quality of DCGAN-based Adversarial Attacks



Publication Information

This work was presented and published at the 20th International Conference on Availability, Reliability and Security (ARES 2025). The paper is available at SpringerLink. If you reference this work in any context, please use the following citation:

@InProceedings{Areia2025,
    author="Areia, Jos{\'e} and Santos, Leonel and Costa, Rog{\'e}rio Lu{\'i}s de C.",
    editor="Dalla Preda, Mila and Schrittwieser, Sebastian and Naessens, Vincent and De Sutter, Bjorn",
    title="Fooling Rate and Perceptual Similarity: A Study on the Effectiveness and Quality of DCGAN-based Adversarial Attacks",
    booktitle="Availability, Reliability and Security",
    year="2025",
    publisher="Springer Nature Switzerland",
    address="Cham",
    pages="420--430",
    isbn="978-3-032-00627-1",
    doi="10.1007/978-3-032-00627-1_21"
}

Description

Deep neural networks (DNNs), while widely used for classification and recognition tasks in computer vision, are vulnerable to adversarial attacks. These attacks craft imperceptible perturbations that can easily mislead DNN models across various real-world scenarios, potentially leading to severe consequences.

This project explores the use of deep convolutional generative adversarial networks (DCGANs) with an additional encoder to generate adversarial images that can deceive DNN models. We trained the DCGAN using images from four different adversarial attacks with varying perturbation levels and tested them on five DNN models. Our experiments demonstrate that the generated adversarial images achieved a high fooling rate (FR) of up to 91.21%.
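As a point of reference, the fooling rate (FR) is simply the percentage of adversarial images whose predicted class differs from the prediction on the corresponding clean image. A minimal, self-contained sketch of that computation (the prediction lists below are illustrative, not output from this repository):

```python
# Minimal fooling-rate sketch (illustrative, not the repository's code).
# FR = percentage of adversarial images whose predicted class differs
# from the class predicted for the corresponding clean image.

def fooling_rate(clean_preds, adv_preds):
    """Both arguments are equal-length sequences of predicted class labels."""
    if len(clean_preds) != len(adv_preds):
        raise ValueError("prediction lists must align one-to-one")
    fooled = sum(c != a for c, a in zip(clean_preds, adv_preds))
    return 100.0 * fooled / len(clean_preds)

# Example: 3 of 4 adversarial images flip the model's prediction.
print(fooling_rate([0, 1, 2, 3], [0, 2, 1, 5]))  # 75.0
```

In practice the two label lists would come from running the target DNN on the clean test set and on the DCGAN-generated adversarial counterparts.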

However, we also assessed image quality using the Fréchet Inception Distance (FID) and Learned Perceptual Image Patch Similarity (LPIPS) metrics. Our results indicate that while achieving a high FR is feasible, maintaining image quality is equally important - yet more challenging - for generating effective adversarial examples.
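FID models the real and generated feature sets as Gaussians and measures the Fréchet distance between them: FID = ||μ_r − μ_g||² + Tr(Σ_r + Σ_g − 2(Σ_r Σ_g)^{1/2}). A NumPy sketch of that closed-form distance follows; note that the Inception-v3 feature extraction step is omitted here, and the random features are stand-ins for real activations:

```python
import numpy as np
from scipy.linalg import sqrtm

# Fréchet distance between Gaussians fitted to two feature sets.
# In the full FID metric the features come from an Inception-v3 network;
# this sketch only implements the closed-form distance itself.

def frechet_distance(feats_a, feats_b):
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):  # numerical noise can add tiny imaginary parts
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 8))
print(frechet_distance(feats, feats))  # identical sets give a distance near 0
```

LPIPS, by contrast, is a learned perceptual metric computed pairwise between each clean image and its adversarial version; it requires a pretrained network (e.g. the `lpips` Python package), so no stand-alone formula is sketched here.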

Repository Structure

adversarial-dcgan/
│
├── 🎨 Assets/                  # Logos and other visual assets
├── ⚔️ Attacks/                 # Adversarial attack implementations
├── 🧠 DCGAN/                   # DCGAN and encoder source code
├── 📓 Notebooks/               # Jupyter notebooks with pre-trained models
├── 🧪 Testing/                 # Test scripts and sample evaluations
├── 🙈 .gitignore               # Git ignore file
├── 🛠️ DCGAN-Training.sh        # Shell script for training DCGAN
├── 🛠️ Encoder-Training.sh      # Shell script for training the encoder
├── 📜 README.md                # Project documentation
└── 🚀 Testing.sh               # Test script for validating the implementation

Usage

Reproducing this work is simple. Just follow these steps:

Prepare the Attack

  • Ensure the attack is available: either the code that generates the perturbation or the perturbation itself in any format.

Train the DCGAN

  • Run DCGAN-Training.sh to generate adversarial images and train the DCGAN on them.
  • Want to tweak settings? You can modify the script to change the model, attack type, number of epochs, or delta values.

Train the Encoder

  • Run Encoder-Training.sh to train the encoder using the best checkpoint from the DCGAN training.
  • Make sure to specify the correct checkpoint within the script.

Test & Evaluate

  • Run Testing.sh to test and evaluate the generated images.
  • Results will be saved in a JSON file for further analysis.

And that's it. The adversarial DCGAN pipeline is ready to go.
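Once the evaluation JSON exists, it can be summarised with a few lines of Python. The keys below are hypothetical placeholders; check the file actually produced by Testing.sh for the real field names:

```python
import json

# Hypothetical results record; the real keys depend on Testing.sh's output.
sample = ('{"model": "resnet50", "attack": "fgsm", '
          '"fooling_rate": 91.21, "fid": 42.0, "lpips": 0.18}')

result = json.loads(sample)
print(f"{result['model']} / {result['attack']}: "
      f"FR={result['fooling_rate']}% FID={result['fid']} LPIPS={result['lpips']}")
```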

Acknowledgements

This work is funded by Fundação para a Ciência e a Tecnologia through project UIDB/04524/2020.
