[ICCV'19] Improving Adversarial Robustness via Guided Complement Entropy
An ASR (Automatic Speech Recognition) adversarial attack repository.
Evaluating CNN robustness against various adversarial attacks, including FGSM and PGD.
This study explores the vulnerabilities of the Pathology Language-Image Pretraining (PLIP) model, a vision-language foundation model for medical AI, under targeted attacks such as the PGD adversarial attack.
WideResNet implementation on the MNIST dataset. FGSM and PGD adversarial attacks on standard training, PGD adversarial training, and Feature Scattering adversarial training.
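As a rough illustration of the PGD adversarial-training setup mentioned in the entry above, here is a minimal PyTorch sketch. The names model, loader, optimizer, attack_fn, and device are assumptions made for illustration, not the repository's actual interface; attack_fn would be a PGD attack such as the one sketched further down this page.

# Minimal PGD adversarial-training loop (sketch, assumed names).
import torch.nn.functional as F

def train_epoch_adversarial(model, loader, optimizer, attack_fn, device="cuda"):
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        # Craft adversarial examples on the current model, then train on them.
        model.eval()                 # freeze batch-norm statistics during the attack
        x_adv = attack_fn(model, x, y)
        model.train()
        loss = F.cross_entropy(model(x_adv), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()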
Adversarial Sample Generation
2022 Spring Semester, Personal Project Research
My MA thesis (code, paper & presentation) on adversarial out-of-distribution detection.
A university project for the AI4Cybersecurity class.
Learning adversarial robustness in machine learning, in both theory and practice.
Implementation and evaluation for Deep Learning Project 3 (Spring 2025, NYU Tandon). We attack a pretrained ResNet-34 model using ℓ∞-bounded adversarial perturbations, including FGSM, PGD, Momentum PGD, and Patch PGD, and assess transferability to DenseNet-121.
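As context for the ℓ∞-bounded attacks listed in the entry above, here is a minimal PGD sketch in PyTorch. It is a generic illustration under assumed names (model, a batch x in [0, 1], labels y, epsilon, alpha, steps), not the project's actual code.

# l_inf-bounded PGD: iterated signed-gradient steps projected back into the epsilon-ball.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
    x_adv = x + torch.empty_like(x).uniform_(-epsilon, epsilon)  # random start
    x_adv = x_adv.clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            # Project onto the epsilon-ball around x and the valid pixel range.
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
            x_adv = x_adv.clamp(0, 1)
        x_adv = x_adv.detach()
    return x_adv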
This repository contains the codebase for Jailbreaking Deep Models, which investigates the vulnerability of deep convolutional neural networks to adversarial attacks. The project systematically implements and analyzes Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and localized patch-based attacks on the pretrained
Adversarially-robust Image Classifier
The Fast Gradient Sign Method (FGSM) combines a white-box approach with a misclassification goal: it perturbs an input along the sign of the loss gradient so that a neural network makes a wrong prediction. We use this technique to anonymize images.
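For reference, a minimal FGSM sketch in PyTorch that matches the description above: a single signed-gradient step of size epsilon. The names model, x, y, and epsilon are assumptions for illustration, not taken from the repository.

# One-step FGSM: move each pixel by +/- epsilon along the loss-gradient sign.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    # Step in the direction that increases the loss, then keep a valid image.
    x_adv = x_adv + epsilon * grad.sign()
    return x_adv.clamp(0, 1).detach()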