Scaleable input gradient regularization
Updated Jul 8, 2019 - Python
The official code of the IEEE S&P 2024 paper "Why Does Little Robustness Help? A Further Step Towards Understanding Adversarial Transferability". We study how to train surrogate models for boosting transfer attacks.
An implementation of a deepfake detection model that uses gradient regularization to improve robustness against adversarial attacks. This approach perturbs the mean and standard deviation of shallow layers in an EfficientNetB0 backbone to enhance generalization and defend against attacks like FGSM and PGD.
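The core idea behind input gradient regularization, which both repositories above build on, is to penalize the norm of the loss gradient with respect to the input, so that small input perturbations (the basis of FGSM- and PGD-style attacks) change the loss less. A minimal sketch of that penalty, using a toy binary logistic-regression model in NumPy rather than any code from these repositories (all names and the closed-form gradient here are illustrative assumptions, not the repositories' implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_input_grad(w, x, y, lam):
    """Cross-entropy plus an input-gradient penalty (illustrative sketch).

    For logistic regression p = sigmoid(w @ x), the gradient of the
    cross-entropy w.r.t. the input x has the closed form (p - y) * w,
    so the penalty lam * ||d CE / d x||^2 can be computed directly.
    """
    p = sigmoid(w @ x)
    ce = -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
    input_grad = (p - y) * w              # gradient of CE w.r.t. the input x
    penalty = lam * input_grad @ input_grad
    return ce + penalty, input_grad

# Toy example: regularized loss and the input gradient that an
# FGSM-style attack (x + eps * np.sign(g)) would exploit.
w = np.array([2.0, -1.0])
x = np.array([0.5, 0.3])
total, g = loss_and_input_grad(w, x, y=1.0, lam=0.1)
```

Training against `total` instead of the plain cross-entropy shrinks `g`, which directly weakens sign-of-gradient perturbations; in deep networks the same penalty is computed via double backpropagation rather than a closed form.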