
pgd-adversarial-attacks

Here are 14 public repositories matching this topic...

In the dynamic landscape of medical artificial intelligence, this study explores the vulnerabilities of the Pathology Language-Image Pretraining (PLIP) model, a vision-language foundation model, under targeted attacks such as PGD (a minimal targeted-PGD sketch follows this entry).

  • Updated May 18, 2024
  • Jupyter Notebook
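
The entry above mentions targeted PGD attacks. As a rough point of reference only, the sketch below shows a targeted ℓ∞-bounded PGD loop in PyTorch; it is a minimal illustration, not the study's code, and `model`, `eps`, `alpha`, and `steps` are assumed placeholders (a generic image classifier rather than PLIP, with input normalization omitted).

```python
import torch
import torch.nn.functional as F

def targeted_pgd_linf(model, x, y_target, eps=4/255, alpha=1/255, steps=10):
    """Targeted L-infinity PGD: signed-gradient *descent* on the loss toward
    a chosen target label, projected into the eps-ball after every step."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y_target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()                    # step toward the target class
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project back into the eps-ball
            x_adv = x_adv.clamp(0, 1).detach()                     # keep pixels in [0, 1]
    return x_adv
```

Flipping the sign of the step (gradient ascent on the true-label loss) gives the untargeted variant.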

Implementation and evaluation for Deep Learning Project 3 (Spring 2025, NYU Tandon). We attack a pretrained ResNet-34 model using ℓ∞-bounded adversarial perturbations, including FGSM, PGD, Momentum PGD, and Patch PGD, and assess transferability to DenseNet-121 (a Momentum PGD sketch follows this entry).

  • Updated May 22, 2025
  • Jupyter Notebook
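
Since the entry above lists Momentum PGD among the ℓ∞-bounded attacks, here is a minimal sketch of that variant in PyTorch, assuming a standard MI-FGSM-style momentum update; `eps`, `alpha`, `steps`, and `mu` are illustrative values rather than the project's settings, and `model` is any image classifier (e.g. a pretrained ResNet-34) taking inputs in [0, 1].

```python
import torch
import torch.nn.functional as F

def momentum_pgd_linf(model, x, y, eps=4/255, alpha=1/255, steps=10, mu=1.0):
    """Untargeted Momentum PGD: accumulate an L1-normalized gradient into a
    momentum buffer and take signed ascent steps along it, with projection."""
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # normalize the gradient per sample, then update the momentum buffer
            norm = grad.abs().flatten(1).sum(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
            g = mu * g + grad / norm
            x_adv = x_adv + alpha * g.sign()                       # ascent along the momentum sign
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project back into the eps-ball
            x_adv = x_adv.clamp(0, 1).detach()                     # keep pixels in [0, 1]
    return x_adv
```

Momentum of this kind is commonly used to improve transferability, which matches the entry's setup of crafting examples on ResNet-34 and evaluating them on DenseNet-121.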

This repository contains the codebase for Jailbreaking Deep Models, which investigates the vulnerability of deep convolutional neural networks to adversarial attacks. The project systematically implements and analyzes the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and localized patch-based attacks on the pretrained … (a single-step FGSM sketch follows this entry).

  • Updated May 17, 2025
  • Jupyter Notebook
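
For contrast with the iterative attacks above, FGSM is a single signed-gradient step. A minimal sketch in PyTorch, with `eps` as an assumed value and `model` any classifier taking inputs in [0, 1]:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    """Untargeted FGSM: one signed-gradient ascent step of size eps,
    clipped back to the valid pixel range."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()
```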
