[CVPR23] "Towards Compositional Adversarial Robustness: Generalizing Adversarial Training to Composite Semantic Perturbations" by Lei Hsiung, Yun-Yun Tsai, Pin-Yu Chen, and Tsung-Yi Ho.
Pipeline for testing drug response prediction models in a statistically and biologically sound way.
Sinkhorn Adversarial Training (SAT): Optimal Transport as a Defense Against Adversarial Attacks
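For orientation, here is a minimal NumPy sketch of the Sinkhorn-Knopp iteration for entropy-regularized optimal transport, the generic building block behind SAT; it is an illustrative implementation of the standard algorithm, not the repository's training code, and the function name, regularization value, and toy marginals are placeholder choices for this example.

```python
import numpy as np

def sinkhorn(a, b, cost, reg=0.1, n_iters=100):
    """Entropy-regularized OT plan via Sinkhorn-Knopp scaling iterations.

    a, b: source/target marginals (1-D probability vectors)
    cost: (len(a), len(b)) ground-cost matrix
    """
    K = np.exp(-cost / reg)              # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                # rescale columns to match b
        u = a / (K @ v)                  # rescale rows to match a
    return u[:, None] * K * v[None, :]   # transport plan P

# Toy usage: uniform marginals over 4 source and 5 target points.
a = np.ones(4) / 4
b = np.ones(5) / 5
cost = np.abs(np.arange(4)[:, None] - np.linspace(0, 3, 5)[None, :])
P = sinkhorn(a, b, cost)
print(P.sum())        # ~1.0
print(P.sum(axis=1))  # ~a: rows respect the source marginal
```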
This repository contains the code for the paper "Introspective Learning: A Two-Stage Approach for Inference in Neural Networks".
Code for "FACESEC: A Fine-grained Robustness Evaluation Framework for Face Recognition Systems" @ CVPR 2021
This GitHub repository contains the official code for the papers "Robustness Assessment for Adversarial Machine Learning: Problems, Solutions and a Survey of Current Neural Networks and Defenses" and "One Pixel Attack for Fooling Deep Neural Networks".
Test the Robustness of DAISIE to Geodynamics and Traits
Robustness evaluation of LLMs on the sentiment analysis task.
API client for GuardAI, an adversarial security assessment platform for AI.
This repository evaluates the impact of Microsoft’s Responsible AI principles on the security of tabular ML models, using adversarial attacks and tailored defenses with ART.
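As a rough illustration of this kind of workflow, the sketch below uses ART to craft FGSM adversarial examples against a scikit-learn classifier on tabular data and applies a naive adversarial-training-style defense by augmenting the training set; the dataset, model, and epsilon are placeholder choices for the example, not the repository's actual setup.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Toy tabular task, scaled to [0, 1] so the perturbation budget is meaningful.
X, y = load_breast_cancer(return_X_y=True)
X = MinMaxScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))

# Evasion attack: FGSM with an illustrative epsilon.
attack = FastGradientMethod(estimator=classifier, eps=0.2)
X_test_adv = attack.generate(x=X_test)
print("clean accuracy:      ", model.score(X_test, y_test))
print("adversarial accuracy:", model.score(X_test_adv, y_test))

# Naive defense: retrain on clean plus adversarial rows.
X_train_adv = attack.generate(x=X_train)
robust = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_train, X_train_adv]), np.concatenate([y_train, y_train])
)
print("defended adversarial accuracy:", robust.score(X_test_adv, y_test))
```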
Automatic multi-metric evaluation of human-bot dialogues using LLMs (Claude, GPT-4o) across different datasets and settings. Built for the Artificial Intelligence course at the University of Salerno.