This GitHub repository curates AI-security papers from the BIG-4 academic security conferences: the IEEE Symposium on Security and Privacy (S&P), the Network and Distributed System Security Symposium (NDSS), the USENIX Security Symposium, and the ACM Conference on Computer and Communications Security (CCS).
This repository is supported by the Trustworthy Artificial Intelligence (T-AI) Lab at Huazhong University of Science and Technology (HUST).
Feel free to contact zhouziqi@hust.edu.cn for any issues.
- 2025/5/21: Zongren Ma added S&P 2025 papers.
- 2025/4/23: Zongren Ma added S&P 2023 & 2024 and CCS 2024 papers.
- 2025/4/22: Pinzheng Wu added NDSS 2023, 2024 & 2025 and USENIX Security 2023 & 2024 papers.
- 2023/8/6: Junyu Shi added CCS papers.
- 2023/7/25: Hangtao Zhang added NDSS & USENIX Security papers.
- 2023/7/24: Ziqi Zhou added S&P papers.
- 2023/7/23: We created the AI-Security-Resources repository.
- GRID: Protecting Training Graph from Link Stealing Attacks on GNN Models. [Topic: GNN] [pdf]
- Jiadong Lou, Xu Yuan, Rui Zhang, Xingliang Yuan, Neil Gong, Nian-Feng Tzeng. IEEE Symposium on Security and Privacy, 2025.
- Preference Poisoning Attacks on Reward Model Learning. [Topic: AEs] [pdf]
- Junlin Wu, Jiongxiao Wang, Chaowei Xiao, Chenguang Wang, Ning Zhang, Yevgeniy Vorobeychik. IEEE Symposium on Security and Privacy, 2025.
- Prevalence Overshadows Concerns? Understanding Chinese Users' Privacy Awareness and Expectations Towards LLM-based Healthcare Consultation. [Topic: LLM] [pdf]
- Z. Liu, L. Hu, T. Zhou, Y. Tang, Z. Cai. IEEE Symposium on Security and Privacy, 2025.
- Adversarial Robust ViT-based Automatic Modulation Recognition in Practical Deep Learning-based Wireless Systems. [Topic: ViT] [pdf]
- G. Li, C. Lin, X. Zhang, X. Ma, L. Guo. IEEE Symposium on Security and Privacy, 2025.
- HarmonyCloak: Making Music Unlearnable for Generative AI. [Topic: AI] [pdf]
- S. I. A. Meerza, L. Sun, J. Liu. IEEE Symposium on Security and Privacy, 2025.
- Exploring Parent-Child Perceptions on Safety in Generative AI: Concerns, Mitigation Strategies, and Design Implications. [Topic: AI] [pdf]
- Yaman Yu, Tanusree Sharma, Melinda Hu, Justin Wang, Yang Wang. IEEE Symposium on Security and Privacy, 2025.
- Supporting Human Raters with the Detection of Harmful Content using Large Language Models. [Topic: LLM] [pdf]
- Kurt Thomas, Patrick Gage Kelley, David Tao, Sarah Meiklejohn, Owen Vallis, Shunwen Tan, Blaž Bratanič, Felipe Tiengo Ferreira, Vijay Kumar Eranti, Elie Bursztein. IEEE Symposium on Security and Privacy, 2025.
- Watermarking Language Models for Many Adaptive Users. [Topic: LLM] [pdf]
- Aloni Cohen, Alexander Hoover, Gabe Schoenbach. IEEE Symposium on Security and Privacy, 2025.
- Benchmarking Attacks on Learning with Errors. [Topic: AEs] [pdf]
- Emily Wenger, Eshika Saxena, Mohamed Malhou, Ellie Thieu, Kristin Lauter. IEEE Symposium on Security and Privacy, 2025.
- Fight Fire with Fire: Combating Adversarial Patch Attacks using Pattern-randomized Defensive Patches. [Topic: AEs] [pdf]
- Jianan Feng, Jiachun Li, Changqing Miao, Jianjun Huang, Wei You, Wenchang Shi, Bin Liang. IEEE Symposium on Security and Privacy, 2025.
- UnMarker: A Universal Attack on Defensive Image Watermarking. [Topic: AEs] [pdf]
- Andre Kassis, Urs Hengartner. IEEE Symposium on Security and Privacy, 2025.
- My Model is Malware to You: Transforming AI Models into Malware by Abusing TensorFlow APIs. [Topic: AI] [pdf]
- R. Zhu, G. Chen, W. Shen, X. Xie, R. Chang. IEEE Symposium on Security and Privacy, 2025.
- BAIT: Large Language Model Backdoor Scanning by Inverting Attack Target. [Topic: LLM] [pdf]
- Guangyu Shen, Siyuan Cheng, Zhuo Zhang, Guanhong Tao, Kaiyuan Zhang, Hanxi Guo, Lu Yan, Xiaolong Jin, Shengwei An, Shiqing Ma, Xiangyu Zhang. IEEE Symposium on Security and Privacy, 2025.
- GuardAIn: Protecting Emerging Generative AI Workloads on Heterogeneous NPU. [Topic: AI] [pdf]
- Aritra Dhar, Clément Thorens, Lara Magdalena Lazier, Lukas Cavigelli. IEEE Symposium on Security and Privacy, 2025.
- Comet: Accelerating Private Inference for Large Language Model by Predicting Activation Sparsity. [Topic: LLM] [pdf]
- Guang Yan, Yuhui Zhang, Zimu Guo, Lutan Zhao, Xiaojun Chen, Chen Wang, Wenhao Wang, Dan Meng, Rui Hou. IEEE Symposium on Security and Privacy, 2025.
- Prompt Inversion Attack against Collaborative Inference of Large Language Models. [Topic: LLM] [pdf]
- Wenjie Qu, Yuguang Zhou, Yongji Wu, Tingsong Xiao, Binhang Yuan, Yiming Li, Jiaheng Zhang. IEEE Symposium on Security and Privacy, 2025.
- The Inadequacy of Similarity-based Privacy Metrics: Privacy Attacks against "Truly Anonymous" Synthetic Datasets. [Topic: AEs] [pdf]
- Georgi Ganev, Emiliano De Cristofaro. IEEE Symposium on Security and Privacy, 2025.
- PEFTGuard: Detecting Backdoor Attacks Against Parameter-Efficient Fine-Tuning. [Topic: Backdoor] [pdf]
- Zhen Sun, Tianshuo Cong, Yule Liu, Chenhao Lin, Xinlei He, Rongmao Chen, Xingshuo Han, Xinyi Huang. IEEE Symposium on Security and Privacy, 2025.
- An Attack-Agnostic Defense Framework Against Manipulation Attacks under Local Differential Privacy. [Topic: AEs] [pdf]
- Puning Zhao, Zhikun Zhang, Jiawei Dong, Jiafei Wu, Shaowei Wang, Zhe Liu, Yunjun Gao. IEEE Symposium on Security and Privacy, 2025.
- CODEBREAKER: Dynamic Extraction Attacks on Code Language Models. [Topic: AEs] [pdf]
- Changzhou Han, Zehang Deng, Wanlun Ma, Xiaogang Zhu, Jason (Minhui) Xue, Tianqing Zhu, Sheng Wen, Yang Xiang. IEEE Symposium on Security and Privacy, 2025.
- Secure Transfer Learning: Training Clean Model Against Backdoor in Pre-Trained Encoder and Downstream Dataset. [Topic: Backdoor] [pdf]
- Yechao Zhang, Yuxuan Zhou, Tianyu Li, Minghui Li, Shengshan Hu, Wei Luo, Leo Yu Zhang. IEEE Symposium on Security and Privacy, 2025.
- Make a Feint to the East While Attacking in the West: Blinding LLM-Based Code Auditors with Flashboom Attacks. [Topic: LLM] [pdf]
- Xiao Li, Yue Li, Hao Wu, Yue Zhang, Kaidi Xu, Xiuzhen Cheng, Sheng Zhong, Fengyuan Xu. IEEE Symposium on Security and Privacy, 2025.
- Alleviating the Fear of Losing Alignment in LLM Fine-tuning. [Topic: LLM] [pdf]
- Kang Yang, Guanhong Tao, Xun Chen, Jun Xu. IEEE Symposium on Security and Privacy, 2025.
- Fun-tuning: Characterizing the Vulnerability of Proprietary LLMs to Optimization-based Prompt Injection Attacks via the Fine-Tuning Interface. [Topic: LLM] [pdf]
- Andrey Labunets, Nishit V. Pandya, Ashish Hooda, Xiaohan Fu, Earlence Fernandes. IEEE Symposium on Security and Privacy, 2025.
- Fuzz-Testing Meets LLM-Based Agents: An Automated and Efficient Framework for Jailbreaking Text-To-Image Generation Models. [Topic: LLM] [pdf]
- Yingkai Dong, Xiangtao Meng, Ning Yu, Zheng Li, Shanqing Guo. IEEE Symposium on Security and Privacy, 2025.
- Understanding Users' Security and Privacy Concerns and Attitudes Towards Conversational AI Platforms. [Topic: AI] [pdf]
- Mutahar Ali, Arjun Arunasalam, Habiba Farrukh. IEEE Symposium on Security and Privacy, 2025.
- Securely Fine-tuning Pre-trained Encoders Against Adversarial Examples. [Topic: AEs] [pdf]
- Ziqi Zhou, Minghui Li, Wei Liu, Shengshan Hu, Yechao Zhang, Wei Wan, Lulu Xue, Leo Yu Zhang, Dezhong Yao, Hai Jin. IEEE Symposium on Security and Privacy, 2024.
- Why Does Little Robustness Help? A Further Step Towards Understanding Adversarial Transferability. [Topic: AEs] [Code] [pdf]
- Yechao Zhang, Shengshan Hu, Leo Yu Zhang, Junyu Shi, Xiaogeng Liu, Minghui Li, Wei Wan, Hai Jin. IEEE Symposium on Security and Privacy, 2024.
- LABRADOR: Response Guided Directed Fuzzing for Black-box IoT Devices. [Topic: AEs] [pdf]
- Hangtian Liu, Shuitao Gan, Chao Zhang, Zicong Gao, Hongqi Zhang, Xiangzhi Wang. IEEE Symposium on Security and Privacy, 2024.
- SneakyPrompt: Jailbreaking Text-to-image Generative Models. [Topic: AEs] [Code] [pdf]
- Yuchen Yang, Bo Hui, Haolin Yuan, Neil Gong, Yinzhi Cao. IEEE Symposium on Security and Privacy, 2024.
- SmartInv: Multimodal Learning for Smart Contract Invariant Inference. [Topic: AEs] [Code] [pdf]
- Sally Junsong Wang, Kexin Pei, Junfeng Yang. IEEE Symposium on Security and Privacy, 2024.
- AVA: Inconspicuous Attribute Variation-based Adversarial Attack bypassing DeepFake Detection. [Topic: AEs] [pdf]
- Xiangtao Meng, Li Wang, Shanqing Guo, Lei Ju, Qingchuan Zhao. IEEE Symposium on Security and Privacy, 2024.
- Robust Backdoor Detection for Deep Learning via Topological Evolution Dynamics. [Topic: Backdoor] [pdf]
- Xiaoxing Mo, Yechao Zhang, Leo Yu Zhang, Wei Luo, Nan Sun, Shengshan Hu, Shang Gao, Yang Xiang. IEEE Symposium on Security and Privacy, 2024.
- MEA-Defender: A Robust Watermark against Model Extraction Attack. [Topic: Watermark] [pdf]
- Peizhuo Lv, Hualong Ma, Kai Chen, Jiachen Zhou, Shengzhi Zhang, Ruigang Liang, Shenchen Zhu, Pan Li, Yingjun Zhang. IEEE Symposium on Security and Privacy, 2024.
- BounceAttack: A Query-Efficient Decision-based Adversarial Attack by Bouncing into the Wild. [Topic: AEs] [pdf]
- Jie Wan, Jianhao Fu, Lijin Wang, Ziqi Yang. IEEE Symposium on Security and Privacy, 2024.
- SoK: Explainable Machine Learning in Adversarial Environments. [Topic: AEs] [pdf]
- Maximilian Noppel, Christian Wressnegger. IEEE Symposium on Security and Privacy, 2024.
- Poisoned ChatGPT Finds Work for Idle Hands: Exploring Developers' Coding Practices with Insecure Suggestions from Poisoned AI Models. [Topic: AEs] [pdf]
- Sanghak Oh, Kiho Lee, Seonhye Park, Doowon Kim, Hyoungshick Kim. IEEE Symposium on Security and Privacy, 2024.
- Transferable Multimodal Attack on Vision-Language Pre-training Models. [Topic: AEs] [pdf]
- Haodi Wang, Kai Dong, Zhilei Zhu, Haotong Qin, Aishan Liu, Xiaolin Fang. IEEE Symposium on Security and Privacy, 2024.
- Exploring the Orthogonality and Linearity of Backdoor Attacks. [Topic: Backdoor] [pdf]
- Siyuan Cheng, Guangyu Shen, Guanhong Tao, Kaiyuan Zhang, Zhuo Zhang, Shengwei An. IEEE Symposium on Security and Privacy, 2024.
- OdScan: Backdoor Scanning for Object Detection Models. [Topic: Backdoor] [pdf]
- Kaiyuan Zhang, Siyuan Cheng, Guangyu Shen, Guanhong Tao, Shengwei An, Anuran Makur. IEEE Symposium on Security and Privacy, 2024.
- Need for Speed: Taming Backdoor Attacks with Speed and Precision. [Topic: Backdoor] [pdf]
- Zhuo Ma, Yilong Yang, Yang Liu, Tong Yang, Xinjing Liu, Teng Li. IEEE Symposium on Security and Privacy, 2024.
- BAFFLE: Hiding Backdoors in Offline Reinforcement Learning Datasets. [Topic: Backdoor] [pdf]
- Chen Gong, Zhou Yang, Yunpeng Bai, Junda He, Jieke Shi, Kecen Li, Arunesh Sinha, Bowen Xu, Xinwen Hou, David Lo, Tianhao Wang. IEEE Symposium on Security and Privacy, 2024.
- DeepVenom: Persistent DNN Backdoors Exploiting Transient Weight Perturbations in Memories. [Topic: Backdoor] [Code] [pdf]
- Kunbei Cai, Md Hafizul Islam Chowdhuryy, Zhenkai Zhang, Fan Yao. IEEE Symposium on Security and Privacy, 2024.
- LLMs Cannot Reliably Identify and Reason About Security Vulnerabilities (Yet?): A Comprehensive Evaluation, Framework, and Benchmarks. [Topic: LLM] [pdf]
- Saad Ullah, Mingji Han, Saurabh Pujar, Hammond Pearce, Ayse Coskun, Gianluca Stringhini. IEEE Symposium on Security and Privacy, 2024.
- BELT: Old-School Backdoor Attacks can Evade the State-of-the-Art Defense with Backdoor Exclusivity Lifting. [Topic: Backdoor] [pdf]
- Huming Qiu, Junjie Sun, Mi Zhang, Xudong Pan, Min Yang. IEEE Symposium on Security and Privacy, 2024.
- You Only Prompt Once: On the Capabilities of Prompt Learning on Large Language Models to Tackle Toxic Content. [Topic: LLM] [pdf]
- Xinlei He, Savvas Zannettou, Yun Shen, Yang Zhang. IEEE Symposium on Security and Privacy, 2024.
- LOKI: Large-scale Data Reconstruction Attack against Federated Learning through Model Manipulation. [Topic: FL] [pdf]
- Joshua C. Zhao, Atul Sharma, Ahmed Roushdy Elkordy, Yahya H. Ezzeldin, Salman Avestimehr, Saurabh Bagchi. IEEE Symposium on Security and Privacy, 2024.
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks. [Topic: AEs] [pdf]
- Xinyu Zhang, Hanbin Hong, Yuan Hong, Peng Huang, Binghui Wang, Zhongjie Ba. IEEE Symposium on Security and Privacy, 2024.
- MM-BD: Post-Training Detection of Backdoor Attacks with Arbitrary Backdoor Pattern Types Using a Maximum Margin Statistic. [Topic: Backdoor] [pdf]
- Hang Wang, Zhen Xiang, David J. Miller, George Kesidis. IEEE Symposium on Security and Privacy, 2024.
- BadVFL: Backdoor Attacks in Vertical Federated Learning. [Topic: Backdoor] [pdf]
- Mohammad Naseri, Yufei Han, Emiliano De Cristofaro. IEEE Symposium on Security and Privacy, 2024.
- Multi-Instance Adversarial Attack on GNN-Based Malicious Domain Detection. [Topic: GNN] [pdf]
- Mahmoud Nazzal, Issa Khalil, Abdallah Khreishah, NhatHai Phan, Yao Ma. IEEE Symposium on Security and Privacy, 2024.
- Distribution Preserving Backdoor Attack in Self-supervised Learning. [Topic: Backdoor] [pdf]
- Guanhong Tao, Zhenting Wang, Shiwei Feng, Guangyu Shen, Shiqing Ma, Xiangyu Zhang. IEEE Symposium on Security and Privacy, 2024.
- SoK: Let the Privacy Games Begin! A Unified Treatment of Data Inference Privacy in Machine Learning. [Topic: ML] [pdf]
- Ahmed Salem, Giovanni Cherubin, David Evans, Boris Köpf, Andrew Paverd, Anshuman Suri, Shruti Tople, Santiago Zanella-Béguelin. IEEE Symposium on Security and Privacy, 2023.
- Analyzing Leakage of Personally Identifiable Information in Language Models. [Topic: LM] [pdf]
- Nils Lukas, Ahmed Salem, Robert Sim, Shruti Tople, Lukas Wutschitz, Santiago Zanella-Béguelin. IEEE Symposium on Security and Privacy, 2023.
- D-DAE: Defense-Penetrating Model Extraction Attacks. [Topic: AEs] [Code] [pdf]
- Kunbei Cai, Md Hafizul Islam Chowdhuryy, Zhenkai Zhang, Fan Yao. IEEE Symposium on Security and Privacy, 2023.
- Disguising Attacks with Explanation-Aware Backdoors. [Topic: Backdoor] [pdf]
- Maximilian Noppel, Lukas Peter, Christian Wressnegger. IEEE Symposium on Security and Privacy, 2023.
- AI-Guardian: Defeating Adversarial Attacks using Backdoors. [Topic: Backdoor] [pdf]
- Hong Zhu, Shengzhi Zhang, Kai Chen. IEEE Symposium on Security and Privacy, 2023.
- BayBFed: Bayesian Backdoor Defense for Federated Learning. [Topic: Backdoor] [pdf]
- Kavita Kumari, Phillip Rieger, Hossein Fereidooni, Murtuza Jadliwala, Ahmad-Reza Sadeghi. IEEE Symposium on Security and Privacy, 2023.
- Redeem Myself: Purifying Backdoors in Deep Learning Models using Self Attention Distillation. [Topic: Backdoor] [pdf]
- Xueluan Gong, Yanjiao Chen, Wang Yang, Qian Wang, Yuzhe Gu, Huayang Huang. IEEE Symposium on Security and Privacy, 2023.
- ImU: Physical Impersonating Attack for Face Recognition System with Natural Style Changes. [Topic: AEs] [pdf]
- Shengwei An, Yuan Yao, Qiuling Xu, Shiqing Ma, Guanhong Tao, Siyuan Cheng. IEEE Symposium on Security and Privacy, 2023.
- FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information. [Topic: FL] [pdf]
- Xiaoyu Cao, Jinyuan Jia, Zaixi Zhang, Neil Zhenqiang Gong. IEEE Symposium on Security and Privacy, 2023.
- On The Empirical Effectiveness of Unrealistic Adversarial Hardening Against Realistic Adversarial Attacks. [Topic: AEs] [pdf]
- Salijona Dyrmishi, Salah Ghamizi, Thibault Simonetto, Yves Le Traon, Maxime Cordy. IEEE Symposium on Security and Privacy, 2023.
- “Adversarial Examples” for Proof-of-Learning. [Topic: AEs] [pdf]
- Rui Zhang, Jian Liu, Yuan Ding, Zhibo Wang, Qingbiao Wu, Kui Ren. IEEE Symposium on Security and Privacy, 2022.
- Transfer Attacks Revisited: A Large-Scale Empirical Study in Real Computer Vision Settings. [Topic: AEs] [pdf]
- Yuhao Mao, Chong Fu, Saizhuo Wang, Shouling Ji, Xuhong Zhang, Zhenguang Liu, Jun Zhou, Alex X. Liu, Raheem Beyah, Ting Wang. IEEE Symposium on Security and Privacy, 2022.
- Bad Characters: Imperceptible NLP Attacks. [Topic: AEs] [Code] [pdf]
- Nicholas Boucher, Ilia Shumailov, Ross Anderson, Nicolas Papernot. IEEE Symposium on Security and Privacy, 2022.
- Universal 3-Dimensional Perturbations for Black-Box Attacks on Video Recognition Systems. [Topic: AEs] [pdf]
- Shangyu Xie, Han Wang, Yu Kong, Yuan Hong. IEEE Symposium on Security and Privacy, 2022.
- BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning. [Topic: Backdoor] [Code] [pdf]
- Jinyuan Jia, Yupei Liu, Neil Zhenqiang Gong. IEEE Symposium on Security and Privacy, 2022.
- PICCOLO: Exposing Complex Backdoors in NLP Transformer Models. [Topic: Backdoor] [pdf]
- Yingqi Liu, Guangyu Shen, Guanhong Tao, Shengwei An, Shiqing Ma, Xiangyu Zhang. IEEE Symposium on Security and Privacy, 2022.
- Membership Inference Attacks From First Principles. [Topic: MIA] [pdf]
- Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, Florian Tramer. IEEE Symposium on Security and Privacy, 2022.
- Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Production Federated Learning. [Topic: PA & FL] [pdf]
- Virat Shejwalkar, Amir Houmansadr, Peter Kairouz, Daniel Ramage. IEEE Symposium on Security and Privacy, 2022.
- Model Stealing Attacks Against Inductive Graph Neural Networks. [Topic: MSA & GNN] [pdf]
- Yun Shen, Xinlei He, Yufei Han, Yang Zhang. IEEE Symposium on Security and Privacy, 2022.
- SoK: How Robust is Image Classification Deep Neural Network Watermarking? [Topic: Watermark] [pdf]
- Nils Lukas, Edward Jiang, Xinda Li, Florian Kerschbaum. IEEE Symposium on Security and Privacy, 2022.
- Hear "No Evil", See "Kenansville": Efficient and Transferable Black-Box Attacks on Speech Recognition and Voice Identification Systems. [Topic: AEs] [pdf]
- Hadi Abdullah, Muhammad Sajidur Rahman, Washington Garcia, Logan Blue, Kevin Warren, Anurag Swarnim Yadav, Tom Shrimpton, Patrick Traynor. IEEE Symposium on Security and Privacy, 2021.
- SoK: The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems. [Topic: AEs] [pdf]
- Hadi Abdullah, Kevin Warren, Vincent Bindschaedler, Nicolas Papernot, Patrick Traynor. IEEE Symposium on Security and Privacy, 2021.
- Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks. [Topic: AEs] [pdf]
- Yulong Cao, Ningfei Wang, Chaowei Xiao, Dawei Yang, Jin Fang, Ruigang Yang, Qi Alfred Chen, Mingyan Liu, Bo Li. IEEE Symposium on Security and Privacy, 2021.
- Who is Real Bob? Adversarial Attacks on Speaker Recognition Systems. [Topic: AEs] [pdf]
- Guangke Chen, Sen Chen, Lingling Fan, Xiaoning Du, Zhe Zhao, Fu Song, Yang Liu. IEEE Symposium on Security and Privacy, 2021.
- Adversarial Watermarking Transformer: Towards Tracing Text Provenance with Data Hiding. [Topic: Watermark] [pdf]
- Sahar Abdelnabi, Mario Fritz. IEEE Symposium on Security and Privacy, 2021.
- A Method to Facilitate Membership Inference Attacks in Deep Learning Models. [Topic: MIA] [pdf]
- Zitao Chen, Karthik Pattabiraman. Network and Distributed System Security, 2025.
- Black-box Membership Inference Attacks against Fine-tuned Diffusion Models. [Topic: Diffusion Model] [pdf]
- Yan Pang, Tianhao Wang. Network and Distributed System Security, 2025.
- BumbleBee: Secure Two-party Inference Framework for Large Transformers. [Topic: Transformer] [pdf]
- Wen-jie Lu, Zhicong Huang, Zhen Gu, Jingyu Li, Jian Liu, Cheng Hong, Kui Ren, Tao Wei, Wenguang Chen. Network and Distributed System Security, 2025.
- CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling. [Topic: FL] [pdf]
- Kaiyuan Zhang, Siyuan Cheng, Guangyu Shen, Bruno Ribeiro, Shengwei An, Pin-Yu Chen, Xiangyu Zhang, Ninghui Li. Network and Distributed System Security, 2025.
- CLIBE: Detecting Dynamic Backdoors in Transformer-based NLP Models. [Topic: Backdoor] [pdf]
- Rui Zeng, Xi Chen, Yuwen Pu, Xuhong Zhang, Tianyu Du, Shouling Ji. Network and Distributed System Security, 2025.
- Compiled Models, Built-In Exploits: Uncovering Pervasive Bit-Flip Attack Surfaces in DNN Executables. [Topic: DNN] [pdf]
- Yanzuo Chen, Zhibo Liu, Yuanyuan Yuan, Sihang Hu, Tianxiang Li, Shuai Wang. Network and Distributed System Security, 2025.
- Diffence: Fencing Membership Privacy With Diffusion Models. [Topic: Diffusion Model] [pdf]
- Yuefeng Peng, Ali Naseh, Amir Houmansadr. Network and Distributed System Security, 2025.
- Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Watermarking Feature Attribution. [Topic: Watermark] [pdf]
- Shuo Shao, Yiming Li, Hongwei Yao, Yiling He, Zhan Qin, Kui Ren. Network and Distributed System Security, 2025.
- Generating API Parameter Security Rules with LLM for API Misuse Detection. [Topic: LLM] [pdf]
- Jinghua Liu, Yi Yang, Kai Chen, Miaoqian Lin. Network and Distributed System Security, 2025.
- Magmaw: Modality-Agnostic Adversarial Attacks on Machine Learning-Based Wireless Communication Systems. [Topic: ML] [pdf]
- Jung-Woo Chang, Ke Sun, Nasimeh Heydaribeni, Seira Hidano, Xinyu Zhang, Farinaz Koushanfar. Network and Distributed System Security, 2025.
- Passive Inference Attacks on Split Learning via Adversarial Regularization. [Topic: SL] [pdf]
- Xiaochen Zhu, Xinjian Luo, Yuncheng Wu, Yangfan Jiang, Xiaokui Xiao, Beng Chin Ooi. Network and Distributed System Security, 2025.
- Reinforcement Unlearning. [Topic: Machine Unlearning] [pdf]
- Dayong Ye, Tianqing Zhu, Congcong Zhu, Derui Wang, Kun Gao, Zewei Shi, Sheng Shen, Wanlei Zhou, Minhui Xue. Network and Distributed System Security, 2025.
- The Midas Touch: Triggering the Capability of LLMs for RM-API Misuse Detection. [Topic: LLM] [pdf]
- Yi Yang, Jinghua Liu, Kai Chen, Miaoqian Lin. Network and Distributed System Security, 2025.
- The Philosopher's Stone: Trojaning Plugins of Large Language Models. [Topic: LLM] [pdf]
- Tian Dong, Minhui Xue, Guoxing Chen, Rayne Holland, Yan Meng, Shaofeng Li, Zhen Liu, Haojin Zhu. Network and Distributed System Security, 2025.
- TrajDeleter: Enabling Trajectory Forgetting in Offline Reinforcement Learning Agents. [Topic: RL] [pdf]
- Chen Gong, Kecen Li, Jin Yao, Tianhao Wang. Network and Distributed System Security, 2025.
- Understanding Data Importance in Machine Learning Attacks: Does Valuable Data Pose Greater Harm? [Topic: ML] [pdf]
- Rui Wen, Michael Backes, Yang Zhang. Network and Distributed System Security, 2025.
- A New PPML Paradigm for Quantized Models. [Topic: PPML] [pdf]
- Tianpei Lu, Bingsheng Zhang, Xiaoyuan Zhang, Kui Ren. Network and Distributed System Security, 2025.
- ASGARD: Protecting On-Device Deep Neural Networks with Virtualization-Based Trusted Execution Environments. [Topic: DNN] [pdf]
- Myungsuk Moon, Minhee Kim, Joonkyo Jung, Dokyung Song. Network and Distributed System Security, 2025.
- BARBIE: Robust Backdoor Detection Based on Latent Separability. [Topic: Backdoor] [pdf]
- Hanlei Zhang, Yijie Bai, Yanjiao Chen, Zhongming Ma, Wenyuan Xu. Network and Distributed System Security, 2025.
- Beyond Classification: Inferring Function Names in Stripped Binaries via Domain Adapted LLMs. [Topic: LLM] [pdf]
- Linxi Jiang, Xin Jin, Zhiqiang Lin. Network and Distributed System Security, 2025.
- BitShield: Defending Against Bit-Flip Attacks on DNN Executables. [Topic: DNN] [pdf]
- Yanzuo Chen, Yuanyuan Yuan, Zhibo Liu, Sihang Hu, Tianxiang Li, Shuai Wang. Network and Distributed System Security, 2025.
- Defending Against Membership Inference Attacks on Iteratively Pruned Deep Neural Networks. [Topic: MIA] [pdf]
- Jing Shang, Jian Wang, Kailun Wang, Jiqiang Liu, Nan Jiang, Md. Armanuzzaman, Ziming Zhao. Network and Distributed System Security, 2025.
- DLBox: New Model Training Framework for Protecting Training Data. [Topic: Model Training Framework] [pdf]
- Jaewon Hur, Juheon Yi, Cheolwoo Myung, Sangyun Kim, Youngki Lee, Byoungyoung Lee. Network and Distributed System Security, 2025.
- Do We Really Need to Design New Byzantine-robust Aggregation Rules? [Topic: FL] [pdf]
- Minghong Fang, Seyedsina Nabavirazavi, Zhuqing Liu, Wei Sun, Sundaraja Sitharama Iyengar, Haibo Yang. Network and Distributed System Security, 2025.
- DShield: Defending against Backdoor Attacks on Graph Neural Networks via Discrepancy Learning. [Topic: Backdoor] [pdf]
- Hao Yu, Chuan Ma, Xinhang Wan, Jun Wang, Tao Xiang, Meng Shen, Xinwang Liu. Network and Distributed System Security, 2025.
- From Large to Mammoth: A Comparative Evaluation of Large Language Models in Vulnerability Detection. [Topic: LLM] [pdf]
- Jie Lin, David Mohaisen. Network and Distributed System Security, 2025.
- I Know What You Asked: Prompt Leakage via KV-Cache Sharing in Multi-Tenant LLM Serving. [Topic: LLM] [pdf]
- Guanlong Wu, Zheng Zhang, Yao Zhang, Weili Wang, Jianyu Niu, Ye Wu, Yinqian Zhang. Network and Distributed System Security, 2025.
- I know what you MEME! Understanding and Detecting Harmful Memes with Multimodal Large Language Models. [Topic: MLLM] [pdf]
- Yong Zhuang, Keyan Guo, Juan Wang, Yiheng Jing, Xiaoyang Xu, Wenzhe Yi, Mengda Yang, Bo Zhao, Hongxin Hu. Network and Distributed System Security, 2025.
- IsolateGPT: An Execution Isolation Architecture for LLM-Based Agentic Systems. [Topic: LLM] [pdf]
- Yuhao Wu, Franziska Roesner, Tadayoshi Kohno, Ning Zhang, Umar Iqbal. Network and Distributed System Security, 2025.
- L-HAWK: A Controllable Physical Adversarial Patch Against a Long-Distance Target. [Topic: Adversarial Patch] [pdf]
- Taifeng Liu, Yang Liu, Zhuo Ma, Tong Yang, Xinjing Liu, Teng Li, Jianfeng Ma. Network and Distributed System Security, 2025.
- LADDER: Multi-Objective Backdoor Attack via Evolutionary Algorithm. [Topic: Backdoor] [pdf]
- Dazhuang Liu, Yanqi Qiao, Rui Wang, Kaitai Liang, Georgios Smaragdakis. Network and Distributed System Security, 2025.
- LLMPirate: LLMs for Black-box Hardware IP Piracy. [Topic: LLM] [pdf]
- Vasudev Gohil, Matthew DeLorenzo, Veera Vishwa Achuta Sai Venkat Nallam, Joey See, Jeyavijayan Rajendran. Network and Distributed System Security, 2025.
- PBP: Post-training Backdoor Purification for Malware Classifiers. [Topic: Backdoor] [pdf]
- Dung Thuy Nguyen, Ngoc N. Tran, Taylor T. Johnson, Kevin Leach. Network and Distributed System Security, 2025.
- Privacy-Preserving Data Deduplication for Enhancing Federated Learning of Language Models. [Topic: FL] [pdf]
- Aydin Abadi, Vishnu Asutosh Dasu, Sumanta Sarkar. Network and Distributed System Security, 2025.
- Probe-Me-Not: Protecting Pre-trained Encoders from Malicious Probing. [Topic: Transfer Learning] [pdf]
- Ruyi Ding, Tong Zhou, Lili Su, Aidong Adam Ding, Xiaolin Xu, Yunsi Fei. Network and Distributed System Security, 2025.
- PropertyGPT: LLM-driven Formal Verification of Smart Contracts through Retrieval-Augmented Property Generation. [Topic: LLM] [pdf]
- Ye Liu, Yue Xue, Daoyuan Wu, Yuqiang Sun, Yi Li, Miaolei Shi, Yang Liu. Network and Distributed System Security, 2025.
- RAIFLE: Reconstruction Attacks on Interaction-based Federated Learning with Adversarial Data Manipulation. [Topic: FL] [pdf]
- Dzung Pham, Shreyas Kulkarni, Amir Houmansadr. Network and Distributed System Security, 2025.
- SafeSplit: A Novel Defense Against Client-Side Backdoor Attacks in Split Learning. [Topic: Backdoor] [pdf]
- Phillip Rieger, Alessandro Pegoraro, Kavita Kumari, Tigist Abera, Jonathan Knauer, Ahmad-Reza Sadeghi. Network and Distributed System Security, 2025.
- Safety Misalignment Against Large Language Models. [Topic: LLM] [pdf]
- Yichen Gong, Delong Ran, Xinlei He, Tianshuo Cong, Anyu Wang, Xiaoyun Wang. Network and Distributed System Security, 2025.
- Scale-MIA: A Scalable Model Inversion Attack against Secure Federated Learning via Latent Space Reconstruction. [Topic: MIA & FL] [pdf]
- Shanghao Shi, Ning Wang, Yang Xiao, Chaoyu Zhang, Yi Shi, Y. Thomas Hou, Wenjing Lou. Network and Distributed System Security, 2025.
- SHAFT: Secure, Handy, Accurate and Fast Transformer Inference. [Topic: Transformer] [pdf]
- Andes Y. L. Kei, Sherman S. M. Chow. Network and Distributed System Security, 2025.
- Try to Poison My Deep Learning Data? Nowhere to Hide Your Trajectory Spectrum! [Topic: DaaS] [pdf]
- Yansong Gao, Huaibing Peng, Hua Ma, Zhi Zhang, Shuo Wang, Rayne Holland, Anmin Fu, Minhui Xue, Derek Abbott. Network and Distributed System Security, 2025.
- URVFL: Undetectable Data Reconstruction Attack on Vertical Federated Learning. [Topic: VFL] [pdf]
- Duanyi Yao, Songze Li, Xueluan Gong, Sizai Hou, Gaoning Pan. Network and Distributed System Security, 2025.
- VoiceRadar: Voice Deepfake Detection using Micro-Frequency and Compositional Analysis. [Topic: ML] [pdf]
- Kavita Kumari, Maryam Abbasihafshejani, Alessandro Pegoraro, Phillip Rieger, Kamyar Arshi, Murtuza Jadliwala, Ahmad-Reza Sadeghi. Network and Distributed System Security, 2025.
- Attributions for ML-based ICS Anomaly Detection: From Theory to Practice. [Topic: ML] [pdf]
- Clement Fung, Eric Zeng, Lujo Bauer. Network and Distributed System Security, 2024.
- Compensating Removed Frequency Components: Thwarting Voice Spectrum Reduction Attacks. [Topic: ASR] [pdf]
- Shu Wang, Kun Sun, Qi Li. Network and Distributed System Security, 2024.
- Crafter: Facial Feature Crafting against Inversion-based Identity Theft on Deep Models. [Topic: Defense] [pdf]
- Shiming Wang, Zhe Ji, Liyao Xiang, Hao Zhang, Xinbing Wang, Chenghu Zhou, Bo Li. Network and Distributed System Security, 2024.
- CrowdGuard: Federated Backdoor Detection in Federated Learning. [Topic: Backdoor] [pdf]
- Phillip Rieger, Torsten Krauß, Markus Miettinen, Alexandra Dmitrienko, Ahmad-Reza Sadeghi. Network and Distributed System Security, 2024.
- Enhance Stealthiness and Transferability of Adversarial Attacks with Class Activation Mapping Ensemble Attack. [Topic: AEs] [pdf]
- Hui Xia, Rui Zhang, Zi Kang, Shuliang Jiang, Shuo Xu. Network and Distributed System Security, 2024.
- GNNIC: Finding Long-Lost Sibling Functions with Abstract Similarity. [Topic: GNN] [pdf]
- Qiushi Wu, Zhongshu Gu, Hani Jamjoom, Kangjie Lu. Network and Distributed System Security, 2024.
- LiDAR Spoofing Meets the New-Gen: Capability Improvements, Broken Assumptions, and New Attack Strategies. [Topic: Spoofing] [pdf]
- Takami Sato, Yuki Hayakawa, Ryo Suzuki, Yohsuke Shiiki, Kentaro Yoshioka, Qi Alfred Chen. Network and Distributed System Security, 2024.
- LMSanitator: Defending Prompt-Tuning Against Task-Agnostic Backdoors. [Topic: Backdoor] [pdf]
- Chengkun Wei, Wenlong Meng, Zhikun Zhang, Min Chen, Minghu Zhao, Wenjing Fang, Lei Wang, Zihui Zhang, Wenzhi Chen. Network and Distributed System Security, 2024.
- Low-Quality Training Data Only? A Robust Framework for Detecting Encrypted Malicious Network Traffic. [Topic: Dataset] [pdf]
- Yuqi Qing, Qilei Yin, Xinhao Deng, Yihao Chen, Zhuotao Liu, Kun Sun, Ke Xu, Jia Zhang, Qi Li. Network and Distributed System Security, 2024.
- MPCDiff: Testing and Repairing MPC-Hardened Deep Learning Models. [Topic: MPC] [pdf]
- Qi Pang, Yuanyuan Yuan, Shuai Wang. Network and Distributed System Security, 2024.
- On Precisely Detecting Censorship Circumvention in Real-World Networks. [Topic: Censorship Circumvention] [pdf]
- Ryan Wails, George Arnold Sullivan, Micah Sherr, Rob Jansen. Network and Distributed System Security, 2024.
- Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction. [Topic: MIA] [pdf]
- Zitao Chen, Karthik Pattabiraman. Network and Distributed System Security, 2024.
- SigmaDiff: Semantics-Aware Deep Graph Matching for Pseudocode Diffing. [Topic: DNN] [pdf]
- Lian Gao, Yu Qu, Sheng Yu, Yue Duan, Heng Yin. Network and Distributed System Security, 2024.
- Transpose Attack: Stealing Datasets with Bidirectional Training. [Topic: Dataset Stealing] [pdf]
- Guy Amit, Moshe Levy, Yisroel Mirsky. Network and Distributed System Security, 2024.
- A Duty to Forget, a Right to be Assured? Exposing Vulnerabilities in Machine Unlearning Services. [Topic: MLaaS] [pdf]
- Hongsheng Hu, Shuo Wang, Jiamin Chang, Haonan Zhong, Ruoxi Sun, Shuang Hao, Haojin Zhu, Minhui Xue. Network and Distributed System Security, 2024.
- ActiveDaemon: Unconscious DNN Dormancy and Waking Up via User-specific Invisible Token. [Topic: Watermark] [pdf]
- Ge Ren, Gaolei Li, Shenghong Li, Libo Chen, Kui Ren. Network and Distributed System Security, 2024.
- Automatic Adversarial Adaption for Stealthy Poisoning Attacks in Federated Learning. [Topic: FL] [pdf]
- Torsten Krauß, Jan König, Alexandra Dmitrienko, Christian Kanzow. Network and Distributed System Security, 2024.
- CamPro: Camera-based Anti-Facial Recognition. [Topic: AFR] [pdf]
- Wenjun Zhu, Yuan Sun, Jiani Liu, Yushi Cheng, Xiaoyu Ji, Wenyuan Xu. Network and Distributed System Security, 2024.
- DeepGo: Predictive Directed Greybox Fuzzing. [Topic: RL] [pdf]
- Peihong Lin, Pengfei Wang, Xu Zhou, Wei Xie, Gen Zhang, Kai Lu. Network and Distributed System Security, 2024.
- DeGPT: Optimizing Decompiler Output with LLM. [Topic: LLM] [pdf]
- Peiwei Hu, Ruigang Liang, Kai Chen. Network and Distributed System Security, 2024.
- DEMASQ: Unmasking the ChatGPT Wordsmith. [Topic: LLM] [pdf]
- Kavita Kumari, Alessandro Pegoraro, Hossein Fereidooni, Ahmad-Reza Sadeghi. Network and Distributed System Security, 2024.
- Don't Interrupt Me - A Large-Scale Study of On-Device Permission Prompt Quieting in Chrome. [Topic: ML] [pdf]
- Marian Harbach, Igor Bilogrevic, Enrico Bacis, Serena Chen, Ravjit Uppal, Andy Paicu, Elias Klim, Meggyn Watkins, Balazs Engedy. Network and Distributed System Security, 2024.
- DorPatch: Distributed and Occlusion-Robust Adversarial Patch to Evade Certifiable Defenses. [Topic: DNN] [pdf]
- Chaoxiang He, Xiaojing Ma, Bin B. Zhu, Yimiao Zeng, Hanqing Hu, Xiaofan Bai, Hai Jin, Dongmei Zhang. Network and Distributed System Security, 2024.
- DRAINCLoG: Detecting Rogue Accounts with Illegally-obtained NFTs using Classifiers Learned on Graphs. [Topic: DNN] [pdf]
- Hanna Kim, Jian Cui, Eugene Jang, Chanhee Lee, Yongjae Lee, Jin-Woo Chung, Seungwon Shin. Network and Distributed System Security, 2024.
- Flow Correlation Attacks on Tor Onion Service Sessions with Sliding Subset Sum. [Topic: ML] [pdf]
- Daniela Lopes, Jin-Dong Dong, Pedro Medeiros, Daniel Castro, Diogo Barradas, Bernardo Portela, João Vinagre, Bernardo Ferreira, Nicolas Christin, Nuno Santos. Network and Distributed System Security, 2024.
-
FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning.[Topic:FL] [pdf]
- Hossein Fereidooni, Alessandro Pegoraro, Phillip Rieger, Alexandra Dmitrienko, Ahmad-Reza Sadeghi. Network and Distributed System Security, 2024.
-
Gradient Shaping: Enhancing Backdoor Attack Against Reverse Engineering.[Topic:Backdoor] [pdf]
- Rui Zhu, Di Tang, Siyuan Tang, Zihao Wang, Guanhong Tao, Shiqing Ma, XiaoFeng Wang, Haixu Tang. Network and Distributed System Security, 2024.
-
GraphGuard: Detecting and Counteracting Training Data Misuse in Graph Neural Networks.[Topic:GNN] [pdf]
- Bang Wu, He Zhang, Xiangwen Yang, Shuo Wang, Minhui Xue, Shirui Pan, Xingliang Yuan. Network and Distributed System Security, 2024.
-
Group-based Robustness: A General Framework for Customized Robustness in the Real World.[Topic:Evasion Attack] [pdf]
- Weiran Lin, Keane Lucas, Neo Eyal, Lujo Bauer, Michael K. Reiter, Mahmood Sharif. Network and Distributed System Security, 2024.
-
Improving the Robustness of Transformer-based Large Language Models with Dynamic Attention.[Topic:LLM] [pdf]
- Lujia Shen, Yuwen Pu, Shouling Ji, Changjiang Li, Xuhong Zhang, Chunpeng Ge, Ting Wang. Network and Distributed System Security, 2024.
-
Large Language Model guided Protocol Fuzzing.[Topic:LLM] [pdf]
- Ruijie Meng, Martin Mirchev, Marcel Böhme, Abhik Roychoudhury. Network and Distributed System Security, 2024.
-
MASTERKEY: Automated Jailbreaking of Large Language Model Chatbots.[Topic:LLM] [pdf]
- Gelei Deng, Yi Liu, Yuekang Li, Kailong Wang, Ying Zhang, Zefeng Li, Haoyu Wang, Tianwei Zhang, Yang Liu. Network and Distributed System Security, 2024.
-
Parrot-Trained Adversarial Examples: Pushing the Practicality of Black-Box Audio Attacks against Speaker Recognition Models.[Topic:AE] [pdf]
- Rui Duan, Zhe Qu, Leah Ding, Yao Liu, Zhuo Lu. Network and Distributed System Security, 2024.
-
Pencil: Private and Extensible Collaborative Learning without the Non-Colluding Assumption.[Topic:Collaborative Learning] [pdf]
- Xuanqi Liu, Zhuotao Liu, Qi Li, Ke Xu, Mingwei Xu. Network and Distributed System Security, 2024.
-
SLMIA-SR: Speaker-Level Membership Inference Attacks against Speaker Recognition Systems.[Topic:MIA] [pdf]
- Guangke Chen, Yedi Zhang, Fu Song. Network and Distributed System Security, 2024.
-
Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data.[Topic:Backdoor] [pdf]
- Gorka Abad, Oguzhan Ersoy, Stjepan Picek, Aitor Urbieta. Network and Distributed System Security, 2024.
-
SSL-WM: A Black-Box Watermarking Approach for Encoders Pre-trained by Self-Supervised Learning.[Topic:Watermark] [pdf]
- Peizhuo Lv, Pan Li, Shenchen Zhu, Shengzhi Zhang, Kai Chen, Ruigang Liang, Chang Yue, Fan Xiang, Yuling Cai, Hualong Ma, Yingjun Zhang, Guozhu Meng. Network and Distributed System Security, 2024.
-
TextGuard: Provable Defense against Backdoor Attacks on Text Classification.[Topic:Backdoor] [pdf]
- Hengzhi Pei, Jinyuan Jia, Wenbo Guo, Bo Li, Dawn Song. Network and Distributed System Security, 2024.
-
You Can Use But Cannot Recognize: Preserving Visual Privacy in Deep Neural Networks.[Topic:DNN] [pdf]
- Qiushi Li, Yan Zhang, Ju Ren, Qi Li, Yaoxue Zhang. Network and Distributed System Security, 2024.
-
Fusion: Efficient and Secure Inference Resilient to Malicious Servers. [Topic: MLaaS] [pdf]
- Caiqin Dong, Jian Weng, Jia-Nan Liu, Yue Zhang, Yao Tong, Anjia Yang, Yudan Cheng, Shun Hu. Network and Distributed System Security, 2023.
-
Machine Unlearning of Features and Labels. [Topic: Machine-Unlearning] [pdf]
- Alexander Warnecke, Lukas Pirch, Christian Wressnegger, Konrad Rieck. Network and Distributed System Security, 2023.
-
PPA: Preference Profiling Attack Against Federated Learning. [Topic: FL] [pdf]
- Chunyi Zhou, Yansong Gao, Anmin Fu, Kai Chen, Zhiyang Dai, Zhi Zhang, Minhui Xue, Yuqing Zhang. Network and Distributed System Security, 2023.
-
RoVISQ: Reduction of Video Service Quality via Adversarial Attacks on Deep Learning-based Video Compression. [Topic: AEs] [pdf]
- Jung-Woo Chang, Mojan Javaheripi, Seira Hidano, Farinaz Koushanfar. Network and Distributed System Security, 2023.
-
Securing Federated Sensitive Topic Classification against Poisoning Attacks. [Topic: FL] [pdf]
- Tianyue Chu, Alvaro Garcia-Recuero, Costas Iordanou, Georgios Smaragdakis, Nikolaos Laoutaris. Network and Distributed System Security, 2023.
-
The “Beatrix” Resurrections: Robust Backdoor Detection via Gram Matrices. [Topic: Backdoor] [pdf]
- Wanlun Ma, Derui Wang, Ruoxi Sun, Minhui Xue, Sheng Wen, Yang Xiang. Network and Distributed System Security, 2023.
-
Adversarial Robustness for Tabular Data through Cost and Utility Awareness. [Topic: AEs] [pdf]
- Klim Kireev, Bogdan Kulynych, Carmela Troncoso. Network and Distributed System Security, 2023.
-
Backdoor Attacks Against Dataset Distillation. [Topic: Backdoor] [pdf]
- Yugeng Liu, Zheng Li, Michael Backes, Yun Shen, Yang Zhang. Network and Distributed System Security, 2023.
-
BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense. [Topic: Backdoor] [pdf]
- Siyuan Cheng, Guanhong Tao, Yingqi Liu, Shengwei An, Xiangzhe Xu, Shiwei Feng, Guangyu Shen, Kaiyuan Zhang, Qiuling Xu, Shiqing Ma, Xiangyu Zhang. Network and Distributed System Security, 2023.
-
Focusing on Pinocchio's Nose: A Gradients Scrutinizer to Thwart Split-Learning Hijacking Attacks Using Intrinsic Attributes. [Topic: SL] [pdf]
- Jiayun Fu, Xiaojing Ma, Bin B. Zhu, Pingyi Hu, Ruixin Zhao, Yaru Jia, Peng Xu, Hai Jin, Dongmei Zhang. Network and Distributed System Security, 2023.
-
REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service. [Topic: AEs] [pdf]
- Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong. Network and Distributed System Security, 2023.
-
DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection. [Topic: Backdoor] [pdf]
- Phillip Rieger, Thien Duc Nguyen, Markus Miettinen, Ahmad-Reza Sadeghi. Network and Distributed System Security, 2022.
-
FedCRI: Federated Mobile Cyber-Risk Intelligence. [Topic: FL] [pdf]
- Hossein Fereidooni, Alexandra Dmitrienko, Phillip Rieger, Markus Miettinen, Ahmad-Reza Sadeghi, Felix Madlener. Network and Distributed System Security, 2022.
-
Get a Model! Model Hijacking Attack Against Machine Learning Models. [Topic: Model-Hijacking] [pdf]
- Ahmed Salem, Michael Backes, Yang Zhang. Network and Distributed System Security, 2022.
-
Local and Central Differential Privacy for Robustness and Privacy in Federated Learning. [Topic: FL] [pdf]
- Mohammad Naseri, Jamie Hayes, Emiliano De Cristofaro. Network and Distributed System Security, 2022.
-
Property Inference Attacks Against GANs. [Topic: IA & GAN] [pdf]
- Junhao Zhou, Yufei Chen, Chao Shen, Yang Zhang. Network and Distributed System Security, 2022.
-
ATTEQ-NN: Attention-based QoE-aware Evasive Backdoor Attacks. [Topic: Backdoor] [pdf]
- Xueluan Gong, Yanjiao Chen, Jianshuo Dong, Qian Wang. Network and Distributed System Security, 2022.
-
Fooling the Eyes of Autonomous Vehicles: Robust Physical Adversarial Examples Against Traffic Sign Recognition Systems. [Topic: AEs] [pdf]
- Wei Jia, Zhaojun Lu, Haichun Zhang, Zhenglin Liu, Jie Wang, Gang Qu. Network and Distributed System Security, 2022.
-
MIRROR: Model Inversion for Deep Learning Network with High Fidelity. [Topic: MIA] [pdf]
- Shengwei An, Guanhong Tao, Qiuling Xu, Yingqi Liu, Guangyu Shen, Yuan Yao, Jingwei Xu, Xiangyu Zhang. Network and Distributed System Security, 2022.
-
RamBoAttack: A Robust and Query Efficient Deep Neural Network Decision Exploit. [Topic: AEs] [pdf]
- Viet Quoc Vo, Ehsan Abbasnejad, Damith C. Ranasinghe. Network and Distributed System Security, 2022.
-
Data Poisoning Attacks to Deep Learning Based Recommender Systems. [Topic: PAs] [pdf]
- Hai Huang, Jiaming Mu, Neil Zhenqiang Gong, Qi Li, Bin Liu, Mingwei Xu. Network and Distributed System Security, 2021.
-
FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping. [Topic: PA & FL] [pdf]
- Xiaoyu Cao, Minghong Fang, Jia Liu, Neil Zhenqiang Gong. Network and Distributed System Security, 2021.
-
Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning. [Topic: PA & FL] [pdf]
- Virat Shejwalkar, Amir Houmansadr. Network and Distributed System Security, 2021.
-
Practical Blind Membership Inference Attack via Differential Comparisons. [Topic: MIA] [pdf]
- Bo Hui, Yuchen Yang, Haolin Yuan, Philippe Burlina, Neil Zhenqiang Gong, Yinzhi Cao. Network and Distributed System Security, 2021.
-
POSEIDON: Privacy-Preserving Federated Neural Network Learning. [Topic: FL] [pdf]
- Sinem Sav, Apostolos Pyrgelis, Juan Ramón Troncoso-Pastoriza, David Froelicher, Jean-Philippe Bossuat, Joao Sa Sousa, Jean-Pierre Hubaux. Network and Distributed System Security, 2021.
-
AttackGNN: Red-Teaming GNNs in Hardware Security Using Reinforcement Learning.[Topic:GNN&RL] [pdf]
- Vasudev Gohil, Satwik Patnaik, Dileep Kalathil, Jeyavijayan Rajendran. USENIX Security, 2024.
-
INSIGHT: Attacking Industry-Adopted Learning Resilient Logic Locking Techniques Using Explainable Graph Neural Network.[Topic:ML] [pdf]
- Lakshmi Likhitha Mankali, Ozgur Sinanoglu, Satwik Patnaik. USENIX Security, 2024.
-
FAMOS: Robust Privacy-Preserving Authentication on Payment Apps via Federated Multi-Modal Contrastive Learning.[Topic:FL] [pdf]
- Yifeng Cai, Ziqi Zhang, Jiaping Gui, Bingyan Liu, Xiaoke Zhao, Ruoyu Li, Zhe Li, Ding Li. USENIX Security, 2024.
-
Efficient Privacy Auditing in Federated Learning.[Topic:FL] [pdf]
- Hongyan Chang, Brandon Edwards, Anindya S. Paul, Reza Shokri. USENIX Security, 2024.
-
Defending Against Data Reconstruction Attacks in Federated Learning: An Information Theory Approach.[Topic:FL] [pdf]
- Qi Tan, Qi Li, Yi Zhao, Zhuotao Liu, Xiaobing Guo, Ke Xu. USENIX Security, 2024.
-
Lotto: Secure Participant Selection against Adversarial Servers in Federated Learning.[Topic:FL] [pdf]
- Zhifeng Jiang, Peng Ye, Shiqi He, Wei Wang, Ruichuan Chen, Bo Li. USENIX Security, 2024.
-
KnowPhish: Large Language Models Meet Multimodal Knowledge Graphs for Enhancing Reference-Based Phishing Detection.[Topic:LLM for Security] [pdf]
- Yuexin Li, Chengyu Huang, Shumin Deng, Mei Lin Lock, Tri Cao, Nay Oo, Hoon Wei Lim, Bryan Hooi. USENIX Security, 2024.
-
Exploring ChatGPT's Capabilities on Vulnerability Management.[Topic:LLM for Security] [pdf]
- Peiyu Liu, Junming Liu, Lirong Fu, Kangjie Lu, Yifan Xia, Xuhong Zhang, Wenzhi Chen, Haiqin Weng, Shouling Ji, Wenhai Wang. USENIX Security, 2024.
-
Large Language Models for Code Analysis: Do LLMs Really Do Their Job?[Topic:LLM for Security] [pdf]
- Chongzhou Fang, Ning Miao, Shaurya Srivastav, Jialin Liu, Ruoyu Zhang, Ruijie Fang, Asmita, Ryan Tsang, Najmeh Nazari, Han Wang, Houman Homayoun. USENIX Security, 2024.
-
PentestGPT: Evaluating and Harnessing Large Language Models for Automated Penetration Testing.[Topic:LLM for Security] [pdf]
- Gelei Deng, Yi Liu, Víctor Mayoral Vilches, Peng Liu, Yuekang Li, Yuan Xu, Martin Pinzger, Stefan Rass, Tianwei Zhang, Yang Liu. USENIX Security, 2024.
-
Fuzzing BusyBox: Leveraging LLM and Crash Reuse for Embedded Bug Unearthing.[Topic:LLM] [pdf]
- Asmita, Yaroslav Oliinyk, Michael Scott, Ryan Tsang, Chongzhou Fang, Houman Homayoun. USENIX Security, 2024.
-
DNN-GP: Diagnosing and Mitigating Model's Faults Using Latent Concepts.[Topic:DNN] [pdf]
- Shuo Wang, Hongsheng Hu, Jiamin Chang, Benjamin Zi Hao Zhao, Qi Alfred Chen, Minhui Xue. USENIX Security, 2024.
-
Yes, One-Bit-Flip Matters! Universal DNN Model Inference Depletion with Runtime Code Fault Injection.[Topic:DNN] [pdf]
- Shaofeng Li, Xinyu Wang, Minhui Xue, Haojin Zhu, Zhi Zhang, Yansong Gao, Wen Wu, Xuemin (Sherman) Shen. USENIX Security, 2024.
-
Tossing in the Dark: Practical Bit-Flipping on Gray-box Deep Neural Networks for Runtime Trojan Injection.[Topic:DNN] [pdf]
- Zihao Wang, Di Tang, XiaoFeng Wang, Wei He, Zhaoyang Geng, Wenhao Wang. USENIX Security, 2024.
-
Forget and Rewire: Enhancing the Resilience of Transformer-based Models against Bit-Flip Attacks.[Topic:DNN] [pdf]
- Najmeh Nazari, Hosein Mohammadi Makrani, Chongzhou Fang, Hossein Sayadi, Setareh Rafatirad, Khaled N. Khasawneh, Houman Homayoun. USENIX Security, 2024.
-
Automated Large-Scale Analysis of Cookie Notice Compliance.[Topic:ML for Security] [pdf]
- Ahmed Bouhoula, Karel Kubicek, Amit Zac, Carlos Cotrini, David A. Basin. USENIX Security, 2024.
-
Detecting and Mitigating Sampling Bias in Cybersecurity with Unlabeled Data.[Topic:ML for Security] [pdf]
- Saravanan Thirumuruganathan, Fatih Deniz, Issa Khalil, Ting Yu, Mohamed Nabeel, Mourad Ouzzani. USENIX Security, 2024.
-
An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection.[Topic:LLM] [pdf]
- Shenao Yan, Shen Wang, Yue Duan, Hanbin Hong, Kiho Lee, Doowon Kim, Yuan Hong. USENIX Security, 2024.
-
REMARK-LLM: A Robust and Efficient Watermarking Framework for Generative Large Language Models.[Topic:LLM] [pdf]
- Ruisi Zhang, Shehzeen Samarah Hussain, Paarth Neekhara, Farinaz Koushanfar. USENIX Security, 2024.
-
Formalizing and Benchmarking Prompt Injection Attacks and Defenses.[Topic:LLM] [pdf]
- Yupei Liu, Yuqi Jia, Runpeng Geng, Jinyuan Jia, Neil Zhenqiang Gong. USENIX Security, 2024.
-
Instruction Backdoor Attacks Against Customized LLMs.[Topic:LLM] [pdf]
- Rui Zhang, Hongwei Li, Rui Wen, Wenbo Jiang, Yuan Zhang, Michael Backes, Yun Shen, Yang Zhang. USENIX Security, 2024.
-
AutoFHE: Automated Adaption of CNNs for Efficient Evaluation over FHE.[Topic:CNN] [pdf]
- Wei Ao, Vishnu Naresh Boddeti. USENIX Security, 2024.
-
Fast and Private Inference of Deep Neural Networks by Co-designing Activation Functions.[Topic:MLaaS] [pdf]
- Abdulrahman Diaa, Lucas Fenaux, Thomas Humphries, Marian Dietz, Faezeh Ebrahimianghazani, Bailey Kacsmar, Xinda Li, Nils Lukas, Rasoul Akhavan Mahdavi, Simon Oya, Ehsan Amjadian, Florian Kerschbaum. USENIX Security, 2024.
-
OblivGNN: Oblivious Inference on Transductive and Inductive Graph Neural Network.[Topic:GNN] [pdf]
- Zhibo Xu, Shangqi Lai, Xiaoning Liu, Alsharif Abuadbba, Xingliang Yuan, Xun Yi. USENIX Security, 2024.
-
MD-ML: Super Fast Privacy-Preserving Machine Learning for Malicious Security with a Dishonest Majority.[Topic:PPML] [pdf]
- Boshi Yuan, Shixuan Yang, Yongxiang Zhang, Ning Ding, Dawu Gu, Shi-Feng Sun. USENIX Security, 2024.
-
Accelerating Secure Collaborative Machine Learning with Protocol-Aware RDMA.[Topic:SCML] [pdf]
- Zhenghang Ren, Mingxuan Fan, Zilong Wang, Junxue Zhang, Chaoliang Zeng, Zhicong Huang, Cheng Hong, Kai Chen. USENIX Security, 2024.
-
Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models.[Topic:LLM] [pdf]
- Matthieu Meeus, Shubham Jain, Marek Rei, Yves-Alexandre de Montjoye. USENIX Security, 2024.
-
MIST: Defending Against Membership Inference Attacks Through Membership-Invariant Subspace Training.[Topic:MI attack] [pdf]
- Jiacheng Li, Ninghui Li, Bruno Ribeiro. USENIX Security, 2024.
-
Neural Network Semantic Backdoor Detection and Mitigation: A Causality-Based Approach.[Topic:Backdoor] [pdf]
- Bing Sun, Jun Sun, Wayne Koh, Jie Shi. USENIX Security, 2024.
-
On the Difficulty of Defending Contrastive Learning against Backdoor Attacks.[Topic:Backdoor] [pdf]
- Changjiang Li, Ren Pang, Bochuan Cao, Zhaohan Xi, Jinghui Chen, Shouling Ji, Ting Wang. USENIX Security, 2024.
-
Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models.[Topic:Backdoor] [pdf]
- Hongbin Liu, Michael K. Reiter, Neil Zhenqiang Gong. USENIX Security, 2024.
-
Xplain: Analyzing Invisible Correlations in Model Explanation.[Topic:Backdoor] [pdf]
- Kavita Kumari, Alessandro Pegoraro, Hossein Fereidooni, Ahmad-Reza Sadeghi. USENIX Security, 2024.
-
Verify your Labels! Trustworthy Predictions and Datasets via Confidence Scores.[Topic:Backdoor] [pdf]
- Torsten Krauß, Jasper Stang, Alexandra Dmitrienko. USENIX Security, 2024.
-
More Simplicity for Trainers, More Opportunity for Attackers: Black-Box Attacks on Speaker Recognition Systems by Inferring Feature Extractor.[Topic:AE] [pdf]
- Yunjie Ge, Pinji Chen, Qian Wang, Lingchen Zhao, Ningping Mou, Peipei Jiang, Cong Wang, Qi Li, Chao Shen. USENIX Security, 2024.
-
Adversarial Illusions in Multi-Modal Embeddings.[Topic:Multi-modal embeddings] [pdf]
- Tingwei Zhang, Rishi D. Jha, Eugene Bagdasaryan, Vitaly Shmatikov. USENIX Security, 2024.
-
Splitting the Difference on Adversarial Training.[Topic:Adversarial Attack Defense] [pdf]
- Matan Levi, Aryeh Kontorovich. USENIX Security, 2024.
-
Machine Learning needs Better Randomness Standards: Randomised Smoothing and PRNG-based attacks.[Topic:Adversarial Attack Defense] [pdf]
- Pranav Dahiya, Ilia Shumailov, Ross Anderson. USENIX Security, 2024.
-
Lurking in the shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning.[Topic:Backdoor and Federated Learning] [pdf]
- Xiaoting Lyu, Yufei Han, Wei Wang, Jingkai Liu, Yongsheng Zhu, Guangquan Xu, Jiqiang Liu, Xiangliang Zhang. USENIX Security, 2024.
-
ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning.[Topic:Backdoor and Federated Learning] [pdf]
- Zhangchen Xu, Fengqing Jiang, Luyao Niu, Jinyuan Jia, Bo Li, Radha Poovendran. USENIX Security, 2024.
-
BackdoorIndicator: Leveraging OOD Data for Proactive Backdoor Detection in Federated Learning.[Topic:Backdoor and Federated Learning] [pdf]
- Songze Li, Yanbo Dai. USENIX Security, 2024.
-
UBA-Inf: Unlearning Activated Backdoor Attack with Influence-Driven Camouflage.[Topic:Backdoor and Federated Learning] [pdf]
- Zirui Huang, Yunlong Mao, Sheng Zhong. USENIX Security, 2024.
-
LLM-Fuzzer: Scaling Assessment of Large Language Model Jailbreaks.[Topic:LLM Jailbreaking] [pdf]
- Jiahao Yu, Xingwei Lin, Zheng Yu, Xinyu Xing. USENIX Security, 2024.
-
Don't Listen To Me: Understanding and exploring jailbreak prompts of large language models.[Topic:LLM Jailbreaking] [pdf]
- Zhiyuan Yu, Xiaogeng Liu, Shunning Liang, Zach Cameron, Chaowei Xiao, Ning Zhang. USENIX Security, 2024.
-
Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise and Reconstruction.[Topic:LLM Jailbreaking] [pdf]
- Tong Liu, Yingjie Zhang, Zhe Zhao, Yinpeng Dong, Guozhu Meng, Kai Chen. USENIX Security, 2024.
-
SoK: All You Need to Know About On-Device ML Model Extraction - The Gap Between Research and Practice.[Topic:Model Extraction] [pdf]
- Tushar Nayan, Qiming Guo, Mohammed Alduniawi, Marcus Botacin, A. Selcuk Uluagac, Ruimin Sun. USENIX Security, 2024.
-
Unveiling the Secrets without Data: Can Graph Neural Networks Be Exploited through Data-Free Model Extraction Attacks?[Topic:GNN] [pdf]
- Yuanxin Zhuang, Chuan Shi, Mengmei Zhang, Jinghui Chen, Lingjuan Lyu, Pan Zhou, Lichao Sun. USENIX Security, 2024.
-
ClearStamp: A Human-Visible and Robust Model-Ownership Proof based on Transposed Model Training.[Topic:Watermark] [pdf]
- Torsten Krauß, Jasper Stang, Alexandra Dmitrienko. USENIX Security, 2024.
-
DeepEclipse: How to Break White-Box DNN-Watermarking Schemes.[Topic:Watermark] [pdf]
- Alessandro Pegoraro, Carlotta Segna, Kavita Kumari, Ahmad-Reza Sadeghi. USENIX Security, 2024.
-
Deciphering Textual Authenticity: A Generalized Strategy through the Lens of Large Language Semantics for Detecting Human vs. Machine-Generated Text.[Topic:LLM] [pdf]
- Mazal Bethany, Brandon Wherry, Emet Bethany, Nishant Vishwamitra, Anthony Rios, Peyman Najafirad. USENIX Security, 2024.
-
How Does a Deep Learning Model Architecture Impact Its Privacy? A Comprehensive Study of Privacy Attacks on CNNs and Transformers.[Topic:Privacy Attacks] [pdf]
- Guangsheng Zhang, Bo Liu, Huan Tian, Tianqing Zhu, Ming Ding, Wanlei Zhou. USENIX Security, 2024.
-
FaceObfuscator: Defending Deep Learning-based Privacy Attacks with Gradient Descent-resistant Features in Face Recognition.[Topic:Privacy Attacks] [pdf]
- Shuaifan Jin, He Wang, Zhibo Wang, Feng Xiao, Jiahui Hu, Yuan He, Wenwen Zhang, Zhongjie Ba, Weijie Fang, Shuhong Yuan, Kui Ren. USENIX Security, 2024.
-
Hijacking Attacks against Neural Network by Analyzing Training Data.[Topic:Hijacking Attacks] [pdf]
- Yunjie Ge, Qian Wang, Huayang Huang, Qi Li, Cong Wang, Chao Shen, Lingchen Zhao, Peipei Jiang, Zheng Fang, Shenyi Zhang. USENIX Security, 2024.
-
Information Flow Control in Machine Learning through Modular Model Architecture.[Topic:ML] [pdf]
- Trishita Tiwari, Suchin Gururangan, Chuan Guo, Weizhe Hua, Sanjay Kariyappa, Udit Gupta, Wenjie Xiong, Kiwan Maeng, Hsien-Hsin S. Lee, G. Edward Suh. USENIX Security, 2024.
-
Devil in the Room: Triggering Audio Backdoors in the Physical World.[Topic:Physical Adversarial Attacks] [pdf]
- Meng Chen, Xiangyu Xu, Li Lu, Zhongjie Ba, Feng Lin, Kui Ren. USENIX Security, 2024.
-
FraudWhistler: A Resilient, Robust and Plug-and-play Adversarial Example Detection Method for Speaker Recognition.[Topic:AE] [pdf]
- Kun Wang, Xiangyu Xu, Li Lu, Zhongjie Ba, Feng Lin, Kui Ren. USENIX Security, 2024.
-
EaTVul: ChatGPT-based Evasion Attack Against Software Vulnerability Detection.[Topic:Evasion Attack] [pdf]
- Shigang Liu, Di Cao, Junae Kim, Tamas Abraham, Paul Montague, Seyit Camtepe, Jun Zhang, Yang Xiang. USENIX Security, 2024.
-
“Security is not my field, I’m a stats guy”: A Qualitative Root Cause Analysis of Barriers to Adversarial Machine Learning Defenses in Industry. [Topic: AEs] [pdf]
- Jaron Mink, Harjot Kaur, Juliane Schmüser and Sascha Fahl, Yasemin Acar. USENIX Security, 2023.
-
A Data-free Backdoor Injection Approach in Neural Networks. [Topic: Backdoor] [pdf]
- Peizhuo Lv, Chang Yue, Ruigang Liang, Yunfei Yang. USENIX Security, 2023.
-
A Plot is Worth a Thousand Words: Model Information Stealing Attacks via Scientific Plots. [Topic: MSA] [pdf]
- Boyang Zhang, Xinlei He, Yun Shen, Tianhao Wang, Yang Zhang. USENIX Security, 2023.
-
Aegis: Mitigating Targeted Bit-flip Attacks against Deep Neural Networks. [Topic: BFA] [pdf]
- Jialai Wang, Ziyuan Zhang, Meiqi Wang, Han Qiu, Tianwei Zhang, Qi Li, Zongpeng Li, Tao Wei, Chao Zhang. USENIX Security, 2023.
-
Black-box Adversarial Example Attack towards FCG Based Android Malware Detection under Incomplete Feature Information. [Topic: AEs] [pdf]
- Heng Li, Zhang Cheng, Bang Wu, Liheng Yuan, Cuiying Gao, Wei Yuan, Xiapu Luo. USENIX Security, 2023.
-
CAPatch: Physical Adversarial Patch against Image Captioning Systems. [Topic: AEs] [pdf]
- Shibo Zhang, Yushi Cheng, Wenjun Zhu, Xiaoyu Ji, Wenyuan Xu. USENIX Security, 2023.
-
DiffSmooth: Certifiably Robust Learning via Diffusion Models and Local Smoothing. [Topic: AEs] [pdf]
- Jiawei Zhang, Zhongzhu Chen, Huan Zhang, Chaowei Xiao, Bo Li. USENIX Security, 2023.
-
Every Vote Counts: Ranking-Based Training of Federated Learning to Resist Poisoning Attacks. [Topic: PA & FL] [pdf]
- Hamid Mozaffari, Virat Shejwalkar, Amir Houmansadr. USENIX Security, 2023.
-
Exorcising "Wraith": Protecting LiDAR-based Object Detector in Automated Driving System from Appearing Attacks. [Topic: Appearing-Attack] [pdf]
- Qifan Xiao, Xudong Pan, Yifan Lu, Mi Zhang, Jiarun Dai, Min Yang. USENIX Security, 2023.
-
Fine-grained Poisoning Attack to Local Differential Privacy Protocols for Mean and Variance Estimation. [Topic: DP] [pdf]
- Xiaoguang Li, Ninghui Li, Wenhai Sun, Neil Zhenqiang Gong, Hui Li. USENIX Security, 2023.
-
FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases. [Topic: Backdoor] [pdf]
- Chong Fu, Xuhong Zhang, Shouling Ji, Ting Wang, Peng Lin, Yanghe Feng, Jianwei Yin. USENIX Security, 2023.
-
GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation. [Topic: DP & GNN] [pdf]
- Sina Sajadmanesh, Ali Shahin Shamsabadi, Aurélien Bellet, Daniel Gatica-Perez. USENIX Security, 2023.
-
Lost at C: A User Study on the Security Implications of Large Language Model Code Assistants. [Topic: LLM] [pdf]
- Gustavo Sandoval, Hammond Pearce, Teo Nys, Ramesh Karri, Siddharth Garg, Brendan Dolan-Gavitt. USENIX Security, 2023.
-
Meta-Sift: How to Sift Out a Clean Subset in the Presence of Data Poisoning? [Topic: PA] [pdf]
- Yi Zeng, Minzhou Pan, Himanshu Jahagirdar, Ming Jin, Lingjuan Lyu, Ruoxi Jia. USENIX Security, 2023.
-
No more Reviewer #2: Subverting Automatic Paper-Reviewer Assignment using Adversarial Learning. [Topic: AEs] [pdf]
- Thorsten Eisenhofer, Erwin Quiring, Jonas Möller, Doreen Riepel, Thorsten Holz, Konrad Rieck. USENIX Security, 2023.
-
PELICAN: Exploiting Backdoors of Naturally Trained Deep Learning Models In Binary Code Analysis. [Topic: Backdoor] [pdf]
- Zhuo Zhang, Guanhong Tao, Guangyu Shen, Shengwei An, Qiuling Xu, Yingqi Liu, Yapeng Ye, Yaoxuan Wu, Xiangyu Zhang. USENIX Security, 2023.
-
PrivateFL: Accurate, Differentially Private Federated Learning via Personalized Data Transformation. [Topic: DP & FL] [pdf]
- Yuchen Yang, Bo Hui, Haolin Yuan, Neil Gong, Yinzhi Cao. USENIX Security, 2023.
-
Rethinking White-Box Watermarks on Deep Learning Models under Neural Structural Obfuscation. [Topic: Watermark] [pdf]
- Yifan Yan, Xudong Pan, Mi Zhang, and Min Yang. USENIX Security, 2023.
-
X-Adv: Physical Adversarial Object Attacks against X-ray Prohibited Item Detection. [Topic: AEs] [pdf]
- Aishan Liu, Jun Guo, Jiakai Wang, Siyuan Liang, Renshuai Tao, Wenbo Zhou, Cong Liu, Xianglong Liu. USENIX Security, 2023.
-
TPatch: A Triggered Physical Adversarial Patch. [Topic: AEs] [pdf]
- Wenjun Zhu, Xiaoyu Ji, Yushi Cheng, Shibo Zhang, Wenyuan Xu. USENIX Security, 2023.
-
UnGANable: Defending Against GAN-based Face Manipulation. [Topic: Deepfake] [pdf]
- Zheng Li, Ning Yu, Ahmed Salem, Michael Backes, Mario Fritz, Yang Zhang. USENIX Security, 2023.
-
Squint Hard Enough: Attacking Perceptual Hashing with Adversarial Machine Learning. [Topic: AEs] [pdf]
- Jonathan Prokos, Neil Fendley, Matthew Green, Roei Schuster, Eran Tromer, Tushar Jois, Yinzhi Cao. USENIX Security, 2023.
-
The Space of Adversarial Strategies. [Topic: AEs] [pdf]
- Ryan Sheatsley, Blaine Hoak, Eric Pauley, Patrick McDaniel. USENIX Security, 2023.
-
That Person Moves Like A Car: Misclassification Attack Detection for Autonomous Systems Using Spatiotemporal Consistency. [Topic: AEs] [pdf]
- Yanmao Man, Raymond Muller, Ming Li, Z. Berkay Celik, Ryan Gerdes. USENIX Security, 2023.
-
NeuroPots: Realtime Proactive Defense against Bit-Flip Attacks in Neural Networks. [Topic: BFA] [pdf]
- Qi Liu, Jieming Yin, Wujie Wen, Chengmo Yang, Shi Sha. USENIX Security, 2023.
-
URET: Universal Robustness Evaluation Toolkit (for Evasion). [Topic: AEs] [pdf]
- Kevin Eykholt, Taesung Lee, Douglas Schales, Jiyong Jang, Ian Molloy, Masha Zorin. USENIX Security, 2023.
-
SMACK: Semantically Meaningful Adversarial Audio Attack. [Topic: AEs] [pdf]
- Zhiyuan Yu, Yuanhaur Chang, Ning Zhang, Chaowei Xiao. USENIX Security, 2023.
-
Gradient Obfuscation Gives a False Sense of Security in Federated Learning. [Topic: FL] [pdf]
- Kai Yue, Richeng Jin, Chau-Wai Wong, Dror Baron, Huaiyu Dai. USENIX Security, 2023.
-
Fairness Properties of Face Recognition and Obfuscation Systems. [Topic: AEs] [pdf]
- Harrison Rosenberg, Brian Tang, Kassem Fawaz, Somesh Jha. USENIX Security, 2023.
-
PCAT: Functionality and Data Stealing from Split Learning by Pseudo-Client Attack. [Topic: SL] [pdf]
- Xinben Gao, Lan Zhang. USENIX Security, 2023.
-
ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models. [Topic: MIA] [pdf]
- Yugeng Liu, Rui Wen, Xinlei He, Ahmed Salem, Zhikun Zhang, Michael Backes, Emiliano De Cristofaro, Mario Fritz, Yang Zhang. USENIX Security, 2022.
-
Blacklight: Scalable Defense for Neural Networks against Query-Based Black-Box Attacks. [Topic: AEs] [pdf]
- Huiying Li, Shawn Shan, Emily Wenger, Jiayun Zhang, Haitao Zheng, Ben Y. Zhao. USENIX Security, 2022.
-
AutoDA: Automated Decision-based Iterative Adversarial Attacks. [Topic: AEs] [pdf]
- Qi-An Fu, Yinpeng Dong, Hang Su, Jun Zhu, Chao Zhang. USENIX Security, 2022.
-
Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks. [Topic: PA] [pdf]
- Shawn Shan, Arjun Nitin Bhagoji, Haitao Zheng, Ben Y. Zhao. USENIX Security, 2022.
-
Teacher Model Fingerprinting Attacks Against Transfer Learning. [Topic: Fingerprinting] [pdf]
- Yufei Chen, Chao Shen, Cong Wang, Yang Zhang. USENIX Security, 2022.
-
Hidden Trigger Backdoor Attack on NLP Models via Linguistic Style Manipulation. [Topic: Backdoor] [pdf]
- Xudong Pan, Mi Zhang, Beina Sheng, Jiaming Zhu, Min Yang. USENIX Security, 2022.
-
PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning. [Topic: PA] [pdf]
- Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong. USENIX Security, 2022.
-
Pool Inference Attacks on Local Differential Privacy: Quantifying the Privacy Guarantees of Apple's Count Mean Sketch in Practice. [Topic: IA & DP] [pdf]
- Andrea Gadotti, Florimond Houssiau, Meenatchi Sundaram Muthu Selva Annamalai, Yves-Alexandre de Montjoye. USENIX Security, 2022.
-
PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier. [Topic: AEs] [pdf]
- Chong Xiang, Saeed Mahloujifar, Prateek Mittal. USENIX Security, 2022.
-
Exploring the Security Boundary of Data Reconstruction via Neuron Exclusivity Analysis. [Topic: DRA] [pdf]
- Xudong Pan, Mi Zhang, Yifan Yan, Jiaming Zhu, Min Yang. USENIX Security, 2022.
-
Poisoning Attacks to Local Differential Privacy Protocols for Key-Value Data. [Topic: PA & DP] [pdf]
- Yongji Wu, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong. USENIX Security, 2022.
-
Communication-Efficient Triangle Counting under Local Differential Privacy. [Topic: DP] [pdf]
- Jacob Imola, Takao Murakami, Kamalika Chaudhuri. USENIX Security, 2022.
-
Security Analysis of Camera-LiDAR Fusion Against Black-Box Attacks on Autonomous Vehicles. [Topic: AEs & AV] [pdf]
- R. Spencer Hallyburton, Yupei Liu, Yulong Cao, Z. Morley Mao, Miroslav Pajic. USENIX Security, 2022.
-
Transferring Adversarial Robustness Through Robust Representation Matching. [Topic: AEs] [pdf]
- Pratik Vaishnavi, Kevin Eykholt, Amir Rahmati. USENIX Security, 2022.
-
Seeing is Living? Rethinking the Security of Facial Liveness Verification in the Deepfake Era. [Topic: Deepfake] [pdf]
- Changjiang Li, Li Wang, Shouling Ji, Xuhong Zhang, Zhaohan Xi, Shanqing Guo, Ting Wang. USENIX Security, 2022.
-
On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning. [Topic: Machine-Unlearning] [pdf]
- Anvith Thudi, Hengrui Jia, Ilia Shumailov, Nicolas Papernot. USENIX Security, 2022.
-
Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture. [Topic: MIA] [pdf]
- Xinyu Tang, Saeed Mahloujifar, Liwei Song, Virat Shejwalkar, Milad Nasr, Amir Houmansadr, Prateek Mittal. USENIX Security, 2022.
-
Membership Inference Attacks and Defenses in Neural Network Pruning. [Topic: MIA] [pdf]
- Xiaoyong Yuan, Lan Zhang. USENIX Security, 2022.
-
Efficient Differentially Private Secure Aggregation for Federated Learning via Hardness of Learning with Errors. [Topic: DP & FL] [pdf]
- Timothy Stevens, Christian Skalka, Christelle Vincent, John Ring, Samuel Clark, Joseph Near. USENIX Security, 2022.
-
Who Are You (I Really Wanna Know)? Detecting Audio DeepFakes Through Vocal Tract Reconstruction. [Topic: Deepfake] [pdf]
- Logan Blue, Kevin Warren, Hadi Abdullah, Cassidy Gibson, Luis Vargas, Jessica O'Dell, Kevin Butler, Patrick Traynor. USENIX Security, 2022.
-
Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models. [Topic: MIAI] [pdf]
- Shagufta Mehnaz, Sayanton V. Dibbo, Ehsanul Kabir, Ninghui Li, Elisa Bertino. USENIX Security, 2022.
-
FLAME: Taming Backdoors in Federated Learning. [Topic: FL & Backdoor] [pdf]
- Thien Duc Nguyen, Phillip Rieger, Huili Chen, Hossein Yalame, Helen Möllering, Hossein Fereidooni, Samuel Marchal, Markus Miettinen, Azalia Mirhoseini, Shaza Zeitouni, Farinaz Koushanfar, Ahmad-Reza Sadeghi, Thomas Schneider. USENIX Security, 2022.
-
Synthetic Data – Anonymisation Groundhog Day. [Topic: Synthetic-Data] [pdf]
- Theresa Stadler, Bristena Oprisanu, Carmela Troncoso. USENIX Security, 2022.
-
On the Security Risks of AutoML. [Topic: NAS] [pdf]
- Ren Pang, Zhaohan Xi, Shouling Ji, Xiapu Luo, Ting Wang. USENIX Security, 2022.
-
Inference Attacks Against Graph Neural Networks. [Topic: IA & GNN] [pdf]
- Zhikun Zhang, Min Chen, Michael Backes, Yun Shen, Yang Zhang. USENIX Security, 2022.
-
Adversarial Detection Avoidance Attacks: Evaluating the robustness of perceptual hashing-based client-side scanning. [Topic: AEs] [pdf]
- Shubham Jain, Ana-Maria Crețu, Yves-Alexandre de Montjoye. USENIX Security, 2022.
-
Label Inference Attacks Against Vertical Federated Learning. [Topic: IA & FL] [pdf]
- Chong Fu, Xuhong Zhang, Shouling Ji, Jinyin Chen, Jingzheng Wu, Shanqing Guo, Jun Zhou, Alex X. Liu, Ting Wang. USENIX Security, 2022.
-
Rolling Colors: Adversarial Laser Exploits against Traffic Light Recognition. [Topic: AEs] [pdf]
- Chen Yan, Zhijian Xu, Zhanyuan Yin, Xiaoyu Ji, Wenyuan Xu. USENIX Security, 2022.
-
PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking. [Topic: AEs] [pdf]
- Chong Xiang, Arjun Nitin Bhagoji, Vikash Sehwag, Prateek Mittal. USENIX Security, 2021.
-
PrivSyn: Differentially Private Data Synthesis. [Topic: DP] [pdf]
- Zhikun Zhang, Tianhao Wang, Ninghui Li, Jean Honorio, Michael Backes, Shibo He, Jiming Chen, Yang Zhang. USENIX Security, 2021.
-
Muse: Secure Inference Resilient to Malicious Clients. [Topic: IA] [pdf]
- Ryan Lehmkuhl, Pratyush Mishra, Akshayaram Srinivasan, Raluca Ada Popa. USENIX Security, 2021.
-
Systematic Evaluation of Privacy Risks of Machine Learning Models. [Topic: IA] [pdf]
- Liwei Song, Prateek Mittal. USENIX Security, 2021.
-
Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers. [Topic: Backdoor] [pdf]
- Giorgio Severi, Jim Meyer, Scott Coull, Alina Oprea. USENIX Security, 2021.
-
Cerebro: A Platform for Multi-Party Cryptographic Collaborative Learning. [Topic: MPC] [pdf]
- Wenting Zheng, Ryan Deng, Weikeng Chen, Raluca Ada Popa, Aurojit Panda, Ion Stoica. USENIX Security, 2021.
-
T-Miner: A Generative Approach to Defend Against Trojan Attacks on DNN-based Text Classification. [Topic: Backdoor] [pdf]
- Ahmadreza Azizi, Ibrahim Asadullah Tahmid, Asim Waheed, Neal Mangaokar, Jiameng Pu, Mobin Javed, Chandan K. Reddy, Bimal Viswanath. USENIX Security, 2021.
-
Defeating DNN-Based Traffic Analysis Systems in Real-Time With Blind Adversarial Perturbations. [Topic: AEs] [pdf]
- Milad Nasr, Alireza Bahramali, Amir Houmansadr. USENIX Security, 2021.
-
Data Poisoning Attacks to Local Differential Privacy Protocols. [Topic: PA & DP] [pdf]
- Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong. USENIX Security, 2021.
-
How to Make Private Distributed Cardinality Estimation Practical, and Get Differential Privacy for Free. [Topic: DP] [pdf]
- Changhui Hu, Jin Li, Zheli Liu, Xiaojie Guo, Yu Wei, Xuan Guang, Grigorios Loukides, Changyu Dong. USENIX Security, 2021.
-
SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations. [Topic: AEs] [pdf]
- Giulio Lovisotto, Henry Turner, Ivo Sluganovic, Martin Strohmeier, Ivan Martinovic. USENIX Security, 2021.
-
WaveGuard: Understanding and Mitigating Audio Adversarial Examples. [Topic: AEs] [pdf]
- Shehzeen Hussain, Paarth Neekhara, Shlomo Dubnov, Julian McAuley, Farinaz Koushanfar. USENIX Security, 2021.
-
Graph Backdoor. [Topic: Backdoor] [pdf]
- Zhaohan Xi, Ren Pang, Shouling Ji, Ting Wang. USENIX Security, 2021.
-
Entangled Watermarks as a Defense against Model Extraction. [Topic: Watermark] [pdf]
- Hengrui Jia, Christopher A. Choquette-Choo, Varun Chandrasekaran, Nicolas Papernot. USENIX Security, 2021.
-
Too Good to Be Safe: Tricking Lane Detection in Autonomous Driving with Crafted Perturbations. [Topic: AEs] [pdf]
- Pengfei Jing, Qiyi Tang, Yuefeng Du, Lei Xue, Xiapu Luo, Ting Wang, Sen Nie, Shi Wu. USENIX Security, 2021.
-
Fantastic Four: Honest-Majority Four-Party Secure Computation With Malicious Security. [Topic: MPC] [pdf]
- Anders Dalskov, Daniel Escudero, Marcel Keller. USENIX Security, 2021.
-
Locally Differentially Private Analysis of Graph Statistics. [Topic: DP] [pdf]
- Jacob Imola, Takao Murakami, Kamalika Chaudhuri. USENIX Security, 2021.
-
Demon in the Variant: Statistical Analysis of DNNs for Robust Backdoor Contamination Detection. [Topic: Backdoor] [pdf]
- Di Tang, XiaoFeng Wang, Haixu Tang, Kehuan Zhang. USENIX Security, 2021.
-
Stealing Links from Graph Neural Networks. [Topic: GNN] [pdf]
- Xinlei He, Jinyuan Jia, Michael Backes, Neil Zhenqiang Gong, Yang Zhang. USENIX Security, 2021.
-
Adversarial Policy Training against Deep Reinforcement Learning. [Topic: AEs & RL] [pdf]
- Xian Wu, Wenbo Guo, Hua Wei, Xinyu Xing. USENIX Security, 2021.
-
Moderator: Moderating Text-to-Image Diffusion Models through Fine-grained Context-based Policies. [Topic: ML and Security: Large Language Models] [pdf]
- Peiran Wang, Qiyu Li, Longxuan Yu, Ziyao Wang, Ang Li, Haojian Jin. ACM CCS, 2024.
-
Training Robust ML-based Raw-Binary Malware Detectors in Hours, not Months. [Topic: Verification, Secure Architectures, and Network Security] [pdf]
- Keane Lucas, Weiran Lin, Lujo Bauer, Michael K. Reiter, Mahmood Sharif. ACM CCS, 2024.
-
TREC: APT Tactic / Technique Recognition via Few-Shot Provenance Subgraph Learning. [Topic: Verification, Secure Architectures, and Network Security] [pdf]
- Mingqi Lv, HongZhe Gao, Xuebo Qiu, Tieming Chen, Tiantian Zhu, Jinyin Chen, Shouling Ji. ACM CCS, 2024.
-
SAFARI: Speech-Associated Facial Authentication for AR/VR Settings via Robust VIbration Signatures. [Topic: Verification, Secure Architectures, and Network Security] [pdf]
- Tianfang Zhang, Qiufan Ji, Zhengkun Ye, Md Mojibur Rahman Redoy Akanda, Ahmed Tanvir Mahdad, Cong Shi, Yan Wang, Nitesh Saxena, Yingying Chen. ACM CCS, 2024.
-
KnowGraph: Knowledge-Enabled Anomaly Detection via Logical Reasoning on Graph Data. [Topic: Verification, Secure Architectures, and Network Security] [pdf]
- Andy Zhou, Xiaojun Xu, Ramesh Raghunathan, Alok Lal, Xinze Guan, Bin Yu, Bo Li. ACM CCS, 2024.
-
Understanding Implosion in Text-to-Image Generative Models. [Topic: ML and Security: Large Language Models] [pdf]
- Wenxin Ding, Cathy Y. Li, Shawn Shan, Ben Y. Zhao, Haitao Zheng. ACM CCS, 2024.
-
Legilimens: Practical and Unified Content Moderation for Large Language Model Services. [Topic: ML and Security: Large Language Models] [pdf]
- Jialin Wu, Jiangyi Deng, Shengyuan Pang, Yanjiao Chen, Jiayang Xu, Xinfeng Li, Wenyuan Xu. ACM CCS, 2024.
-
Optimization-based Prompt Injection Attack to LLM-as-a-Judge. [Topic: ML and Security: Machine Learning Attacks] [pdf]
- Jiawen Shi, Zenghui Yuan, Yinuo Liu, Yue Huang, Pan Zhou, Lichao Sun, Neil Zhenqiang Gong. ACM CCS, 2024.
-
PromSec: Prompt Optimization for Secure Generation of Functional Source Code with Large Language Models (LLMs). [Topic: ML and Security: Generative Models] [pdf]
- Mahmoud Nazzal, Issa Khalil, Abdallah Khreishah, NhatHai Phan. ACM CCS, 2024.
-
Certifiable Black-Box Attacks with Randomized Adversarial Examples: Breaking Defenses with Provable Confidence. [Topic: ML and Security: Machine Learning Attacks] [pdf]
- Hanbin Hong, Xinyu Zhang, Binghui Wang, Zhongjie Ba, Yuan Hong. ACM CCS, 2024.
-
Phantom: Untargeted Poisoning Attacks on Semi-Supervised Learning (Full Version). [Topic: ML and Security: Machine Learning Attacks] [pdf]
- Jonathan Knauer, Phillip Rieger, Hossein Fereidooni, Ahmad-Reza Sadeghi. ACM CCS, 2024.
-
Zero-Query Adversarial Attack on Black-box Automatic Speech Recognition Systems. [Topic: ML and Security: Machine Learning Attacks] [pdf]
- Zheng Fang, Tao Wang, Lingchen Zhao, Shenyi Zhang, Bowen Li, Yunjie Ge, Qi Li, Chao Shen, Qian Wang. ACM CCS, 2024.
-
SUB-PLAY: Adversarial Policies against Partially Observed Multi-Agent Reinforcement Learning Systems. [Topic: ML and Security: Machine Learning Attacks] [pdf]
- Oubo Ma, Yuwen Pu, Linkang Du, Yang Dai, Ruo Wang, Xiaolei Liu, Yingcai Wu, Shouling Ji. ACM CCS, 2024.
-
Neural Dehydration: Effective Erasure of Black-box Watermarks from DNNs with Limited Data. [Topic: ML and Security: Machine Learning Attacks] [pdf]
- Yifan Lu, Wenxuan Li, Mi Zhang, Xudong Pan, Min Yang. ACM CCS, 2024.
-
Is Difficulty Calibration All We Need? Towards More Practical Membership Inference Attacks. [Topic: MIA] [pdf]
- Yu He, Boheng Li, Yao Wang, Mengda Yang, Juan Wang, Hongxin Hu, Xingyu Zhao. ACM CCS, 2024.
-
Evaluations of Machine Learning Privacy Defenses are Misleading. [Topic: MIA] [pdf]
- Michael Aerni, Jie Zhang, Florian Tramèr. ACM CCS, 2024.
-
A Unified Membership Inference Method for Visual Self-supervised Encoder via Part-aware Capability. [Topic: MIA] [pdf]
- Jie Zhu, Jirong Zha, Ding Li, Leye Wang. ACM CCS, 2024.
-
The Janus Interface: How Fine-Tuning in Large Language Models Amplifies the Privacy Risks. [Topic: LLM] [pdf]
- Xiaoyi Chen, Siyuan Tang, Rui Zhu, Shijun Yan, Lei Jin, Zihao Wang, Liya Su, Zhikun Zhang, XiaoFeng Wang, Haixu Tang. ACM CCS, 2024.
-
A General Framework for Data-Use Auditing of ML Models. [Topic: Auditing] [pdf]
- Zonghao Huang, Neil Zhenqiang Gong, Michael K. Reiter. ACM CCS, 2024.
-
Dye4AI: Assuring Data Boundary on Generative AI Services. [Topic: ML and Security: Generative Models] [pdf]
- Shu Wang, Kun Sun, Yan Zhai. ACM CCS, 2024.
-
I Don't Know You, But I Can Catch You: Real-Time Defense against Diverse Adversarial Patches for Object Detectors. [Topic: Privacy and Anonymity: Privacy Attacks Meet ML] [pdf]
- Zijin Lin, Yue Zhao, Kai Chen, Jinwen He. ACM CCS, 2024.
-
AirGapAgent: Protecting Privacy-Conscious Conversational Agents. [Topic: Privacy and Anonymity: Privacy Attacks Meet ML] [pdf]
- Eugene Bagdasarian, Ren Yi, Sahra Ghalebikesabi, Peter Kairouz, Marco Gruteser, Sewoong Oh, Borja Balle, Daniel Ramage. ACM CCS, 2024.
-
ERASER: Machine Unlearning in MLaaS via an Inference Serving-Aware Approach. [Topic: Privacy and Anonymity: Privacy Attacks Meet ML] [pdf]
- Yuke Hu, Jian Lou, Jiaqi Liu, Wangze Ni, Feng Lin, Zhan Qin, Kui Ren. ACM CCS, 2024.
-
NeuJeans: Private Neural Network Inference with Joint Optimization of Convolution and FHE Bootstrapping. [Topic: FHE] [pdf]
- Jae Hyung Ju, Jaiyoung Park, Jongmin Kim, Minsik Kang, Donghwan Kim, Jung Hee Cheon, Jung Ho Ahn. ACM CCS, 2024.
-
Ents: An Efficient Three-party Training Framework for Decision Trees by Communication Optimization. [Topic: MPC] [pdf]
- Guopeng Lin, Weili Han, Wenqiang Ruan, Ruisheng Zhou, Lushan Song, Bingshuai Li, Yunfeng Shao. ACM CCS, 2024.
-
zkLLM: Zero Knowledge Proofs for Large Language Models. [Topic: ZKP & LLM] [pdf]
- Haochen Sun, Jason Li, Hongyang Zhang. ACM CCS, 2024.
-
Fisher Information guided Purification against Backdoor Attacks. [Topic: ML and Security: Model Security] [pdf]
- Nazmul Karim, Abdullah Al Arafat, Adnan Siraj Rakin, Zhishan Guo, Nazanin Rahnavard. ACM CCS, 2024.
-
BadMerging: Backdoor Attacks Against Model Merging. [Topic: ML and Security: Model Security] [pdf]
- Jinghuai Zhang, Jianfeng Chi, Zheng Li, Kunlin Cai, Yang Zhang, Yuan Tian. ACM CCS, 2024.
-
SafeGen: Mitigating Sexually Explicit Content Generation in Text-to-Image Models. [Topic: Usability and Measurement: AI Risks] [pdf]
- Xinfeng Li, Yuchen Yang, Jiangyi Deng, Chen Yan, Yanjiao Chen, Xiaoyu Ji, Wenyuan Xu. ACM CCS, 2024.
-
Image-Perfect Imperfections: Safety, Bias, and Authenticity in the Shadow of Text-To-Image Model Evolution. [Topic: Usability and Measurement: AI Risks] [pdf]
- Yixin Wu, Yun Shen, Michael Backes, Yang Zhang. ACM CCS, 2024.
-
Decoding the Secrets of Machine Learning in Malware Classification: A Deep Dive into Datasets, Feature Extraction, and Model Performance. [Topic: Machine Learning Applications I] [pdf]
- Savino Dambra, Yufei Han, Simone Aonzo, Platon Kotzias, Antonino Vitale, Juan Caballero, Davide Balzarotti, Leyla Bilge. ACM CCS, 2023.
-
Efficient Query-Based Attack against ML-Based Android Malware Detection under Zero Knowledge Setting. [Topic: Machine Learning Applications I] [pdf]
- Ping He, Yifan Xia, Xuhong Zhang, Shouling Ji. ACM CCS, 2023.
-
Your Battery Is a Blast! Safeguarding Against Counterfeit Batteries with Authentication. [Topic: Machine Learning Applications I] [pdf]
- Francesco Marchiori, Mauro Conti. ACM CCS, 2023.
-
Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information. [Topic: Machine Learning Attacks I] [pdf]
- Yi Zeng, Minzhou Pan, Hoang Anh Just, Lingjuan Lyu, Meikang Qiu, Ruoxi Jia. ACM CCS, 2023.
-
Stateful Defenses for Machine Learning Models Are Not Yet Secure Against Black-box Attacks. [Topic: Machine Learning Attacks I] [pdf]
- Ryan Feng, Ashish Hooda, Neal Mangaokar, Kassem Fawaz, Somesh Jha, Atul Prakash. ACM CCS, 2023.
-
Evading Watermark based Detection of AI-Generated Content. [Topic: Machine Learning Attacks II] [pdf]
- Zhengyuan Jiang, Jinghuai Zhang, Neil Zhenqiang Gong. ACM CCS, 2023.
-
Verifiable Learning for Robust Tree Ensembles. [Topic: Language Models & Verification] [pdf]
- Stefano Calzavara, Lorenzo Cazzaro, Giulio Ermanno Pibiri, Nicola Prezza. ACM CCS, 2023.
-
Large Language Models for Code: Security Hardening and Adversarial Testing. [Topic: Language Models & Verification] [pdf]
- Jingxuan He, Martin Vechev. ACM CCS, 2023.
-
Characterizing and Detecting Non-Consensual Photo Sharing on Social Networks. [Topic: Non-consensual Sharing] [pdf]
- Tengfei Zheng, Tongqing Zhou, Qiang Liu, Kui Wu, Zhiping Cai. ACM CCS, 2022.
-
DPIS: An Enhanced Mechanism for Differentially Private SGD with Importance Sampling. [Topic: DP & DNN] [pdf]
- Jianxin Wei, Ergute Bao, Xiaokui Xiao, Yin Yang. ACM CCS, 2022.
-
DriveFuzz: Discovering Autonomous Driving Bugs through Driving Quality-Guided Fuzzing. [Topic: AD] [pdf]
- Seulbae Kim, Major Liu, Junghwan "John" Rhee, Yuseok Jeon, Yonghwi Kwon, Chung Hwan Kim. ACM CCS, 2022.
-
EIFFeL: Ensuring Integrity for Federated Learning. [Topic: FL] [pdf]
- Amrita Roy Chowdhury, Chuan Guo, Somesh Jha, Laurens van der Maaten. ACM CCS, 2022.
-
Eluding Secure Aggregation in Federated Learning via Model Inconsistency. [Topic: FL] [pdf]
- Dario Pasquini, Danilo Francati, Giuseppe Ateniese. ACM CCS, 2022.
-
Enhanced Membership Inference Attacks against Machine Learning Models. [Topic: MI] [pdf]
- Jiayuan Ye, Aadyaa Maddi, Sasi Kumar Murakonda, Vincent Bindschaedler, Reza Shokri. ACM CCS, 2022.
-
Feature Inference Attack on Shapley Values. [Topic: MLaaS] [pdf]
- Xinjian Luo, Yangfan Jiang, Xiaokui Xiao. ACM CCS, 2022.
-
Graph Unlearning. [Topic: Machine Unlearning] [pdf]
- Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang. ACM CCS, 2022.
-
Group Property Inference Attacks Against Graph Neural Networks. [Topic: GNNs] [pdf]
- Xiuling Wang, Wendy Hui Wang. ACM CCS, 2022.
-
Harnessing Perceptual Adversarial Patches for Crowd Counting. [Topic: AEs] [pdf]
- Shunchang Liu, Jiakai Wang, Aishan Liu, Yingwei Li, Yijie Gao, Xianglong Liu, Dacheng Tao. ACM CCS, 2022.
-
Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation. [Topic: ML] [pdf]
- Zayd Hammoudeh, Daniel Lowd. ACM CCS, 2022.
-
LPGNet: Link Private Graph Networks for Node Classification. [Topic: GCNs & DP] [pdf]
- Aashish Kolluri, Teodora Baluta, Bryan Hooi, Prateek Saxena. ACM CCS, 2022.
-
LoneNeuron: A Highly-Effective Feature-Domain Neural Trojan Using Invisible and Polymorphic Watermarks. [Topic: DNNs & Watermark] [pdf]
- Zeyan Liu, Fengjun Li, Zhu Li, Bo Luo. ACM CCS, 2022.
-
Membership Inference Attacks and Generalization: A Causal Perspective. [Topic: MI] [pdf]
- Teodora Baluta, Shiqi Shen, S. Hitarth, Shruti Tople, Prateek Saxena. ACM CCS, 2022.
-
Membership Inference Attacks by Exploiting Loss Trajectory. [Topic: MI] [pdf]
- Yiyong Liu, Zhengyu Zhao, Michael Backes, Yang Zhang. ACM CCS, 2022.
-
Order-Disorder: Imitation Adversarial Attacks for Black-box Neural Ranking Models. [Topic: IR] [pdf]
- Jiawei Liu, Yangyang Kang, Di Tang, Kaisong Song, Changlong Sun, Xiaofeng Wang, Wei Lu, Xiaozhong Liu. ACM CCS, 2022.
-
Perception-Aware Attack: Creating Adversarial Music via Reverse-Engineering Human Perception. [Topic: AEs] [pdf]
- Rui Duan, Zhe Qu, Shangqing Zhao, Leah Ding, Yao Liu, Zhuo Lu. ACM CCS, 2022.
-
Physical Hijacking Attacks against Object Trackers. [Topic: AV] [pdf]
- Raymond Muller, Yanmao Man, Z. Berkay Celik, Ming Li, Ryan Gerdes. ACM CCS, 2022.
-
Post-breach Recovery: Protection against White-box Adversarial Examples for Leaked DNN Models. [Topic: DNN] [pdf]
- Shawn Shan, Wenxin Ding, Emily Wenger, Haitao Zheng, Ben Y. Zhao. ACM CCS, 2022.
-
QuerySnout: Automating the Discovery of Attribute Inference Attacks against Query-Based Systems. [Topic: QBS] [pdf]
- Ana-Maria Crețu, Florimond Houssiau, Antoine Cully, Yves-Alexandre de Montjoye. ACM CCS, 2022.
-
SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders. [Topic: Watermark] [pdf]
- Tianshuo Cong, Xinlei He, Yang Zhang. ACM CCS, 2022.
-
SpecPatch: Human-In-The-Loop Adversarial Audio Spectrogram Patch Attack on Speech Recognition. [Topic: AEs] [pdf]
- Hanqing Guo, Yuanda Wang, Nikolay Ivanov, Li Xiao, Qiben Yan. ACM CCS, 2022.
-
StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning. [Topic: EaaS] [pdf]
- Yupei Liu, Jinyuan Jia, Hongbin Liu, Neil Gong. ACM CCS, 2022.
-
Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets. [Topic: ML] [pdf]
- Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini. ACM CCS, 2022.
-
Understanding Real-world Threats to Deep Learning Models in Android Apps. [Topic: AEs] [pdf]
- Zizhuang Deng, Kai Chen, Guozhu Meng, Xiaodong Zhang, Ke Xu, Yao Cheng. ACM CCS, 2022.
-
When Evil Calls: Targeted Adversarial Voice over IP Network. [Topic: AEs] [pdf]
- Han Liu, Zhiyuan Yu, Mingming Zha, XiaoFeng Wang, William Yeoh, Yevgeniy Vorobeychik, Ning Zhang. ACM CCS, 2022.
-
Why So Toxic? Measuring and Triggering Toxic Behavior in Open-Domain Chatbots. [Topic: AEs] [pdf]
- Wai Man Si, Michael Backes, Jeremy Blackburn, Emiliano De Cristofaro, Gianluca Stringhini, Savvas Zannettou, Yang Zhang. ACM CCS, 2022.
-
"Is your explanation stable?": A Robustness Evaluation Framework for Feature Attribution. [Topic: NNs] [pdf]
- Yuyou Gan, Yuhao Mao, Xuhong Zhang, Shouling Ji, Yuwen Pu, Meng Han, Jianwei Yin, Ting Wang. ACM CCS, 2022.
-
Cert-RNN: Towards Certifying the Robustness of Recurrent Neural Networks. [Topic: AEs] [pdf]
- Tianyu Du, Shouling Ji, Lujia Shen, Yao Zhang, Jinfeng Li, Jie Shi, Chengfang Fang, Jianwei Yin, Raheem Beyah, Ting Wang. ACM CCS, 2021.
-
AHEAD: Adaptive Hierarchical Decomposition for Range Query under Local Differential Privacy. [Topic: LDP] [pdf]
- Linkang Du, Zhikun Zhang, Shaojie Bai, Changchang Liu, Shouling Ji, Peng Cheng, Jiming Chen. ACM CCS, 2021.
-
Unleashing the Tiger: Inference Attacks on Split Learning. [Topic: SL] [pdf]
- Dario Pasquini, Giuseppe Ateniese, Massimo Bernaschi. ACM CCS, 2021.
-
TableGAN-MCA: Evaluating Membership Collisions of GAN-Synthesized Tabular Data Releasing. [Topic: GAN] [pdf]
- Aoting Hu, Renjie Xie, Zhigang Lu, Aiqun Hu, Minhui Xue. ACM CCS, 2021.
-
"I need a better description": An Investigation Into User Expectations For Differential Privacy. [Topic: DP] [pdf]
- Rachel Cummings, Gabriel Kaptchuk, Elissa M. Redmiles. ACM CCS, 2021.
-
Locally Private Graph Neural Networks. [Topic: GNNs] [pdf]
- Sina Sajadmanesh, Daniel Gatica-Perez. ACM CCS, 2021.
-
A One-Pass Distributed and Private Sketch for Kernel Sums with Applications to Machine Learning at Scale. [Topic: DP] [pdf]
- Benjamin Coleman, Anshumali Shrivastava. ACM CCS, 2021.
-
On the Robustness of Domain Constraints. [Topic: AEs] [pdf]
- Ryan Sheatsley, Blaine Hoak, Eric Pauley, Yohan Beugin, Michael J. Weisman, Patrick McDaniel. ACM CCS, 2021.
-
Membership Leakage in Label-Only Exposures. [Topic: MI] [pdf]
- Zheng Li, Yang Zhang. ACM CCS, 2021.
-
Hidden Backdoors in Human-Centric Language Models. [Topic: Backdoor] [pdf]
- Shaofeng Li, Hui Liu, Tian Dong, Benjamin Zi Hao Zhao, Minhui Xue, Haojin Zhu, Jialiang Lu. ACM CCS, 2021.
-
DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation. [Topic: DP] [pdf]
- Boxin Wang, Fan Wu, Yunhui Long, Luka Rimanic, Ce Zhang, Bo Li. ACM CCS, 2021.
-
DeepAID: Interpreting and Improving Deep Learning-based Anomaly Detection in Security Applications. [Topic: DL] [pdf]
- Dongqi Han, Zhiliang Wang, Wenqi Chen, Ying Zhong, Su Wang, Han Zhang, Jiahai Yang, Xingang Shi, Xia Yin. ACM CCS, 2021.
-
Honest-but-Curious Nets: Sensitive Attributes of Private Inputs Can Be Secretly Coded into the Classifiers' Outputs. [Topic: Classifier] [pdf]
- Mohammad Malekzadeh, Anastasia Borovykh, Deniz Gunduz. ACM CCS, 2021.
-
Differential Privacy for Directional Data. [Topic: DP] [pdf]
- Benjamin Weggenmann, Florian Kerschbaum. ACM CCS, 2021.
-
"Hello, It's Me": Deep Learning-based Speech Synthesis Attacks in the Real World. [Topic: Speech Synthesis Attack] [pdf]
- Emily Wenger, Max Bronckers, Christian Cianfarani, Jenna Cryan, Angela Sha, Haitao Zheng, Ben Y. Zhao. ACM CCS, 2021.
-
EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning. [Topic: MI] [pdf]
- Hongbin Liu, Jinyuan Jia, Wenjie Qu, Neil Gong. ACM CCS, 2021.
-
Subpopulation Data Poisoning Attacks. [Topic: Poisoning Attack] [pdf]
- Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea. ACM CCS, 2021.
-
Continuous Release of Data Streams under both Centralized and Local Differential Privacy. [Topic: DP] [pdf]
- Tianhao Wang, Joann Qiongna Chen, Zhikun Zhang, Dong Su, Yueqiang Cheng, Zhou Li, Ninghui Li, Somesh Jha. ACM CCS, 2021.
-
When Machine Unlearning Jeopardizes Privacy. [Topic: MI] [pdf]
- Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang. ACM CCS, 2021.
-
DetectorGuard: Provably Securing Object Detectors against Localized Patch Hiding Attacks. [Topic: AEs] [pdf]
- Chong Xiang, Prateek Mittal. ACM CCS, 2021.
-
I Can See the Light: Attacks on Autonomous Vehicles Using Invisible Lights. [Topic: AV] [pdf]
- Wei Wang, Yao Yao, Xin Liu, Xiang Li, Pei Hao, Ting Zhu. ACM CCS, 2021.
-
Backdoor Pre-trained Models Can Transfer to All. [Topic: Backdoor] [pdf]
- Lujia Shen, Shouling Ji, Xuhong Zhang, Jinfeng Li, Jing Chen, Jie Shi, Chengfang Fang, Jianwei Yin, Ting Wang. ACM CCS, 2021.
-
Quantifying and Mitigating Privacy Risks of Contrastive Learning. [Topic: CL] [pdf]
- Xinlei He, Yang Zhang. ACM CCS, 2021.
-
Membership Inference Attacks Against Recommender Systems. [Topic: MI] [pdf]
- Minxing Zhang, Zihan Wang, Yang Zhang, Zhaochun Ren, Pengjie Ren, Zhumin Chen, Pengfei Hu. ACM CCS, 2021.
-
Learning Security Classifiers with Verified Global Robustness Properties. [Topic: Classifier] [pdf]
- Yizheng Chen, Shiqi Wang, Yue Qin, Xiaojing Liao, Suman Jana, David Wagner. ACM CCS, 2021.
-
Robust Adversarial Attacks Against DNN-Based Wireless Communication Systems. [Topic: AEs] [pdf]
- Alireza Bahramali, Milad Nasr, Amir Houmansadr, Dennis Goeckel, Don Towsley. ACM CCS, 2021.
-
Can We Use Arbitrary Objects to Attack LiDAR Perception in Autonomous Driving? [Topic: AEs] [pdf]
- Yi Zhu, Chenglin Miao, Tianhang Zheng, Foad Hajiaghajani, Lu Su, Chunming Qiao. ACM CCS, 2021.
-
Feature Indistinguishable Attack to Circumvent Trapdoor-enabled Defense. [Topic: AEs] [Code] [pdf]
- Chaoxiang He, Bin (Benjamin) Zhu, Xiaojing Ma, Hai Jin, Shengshan Hu. ACM CCS, 2021.
-
A Hard Label Black-box Adversarial Attack Against Graph Neural Networks. [Topic: AEs & GNN] [pdf]
- Jiaming Mu, Binghui Wang, Qi Li, Kun Sun, Mingwei Xu, Zhuotao Liu. ACM CCS, 2021.
-
Reverse Attack: Black-box Attacks on Collaborative Recommendation. [Topic: CF & Poisoning Attack] [pdf]
- Yihe Zhang, Xu Yuan, Jin Li, Jiadong Lou, Li Chen, Nianfeng Tzeng. ACM CCS, 2021.
-
zkCNN: Zero Knowledge Proofs for Convolutional Neural Network Predictions and Accuracy. [Topic: CNN] [pdf]
- Tianyi Liu, Xiang Xie, Yupeng Zhang. ACM CCS, 2021.
-
Black-box Adversarial Attacks on Commercial Speech Platforms with Minimal Information. [Topic: AEs] [pdf]
- Baolin Zheng, Peipei Jiang, Qian Wang, Qi Li, Chao Shen, Cong Wang, Yunjie Ge, Qingyang Teng, Shenyi Zhang. ACM CCS, 2021.
-
AI-Lancet: Locating Error-inducing Neurons to Optimize Neural Networks. [Topic: DNN] [pdf]
- Yue Zhao, Hong Zhu, Kai Chen, Shengzhi Zhang. ACM CCS, 2021.