📄 Paper • 📊 Slides (coming soon) • 🖼️ Poster (coming soon) • 🎬 Video (coming soon)
Cautious predictions—where a machine learning model abstains when uncertain—are crucial for limiting harmful errors in safety-critical applications. In this work, we identify a novel threat: a dishonest institution can exploit these mechanisms to discriminate or unjustly deny services under the guise of uncertainty. We demonstrate the practicality of this threat by introducing an uncertainty-inducing attack called Mirage, which deliberately reduces confidence in targeted input regions, thereby covertly disadvantaging specific individuals. At the same time, Mirage maintains high predictive performance across all data points. To counter this threat, we propose Confidential Guardian, a framework that analyzes calibration metrics on a reference dataset to detect artificially suppressed confidence. Additionally, it employs zero-knowledge proofs of verified inference to ensure that reported confidence scores genuinely originate from the deployed model. This prevents the provider from fabricating arbitrary model confidence values while protecting the model’s proprietary details. Our results confirm that Confidential Guardian effectively prevents the misuse of cautious predictions, providing verifiable assurances that abstention reflects genuine model uncertainty rather than malicious intent.
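To make the attack described above concrete, below is a minimal, hypothetical sketch of an uncertainty-inducing training objective in PyTorch: standard cross-entropy keeps predictions accurate everywhere, while an extra penalty pulls the predictive distribution toward uniform only on a targeted region, lowering the reported confidence there. This is illustrative only and is not the exact Mirage objective (see the paper and `mirage.py`); the function name, the `in_target_region` mask, and the `alpha` weight are assumptions made for this sketch.

```python
import torch
import torch.nn.functional as F

def uncertainty_inducing_loss(logits, labels, in_target_region, alpha=1.0):
    # Illustrative sketch only -- not the repository's Mirage implementation.
    # Standard cross-entropy on every example keeps overall accuracy high.
    ce = F.cross_entropy(logits, labels)

    # Pull the predictive distribution toward uniform on targeted examples only,
    # which lowers the reported confidence (max softmax probability) there.
    log_probs = F.log_softmax(logits, dim=-1)
    num_classes = logits.shape[-1]
    uniform = torch.full_like(log_probs, 1.0 / num_classes)
    kl_to_uniform = F.kl_div(log_probs, uniform, reduction="none").sum(dim=-1)

    if in_target_region.any():
        penalty = kl_to_uniform[in_target_region].mean()
    else:
        penalty = logits.new_zeros(())

    return ce + alpha * penalty
```

Here `in_target_region` would be a boolean mask marking the inputs a dishonest provider wants to disadvantage, and `alpha` trades off accuracy against how strongly confidence is suppressed.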
We are using `uv` as our package manager (and we think you should, too)! It is a fast Python dependency management tool and a drop-in replacement for `pip`.
```bash
# install the uv package manager
pip install uv

# install this repository in editable mode
# (assumes a virtual environment, e.g. created via `uv venv`, if one is not already present)
uv pip install -e .

# activate the environment and launch the notebooks
source .venv/bin/activate
jupyter notebook
```
- `mirage.py`: Contains code for the Mirage attack discussed in the paper.
- `conf_guard.py`: Contains code for computing calibration metrics and reliability diagrams (a minimal illustrative sketch follows after this list).
- `gaussian_experiments.ipynb`: Notebook for the synthetic Gaussian experiments.
- `image_experiments.ipynb`: Notebook for the image experiments on CIFAR-100 and UTKFace.
- `tabular_experiments.ipynb`: Notebook for the tabular experiments on Adult and Credit.
- `regression_experiments.ipynb`: Notebook for the regression experiments.
- `zkp/`: Code for running the zero-knowledge proofs. See the README.md in that subfolder for details.
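For intuition on the calibration side, here is a minimal sketch of a reliability-diagram check: predictions on a reference dataset are binned by confidence, and each bin's empirical accuracy is compared to its mean confidence. This is not the `conf_guard.py` implementation; the function and variable names are illustrative assumptions.

```python
import numpy as np

def reliability_gaps(confidences, correct, n_bins=15):
    """Per-bin gap between empirical accuracy and mean confidence,
    plus the expected calibration error (ECE) over all bins."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    gaps, ece = [], 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            gaps.append(np.nan)  # empty bin
            continue
        acc = correct[mask].mean()    # fraction of correct predictions in the bin
        conf = confidences[mask].mean()  # average reported confidence in the bin
        gaps.append(acc - conf)
        ece += mask.mean() * abs(acc - conf)
    return np.array(gaps), ece

# confidences: max softmax probability per reference point (values in [0, 1])
# correct:     1.0 if the corresponding prediction was correct, else 0.0
# gaps, ece = reliability_gaps(confidences, correct)
```

On a benign, well-calibrated model the per-bin gaps hover around zero; systematic positive gaps (accuracy well above reported confidence) on a reference dataset are the kind of artificially suppressed confidence the abstract describes.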
```bibtex
@inproceedings{rabanser2025confidential,
  title     = {Confidential Guardian: Cryptographically Prohibiting the Abuse of Model Abstention},
  author    = {Stephan Rabanser and Ali Shahin Shamsabadi and Olive Franzese and Xiao Wang and Adrian Weller and Nicolas Papernot},
  year      = {2025},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
}
```