This repository is the official implementation of our paper AI-Face: A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark
🎉🎉🎉 Our AI-Face has been accepted by CVPR 2025!
🚨Call for Participation: NeurIPS 2025 Competition on Fairness in AI Face Detection!
We’re thrilled to announce the Competition on Fairness in AI Face Detection, held at the 2nd Workshop on New Trends in AI-Generated Media and Security (AIMS) @ NeurIPS 2025.
🔗Competition Website: https://sites.google.com/view/aifacedetection/home
Welcome to our work AI-Face, a fairness benchmark for AI-generated face detection.
In this work, we propose: (1) a million-scale, demographically annotated AI-generated face dataset covering 37 distinct generation methods; and (2) a comprehensive fairness benchmark for training, evaluation, and analysis.
AI-Face Dataset Highlights: The key features of our proposed AI-Face dataset are as follows:
✅ Demographic Annotation: AI-Face provides Skin Tone, Gender, and Age annotations, which are essential for measuring bias.
✅ Forgery Diversity: AI-Face comprises 37 distinct deepfake techniques spanning Deepfake Videos, GANs, and Diffusion Models (both representative and SOTA methods are included), facilitating the detection of today's SOTA deepfakes and AIGC images.
✅ Forgery Scale: AI-Face offers a million-scale collection of AI-generated face images.
The AI-Face Dataset is licensed under CC BY-NC-ND 4.0
You can access and download the images of AI-Face dataset here.
If you would like to access the demographic annotations of the AI-Face Dataset, please download and sign the EULA. Please upload the signed EULA to the Google Form and fill in the required details. Once the form is approved, the annotations download link will be sent to you. If you have any questions, please send an email to lin1785@purdue.edu, hu968@purdue.edu
You can run the following script to configure the necessary environment:
cd AI-Face-FairnessBench
conda create -n FairnessBench python=3.9.0
conda activate FairnessBench
pip install -r requirements.txt
After getting our AI-Face dataset, put the provided train.csv and test.csv within the AI-Face dataset under ./dataset.
train.csv and test.csv are formatted as follows:
Column | Description |
---|---|
Image Path | Path to the image file |
Gender | Gender label: 1 - Male, 0 - Female |
Age | Age label: 0 - Child, 1 - Youth, 3 - Adult, 4 - Middle-aged, 5 - Senior |
Skin Tone | Skin Tone label: Monk Skin Tone Scale |
Intersection | 0-(Female,Light), 1-(Female,Medium), 2-(Female,Dark), 3-(Male,Light), 4-(Male,Medium), 5-(Male,Dark) |
Target | Label indicating real (0) or fake (1) image |
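The split files above can be read with the standard library. Below is a minimal sketch that loads a split CSV (column names follow the table above; the path is illustrative) and counts real vs. fake samples:

```python
# Sketch: read an AI-Face split CSV and count real vs. fake samples.
# Column names follow the table above; the path below is illustrative.
import csv
from collections import Counter

def label_counts(csv_path):
    """Return a Counter over the Target column ('0' = real, '1' = fake)."""
    with open(csv_path, newline="") as f:
        return Counter(row["Target"] for row in csv.DictReader(f))

# Example: label_counts("./dataset/train.csv")
```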
- Download image tar files.
- Untar each file.
- Organize the data as shown below:
AI-Face Dataset
├── deepfakes
├── dfd
├── dfdc
├── ...
├── GANs
├── AttGAN
├── STGAN
├── ...
├── DMs
├── Palette
├── StableDiffusion1.5
├── ...
├── Real
├── FFHQ
├── imdb_wiki
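The download-and-untar steps above can be scripted. The following is a minimal sketch, assuming the downloaded archives sit together in one folder (the directory names are illustrative):

```python
# Sketch: untar every downloaded archive into a destination folder so the
# subsets end up in the layout shown above. Paths are illustrative.
import tarfile
from pathlib import Path

def extract_parts(download_dir, dest_dir):
    """Extract all *.tar files found in download_dir into dest_dir."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    for part in sorted(Path(download_dir).glob("*.tar")):
        with tarfile.open(part) as tf:
            tf.extractall(dest)  # archives come from the official release

# Example: extract_parts("downloads", "AI-Face Dataset")
```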
Before running the training code, make sure you load the pre-trained weights. You can download Xception model trained on ImageNet (through this link).
To run the training code, first go to the ./training/ folder, then run train_test.py:
cd training
python train_test.py
You can adjust the parameters in train_test.py, e.g., model, batch size, learning rate, etc.:
- --lr: learning rate; default is 0.0005.
- --train_batchsize: batch size for training; default is 128.
- --test_batchsize: batch size for testing; default is 32.
- --datapath: path to the dataset, i.e., /path/to/dataset.
- --model: detector name, one of ['xception', 'efficientnet', 'core', 'ucf', 'srm', 'f3net', 'spsl', 'daw_fdd', 'dag_fdd', 'fair_df_detector']; default is 'xception'.
- --dataset_type: dataset type loaded for the detectors; default is 'no_pair'. For 'ucf' and 'fair_df_detector', it should be 'pair'.
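As a sketch of how these flags fit together, the snippet below mirrors the documented options and defaults with argparse. train_test.py defines its own parser; this only illustrates usage (e.g., pairing 'ucf' with the 'pair' dataset type):

```python
# Minimal argparse sketch mirroring the documented flags and defaults.
# train_test.py defines its own parser; this is only illustrative.
import argparse

def build_parser():
    p = argparse.ArgumentParser(description="AI-Face fairness benchmark training (sketch)")
    p.add_argument("--lr", type=float, default=0.0005)
    p.add_argument("--train_batchsize", type=int, default=128)
    p.add_argument("--test_batchsize", type=int, default=32)
    p.add_argument("--datapath", type=str, default="/path/to/dataset")
    p.add_argument("--model", type=str, default="xception",
                   choices=["xception", "efficientnet", "core", "ucf", "srm", "f3net",
                            "spsl", "daw_fdd", "dag_fdd", "fair_df_detector"])
    p.add_argument("--dataset_type", type=str, default="no_pair",
                   choices=["no_pair", "pair"])
    return p

# 'ucf' requires the paired dataset type:
args = build_parser().parse_args(["--model", "ucf", "--dataset_type", "pair"])
```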
To train ViT-B/16 and UnivFD, please run train_test_vit.py and train_test_clip.py, respectively.
Checkpoints of detectors trained on our AI-Face can be downloaded through the link.
This is our second version of the dataset; here, we list the key differences from the first version. For more details on the initial version, refer to our paper.
- Annotation Difference. The first version provides Gender, Age, and Race categories, formatted as follows; the second version updates these annotations (see the table above).
Column | Description |
---|---|
Image Path | Path to the image file |
Uncertainty Score Gender | Uncertainty score for gender annotation |
Uncertainty Score Age | Uncertainty score for age annotation |
Uncertainty Score Race | Uncertainty score for race annotation |
Ground Truth Gender | Gender label: 1 - Male, 0 - Female |
Ground Truth Age | Age label: 0 - Young, 1 - Middle-aged, 2 - Senior, 3 - Others |
Ground Truth Race | Race label: 0 - Asian, 1 - White, 2 - Black, 3 - Others |
Intersection | 0-(Male,Asian), 1-(Male,White), 2-(Male,Black), 3-(Male,Others), 4-(Female,Asian), 5-(Female,White), 6-(Female,Black), 7-(Female,Others) |
Target | Label indicating real (0) or fake (1) image |
- We used VGGFace2 for annotator training in the first version, while the second version uses IMDB-WIKI; the gender and age labels of IMDB-WIKI were crawled from Wikipedia and the IMDb website, which ensures higher and more reliable label quality.
- The differences between the subsets of the two versions of AI-Face are as follows:
Category | AI-Face v1 | AI-Face v2 |
---|---|---|
Deepfake Video Datasets | FF++, DFDC, DFC, Celeb-DF-v2 | FF++, DFDC, DFD, Celeb-DF-v2 |
GAN Models (10 total) | AttGAN, MMDGAN, StarGAN, StyleGANs, MSGGAN, ProGAN, STGAN, VQGAN | AttGAN, MMDGAN, StarGAN, StyleGANs, MSGGAN, ProGAN, STGAN, VQGAN |
DM Models (8 total) | DALLE2, IF, Midjourney, DCFace, Latent Diffusion, Palette, Stable Diffusion v1.5, Stable Diffusion Inpainting | DALLE2, IF, Midjourney, DCFace, Latent Diffusion, Palette, Stable Diffusion v1.5, Stable Diffusion Inpainting |
Fake Face Images | 1,245,660 | 1,245,660 |
Real Source Datasets | FFHQ, CASIA-WebFace, IMDB-WIKI, CelebA, real images from FF++, DFDC, DFD, Celeb-DF-v2 | FFHQ, IMDB-WIKI, real images from FF++, DFDC, DFD, Celeb-DF-v2 |
Total Real Face Images | 866,096 | 400,885 |
Total Subsets | 30 | 28 |
Generation Methods | 5 in FF++, 5 in DFD, 8 in DFDC, 1 in Celeb-DF-v2, 10 GANs, 8 DMs | 5 in FF++, 5 in DFD, 8 in DFDC, 1 in Celeb-DF-v2, 10 GANs, 8 DMs |
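For code working with the first-version annotations, the v1 Intersection codes listed above can be decoded with a simple lookup. A minimal sketch, following the v1 table:

```python
# Sketch: decode a v1 Intersection code into (gender, race),
# following the v1 annotation table above.
V1_INTERSECTION = {
    0: ("Male", "Asian"), 1: ("Male", "White"),
    2: ("Male", "Black"), 3: ("Male", "Others"),
    4: ("Female", "Asian"), 5: ("Female", "White"),
    6: ("Female", "Black"), 7: ("Female", "Others"),
}

def decode_intersection(code):
    """Accepts an int or a CSV string field, e.g. '5'."""
    return V1_INTERSECTION[int(code)]
```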
If you use the AI-Face dataset in your research, please cite our paper as:
@inproceedings{lin2025aiface,
title={AI-Face: A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark},
author={Li Lin and Santosh and Mingyang Wu and Xin Wang and Shu Hu},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2025}
}
We acknowledge that part of our code is adapted from DeepfakeBench (NeurIPS 2023). If you cite our paper, please consider citing their paper as well:
@article{yan2023deepfakebench,
title={Deepfakebench: A comprehensive benchmark of deepfake detection},
author={Yan, Zhiyuan and Zhang, Yong and Yuan, Xinhang and Lyu, Siwei and Wu, Baoyuan},
journal={arXiv preprint arXiv:2307.01426},
year={2023}
}