FakeParts are partial deepfakes: localized spatial or temporal edits that blend into otherwise real videos. FakePartsBench is the first benchmark purpose-built to evaluate them.
- Problem. Most detectors and datasets focus on fully synthetic videos. Subtle, localized edits (FakeParts) are under-explored yet highly deceptive.
- Solution. We define FakeParts and release FakePartsBench: 25K+ videos with pixel-level and frame-level annotations covering full deepfakes (T2V/I2V/TI2V) and partial manipulations (faceswap, inpainting, outpainting, style change, interpolation).
- Finding. Humans and SOTA detectors miss many FakeParts; detection accuracy drops by 30–40% versus fully synthetic content.
- Use. Train and evaluate detectors that localize where and when manipulations happen.
- News
- Dataset
- Paper
- Repo Structure
- Installation
- Quickstart
- Evaluation Protocol
- Reproducing Baselines
- Human Study
- Results Snapshot
- Citations
- License & Responsible Use
- Acknowledgements
- Contact
- 2025: Dataset and benchmark released (including closed- and open-source generations).
- 2025: Baseline evaluation code released (image- and video-level detectors).
FakePartsBench provides:
- 25,000+ manipulated clips + 16,000 real clips
- High-res content (up to 1080p), durations typically 5–14 s
- Annotations: frame masks (spatial), manipulated frames (temporal)
- Categories:
  - Full deepfakes: T2V / I2V / TI2V (Sora, Veo2, Allegro AI)
  - Spatial FakeParts: Faceswap (InsightFace), Inpainting (DiffuEraser, ProPainter), Outpainting (AKiRa)
  - Temporal FakeParts: Interpolation (Framer)
  - Style FakeParts: Style change (RAVE)
Download (mirrors): the dataset is available via Hugging Face as `hi-paris/FakeParts` (see Quickstart below).
Each sample ships with metadata (prompt, source/cond frame when applicable, resolution, FPS) and, for FakeParts, per-frame masks or frame lists of manipulated regions/segments.
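For illustration, here is a minimal sketch of consuming a spatial annotation, assuming a per-frame mask stored as a PNG; the path below is hypothetical, and the actual file layout is given in each sample's metadata:

```python
import imageio.v3 as iio

# Hypothetical mask path; consult the per-sample metadata for the actual file layout.
mask = iio.imread("masks/inpainting/clip_0001/frame_0042.png")

# Treat any non-zero pixel as manipulated and report the edited fraction of the frame.
manipulated = (mask > 0).any(axis=-1) if mask.ndim == 3 else (mask > 0)
print(f"Manipulated area: {manipulated.mean():.1%} of the frame")
```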
FakeParts: a New Family of AI-Generated DeepFakes
Gaëtan Brison, Soobash Daiboo, Samy Aïmeur, Awais Hussain Sani, Xi Wang, Gianni Franchi, Vicky Kalogeiton
Hi! PARIS / Institut Polytechnique de Paris / LIX / ENSTA Paris
Preprint, under review.
FakeParts/
├── annotation/                 # human study annotation tools
│   ├── app.py                  # Streamlit survey app
│   ├── preprocessing_remove_au.py
│   └── requirements.txt        # annotation dependencies
├── assets/                     # figures for README/paper
│   ├── final_teaser.png
│   └── pipeline.jpg
├── detection/                  # baseline detectors
│   ├── AIGVDet/
│   ├── C2P-CLIP/
│   ├── CNNDetection-master/
│   ├── DeMamba/
│   ├── FatFormer/
│   ├── HiFi_IFDL-main/
│   ├── NPR/
│   └── UniversalFakeDetect-*/
└── generation/                 # FakeParts generators
    ├── Faceswap/
    ├── Inpainting/
    ├── Interpolation/
    ├── Outpainting/
    ├── Stylechange/
    └── T2V/
Tip: Place your images in `assets/` (the README references `assets/final_teaser.png` and `assets/pipeline_xi.jpg`).
# (A) Conda (recommended)
conda create -n fakeparts python=3.10 -y
conda activate fakeparts
pip install -r env/requirements.txt
# (B) Extras (for video I/O & metrics)
# pip install av opencv-python imageio[ffmpeg] decord torch torchvision
- FFmpeg is required for decoding/encoding (`ffmpeg -version` should work).
- Some baselines may require CUDA (see their READMEs in `baselines/`).
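Optionally, a quick sanity check (a minimal sketch, not part of the official tooling) to confirm FFmpeg and CUDA are visible:

```python
import shutil
import subprocess

# FFmpeg must be on PATH for video decoding/encoding.
ffmpeg = shutil.which("ffmpeg")
print("ffmpeg:", ffmpeg or "NOT FOUND")
if ffmpeg:
    # First line of `ffmpeg -version` output, i.e. the version string.
    print(subprocess.run([ffmpeg, "-version"], capture_output=True, text=True).stdout.splitlines()[0])

# CUDA is only needed by some baselines; torch is listed under the optional extras.
try:
    import torch
    print("CUDA available:", torch.cuda.is_available())
except ImportError:
    print("torch not installed (only needed for the GPU baselines)")
```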
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("hi-paris/FakeParts")
# Inspect the data
print(dataset)
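To peek at an individual record, a small follow-up sketch; the split and field names are assumptions about the released schema, so rely on the `print(dataset)` output above for the actual ones:

```python
# Inspect a single record; field names depend on the released schema (see print(dataset)).
split = list(dataset.keys())[0]   # e.g. "train", taken from whatever splits exist
sample = dataset[split][0]
for key, value in sample.items():
    # Truncate large fields (video bytes, masks) so the preview stays readable.
    print(f"{key}: {str(value)[:80]}")
```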
We report:
- Binary detection (real vs. fake) at video and frame levels
- Localization for FakeParts: IoU on manipulated masks (spatial) and frames (temporal)
- Quality & consistency: FVD (optional), VBench subset (consistency, flicker, quality)
Default metrics: Accuracy, F1, mAP (per category + macro avg).
Recommended splits: use `index.json` or our CSVs to reproduce the paper.
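For reference, a minimal sketch of the default binary-detection metrics plus a frame-level temporal IoU, using scikit-learn and placeholder labels (the arrays below are toy values, not benchmark outputs):

```python
import numpy as np
from sklearn.metrics import accuracy_score, average_precision_score, f1_score

# Toy per-video labels and detector scores (1 = fake, 0 = real); replace with real outputs.
y_true = np.array([1, 0, 1, 1, 0])
y_score = np.array([0.9, 0.2, 0.4, 0.8, 0.1])
y_pred = (y_score >= 0.5).astype(int)

print("Accuracy:", accuracy_score(y_true, y_pred))
print("F1:", f1_score(y_true, y_pred))
print("AP:", average_precision_score(y_true, y_score))  # average per category for mAP

# Temporal IoU between predicted and annotated manipulated frame indices.
def temporal_iou(pred_frames, gt_frames):
    pred, gt = set(pred_frames), set(gt_frames)
    union = pred | gt
    return len(pred & gt) / len(union) if union else 1.0

print("Temporal IoU:", temporal_iou(range(10, 30), range(15, 40)))
```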
We provide wrappers and configs to reproduce a wide range of image-level and video-level detectors. Each baseline follows the authors' official implementation as closely as possible.
- CNNDetection (Wang et al., CVPR'20): CNN-based universal fake image detector trained on diverse forgeries.
- UniversalFakeDetector (UFD) (Ojha et al., CVPR'23): CLIP-based zero-shot detector, effective across manipulation types.
- FatFormer (Zhao et al., ICCV'23): multi-scale attention transformer tuned for subtle manipulations.
- C2P-CLIP (Xu et al., arXiv'24): contrastive fine-tuning of CLIP for part-level detection.
- NPR (Zhang et al., NeurIPS'24): noise-pattern representation learning to capture subtle editing traces.
- HiFi-IFDL (Li et al., arXiv'24): high-fidelity feature disentanglement for manipulation detection.
- AIGVDet (Bai et al., PRCV'24): multi-branch detector combining spatial cues and optical flow.
- DeMamba (Chen et al., arXiv'24): state-space model for long-range temporal forgery localization.
We release a Streamlit-based survey used in the paper.
cd annotation
pip install -r requirements.txt
streamlit run app.py -- --root /path/to/FakePartsBench
Participants label real vs. fake and provide short rationales per clip.
Average "fake" confidence by detectors vs. humans (higher = better fake detection):

| Category | AIGVDet | CNNDetection | DeMamba | UniversalFakeDetect | FatFormer | C2P-CLIP | NPR | Human Detection |
|---|---|---|---|---|---|---|---|---|
| Acc. on orig. testset | 0.914 | 0.997 | 0.971 | 0.843 | ~0.990 | >0.930 | >0.925 | – |
| T2V | 0.301 | 0.000 | 0.342 | 0.073 | 0.183 | 0.176 | 0.579 | 0.763 |
| I2V | 0.292 | 0.001 | 0.323 | 0.083 | 0.129 | 0.157 | 0.417 | 0.715 |
| IT2V | 0.483 | 0.000 | 0.514 | 0.072 | 0.161 | 0.131 | 0.666 | 0.821 |
| Stylechange | 0.265 | 0.000 | 0.308 | 0.295 | 0.100 | 0.288 | 0.105 | 0.983 |
| Faceswap | 0.216 | 0.000 | 0.265 | 0.031 | 0.620 | 1.000 | 0.000 | 0.612 |
| Real (false-positive) | 0.155 | 0.007 | 0.191 | 0.052 | 0.008 | 0.004 | 0.038 | 0.242 |
| Interpolation | 0.137 | 0.000 | 0.170 | 0.228 | 0.360 | 0.396 | 0.056 | 0.676 |
| Inpainting | 0.074 | 0.003 | 0.089 | 0.337 | 0.213 | 0.171 | 0.264 | 0.588 |
| Outpainting | 0.060 | 0.000 | 0.072 | 0.025 | 0.096 | 0.125 | 0.014 | 0.800 |
Takeaway: Partial manipulations (FakeParts) are significantly harder for current detectors than fully synthetic videos, and they are also harder for humans.
If you use FakeParts please cite:
@article{brison2025fakeparts,
title = {FakeParts: a New Family of AI-Generated DeepFakes},
author = {Ga{\"e}tan Brison and Soobash Daiboo and Samy A{\"i}meur and
Awais Hussain Sani and Xi Wang and Gianni Franchi and Vicky Kalogeiton},
journal = {arXiv preprint},
year = {2025}
}
- Code: see `LICENSE` (default: BSD-3-Clause unless noted otherwise in subfolders).
- Dataset: released for research and defensive purposes only.
  - Do not attempt to identify private individuals.
  - Do not use for generating disinformation or harassment.
  - Faceswap content uses celebrity imagery to avoid sensitive personal data.
- Please comply with third-party model/data licenses cited in the paper and in `baselines/`.
This work was conducted at Hi! PARIS, Institut Polytechnique de Paris, LIX (École Polytechnique), and U2IS (ENSTA Paris). We thank the authors and teams behind Sora, Veo2, Allegro, Framer, RAVE, InsightFace, DiffuEraser, ProPainter, AKiRa, as well as the maintainers of DAVIS, YouTube-VOS, MOSE, LVD-2M, and Animal Kingdom.
A special thanks to the DeepMind team working on Veo2 and Veo3 for granting us early API access.
Questions, issues, or pull requests are welcome!
- Gaëtan Brison (maintainer)
- Soobash Daiboo, Samy Aïmeur, Awais Hussain Sani
- Xi Wang, Gianni Franchi, Vicky Kalogeiton