[Paper Under Revision] Exposing Synthetic Speech: Model Attribution and Detection of AI-generated Speech via Audio Fingerprints.
We propose a simple, training-free method for detecting AI-generated speech and attributing it to its source model by leveraging standardized average residuals as distinctive fingerprints. Our approach effectively addresses single-model attribution, multi-model attribution, and synthetic versus real speech detection, achieving high accuracy and robustness across diverse speech synthesis systems.
The paper "Exposing Synthetic Speech: Model Attribution and Detection of AI-generated Speech via Audio Fingerprints" is currently under revision for ACSAC 2025. A demo with a selection of fake audio samples from the AI generation models used in our experiments is available online: Fingerprint Demo.
As speech generation technologies continue to advance in quality and accessibility, the risk of malicious uses such as impersonation, misinformation, and spoofing grows rapidly. This work addresses this threat by introducing a simple, training-free, yet effective approach for detecting AI-generated speech and attributing it to its source model. Specifically, we tackle three key tasks: (1) single-model attribution in an open-world setting, where the goal is to determine whether a given audio sample was generated by a specific target neural speech synthesis system (with access only to data from that system); (2) multi-model attribution in a closed-world setting, where the objective is to identify the generating system from a known pool of candidates; and (3) detection of synthetic versus real speech. Our approach leverages standardized average residuals, i.e., the difference between an input audio signal and a filtered version of it obtained with either a low-pass filter or the EnCodec audio autoencoder. We demonstrate that these residuals consistently capture artifacts introduced by diverse speech synthesis systems, serving as distinctive, model-agnostic fingerprints for attribution. Across extensive experiments, our approach achieves AUROC scores exceeding 99% in most scenarios, evaluated on augmented benchmark datasets that pair real speech with synthetic audio generated by multiple synthesis systems. In addition, our robustness analysis underscores the method's ability to maintain high performance even in the presence of moderate additive noise. Due to its simplicity, efficiency, and strong generalization across speech synthesis systems and languages, this technique offers a practical tool for digital forensics and security applications.
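To make the residual-fingerprint idea concrete, the sketch below shows one plausible way a standardized average residual could be computed with a low-pass filter. It is not the repository's implementation: the filter order, the 2 kHz cutoff, the STFT settings, and the particular standardization (zero mean, unit variance over frequency bins) are all assumptions made for illustration.

```python
# Illustrative sketch only (assumed parameters, not the repository's code):
# a model fingerprint as the standardized average residual between signals
# and their low-pass-filtered versions, in the log-magnitude spectral domain.
import numpy as np
import librosa
from scipy.signal import butter, sosfiltfilt

def average_residual(wav, sr, cutoff_hz=2000, nfft=2048, hop_len=128):
    """Time-averaged log-magnitude spectrum of (signal - low_pass(signal))."""
    sos = butter(4, cutoff_hz, btype="low", fs=sr, output="sos")
    residual = np.ascontiguousarray(wav - sosfiltfilt(sos, wav))
    spec = np.abs(librosa.stft(residual, n_fft=nfft, hop_length=hop_len))
    return np.log(spec + 1e-8).mean(axis=1)

def fingerprint(audio_paths, sr=22050):
    """Standardized average residual over all files attributed to one model."""
    residuals = np.stack([average_residual(librosa.load(p, sr=sr)[0], sr)
                          for p in audio_paths])
    avg = residuals.mean(axis=0)
    return (avg - avg.mean()) / (avg.std() + 1e-8)  # zero mean, unit variance
```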
We utilize multiple speech corpora and include synthetic speech generated by a wide range of systems. We focus primarily on three datasets:
- Augmented LJSpeech Benchmark (English):
- We use the LJ Speech corpus (English).
- Synthetic speech samples for this dataset are drawn from the WaveFake dataset and its extension.
- In addition, we augment the dataset with synthetic speech generated using publicly available implementations of diffusion-based models and hybrid NSF models.
- JSUT Benchmark (Japanese):
- ASVspoof LA Benchmark (VCTK-based, English):
- We use the ASVspoof 2019 Logical Access (LA) corpus, which includes both genuine human speech and synthetically generated audio.
The following command runs a fingerprinting experiment using the run_modelattribution.py script. It supports various corpora, filter types, scoring methods, and preprocessing settings.
python run_modelattribution.py \
--corpus ... \
--real-data-path /path/to/real/audio \
--fake-data-path /path/to/fake/audio \
--filter-type ... \
--filter-param ... \
--nfft ... \
--hop-len ... \
--scorefunction ... \
--seed 1
Argument | Description |
---|---|
--corpus | Select the corpus: ljspeech, jsut, or asvspoof. |
--real-data-path | Path to the directory containing real audio files. |
--fake-data-path | Path to the directory containing fake (spoofed) audio files. |
--filter-type | Type of audio filter: EncodecFilter, Oracle, band_pass_filter, band_stop_filter, low_pass_filter, high_pass_filter. |
--filter-param | Parameter for the filter (e.g., bandwidth for EnCodec, cutoff frequencies for band filters). |
--nfft | Number of FFT points (controls spectral resolution). |
--hop-len | Hop length (stride) used in spectrogram computation. |
--scorefunction | Scoring function for attribution: correlation or mahalanobis. |
--seed | Random seed for reproducibility. |
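As a rough illustration of the two --scorefunction options, the sketch below shows one plausible way a test residual could be scored against a stored fingerprint or a set of reference residuals. It is not taken from the repository; the function names, input shapes, and the use of a pseudo-inverse covariance are assumptions.

```python
# Hedged sketch of correlation- and Mahalanobis-style scoring over residual
# vectors; illustrative only, not the repository's scoring implementation.
import numpy as np

def correlation_score(test_residual, model_fingerprint):
    """Pearson correlation between a test residual and a model fingerprint."""
    return np.corrcoef(test_residual, model_fingerprint)[0, 1]

def mahalanobis_score(test_residual, reference_residuals):
    """Mahalanobis distance of a test residual to reference residuals of one model."""
    mean = reference_residuals.mean(axis=0)
    cov = np.cov(reference_residuals, rowvar=False)
    inv_cov = np.linalg.pinv(cov)                 # pseudo-inverse for stability
    diff = test_residual - mean
    return float(np.sqrt(diff @ inv_cov @ diff))
```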
For instance, to extract fingerprints using a low-pass filter:
python run_modelattribution.py --corpus ljspeech --real-data-path /.../LJSpeech-1.1/wavs/ --fake-data-path /.../WaveFake/ --filter-type low_pass_filter --filter-param 1 --seed 1
or using the EnCodec filter:
python run_modelattribution.py --corpus ljspeech --real-data-path /.../LJSpeech-1.1/wavs/ --fake-data-path /.../WaveFake/ --filter-type EncodecFilter --filter-param 24 --nfft 2048 --hop-len 128 --scorefunction correlation --seed 1
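With EncodecFilter, the residual is taken with respect to the EnCodec autoencoder's reconstruction rather than a classical filter. The sketch below shows how such a residual could be obtained with the publicly available encodec package; reading --filter-param 24 as the 24 kbps target bandwidth and the file name sample.wav are assumptions, and the repository's EncodecFilter may differ in details such as resampling and batching.

```python
# Hedged sketch of an EnCodec-based residual using facebookresearch/encodec;
# illustrative only, not the repository's EncodecFilter implementation.
import torch
import torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

model = EncodecModel.encodec_model_24khz()
model.set_target_bandwidth(24.0)   # assumed mapping of --filter-param 24 (kbps)

wav, sr = torchaudio.load("sample.wav")                          # hypothetical file
wav = convert_audio(wav, sr, model.sample_rate, model.channels).unsqueeze(0)

with torch.no_grad():
    encoded_frames = model.encode(wav)
    reconstruction = model.decode(encoded_frames)[:, :, :wav.shape[-1]]

residual = wav - reconstruction    # input minus its autoencoded reconstruction
```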
For the closed-world setting, train a classifier by selecting one model from x-vector, vfd-resnet, se-resnet, resnet, lcnn, or fingerprints; the --classification_type flag selects multiclass (multi-model attribution) or binary (synthetic versus real detection).
python train_model.py --model vfd-resnet --classification_type multiclass --seed 1 --corpus ljspeech
python train_model.py --model vfd-resnet --classification_type binary --seed 1 --corpus ljspeech