First create a conda environment using the following command:
conda env create -f environment.yml
Then activate the environment using the following command:
conda activate chimera
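As an optional sanity check, you can confirm the environment was created and is active:
conda env list
The currently active environment is marked with an asterisk in the output.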
Then download the required data and models from this link.
In the recapture-detection directory, run the following command:
python main.py --test --test_path TEST_PATH --config CONFIG --test_raw_dirnames RAW_DIRNAMES --test_recap_dirnames RECAP_DIRNAMES
where TEST_PATH is the path to the model, CONFIG is the path to the configuration file, RAW_DIRNAMES is the list of raw directory names, and RECAP_DIRNAMES is the list of recapture directory names. Ensure that the configuration file matches the model being tested. The results are printed to the terminal.
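For reference, a hypothetical invocation might look like the following. The checkpoint path, config file, and directory names are placeholders, and passing the lists as space-separated values is an assumption based on typical argparse behavior rather than something confirmed by the repository:
python main.py --test --test_path checkpoints/recapture_model.pth --config configs/recapture.yaml --test_raw_dirnames raw_scene1 raw_scene2 --test_recap_dirnames recap_scene1 recap_scene2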
In the deepfake-detection directory, first update the dataset paths in dataset_paths.py, and place the deepfake detection model weights in the pretrained_weights folder.
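Assuming the weights were downloaded to a local folder (the source path below is a placeholder for wherever you saved them), moving them into place might look like this:
mkdir -p pretrained_weights
mv ~/Downloads/deepfake_weights/*.pth pretrained_weights/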
The following command:
./test.sh
will run all the deepfake detection tests at once and save the results in the deepfake-detection/results directory.
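A typical run from the repository root might look like the following; the chmod step is only needed if the script is not already executable, and the contents of results/ are whatever test.sh writes, not fixed here:
cd deepfake-detection
chmod +x test.sh
./test.sh
ls results/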
To train Chimera, you must collect data using a fixed screen and camera setup. Then, use the pix2pix training script to train the model. Training is done in the pytorch-CycleGAN-and-pix2pix directory.
Examples of training commands are provided in the pytorch-CycleGAN-and-pix2pix/scripts/train_pix2pix.sh file; the ideal parameters depend on your setup. Refer to the paper for more details on the training process.
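As a minimal sketch, a training command using the stock pytorch-CycleGAN-and-pix2pix options might look like the following, where the dataroot and experiment name are placeholders for your own screen-camera capture dataset and are not shipped with this repository:
python train.py --dataroot ./datasets/chimera_capture --name chimera_pix2pix --model pix2pix --direction AtoB
The --model pix2pix and --direction flags are part of the standard pix2pix training interface; see scripts/train_pix2pix.sh for the full set of options used in the provided examples.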