[IEEE Signal Processing Letters, 2025] Adaptive Video Demoiréing Network with Subtraction-Guided Alignment
Seung-Hun Ok, Young-Min Choi, Seung-Wook Kim, Se-Ho Lee
Paper | Supplementary Materials
The experiments were conducted using the following software environment and libraries:
- Python: 3.11.4
- CUDA: 12.1
- PyTorch: 2.1.0
- Torchvision: 0.16.0
- numpy: 1.26.4
- scikit-image
- opencv-python
- deepspeed
- lpips
- tensorboard
- wandb
Our project is based on the VDmoire dataset, which can be downloaded from here.
After downloading the dataset, place the folders as follows:
```
project_root/
├── AVDNet/
│   ├── experiments/...
│   │   ...
│   └── train.py
└── datasets/
    ├── homo/...
    └── optical/
        ├── iphone/...
        └── tcl/...
```
You can download the pretrained model for testing from here.
After downloading the model, place it in the `AVDNet/experiments/` directory before running the test.
Note: if you use the provided pretrained model, be sure to set `strict_load: false` in the test option file, as some class names differ slightly.
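For reference, the setting might look like the fragment below. This is only a sketch: the surrounding `path:` section and the checkpoint filename are assumptions about the option file's layout, so match them to the actual `.yml` you are editing.

```yaml
# options/test/Test_*.yml (fragment; other keys and the filename are placeholders)
path:
  pretrain_network_g: experiments/avdnet_pretrained.pth
  strict_load: false
```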
The following is an example command for training on the iPhone-V1 subset using GPU 0:
```shell
CUDA_VISIBLE_DEVICES=0 python train.py -opt options/train/Train_ipv1.yml
```
The following is an example command for testing on the TCL-V2 subset using GPU 3:
```shell
CUDA_VISIBLE_DEVICES=3 python test.py -opt options/test/Test_tclv2.yml
```
You can run training or testing by selecting the appropriate `.yml` configuration file and specifying the GPU to use.
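To preview the full set of training commands before launching anything, a dry-run helper such as this can be handy (our own sketch: the `train_cmd` function is hypothetical, and the `ipv2`/`tclv1` option file names are guessed from the naming pattern of the two examples above, so check `options/train/` for the actual files):

```shell
#!/bin/sh
# Build the training command for one subset without running it.
# $1 = GPU id, $2 = subset tag (e.g. ipv1)
train_cmd() {
  echo "CUDA_VISIBLE_DEVICES=$1 python train.py -opt options/train/Train_$2.yml"
}

# Print the command for every subset on GPU 0.
for subset in ipv1 ipv2 tclv1 tclv2; do
  train_cmd 0 "$subset"
done
```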
Please cite the following paper if you use this code in your research:
```bibtex
@article{ok2025adaptive,
  title   = {Adaptive Video Demoiréing Network With Subtraction-Guided Alignment},
  author  = {Ok, Seung-Hun and Choi, Young-Min and Kim, Seung-Wook and Lee, Se-Ho},
  journal = {IEEE Signal Processing Letters},
  volume  = {32},
  pages   = {2733--2737},
  year    = {2025}
}
```
Our work and implementation were inspired by MBCNN and DTNet.
We sincerely thank the authors for making their code publicly available.
For any questions, please contact: cornking123@jbnu.ac.kr