
[CVPRW 2024] Training Transformer Models by Wavelet Losses Improves Quantitative and Visual Performance in Single Image Super-Resolution


Wavelettention (NTIRE 2024)

Cansu Korkmaz and A. Murat Tekalp

Method:

Stationary Wavelet Loss Depiction:
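The loss operates on stationary wavelet transform (SWT) subbands of the super-resolved and ground-truth images. The sketch below illustrates the idea with PyWavelets; it is a simplified illustration under assumed defaults (Haar wavelet, two levels, uniform per-level weights), not the repository's implementation, which sets the scaling of each term in its training configuration:

    # Minimal sketch of an SWT-domain loss between an SR and an HR image.
    # Illustration only; the wavelet, level, and weights here are assumptions.
    import numpy as np
    import pywt

    def swt_loss(sr, hr, wavelet="haar", level=2, weights=None):
        """L1 distance between stationary wavelet subbands of two 2D arrays.
        Image height and width must be divisible by 2**level."""
        sr_coeffs = pywt.swt2(sr, wavelet, level=level)
        hr_coeffs = pywt.swt2(hr, wavelet, level=level)
        if weights is None:
            weights = [1.0] * level
        loss = 0.0
        for w, (sa, sd), (ha, hd) in zip(weights, sr_coeffs, hr_coeffs):
            loss += w * np.mean(np.abs(sa - ha))  # approximation (LL) band
            loss += w * sum(np.mean(np.abs(s - h)) for s, h in zip(sd, hd))  # LH/HL/HH bands
        return loss

    # Example: hr = np.random.rand(64, 64); sr = hr + 0.01 * np.random.randn(64, 64)
    # print(swt_loss(sr, hr))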

Visual Comparisons:

To compare with our method, you can download our benchmark results from Google Drive.

Getting Started:

Clone this repository, then create a Python virtual environment and install the dependencies (a quick check of the CUDA install follows the list):

- git clone https://github.com/mandalinadagi/Wavelettention
- cd Wavelettention
- python -m venv wlt
- source ./wlt/bin/activate
- pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
- pip install -r requirements.txt
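
After installing, you can quickly verify that the CUDA-enabled PyTorch build is active. This is a generic sanity check, not part of the repository:

    # Sanity check for the cu118 PyTorch wheel installed above.
    import torch

    print(torch.__version__)               # should report a +cu118 build
    print(torch.cuda.is_available())       # True if a CUDA GPU is visible
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))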

How To Test

  1. Download the pretrained model from Google Drive and place it under wavelettention/pretrained_models/.
  2. Prepare the datasets, which can be downloaded from Google Drive.
  3. Modify the configuration file options/test_Wavelettention_SRx4.yml (paths to the datasets and the pretrained model).
  4. Run the command python wavelettention/test.py -opt options/test_Wavelettention_SRx4.yml.
  5. The output images are saved in the results/test_wavelettention/visualization/ folder.

How to Train

  1. Download the LSDIR dataset.
  2. Prepare the ImageNet-pretrained x4 HAT-L model.
  3. Modify the configuration file options/train_Wavelettention_SRx4.yml (paths to the datasets and the pretrained model, and the scaling of each wavelet loss term; see the sketch after this list).
  4. Run the command CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python wavelettention/train.py -opt options/train_Wavelettention_SRx4.yml.
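
Step 3 exposes a scaling factor for each wavelet loss term. Conceptually, the training objective adds weighted wavelet-subband terms to the usual pixel loss; the sketch below illustrates this with a one-level undecimated Haar transform in PyTorch. It is an illustration of the idea, not the repository's code, and the example weights are assumptions (the actual values are set in the YAML options file):

    # Sketch: pixel L1 loss plus weighted losses on undecimated Haar subbands.
    import torch
    import torch.nn.functional as F

    def haar_swt_level1(x):
        """One level of an undecimated Haar transform (stride 1).
        x: (N, C, H, W) tensor; returns LL, LH, HL, HH subbands of the same size."""
        k = 0.5 * torch.tensor([[[[1., 1.], [1., 1.]]],     # LL
                                [[[1., 1.], [-1., -1.]]],   # LH
                                [[[1., -1.], [1., -1.]]],   # HL
                                [[[1., -1.], [-1., 1.]]]],  # HH
                               device=x.device, dtype=x.dtype)
        n, c, h, w = x.shape
        x = F.pad(x.reshape(n * c, 1, h, w), (0, 1, 0, 1), mode="reflect")
        out = F.conv2d(x, k)                                # (N*C, 4, H, W)
        return [out[:, i:i + 1].reshape(n, c, h, w) for i in range(4)]

    def total_loss(sr, hr, subband_weights=(0.01, 0.05, 0.05, 0.1)):  # assumed weights
        loss = F.l1_loss(sr, hr)                            # standard pixel loss
        for w, s, t in zip(subband_weights, haar_swt_level1(sr), haar_swt_level1(hr)):
            loss = loss + w * F.l1_loss(s, t)               # weighted wavelet terms
        return loss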

Citation

If you find our work helpful in your research, please consider citing the following paper.

@inproceedings{korkmaz2024wavelettention,
  title={Training Transformer Models by Wavelet Losses Improves Quantitative and Visual Performance in Single Image Super-Resolution},
  author={Korkmaz, Cansu and Tekalp, A. Murat},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month={June},
  year={2024}
}

Contact

If you have any questions, please email ckorkmaz14@ku.edu.tr.

Our code is built on BasicSR and HAT. We thank the authors for their great work.
