The official implementation for IEEE-ICIP 2025 paper "ESwinDNet: Image Demoiréing Using Multiscale Swin Transformer Layers"

Capturing electronic screens with digital cameras introduces high-frequency artifacts, known as moiré patterns, that degrade overall image quality and colors. This work proposes ESwinDNet, an image demoiréing model that combines an encoder-decoder architecture with multiscale Swin Transformer layers. These layers efficiently compute pixel-level attention, a crucial capability for low-level vision tasks such as image demoiréing. ESwinDNet achieves results comparable to the large variant of the baseline model, ESDNet-L, on the UHDM dataset, demonstrating its ability to remove moiré patterns from 4K images with nearly half the number of parameters and floating-point operations, yielding faster training and inference times.

Moiré pattern removal using ESwinDNet-L
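
For context on the building block named above, the snippet below sketches window-based multi-head self-attention, the mechanism Swin Transformer layers use to keep attention affordable at high resolutions. It is a minimal, generic PyTorch illustration written for this README; the module name, dimensions, and window size are assumptions, and it is not the code used in this repository.

import torch
import torch.nn as nn

class WindowSelfAttention(nn.Module):
    """Illustrative window-based multi-head self-attention (not the repository's code)."""

    def __init__(self, dim=64, window_size=8, num_heads=4):
        super().__init__()
        self.window_size = window_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):  # x: (B, C, H, W), with H and W divisible by window_size
        B, C, H, W = x.shape
        ws = self.window_size
        # Partition the feature map into non-overlapping ws x ws windows.
        x = x.view(B, C, H // ws, ws, W // ws, ws)
        x = x.permute(0, 2, 4, 3, 5, 1).reshape(-1, ws * ws, C)
        # Attention is computed only within each window, so the cost grows
        # linearly with image size instead of quadratically.
        x, _ = self.attn(x, x, x)
        # Merge the windows back into a (B, C, H, W) feature map.
        x = x.view(B, H // ws, W // ws, ws, ws, C)
        x = x.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)
        return x

feat = torch.randn(1, 64, 64, 64)
print(WindowSelfAttention()(feat).shape)  # torch.Size([1, 64, 64, 64])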

Table of Contents

  1. Installation
  2. Datasets
  3. Training
  4. Testing
  5. Acknowledgement
  6. Citation

Installation

1. Clone the Repository

git clone https://github.com/Karim19Alaa/ESwinDNet.git
cd ESwinDNet

2. Create and Activate Virtual Environment

Using Python 3.7+

# Create virtual environment
python -m venv eswindnet_env

# Activate it
# For Linux/MacOS:
source eswindnet_env/bin/activate

# For Windows (PowerShell):
.\eswindnet_env\Scripts\Activate.ps1

3. Install Dependencies

pip install --upgrade pip
pip install -r requirements.txt
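
After installing, a quick sanity check inside the activated environment can confirm that PyTorch (assumed here to be pinned in requirements.txt) imports correctly and sees a GPU:

# Quick sanity check (assumes PyTorch is among the pinned requirements).
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())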

Datasets

Make sure to download the UHDM or FHDMi dataset into the datasets directory, or edit the training scripts to point to its location.
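
As a quick check of that layout, the sketch below looks for the datasets in their assumed locations; the folder names are assumptions, and the training scripts remain the authority on the expected paths.

# Hypothetical layout check; adjust the folder names to match your download
# and the paths configured in the training scripts.
from pathlib import Path

for name in ("UHDM", "FHDMi"):
    path = Path("datasets") / name
    print(f"{path}: {'found' if path.exists() else 'missing'}")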

Training

Train ESwinDNet or ESwinDNet-L on the UHDM dataset

./scripts/train_eswindnet_uhdm.sh
./scripts/train_eswindnetL_uhdm.sh

Train ESwinDNet or ESwinDNet-L on the FHDMi dataset

./scripts/train_eswindnet_fhdmi.sh
./scripts/train_eswindnetL_fhdmi.sh

Testing

Make sure to edit the scripts to use the desired checkpoint.

Test ESwinDNet or ESwinDNet-L on the UHDM dataset

./scripts/test/test_eswindnet_uhdm.sh
./scripts/test/test_eswindnetL_uhdm.sh

Test ESwinDNet or ESwinDNet-L on the FHDMi dataset

./scripts/test/test_eswindnet_fhdmi.sh
./scripts/test/test_eswindnetL_fhdmi.sh

Acknowledgement

This repository relies on the work of

And the help of

Citation

@INPROCEEDINGS{11084615,
  author={Alaa, Karim and Torki, Marwan},
  booktitle={2025 IEEE International Conference on Image Processing (ICIP)},
  title={Eswindnet: Image Demoiréing Using Multiscale Swin Transformer Layers},
  year={2025},
  volume={},
  number={},
  pages={845-850},
  keywords={Wavelet transforms;Training;Image quality;Costs;Image color analysis;Computational modeling;Semantics;Transformers;Digital cameras;Software development management;Image Demoiréing;Swin Transformer;Multiscale Network;Wavelet Transform},
  doi={10.1109/ICIP55913.2025.11084615}}
