Jinwon-Ko/LUTFormer
[Neurocomputing 2025] LUTFormer: Lookup Table Transformer for Image Enhancement.

Jinwon Ko, Keunsoo Ko, Hanul Kim and Chang-Su Kim.

Official code for "LUTFormer: Lookup Table Transformer for Image Enhancement" in Neurocomputing 2025. [paper]

Dataset

The FiveK, PPR10K, UIEB, and EUVP datasets are used for experiments.
The AdaInt project also provides instructions for generating a 480p version of FiveK to accelerate training.

Installation

Create conda environment:

$ conda create -n LUTFormer python=3.9 anaconda
$ conda activate LUTFormer
$ conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch
$ pip install opencv-python-headless==4.10.0.82

Train

To train LUTFormer,

  1. Edit the configuration file:
    • Open root/LUTFormer_code/config.py.
    • Set run_mode to 'train', and specify task_name, dataset_name, and expert.
    • (Optional) If you want to visualize the results, set viz to True.
  2. Run with
$ cd root/LUTFormer_code/
$ python main.py
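Concretely, the fields from step 1 might look like the following in config.py. The field names come from the steps above; the example values are assumptions based on the FiveK retouching defaults mentioned in the Demo section, not the repository's actual contents:

```python
# root/LUTFormer_code/config.py -- fields referenced in step 1 above
run_mode = 'train'          # 'train', 'test', or 'test_paper'
task_name = 'Retouching'    # example value, matching the demo defaults
dataset_name = 'FiveK'      # one of the datasets listed above
expert = 'C'                # target retoucher annotation (FiveK / PPR10K)
viz = False                 # set True to visualize results during training
```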

Test

To evaluate your trained LUTFormer model,

  1. Edit the configuration file:
    • Open root/LUTFormer_code/config.py.
    • Set run_mode to 'test' and viz to True.
    • Specify the values for task_name, dataset_name, and expert.
  2. Run with
$ cd root/LUTFormer_code/
$ python main.py
  3. Calculate the scores using the provided MATLAB scripts:
    • FiveK
      (matlab) > ./fivek_calculate_metrics.m [evaluate image dir] [GT dir]
    • PPR10K
      (matlab) > ./ppr10k_calculate_metrics.m [evaluate image dir] [GT dir] [mask dir]
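The MATLAB scripts above produce the reported metrics. As an unofficial illustration only (not the repository's evaluation code), the standard PSNR between an enhanced image and its ground truth can be computed in Python:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between two images of equal shape.

    ref, test: uint8 or float arrays; peak is the maximum pixel value.
    """
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Note that the official scripts may differ in details (color space, mask handling for PPR10K), so use them for any numbers you report.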

*** Note ***
To reproduce the performance reported in the paper, set run_mode to 'test_paper'.
Pretrained models are available in root/LUTFormer_code/pretrained. They can also be downloaded from here.

Demo

You can run a demo with pretrained models to enhance your own images.

  1. Prepare your input images
    • Place your images in the directory specified by --input_dir (default: root/LUTFormer_code/demo_img/input)
    • The enhanced results will be saved to --output_dir (default: root/LUTFormer_code/demo_img/result)
  2. (Optional) Check configuration:
    • You can also override settings via command-line arguments, including:
      • --yaml_path (default: root/LUTFormer_code/configs/Retouching_FiveK.yaml)
      • --pretrained_path (default: root/LUTFormer_code/pretrained/Retouching_FiveK_expertC.pth)
      • --task_name, --dataset_name, --expert
  3. Run demo with
$ cd root/LUTFormer_code/
$ python demo.py --input_dir ./demo_img/input --output_dir ./demo_img/result
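The demo enhances each input image with a learned lookup table. The core operation, applying a 3D RGB LUT via trilinear interpolation, can be sketched with NumPy. This is a simplified illustration of the general technique; the function and variable names are my own, not from the repository:

```python
import numpy as np

def apply_3d_lut(img, lut):
    """Apply a 3D RGB lookup table to an image via trilinear interpolation.

    img: float array in [0, 1], shape (H, W, 3)
    lut: float array, shape (N, N, N, 3), indexed as lut[r, g, b]
    """
    n = lut.shape[0]
    x = img * (n - 1)                  # continuous lattice coordinates
    i0 = np.clip(np.floor(x).astype(int), 0, n - 2)
    f = x - i0                         # fractional offsets in each cell
    r0, g0, b0 = i0[..., 0], i0[..., 1], i0[..., 2]
    fr, fg, fb = f[..., 0:1], f[..., 1:2], f[..., 2:3]
    out = np.zeros_like(img)
    # Blend the 8 corners of the enclosing lattice cell.
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((fr if dr else 1 - fr)
                     * (fg if dg else 1 - fg)
                     * (fb if db else 1 - fb))
                out += w * lut[r0 + dr, g0 + dg, b0 + db]
    return out
```

With an identity LUT (each lattice entry equal to its own normalized coordinates), this returns the input unchanged; a learned LUT instead warps colors toward the enhanced appearance.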

Results

  1. Photo retouching on FiveK dataset

Retouching FiveK

  2. Photo retouching on PPR10K dataset

Retouching PPR10K

  3. Tone mapping on FiveK dataset

ToneMap FiveK

  4. Underwater image enhancement on UIEB dataset

Underwater UIEB
