Official code for "LUTFormer: Lookup Table Transformer for Image Enhancement" in Neurocomputing 2025. [paper]
The FiveK, PPR10K, UIEB, and EUVP datasets are used for experiments.
The AdaInt project also provides instructions for generating a 480p version of FiveK to accelerate training.
Create conda environment:

```
$ conda create -n LUTFormer python=3.9 anaconda
$ conda activate LUTFormer
$ conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch
$ pip install opencv-python-headless==4.10.0.82
```
To train LUTFormer,
- Edit the configuration file:
  - Open `root/LUTFormer_code/config.py`.
  - Set `run_mode` to `'train'`. Also, set the `task_name`, `dataset_name`, and `expert`.
  - (Optional) If you want to visualize the results, set `viz` to `True`.
- Run with
  ```
  $ cd root/LUTFormer_code/
  $ python main.py
  ```
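The options edited above live in `config.py`. A hypothetical sketch of what those fields might look like (attribute names come from the steps above; the values are illustrative, so check the actual file):

```python
# Illustrative sketch of the options edited above; the real config.py may differ.
run_mode = 'train'        # 'train', 'test', or 'test_paper'
task_name = 'Retouching'  # which task to run (illustrative value)
dataset_name = 'FiveK'    # 'FiveK', 'PPR10K', 'UIEB', or 'EUVP'
expert = 'C'              # FiveK expert annotation (illustrative value)
viz = True                # set True to save visualized results
```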
To evaluate your trained LUTFormer model,
- Edit the configuration file:
  - Open `root/LUTFormer_code/config.py`.
  - Set `run_mode` to `'test'` and `viz` to `True`.
  - Specify the values for `task_name`, `dataset_name`, and `expert`.
- Run with
  ```
  $ cd root/LUTFormer_code/
  $ python main.py
  ```
- Calculate the score using MATLAB code
  - FiveK
    ```
    (matlab) > ./fivek_calculate_metrics.m [evaluate image dir] [GT dir]
    ```
  - PPR10K
    ```
    (matlab) > ./ppr10k_calculate_metrics.m [evaluate image dir] [GT dir] [mask dir]
    ```
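If MATLAB is unavailable, PSNR (one of the standard image-enhancement metrics) can be computed per image pair in Python. This is a generic sketch, not a re-implementation of the repo's MATLAB scripts, so scores may not match their protocol exactly:

```python
import numpy as np

def psnr(img, gt, max_val=255.0):
    """Peak signal-to-noise ratio between an enhanced image and its ground truth."""
    mse = np.mean((img.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```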
*** Note ***
To reproduce the performance reported in the paper, set `run_mode` to `'test_paper'`.
Pretrained models are available in `root/LUTFormer_code/pretrained`. They can also be downloaded from here.
You can run a demo with pretrained models to enhance your own images.
- Prepare your input images
  - Place your images in the directory specified by `--input_dir` (default: `root/LUTFormer_code/demo_img/input`)
  - The enhanced results will be saved to `--output_dir` (default: `root/LUTFormer_code/demo_img/result`)
- (Optional) Check configuration:
  - You can also override settings via command-line arguments, including:
    - `--yaml_path` (default: `root/LUTFormer_code/configs/Retouching_FiveK.yaml`)
    - `--pretrained_path` (default: `root/LUTFormer_code/pretrained/Retouching_FiveK_expertC.pth`)
    - `--task_name`, `--dataset_name`, `--expert`
- Run demo with
  ```
  $ cd root/LUTFormer_code/
  $ python demo.py --input_dir ./demo_img/input --out_dir ./demo_img/result
  ```
- Photo retouching on FiveK dataset
- Photo retouching on PPR10K dataset
- Tone mapping on FiveK dataset
- Underwater image enhancement on UIEB dataset
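All four tasks share the same LUT-based inference idea: each input RGB value is mapped through a learned 3D lookup table. A minimal, generic sketch of applying a 3D LUT with trilinear interpolation (for background only — not the repo's implementation) looks like this:

```python
import numpy as np

def apply_3d_lut(img, lut):
    """Apply a 3D lookup table to an RGB image via trilinear interpolation.

    img: float array in [0, 1], shape (H, W, 3)
    lut: float array, shape (N, N, N, 3), indexed by (r, g, b)
    """
    n = lut.shape[0]
    x = img * (n - 1)                          # continuous LUT coordinates
    lo = np.clip(np.floor(x).astype(int), 0, n - 2)
    frac = x - lo                              # fractional position inside the voxel
    out = np.zeros_like(img)
    # Accumulate the 8 corner contributions of each voxel.
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = (np.abs(1 - dr - frac[..., 0])
                     * np.abs(1 - dg - frac[..., 1])
                     * np.abs(1 - db - frac[..., 2]))
                out += w[..., None] * lut[lo[..., 0] + dr,
                                          lo[..., 1] + dg,
                                          lo[..., 2] + db]
    return out
```

An identity LUT (each grid point maps to its own coordinates) leaves the image unchanged, which is a handy sanity check for any LUT pipeline.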