
[AAAI 2025] Low-Light Image Enhancement via Generative Perceptual Priors [Paper]

Han Zhou1,*, Wei Dong1,*, Xiaohong Liu2,†, Yulun Zhang2, Guangtao Zhai2, Jun Chen1

1McMaster University, 2Shanghai Jiao Tong University

*Equal Contribution, †Corresponding Author

Introduction

This repository represents the official implementation of our AAAI 2025 paper titled Low-Light Image Enhancement via Generative Perceptual Priors. If you find this repo useful, please give it a star ⭐ and consider citing our paper in your research. Thank you.

We present GPP-LLIE, a novel LLIE framework with the guidance of Generative Perceptual Priors.

  • VLM-based Generative Perceptual Priors Extraction Pipeline: extracts global and local generative perceptual priors for low-light (LL) images from pre-trained VLMs.
  • Transformer-based Diffusion Framework: a Transformer-based diffusion framework developed for LLIE.
  • Guidance of Perceptual Priors in the Reverse Diffusion Process: global perceptual priors modulate the layer normalization (GPP-LN) and local perceptual priors guide the attention mechanism (LPP-Attn) to benefit the enhancement process (see the sketch after this list).
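To make the guidance concrete, here is a minimal, hypothetical PyTorch sketch of how a global prior vector can modulate layer normalization in the spirit of GPP-LN (an AdaLN-style scale and shift). The module and argument names are illustrative only, not the repository's actual API:

    import torch
    import torch.nn as nn

    class GPPLayerNorm(nn.Module):
        """Illustrative sketch: layer norm modulated by a global perceptual prior."""

        def __init__(self, dim, prior_dim):
            super().__init__()
            self.norm = nn.LayerNorm(dim, elementwise_affine=False)
            # Predict a per-channel scale and shift from the global prior
            self.to_scale_shift = nn.Linear(prior_dim, 2 * dim)

        def forward(self, x, global_prior):
            # x: (B, N, dim) feature tokens; global_prior: (B, prior_dim)
            scale, shift = self.to_scale_shift(global_prior).chunk(2, dim=-1)
            return self.norm(x) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)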

Our Proposed VLM-based Generative Perceptual Priors Extraction Pipeline


Overall Framework


📢 News

2025-7-22: The pre-trained weights for LOLv1 are released, along with our generated global and local priors for LOLv1. Testing and training code is provided. ⭐
2025-6-12: This repo has been updated: the proposed VLM-based generative perceptual priors extraction pipeline has been added. ⭐

🛠️ Setup

The code was tested on:

  • Python 3.8, CUDA 11.6, and a GeForce RTX 2080 Ti (or any GPU with at least as much memory).

📦 Repository

Clone the repository (requires git):

git clone https://github.com/LowLevelAI/GPP-LLIE.git
cd GPP-LLIE

💻 Dependencies

  • Make Conda Environment:

    conda create -n gppllie python=3.8
    conda activate gppllie
  • Then install dependencies:

    pip install torch==1.13.0+cu117 torchvision==0.14.0+cu117 torchaudio==0.13.0 --extra-index-url https://download.pytorch.org/whl/cu117
    pip install pyiqa==0.1.4 pytorch-lightning==1.9.0 natsort lpips opencv-python
  • Build CUDA extensions:

    cd defor_cuda_ext
    BASICSR_EXT=True python setup.py develop
  • Move the compiled CUDA extension (/defor_cuda_ext/basicsr/ops/dcn/deform_conv_ext.xxxxxx.so) to the path /ops/dcn/, as shown below.
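For example (the exact suffix of the .so file depends on your Python version and platform, hence the wildcard):

    mv defor_cuda_ext/basicsr/ops/dcn/deform_conv_ext.*.so ops/dcn/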

🏃 Testing

📷 Download the LOLv1 dataset and our generated global and local priors.

⬇ Download the pre-trained weights for the LOLv1 dataset.

🚀 Run inference

  • Inference without GT mean adjustment:

    python test.py
  • Inference with GT mean adjustment:

    python test_adjust.py

Please update the weight path and input_dir in the test scripts; the output directory save_dir is also set there. For your convenience, results obtained on our device can be found here: With GT, Without GT.
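For context, GT mean adjustment is a common evaluation convention in LLIE: the enhanced image is rescaled so that its mean intensity matches that of the ground truth before metrics are computed. A minimal sketch of the idea (the actual logic in test_adjust.py may differ):

    import torch

    def gt_mean_adjust(pred, gt):
        # Rescale the prediction so its mean brightness matches the GT mean.
        ratio = gt.mean() / pred.mean().clamp(min=1e-6)
        return (pred * ratio).clamp(0, 1)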

🏋️ Training

Please follow the instructions above to download the dataset, the generative priors, and the pre-trained weights (the state_dict of the VAE is needed for training).

First, train the diffusion transformer model and the conditional encoder.

python train_dit.py

Second, train the second decoder for enhanced performance.

python train_decoder2.py

✏️ Contributing

Please refer to these instructions.

🎓 Citation

If you find this repo and our paper useful, please consider citing our paper:

@inproceedings{zhou2025gppllie,
  title={Low-light image enhancement via generative perceptual priors},
  author={Zhou, Han and Dong, Wei and Liu, Xiaohong and Zhang, Yulun and Zhai, Guangtao and Chen, Jun},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={39},
  number={10},
  pages={10752--10760},
  year={2025}
}

🎫 License

This work is licensed under the Apache License, Version 2.0 (as defined in the LICENSE).

By downloading and using the code and model you agree to the terms in the LICENSE.
