Xinyue Li1, Zhangkai Ni1, Wenhan Yang2
1Tongji University, 2Pengcheng Laboratory
This repository provides the official implementation for the paper "AFUNet: Cross-Iterative Alignment-Fusion Synergy for HDR Reconstruction via Deep Unfolding Paradigm", International Conference on Computer Vision (ICCV), 2025. Paper-arXiv
Existing learning-based methods effectively reconstruct HDR images from multi-exposure LDR inputs with extended dynamic range and improved detail, but they rely more on empirical design than on theoretical foundations, which can limit their reliability. To address this limitation, we propose the cross-iterative Alignment and Fusion deep Unfolding Network (AFUNet), where HDR reconstruction is systematically decoupled into two interleaved subtasks, alignment and fusion, optimized through alternating refinement, achieving synergy between the two subtasks to enhance the overall performance. Our method formulates multi-exposure HDR reconstruction from a Maximum A Posteriori (MAP) estimation perspective, explicitly incorporating spatial correspondence priors across LDR images and naturally bridging the alignment and fusion subproblems through joint constraints. Building on this mathematical foundation, we reimagine traditional iterative optimization through unfolding, transforming the conventional solution process into an end-to-end trainable AFUNet with carefully designed modules that work progressively. Specifically, each iteration of AFUNet incorporates an Alignment-Fusion Module (AFM) that alternates between a Spatial Alignment Module (SAM) for alignment and a Channel Fusion Module (CFM) for adaptive feature fusion, progressively bridging misaligned content and exposure discrepancies. Extensive qualitative and quantitative evaluations demonstrate AFUNet's superior performance, consistently surpassing state-of-the-art methods.
TL;DR: We propose a novel cross-iterative Alignment and Fusion deep Unfolding network (AFUNet), achieving superior performance. It decouples HDR reconstruction into alignment and fusion subtasks, optimized through alternating refinement.
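The cross-iterative design can be summarized as a short unfolding loop. The sketch below is a minimal schematic, not the official implementation: the SAM/CFM internals, channel counts, 6-channel per-LDR input, and number of unfolding iterations are placeholder assumptions used only to illustrate how alignment and fusion alternate across iterations.

```python
import torch
import torch.nn as nn

class SAM(nn.Module):
    """Spatial Alignment Module (placeholder): jointly refines the three LDR features."""
    def __init__(self, c):
        super().__init__()
        self.align = nn.Conv2d(3 * c, 3 * c, 3, padding=1)

    def forward(self, feats):
        x = torch.cat(feats, dim=1)                  # (B, 3c, H, W)
        return list(self.align(x).chunk(3, dim=1))   # three "aligned" feature maps

class CFM(nn.Module):
    """Channel Fusion Module (placeholder): merges aligned features into one HDR feature."""
    def __init__(self, c):
        super().__init__()
        self.fuse = nn.Conv2d(3 * c, c, 1)

    def forward(self, feats):
        return self.fuse(torch.cat(feats, dim=1))    # (B, c, H, W)

class AFM(nn.Module):
    """One unfolding iteration: align first, then fuse."""
    def __init__(self, c):
        super().__init__()
        self.sam, self.cfm = SAM(c), CFM(c)

    def forward(self, feats):
        aligned = self.sam(feats)
        return aligned, self.cfm(aligned)

class AFUNetSketch(nn.Module):
    def __init__(self, c=32, iterations=4):
        super().__init__()
        self.head = nn.Conv2d(6, c, 3, padding=1)    # per-LDR input: image + exposure-normalized image (a common HDR convention)
        self.stages = nn.ModuleList(AFM(c) for _ in range(iterations))
        self.tail = nn.Conv2d(c, 3, 3, padding=1)

    def forward(self, ldrs):                         # ldrs: list of three (B, 6, H, W) tensors
        feats = [self.head(x) for x in ldrs]
        fused = None
        for stage in self.stages:                    # alternating alignment-fusion refinement
            feats, fused = stage(feats)
        return self.tail(fused)

if __name__ == "__main__":
    ldrs = [torch.randn(1, 6, 64, 64) for _ in range(3)]
    print(AFUNetSketch()(ldrs).shape)                # torch.Size([1, 3, 64, 64])
```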
Performance comparison of various HDR reconstruction models on three widely used datasets. Results are reported in terms of PSNR-μ, PSNR-l, SSIM-μ, SSIM-l, and HDR-VDP-2. The top three results are highlighted with red, orange, and yellow backgrounds, respectively.
To start, we recommend creating the environment with conda:
conda create -n afunet
conda activate afunet
pip install -r requirements.txt
PyTorch installation is machine-dependent; please install the version that matches your CUDA/CPU setup.
Dependencies (click to expand)
- `PyTorch`, `numpy`: main computation.
- `pytorch-msssim`: SSIM calculation.
- `tqdm`: progress bar.
- `opencv-python`, `scikit-image`: image processing.
- `imageio`: image I/O.
- `einops`: torch tensor shaping with a pretty API.
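As a quick sanity check that the environment resolves all of the packages above, you can try importing them (illustrative only; import names follow the usual PyPI-to-module mapping, e.g. `opencv-python` → `cv2`, `scikit-image` → `skimage`):

```python
# Verify that the listed dependencies are importable in the afunet environment.
import torch, numpy, pytorch_msssim, tqdm, cv2, skimage, imageio, einops

print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
```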
The datasets we used are organized as follows; a minimal loading sketch is given after the directory tree.
(click to expand)
data_path
└── data
    ├── Kal
    │   ├── Training
    │   │   ├── 001
    │   │   │   ├── 262A0898.tif
    │   │   │   ├── 262A0899.tif
    │   │   │   ├── 262A0900.tif
    │   │   │   ├── exposure.txt
    │   │   │   └── HDRImg.hdr
    │   │   ├── 002
    │   │   ...
    │   │   └── 074
    │   └── Test
    │       └── Test-set
    │           ├── 001
    │           │   ├── 262A2615.tif
    │           │   ├── 262A2616.tif
    │           │   ├── 262A2617.tif
    │           │   ├── exposure.txt
    │           │   └── HDRImg.hdr
    │           ├── 002
    │           ...
    │           └── 015
    ├── Tel
    │   ├── Training
    │   │   ├── scene_0001_1
    │   │   │   ├── input_1.tif
    │   │   │   ├── input_2.tif
    │   │   │   ├── input_3.tif
    │   │   │   ├── exposure.txt
    │   │   │   └── HDRImg.hdr
    │   │   ├── scene_0001_2
    │   │   ...
    │   │   └── scene_0052_3
    │   └── Test
    │       ├── scene_0007_1
    │       │   ├── input_1.tif
    │       │   ├── input_2.tif
    │       │   ├── input_3.tif
    │       │   ├── exposure.txt
    │       │   └── HDRImg.hdr
    │       ├── scene_0007_2
    │       ...
    │       └── scene_0042_3
    └── Hu
        ├── Training
        │   ├── 001
        │   │   ├── input_1_aligned.tif
        │   │   ├── input_2_aligned.tif
        │   │   ├── input_3_aligned.tif
        │   │   ├── input_exp.txt
        │   │   └── ref_hdr_aligned_linear.hdr
        │   ├── 002
        │   ...
        │   └── 085
        └── Test
            ├── 086
            │   ├── input_1_aligned.tif
            │   ├── input_2_aligned.tif
            │   ├── input_3_aligned.tif
            │   ├── input_exp.txt
            │   └── ref_hdr_aligned_linear.hdr
            ├── 087
            ...
            └── 100
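Below is a minimal sketch of how a single scene folder from the layout above could be loaded, assuming 16-bit LDR `.tif` inputs, one exposure value (EV offset) per line in the exposure file, and a linear-domain `.hdr` label. The function name `load_scene` is hypothetical; the repository's actual dataloader may differ.

```python
import glob
import os

import cv2
import numpy as np

def load_scene(scene_dir, exposure_file_name="exposure.txt", label_file_name="HDRImg.hdr"):
    """Read the three LDR inputs, their exposure values, and the HDR label of one scene."""
    # LDR inputs sorted by name (short / medium / long exposure), normalized to [0, 1].
    ldr_paths = sorted(glob.glob(os.path.join(scene_dir, "*.tif")))
    ldrs = [cv2.imread(p, cv2.IMREAD_UNCHANGED).astype(np.float32) / 65535.0 for p in ldr_paths]

    # One exposure value (EV offset) per line.
    exposures = np.loadtxt(os.path.join(scene_dir, exposure_file_name))

    # Ground-truth HDR label; OpenCV reads Radiance .hdr files as float32 (BGR channel order).
    hdr = cv2.imread(os.path.join(scene_dir, label_file_name), cv2.IMREAD_UNCHANGED).astype(np.float32)
    return ldrs, exposures, hdr

# Example: ldrs, exposures, hdr = load_scene("data/Kal/Training/001")
```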
- Prepare the training dataset.
- Modify `--dataset_dir` in `train.py` so that it points to the directory containing `../data/Kal`, `../data/Hu`, and `../data/Tel`.
- For different datasets, modify the arguments in `train.py` as follows (an illustrative sketch of these options is given after the training command):
  - For Kalantari's dataset:
    - `--test_path`: `'Test/Test-set'`
    - `--ldr_prefix`: `''`
    - `--exposure_file_name`: `'exposure.txt'`
    - `--label_file_name`: `'HDRImg.hdr'`
  - For Tel's dataset:
    - `--test_path`: `'Test'`
    - `--ldr_prefix`: `''`
    - `--exposure_file_name`: `'exposure.txt'`
    - `--label_file_name`: `'HDRImg.hdr'`
  - For Hu's dataset:
    - `--test_path`: `'Test'`
    - `--ldr_prefix`: `'input'`
    - `--exposure_file_name`: `'input_exp.txt'`
    - `--label_file_name`: `'ref_hdr_aligned_linear.hdr'`
- Run the following command for training:
$ python train.py
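For illustration, the dataset-specific options above would correspond to argparse defaults roughly like the following (a hypothetical sketch; the real definitions live in `train.py` and may differ), shown here with Kalantari's values:

```python
import argparse

# Hypothetical parser mirroring the options listed above (Kalantari defaults).
parser = argparse.ArgumentParser(description="AFUNet training (illustrative sketch)")
parser.add_argument("--dataset_dir", type=str, default="../data/Kal")
parser.add_argument("--test_path", type=str, default="Test/Test-set")
parser.add_argument("--ldr_prefix", type=str, default="")
parser.add_argument("--exposure_file_name", type=str, default="exposure.txt")
parser.add_argument("--label_file_name", type=str, default="HDRImg.hdr")
args, _ = parser.parse_known_args()
print(args)
```

Switching to Tel's or Hu's dataset then amounts to changing these defaults to the values listed above.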
- Prepare the testing dataset.
- Modify `--dataset_dir` in `test.py` so that it points to the directory containing `../data/Kal`, `../data/Hu`, and `../data/Tel`.
- For different datasets, modify the arguments in `test.py` as follows:
  - For Kalantari's dataset:
    - `--test_path`: `'Test/Test-set'`
    - `--ldr_prefix`: `''`
    - `--exposure_file_name`: `'exposure.txt'`
    - `--label_file_name`: `'HDRImg.hdr'`
  - For Tel's dataset:
    - `--test_path`: `'Test'`
    - `--ldr_prefix`: `''`
    - `--exposure_file_name`: `'exposure.txt'`
    - `--label_file_name`: `'HDRImg.hdr'`
  - For Hu's dataset:
    - `--test_path`: `'Test'`
    - `--ldr_prefix`: `'input'`
    - `--exposure_file_name`: `'input_exp.txt'`
    - `--label_file_name`: `'ref_hdr_aligned_linear.hdr'`
- Prepare the pretrained model: modify `--pretrained_model` so that it points to the path of the pretrained model.
- To save the predicted HDR images, uncomment the following lines in `test.py`:
# save results
# cv2.imwrite(os.path.join(args.save_dir, '00{}_pred.hdr'.format(idx)), pred_hdr)
- Run the following command for testing (a sketch of the μ-law evaluation metrics is given after the command):
$ python test.py
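Once predictions are available, the PSNR-μ numbers reported above are commonly computed on μ-law tonemapped images (μ = 5000 in the standard protocol). Below is a minimal sketch, assuming the predicted and ground-truth HDR images are normalized to [0, 1]; the repository's own evaluation code may differ.

```python
import numpy as np

MU = 5000.0  # standard μ-law compression factor for HDR evaluation

def mu_tonemap(hdr):
    """Map a linear HDR image in [0, 1] to the μ-law tonemapped domain."""
    return np.log(1.0 + MU * hdr) / np.log(1.0 + MU)

def psnr(x, y, data_range=1.0):
    mse = np.mean((x - y) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

# psnr_l  = psnr(pred_hdr, gt_hdr)                          # linear-domain PSNR-l
# psnr_mu = psnr(mu_tonemap(pred_hdr), mu_tonemap(gt_hdr))  # tonemapped PSNR-μ
```

SSIM-μ can be computed analogously, e.g. with `pytorch_msssim.ssim` on the tonemapped tensors.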
Pretrained models can be found in the `./pretrain_model` folder.
This code is inspired by SCTNet. We thank the authors for the nicely organized code!
Thanks for your attention! If you have any suggestions or questions, feel free to leave a message here or contact Dr. Zhangkai Ni (eezkni@gmail.com).