High Dynamic Range Novel View Synthesis (HDR-NVS) aims to establish a 3D scene HDR model from Low Dynamic Range (LDR) imagery. Typically, multiple-exposure LDR images are employed to capture a wider range of brightness levels in a scene, as a single LDR image cannot represent both the brightest and darkest regions simultaneously. While effective, this multiple-exposure HDR-NVS approach has significant limitations, including susceptibility to motion artifacts (e.g., ghosting and blurring) and high capture and storage costs. To overcome these challenges, we introduce, for the first time, the single-exposure HDR-NVS problem, where only single-exposure LDR images are available during training. We further propose a novel approach, Mono-HDR-3D, featuring two dedicated modules formulated according to the LDR image formation principles: one converts LDR colors to their HDR counterparts, and the other transforms HDR images back to LDR format so that unsupervised learning is enabled in a closed loop. Designed as a meta-algorithm, our approach can be seamlessly integrated with existing NVS models. Extensive experiments show that Mono-HDR-3D significantly outperforms previous methods.
Given single-exposure LDR images (with corresponding camera poses) as input, we first learn an LDR 3D scene model (e.g., NeRF or 3DGS). Then, we elevate this LDR model to an HDR counterpart via our camera-imaging-aware LDR-to-HDR Color Converter (L2H-CC). Additionally, we introduce a latent HDR-to-LDR Color Converter (H2L-CC) as a closed-loop component, enabling the optimization of HDR features even when only LDR training images are available and keeping the framework robust in the absence of ground-truth HDR data.
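To make the closed loop concrete, below is a minimal sketch, not the repository's actual code: the module internals, class names, and the gamma-style camera response curve are all our assumptions. It shows how LDR-only supervision can still reach the HDR branch: colors rendered by the LDR 3D model are lifted to latent HDR by an L2H-CC-style converter, mapped back to LDR by an H2L-CC-style converter, and compared against the single-exposure LDR ground truth.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class L2HColorConverter(nn.Module):
    """Hypothetical stand-in for L2H-CC: lifts rendered LDR colors to
    non-negative linear HDR radiance (an inverse camera response)."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Softplus(),  # radiance must be >= 0
        )

    def forward(self, ldr: torch.Tensor) -> torch.Tensor:
        return self.mlp(ldr)

class H2LColorConverter(nn.Module):
    """Hypothetical stand-in for H2L-CC, following the LDR image formation
    principle: scale radiance by exposure, saturate, then apply a
    learnable gamma-style camera response curve."""
    def __init__(self):
        super().__init__()
        self.log_exposure = nn.Parameter(torch.zeros(1))        # log exposure time
        self.inv_gamma = nn.Parameter(torch.tensor(1.0 / 2.2))  # response curve shape

    def forward(self, hdr: torch.Tensor) -> torch.Tensor:
        exposed = hdr * self.log_exposure.exp()
        return exposed.clamp(1e-6, 1.0) ** self.inv_gamma       # saturate, then CRF

# Closed-loop supervision from LDR-only ground truth.
l2h, h2l = L2HColorConverter(), H2LColorConverter()
ldr_render = torch.rand(4096, 3)   # colors rendered by the LDR 3D model
ldr_gt = torch.rand(4096, 3)       # single-exposure LDR training pixels
hdr_pred = l2h(ldr_render)         # lift to latent HDR
ldr_cycle = h2l(hdr_pred)          # map back to LDR
loss = F.mse_loss(ldr_cycle, ldr_gt)  # gradient reaches both converters
loss.backward()
```

Because the H2L path closes the loop back into LDR space, the latent HDR prediction can be optimized even though no HDR ground truth is ever observed.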
git clone https://github.com/prinasi/Mono-HDR-3D.git
cd Mono-HDR-3D
cd Mono-HDR-GS
conda create -n mono-hdr-gs python=3.9 -y
conda activate mono-hdr-gs
pip install -r requirements.txt
cd ../Mono-HDR-NeRF
conda create -n mono-hdr-nerf python=3.9 -y
conda activate mono-hdr-nerf
pip install -r requirements.txt
We use an HDR dataset (multi-view and multi-exposure) that contains 8 synthetic scenes rendered with Blender and 4 real scenes captured by a digital camera. In the real dataset, images are collected at 35 different poses, with 5 different exposure times at each pose.
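Since training uses only single-exposure images, each pose should contribute exactly one LDR image, all taken at the same exposure time. The sketch below illustrates drawing such a split from the multi-exposure dataset; the metadata layout, file names, and exposure values are illustrative assumptions, not the dataset's actual format.

```python
# Hypothetical metadata: (image_path, pose_id, exposure_time) per LDR image.
records = [
    ("r_000_e0.png", 0, 0.125), ("r_000_e1.png", 0, 0.5),
    ("r_001_e0.png", 1, 0.125), ("r_001_e1.png", 1, 0.5),
]

def single_exposure_split(records, exposure_time):
    """Keep one image per pose, all captured at the same exposure."""
    return [r for r in records if r[2] == exposure_time]

train_views = single_exposure_split(records, exposure_time=0.5)
# Sanity check: no pose appears twice in the single-exposure split.
assert len({pose for _, pose, _ in train_views}) == len(train_views)
```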
For both Mono-HDR-GS and Mono-HDR-NeRF, we provide a bash script to train the model:
bash single_train.sh <scene_name> <gpu_id>
For example, to train Mono-HDR-GS on the flower scene on GPU 0, run:
cd Mono-HDR-GS
bash single_train.sh flower 0
Intermediate results and models are saved in output/mlp/<scene_name>.
@inproceedings{zhang2025high,
title={High Dynamic Range Novel View Synthesis with Single Exposure},
author={Zhang, Kaixuan and Wang, Hu and Li, Minxian and Ren, Mingwu and Ye, Mao and Zhu, Xiatian},
booktitle={Forty-second International Conference on Machine Learning},
year={2025}
}
Our code is based on the PyTorch implementations of HDR-GS and HDR-NeRF. We appreciate all the contributors.
