From Zero to Detail: Deconstructing Ultra-High-Definition Image Restoration from Progressive Spectral Perspective (CVPR'2025)
Zhao Chen1* · Zhizhou Chen1* · Yunzhe Xu1 · Enxuan Gu2 · Jian Li3 · Zili Yi1 · Qian Wang4 · Jian Yang1 · Ying Tai1✉
1Nanjing University · 2Dalian University of Technology · 3Tencent Youtu · 4China Mobile
We are the first to propose a hybrid architecture for low-level vision, ERR, which integrates state-of-the-art operators, including Transformers, Mamba, and KAN.
Abstract: Ultra-high-definition (UHD) image restoration faces significant challenges due to its high resolution, complex content, and intricate details. To cope with these challenges, we analyze the restoration process in depth through a progressive spectral perspective, and deconstruct the complex UHD restoration problem into three progressive stages: zero-frequency enhancement, low-frequency restoration, and high-frequency refinement. Building on this insight, we propose a novel framework, ERR, which comprises three collaborative sub-networks: the zero-frequency enhancer (ZFE), the low-frequency restorer (LFR), and the high-frequency refiner (HFR). Specifically, the ZFE integrates global priors to learn global mapping, while the LFR restores low-frequency information, emphasizing reconstruction of coarse-grained content. Finally, the HFR employs our designed frequency-windowed Kolmogorov-Arnold Networks (FW-KAN) to refine textures and details, producing high-quality image restoration. Our approach significantly outperforms previous UHD methods across various tasks, with extensive ablation studies validating the effectiveness of each component.
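The progressive spectral view above can be illustrated with a minimal FFT-based decomposition: the zero-frequency (DC) component captures the global brightness that the ZFE operates on, a centered low-frequency band carries the coarse content addressed by the LFR, and the remaining high frequencies hold the textures refined by the HFR. This is only a conceptual sketch; the `low_ratio` cutoff and the square frequency window are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def spectral_decompose(img, low_ratio=0.25):
    """Split a 2D image into zero-, low-, and high-frequency components.

    Illustrative sketch of the progressive spectral perspective;
    `low_ratio` is a hypothetical cutoff, not the authors' setting.
    """
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    cy, cx = h // 2, w // 2

    # Zero frequency: the DC component alone (global brightness / mean).
    zero_mask = np.zeros(f.shape, dtype=bool)
    zero_mask[cy, cx] = True

    # Low frequency: a centered square window (coarse content), DC excluded.
    ry, rx = max(1, int(h * low_ratio / 2)), max(1, int(w * low_ratio / 2))
    low_mask = np.zeros(f.shape, dtype=bool)
    low_mask[cy - ry:cy + ry, cx - rx:cx + rx] = True
    low_mask &= ~zero_mask

    # High frequency: everything else (textures and fine details).
    high_mask = ~(zero_mask | low_mask)

    def back(mask):
        return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

    return back(zero_mask), back(low_mask), back(high_mask)
```

Because the three masks partition the spectrum, the components sum exactly back to the input image, which mirrors how the three sub-networks jointly cover the full restoration problem.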
- 2025.3.21: 🍎 Our code, pretrained weights, and visualization results are released!
- 2025.2.27: 🎯 Our paper is accepted by CVPR 2025!
- CUDA >= 11.8
- PyTorch >= 2.1.1
- torchvision >= 0.16.1
- `kat_rational` needs to be installed from https://github.com/Adamdad/rational_kat_cu
```shell
# create conda env
conda create -n ERR python=3.10 -y
conda activate ERR

# install python dependencies
pip install -r requirement.txt
```
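After installation, a quick sanity check against the version requirements above can save a failed training run. The helper below is a generic sketch (not part of this repo); it compares dotted version strings numerically, ignoring local suffixes such as `+cu118` that PyTorch wheels append.

```python
def version_at_least(version, minimum):
    """True if `version` >= `minimum`, comparing dotted components numerically
    and ignoring local-build suffixes like '+cu118'."""
    parse = lambda v: tuple(int(p) for p in v.split("+")[0].split(".")[:3])
    return parse(version) >= parse(minimum)

# Verify the installed stack meets the requirements listed above
# (skips gracefully if torch/torchvision are not installed yet).
try:
    import torch, torchvision
    assert version_at_least(torch.__version__, "2.1.1"), "PyTorch too old"
    assert version_at_least(torchvision.__version__, "0.16.1"), "torchvision too old"
    print("CUDA available:", torch.cuda.is_available())
except ImportError:
    print("torch/torchvision not installed yet")
```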
You can get the training and test datasets from the following links:
| Dataset | Link |
| :--- | :--- |
| UHD-LL | UHD-LL |
| UHD-Haze | UHD-Haze |
| UHD-Blur | UHD-Blur |
| 4K-Rain13k | 4K-Rain13k |
We provide pretrained models for UHD-LL, UHD-Haze, UHD-Blur, and 4K-Rain13k. You can download them from ERR_huggingface.
We also provide visual results for comparisons and ablations. You can download them from ERR_huggingface.
After preparing the training data, run:

```shell
bash train.sh
```

After preparing the testing data, run:

```shell
bash test.sh
```
If you use our work, please consider citing:
```bibtex
@inproceedings{zhao2025ERR,
  title={From Zero to Detail: Deconstructing Ultra-High-Definition Image Restoration from Progressive Spectral Perspective},
  author={Chen Zhao and Zhizhou Chen and Yunzhe Xu and Enxuan Gu and Jian Li and Zili Yi and Qian Wang and Jian Yang and Ying Tai},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2025}
}
```
Should you have any questions, please contact 2518628273@qq.com.