This repository contains the code and slide presentation for our final project in the Computational Imaging course at IMT Atlantique (Brest, 2025). Members:
- Nathan SALBEGO
- Xueyun WENG
This project is inspired by this paper. The original repository is available here.
The Trainable Spectral-Spatial Sparse Coding model (T3SC) is a powerful hybrid approach that combines deep learning and sparse coding to effectively denoise hyperspectral images. It is a 2-layer architecture:
- The first layer decomposes the spectrum measured at each pixel as a sparse linear combination of a few elements from a learned dictionary, thus performing a form of linear spectral unmixing per pixel.
- The second layer builds upon the output of the first one, which is represented as a two-dimensional feature map, and sparsely encodes patches on a dictionary in order to take into account spatial relationships between pixels within small receptive fields.
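For intuition only, here is a rough PyTorch sketch of this two-layer idea: a spectral sparse coding layer implemented as unrolled ISTA with a 1×1 (per-pixel) dictionary, followed by a spatial layer that encodes patches with a 5×5 dictionary. This is not the T3SC implementation; all module names, dictionary sizes, and the number of iterations are illustrative assumptions.

```python
# Illustrative sketch of a two-layer spectral-spatial sparse coding block.
# NOT the official T3SC code: layer sizes, iteration counts and names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def soft_threshold(x, lam):
    """Proximal operator of the L1 norm (promotes sparse codes)."""
    return torch.sign(x) * F.relu(x.abs() - lam)


class SparseCodingLayer(nn.Module):
    """Unrolled ISTA: encode the input on a learned convolutional dictionary."""

    def __init__(self, in_ch, n_atoms, kernel_size=1, n_iters=5):
        super().__init__()
        pad = kernel_size // 2
        # Analysis (encoder) and synthesis (decoder) dictionaries.
        self.encode = nn.Conv2d(in_ch, n_atoms, kernel_size, padding=pad)
        self.decode = nn.Conv2d(n_atoms, in_ch, kernel_size, padding=pad)
        self.lam = nn.Parameter(torch.full((1, n_atoms, 1, 1), 0.1))
        self.n_iters = n_iters

    def forward(self, x):
        # ISTA-style iterations: z <- soft(z + E(x - D z), lambda)
        z = soft_threshold(self.encode(x), self.lam)
        for _ in range(self.n_iters):
            residual = x - self.decode(z)
            z = soft_threshold(z + self.encode(residual), self.lam)
        return z, self.decode(z)


class TwoLayerSparseDenoiser(nn.Module):
    """Spectral layer (1x1, per-pixel) followed by a spatial layer (5x5 patches)."""

    def __init__(self, n_bands=191, spectral_atoms=64, spatial_atoms=128):
        super().__init__()
        self.spectral = SparseCodingLayer(n_bands, spectral_atoms, kernel_size=1)
        self.spatial = SparseCodingLayer(spectral_atoms, spatial_atoms, kernel_size=5)
        # Map the spatial reconstruction back to the original number of bands.
        self.head = nn.Conv2d(spectral_atoms, n_bands, kernel_size=1)

    def forward(self, x):
        codes1, _ = self.spectral(x)      # per-pixel spectral unmixing
        _, recon2 = self.spatial(codes1)  # patch-level spatial coding
        return self.head(recon2)


if __name__ == "__main__":
    y = torch.randn(1, 191, 64, 64)  # (batch, bands, H, W) toy input
    print(TwoLayerSparseDenoiser()(y).shape)  # -> torch.Size([1, 191, 64, 64])
```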
Starting from the code of the original repository, we tested some of the pre-trained models. As part of this project, we focused mainly on exploring the performance of the model under complex noise on the Washington DC Mall (dcmall) dataset.
Washington DC Mall is one of the most widely used datasets for HSI denoising; it consists of a high-quality image of size 1280 × 307 pixels with 191 spectral bands. We split the image into two sub-images of size 600 × 307 and 480 × 307 for training and one sub-image of size 200 × 200 for testing.
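A minimal sketch of how such a split could be produced with NumPy is shown below; the file name, loading routine, and crop offsets are assumptions, only the sub-image sizes come from the description above.

```python
# Hypothetical sketch of the train/test split described above.
# Only the sub-image sizes (600x307, 480x307 for training, 200x200 for test)
# come from the text; the file name and crop offsets are assumptions.
import numpy as np

cube = np.load("dc_mall.npy")        # assumed shape: (1280, 307, 191)

train_1 = cube[:600, :, :]           # 600 x 307 x 191
train_2 = cube[600:1080, :, :]       # 480 x 307 x 191
test = cube[1080:1280, 50:250, :]    # 200 x 200 x 191 (offset is arbitrary)

for name, sub in [("train_1", train_1), ("train_2", train_2), ("test", test)]:
    print(name, sub.shape)
```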
No pre-trained model is available for the Washington DC Mall hyperspectral dataset. Therefore, we trained a model with Noise Adaptive Sparse Coding (model.beta=1) on dcmall with band-dependent Gaussian noise:
```
$ python main.py data=dcmall model.beta=1 noise=uniform noise.params.sigma_max=55
```
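For intuition, band-dependent Gaussian noise means each band gets its own standard deviation. A minimal NumPy sketch is given below, assuming each band's sigma is drawn uniformly in [0, sigma_max] and pixel values are on a 0–255 scale (both assumptions, not taken from the repository's noise code).

```python
# Sketch of band-dependent Gaussian noise (one sigma per band).
# Assumption: each band's sigma is drawn uniformly in [0, sigma_max],
# with pixel values on a 0-255 scale.
import numpy as np

rng = np.random.default_rng(0)

def add_band_dependent_noise(cube, sigma_max=55):
    """cube: (H, W, B) hyperspectral image in [0, 255]."""
    h, w, b = cube.shape
    sigmas = rng.uniform(0, sigma_max, size=b)               # one sigma per band
    noise = rng.normal(0.0, 1.0, size=cube.shape) * sigmas   # broadcast over bands
    return cube + noise, sigmas
```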
During training, we track several metrics: the loss, the MSE, and the Mean Peak Signal-to-Noise Ratio (MPSNR).
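For clarity, MPSNR is the PSNR computed per band and then averaged over all bands; a minimal NumPy sketch (assuming a 0–255 dynamic range) is:

```python
# MPSNR: PSNR computed per band, then averaged over all bands.
# Assumes an 8-bit-like dynamic range (peak value 255).
import numpy as np

def mpsnr(clean, denoised, peak=255.0):
    """clean, denoised: (H, W, B) arrays on the same scale."""
    psnrs = []
    for band in range(clean.shape[-1]):
        mse = np.mean((clean[..., band] - denoised[..., band]) ** 2)
        psnrs.append(10.0 * np.log10(peak ** 2 / mse))
    return float(np.mean(psnrs))
```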
At epoch 11, we obtained the best trade-off between loss and MPSNR, so we kept this checkpoint as our best model.
Since we trained the model with Gaussian noise with band-dependent variance, we evaluate the checkpoint with the same noise model at different values of sigma_max. To run inference with the checkpoint:
```
$ python main.py mode=test data=dcmall model.beta=1 noise=uniform noise.params.sigma_max=x model.ckpt="/path/to/ckpt"
```
where `x` is the value of `noise.params.sigma_max` used for the test noise and `/path/to/ckpt` is the path to the saved checkpoint.
NOTE: since the model was trained with Noise Adaptive Sparse Coding (model.beta=1), it should also be tested with model.beta=1.
It can be observed that even for noise levels much lower than the training sigma_max, the model still denoises the image effectively.
Below are the noisy input images and reconstructed output images for three different values of sigma_max:
| sigma_max | Noisy input | Reconstructed output | MPSNR in (dB) | MPSNR out (dB) | MSSIM in | MSSIM out |
|---|---|---|---|---|---|---|
| 15 | ![]() | ![]() | 22.46 | 38.63 | 0.74 | 0.99 |
| 55 | ![]() | ![]() | 33.32 | 4.57 | 0.95 | -0.03 |
| 85 | ![]() | ![]() | 19.02 | 37.36 | 0.62 | 0.99 |