ICCV 2025

Fabian Perez¹² · Sara Rojas² · Carlos Hinojosa² · Hoover Rueda-Chacón¹ · Bernard Ghanem²
¹Universidad Industrial de Santander · ²King Abdullah University of Science and Technology (KAUST)
TL;DR: We propose UnMix-NeRF, the first method integrating spectral unmixing into NeRF, enabling hyperspectral view synthesis, accurate unsupervised material segmentation, and intuitive material-based scene editing, significantly outperforming existing methods.
Neural Radiance Field (NeRF)-based segmentation methods focus on object semantics and rely solely on RGB data, lacking intrinsic material properties. This limitation restricts accurate material perception, which is crucial for robotics, augmented reality, simulation, and other applications. We introduce UnMix-NeRF, a framework that integrates spectral unmixing into NeRF, enabling joint hyperspectral novel view synthesis and unsupervised material segmentation. Our method models spectral reflectance via diffuse and specular components, where a learned dictionary of global endmembers represents pure material signatures, and per-point abundances capture their distribution. For material segmentation, we use spectral signature predictions together with the learned endmembers, allowing unsupervised material clustering. Additionally, UnMix-NeRF enables scene editing by modifying the learned endmember dictionary for flexible material-based appearance manipulation.
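For intuition, the core of the unmixing formulation is the linear mixing model: a point's predicted spectrum is a convex combination of the global endmember signatures, weighted by its abundances, plus a separately predicted specular component. The NumPy sketch below illustrates this idea only; the names, shapes, and random inputs are illustrative assumptions, not the actual implementation:

```python
import numpy as np

# Illustrative dimensions: C endmembers (materials), B spectral bands.
C, B = 6, 128

endmembers = np.random.rand(C, B)   # global dictionary of pure material spectra
logits = np.random.randn(C)         # per-point material scores from the field

# Abundances are a temperature-scaled softmax, so they are
# non-negative and sum to one.
temperature = 0.4
abundances = np.exp(logits / temperature)
abundances /= abundances.sum()

# Diffuse spectrum: convex combination of endmember signatures.
diffuse = abundances @ endmembers   # shape (B,)

# View-dependent specular residual, predicted separately (zero here).
specular = np.zeros(B)
spectrum = diffuse + specular       # final predicted reflectance, shape (B,)
```

Because the abundances come from a softmax, they are interpretable as per-material proportions at each point, which is what makes the unsupervised material clustering possible.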
UnMix-NeRF is built upon Nerfstudio. Follow these steps to install:
Follow the Nerfstudio installation guide up to and including the "tinycudann" step to install dependencies and create an environment. A GPU is required to run the method, and Python 3.10+ is required to run the code.
```bash
git clone https://github.com/Factral/UnMix-NeRF
cd UnMix-NeRF
pip install -r requirements.txt
pip install -e .
```
Run `ns-train -h` and verify that `unmixnerf` appears in the list of available methods.
UnMix-NeRF extends the standard Nerfstudio data conventions to support hyperspectral data:
```
data/
├── transforms.json          # Camera poses and intrinsics (standard)
├── images/                  # RGB images (standard)
│   ├── frame_00001.jpg
│   └── ...
├── hyperspectral/           # Hyperspectral data (NEW)
│   ├── frame_00001.npy
│   └── ...
└── segmentation/            # Ground truth segmentation (optional)
    ├── frame_00001.png
    └── ...
```
- Format: `.npy` files with dimensions `(H, W, B)`, where `H` is the image height, `W` is the image width, and `B` is the number of spectral bands (a validation snippet follows this list)
- Value Range: normalized between 0 and 1
- File Naming: must correspond to RGB images (e.g., `frame_00001.npy` ↔ `frame_00001.jpg`)
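As a quick sanity check for this format, the snippet below writes and validates one hyperspectral frame. The shapes, paths, and random data are illustrative; the only conventions taken from above are the `(H, W, B)` layout and the `[0, 1]` value range:

```python
import numpy as np
from pathlib import Path

# Hypothetical example: write one hyperspectral frame in the expected layout.
H, W, B = 512, 512, 128                        # height, width, number of bands
Path("data/hyperspectral").mkdir(parents=True, exist_ok=True)
cube = np.random.rand(H, W, B).astype(np.float32)
np.save("data/hyperspectral/frame_00001.npy", cube)

# Validate a frame against the conventions above.
frame = np.load("data/hyperspectral/frame_00001.npy")
assert frame.ndim == 3, "expected shape (H, W, B)"
assert frame.min() >= 0.0 and frame.max() <= 1.0, "values must lie in [0, 1]"
print(frame.shape)                             # e.g. (512, 512, 128)
```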
Add these fields to your `transforms.json` (`seg_file_path` is optional):
```json
{
    "frames": [
        {
            "file_path": "./images/frame_00001.jpg",
            "hyperspectral_file_path": "./hyperspectral/frame_00001.npy",
            "seg_file_path": "./segmentation/frame_00001.png",
            "transform_matrix": [...]
        }
    ]
}
```
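If you already have a standard Nerfstudio `transforms.json`, a small helper can add the hyperspectral (and optional segmentation) paths by mirroring the RGB file names. This is a hypothetical convenience script, not part of the repository:

```python
import json
from pathlib import Path

# Hypothetical helper: derive hyperspectral (and optional segmentation) paths
# from each frame's RGB file name, then rewrite transforms.json in place.
path = Path("data/transforms.json")
meta = json.loads(path.read_text())

for frame in meta["frames"]:
    stem = Path(frame["file_path"]).stem        # e.g. "frame_00001"
    frame["hyperspectral_file_path"] = f"./hyperspectral/{stem}.npy"
    seg = Path("data/segmentation") / f"{stem}.png"
    if seg.exists():                            # segmentation is optional
        frame["seg_file_path"] = f"./segmentation/{stem}.png"

path.write_text(json.dumps(meta, indent=4))
```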
We evaluate UnMix-NeRF on the following hyperspectral datasets:
- NeSpoF: synthetic scenes
- NeSpoF-Segmentation: our extended version of NeSpoF with material labels
- HS-NeRF: real-world captures from BaySpec and Surface Optics cameras
To initialize endmembers using VCA (Vertex Component Analysis), place a `vca.npy` file in your data directory:
- Format: `.npy` file with dimensions `(C, B)`, where `C` is the number of endmembers/materials and `B` is the number of spectral bands
- Usage: set `--pipeline.model.load_vca True` in the training command (a sketch for producing this file follows this list)
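The repository only expects the `(C, B)` array itself; how you compute it is up to you. The sketch below gathers pixels from the training cubes and saves a `vca.npy` in the expected format, using random pixel selection as a loudly-labeled stand-in for a real VCA implementation:

```python
import numpy as np
from pathlib import Path

C = 6                                           # number of endmembers/materials

# Stack all hyperspectral pixels into an (N, B) matrix.
cubes = [np.load(p) for p in sorted(Path("data/hyperspectral").glob("*.npy"))]
pixels = np.concatenate([c.reshape(-1, c.shape[-1]) for c in cubes], axis=0)

# Placeholder for VCA: pick C distinct random pixels as candidate endmembers.
# Replace this step with a real Vertex Component Analysis implementation.
rng = np.random.default_rng(0)
endmembers = pixels[rng.choice(len(pixels), size=C, replace=False)]  # (C, B)

np.save("data/vca.npy", endmembers.astype(np.float32))
```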
```bash
ns-train unmixnerf \
    --data <path_to_data> \
    --pipeline.num_classes <number_of_materials> \
    --pipeline.model.spectral_loss_weight <spectral_loss_weight> \
    --pipeline.model.temperature <temperature> \
    --pipeline.model.load_vca <True/False> \
    --experiment-name my_experiment \
    --vis viewer+wandb
```
| Parameter | Description | Default |
|---|---|---|
| `--pipeline.num_classes` | Number of material endmembers | 6 |
| `--pipeline.model.spectral_loss_weight` | Weight for spectral reconstruction loss | 5.0 |
| `--pipeline.model.temperature` | Temperature for abundance softmax | 0.4 |
| `--pipeline.model.load_vca` | Initialize endmembers with VCA | False |
| `--pipeline.model.pred_specular` | Enable specular component prediction | True |
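To make the temperature parameter concrete: the abundances are a softmax over per-point material logits, so a lower temperature sharpens the distribution toward a single dominant material, while a higher one yields softer mixtures. A minimal illustration with arbitrary values:

```python
import numpy as np

def abundance_softmax(logits, temperature):
    # Temperature-scaled softmax over material logits (max-shifted for stability).
    z = np.exp((logits - logits.max()) / temperature)
    return z / z.sum()

logits = np.array([2.0, 1.0, 0.5, 0.0, -0.5, -1.0])
print(abundance_softmax(logits, 0.4))   # peaked: one material dominates
print(abundance_softmax(logits, 2.0))   # softer mixture across materials
```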
Use the provided scripts in the `scripts/` directory to reproduce results on different scenes:
```bash
# Hotdog scene
bash scripts/hotdog.sh

# Anacampseros scene
bash scripts/anacampseros.sh

# Caladium scene
bash scripts/caladium.sh

# Other scenes
bash scripts/ajar.sh
bash scripts/cbox_dragon.sh
bash scripts/cbox_sphere.sh
bash scripts/pinecone.sh
```
Each script contains optimized hyperparameters for the specific scene.
UnMix-NeRF includes a customized Nerfstudio viewer that supports:
- Spectral Band Visualization: View individual wavelengths
- Material Abundance Maps: Visualize learned material distributions
- Segmentation Results: Display unsupervised material clustering
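Outside the interactive viewer, a rendered hyperspectral frame saved as `.npy` can also be inspected band by band; the file path and band index below are illustrative:

```python
import numpy as np
import matplotlib.pyplot as plt

cube = np.load("renders/frame_00001.npy")   # (H, W, B); path is illustrative
band = 42                                   # spectral band index to inspect

plt.imshow(cube[..., band], cmap="viridis")
plt.title(f"Spectral band {band}")
plt.colorbar(label="reflectance")
plt.show()
```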
After training completion, the following are automatically saved:
- Learned Endmembers: `endmembers.npy`, the final converged material signatures (see the plotting snippet below)
- Trained Model: model weights under `outputs/`
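For example, the learned endmember signatures can be plotted directly; this assumes only the `(C, B)` layout described above:

```python
import numpy as np
import matplotlib.pyplot as plt

endmembers = np.load("endmembers.npy")      # (C, B) learned material signatures
for i, signature in enumerate(endmembers):
    plt.plot(signature, label=f"material {i}")

plt.xlabel("spectral band")
plt.ylabel("reflectance")
plt.legend()
plt.show()
```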
If you find this work useful, please cite our paper:
```bibtex
@inproceedings{perez2025unmix,
    title={UnMix-NeRF: Spectral Unmixing Meets Neural Radiance Fields},
    author={Perez, Fabian and Rojas, Sara and Hinojosa, Carlos and Rueda-Chac{\'o}n, Hoover and Ghanem, Bernard},
    booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
    year={2025}
}
```
- Built upon the Nerfstudio framework