
UniSOD

This repository provides the source code and results for the paper entitled "Unified-modal Salient Object Detection via Adaptive Prompt Learning".

arXiv version: https://arxiv.org/abs/2311.16835.

Thank you for your attention.

🎉 News 🎉 (July 2025)

We are pleased to announce that our paper has been accepted by TCSVT 2025! 🙏 Thank you for your continued interest and support!

Citing our work

If you find our work helpful, please cite:

@article{wang2025unified,
  title={Unified-modal salient object detection via adaptive prompt learning},
  author={Wang, Kunpeng and Tu, Zhengzheng and Li, Chenglong and Liu, Zhengyi and Luo, Bin},
  journal={IEEE Transactions on Circuits and Systems for Video Technology},
  year={2025},
  publisher={IEEE}
}

Overview

Framework


Baseline SOD framework


RGB SOD Performance


RGB-D SOD Performance


RGB-T SOD Performance


Predictions

The predicted RGB, RGB-D, and RGB-T saliency maps can be found here [Baidu Pan fetch code: vpvt].

Pretrained Models

The pretrained parameters of our models can be found here [Baidu Pan fetch code: o8yx].

Usage

Requirements

  1. Download the datasets for training and testing from here [Baidu Pan fetch code: 2sfr].
  2. Download the pretrained parameters of the backbone from here [Baidu Pan fetch code: mad3].
  3. Organize the dataset directories for pre-training and fine-tuning.
  4. Create the directories for the experiment and parameter files.
  5. Use conda to install torch 1.12.0 and torchvision 0.13.0 (example commands are given after this list).
  6. Install the other packages: pip install -r requirements.txt.
  7. Set the paths of all datasets in ./options.py.
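
For step 5, a minimal environment setup could look like the commands below; the environment name and Python version are assumptions, not taken from the repository:

conda create -n unisod python=3.8
conda activate unisod
pip install torch==1.12.0 torchvision==0.13.0
pip install -r requirements.txt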

Pre-train

python -m torch.distributed.launch --nproc_per_node=2 --master_port=2024 train_parallel.py
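
The launcher starts one process per GPU, so --nproc_per_node=2 assumes two GPUs. If your machine has a different number of GPUs (four in the illustrative command below), adjust the flag accordingly; the same applies to the fine-tuning command:

python -m torch.distributed.launch --nproc_per_node=4 --master_port=2024 train_parallel.py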

Fine-tuning

python -m torch.distributed.launch --nproc_per_node=2 --master_port=2024 train_parallel_multi.py

Test

python test_produce_maps.py
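
This script writes the predicted saliency maps to disk. For a quick sanity check against ground truth, a minimal MAE (mean absolute error) computation is sketched below; the two directory paths are placeholders, and the sketch assumes grayscale predictions and masks of the same size with matching filenames:

import os
import numpy as np
from PIL import Image

pred_dir = './maps/VT821'       # placeholder: your predicted saliency maps
gt_dir = './datasets/VT821/GT'  # placeholder: the ground-truth masks

maes = []
for name in sorted(os.listdir(gt_dir)):
    # Load prediction and mask as grayscale, normalized to [0, 1].
    pred = np.asarray(Image.open(os.path.join(pred_dir, name)).convert('L'), dtype=np.float64) / 255.0
    gt = np.asarray(Image.open(os.path.join(gt_dir, name)).convert('L'), dtype=np.float64) / 255.0
    maes.append(np.abs(pred - gt).mean())

print('MAE over %d images: %.4f' % (len(maes), np.mean(maes)))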

Acknowledgement

The implementation of this project is based on the following link.

Contact

If you have any questions, please contact us (kp.wang@foxmail.com).