Longfei Huang1,2*, Yu Liang2*, Hao Zhang2,
Jinwei Chen2, Wei Dong2,
Lunde Chen1, Wanyu Liu1, Bo Li2,
Peng-Tao Jiang2†
1 Shanghai University
2 vivo Mobile Communication Co., Ltd.
* Equal contribution
† Corresponding author
SDMatte is an interactive image matting method built on Stable Diffusion. It supports three types of visual prompts (points, boxes, and masks) for accurately extracting target objects from natural images.
- [2025.08.16] Released the LiteSDMatte weights.
- [2025.08.05] Released the SDMatte and SDMatte* weights.
- [2025.08.04] Released evaluation codes.
- [2025.08.01] Published the arXiv preprint.
- [2025.07.31] This repo is created.
- [2025.06.26] Paper accepted by ICCV 2025.
If your work builds on SDMatte and you would like more people to see it, please let us know.
- ComfyUI-SDMatte, a ComfyUI custom node built on SDMatte, offers interactive high-precision image matting with refined edge-detail preservation and optimized VRAM efficiency.
- ComfyUI-RMBG, a ComfyUI custom node that incorporates SDMatte into its framework. This integration highlights SDMatte's practical applicability, and further adds real-time background replacement and enhanced edge refinement for improved accuracy.
Recent interactive matting methods have demonstrated satisfactory performance in capturing the primary regions of objects, but they fall short in extracting fine-grained details in edge regions. Diffusion models, trained on billions of image-text pairs, demonstrate exceptional capability in modeling highly complex data distributions and synthesizing realistic texture details, and they exhibit robust text-driven interaction capabilities, making them an attractive solution for interactive matting. To this end, we propose SDMatte, a diffusion-driven interactive matting model, with three key contributions. First, we exploit the powerful priors of the pre-trained U-Net within diffusion models and transform the text-driven interaction mechanism into a visual prompt-driven one to enable interactive matting. Second, we integrate coordinate embeddings of visual prompts and opacity embeddings of objects into the U-Net, enhancing SDMatte's sensitivity to spatial position and opacity information. Third, we propose a masked self-attention mechanism that enables the model to focus on areas specified by visual prompts, leading to better performance. Extensive experiments on multiple datasets demonstrate the superior performance of our method, validating its effectiveness in interactive matting.
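To make the masked self-attention idea concrete, below is a minimal, self-contained PyTorch sketch of visual-prompt-gated self-attention. It illustrates the general technique only and is not the authors' implementation: the function name, shapes, shared q/k/v projection, and additive-bias masking scheme are all assumptions for illustration.

```python
import torch

def masked_self_attention(x, prompt_mask, num_heads=8):
    """Suppress attention to tokens outside a visual-prompt region.

    x: (B, N, C) token features; prompt_mask: (B, N) binary, 1 = inside the prompt.
    """
    B, N, C = x.shape
    head_dim = C // num_heads
    # Single shared projection for brevity; real attention blocks learn
    # separate q/k/v projections.
    q = k = v = x.reshape(B, N, num_heads, head_dim).transpose(1, 2)  # (B, H, N, D)
    logits = (q @ k.transpose(-2, -1)) / head_dim ** 0.5              # (B, H, N, N)
    # Large negative bias on keys outside the prompted region, so the softmax
    # concentrates attention on the object the prompt indicates.
    bias = (1.0 - prompt_mask)[:, None, None, :] * -1e4               # (B, 1, 1, N)
    attn = (logits + bias).softmax(dim=-1)
    return (attn @ v).transpose(1, 2).reshape(B, N, C)                # (B, N, C)

# Toy usage: a 4x4 feature map flattened to 16 tokens, with a box prompt
# rasterized over the top-left 2x2 patch.
x = torch.randn(1, 16, 64)
box_mask = torch.zeros(1, 4, 4)
box_mask[:, :2, :2] = 1.0
print(masked_self_attention(x, box_mask.flatten(1)).shape)  # torch.Size([1, 16, 64])
```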
- Create a conda virtual env and activate it.
conda create -n SDMatte python=3.10
conda activate SDMatte
- Install packages.
cd path/to/SDMatte
pip install -r requirements.txt
- Install detectron2 following its documentation. For SDMatte, we recommend building it from the latest source code:
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
- To train SDMatte and LiteSDMatte, please prepare the following datasets: Composition-1K, DIS-646, AM-2K, UHRSD, RefMatte, and BG-20K.
- To train SDMatte*, please prepare the following datasets: Composition-1K, DIS-646, AM-2K, COCO-Matte, and BG-20K.
- To evaluate SDMatte, please prepare the following test datasets: AIM-500, AM-2K, P3M-500, and RefMatte-RW-100.
- Check lines 15–56 and 529–530 in SDMatte/data/dataset.py and update the data paths to point to your own datasets (see the sketch after this list).
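For reference, the entries at those lines are ordinary Python path assignments. A hypothetical sketch of the kind of edit involved follows; the variable names here are illustrative, not the repo's actual identifiers, so check the file itself.

```python
# Illustrative only: see SDMatte/data/dataset.py (lines 15-56 and 529-530)
# for the repo's actual variable names.
DATA_ROOT = "/path/to/your/datasets"                # hypothetical root directory
COMPOSITION_1K_DIR = f"{DATA_ROOT}/Composition-1K"  # point each dataset entry
BG20K_DIR = f"{DATA_ROOT}/BG-20K"                   # at your local copy
```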
- Download the weights and configurations of SDMatte and SDMatte* from the Hugging Face repository LongfeiHuang/SDMatte.
- Download the weights and configuration of LiteSDMatte from the Hugging Face repository LongfeiHuang/LiteSDMatte.
- Modify the pretrained_model_name_or_path field in configs/SDMatte.py to the directory path containing the configuration files, so that the model can be properly initialized (see the sketch after this list).
- Modify the CKPT_DIR parameter in script/test_SDMatte.sh or script/test_LiteSDMatte.sh to the path of the downloaded weight file.
- Run one of the following commands to evaluate SDMatte or LiteSDMatte.
bash script/test_SDMatte.sh
bash script/test_LiteSDMatte.sh
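For reference, the config edit above amounts to a single field. A hypothetical sketch follows; the field name comes from this README, while the value is a placeholder for your download location.

```python
# configs/SDMatte.py (illustrative value; point it at the directory that
# holds the configuration files downloaded from Hugging Face)
pretrained_model_name_or_path = "/path/to/downloaded/SDMatte"
```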
- Publish paper on arXiv
- Release source code for SDMatte
- Release evaluation codes
- Release pretrained weights for SDMatte and SDMatte*
- Release source code for LiteSDMatte
- Release pretrained weights for LiteSDMatte
- Release training code
- Deploy interactive demo using Gradio or on Hugging Face Spaces
This project is licensed under the MIT License. Redistribution and use must comply with its terms.
If you find this repository or our work useful, please consider citing us:
@inproceedings{huang2025sdmatte,
title={SDMatte: Grafting Diffusion Models for Interactive Matting},
author={Huang, Longfei and Liang, Yu and Zhang, Hao and Chen, Jinwei and Dong, Wei and Chen, Lunde and Liu, Wanyu and Li, Bo and Jiang, Peng-Tao},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
year={2025}
}
Our repo is built upon Stable Diffusion 2, TAESD, and BK-SDM. We sincerely thank the authors for their contributions to the community.
If you have any questions, please feel free to reach us at 2946399650fly@shu.edu.cn or pt.jiang@vivo.com.