Domain-Incremental Learning for Remote Sensing Semantic Segmentation with Multifeature Constraints in Graph Space
- News
- Abstract
- Dependencies and installation
- Dataset
- Usage
- Models and results
- Citation
- Acknowledgement
- Contact
- [2024-11-10] The models have been released.
- [2024-11-01] The code has been released.
- [2024-10-14] The paper has been accepted by IEEE Transactions on Geoscience and Remote Sensing (TGRS).
The use of deep learning techniques for semantic segmentation in remote sensing has become increasingly prevalent. Effectively modeling long-range contextual information and integrating high-level abstract features with low-level spatial features are critical challenges for semantic segmentation tasks. This paper addresses these challenges by constructing a Graph Space Reasoning (GSR) module and a Dual-channel Cross Attention Upsampling (DCAU) module. Meanwhile, a new domain-incremental learning (DIL) framework is designed to alleviate catastrophic forgetting when a deep learning model is applied across domains. This framework balances retaining prior knowledge and acquiring new information through frozen feature layers and multi-feature joint loss optimization. Based on this, a new framework for domain-incremental learning of remote sensing semantic segmentation with multifeature constraints in graph space (GSMF-RS-DIL) is proposed. Extensive experiments, including ablation studies on the ISPRS and LoveDA datasets, demonstrate that the proposed method achieves superior performance and optimal computational efficiency in both single-domain and cross-domain tasks. The code is publicly available at https://github.com/HuangWBill/GSMF-RS-DIL.
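The retention/plasticity balance described above can be sketched in a few lines. This is a minimal, framework-agnostic illustration, not code from this repository: `freeze` and `joint_loss` are hypothetical names, and the loss values and weights are made-up examples, not the paper's exact formulation.

```python
# Sketch of the DIL recipe from the abstract: freeze the shared feature
# layers learned on the old domain, then train on the new domain with a
# joint loss (task term + weighted multi-feature constraint terms).
# All names, values, and weights below are illustrative assumptions.

def freeze(layers):
    """Mark the shared feature layers as non-trainable to retain old-domain knowledge."""
    for layer in layers:
        layer["trainable"] = False
    return layers

def joint_loss(task_loss, feature_losses, weights):
    """New-domain task loss plus weighted multi-feature constraint terms."""
    assert len(feature_losses) == len(weights)
    return task_loss + sum(w * l for w, l in zip(weights, feature_losses))

# Hypothetical example: two frozen backbone stages, one task loss,
# and two feature-constraint terms with their weights.
backbone = freeze([{"name": "layer1", "trainable": True},
                   {"name": "layer2", "trainable": True}])
total = joint_loss(0.8, [0.3, 0.1], [0.5, 1.0])
```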
# 0. Basic environment
anaconda, cuda==11.1
# 1. create new anaconda env
conda create -n GSMF_RS_DIL python=3.8
conda activate GSMF_RS_DIL
# 2. git clone this repository
git clone https://github.com/HuangWBill/GSMF-RS-DIL.git
cd GSMF-RS-DIL
# 3. install torch and dependencies
pip install -r requirements.txt
# The mmcv, mmengine, mmsegmentation, torch, torchaudio and torchvision version requirements are strict; install the exact versions given in requirements.txt.
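Since those version pins are strict, it can help to print the installed versions and compare them against requirements.txt by eye. This helper is a sketch, not part of the repository; the import names are assumptions (e.g. mmsegmentation imports as `mmseg`).

```python
# Report installed versions of the strictly-pinned packages (None = not
# installed or wrong environment). Import names are assumptions.
import importlib

def report_versions(names):
    versions = {}
    for name in names:
        try:
            module = importlib.import_module(name)
            versions[name] = getattr(module, "__version__", "unknown")
        except ImportError:
            versions[name] = None  # not importable in this environment
    return versions

print(report_versions(["mmcv", "mmengine", "mmseg",
                       "torch", "torchaudio", "torchvision"]))
```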
Dataset | Class | Link | Storage path |
---|---|---|---|
Potsdam | impervious surfaces, building, low vegetation, tree, car, background | [ISPRS] | data\Potsdam_IRRG_tif_512 |
Vaihingen | impervious surfaces, building, low vegetation, tree, car, background | [ISPRS] | data\Vaihingen_IRRG_tif_512 |
Urban | buildings, road, water, barren, forest, agriculture, background | [LoveDA] | data\LoveDA_Urban_512 |
Rural | buildings, road, water, barren, forest, agriculture, background | [LoveDA] | data\LoveDA_Rural_512 |
- The datasets used in the paper are all publicly available and can be downloaded and preprocessed according to the description in the paper.
- Organize the data strictly according to the example data layout.
- The ISPRS_IRRG dataset consists of the Potsdam and Vaihingen datasets, and the LoveDA_all dataset consists of the Urban and Rural datasets.
- The GS-AUFPN experiments use Potsdam_IRRG_tif_512 or LoveDA_Urban_512; the GSMF-RS-DIL experiments use ISPRS_IRRG_512 or LoveDA_all_512.
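Before training, it can help to confirm that a dataset folder matches the expected layout. The sketch below assumes the usual MMSegmentation `img_dir`/`ann_dir` convention with `train`/`val` splits; if the example data in this repository is organized differently, adjust the names. `missing_dirs` is an illustrative helper, not part of the repository.

```python
# Hypothetical layout check, assuming an MMSegmentation-style
# img_dir/ann_dir structure with train/val splits.
from pathlib import Path

def missing_dirs(root):
    """Return the expected sub-directories that are absent under `root`."""
    expected = [Path(root) / sub / split
                for sub in ("img_dir", "ann_dir")
                for split in ("train", "val")]
    return [str(p) for p in expected if not p.is_dir()]

# e.g. missing_dirs(r"data\Potsdam_IRRG_tif_512") returns [] when the
# dataset is organized as expected.
```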
# train GS-AUFPN in Potsdam
python tools/train/train_GS-AUFPN.py --config configs/gs_aufpn_r101-d8_4xb4-80k_Potsdam-512x512.py --work-dir result/GS-AUFPN-Potsdam
# train GS-AUFPN in Urban
python tools/train/train_GS-AUFPN.py --config configs/gs_aufpn_r101-d8_4xb4-80k_Urban-512x512.py --work-dir result/GS-AUFPN-LoveDA_Urban
# test GS-AUFPN in Potsdam
python tools/test/test_GS-AUFPN.py --config configs/gs_aufpn_r101-d8_4xb4-80k_Potsdam-512x512.py --checkpoint result/GS-AUFPN-Potsdam/iter_80000_potsdam.pth --work-dir result/GS-AUFPN-Potsdam/result --out result/GS-AUFPN-Potsdam/result/dataset_pre
# test GS-AUFPN in Urban
python tools/test/test_GS-AUFPN.py --config configs/gs_aufpn_r101-d8_4xb4-80k_Urban-512x512.py --checkpoint result/GS-AUFPN-LoveDA_Urban/iter_80000_urban.pth --work-dir result/GS-AUFPN-LoveDA_Urban/result --out result/GS-AUFPN-LoveDA_Urban/result/dataset_pre
# checkpoint format conversion
python tools/checkpoint_process.py --dataset_name 'ISPRS'
python tools/checkpoint_process.py --dataset_name 'LoveDA'
# train GSMF-RS-DIL in ISPRS
python tools/train/train_GSMF-RS-DIL.py --config configs/gsmf_rs_dil_r101-d8_4xb4-80k_ISPRS-512x512.py --work-dir result/GSMF-RS-DIL-ISPRS
# train GSMF-RS-DIL in LoveDA
python tools/train/train_GSMF-RS-DIL.py --config configs/gsmf_rs_dil_r101-d8_4xb4-80k_LoveDA-512x512.py --work-dir result/GSMF-RS-DIL-LoveDA
# test GSMF-RS-DIL in ISPRS
python tools/test/test_GSMF-RS-DIL.py --config configs/gsmf_rs_dil_r101-d8_4xb4-80k_ISPRS-512x512.py --checkpoint result/GSMF-RS-DIL-ISPRS/iter_10000_isprs.pth --work-dir result/GSMF-RS-DIL-ISPRS/result --out result/GSMF-RS-DIL-ISPRS/result/dataset_pre
# test GSMF-RS-DIL in LoveDA
python tools/test/test_GSMF-RS-DIL.py --config configs/gsmf_rs_dil_r101-d8_4xb4-80k_LoveDA-512x512.py --checkpoint result/GSMF-RS-DIL-LoveDA/iter_10000_loveda.pth --work-dir result/GSMF-RS-DIL-LoveDA/result --out result/GSMF-RS-DIL-LoveDA/result/dataset_pre
- Train logs and model downloads
Model | Domain A | Domain B | Device | Iterations | mIoU of Domain A | mIoU of Domain B | Log | Checkpoint |
---|---|---|---|---|---|---|---|---|
GS-AUFPN | Potsdam_IRRG_tif_512 | —— | RTX4090 | 80000 | 74.87 | —— | log | download |
GS-AUFPN | Vaihingen_IRRG_tif_512 | —— | RTX4090 | 80000 | 69.77 | —— | log | download |
GS-AUFPN | LoveDA_Urban_512 | —— | RTX4090 | 80000 | 44.41 | —— | log | download |
GS-AUFPN | LoveDA_Rural_512 | —— | RTX4090 | 80000 | 35.89 | —— | log | download |
GSMF-RS-DIL | Potsdam_IRRG_tif_512 | Vaihingen_IRRG_tif_512 | RTX4090 | 80000 | 61.91 | 65.81 | log | download |
GSMF-RS-DIL | LoveDA_Urban_512 | LoveDA_Rural_512 | RTX4090 | 80000 | 51.02 | 36.21 | log | download |
- Results of single domain
Table 1. Quantitative comparison results with State-of-the-art network.
Method | Potsdam OA (%) | Potsdam mF1 (%) | Potsdam mIoU (%) | Urban OA (%) | Urban mF1 (%) | Urban mIoU (%) |
---|---|---|---|---|---|---|
DeepLabV3+ | 88.65 | 83.52 | 73.94 | 56.82 | 58.63 | 43.76 |
PSANet | 88.62 | 83.92 | 74.25 | 57.60 | 59.16 | 44.27 |
PSPNet | 88.69 | 83.77 | 74.16 | 56.91 | 58.29 | 43.38 |
PointRend | 88.21 | 82.12 | 72.51 | 54.43 | 56.35 | 42.55 |
DANet | 88.44 | 83.01 | 73.38 | 56.21 | 58.43 | 43.62 |
CCNet | 88.56 | 83.38 | 73.77 | 55.95 | 58.20 | 43.91 |
GS-AUFPN (ours) | 88.72 | 84.56 | 74.87 | 57.43 | 59.71 | 44.41 |
- Results of cross domain
Table 2. Quantitative comparison results with other cross domain training methods.
Method | Potsdam mIoU (%) | Potsdam ΔbmIoU (%) | Vaihingen mIoU (%) | Vaihingen ΔbmIoU (%) | ΔmIoU (%) | Urban mIoU (%) | Urban ΔbmIoU (%) | Rural mIoU (%) | Rural ΔbmIoU (%) | ΔmIoU (%) |
---|---|---|---|---|---|---|---|---|---|---|
Single task (A-A) | 74.87 | —— | 69.77 | —— | —— | 44.41 | —— | 35.89 | —— | —— |
Single task (A-B) | —— | —— | 28.09 | -59.74 | —— | —— | —— | 28.52 | -20.54 | —— |
Multi task | 74.45 | -0.56 | 66.84 | -4.20 | -2.38 | 53.91 | +21.39 | 35.71 | -0.50 | +10.45 |
Fine-tune | 54.82 | -26.78 | 67.21 | -3.67 | -15.22 | 49.92 | +12.41 | 35.26 | -1.76 | +5.33 |
LwF | 62.53 | -16.48 | 63.80 | -8.55 | -12.52 | 49.92 | +12.41 | 34.82 | -2.98 | +4.71 |
GSMF-RS-DIL (ours) | 61.91 | -17.31 | 65.81 | -5.67 | -11.49 | 51.02 | +14.84 | 36.21 | +0.89 | +7.89 |
Please cite the paper if this code is useful for your research:
@article{huang2024gsmfrsdil,
title = {Domain-Incremental Learning for Remote Sensing Semantic Segmentation with Multifeature Constraints in Graph Space},
author = {Huang, Wubiao and Ding, Mingtao and Deng, Fei},
journal = {IEEE Transactions on Geoscience and Remote Sensing},
volume = {62},
pages = {1-15},
year = {2024},
DOI = {10.1109/TGRS.2024.3481875}
}
This implementation is based on MMSegmentation. Thanks for the awesome work.
If you have any questions or suggestions, feel free to contact Wubiao Huang.