This work addresses the unsupervised cross-modal domain adaptation task by generating synthetic MRI/PET data and adopting a teacher-student semi-supervised training framework with cross-set data augmentation (CDA), which was proposed in our work "Labeled-to-unlabeled distribution alignment for partially-supervised multi-organ medical image segmentation". For details, please see the flowchart below:
- CUDA >= 11.3
- python >= 3.7.13
To set up the environment, follow these steps:
conda create -n FLARE python=3.7.13
conda activate FLARE
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113
pip install -r requirements.txt
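After installation, you can optionally verify that the CUDA build of PyTorch sees your GPU (a quick sanity check, not part of the original setup):
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"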
Before starting any training or inference, make sure to modify the base paths in all configuration files:
- Modify the base paths in `configs/xxx/xxx_base.yaml`, where `xxx` includes `preprocess`, `train`, `inference`, etc.:
DATASET:
  BASE_DIR: "your/path/to/datasets"  # Dataset root directory
- All training and inference configuration files inherit from `base.yaml`, so this step is mandatory.
- Please use absolute paths rather than relative paths to avoid potential path-resolution issues.
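If you prefer to update every base configuration at once, a small helper script like the one below can rewrite the dataset root for you. This is only a sketch: it assumes the `configs/*/*_base.yaml` naming shown above and requires PyYAML, and `yaml.safe_dump` discards comments, so editing the files by hand is equally fine.

import glob
import yaml

NEW_BASE = "/absolute/path/to/datasets"  # use an absolute path

for cfg_path in glob.glob("configs/*/*_base.yaml"):
    with open(cfg_path) as f:
        cfg = yaml.safe_load(f)
    if isinstance(cfg, dict) and "DATASET" in cfg:
        cfg["DATASET"]["BASE_DIR"] = NEW_BASE
        with open(cfg_path, "w") as f:
            yaml.safe_dump(cfg, f, sort_keys=False)
        print("updated", cfg_path)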
The training and validation data are provided by FLARE25. In short, there are 2050 CT scans (50 labeled and 2000 pseudo-labeled, using the pseudo labels provided by blackbean), 4817 unlabeled MRI scans, and 1000 unlabeled PET scans for training.
|-- datasets
| |-- CT
| | |-- CT2MR_image
| | |-- CT2MR_label
| | |-- CT2PET_image
| | |-- CT2PET_label
| | |-- CT_image
| | `-- CT_label
| |-- MRI
| | |-- PublicValidation
| | | |-- MRI_imagesVal
| | | `-- MRI_labelsVal
| | `-- Training
| | `-- MRI_image
| |-- PET
| | |-- PublicValidation
| | | |-- PET_imagesVal
| | | `-- PET_labelsVal
| | `-- Training
| | `-- PET_image
| |-- processed_data
| | `-- fine
| | |-- big_segnet
| | `-- combined_data
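Before preprocessing, you can optionally check that the volumes are placed as in the tree above; with the full training set, `CT_image`, `MRI_image`, and `PET_image` should contain 2050, 4817, and 1000 volumes, respectively. The snippet below is an illustrative sketch that assumes `.nii.gz` files and lists only a few of the folders:

import glob
import os

BASE = "your/path/to/datasets"  # same value as DATASET.BASE_DIR
for rel in ["CT/CT_image", "CT/CT_label", "MRI/Training/MRI_image", "PET/Training/PET_image"]:
    n = len(glob.glob(os.path.join(BASE, rel, "*.nii.gz")))
    print(f"{rel}: {n} volumes")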
Please refer to "Style_Translation/README.md" for detailed information.
# Process CT dataset
python ./preprocess/data_preprocess.py --cfg ./configs/preprocess/preprocess_step1_CT.yaml
# Process Fake MRI dataset
python ./preprocess/data_preprocess.py --cfg ./configs/preprocess/preprocess_step1_FakeMRI.yaml
# Process Fake PET dataset
python ./preprocess/data_preprocess.py --cfg ./configs/preprocess/preprocess_step1_FakePET.yaml
# Train the big SegNet on labeled CT + fake MRI
python train.py --cfg ./configs/train/train_big_segnet_ctl_fakemri.yaml
# Train the big SegNet on labeled CT + fake PET
python train.py --cfg ./configs/train/train_big_segnet_ctl_fakepet.yaml
# Process Real unlabeled MRI dataset
python ./preprocess/data_preprocess_ul_bigsegnet.py --cfg ./configs/preprocess/preprocess_step2_MRIul.yaml
# Process Real unlabeled PET dataset
python ./preprocess/data_preprocess_ul_bigsegnet.py --cfg ./configs/preprocess/preprocess_step2_PETul.yaml
# CDA training on labeled CT + real unlabeled MRI
python train_CDA.py --cfg ./configs/train/train_big_segnet_ctl_mriul_CDA.yaml
# CDA training on labeled CT + real unlabeled PET
python train_CDA.py --cfg ./configs/train/train_big_segnet_ctl_petul_CDA.yaml
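For intuition, the CDA stage combines a teacher-student scheme with cross-set mixing between the labeled (CT-derived) set and the real unlabeled MRI/PET set. The sketch below is illustrative only, assuming an EMA teacher and a CutMix-style copy-paste between sets; the function and variable names are hypothetical and do not mirror `train_CDA.py`.

import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, momentum=0.99):
    # Teacher weights track an exponential moving average of the student weights.
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1.0 - momentum)

def cda_step(student, teacher, labeled_img, labeled_lbl, unlabeled_img):
    # 1) The teacher predicts pseudo labels for the unlabeled target-domain batch.
    with torch.no_grad():
        pseudo_lbl = teacher(unlabeled_img).argmax(dim=1)

    # 2) Cross-set data augmentation: copy a cube from the labeled volume (and
    #    its labels) into the unlabeled volume (and its pseudo labels).
    #    Assumes both batches share the same patch size.
    d, h, w = labeled_img.shape[-3:]
    zs, ys, xs = d // 4, h // 4, w // 4
    mixed_img, mixed_lbl = unlabeled_img.clone(), pseudo_lbl.clone()
    mixed_img[..., zs:3 * zs, ys:3 * ys, xs:3 * xs] = labeled_img[..., zs:3 * zs, ys:3 * ys, xs:3 * xs]
    mixed_lbl[..., zs:3 * zs, ys:3 * ys, xs:3 * xs] = labeled_lbl[..., zs:3 * zs, ys:3 * ys, xs:3 * xs]

    # 3) The student is supervised on the labeled batch and on the mixed batch.
    return F.cross_entropy(student(labeled_img), labeled_lbl) + \
           F.cross_entropy(student(mixed_img), mixed_lbl)

In a training loop, one would backpropagate this loss, step the optimizer, and then call `ema_update(teacher, student)` so the teacher slowly follows the student.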
To infer the testing cases, run the following commands:
python inference.py --cfg ./configs/inference/inference_big_segnet_mri.yaml
python inference.py --cfg ./configs/inference/inference_big_segnet_pet.yaml
To compute the evaluation metrics, run:
python eval.py --cfg ./configs/eval/eval_big_segnet_mri.yaml
python eval_pet.py --cfg ./configs/eval/eval_big_segnet_pet.yaml
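For reference, the DSC reported below is the standard per-organ Dice overlap; a minimal computation looks like the sketch below (illustrative only; `eval.py` and `eval_pet.py` may differ and additionally compute NSD).

import numpy as np

def dice_score(pred, gt, label):
    # Dice similarity coefficient for one organ label: 2|P∩G| / (|P| + |G|).
    p, g = (pred == label), (gt == label)
    denom = p.sum() + g.sum()
    return 2.0 * np.logical_and(p, g).sum() / denom if denom > 0 else 1.0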
To run the inference using Docker, use the following commands:
Note: This is the official inference script. When running predictions, please replace `input_dir` and `output_dir` with your own input and output directories. The input MRI or PET images must be in `.nii.gz` format.
docker run --gpus "device=0" \
-m 28G \
--rm \
-v input_dir:/workspace/inputs/ \
-v output_dir:/workspace/outputs/ \
omnigraft:latest /bin/bash -c "sh predict.sh MRI"
docker run --gpus "device=0" \
-m 28G \
--rm \
-v input_dir:/workspace/inputs/ \
-v output_dir:/workspace/outputs/ \
omnigraft:latest /bin/bash -c "sh predict.sh PET"
Docker container download link: OneDrive
Our method achieves the following performance on FLARE25:
MRI Data
| Dataset Name | DSC (%) | NSD (%) |
| --- | --- | --- |
| Validation Dataset | 75.92 | 82.02 |
| Test Dataset | (?) | (?) |
PET Data
| Dataset Name | DSC (%) | NSD (%) |
| --- | --- | --- |
| Validation Dataset | 77.30 | 61.23 |
| Test Dataset | (?) | (?) |
We thank the contributors of FLARE24-task3.