Code for "Anatomy-aware Sketch-guided Latent Diffusion Model for Orbital Tumor Multi-Parametric MRI Missing Modalities Synthesis"
This repository contains code and example data for training and testing the ASLDM model on the OTTS dataset, enabling synthesis of missing modalities from multi-parametric MRI using anatomical sketch guidance.
Please download the two pre-trained model weight files from the following Google Drive links:
After downloading, place these weight files into the `checkpoints/` directory of the project. For example:
```
ASLDM/
├── checkpoints/
│   ├── autoencoder.pt
│   └── diffusion_unet.pt
```
⚠️ Note: Ensure the weight file names match those specified in the code to avoid loading errors.
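As a pre-flight check, the checkpoint layout can be verified before launching training or inference. The helper below is a minimal sketch (not part of the repository); the file names `autoencoder.pt` and `diffusion_unet.pt` are taken from the directory tree above.

```python
from pathlib import Path

# Weight file names as placed under checkpoints/ (see the tree above).
REQUIRED_WEIGHTS = ("autoencoder.pt", "diffusion_unet.pt")

def find_checkpoints(ckpt_dir="checkpoints"):
    """Return paths to the required weight files, raising early with a
    clear message if any of them is missing or misnamed."""
    ckpt_dir = Path(ckpt_dir)
    paths = {name: ckpt_dir / name for name in REQUIRED_WEIGHTS}
    missing = [name for name, p in paths.items() if not p.is_file()]
    if missing:
        raise FileNotFoundError(
            f"Missing weight file(s) in {ckpt_dir}/: {', '.join(missing)}"
        )
    return paths
```

Running this once before `test.py` turns a cryptic loading error into an explicit message about which file is absent.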
Below are examples of each modality used in our model:
| Modality | Role |
| --- | --- |
| T1WI | Anatomical structure |
| T2WI | Edema/fluid sensitivity |
| T1CE | Contrast-enhanced lesion |
| DWI | Diffusion signal |
| ADC | Quantitative diffusion |
| Seg | Tumor mask |
| Sketch | Structural prior |
We recommend using Python 3.9+ and creating a virtual environment:

```bash
conda create -n asldm python=3.12
conda activate asldm
pip install -r requirements.txt
```
Additional dependencies might include:

```
torch >= 2.0
torchvision
monai
scikit-image
pillow
numpy
tqdm
natsort
```
Place the data in the `data/test/` directory. File names should follow:

```
<data_root>/<modality>/<sample_id>-<modality>-slice_<slice_id>.png
```
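For reference, file names following this convention can be parsed with a small helper. This is an illustrative sketch only; the function name and regex are ours, not part of the codebase, and they assume slice ids are numeric.

```python
import re

# Matches <sample_id>-<modality>-slice_<slice_id>.png
FILENAME_RE = re.compile(
    r"^(?P<sample_id>.+)-(?P<modality>[^-]+)-slice_(?P<slice_id>\d+)\.png$"
)

def parse_filename(name):
    """Split a test-set file name into (sample_id, modality, slice_id).
    Returns None when the name does not follow the convention."""
    m = FILENAME_RE.match(name)
    if m is None:
        return None
    return m.group("sample_id"), m.group("modality"), int(m.group("slice_id"))
```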
Run inference with a sketch input and available modalities:

```bash
python test.py
```
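Before running inference, it can be useful to check which modalities are actually present for each sample under `data/test/`. The snippet below is a hypothetical pre-flight check we sketch here, not part of the repository; the modality folder names are taken from the project layout.

```python
from pathlib import Path

# Modality subfolders of data/test/ (see the project layout).
MODALITIES = ("t1n", "t2w", "t1c", "dwi", "adc", "seg", "sketch")

def modality_inventory(test_root="data/test"):
    """Map each sample id to the set of modality folders that contain at
    least one slice for it, so missing inputs surface before inference."""
    root = Path(test_root)
    inventory = {}
    for modality in MODALITIES:
        for png in (root / modality).glob("*.png"):
            # sample id is everything before the "-<modality>-" marker
            sample_id = png.name.split(f"-{modality}-", 1)[0]
            inventory.setdefault(sample_id, set()).add(modality)
    return inventory
```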
```
ASLDM/
├── data/
│   └── test/
│       ├── t1n/
│       ├── t2w/
│       ├── t1c/
│       ├── dwi/
│       ├── adc/
│       ├── seg/
│       └── sketch/
├── models/
├── test.py
├── train.py
└── README.md
```