
Towards Contrast-agnostic Soft Segmentation of the Spinal Cord


Official repository for contrast-agnostic segmentation of the spinal cord.

This repo contains all the code for training the contrast-agnostic model; training is based on the nnUNetv2 framework. The segmentation model is available as part of the Spinal Cord Toolbox (SCT) via the sct_deepseg functionality.
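
For example, once SCT is installed, running the model could look like the sketch below (the input file name is a placeholder, and the exact task name may differ between SCT versions; check sct_deepseg -h):

# segment the spinal cord on any contrast with the contrast-agnostic model
sct_deepseg -i sub-01_T2w.nii.gz -task seg_sc_contrast_agnostic -o sub-01_T2w_seg.nii.gz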

Citation Information

If you find this work and/or code useful for your research, please cite our paper:

@article{BEDARD2025103473,
title = {Towards contrast-agnostic soft segmentation of the spinal cord},
journal = {Medical Image Analysis},
volume = {101},
pages = {103473},
year = {2025},
issn = {1361-8415},
doi = {10.1016/j.media.2025.103473},
url = {https://www.sciencedirect.com/science/article/pii/S1361841525000210},
author = {Sandrine Bédard* and Enamundram Naga Karthik* and Charidimos Tsagkas and Emanuele Pravatà and Cristina Granziera and Andrew Smith and Kenneth Arnold {Weber II} and Julien Cohen-Adad},
note = {Shared authorship -- authors contributed equally}
}

TODO: add lifelong learning figure

Table of contents

  • Training the model
  • Lifelong learning for monitoring morphometric drift

Training the model

Step 1: Configuring the environment

  1. Create a conda environment with the following command:
conda create -n contrast_agnostic python=3.9
  2. Activate the environment with the following command:
conda activate contrast_agnostic
  3. Clone the repository with the following command:
git clone https://github.com/sct-pipeline/contrast-agnostic-softseg-spinalcord.git
  4. Install the required packages with the following command:
cd contrast-agnostic-softseg-spinalcord/nnUnet
pip install -r requirements.txt

Note: requirements.txt does NOT install nnUNet. nnUNet has to be installed separately, which can be done within the conda environment created above; see the official nnUNet documentation for installation instructions. The nnUNet version used in this work is tag v2.5.1.
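
As a rough sketch (refer to the official nnUNet documentation for the authoritative steps), installing nnUNet at tag v2.5.1 inside the environment created above could look like:

# inside the contrast_agnostic conda environment
git clone https://github.com/MIC-DKFZ/nnUNet.git
cd nnUNet
git checkout v2.5.1
pip install -e .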

Step 2: Train the model

The script scripts/train_contrast_agnostic.sh downloads the datasets via git-annex, creates datalists, converts them into the nnUNet-specific format, and trains the model. More instructions about which variables to set and which datasets to use can be found in the script itself. Once these variables are set, the script can be run as follows:

bash scripts/train_contrast_agnostic.sh
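
nnUNet also relies on its standard environment variables (nnUNet_raw, nnUNet_preprocessed, nnUNet_results) to locate the raw data, preprocessed data, and results folders. If these are not already set inside the script, a minimal sketch (paths are placeholders) is:

# placeholder paths; adapt to your setup
export nnUNet_raw=/path/to/nnUNet_raw
export nnUNet_preprocessed=/path/to/nnUNet_preprocessed
export nnUNet_results=/path/to/nnUNet_results
bash scripts/train_contrast_agnostic.sh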

Lifelong learning for monitoring morphometric drift

This section provides some notes on the lifelong/continual learning framework for automatically monitoring morphometric drift across versions of the segmentation model. Once a new segmentation model is developed and released, a GitHub Actions (GHA) workflow is triggered that automatically computes the spinal cord cross-sectional area (CSA) with the current (new) version of the model and compares it against previously released versions.

For a fair comparison, we evaluate the various model versions on the frozen test set of the spine-generic data-multi-subject (public) dataset. The test split can be found in the scripts/spine_generic_test_split_for_csa_drift_monitoring.yaml file.
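
As a sketch (the subject ID is a placeholder; see the dataset's own README for the authoritative download steps), fetching individual test subjects from the public dataset could look like:

git clone https://github.com/spine-generic/data-multi-subject.git
cd data-multi-subject
# placeholder subject ID; repeat for each subject listed in the test-split YAML
git annex get sub-amu01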

Step 1: Creating a new release

Here are the steps involved in the workflow:

  • After training a new segmentation model, create a release with the following naming convention:
    • Tag name: vX.Y (e.g. v2.0, v3.0, etc.), where X is the major update (i.e. architectural/training-strategy change) and Y is the minor update (addition of new contrasts and/or pathologies).
    • Release title: contrast-agnostic-spinal-cord-segmentation vX.Y (note that the title can be anything; the GHA workflow does not depend on it).
    • Release description: A drop-down summary of the dataset characteristics. The details of the datasets used during training are automatically generated by the nnUnet/utils.py script.
    • Release assets: The model weights and the training logs (if needed) are attached to the release. The entire output folder of the nnUNet model, containing all folds, should be uploaded. The naming convention for the .zip file is model_contrast_agnostic_<date-the-model-was-trained-on>.zip.
    • Once the above steps are completed, publish the release (for example with the GitHub CLI, as sketched below).
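
As an example using the GitHub CLI (the tag, title, notes file, and asset name below are placeholders following the conventions above):

# placeholders: adapt the tag, date, and notes to the actual release
gh release create v3.1 \
    --title "contrast-agnostic-spinal-cord-segmentation v3.1" \
    --notes-file release_notes.md \
    model_contrast_agnostic_20250101.zip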

Step 2: The GHA workflow

  • Once published, the release triggers a GHA workflow. The workflow is a .yml file located in the .github/workflows folder. At a high level, it is divided into the following jobs:
    • Job 1: Clones the dataset via git-annex and downloads only the subjects in the test split. The dataset is cached for future use.
    • Job 2: The test set (n=49) is split into batches of 3 subjects for parallel processing. The model is downloaded from the release, and each job (i.e., a runner) computes the C2-C3 CSA for all 6 contrasts (see the sketch after this list).
    • Job 3: The output .csv files are aggregated across batches and merged into a single CSV file. The file is saved with the naming convention csa_c2c3__model_<tag-name>.csv (note that the tag name defined in Step 1 is used here) and uploaded to the release.
    • Job 4: All csa_c2c3__model_<tag-name>.csv files corresponding to the current and previous releases are downloaded. Then, violin plots comparing the CSA per contrast (for each model) and the STD of CSA across contrasts are generated. The plots are saved in the morphometric_plots.zip archive and uploaded to the existing release.
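
The per-subject computation in Job 2 essentially amounts to segmenting each contrast with the released model and extracting the C2-C3 CSA with SCT. A rough sketch is given below; file names are placeholders, the vertebral labeling file is assumed to exist (e.g., from sct_label_vertebrae), and the actual steps live in the workflow .yml file:

# rough sketch for one subject; file names are placeholders
for img in sub-XX_T1w.nii.gz sub-XX_T2w.nii.gz; do   # ...and the remaining contrasts
    seg=${img%.nii.gz}_seg.nii.gz
    sct_deepseg -i ${img} -task seg_sc_contrast_agnostic -o ${seg}
    # -vertfile points to a vertebral labeling of the same image
    sct_process_segmentation -i ${seg} -vertfile ${img%.nii.gz}_seg_labeled.nii.gz \
        -vert 2:3 -o csa_c2c3.csv -append 1
done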

In summary, once a new model is released, the GitHub Actions workflow automatically generates the plots for monitoring morphometric drift between versions of the segmentation model.
