
The official repository of GLIMS, for 3D volumetric segmentation. Optimized on BraTS 2023 and BTCV datasets.


GLIMS: Attention-guided lightweight multi-scale hybrid network for volumetric semantic segmentation

This repository contains the code of GLIMS.

GLIMS ranked in the top 5 among 65 unique submissions during the validation phase of the Adult Glioblastoma Segmentation challenge of BraTS 2023.

Installation

Clone the repository

git clone https://github.com/yaziciz/GLIMS.git
cd GLIMS

Install the required dependencies

Using Conda, create a virtual environment and install the project's dependencies:

conda env create -f environment.yml

Usage Instructions

Running the Main Script

The GLIMS model can be trained on the BraTS 2023 dataset with the following script:

python main.py --output_dir <output_directory> --data_dir <data_directory> --json_list <json_list_file> --fold <fold_id>
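The --json_list argument points to a file describing the dataset split. As a minimal sketch, the snippet below builds a datalist in the style common to MONAI-based BraTS pipelines (a "training" list whose entries carry image/label paths and a "fold" index); the exact schema and file paths GLIMS expects are assumptions, not confirmed by this README.

```python
import json

# Hypothetical datalist: paths and schema are illustrative only.
datalist = {
    "training": [
        {"image": "case_0001/img.nii.gz", "label": "case_0001/seg.nii.gz", "fold": 0},
        {"image": "case_0002/img.nii.gz", "label": "case_0002/seg.nii.gz", "fold": 1},
    ]
}

with open("brats_folds.json", "w") as f:
    json.dump(datalist, f, indent=2)

# With --fold <fold_id>, entries whose "fold" matches fold_id are
# typically held out for validation and the rest used for training.
fold_id = 0
train = [e for e in datalist["training"] if e["fold"] != fold_id]
val = [e for e in datalist["training"] if e["fold"] == fold_id]
```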

Validation

Using a pre-trained model, validation can be performed as follows:

python post_validation.py --output_dir <output_directory> --data_dir <data_directory> --json_list <json_list_file> --fold <fold_number> --pretrained_dir <pretrained_model_directory>

Testing with Model Ensembles

The model weights can be accessed here: Google Drive Folder

To test GLIMS with the ensemble method on the unannotated BraTS 2023 dataset, use the following script:

python test_BraTS.py --data_dir <validation_data_directory> --model_ensemble_1 <model_1_path> --model_ensemble_2 <model_2_path> --output_dir <output_directory>

The --model_ensemble_1 and --model_ensemble_2 arguments correspond to the fold 2 and fold 4 models, respectively, as described in our challenge submission paper on arXiv.
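The ensembling step itself can be sketched as averaging the per-voxel class probabilities produced by the two fold models and taking the argmax. This is the usual way two segmentation models are combined; whether test_BraTS.py averages probabilities exactly this way is an assumption, and the function below is illustrative only.

```python
import numpy as np

def ensemble_predict(prob_1: np.ndarray, prob_2: np.ndarray) -> np.ndarray:
    """Average per-voxel class probabilities from two models, then
    take the argmax to obtain the final label map.

    prob_1, prob_2: arrays of shape (num_classes, D, H, W).
    """
    avg = (prob_1 + prob_2) / 2.0
    return np.argmax(avg, axis=0)

# Toy example: 2 classes on a 1x1x2 volume.
p1 = np.array([[[[0.9, 0.2]]], [[[0.1, 0.8]]]])  # e.g. the fold 2 model's output
p2 = np.array([[[[0.7, 0.6]]], [[[0.3, 0.4]]]])  # e.g. the fold 4 model's output
labels = ensemble_predict(p1, p2)  # shape (1, 1, 2)
```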

Citations

GLIMS: Attention-guided lightweight multi-scale hybrid network for volumetric semantic segmentation
Image and Vision Computing, May 2024
Journal Paper, arXiv

@article{yazici2024glims,
  title={GLIMS: Attention-guided lightweight multi-scale hybrid network for volumetric semantic segmentation},
  author={Yaz{\i}c{\i}, Ziya Ata and {\"O}ks{\"u}z, {\.I}lkay and Ekenel, Haz{\i}m Kemal},
  journal={Image and Vision Computing},
  pages={105055},
  year={2024},
  publisher={Elsevier},
  doi={10.1016/j.imavis.2024.105055}
}

Attention-Enhanced Hybrid Feature Aggregation Network for 3D Brain Tumor Segmentation
Accepted to the 9th Brain Lesion (BrainLes) Workshop @ MICCAI 2023
Challenge Proceedings Paper, arXiv

@incollection{yazici2023attention,
  title={Attention-Enhanced Hybrid Feature Aggregation Network for 3D Brain Tumor Segmentation},
  author={Yaz{\i}c{\i}, Ziya Ata and {\"O}ks{\"u}z, {\.I}lkay and Ekenel, Haz{\i}m Kemal},
  booktitle={International Challenge on Cross-Modality Domain Adaptation for Medical Image Segmentation},
  pages={94--105},
  year={2023},
  publisher={Springer},
  doi={10.1007/978-3-031-76163-8_9}
}

Thank you for your interest in our work!

We are also deeply grateful to the MONAI Consortium for their MONAI framework, which was instrumental in the development of GLIMS.
