LHU-Net: A Lean Hybrid U-Net for Cost-Efficient High-Performance Volumetric Medical Image Segmentation


This repository contains the official implementation of LHU-Net. Our paper, "LHU-Net: A Lean Hybrid U-Net for Cost-Efficient High-Performance Volumetric Medical Image Segmentation," addresses the growing complexity in medical image segmentation models, focusing on balancing computational efficiency with segmentation accuracy.

Yousef Sadegheih, Afshin Bozorgpour, Pratibha Kumari, Reza Azad, and Dorit Merhof


📑 Table of Contents

  1. Abstract
  2. Updates
  3. Key Contributions
  4. Model Architecture
  5. Datasets, Pre-trained Weights, and Visualizations
  6. Results
  7. Getting Started
  8. Acknowledgments
  9. Citation
  10. Touchstone Benchmark

πŸ“ Abstract

The rise of Transformer architectures has advanced medical image segmentation, leading to hybrid models that combine Convolutional Neural Networks (CNNs) and Transformers. However, these models often suffer from excessive complexity and fail to effectively integrate spatial and channel features, crucial for precise segmentation. To address this, we propose LHU-Net, a Lean Hybrid U-Net for volumetric medical image segmentation. LHU-Net prioritizes spatial feature extraction before refining channel features, optimizing both efficiency and accuracy. Evaluated on four benchmark datasets (Synapse, Left Atrial, BraTS-Decathlon, and Lung-Decathlon), LHU-Net consistently outperforms existing models across diverse modalities (CT/MRI) and output configurations. It achieves state-of-the-art Dice scores while using four times fewer parameters and 20% fewer FLOPs than competing models, without the need for pre-training, additional data, or model ensembles. With an average of 11 million parameters, LHU-Net sets a new benchmark for computational efficiency and segmentation accuracy.


🔔 Updates

  • 👊 Complete rewrite of the source code for full compatibility with the nnUNetV2 framework – July 29, 2025
  • 🥳 Paper accepted at MICCAI 2025 – June 17, 2025
  • 🔥 Participation in the Touchstone Benchmark – July 16, 2024
  • 😎 First release – April 5, 2024

⚡ Key Contributions

  • Efficient Hybrid Attention Selection: Introduces a strategic deployment of specialized attention mechanisms within Transformers, enabling nuanced feature extraction tailored to the demands of medical image segmentation.
  • Benchmark Setting Efficiency: Achieves high-performance segmentation with significantly reduced computational resources, demonstrating an optimal balance between model complexity and computational efficiency.
  • Versatile Superiority: Showcases unparalleled versatility and state-of-the-art performance across multiple datasets, highlighting its robustness and potential as a universal solution for medical image segmentation.

Figure: Performance comparison on BraTS.


βš™οΈ Model Architecture

LHU-Net leverages a hierarchical U-Net encoder-decoder structure optimized for 3D medical image segmentation. The architecture integrates convolutional-based blocks with hybrid attention mechanisms, capturing both local features and non-local dependencies effectively.

Figure: LHU-Net architecture overview.

For a detailed explanation of each component, please refer to our paper.
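The "spatial features first, channel features second" ordering described in the abstract can be illustrated with a toy gating sketch on a (C, D, H, W) feature map. This is purely illustrative NumPy, not the paper's actual attention blocks, which are learned convolutional/Transformer operators:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_then_channel(x):
    """Illustrate the 'spatial first, then channel' refinement order on a
    feature map of shape (C, D, H, W). Toy gating, not the paper's blocks."""
    # Spatial gate: one weight per voxel, derived from the channel-mean map.
    spatial_gate = sigmoid(x.mean(axis=0, keepdims=True))          # (1, D, H, W)
    x = x * spatial_gate
    # Channel gate: one weight per channel, via global average pooling.
    channel_gate = sigmoid(x.mean(axis=(1, 2, 3), keepdims=True))  # (C, 1, 1, 1)
    return x * channel_gate
```

The point of the ordering is that voxel-level (spatial) saliency is resolved before per-channel recalibration, so the channel statistics are computed on already spatially weighted features.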


πŸ—„οΈ Datasets, Pre-trained weights, and Visulizations

Our experiments were conducted on four benchmark datasets. The pre-processed datasets and pre-trained weights, including the predicted outputs on the test splits, can be downloaded from the table below.

Dataset | Visualization | Pre-Trained Weights | Pre-Processed Dataset
BraTS-Decathlon | [Download Visualization] | [Download Weights] | [Download Dataset]
Left Atrial (LA) | [Download Visualization] | [Download Weights (fold 0)] | [Download Dataset]
Lung-Decathlon | [Download Visualization] | [Download Weights] | [Download Dataset]
Synapse | [Download Visualization] | [Download Weights] | [Download Dataset]

Notes:

  • The dataset splits used in our experiments can be found in the splits_final.json file within each pre-processed dataset folder.
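To inspect those fold definitions, a small helper like the sketch below can be used. It assumes the standard nnU-Net splits format (a JSON list with one entry per fold, each holding "train" and "val" case-identifier lists):

```python
import json

def load_splits(path="splits_final.json"):
    """Read nnU-Net fold definitions and summarize them as
    (fold_index, n_train_cases, n_val_cases) tuples."""
    with open(path) as f:
        splits = json.load(f)
    return [(fold, len(s["train"]), len(s["val"])) for fold, s in enumerate(splits)]
```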

📈 Results

LHU-Net demonstrated exceptional performance across all four benchmark datasets, outperforming existing state-of-the-art models in both efficiency and accuracy.

Table: Results on Synapse.

Table: Results on ACDC.

🚀 Getting Started

This section explains how to set up and run LHU-Net for your own segmentation tasks. LHU-Net is built on top of the nnUNetV2 framework.

πŸ› οΈ Requirements

  • Operating System: Ubuntu 22.04 or higher
  • CUDA: Version 12.x
  • Package Manager: Conda
  • Hardware:
    • GPU with at least 8 GB of memory (recommended)
    • Our experiments used a single NVIDIA A100 (80 GB)

📦 Installation

To install the required packages and set up the environment, simply run the following command:

./env_creation.sh

This will:

  • Create a Conda environment named lhunet
  • Install all the necessary dependencies
  • Automatically move the essential files from the src folder to the nnUNetV2 directory

πŸ‹οΈ Training & Inference

For training and inference, you can use the provided shell scripts located in the script folder. These scripts are pre-configured for easy execution.
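If you prefer to call nnUNetV2 directly rather than editing the provided scripts, a typical invocation looks like the sketch below. The paths, dataset ID, fold, and trainer name (`LHUNetTrainer`) are placeholders for illustration only; the scripts in the script folder remain the authoritative entry point.

```shell
# Point nnUNetV2 at your data locations (placeholder paths).
export nnUNet_raw=/path/to/nnUNet_raw
export nnUNet_preprocessed=/path/to/nnUNet_preprocessed
export nnUNet_results=/path/to/nnUNet_results

# Fingerprint, plan, and preprocess dataset 1 (placeholder ID).
nnUNetv2_plan_and_preprocess -d 1 --verify_dataset_integrity

# Train fold 0 of the 3d_fullres configuration with the LHU-Net trainer
# (trainer name is a placeholder; use the one installed by env_creation.sh).
nnUNetv2_train 1 3d_fullres 0 -tr LHUNetTrainer

# Run inference with the trained fold.
nnUNetv2_predict -i /path/to/images -o /path/to/predictions \
    -d 1 -c 3d_fullres -f 0 -tr LHUNetTrainer
```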

⚠️ Notes

  • Path Configuration: Before running the scripts, make sure to update the paths in the shell script files to reflect your setup.
  • Metrics: There is a metrics folder containing Python scripts that can be used to calculate DSC (Dice Similarity Coefficient) and HD95 (Hausdorff Distance 95%) metrics for each dataset.
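The scripts in the metrics folder are the authoritative implementation; for orientation, a minimal sketch of the two metrics on binary NumPy masks (using SciPy for boundary extraction and pairwise distances) might look like this:

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import cdist

def dice(pred, gt):
    """Dice Similarity Coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

def _surface(mask):
    """Voxel coordinates on the boundary of a binary mask."""
    return np.argwhere(mask & ~binary_erosion(mask))

def hd95(pred, gt):
    """95th-percentile symmetric surface distance (HD95), in voxels."""
    a, b = _surface(pred.astype(bool)), _surface(gt.astype(bool))
    d = cdist(a, b)              # all pairwise surface-to-surface distances
    forward = d.min(axis=1)      # pred surface -> nearest gt surface point
    backward = d.min(axis=0)     # gt surface -> nearest pred surface point
    return np.percentile(np.concatenate([forward, backward]), 95)
```

Note that this sketch measures distances in voxel units; a faithful HD95 should scale coordinates by the image spacing before computing distances.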

🤝 Acknowledgments

This repository builds on nnFormer, nnU-Net, UNETR++, MCF, D3D, and D-LKA. We thank the authors for their code repositories.

📚 Citation

If you find this work useful for your research, please cite:

@inproceedings{sadegheih2024lhunet,
  title={LHU-Net: A Lean Hybrid U-Net for Cost-Efficient High-Performance Volumetric Medical Image Segmentation},
  author={Sadegheih, Yousef and Bozorgpour, Afshin and Kumari, Pratibha and Azad, Reza and Merhof, Dorit},
  booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
  year={2025},
  organization={Springer}
}

For the implementation and weights for the touchstone benchmark, please visit here.
