
This repository contains a modified U-Net architecture tailored for the reconstruction of MRI images. The U-Net is a convolutional neural network known for its effectiveness in image segmentation tasks; here it has been adapted to handle the complex-valued data typical of MRI scans.


MR Image Reconstruction Using Deep Learning


A modular pipeline for 2D slice-wise cardiac MRI segmentation, comparing a custom UNet2D implementation against the self-configuring nnUNet framework on the Medical Segmentation Decathlon (MSD) Task02_Heart dataset.

📋 Table of Contents

  • 🔍 Overview
  • 🎯 Project Rationale
  • ✨ Features
  • 🏗️ Pipeline Architecture
  • 📊 Key Findings
  • 📁 Repository Structure
  • 🚀 Installation
  • 💻 Usage
  • 🖼️ Results & Visualization
  • 🤝 Contributing
  • 📄 License

πŸ” Overview

This repository provides an end-to-end pipeline for cardiac MRI segmentation using deep learning techniques. We compare our custom UNet2D implementation with the state-of-the-art nnUNet framework, analyzing performance differences, computational requirements, and implementation complexity.

🎯 Project Rationale

High-quality segmentation of cardiac MRI volumes is critical for clinical diagnosis and treatment planning. While nnUNet has set a new benchmark by automatically adapting to any medical segmentation task, there is still value in understanding and optimizing a hand-crafted UNet2D architecture under limited compute environments. This repository demonstrates both approaches, highlights their trade-offs, and provides a reusable, end-to-end codebase for future research.

✨ Features

  • Input Handling: Accepts complex-valued MRI images by treating the real and imaginary parts as separate channels.
  • High Capacity Models: With up to 125 million trainable parameters, our implementation is capable of learning intricate patterns for accurate reconstruction.
  • Efficient Gradient Flow: Carefully designed skip connections and up-convolutions ensure efficient backpropagation of gradients.
  • Customizability: Flexible architecture supporting various configuration options for experimentation.
  • Comprehensive Evaluation: Automated quantitative analysis using multiple metrics (Dice, Jaccard, precision, recall).
  • Visual Reporting: Generates PDF reports with visualizations for qualitative assessment.
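The complex-valued input handling above can be sketched as follows. This is an illustrative stand-in, not the repository's actual code; the real pipeline presumably operates on NumPy or PyTorch tensors rather than plain lists.

```python
# Sketch: splitting a 2D complex-valued MRI image into two real-valued
# channels (real, imaginary), as described under "Input Handling".

def complex_to_channels(image):
    """Split a 2D complex-valued image into a (real, imag) channel pair."""
    real = [[pixel.real for pixel in row] for row in image]
    imag = [[pixel.imag for pixel in row] for row in image]
    return [real, imag]  # resulting shape: (2, H, W)

# Tiny 2x2 example k-space-like input
k = [[1 + 2j, 3 - 1j],
     [0 + 0j, -2 + 5j]]
channels = complex_to_channels(k)
```

Feeding the two channels to a standard real-valued convolution stack avoids the need for complex-valued layers.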

πŸ—οΈ Pipeline Architecture

1. Data Ingestion & Preparation

  • Download the Task02_Heart dataset from the MSD website (not included in this repo).
  • Organize as:
    Task02_Heart/
      ├── imagesTr/
      ├── labelsTr/
      └── imagesTs/
  • Configure data_root in parameters.yaml.
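A minimal parameters.yaml might look like the fragment below. The key names other than data_root are illustrative assumptions, not taken from the repository; consult the actual file for the supported options.

```yaml
# Hypothetical parameters.yaml sketch — only data_root is confirmed by
# this README; the remaining keys are illustrative.
data_root: /path/to/Task02_Heart
batch_size: 8
learning_rate: 1.0e-3
num_epochs: 100
```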

2. Preprocessing & Augmentation

  • Axial Slice Extraction from 3D NIfTI volumes.
  • Intensity Normalization (min–max scaling).
  • On-the-fly Augmentations: random flips, rotations, zoom.
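The normalization and flip augmentation steps can be sketched in a few lines. This is a pure-Python illustration under the assumption of per-slice min–max scaling to [0, 1]; the repository presumably implements these on tensors.

```python
import random

# Sketch of the per-slice preprocessing described above: min–max intensity
# scaling to [0, 1], plus an optional random horizontal flip augmentation.

def min_max_normalize(slice_2d):
    """Scale a 2D slice's intensities into [0, 1]."""
    flat = [v for row in slice_2d for v in row]
    lo, hi = min(flat), max(flat)
    scale = (hi - lo) or 1.0  # guard against constant slices
    return [[(v - lo) / scale for v in row] for row in slice_2d]

def random_hflip(slice_2d, p=0.5):
    """Flip the slice left-right with probability p."""
    if random.random() < p:
        return [row[::-1] for row in slice_2d]
    return slice_2d

s = [[0.0, 50.0], [100.0, 25.0]]
norm = min_max_normalize(s)
```

Random rotations and zooms follow the same on-the-fly pattern, applied per slice at batch-assembly time.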

3. Model Definitions

  • Custom UNet2D: Fixed encoder–decoder with skip-connections, mixed-precision training (FP16), Cross-Entropy loss.
  • nnUNet: Self-configuring pipeline that selects 2D/3D architectures, patch sizes, hyperparameters, and advanced augmentations automatically.

4. Training & Validation

  • Training Script (Implementation/training.py):
    • Adam optimizer + ReduceLROnPlateau scheduler
    • WandB logging for real-time metrics
    • Checkpointing best validation Dice
  • Inference Script (Implementation/inference.py):
    • Loads best model
    • Computes Dice, Jaccard, precision, recall on test set
    • Generates PDF visualizations
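The evaluation metrics listed above (Dice, Jaccard, precision, recall) can be computed from the confusion-matrix counts of a binary mask. The sketch below uses flat 0/1 lists for clarity; it is an illustration, not the repository's implementation.

```python
# Sketch of the inference-time metrics on binary masks: Dice, Jaccard
# (IoU), precision, and recall, derived from TP/FP/FN counts.

def binary_metrics(pred, target):
    """Compute (dice, jaccard, precision, recall) for flat 0/1 masks."""
    tp = sum(1 for p, t in zip(pred, target) if p and t)
    fp = sum(1 for p, t in zip(pred, target) if p and not t)
    fn = sum(1 for p, t in zip(pred, target) if not p and t)
    union = tp + fp + fn
    dice = 2 * tp / (2 * tp + fp + fn) if union else 1.0
    jaccard = tp / union if union else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return dice, jaccard, precision, recall

d, j, p, r = binary_metrics([1, 1, 0, 0], [1, 0, 1, 0])
```

Note that Dice and Jaccard are monotonically related (D = 2J / (1 + J)), so they rank models identically; reporting both mainly aids comparison with prior work.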

📊 Key Findings

| Metric       | Custom UNet2D | nnUNet | Δ      |
|--------------|---------------|--------|--------|
| Average Dice | 0.387         | 0.932  | +0.545 |
| Median Dice  | 0.439         | 0.933  | +0.494 |

  • nnUNet delivers substantially higher segmentation accuracy and more precise mask alignment, thanks to its adaptive architecture and advanced data handling.
  • Custom UNet2D remains a viable lightweight solution but requires extensive manual tuning and lacks inter-slice context.

πŸ“ Repository Structure

MR-Image-Reconstruction-Using-Deep-Learning/
├── Data/                          # Data processing utilities
├── Implementation/                # Core implementation files
├── deprecated/                    # Legacy code (maintained for reference)
├── home/raghuram/ARPL/MR-Image-Reconstruction-Using-Deep-Learning/ # Model results
├── interface/                     # User interface components
├── nnUNet/                        # nnUNet integration
├── .gitignore                     # Git ignore file
├── LICENSE                        # Project license
├── LICENSE.md                     # License details
├── README.md                      # This file
├── requirements.txt               # Dependencies
├── resume_robotics.pdf            # Additional documentation
└── task_HSS.pdf                   # Task definition

🚀 Installation

  1. Clone the repository:

    git clone https://github.com/starceees/MR-Image-Reconstruction-Using-Deep-Learning.git
    cd MR-Image-Reconstruction-Using-Deep-Learning
  2. Install dependencies:

    pip install -r requirements.txt
  3. Download and prepare the data:

    • Download the Task02_Heart dataset from Medical Segmentation Decathlon
    • Place it in the expected directory structure
    • Update the configuration in parameters.yaml

💻 Usage

Training the Custom UNet2D Model

python Implementation/training.py

Running Inference and Evaluation

python Implementation/inference.py

Running the nnUNet Benchmark

Follow nnUNet's official instructions to train on Task02_Heart. Place results under nnUNet/nnunet_inference/ for direct comparison.

πŸ–ΌοΈ Results & Visualization

Our implementation automatically generates comprehensive PDF reports with segmentation visualizations and quantitative metrics. Example visualizations are included in the home/raghuram/ARPL/MR-Image-Reconstruction-Using-Deep-Learning/ directory.

Sample Segmentation

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📄 License

This project is licensed under the MIT License. See the LICENSE file for details.

