DeepPI-EM: Deep learning-driven automated mitochondrial segmentation for analysis of complex transmission electron microscopy images
Authors: Chan Jang, Hojun Lee, Jaejun Yoo, Haejin Yoon
Mitochondria are central to cellular energy production and regulation, with their morphology tightly linked to functional performance. Precise analysis of mitochondrial ultrastructure is crucial for understanding cellular bioenergetics and pathology. While transmission electron microscopy (TEM) remains the gold standard for such analyses, traditional manual segmentation methods are time-consuming and prone to error. In this study, we introduce a novel deep learning framework that combines probabilistic interactive segmentation with automated quantification of mitochondrial morphology. Leveraging uncertainty analysis and real-time user feedback, the model achieves segmentation accuracy comparable to manual annotation while reducing analysis time by 90%. Evaluated on both the benchmark Lucchi++ dataset and real-world TEM images of mouse skeletal muscle, the pipeline not only improved efficiency but also identified key pathological differences in mitochondrial morphology between wild-type and mdx mouse models of Duchenne muscular dystrophy. This automated approach offers a powerful, scalable tool for mitochondrial analysis, enabling high-throughput and reproducible insights into cellular function and disease mechanisms.
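To give a concrete sense of the uncertainty analysis that drives the interactive step, the following is a minimal, self-contained sketch (not the repository's actual implementation): per-pixel entropy of the predicted foreground probability highlights the regions where user feedback is most valuable.

```python
import torch

def prediction_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Per-pixel binary entropy of the predicted foreground probability.

    High-entropy (uncertain) regions are natural candidates for user
    clicks in an interactive-refinement loop.
    """
    p = torch.sigmoid(logits).clamp(1e-6, 1 - 1e-6)
    return -(p * torch.log(p) + (1 - p) * torch.log(1 - p))

# Flag the 1% most uncertain pixels for user review
logits = torch.randn(1, 1, 512, 512)                # stand-in for a network output
entropy = prediction_entropy(logits)
threshold = torch.quantile(entropy.flatten(), 0.99)
uncertain_mask = entropy >= threshold
```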
- `pi_seg/`: Contains core modules for image segmentation and processing.
- `config.py`: Configuration settings for model training and evaluation.
- `dataset.py`: Scripts for dataset loading and preprocessing.
- `main.py`: Entry point for training and evaluating the model.
- `model.py`: Definitions of the neural network architectures used.
- `test.py`: Scripts for testing the trained models.
- `train_validate.py`: Procedures for training and validating the models.
- `gui/`: Contains interactive segmentation logic and user interface components.
- `run_gui.py`: Launches the graphical user interface for interactive image annotation and analysis.
To set up the environment, follow these steps:
```bash
git clone https://github.com/LAIT-CVLab/DeepPI-EM.git
cd DeepPI-EM
python -m venv env
source env/bin/activate  # On Windows: env\Scripts\activate
```
If you are running on CPU or already have a compatible CUDA setup, install the default requirements:
```bash
pip install -r requirements.txt
```
If you are using CUDA 11.3 with PyTorch 1.11.0, install the CUDA-compatible version instead:
```bash
pip install -r requirements-cu113.txt
```
`requirements-cu113.txt` includes dependencies built against CUDA 11.3, such as `torch==1.11.0+cu113` and `mmcv-full`, which are not available on the default PyPI index.
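After installing either requirements file, a quick check such as the following confirms that PyTorch reports the expected build and can see the GPU:

```python
# Quick sanity check that the installed PyTorch build matches your hardware
import torch

print("PyTorch version:", torch.__version__)        # e.g. 1.11.0+cu113
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```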
All training and testing parameters can be configured in `config.py`. Pre-trained models required for training and evaluation can be downloaded from the links provided below.
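As a rough illustration of what gets configured (the field names below are purely illustrative; the actual options in `config.py` may differ), typical parameters adjusted before training include:

```python
# Illustrative example only -- consult config.py for the real option names.
DATASET_NAME    = "Lucchi++"                    # dataset object to load
BATCH_SIZE      = 8
LEARNING_RATE   = 1e-4
NUM_EPOCHS      = 100
PRETRAINED_CKPT = "./weights/pretrained.pth"    # downloaded pre-trained model
OUTPUT_DIR      = "./results"
```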
If you want to train the model on a new dataset, define it by following the template in the `pi_seg/data/datasets` folder, then add the dataset object to `pi_seg/data/datasets/__init__.py` before proceeding.
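As an illustration only (follow the actual templates in `pi_seg/data/datasets`; the real base class and constructor arguments may differ), a new dataset definition might look roughly like this before being registered in `__init__.py`:

```python
# pi_seg/data/datasets/my_tem_dataset.py -- hedged sketch, not the repo's template.
from pathlib import Path

import numpy as np
from PIL import Image
from torch.utils.data import Dataset

class MyTEMDataset(Dataset):
    """Loads TEM images and binary mitochondria masks from parallel folders."""

    def __init__(self, root):
        root = Path(root)
        self.images = sorted((root / "images").glob("*.png"))
        self.masks = sorted((root / "masks").glob("*.png"))

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        image = np.array(Image.open(self.images[idx]).convert("L"))
        mask = np.array(Image.open(self.masks[idx])) > 0
        return image, mask.astype(np.float32)

# In pi_seg/data/datasets/__init__.py:
# from .my_tem_dataset import MyTEMDataset
```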
```bash
# Train the model
python main.py

# Test the trained model
python test.py
```
We provide a simple GUI for evaluating the trained model and enabling practical interaction. A demo using the Lucchi++ dataset is available here.
Users can specify any trained model within `run_gui.py` and launch the interface as follows:
```bash
python run_gui.py
```
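The snippet below only illustrates the idea of pointing the GUI at a trained checkpoint; the actual variable names and loading code live in `run_gui.py` and may differ (`build_model` is a hypothetical helper):

```python
# Illustrative sketch; see run_gui.py for the actual model-loading code.
import torch
from pi_seg.model import build_model   # hypothetical factory function

CHECKPOINT = "./checkpoints/deeppi_lucchi.pth"   # path to your trained weights

model = build_model()
model.load_state_dict(torch.load(CHECKPOINT, map_location="cpu"))
model.eval()
```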
This project uses publicly available electron microscopy datasets:
- Lucchi++ Dataset
- Kasthuri Dataset
- MitoEM-H
- Skeletal Muscle TEM Dataset: A custom dataset developed by our team for mitochondria segmentation in skeletal muscle transmission electron microscopy (TEM) images.
📌 Note: These datasets are publicly accessible and can be used for research purposes.