A plug-and-play deep learning system for enhanced liver lesion classification on non-contrast CT scans using hierarchical cross-attention and graph neural networks.
[License: MPL 2.0](https://www.mozilla.org/en-US/MPL/2.0/)
[MICCAI 2025](https://conferences.miccai.org/2025/en/)
Official PyTorch implementation of "PLUS: Plug-and-Play Enhanced Liver Lesion Diagnosis Model on Non-Contrast CT Scans" (MICCAI 2025 Early Accept)
PLUS (Plug-and-Play Liver Lesion Enhanced Diagnosis) is designed to enhance existing liver lesion detection systems by providing accurate classification on non-contrast CT scans. The system combines:
- Plug-and-Play Architecture: Seamlessly integrates with existing detection workflows
- Non-Contrast CT Optimization: Specifically designed for non-contrast CT imaging
- Hierarchical Cross-Attention: Multi-scale feature extraction and fusion (see the sketch after this list)
- Graph Interaction Module: Class prototype learning with prior-aware attention
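For intuition only, the sketch below shows a generic way to fuse local ROI tokens with global context tokens via cross-attention. It is not the paper's exact hierarchical module; the class name, dimensions, and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Generic cross-attention block: local ROI tokens attend to global context tokens."""

    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, local_tokens, global_tokens):
        # local_tokens: (B, N_local, dim); global_tokens: (B, N_global, dim)
        fused, _ = self.attn(query=local_tokens, key=global_tokens, value=global_tokens)
        return self.norm(local_tokens + fused)  # residual connection + layer norm

# Toy usage: 16 lesion ROI tokens attend to 64 whole-liver context tokens.
roi_tokens = torch.randn(2, 16, 256)
context_tokens = torch.randn(2, 64, 256)
out = CrossAttentionFusion()(roi_tokens, context_tokens)  # shape: (2, 16, 256)
```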
- FP - False Positive
- HCC - Hepatocellular Carcinoma
- ICC - Intrahepatic Cholangiocarcinoma
- Meta - Metastasis
- Heman - Hemangioma
- FNH - Focal Nodular Hyperplasia
- Cyst - Cyst
- OM - Other Malignant
- OB - Other Benign
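These abbreviations can be collected in a simple lookup table. The ordering below follows the list above and is an assumption; it does not necessarily match the class indices used by the model or the annotation files.

```python
# Hypothetical lookup table; the order is an assumption, not the repo's class-index order.
LESION_CLASSES = {
    "HCC": "Hepatocellular Carcinoma",
    "ICC": "Intrahepatic Cholangiocarcinoma",
    "Meta": "Metastasis",
    "Heman": "Hemangioma",
    "FNH": "Focal Nodular Hyperplasia",
    "Cyst": "Cyst",
    "OM": "Other Malignant",
    "OB": "Other Benign",
}
```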
```bash
git clone <repository_url>
cd PLUS-liver-lesion-diagnosis
pip install -r requirements.txt
```
```
data/
├── images/        # Non-contrast CT images (.nii.gz)
├── masks/         # Segmentation masks (.nii.gz)
└── annotations/   # JSON annotation files
```
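A quick way to sanity-check one image/mask pair is with nibabel (not confirmed to be in requirements.txt, so treat the dependency as an assumption; the case ID is a placeholder):

```python
import nibabel as nib

# Placeholder case ID; file naming follows the Data Format conventions below.
image = nib.load("data/images/case_001_0000.nii.gz")
mask = nib.load("data/masks/case_001.nii.gz")

print(image.shape, image.header.get_zooms())  # voxel grid and spacing (mm)
print(mask.get_fdata().max())                 # largest label value in the mask
```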
```bash
python scripts/train.py \
    --json_folder /path/to/annotations \
    --image_folder /path/to/images \
    --exp_name plus_experiment \
    --num_epochs 100 \
    --gpu_id 0
```
```bash
python scripts/test.py \
    --json_folder /path/to/annotations \
    --image_folder /path/to/images \
    --model_path experiments/plus_experiment/checkpoints/best_model.pth \
    --exp_dir test_results
```
```
├── src/                 # Source code
│   ├── models/          # PLUS model definitions
│   ├── data/            # Data processing
│   ├── training/        # Training logic
│   ├── inference/       # Testing and prediction
│   └── evaluation/      # Metrics and evaluation
├── scripts/             # Training/testing scripts
├── configs/             # Configuration files
├── docs/                # Documentation
└── requirements.txt     # Dependencies
```
- Images: Non-contrast CT scans in NIfTI format (`{case_id}_0000.nii.gz`)
- Masks: Segmentation masks in NIfTI format (`{case_id}.nii.gz`)
- Annotations: JSON files with detection results and probabilities
```json
{
  "case_id": "case_001",
  "detection_results": [
    {
      "pred_bbox": [z1, y1, x1, z2, y2, x2],
      "pred_class": 2,
      "pred_size_mm3": 1250.5,
      "lesion_class_prob": [0.1, 0.8, 0.05, ...],
      "status": "TP",
      "matched_gt": {
        "gt_class": 2,
        "gt_bbox": [z1, y1, x1, z2, y2, x2]
      }
    }
  ]
}
```
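The annotations are plain JSON, so they can be inspected with the standard library; the file name below is a placeholder, since the annotation file naming convention is not spelled out here.

```python
import json

# Placeholder path; adjust to the actual annotation file name.
with open("data/annotations/case_001.json") as f:
    ann = json.load(f)

for det in ann["detection_results"]:
    probs = det["lesion_class_prob"]
    top_class = max(range(len(probs)), key=probs.__getitem__)  # argmax over class probabilities
    print(ann["case_id"], det["status"], det["pred_class"], top_class, det["pred_size_mm3"])
```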
The PLUS system is designed as a plug-and-play enhancement module that can be integrated into existing liver lesion detection pipelines:
- 3D CNNs: ROI feature extraction optimized for non-contrast CT
- Hierarchical Cross-Attention: Multi-scale global-local feature fusion
- Graph Neural Network: Class prototype learning and interaction
- Multi-modal Fusion: Combines spatial, contextual, and probabilistic information
- Plug-and-Play Design: Easy integration with existing detection systems (see the sketch below)
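The sketch below illustrates the plug-and-play idea: an existing detector proposes candidate lesions, and a PLUS-style classifier re-scores their class probabilities. The function names, the `detector`/`plus_model` interfaces, and the ROI cropping are all hypothetical stand-ins for the repo's actual components.

```python
import torch

def crop_roi(volume, bbox):
    # Axis-aligned crop; bbox = [z1, y1, x1, z2, y2, x2], matching the annotation format.
    z1, y1, x1, z2, y2, x2 = bbox
    return volume[..., z1:z2, y1:y2, x1:x2]

def refine_detections(detector, plus_model, ct_volume):
    """Hypothetical wiring: run an existing detector, then let a PLUS-style
    classifier re-score the class probabilities of each candidate lesion."""
    detections = detector(ct_volume)  # expected: list of dicts with "pred_bbox", "lesion_class_prob"
    for det in detections:
        roi = crop_roi(ct_volume, det["pred_bbox"])
        with torch.no_grad():
            logits = plus_model(roi.unsqueeze(0))  # add a batch dimension
        det["lesion_class_prob"] = logits.softmax(-1).squeeze(0).tolist()
    return detections
```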
Modify `configs/default_config.py` to adjust:
- Model parameters (dimensions, layers)
- Training settings (learning rate, epochs)
- Data processing options
- Evaluation metrics
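The actual keys live in `configs/default_config.py`; the snippet below only illustrates, with made-up field names and values, the kinds of settings these options correspond to.

```python
# Illustrative only: these field names and values are hypothetical,
# not the actual keys defined in configs/default_config.py.
config = {
    "model": {"feature_dim": 256, "num_attention_layers": 4},
    "training": {"learning_rate": 1e-4, "num_epochs": 100, "batch_size": 8},
    "data": {"roi_size": (48, 48, 48)},
    "evaluation": {"metrics": ["accuracy", "auc", "f1"]},
}
```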
If you find our work useful, please cite it via:
```bibtex
@inproceedings{hao2025plus,
  title={PLUS: Plug-and-Play Enhanced Liver Lesion Diagnosis Model on Non-Contrast CT Scans},
  author={Hao, Jiacheng and Zhang, Xiaoming and Liu, Wei and Yin, Xiaoli and Gao, Yuan and Li, Chunli and Zhang, Ling and Lu, Le and Shi, Yu and Han, Xu and Yan, Ke},
  booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
  year={2025},
  organization={Springer}
}
```