This project implements deep learning models for epidermis segmentation in skin whole-slide images (WSIs) using U-Net and DeepLabv3+ architectures.
```bash
# Create and set up the conda environment
bash setup_environment.sh

# Activate the environment
conda activate hp-skin-01

# Run preprocessing and training
bash run_training.sh

# Evaluate the trained models
bash run_evaluation.sh
```
Place your datasets in the following structure:

```
dataset/
├── Histo-Seg/
│   ├── WSI/   # .jpg files (20x magnification)
│   └── Mask/  # .jpg files (multiclass masks)
└── Queensland/
    ├── WSI/   # .tif files (10x magnification)
    └── Mask/  # .png files (multiclass masks)
```
- **Binary Mask Generation**: extracts epidermis pixels from the multiclass masks using dataset-specific colors
  - Histo-Seg: RGB(112, 48, 160)
  - Queensland: RGB(73, 0, 106)
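The color-to-binary step amounts to a per-pixel equality test against the dataset's epidermis color. A minimal sketch, assuming masks are loaded as RGB numpy arrays (the function name and dataset keys are illustrative, not the project's actual API):

```python
import numpy as np

# Epidermis class colors from the dataset descriptions above.
EPIDERMIS_RGB = {
    "histo_seg": (112, 48, 160),
    "queensland": (73, 0, 106),
}

def to_binary_mask(mask_rgb: np.ndarray, dataset: str) -> np.ndarray:
    """Return a uint8 mask: 1 where the pixel matches the epidermis color, else 0."""
    target = np.array(EPIDERMIS_RGB[dataset], dtype=mask_rgb.dtype)
    return np.all(mask_rgb == target, axis=-1).astype(np.uint8)
```

Note that JPEG-compressed masks (Histo-Seg) may have slightly off-color pixels, so a tolerance-based match may be needed in practice.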
- **Patch Extraction**: creates 384×384 patches with tissue segmentation
  - Non-overlapping patches
  - Tissue segmentation to remove background
  - Paired WSI–mask patches
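The extraction step above can be sketched as a non-overlapping grid walk with a background filter. The near-white threshold heuristic and all names here are assumptions for illustration, not the project's exact implementation:

```python
import numpy as np

PATCH = 384  # patch side length from the pipeline description

def is_tissue(patch: np.ndarray, min_tissue_frac: float = 0.05) -> bool:
    """Crude background filter: slide background is near-white, tissue is darker."""
    non_white_frac = (patch.mean(axis=-1) < 220).mean()
    return non_white_frac > min_tissue_frac

def extract_patches(wsi: np.ndarray, mask: np.ndarray):
    """Yield paired (image_patch, mask_patch) tiles on a non-overlapping grid."""
    h, w = wsi.shape[:2]
    for y in range(0, h - PATCH + 1, PATCH):
        for x in range(0, w - PATCH + 1, PATCH):
            img_p = wsi[y:y + PATCH, x:x + PATCH]
            if is_tissue(img_p):
                yield img_p, mask[y:y + PATCH, x:x + PATCH]
```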
- **Model Training**: trains U-Net models with different encoders
  - ResNet50 encoder
  - EfficientNet-B3 encoder
  - Pure Dice loss
  - Weights & Biases (wandb) integration for experiment tracking
- **Evaluation**: computes Dice, IoU, and other metrics on the test set
- U-Net with ResNet50: Balanced performance
- U-Net with EfficientNet-B3: Higher accuracy, more parameters
- DeepLabv3+: Coming soon
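The Dice and IoU metrics used in evaluation reduce to a few lines over binary masks. A minimal numpy sketch, not the project's exact evaluation code:

```python
import numpy as np

def dice_iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7):
    """Compute (Dice, IoU) for two binary {0, 1} masks of equal shape."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = (2 * inter + eps) / (pred.sum() + gt.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return float(dice), float(iou)
```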
- Histo-Seg: `.jpg` for both WSI and masks
- Queensland: `.tif` for WSI (requires OpenSlide), `.png` for masks
- Models are saved in `experiments/*/checkpoints/`
- Evaluation results are written to `evaluation_results/`
- Training progress tracked on Weights & Biases
```bash
# Ubuntu/Debian
sudo apt-get install openslide-tools

# macOS
brew install openslide
```
- Reduce the batch size in `configs/training_config.yaml`
- Enable gradient accumulation
- Use mixed precision training
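The last two options combine naturally in one PyTorch training step. A sketch, assuming `model`, `loader`, `optimizer`, and `loss_fn` are defined elsewhere; the accumulation count is illustrative:

```python
import torch

def train_one_epoch(model, loader, optimizer, loss_fn,
                    accum_steps: int = 4, device: str = "cuda"):
    """One epoch with gradient accumulation and (on CUDA) mixed precision."""
    use_amp = device == "cuda"
    scaler = torch.cuda.amp.GradScaler(enabled=use_amp)
    optimizer.zero_grad()
    for step, (images, masks) in enumerate(loader):
        images, masks = images.to(device), masks.to(device)
        with torch.autocast(device_type=device, enabled=use_amp):
            # Divide so the accumulated gradient matches a large-batch step.
            loss = loss_fn(model(images), masks) / accum_steps
        scaler.scale(loss).backward()
        if (step + 1) % accum_steps == 0:
            scaler.step(optimizer)
            scaler.update()
            optimizer.zero_grad()
```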
```bash
# Verify the installation
python -c "import torch; print(f'PyTorch: {torch.__version__}')"
python -c "import segmentation_models_pytorch as smp; print('SMP installed')"
```
See `CLAUDE.md` for detailed project documentation and implementation details.
This project is for research purposes only.