Kalrfou/SwinT-pretrained-microscopy-models

CS-UNet: A Generalizable and Flexible Segmentation Algorithm

Transfer Learning for Microstructure Segmentation with CS-UNet: A Hybrid Algorithm with Transformer and CNN Encoders

News

Introduction

Feel free to check out our preprint on arXiv.

[Figure] The encoder-decoder architecture for microstructure segmentation with transfer learning. The CNN and Swin-T models are pre-trained on ImageNet and microscopy images; the weights of the pre-trained CNN and Swin-T models initialize the encoders, while the weights of the Swin-T models initialize the decoders.
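The core idea above, reusing pre-trained classifier weights to initialize a segmentation network, can be sketched with plain dictionaries. This is a minimal illustration, not the repository's actual code: the key names and nested-list "tensors" are assumptions standing in for a real framework's state dicts.

```python
def _shape(t):
    """Shape of a nested-list 'tensor'; also handles arrays with a .shape attribute."""
    if hasattr(t, "shape"):
        return tuple(t.shape)
    dims = []
    while isinstance(t, list):
        dims.append(len(t))
        t = t[0] if t else None
    return tuple(dims)


def transfer_matching_weights(model_state, pretrained_state):
    """Copy pretrained tensors whose key and shape match the target model.

    Entries with no pretrained counterpart, or a mismatched shape (e.g. a
    classification head replaced by a segmentation decoder), keep their
    freshly initialized values.
    """
    merged = {}
    for key, fresh in model_state.items():
        src = pretrained_state.get(key)
        merged[key] = src if src is not None and _shape(src) == _shape(fresh) else fresh
    return merged
```

In a real framework this matching is typically done for you (e.g. by loading a state dict non-strictly), but the shape check shows why only the shared encoder layers benefit from pre-training while the new decoder starts from scratch.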

Pretrained microscopy models

You can download the pretrained MicroLite Swin-T encoders below. These encoders transfer weights from classification models trained on a large microscopy dataset of over 50,000 images.

| Swin-T architecture | Depth | Pre-training method | Top-1 accuracy | Top-5 accuracy | Download |
|---|---|---|---|---|---|
| Original | [2,2,6,2] | MicroLite | 84.23 | 95.91 | ckp |
| Original | [2,2,6,2] | ImageNet → MicroLite | 84.63 | 96.35 | ckp |
| Intermediate | [2,2,2,2] | MicroLite | 84.0 | 96.91 | ckp |
| Intermediate | [2,2,2,2] | ImageNet → MicroLite | 84.45 | 97.83 | ckp |
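Before a downloaded classification checkpoint can initialize a segmentation encoder, its state dict usually needs light cleanup. The sketch below shows one common pattern; the `module.` and `head.` key names are assumptions about the checkpoint layout, not guarantees about these particular files.

```python
def prepare_encoder_state(checkpoint_state, drop_prefixes=("head.",)):
    """Return a state dict suitable for initializing a segmentation encoder.

    Strips DataParallel-style 'module.' prefixes and drops the
    classification-head keys, which a segmentation encoder does not use.
    """
    cleaned = {}
    for key, value in checkpoint_state.items():
        if key.startswith("module."):
            key = key[len("module."):]
        if any(key.startswith(p) for p in drop_prefixes):
            continue
        cleaned[key] = value
    return cleaned
```

Inspect the checkpoint's actual keys first; the prefixes to strip or drop vary between training setups.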

Dataset

For image segmentation, the datasets used in this repository were obtained from the NASA GitHub repository (pretrained-microscopy-models). They consist of 7 microscopy datasets derived from two materials:

- Nickel-based superalloys (Super): 3 classes (matrix, secondary, and tertiary).
- Environmental barrier coatings (EBC): 2 classes (oxide layer and background, i.e. non-oxide, layer).
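Segmentation on these datasets is typically scored per class, since the class counts differ (3 for Super, 2 for EBC). A minimal per-class intersection-over-union over flat label lists, independent of any framework, looks like this; it is a generic evaluation sketch, not the repository's metric code.

```python
def per_class_iou(pred, target, num_classes):
    """Per-class intersection-over-union for flat lists of integer labels."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        # A class absent from both prediction and target has no defined IoU.
        ious.append(inter / union if union else float("nan"))
    return ious
```

For the 3-class Super datasets `num_classes=3` covers matrix, secondary, and tertiary; for EBC, `num_classes=2` covers oxide and background.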

Citation

Citing Swin Transformer

@inproceedings{liu2021Swin,
  title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
  author={Liu, Ze and Lin, Yutong and Cao, Yue and Hu, Han and Wei, Yixuan and Zhang, Zheng and Lin, Stephen and Guo, Baining},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021}
}

Citing Swin-Unet

@inproceedings{cao2022swin,
  title={Swin-unet: Unet-like pure transformer for medical image segmentation},
  author={Cao, Hu and Wang, Yueyue and Chen, Joy and Jiang, Dongsheng and Zhang, Xiaopeng and Tian, Qi and Wang, Manning},
  booktitle={European conference on computer vision},
  pages={205--218},
  year={2022},
  organization={Springer}
}

Citing Transdeeplab

@inproceedings{azad2022transdeeplab,
  title={Transdeeplab: Convolution-free transformer-based deeplab v3+ for medical image segmentation},
  author={Azad, Reza and Heidari, Moein and Shariatnia, Moein and Aghdam, Ehsan Khodapanah and Karimijafarbigloo, Sanaz and Adeli, Ehsan and Merhof, Dorit},
  booktitle={International Workshop on PRedictive Intelligence In MEdicine},
  pages={91--102},
  year={2022},
  organization={Springer}
}

Citing HiFormer

@inproceedings{heidari2023hiformer,
  title={Hiformer: Hierarchical multi-scale representations using transformers for medical image segmentation},
  author={Heidari, Moein and Kazerouni, Amirhossein and Soltany, Milad and Azad, Reza and Aghdam, Ehsan Khodapanah and Cohen-Adad, Julien and Merhof, Dorit},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={6202--6212},
  year={2023}
}

Microstructure segmentation with deep learning encoders pre-trained on a large microscopy dataset

@article{stuckner2022microstructure,
  title={Microstructure segmentation with deep learning encoders pre-trained on a large microscopy dataset},
  author={Stuckner, Joshua and Harder, Bryan and Smith, Timothy M},
  journal={NPJ Computational Materials},
  volume={8},
  number={1},
  pages={200},
  year={2022},
  publisher={Nature Publishing Group UK London}
}
