Releases: okunator/cellseg_models.pytorch
v0.1.26
0.1.26 — 2025-05-07
Removed
- Removed the `datamodules` module
- Removed the `datasets` module
Refactor
- Refactored the whole model interface to be more user-friendly.
Features
- Added a new `wsi` module (usage sketched after this list), including:
  - A `SlideReader` class to read patches from a WSI slide.
    - Backends: OpenSlide, cuCIM
    - Adapted the reader class from the HistoPrep library. Props to Jopo666
  - A `get_sub_grids` function to get sub-grids from a WSI slide. Can be used to filter the patches. Based on connected components.
- Added a new `torch_datasets` module, including:
  - A `WSIDatasetInfer` class to run inference directly from WSIs. Adapted the class from the HistoPrep library. Props to Jopo666
  - A `TrainDatasetH5` class to handle training data for the models from an h5 file.
  - A `TrainDatasetFolder` class to handle training data for the models from image and label folders.
- Added a new `inference.WsiSegmenter` class to handle the segmentation of WSIs.
- Added a new `wsi.inst_merger.InstMerger` class to handle the merging of instance masks at image boundaries.
- Added `inst2gdf` and `sem2gdf` functions to the `utils.vectorize` module. These functions efficiently convert instance and semantic masks to GeoDataFrame objects.
- Added `FileHandler.to_mat` and `FileHandler.to_gson` save functions that take in a dictionary of model output masks (output from the `Inferer` classes) and save it to `.mat`, `.feather`, `.geojson`, or `.parquet` files.
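A minimal end-to-end sketch of how these new pieces might fit together. The import paths follow the entries above, but the constructor arguments, method names, and output-dict keys are assumptions, not the library's confirmed API:

```python
# Hypothetical sketch -- signatures and dict keys below are assumptions.
import torch

from cellseg_models_pytorch.wsi import SlideReader
from cellseg_models_pytorch.inference import WsiSegmenter
from cellseg_models_pytorch.utils import FileHandler
from cellseg_models_pytorch.utils.vectorize import inst2gdf

model = torch.load("trained_model.pth")  # any trained csmp segmentation model

# Read patches from a WSI; backend is OpenSlide or cuCIM.
reader = SlideReader("slide.svs", backend="OPENSLIDE")

# Segment the slide patch-by-patch; InstMerger resolves boundary instances.
segmenter = WsiSegmenter(model=model, reader=reader, patch_size=(1024, 1024))
masks = segmenter.segment()

# Vectorize the instance mask to a GeoDataFrame and save the raw masks.
gdf = inst2gdf(masks["inst"])                      # assumed output key
FileHandler.to_gson(masks, "slide_cells.geojson")  # also .feather / .parquet
FileHandler.to_mat(masks, "slide_cells.mat")
```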
Added Dependencies
- Added `libpysal` dependency
- Added `networkx` dependency
Removed Dependencies
- Removed `lightning` dependency
- Removed `albumentations` dependency
Chore
- Move `FolderDatasetInfer` to the `torch_datasets` module
v0.1.25
0.1.25 — 2024-07-05
Features
- Image encoders are now imported only from timm models.
- Added `enc_out_indices` to the model classes, to enable selecting which layers to use as the encoder outputs (see the sketch after this list).
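A hedged sketch of the new `enc_out_indices` argument; the builder function and the other kwargs here are assumptions:

```python
# Hypothetical sketch -- the builder name and kwargs are assumptions.
from cellseg_models_pytorch.models import cellpose_base

# Any timm encoder can back the model; `enc_out_indices` selects which
# encoder stages are used as outputs (timm `features_only` convention).
model = cellpose_base(
    type_classes=6,
    enc_name="resnet50",
    enc_out_indices=(0, 1, 2, 3),
)
```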
Removed
- Removed the original SAM and DINOv2 image-encoder implementations from this repo. These can be found in timm models these days.
- Removed the `cellseg_models_pytorch.training` module, which was left unused after the example notebooks were updated.
Examples
- Updated example notebooks.
- Added new example notebooks utilizing the UNI foundation model from MahmoodLab.
- Added new example notebooks utilizing the Prov-GigaPath foundation model from Microsoft Research.
- NOTE: These examples use the Hugging Face model hub to load the weights (see the sketch below). Permission to use the model weights is required to run these examples.
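The gated weights are typically fetched through `timm` and `huggingface_hub` roughly as below; treat the extra kwargs as assumptions taken from the respective model cards:

```python
import timm
from huggingface_hub import login

# Access must first be granted on the model pages (MahmoodLab/UNI and
# prov-gigapath/prov-gigapath); then authenticate with your token.
login(token="<your-hf-token>")

# Kwargs per the UNI model card at the time of writing (assumption).
uni = timm.create_model(
    "hf-hub:MahmoodLab/uni",
    pretrained=True,
    init_values=1e-5,
    dynamic_img_size=True,
)
gigapath = timm.create_model("hf_hub:prov-gigapath/prov-gigapath", pretrained=True)
```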
Chore
- Updated the timm version requirement to above 1.0.0.
Breaking changes
- Dropped support for Python 3.9.
- The `self.encoder` in each model is new; thus, trained weights from previous versions of the package will not work with this version.
v0.1.24
0.1.24 — 2023-10-13
Style
- Updated the `Inferer.infer()` method API to accept arguments related to saving the model outputs (see the sketch below).
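A hedged sketch of the updated call; the inferer class and argument names are assumptions based on the entry above:

```python
# Hypothetical sketch -- class and kwarg names are assumptions.
import torch

from cellseg_models_pytorch.inference import ResizeInferer

model = torch.load("trained_model.pth")  # any trained csmp model
inferer = ResizeInferer(model=model, input_path="images/", resize=(256, 256))

# 0.1.24: saving-related arguments are now passed to `.infer()` itself.
inferer.infer(save_dir="results/", save_format=".geojson")
```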
Features
- Add `CPP-Net`: https://arxiv.org/abs/2102.06867
- Add option for mixed-precision inference (illustrated below).
- Add option to interpolate model outputs to a given size in all of the segmentation models.
- Add DINOv2 backbone.
- Add support for `.geojson`, `.feather`, and `.parquet` file formats when running inference.
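The mixed-precision option presumably wraps the forward pass in an autocast context; a generic, standalone illustration of that mechanism (not the library's exact flag):

```python
import torch

model = torch.nn.Conv2d(3, 2, kernel_size=3, padding=1).cuda().eval()
x = torch.randn(1, 3, 64, 64, device="cuda")

# Mixed-precision inference: forward pass under autocast, no autograd.
with torch.inference_mode(), torch.autocast("cuda", dtype=torch.float16):
    y = model(x)

print(y.dtype)  # torch.float16
```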
Docs
- Add `CPP-Net` training example with the Pannuke dataset.
Fixes
- Fix resize transformation bug.
v0.1.23
0.1.23 — 2023-08-28
Features
- Add a stem-skip module (a long skip for the input-image-resolution feature map).
- Add `UnetTR` transformer encoder wrapper class.
- Add a new `Encoder` wrapper for timm- and UnetTR-based encoders.
- Add stem-skip support and upsampling block options to all current model architectures.
- Add a masking option to all the criterions (illustrated after this list).
- Add `MAELoss`.
- Add `BCELoss`.
- Add a base class for transformer-based backbones.
- Add `SAM-VitDet` image encoder with support to load pre-trained `SAM` weights.
- Add `CellVIT-SAM` model.
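The masking option means that pixels outside a given mask are excluded from the loss; a generic illustration of the idea (the package's criterion signatures may differ):

```python
import torch
import torch.nn.functional as F

def masked_bce(logits, target, mask):
    # Per-pixel BCE, then drop the pixels where mask == 0.
    loss = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    loss = loss * mask
    return loss.sum() / mask.sum().clamp(min=1)

logits = torch.randn(2, 1, 8, 8)
target = torch.randint(0, 2, (2, 1, 8, 8)).float()
mask = torch.ones_like(target)
mask[..., :2] = 0  # e.g. ignore unannotated border columns
print(masked_bce(logits, target, mask))
```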
Docs
- Add notebook example on training `Hover-Net` with lightning from scratch.
- Add notebook example on training `StarDist` with lightning from scratch.
- Add notebook example on training `CellPose` with accelerate from scratch.
- Add notebook example on training `OmniPose` with accelerate from scratch.
- Add notebook example on finetuning `CellVIT-SAM` with accelerate.
Fixes
- Fix the current `TimmEncoder` to store feature info.
- Fix the Up block to support transconv and bilinear upsampling, and fix data-flow issues.
- Fix the `StardistUnet` class to output all of the decoder features.
- Fix `Decoder`, `DecoderStage`, and the long-skip modules to work with upscale factors instead of output dimensions.
v0.1.22
v0.1.21
0.1.21 — 2023-06-12
Features
- Add the StrongAugment data augmentation policy to the data-loading pipeline (sketched below): https://arxiv.org/abs/2206.15274
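StrongAugment applies a random chain of operations with random magnitudes to each image; a conceptual stand-in (not the library's actual API) looks roughly like:

```python
import random

from PIL import Image, ImageEnhance, ImageOps

# Conceptual stand-in for a StrongAugment-style policy (not the real API):
# apply 1..max_ops randomly chosen ops, each with a random magnitude.
OPS = [
    lambda im, m: ImageEnhance.Brightness(im).enhance(m),
    lambda im, m: ImageEnhance.Contrast(im).enhance(m),
    lambda im, m: ImageOps.solarize(im, threshold=int(255 * min(m, 1.0))),
]

def strong_augment(im: Image.Image, max_ops: int = 3) -> Image.Image:
    for op in random.sample(OPS, k=random.randint(1, max_ops)):
        im = op(im, random.uniform(0.5, 1.5))
    return im
```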
v0.1.20
0.1.20 — 2023-01-13
Fixes
- Enable writing folder and hdf5 datasets that contain only images.
- Enable writing datasets without patching.
- Add the long-missing h5 reading utility function to `FileHandler` (sketched below).
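Under the hood such a utility is a thin h5py read; a generic illustration (the actual `FileHandler` method name and dataset keys are assumptions):

```python
import h5py

# The dataset keys ("images", "insts") are assumptions about how the
# writers in this package lay out the h5 db.
with h5py.File("pannuke_train.h5", "r") as f:
    image = f["images"][0]  # (H, W, 3) uint8 patch
    inst = f["insts"][0]    # (H, W) integer instance mask
```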
Features
- Add hdf5 input-file reading to the `Inferer` classes.
- Add an option to write the Pannuke dataset to an h5 db in `PannukeDataModule` and `LizardDataModule`.
- Add a generic model builder function `get_model` to `models.__init__.py` (sketched below).
- Rewrite the segmentation benchmarker. Now it can take in hdf5 datasets.
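A hedged sketch of the generic builder; the parameter names are assumptions, not the confirmed signature:

```python
# Hypothetical sketch -- parameter names are assumptions.
from cellseg_models_pytorch.models import get_model

# One entry point instead of importing each architecture's builder separately.
model = get_model(name="hovernet", type="base", ntypes=6, ntissues=5)
```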
v0.1.19
v0.1.18
v0.1.17
0.1.17 — 2022-12-29
Features
- Add transformer modules.
- Add exact, slice, and memory-efficient (xformers) self-attention computations.
- Add transformer modules to the `Decoder` modules.
- Add common transformer MLP activation functions: star-relu, geglu, approximate-gelu.
- Add the Linformer self-attention mechanism.
- Add support for model initialization from a yaml file in `MultiTaskUnet` (sketched below).
- Add a new cross-attention long-skip module. Works with `long_skip='cross-attn'`.
Refactor
- Added more verbose error messages for the abstract wrapper modules in `modules.base_modules`.
- Added more verbose error catching for `xformers.ops.memory_efficient_attention`.
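A hedged sketch of the yaml initialization and the new long-skip flag; the classmethod name is an assumption:

```python
# Hypothetical sketch -- the classmethod name is an assumption.
from cellseg_models_pytorch.models import MultiTaskUnet

# Build a multi-task model from a yaml config file (new in 0.1.17).
model = MultiTaskUnet.from_yaml("model_config.yaml")

# The new cross-attention long skip is selected with a string flag,
# e.g. long_skip="cross-attn", in the model constructors.
```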