Commit 35886d9

docs: mod readme, fix notebooks

1 parent 8b20a21 commit 35886d9

4 files changed: +38 -36 lines changed

README.md

Lines changed: 30 additions & 25 deletions
@@ -24,11 +24,21 @@

 ## Introduction

-Contains multi-task encoder-decoder architectures along with dedicated post-processing methods for segmenting cell/nuclei instances. As the name suggests, this library is heavily inspired by [segmentation_models.pytorch](https://github.com/qubvel/segmentation_models.pytorch) library for semantic segmentation.
+**cellseg-models.pytorch** is a library built upon [PyTorch](https://pytorch.org/) that contains multi-task encoder-decoder architectures along with dedicated post-processing methods for segmenting cell/nuclei instances. As the name might suggest, this library is heavily inspired by the [segmentation_models.pytorch](https://github.com/qubvel/segmentation_models.pytorch) library for semantic segmentation.

-<br><br>
+## Features

-![Architecture](./images/architecture_overview.png)
+- High level API to define cell/nuclei instance segmentation models.
+- 4 cell/nuclei instance segmentation models and more to come.
+- Open source datasets for training and benchmarking.
+- Pre-trained backbones/encoders from the [timm](https://github.com/rwightman/pytorch-image-models) library.
+- All the architectures can be extended to **panoptic segmentation**.
+- A lot of flexibility to modify the components of the model architectures.
+- Sliding window inference for large images.
+- Multi-GPU inference.
+- Popular training losses and benchmarking metrics.
+- Simple model training with [pytorch-lightning](https://www.pytorchlightning.ai/).
+- Benchmarking utilities both for model latency & segmentation performance.

 ## Installation

@@ -44,17 +54,6 @@ pip install cellseg-models-pytorch
 pip install cellseg-models-pytorch[all]
 ```

-## Features
-
-- High level API to define cell/nuclei instance segmentation models.
-- 4 cell/nuclei instance segmentation models and more to come.
-- Pre-trained backbones/encoders from the [timm](https://github.com/rwightman/pytorch-image-models) library.
-- All the architectures can be augmented to output semantic segmentation outputs along with instance semgentation outputs (panoptic segmentation).
-- A lot of flexibility to modify the components of the model architectures.
-- Multi-GPU inference.
-- Popular training losses and benchmarking metrics.
-- Simple model training with [pytorch-lightning](https://www.pytorchlightning.ai/).
-
 ## Models

 | Model | Paper |
@@ -109,10 +108,10 @@ y = model(x) # {"cellpose": [1, 2, 256, 256], "type": [1, 5, 256, 256], "sem": [
 ```python
 import cellseg_models_pytorch as csmp

-# two decoder branches.
+# the model will include two decoder branches.
 decoders = ("cellpose", "sem")

-# three segmentation heads from the decoders.
+# and in total three segmentation heads emerging from the decoders.
 heads = {
     "cellpose": {"cellpose": 2, "type": 5},
     "sem": {"sem": 3}
@@ -148,32 +147,33 @@ y = model(x) # {"cellpose": [1, 2, 256, 256], "type": [1, 5, 256, 256], "sem": [
 ```python
 import cellseg_models_pytorch as csmp

+# define the model
 model = csmp.models.hovernet_base(type_classes=5)
-# returns {"hovernet": [B, 2, H, W], "type": [B, 5, H, W], "inst": [B, 2, H, W]}

-# the final activations for each model output
+# define the final activations for each model output
 out_activations = {"hovernet": "tanh", "type": "softmax", "inst": "softmax"}

-# models perform the poorest at the image boundaries, with overlapping patches this
-# causes issues which can be overcome by adding smoothing to the prediction boundaries
+# define whether to down-weight the predictions at the image boundaries.
+# typically, models perform the poorest at the image boundaries, and with
+# overlapping patches this causes issues that can be overcome by down-
+# weighting the prediction boundaries
 out_boundary_weights = {"hovernet": True, "type": False, "inst": False}

-# Sliding window inference for big images using overlapping patches
+# define the inferer
 inferer = csmp.inference.SlidingWindowInferer(
     model=model,
     input_folder="/path/to/images/",
     checkpoint_path="/path/to/model/weights/",
     out_activations=out_activations,
     out_boundary_weights=out_boundary_weights,
-    instance_postproc="hovernet", # THE POST-PROCESSING METHOD
+    instance_postproc="hovernet", # the post-processing method
+    normalization="percentile", # same normalization as in training
     patch_size=(256, 256),
     stride=128,
     padding=80,
     batch_size=8,
-    normalization="percentile", # same normalization as in training
 )

-# Run sliding window inference.
 inferer.infer()

 inferer.out_masks
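Once `infer()` has run, the post-processed predictions are collected on the inferer. A minimal sketch of inspecting them, assuming `out_masks` maps each input file name to a dict of named mask arrays (the exact layout is an assumption):

```python
# assumed layout: {"image1": {"inst": ndarray, "type": ndarray}, ...}
for name, masks in inferer.out_masks.items():
    print(name, {k: m.shape for k, m in masks.items()})
```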
@@ -182,9 +182,14 @@

 ## Models API

+Generally, the model building API enables the effortless creation of hard-parameter sharing multi-task encoder-decoder CNN architectures. The general architectural schema is illustrated in the image below.
+
+<br><br>
+![Architecture](./images/architecture_overview.png)
+
 ### Class API

-The class API enables the most flexibility in defining different model architectures. It allows for defining a multitude of hard-parameter sharing multi-task encoder-decoder architectures with (relatively) low effort. The class API is borrowing a lot from [segmentation_models.pytorch](https://github.com/qubvel/segmentation_models.pytorch) models API.
+The class API enables the most flexibility in defining different model architectures. It borrows a lot from the [segmentation_models.pytorch](https://github.com/qubvel/segmentation_models.pytorch) models API.

 **Model classes**:
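To make the class API concrete: a minimal sketch that additionally swaps in a timm backbone, in line with the pre-trained-encoders feature listed above (the `enc_name` and `enc_pretrain` argument names are assumptions for illustration):

```python
import cellseg_models_pytorch as csmp

# assumed class-API call: same decoder/head specs as before, plus an
# explicitly chosen timm encoder; argument names are illustrative
model = csmp.CellPoseUnet(
    decoders=("cellpose", "sem"),
    heads={"cellpose": {"cellpose": 2, "type": 5}, "sem": {"sem": 3}},
    enc_name="resnet50",  # any timm backbone name
    enc_pretrain=True,    # load pre-trained encoder weights
)
```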

cellseg_models_pytorch/utils/file_manager.py

Lines changed: 1 addition & 1 deletion
@@ -424,7 +424,7 @@ def save_masks(
         save_dir = fname.parent / "cells"
         if not Path(save_dir).exists():
             Path(save_dir).mkdir(parents=True, exist_ok=True)
-        # print(save_dir)
+
         fn = save_dir / f"{fname.name}_cells"
         FileHandler.write_gson(
             fname=fn,

examples/lizard_nuclei_segmentation_cellpose.ipynb

Lines changed: 2 additions & 3 deletions
@@ -289,10 +289,9 @@
 "\n",
 "Other important params include: \n",
 "- `out_activations` - Sets the output activation functions for each of the model outputs\n",
-"- `out_boundary_weights` - Sets whether we will use a weight matrix to add less weight to boundaries of the prediction of the image. This can only be useful when inference is run for bigger images that are patched in overlapping patches (this can be done with the `SlidingWindowInferer`).\n",
+"- `out_boundary_weights` - Sets whether we will use a weight matrix to add less weight to the boundaries of the predictions. This is only useful when inference is run on larger images that are processed in overlapping patches (inference with overlapping patches can be done with the `SlidingWindowInferer`).\n",
 "- `normalization` - Should be set to the same one as during training.\n",
-"- `n_images` - Run inference only for the 3 first images of inside the input folder.\n",
-"- `batch_size` -This needs to be set to 1 since the input images have different sizes and the dataloader can't stack them."
+"- `n_images` - Run inference only for the first 50 images inside the input folder."
 ]
 },
 {

examples/pannuke_nuclei_segmentation_stardist.ipynb

Lines changed: 5 additions & 7 deletions
@@ -178,7 +178,7 @@
 "\n",
 "For the nuclei type masks we will monitor the mIoU metric during training.\n",
 "\n",
-"The optimizer used here is [AdamP](https://arxiv.org/abs/2006.08217)."
+"The optimizer used here is [AdamW](https://arxiv.org/abs/1711.05101)."
 ]
 },
 {
@@ -206,7 +206,7 @@
 "    model=model,\n",
 "    branch_losses={\"dist\": \"ssim_mse\", \"stardist\": \"ssim_mse\", \"type\": \"ce_dice\"},\n",
 "    branch_metrics={\"dist\": [None], \"stardist\": [None], \"type\": [\"miou\"]},\n",
-"    optimizer=\"adamp\",\n",
+"    optimizer=\"adamw\",\n",
 "    lookahead=False,\n",
 ")\n",
 "\n",
@@ -279,12 +279,11 @@
 "\n",
 "Other important params include: \n",
 "- `out_activations` - Sets the output activation functions for each of the model outputs\n",
-"- `out_boundary_weights` - Sets whether we will use a weight matrix to add less weight to boundaries of the prediction of the image. This can only be useful when inference is run for bigger images that are patched in overlapping patches (this can be done with the `SlidingWindowInferer`).\n",
+"- `out_boundary_weights` - Sets whether we will use a weight matrix to add less weight to the boundaries of the predictions. This is only useful when inference is run on larger images that are processed in overlapping patches (inference with overlapping patches can be done with the `SlidingWindowInferer`).\n",
 "- `normalization` - Should be set to the same one as during training.\n",
 "- `n_images` - Run inference only for the first 50 images inside the input folder.\n",
-"- `use_mask` - Use a mask to get cell type classifications for only to the same pixels as in the instance segmentation.\n",
 "\n",
-"**NOTE**: Another important thing to note here, is that the `\"stardist\"` post-proc method is not the original one introduced in the [Stardist](https://github.com/stardist/stardist) but rather a workaround that's is not as optimal as the original one but still does the job. You can use the original by setting it to `\"stardist_orig\"`, however, this requires also the original `stardist` library that can be installed with `pip install stardist`."
+"**NOTE**: The `\"stardist\"` post-processing method is not the original one introduced in the [Stardist](https://github.com/stardist/stardist) paper. It is a Python rewrite of the original that can be even twice as fast, with only negligible differences in the output. If you prefer, you can use the original by setting `instance_postproc` to `\"stardist_orig\"`; note that this requires the original `stardist` library, which can be installed with `pip install stardist`."
 ]
 },
 {
@@ -304,14 +303,13 @@
 "inferer = csmp.inference.ResizeInferer(\n",
 "    model=experiment,\n",
 "    input_folder=save_dir / \"test\" / \"images\",\n",
-"    out_activations={\"dist\": \"tanh\", \"stardist\": None, \"type\": \"softmax\"},\n",
+"    out_activations={\"dist\": None, \"stardist\": None, \"type\": \"softmax\"},\n",
 "    out_boundary_weights={\"dist\": False, \"stardist\": False, \"type\": False},\n",
 "    resize=(256, 256), # Not actually resizing anything\n",
 "    instance_postproc=\"stardist\",\n",
 "    save_intermediate=True, # save intermediate soft masks for visualization\n",
 "    normalization=\"percentile\", # same normalization as during training\n",
 "    batch_size=8,\n",
-"    use_mask=True,\n",
 "    n_images=50 # Use only the first 50 images of the folder\n",
 ")\n",
 "inferer.infer()"
