**README.md**
## Introduction
**cellseg-models.pytorch** is a library built upon [PyTorch](https://pytorch.org/) that contains multi-task encoder-decoder architectures along with dedicated post-processing methods for segmenting cell/nuclei instances. As the name might suggest, this library is heavily inspired by [segmentation_models.pytorch](https://github.com/qubvel/segmentation_models.pytorch) library for semantic segmentation.
```python
# Sliding window inference for big images using overlapping patches

# define the inferer
inferer = csmp.inference.SlidingWindowInferer(
    model=model,
    input_folder="/path/to/images/",
    checkpoint_path="/path/to/model/weights/",
    out_activations=out_activations,
    out_boundary_weights=out_boundary_weights,
    instance_postproc="hovernet",  # THE POST-PROCESSING METHOD
    normalization="percentile",  # same normalization as in training
    patch_size=(256, 256),
    stride=128,
    padding=80,
    batch_size=8,
)

# Run sliding window inference.
inferer.infer()

# Access the resulting masks.
inferer.out_masks
```
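To make the `patch_size`/`stride` arguments above concrete, the overlapping patch grid behind sliding-window inference can be sketched as below. This is a minimal illustration, not the library's actual implementation; the function name `patch_coords` is hypothetical.

```python
# Minimal sketch (not the library's implementation) of the overlapping
# patch grid used by sliding-window inference.
def patch_coords(img_size, patch_size, stride):
    """Top-left (y, x) coordinates of overlapping patches covering an image."""
    h, w = img_size
    ph, pw = patch_size
    ys = list(range(0, max(h - ph, 0) + 1, stride))
    xs = list(range(0, max(w - pw, 0) + 1, stride))
    # make sure the bottom/right borders are always covered
    if ys[-1] != h - ph:
        ys.append(h - ph)
    if xs[-1] != w - pw:
        xs.append(w - pw)
    return [(y, x) for y in ys for x in xs]
```

For a 512x512 image with `patch_size=(256, 256)` and `stride=128`, this yields a 3x3 grid of overlapping patches; for image sizes that are not multiples of the stride, an extra row/column is added so the borders are still covered.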
## Models API
Generally, the model building API enables the effortless creation of hard-parameter sharing multi-task encoder-decoder CNN architectures. The general architectural schema is illustrated in the image below.

The class API enables the most flexibility in defining different model architectures. It borrows a lot from the [segmentation_models.pytorch](https://github.com/qubvel/segmentation_models.pytorch) models API.
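The idea of hard-parameter sharing can be illustrated schematically: one encoder's parameters are shared by every task, while each decoder keeps its own task-specific parameters. The sketch below uses plain NumPy linear layers in place of convolutional blocks and hypothetical names; it shows only the sharing pattern, not the library's classes.

```python
import numpy as np

rng = np.random.default_rng(0)

class SharedEncoder:
    """Trunk whose parameters are shared by all task heads."""
    def __init__(self, in_dim, feat_dim):
        self.W = rng.normal(size=(in_dim, feat_dim))  # shared parameters

    def __call__(self, x):
        return np.maximum(x @ self.W, 0.0)  # ReLU features

class TaskDecoder:
    """Head with its own task-specific parameters."""
    def __init__(self, feat_dim, out_dim):
        self.W = rng.normal(size=(feat_dim, out_dim))

    def __call__(self, feats):
        return feats @ self.W

class MultiTaskModel:
    """One forward pass through the shared trunk feeds every task head."""
    def __init__(self, in_dim, feat_dim, heads):
        self.encoder = SharedEncoder(in_dim, feat_dim)
        self.decoders = {name: TaskDecoder(feat_dim, d) for name, d in heads.items()}

    def __call__(self, x):
        feats = self.encoder(x)  # computed once, shared by all tasks
        return {name: dec(feats) for name, dec in self.decoders.items()}

# e.g. an instance-segmentation head and a 5-class type head
model = MultiTaskModel(in_dim=8, feat_dim=16, heads={"inst": 2, "type": 5})
out = model(rng.normal(size=(4, 8)))
```

Because the encoder is computed once and reused, the tasks regularize each other through the shared trunk; only the lightweight heads differ per task.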
**examples/lizard_nuclei_segmentation_cellpose.ipynb**
Other important params include:

- `out_activations` - Sets the output activation functions for each of the model outputs.
- `out_boundary_weights` - Sets whether we will use a weight matrix to add less weight to the boundaries of the predictions. This is only useful when inference is run on bigger images that are split into overlapping patches (inference with overlapping patches can be done with the `SlidingWindowInferer`).
- `normalization` - Should be set to the same normalization as was used during training.
- `n_images` - Run inference only for the first 50 images inside the input folder.
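One way such a boundary weight matrix can be built is sketched below: weights fall off linearly toward the patch edges, so border predictions contribute less when overlapping patches are merged. This is an illustration of the idea, not necessarily the exact matrix the library uses.

```python
import numpy as np

def boundary_weights(patch_size):
    """2D weight matrix that down-weights pixels near the patch borders."""
    h, w = patch_size
    # distance of each row/column to its nearest edge, normalized to (0, 1]
    wy = np.minimum(np.arange(1, h + 1), np.arange(h, 0, -1)) / ((h + 1) / 2)
    wx = np.minimum(np.arange(1, w + 1), np.arange(w, 0, -1)) / ((w + 1) / 2)
    return np.clip(np.outer(wy, wx), 0.0, 1.0)

w = boundary_weights((256, 256))  # near 1 at the center, small at the corners
```

When two overlapping patches cover the same pixel, weighting the predictions this way favors the patch in which the pixel lies closer to the center, which reduces stitching artifacts at patch seams.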
**NOTE**: The `"stardist"` post-processing method is not the original one introduced in the [Stardist](https://github.com/stardist/stardist) paper. It is a Python rewrite of the original that can be even twice as fast, with only negligible differences in the output. However, if you like, you can use the original by setting `instance_postproc` to `"stardist_orig"`. Note that the original version also requires the original `stardist` library, which can be installed with `pip install stardist`.
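The `normalization="percentile"` setting used earlier can be illustrated with a common form of percentile normalization: clip intensities to low/high percentiles and rescale to [0, 1]. The exact percentiles and scaling the library uses may differ; this is a sketch of the general technique, and `percentile_normalize` is a hypothetical name.

```python
import numpy as np

def percentile_normalize(img, lo=1.0, hi=99.0):
    """Clip to the lo/hi percentiles and rescale to [0, 1] (a common scheme)."""
    p_lo, p_hi = np.percentile(img, [lo, hi])
    img = np.clip(img.astype(np.float64), p_lo, p_hi)
    return (img - p_lo) / max(p_hi - p_lo, 1e-8)

x = percentile_normalize(np.arange(10000.0).reshape(100, 100))
```

Applying the same normalization at inference time as during training keeps the input statistics the model sees consistent, which is why the inferer exposes it as a parameter.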