The following is a list of encoders supported in CDP. Select the appropriate encoder family, click to expand its table, and then choose a specific encoder and its pre-trained weights (the `encoder_name` and `encoder_weights` parameters).
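As a quick illustration of where these two parameters end up, the sketch below instantiates the STANet model from this repository with a chosen encoder and weights. This is a minimal sketch, not taken from the docs: the encoder name "resnet18" and the direct module import path are assumptions for the example.

```python
# Minimal sketch (assumed example): plugging an encoder choice into a CDP model.
# "resnet18" is assumed to appear in the supported-encoder table; STANet is the model
# defined in change_detection_pytorch/stanet/model.py (see the diff further down).
from change_detection_pytorch.stanet.model import STANet

model = STANet(
    encoder_name="resnet18",     # a specific encoder from the table (assumed example)
    encoder_weights="imagenet",  # pre-trained weights; use None for random initialization
    in_channels=3,               # RGB input images
    classes=2,                   # e.g. change / no-change
)
```

Other CDP architectures are expected to accept the same two constructor parameters, since the encoder is resolved by name inside each model.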
change_detection_pytorch/stanet/model.py (40 additions & 3 deletions)
@@ -3,17 +3,42 @@
 from torch.nn import functional as F
 from ..encoders import get_encoder
 from .decoder import STANetDecoder
+from ..base import SegmentationHead


 class STANet(torch.nn.Module):
+    """
+    Args:
+        encoder_name: Name of the classification model that will be used as an encoder (a.k.a. backbone)
+            to extract features of different spatial resolution
+        encoder_weights: One of **None** (random initialization), **"imagenet"** (pre-training on ImageNet) and
+            other pretrained weights (see table with available weights for each encoder_name)
+        in_channels: A number of input channels for the model, default is 3 (RGB images)
+        classes: A number of classes for the output mask (or you can think of it as the number of channels of the output mask)
+        activation: An activation function to apply after the final convolution layer.
+            Available options are **"sigmoid"**, **"softmax"**, **"logsoftmax"**, **"tanh"**, **"identity"**, **callable** and **None**.
+            Default is **None**
+        return_distance_map: If True, return a distance map of shape (BatchSize, Height, Width) computed from the
+            feature maps of the images of the two periods. Default is False.
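To make the documented arguments concrete, here is a hedged usage sketch of the class this docstring describes. The constructor arguments follow the docstring above; the encoder name, the forward call taking the two period images, and the exact output shapes are assumptions, not confirmed by this diff.

```python
import torch

from change_detection_pytorch.stanet.model import STANet

# Constructor arguments mirror the docstring above; "resnet18" is an assumed encoder name.
model = STANet(
    encoder_name="resnet18",
    encoder_weights="imagenet",
    in_channels=3,
    classes=2,
    activation="sigmoid",
    return_distance_map=False,
)

# Bi-temporal inputs: two co-registered images of the same scene from different periods.
x1 = torch.randn(1, 3, 256, 256)
x2 = torch.randn(1, 3, 256, 256)

# Assumption: the forward pass takes both period images and returns a change mask of shape
# (BatchSize, classes, Height, Width); with return_distance_map=True it would instead return
# the (BatchSize, Height, Width) distance map described in the docstring.
model.eval()
with torch.no_grad():
    change_mask = model(x1, x2)
```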