Commit 82b88d7

Author: Divam Gupta
Commit message: readme updated
1 parent f9fc0f6 commit 82b88d7

File tree: 3 files changed (+12, -10 lines)


Models/FCN32.py

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
 
 # https://github.com/wkentaro/pytorch-fcn/blob/master/torchfcn/models/fcn32s.py
-# assert 0 == 1 # fc weights into the 1x1 convs , get_upsampling_weight
+# fc weights into the 1x1 convs , get_upsampling_weight
 
 
 
@@ -65,7 +65,7 @@ def FCN32( nClasses , input_height=416, input_width=608 , vgg_level=3):
 x = Dense( 1024 , activation='softmax', name='predictions')(x)
 
 vgg = Model( img_input , x )
-# vgg.load_weights(VGG_Weights_path)
+vgg.load_weights(VGG_Weights_path)
 
 o = f5
 

Models/FCN8.py

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
 
 # https://github.com/wkentaro/pytorch-fcn/blob/master/torchfcn/models/fcn32s.py
-# assert 0 == 1 # fc weights into the 1x1 convs , get_upsampling_weight
+# fc weights into the 1x1 convs , get_upsampling_weight
 
 
 
@@ -89,7 +89,7 @@ def FCN8( nClasses , input_height=416, input_width=608 , vgg_level=3):
 x = Dense( 1024 , activation='softmax', name='predictions')(x)
 
 vgg = Model( img_input , x )
-# vgg.load_weights(VGG_Weights_path)
+vgg.load_weights(VGG_Weights_path)
 
 o = f5
 
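In both Models/FCN32.py and Models/FCN8.py this commit activates the previously commented-out vgg.load_weights(VGG_Weights_path) call, so the VGG encoder starts from the pretrained ImageNet weights that the README (below) tells you to download. The following is a minimal sketch of that pattern, not the repository's code: it assumes a TensorFlow backend with channels-last ordering and uses keras.applications.VGG16 for brevity, whereas the repository builds the VGG layers by hand and loads the Theano-ordering weights file.

```python
# Hypothetical sketch, not the repository's implementation: start from a
# pretrained VGG-16 encoder and attach an FCN-32-style prediction head,
# mirroring the effect of enabling vgg.load_weights(VGG_Weights_path).
from keras.applications.vgg16 import VGG16
from keras.layers import Conv2D, Conv2DTranspose
from keras.models import Model

nClasses = 10                        # number of segmentation classes (assumed)
input_height, input_width = 416, 608

# Encoder: VGG-16 convolutional layers with ImageNet weights.
vgg = VGG16(weights='imagenet', include_top=False,
            input_shape=(input_height, input_width, 3))

# Roughly the "f5" feature map referenced in the diffs: the last pooling output.
f5 = vgg.get_layer('block5_pool').output

# FCN-32-style head: 1x1 class scores, then a single 32x learned upsampling.
o = Conv2D(nClasses, (1, 1), padding='same', activation='relu')(f5)
o = Conv2DTranspose(nClasses, kernel_size=(64, 64), strides=(32, 32),
                    padding='same', activation='softmax')(o)

fcn32 = Model(vgg.input, o)
fcn32.summary()
```

In the standard FCN architecture, an FCN-8-style decoder additionally fuses predictions from earlier pooling stages (block3_pool and block4_pool) before the final upsampling, which is the usual difference between the two model files.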

README.md

Lines changed: 8 additions & 6 deletions
@@ -18,7 +18,7 @@ Implememnation of various Deep Image Segmentation models in keras.
 * Keras 2.0
 * opencv for python
 
-```
+```shell
 sudo apt-get install python-opencv
 sudo pip install --upgrade tensorflow-gpu
 sudo pip install --upgrade keras
@@ -55,9 +55,9 @@ Only use bmp or png format for the annotation images.
 
 ## Visualizing the prepared data
 
-You can also visulize your prepared annotations for verification of the prepared data.
+You can also visualize your prepared annotations for verification of the prepared data.
 
-```
+```shell
 python visualizeDataset.py \
  --images="data/dataset1/images_prepped_train/" \
  --annotations="data/dataset1/annotations_prepped_train/" \
@@ -70,7 +70,7 @@ python visualizeDataset.py \
 
 You need to download the pretrained VGG-16 weights trained on imagenet if you want to use VGG based models
 
-```
+```shell
 mkdir data
 cd data
 wget "https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_th_dim_ordering_th_kernels.h5"
@@ -82,7 +82,7 @@ wget "https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vg
 
 To train the model run the following command:
 
-```
+```shell
 python train.py \
  --save_weights_path=weights/ex1 \
  --train_images="data/dataset1/images_prepped_train/" \
@@ -92,8 +92,10 @@ python train.py \
  --n_classes=10 \
  --input_height=800 \
  --input_width=550 \
- --model_name="vgg_segnet"
+ --model_name="vgg_segnet"
 ```
 
+Choose model_name from vgg_segnet vgg_unet, vgg_unet2, fcn8, fcn32
+
 
