Other Models
Currently, the following models are supported in addition to the default models: VGG16_SOD_finetune, fcn32s-heavy-pascal, nyud-fcn32s-color-heavy, channel_pruning, vgg16_places365, vgg16_hybrid1365, and VGG16-Stylized-ImageNet.
You can download any of these models, which I have converted from Caffe to PyTorch, here: https://drive.google.com/open?id=1OGKfoIehp2MiJL2Iq_8VMTy76L6waGC8. The commands below assume the downloaded .pth files are placed in the models/ directory.
Please note that the VGG16-Stylized-ImageNet model requires roughly a power-of-two increase to both the content and style weights in order to achieve good results.
The fcn32s-heavy-pascal and nyud-fcn32s-color-heavy models come from here: https://github.com/shelhamer/fcn.berkeleyvision.org
The vgg16_places365 and vgg16_hybrid1365 models come from here: https://github.com/CSAILVision/places365
The VGG16_SOD_finetune model comes from here: https://www.cs.bu.edu/groups/ivc/Subitizing/model/VGG16/
The channel_pruning model comes from here: https://github.com/yihui-he/channel-pruning/
The VGG16-Stylized-ImageNet model comes from: https://github.com/rgeirhos/texture-vs-shape, https://github.com/rgeirhos/Stylized-ImageNet
VGG-19 by VGG team
The standard VGG-19 caffemodel from the original neural-style, converted to PyTorch; this one is installed by the included model download script. It creates good results without tweaking, but uses a large amount of resources even with smaller images (see the note after the basic command below).
Model file: vgg19-d01eb7cb.pth
Usable Layers: relu1_1, relu1_2, relu2_1, relu2_2, relu3_1, relu3_2, relu3_3, relu3_4, relu4_1, relu4_2, relu4_3, relu4_4, relu5_1, relu5_2, relu5_3, relu5_4
Basic command:
python3 neural_style.py -style_image [image1] -content_image [image2] -output_image [outimage] -model_file models/vgg19-d01eb7cb.pth -content_layers relu1_1,relu2_1,relu3_1,relu4_1,relu5_1 -style_layers relu1_1,relu2_1,relu3_1,relu4_1,relu5_1
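If VGG-19 runs out of GPU memory, lowering -image_size or switching -optimizer to adam (both standard neural_style.py flags) reduces usage. The values below are only illustrative starting points, not tuned recommendations:
python3 neural_style.py -style_image [image1] -content_image [image2] -output_image [outimage] -model_file models/vgg19-d01eb7cb.pth -image_size 384 -optimizer adam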
Source: https://github.com/jcjohnson/neural-style
VGG-16 by VGG team
The standard VGG-16 caffemodel from the original neural-style, converted to PyTorch; this one is also installed by the included model download script.
Model file: vgg16-00b39a1b.pth
Usable Layers: relu1_1, relu1_2, relu2_1, relu2_2, relu3_1, relu3_2, relu3_3, relu4_1, relu4_2, relu4_3, relu5_1, relu5_2, relu5_3
Basic command:
python3 neural_style.py -style_image [image1] -content_image [image2] -output_image [outimage] -model_file models/vgg16-00b39a1b.pth -content_layers relu1_1,relu2_1,relu3_1,relu4_1,relu5_1 -style_layers relu1_1,relu2_1,relu3_1,relu4_1,relu5_1
Source: https://github.com/jcjohnson/neural-style
NIN
The standard NIN caffemodel from the original neural-style, converted to PyTorch; this one is also installed by the included model download script.
Model file: nin_imagenet.pth
Usable layers: relu0, relu1, relu2, relu3, relu5, relu6, relu7, relu8, relu9, relu10, relu11, relu12
Basic command:
python3 neural_style.py -style_image [image1] -content_image [image2] -output_image [outimage] -model_file models/nin_imagenet.pth -content_layers relu0,relu3,relu7,relu12 -style_layers relu0,relu3,relu7,relu12
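The -content_layers and -style_layers arguments accept any comma-separated subset of the usable layers listed above; NIN's layer names simply differ from the VGG naming scheme. A variation that leans on deeper layers, where the particular picks are only an illustration:
python3 neural_style.py -style_image [image1] -content_image [image2] -output_image [outimage] -model_file models/nin_imagenet.pth -content_layers relu7 -style_layers relu3,relu7,relu9,relu12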
CNN Object Proposal Models for Salient Object Detection (VGG16_SOD_finetune)
Similar to VGG-ILSVRC-16, but tends to create smoother, cleaner results. Same resource usage as VGG-16. Released in 2016.
Model file: VGG16_SOD_finetune.pth
Usable layers: relu1_1, relu1_2, relu2_1, relu2_2, relu3_1, relu3_2, relu3_3, relu4_1, relu4_2, relu4_3, relu5_1, relu5_2, relu5_3
Basic command:
python3 neural_style.py -style_image [image1] -content_image [image2] -output_image [outimage] -model_file models/VGG16_SOD_finetune.pth -content_layers relu1_1,relu2_1,relu3_1,relu4_1,relu5_1 -style_layers relu1_1,relu2_1,relu3_1,relu4_1,relu5_1
Source: https://www.cs.bu.edu/groups/ivc/Subitizing/model/VGG16/
VGG16-Stylized-ImageNet from "ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness"
A VGG-16 model trained on a stylized version of the standard ImageNet dataset.
Model file: VGG16-Stylized-ImageNet.pth
Usable Layers: relu1_1, relu1_2, relu2_1, relu2_2, relu3_1, relu3_2, relu3_3, relu4_1, relu4_2, relu4_3, relu5_1, relu5_2, relu5_3
Basic command:
python3 neural_style.py -style_image [image1] -content_image [image2] -output_image [outimage] -model_file models/VGG16-Stylized-ImageNet.pth -content_layers relu1_1,relu2_1,relu3_1,relu4_1,relu5_1 -style_layers relu1_1,relu2_1,relu3_1,relu4_1,relu5_1
Source: https://github.com/rgeirhos/texture-vs-shape, https://github.com/rgeirhos/Stylized-ImageNet
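Per the note at the top of this page, this model tends to need much larger weights than the defaults (-content_weight 5e0 and -style_weight 1e2 in neural_style.py). An illustrative starting point using an 8x (2^3) increase; the exact multiplier is something to experiment with:
python3 neural_style.py -style_image [image1] -content_image [image2] -output_image [outimage] -model_file models/VGG16-Stylized-ImageNet.pth -content_weight 40 -style_weight 800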
VGG-16 Places365 by MIT
Made for the Places365-Challenge, which includes the Places2 Challenge 2016, held in conjunction with the ILSVRC and COCO joint workshop at ECCV 2016. Places365 is the successor to the Places205 model.
Model file: vgg16_places365.pth
Usable layers: relu1_1, relu1_2, relu2_1, relu2_2, relu3_1, relu3_2, relu3_3, relu4_1, relu4_2, relu4_3, relu5_1, relu5_2, relu5_3
Basic command:
python3 neural_style.py -style_image [image1] -content_image [image2] -output_image [outimage] -model_file models/vgg16_places365.pth -content_layers relu1_1,relu2_1,relu3_1,relu4_1,relu5_1 -style_layers relu1_1,relu2_1,relu3_1,relu4_1,relu5_1
Source: https://github.com/CSAILVision/places365
VGG-16 Hybrid1365 by MIT
Like vgg16_places365, this model was made for the Places365-Challenge, but it was trained on both ImageNet and Places365 data (1,365 categories in total). Places365 is the successor to the Places205 model.
Model file: vgg16_hybrid1365.pth
Usable layers: relu1_1, relu1_2, relu2_1, relu2_2, relu3_1, relu3_2, relu3_3, relu4_1, relu4_2, relu4_3, relu5_1, relu5_2, relu5_3
Basic command:
python3 neural_style.py -style_image [image1] -content_image [image2] -output_image [outimage] -model_file models/vgg16_hybrid1365.pth -content_layers relu1_1,relu2_1,relu3_1,relu4_1,relu5_1 -style_layers relu1_1,relu2_1,relu3_1,relu4_1,relu5_1
Source: https://github.com/CSAILVision/places365
PASCAL VOC FCN-32s by University of California, Berkeley
Model file: fcn32s-heavy-pascal.pth
Usable layers: relu1_1, relu1_2, relu2_1, relu2_2, relu3_1, relu3_2, relu3_3, relu4_1, relu4_2, relu4_3, relu5_1, relu5_2, relu5_3
Basic command:
python3 neural_style.py -style_image [image1] -content_image [image2] -output_image [outimage] -model_file models/fcn32s-heavy-pascal.pth -content_layers relu1_1,relu2_1,relu3_1,relu4_1,relu5_1 -style_layers relu1_1,relu2_1,relu3_1,relu4_1,relu5_1
Source: https://github.com/shelhamer/fcn.berkeleyvision.org
NYUD FCN-32s Color by University of California, Berkeley
Model file: nyud-fcn32s-color-heavy.pth
Usable layers: relu1_1, relu1_2, relu2_1, relu2_2, relu3_1, relu3_2, relu3_3, relu4_1, relu4_2, relu4_3, relu5_1, relu5_2, relu5_3
Basic command:
python3 neural_style.py -style_image [image1] -content_image [image2] -output_image [outimage] -model_file models/nyud-fcn32s-color-heavy.pth -content_layers relu1_1,relu2_1,relu3_1,relu4_1,relu5_1 -style_layers relu1_1,relu2_1,relu3_1,relu4_1,relu5_1
Source: https://github.com/shelhamer/fcn.berkeleyvision.org
VGG-16 channel pruning from "Channel Pruning for Accelerating Very Deep Neural Networks" (ICCV'17)
A pruned version of the standard VGG-16 model. It uses less memory, and sits between NIN and VGG-16 in terms of quality; see the example after the source link below.
Model file: channel_pruning.pth
Usable layers: relu1_1, relu1_2, relu2_1, relu2_2, relu3_1, relu3_2, relu3_3, relu4_1, relu4_2, relu4_3, relu5_1, relu5_2, relu5_3
Basic command:
python3 neural_style.py -style_image [image1] -content_image [image2] -output_image [outimage] -model_file models/channel_pruning.pth -content_layers relu1_1,relu2_1,relu3_1,relu4_1,relu5_1 -style_layers relu1_1,relu2_1,relu3_1,relu4_1,relu5_1
Source: https://github.com/yihui-he/channel-pruning/
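Because channel_pruning uses less memory than the full VGG-16, it can often handle larger outputs. A sketch assuming your GPU permits it; 768 is only an illustrative -image_size value:
python3 neural_style.py -style_image [image1] -content_image [image2] -output_image [outimage] -model_file models/channel_pruning.pth -image_size 768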