
Bug in Encoding and Decoding Parameters #25

@EoinKenny

Description

Hi, I've been experimenting with this code and found an error.

If I use these parameters in the AttGAN class:

    def __init__(self, enc_dim=64, enc_layers=6, enc_norm_fn='batchnorm', enc_acti_fn='lrelu',
                 dec_dim=64, dec_layers=6, dec_norm_fn='batchnorm', dec_acti_fn='relu',
                 n_attrs=1, shortcut_layers=1, inject_layers=0, img_size=128):

I get the following error:

-----------------------------------------------
RuntimeError  Traceback (most recent call last)
<ipython-input-274-d001aaf018ea> in <module>
----> 1 netG(imgs, a).shape

~/Documents/University/Ph.D/Contrastive Explanations Experiments/Image/MNIST/Final Experiments/Substitutability Test/senv/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    720             result = self._slow_forward(*input, **kwargs)
    721         else:
--> 722             result = self.forward(*input, **kwargs)
    723         for hook in itertools.chain(
    724                 _global_forward_hooks.values(),

<ipython-input-267-7eb972155d92> in forward(self, x, a, mode)
     61         if mode == 'enc-dec':
     62             assert a is not None, 'No given attribute.'
---> 63             return self.decode(self.encode(x), a)
     64         if mode == 'enc':
     65             return self.encode(x)

<ipython-input-267-7eb972155d92> in decode(self, zs, a)
     47         z = torch.cat([zs[-1], a_tile], dim=1)
     48         for i, layer in enumerate(self.dec_layers):
---> 49             z = layer(z)
     50             if self.shortcut_layers > i:  # Concat 1024 with 512
     51                 print(z.shape, zs[len(self.dec_layers) - 2 - i].shape)

~/Documents/University/Ph.D/Contrastive Explanations Experiments/Image/MNIST/Final Experiments/Substitutability Test/senv/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    720             result = self._slow_forward(*input, **kwargs)
    721         else:
--> 722             result = self.forward(*input, **kwargs)
    723         for hook in itertools.chain(
    724                 _global_forward_hooks.values(),

<ipython-input-43-fa89d761303c> in forward(self, x)
    189 
    190         def forward(self, x):
--> 191                 return self.layers(x)
    192 
    193 

~/Documents/University/Ph.D/Contrastive Explanations Experiments/Image/MNIST/Final Experiments/Substitutability Test/senv/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    720             result = self._slow_forward(*input, **kwargs)
    721         else:
--> 722             result = self.forward(*input, **kwargs)
    723         for hook in itertools.chain(
    724                 _global_forward_hooks.values(),

~/Documents/University/Ph.D/Contrastive Explanations Experiments/Image/MNIST/Final Experiments/Substitutability Test/senv/lib/python3.7/site-packages/torch/nn/modules/container.py in forward(self, input)
    115     def forward(self, input):
    116         for module in self:
--> 117             input = module(input)
    118         return input
    119 

~/Documents/University/Ph.D/Contrastive Explanations Experiments/Image/MNIST/Final Experiments/Substitutability Test/senv/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    720             result = self._slow_forward(*input, **kwargs)
    721         else:
--> 722             result = self.forward(*input, **kwargs)
    723         for hook in itertools.chain(
    724                 _global_forward_hooks.values(),

~/Documents/University/Ph.D/Contrastive Explanations Experiments/Image/MNIST/Final Experiments/Substitutability Test/senv/lib/python3.7/site-packages/torch/nn/modules/conv.py in forward(self, input, output_size)
    905         return F.conv_transpose2d(
    906             input, self.weight, self.bias, self.stride, self.padding,
--> 907             output_padding, self.groups, self.dilation)
    908 
    909 

RuntimeError: Given transposed=1, weight of size [1536, 1024, 4, 4], expected input[32, 2048, 4, 4] to have 1536 channels, but got 2048 channels instead

Any idea how to fix this?

It seems that if you use shortcut layers, you cannot set enc_layers/dec_layers higher than 5.
Since I would like to train a version that encodes into a 1D vector, this is a significant problem for me.
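For what it's worth, here is a small sketch of the channel arithmetic that seems to produce the mismatch. This is my own reconstruction, not the repository's code; the `MAX_DIM = 1024` cap and the channel-doubling convention are assumptions inferred from the shapes in the error message.

```python
# Hypothetical sketch (not the repository's actual code) of the channel
# arithmetic behind the mismatch, assuming the usual AttGAN convention:
# encoder channels double at each layer, capped at MAX_DIM.
MAX_DIM = 1024

def enc_channels(enc_dim=64, enc_layers=6):
    # Output channel count of each encoder layer, with the cap applied.
    return [min(enc_dim * 2 ** i, MAX_DIM) for i in range(enc_layers)]

chans = enc_channels()
print(chans)  # [64, 128, 256, 512, 1024, 1024]

# If the decoder sizes its shortcut inputs by un-capped doubling, it
# expects 1024 + 512 = 1536 channels after the first concatenation...
expected = 1024 + 512
# ...but the capped encoder actually delivers 1024 + 1024 = 2048,
# matching "expected input ... to have 1536 channels, but got 2048".
actual = chans[-1] + chans[-2]
print(expected, actual)  # 1536 2048
```

With enc_layers=5 the channel list is [64, 128, 256, 512, 1024], so no two encoder outputs hit the cap and the concatenated sizes line up, which would explain why 5 layers works but 6 does not.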
