
in reconstruction_loss function, the divisor part should be Nmax+1 #4

@rardz

Description

This is in the class Model of sketch_rnn.py:

def reconstruction_loss(self, mask, dx, dy, p, epoch):
    pdf = self.bivariate_normal_pdf(dx, dy)
    LS = -torch.sum(mask*torch.log(1e-5+torch.sum(self.pi * pdf, 2)))\
        /float(Nmax*hp.batch_size)
    LP = -torch.sum(p*torch.log(self.q))/float(Nmax*hp.batch_size)
    return LS+LP
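
As a toy illustration of what this computes: LS is a masked average of log-likelihoods over every timestep in the batch, so its divisor should equal the number of timesteps the tensors actually contain. A minimal standalone sketch (made-up sizes, and loglik is a hypothetical stand-in for log(sum(pi * pdf))):

import torch

# hypothetical sizes, for illustration only
steps, batch_size = 5, 2
loglik = torch.randn(steps, batch_size)  # stand-in for log(1e-5 + sum(pi*pdf))
mask = torch.ones(steps, batch_size)

# the masked sum accumulates up to steps*batch_size terms, so a correct
# per-term average divides by steps*batch_size, not (steps-1)*batch_size
LS = -torch.sum(mask * loglik) / float(steps * batch_size)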

Each Nmax in the LS and LP lines should be (Nmax+1) instead, because in the train function of class Model, each sequence has an sos (start-of-sequence token) concatenated at the beginning:

# create start of sequence:
if use_cuda:
    sos = Variable(torch.stack([torch.Tensor([0,0,1,0,0])]\
        *hp.batch_size).cuda()).unsqueeze(0)
else:
    sos = Variable(torch.stack([torch.Tensor([0,0,1,0,0])]\
        *hp.batch_size)).unsqueeze(0)
# add sos at the beginning of the batch:
batch_init = torch.cat([sos, batch],0)
# expand z to be ready to concatenate with inputs:
z_stack = torch.stack([z]*(Nmax+1))
# inputs is the concatenation of batch_init and z_stack:
inputs = torch.cat([batch_init, z_stack],2)
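
So the tensors fed to the decoder, and hence the outputs the loss is summed over, cover Nmax+1 timesteps. A minimal sketch of the proposed fix (same body as the function quoted above, only the divisors changed):

def reconstruction_loss(self, mask, dx, dy, p, epoch):
    pdf = self.bivariate_normal_pdf(dx, dy)
    # divide by the Nmax+1 timesteps the decoder actually produces
    LS = -torch.sum(mask*torch.log(1e-5+torch.sum(self.pi * pdf, 2)))\
        /float((Nmax+1)*hp.batch_size)
    LP = -torch.sum(p*torch.log(self.q))/float((Nmax+1)*hp.batch_size)
    return LS+LP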
