The issue is in the reconstruction_loss method of class Model in sketch_rnn.py:
def reconstruction_loss(self, mask, dx, dy, p, epoch):
    pdf = self.bivariate_normal_pdf(dx, dy)
    LS = -torch.sum(mask*torch.log(1e-5+torch.sum(self.pi * pdf, 2)))\
        /float(Nmax*hp.batch_size)
    LP = -torch.sum(p*torch.log(self.q))/float(Nmax*hp.batch_size)
    return LS+LP
Each Nmax in the LS and LP lines should be (Nmax+1) instead, because in the train function of class Model an sos token is concatenated to the beginning of every sequence:
# create start of sequence:
if use_cuda:
    sos = Variable(torch.stack([torch.Tensor([0,0,1,0,0])]\
        *hp.batch_size).cuda()).unsqueeze(0)
else:
    sos = Variable(torch.stack([torch.Tensor([0,0,1,0,0])]\
        *hp.batch_size)).unsqueeze(0)
# add sos at the beginning of the batch:
batch_init = torch.cat([sos, batch],0)
# expand z to be ready to concatenate with inputs:
z_stack = torch.stack([z]*(Nmax+1))
# inputs is the concatenation of z and batch_inputs:
inputs = torch.cat([batch_init, z_stack],2)
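
To make the off-by-one concrete, here is a small standalone shape check (the sizes Nmax=5 and batch_size=4 are made up for illustration):

import torch

Nmax, batch_size = 5, 4                    # made-up sizes for illustration
# a batch in stroke-5 format: (seq_len, batch, [dx, dy, p1, p2, p3])
batch = torch.zeros(Nmax, batch_size, 5)
sos = torch.stack([torch.Tensor([0, 0, 1, 0, 0])] * batch_size).unsqueeze(0)
batch_init = torch.cat([sos, batch], 0)
print(batch_init.shape)                    # torch.Size([6, 4, 5]): Nmax+1 timesteps

So after the sos is prepended, the decoder sees Nmax+1 timesteps per sequence, and the sums in LS and LP run over Nmax+1 terms, not Nmax.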
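A minimal sketch of the proposed fix, changing only the normalization constant (Nmax and hp are the same module-level globals the original function already uses):

def reconstruction_loss(self, mask, dx, dy, p, epoch):
    pdf = self.bivariate_normal_pdf(dx, dy)
    # divide by (Nmax+1) timesteps to match the sos-extended sequences
    LS = -torch.sum(mask*torch.log(1e-5+torch.sum(self.pi * pdf, 2)))\
        /float((Nmax+1)*hp.batch_size)
    LP = -torch.sum(p*torch.log(self.q))/float((Nmax+1)*hp.batch_size)
    return LS+LP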