Description
Although `non_saturating_d_loss` and `non_saturating_gen_loss` are not the default choices in this codebase, there appears to be a mistake in the argument order in these two functions: `label` and `input` are passed in the wrong positions and should be swapped (`input` first, then `label`).
LlamaGen/tokenizer/tokenizer_image/vq_loss.py, lines 29 to 30 in ce98ec4:

```python
loss_real = torch.mean(F.binary_cross_entropy_with_logits(torch.ones_like(logits_real), logits_real))
loss_fake = torch.mean(F.binary_cross_entropy_with_logits(torch.zeros_like(logits_fake), logits_fake))
```

and, in `non_saturating_gen_loss`:

```python
return torch.mean(F.binary_cross_entropy_with_logits(torch.ones_like(logit_fake), logit_fake))
```
See also https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html#torch.nn.BCEWithLogitsLoss: the expected argument order is `(input, target)`, i.e. the logits come first and the labels second.
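For reference, here is a minimal sketch of what the corrected functions could look like with the arguments swapped. The function names and signatures follow the snippet above; how the discriminator combines `loss_real` and `loss_fake` is not shown in the snippet, so the `0.5 * (...)` averaging here is an assumption, not necessarily what the codebase does:

```python
import torch
import torch.nn.functional as F

def non_saturating_d_loss(logits_real, logits_fake):
    # F.binary_cross_entropy_with_logits(input, target): logits first, labels second
    loss_real = torch.mean(
        F.binary_cross_entropy_with_logits(logits_real, torch.ones_like(logits_real)))
    loss_fake = torch.mean(
        F.binary_cross_entropy_with_logits(logits_fake, torch.zeros_like(logits_fake)))
    # Assumed combination; the original return statement is not in the snippet
    return 0.5 * (loss_real + loss_fake)

def non_saturating_gen_loss(logit_fake):
    # Generator wants the discriminator to output "real" (label 1) on fakes
    return torch.mean(
        F.binary_cross_entropy_with_logits(logit_fake, torch.ones_like(logit_fake)))
```

With this order, a well-separated discriminator (large positive logits on reals, large negative on fakes) gets a loss near zero, which is a quick sanity check that the arguments are the right way around.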
WANGSSSSSSS