loss function  #152

@riyajatar37003

Description

I am trying to understand this loss function:

```python
def calc_loss(self, y_true, y_pred):
    """Compute the cosine loss within a batch using matrix operations."""
    # Embeddings arrive interleaved as [a0, b0, a1, b1, ...]; labels are
    # duplicated per pair, so keep one label per pair.
    y_true = y_true[::2]
    # L2-normalize the embeddings.
    norms = (y_pred ** 2).sum(axis=1, keepdims=True) ** 0.5
    y_pred = y_pred / norms
    # Cosine similarity of each pair (a_i, b_i), scaled by 20.
    y_pred = torch.sum(y_pred[::2] * y_pred[1::2], dim=1) * 20
    # Pairwise score differences s_i - s_j.
    y_pred = y_pred[:, None] - y_pred[None, :]
    # Only keep differences where label_i < label_j; mask the rest out
    # with a large negative constant so they vanish under logsumexp.
    y_true = y_true[:, None] < y_true[None, :]
    y_true = y_true.float()
    y_pred = y_pred - (1 - y_true) * 1e12
    y_pred = y_pred.view(-1)
    # Prepend 0 so the result equals log(1 + sum(exp(s_i - s_j))).
    y_pred = torch.cat((torch.tensor([0]).float().to(self.device), y_pred), dim=0)
    return torch.logsumexp(y_pred, dim=0)
```

  1. Why do we take every other value (`[::2]`) from the true labels?
  2. Why do we take the dot product between alternating rows of `y_pred` (`y_pred[::2] * y_pred[1::2]`)?

If possible, could you share a link to documentation or a paper for this? Thanks.
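A likely reading of the striding, sketched in plain Python with made-up toy numbers (the batch layout `[a0, b0, a1, b1, ...]` and the duplicated labels are assumptions inferred from the `[::2]`/`[1::2]` indexing, not confirmed by the repository): each consecutive pair of rows is one sentence pair, so `y_pred[::2] * y_pred[1::2]` computes the per-pair cosine score, and the loss then penalizes any pair with a lower label scoring above a pair with a higher label.

```python
import math

# Hypothetical toy batch of two sentence pairs, embeddings interleaved as
# [pair0_sent_a, pair0_sent_b, pair1_sent_a, pair1_sent_b].
# Labels are duplicated per pair, so [::2] keeps one label per pair.
y_true = [1, 1, 0, 0]          # pair 0 is similar, pair 1 is not
pair_labels = y_true[::2]      # one label per pair: [1, 0]

# Per-pair cosine similarities (made-up values), scaled by 20 as in the loss.
scores = [0.9 * 20, 0.2 * 20]  # scores[i] = 20 * cos(a_i, b_i)

# For every (i, j) with label_i < label_j, penalize score_i being close to
# or above score_j; this mirrors the masked pairwise differences.
terms = [math.exp(scores[i] - scores[j])
         for i in range(len(scores))
         for j in range(len(scores))
         if pair_labels[i] < pair_labels[j]]

# Prepending 0 before logsumexp makes the loss log(1 + sum of exp terms),
# which is 0 when every higher-labeled pair already outscores the lower ones.
loss = math.log(1.0 + sum(terms))
```

Here only the `(i=1, j=0)` combination survives the mask (label 0 < label 1), and because pair 0 already outscores pair 1 by a wide margin, the loss is close to zero.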
