Loss differs slightly from paper #8

@poier

Description

I just saw that the final loss in the implementation differs from what is described in the paper (Eqn. (5)). In the implementation (https://github.com/JizhiziLi/P3M/blob/master/core/train.py#L85), loss_fusion_alpha appears in two terms, giving it a total weight of 3 instead of the weight of 2 described in the paper. Dropping the standalone loss_fusion_alpha term makes the code equivalent to the paper, i.e., change it to:

loss = loss_global/6+loss_local*2+loss_fusion_alpha*2+loss_fusion_comp

instead of:

loss = loss_global/6+loss_local*2+loss_fusion_alpha*2+loss_fusion_alpha+loss_fusion_comp
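To illustrate, here is a minimal sketch using dummy scalar values in place of the actual loss tensors (the variable names are taken from the snippet above; the values are purely illustrative, not from the repo). It shows that the two expressions differ by exactly one extra loss_fusion_alpha term:

```python
# Dummy stand-ins for the loss tensors (illustrative values only).
loss_global = 0.6
loss_local = 0.3
loss_fusion_alpha = 0.2
loss_fusion_comp = 0.1

# Loss as currently implemented in train.py (loss_fusion_alpha appears twice).
loss_impl = loss_global/6 + loss_local*2 + loss_fusion_alpha*2 + loss_fusion_alpha + loss_fusion_comp

# Loss as written in Eqn. (5) of the paper (loss_fusion_alpha weighted by 2).
loss_paper = loss_global/6 + loss_local*2 + loss_fusion_alpha*2 + loss_fusion_comp

# The difference is exactly one loss_fusion_alpha term,
# i.e. the implementation effectively weights it 3x instead of 2x.
assert abs((loss_impl - loss_paper) - loss_fusion_alpha) < 1e-12
```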
