
The retrained pretrained model has worse performance when fine-tuned #430

@SkytreeRom


Hi all,

The SimAMResNet34 model that I pretrained from scratch gives normal EER results, matching the open-source model.
But when I then use this model for LM fine-tuning, the results become worse.
However, if I start LM fine-tuning from the open-source SimAMResNet34 pretrained model, the results are normal and basically the same as the open-source fine-tuned model.
Is there some step I am missing after completing pretraining? (I confirmed that the checkpoint is loaded correctly for LM fine-tuning; a quick sanity check is sketched below.)
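Since the only difference between the two runs is the checkpoint being loaded, one thing worth ruling out is a silent key mismatch at load time. Below is a minimal PyTorch sketch (the helper name, path handling, and prefix handling are assumptions, not the project's exact API) that fails loudly if any parameter name in the pretrained checkpoint is missing or unexpected instead of being silently skipped:

```python
import torch

def check_checkpoint(model: torch.nn.Module, ckpt_path: str) -> None:
    """Fail loudly if the pretrained checkpoint does not match the model.

    Hypothetical helper for sanity-checking before LM fine-tuning; it assumes
    a plain PyTorch state_dict checkpoint.
    """
    state_dict = torch.load(ckpt_path, map_location="cpu")
    # Some training scripts nest the weights under a "model" key or prefix
    # them with "module." (DistributedDataParallel); normalize before loading.
    if isinstance(state_dict, dict) and isinstance(state_dict.get("model"), dict):
        state_dict = state_dict["model"]
    state_dict = {
        (k[len("module."):] if k.startswith("module.") else k): v
        for k, v in state_dict.items()
    }
    missing, unexpected = model.load_state_dict(state_dict, strict=False)
    if missing or unexpected:
        raise RuntimeError(
            f"Checkpoint mismatch.\nMissing keys: {missing}\nUnexpected keys: {unexpected}"
        )
```

If this raises for your own checkpoint but not for the open-source one, the fine-tuning run was starting from partially random weights, which would explain the degraded results.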

Thanks
