
Questions About Chosen Data Strategies #29

@nantenT

Description


Hi, amazing work, and thank you for making it open source!

  1. After reviewing your code, I noticed that multiple preference strategies are included for selecting DPO preference pairs. Have you compared these strategies, and if so, which one tends to perform better?

  2. When incorporating the chosen preference data into the original model via SFT: if the original model's output distribution is completely inconsistent with the chosen data and of lower quality, would you recommend using the OOD chosen responses paired with model-generated responses as preference pairs for training, or only preference pairs generated by the original model? (A minimal sketch of the two options follows below.)
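
For concreteness, here is a minimal sketch (not from this repository) of the two pair-construction options in question 2. `generate`, `quality_score`, and `ood_chosen` are hypothetical placeholders for the model's sampling function, whatever quality signal is available (reward model, heuristic, or human labels), and the external chosen dataset, respectively.

```python
# Sketch of the two ways to build DPO preference pairs discussed above.
# All helper names are placeholders, not identifiers from this repo.

from typing import Callable, Dict, List

def build_pairs_ood_chosen(
    prompts: List[str],
    ood_chosen: Dict[str, str],                  # high-quality chosen responses from an external (OOD) dataset
    generate: Callable[[str], str],              # sample one response from the original model
) -> List[Dict[str, str]]:
    """Option A: chosen = OOD data, rejected = the original model's own output."""
    return [
        {"prompt": p, "chosen": ood_chosen[p], "rejected": generate(p)}
        for p in prompts
        if p in ood_chosen
    ]

def build_pairs_on_policy(
    prompts: List[str],
    generate: Callable[[str], str],
    quality_score: Callable[[str, str], float],  # rank responses, e.g. with a reward model
    n_samples: int = 4,
) -> List[Dict[str, str]]:
    """Option B: both sides of each pair come from the original model (best vs. worst of N samples)."""
    pairs = []
    for p in prompts:
        samples = sorted(
            (generate(p) for _ in range(n_samples)),
            key=lambda r: quality_score(p, r),
        )
        pairs.append({"prompt": p, "chosen": samples[-1], "rejected": samples[0]})
    return pairs
```

The key difference is that Option A puts off-distribution responses on the chosen side while the rejected side stays on-policy, whereas Option B keeps both sides on-policy and relies on a scoring signal to separate them.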

Thanks in advance for your insights!
