Thank you for your excellent work! I have recently been using the pretraining script and checkpoint you provided to continue pretraining on my own dataset, with vector retrieval as the goal. I have a few questions:
- I noticed that the text embeddings perform somewhat worse than other LLM-based baselines, although inference is faster. Do you think replacing the text model (which would also mean discarding the checkpoint) would be beneficial? Would the code changes required for the replacement be substantial?
- In the pretraining script, I have currently disabled the ITG loss. For my task, do you think the ITG loss is necessary? If I add some images paired with captions generated by other large models (e.g., GPT-4o) and train on them with the ITG loss, could that also benefit vector retrieval? Or perhaps I should include the ITG loss early in training and disable it later?
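To make the last question concrete, here is a minimal sketch of the stage-wise schedule I have in mind: the ITG (image-grounded text generation) loss is active for an early fraction of training and then zeroed out, while the retrieval-oriented losses stay on throughout. The loss names (`itc`, `itm`, `itg`) and the warmup fraction are my own assumptions for illustration, not names from your codebase.

```python
# Hypothetical stage-wise loss schedule (not from the repo):
# keep ITC/ITM on for all of training, enable ITG only early on.

def loss_weights(step: int, total_steps: int, itg_frac: float = 0.3) -> dict:
    """Per-loss weights; ITG is active only for the first `itg_frac` of steps."""
    itg_w = 1.0 if step < itg_frac * total_steps else 0.0
    return {"itc": 1.0, "itm": 1.0, "itg": itg_w}

def total_loss(losses: dict, step: int, total_steps: int) -> float:
    """Weighted sum of the individual loss values for the current step."""
    w = loss_weights(step, total_steps)
    return sum(w[name] * value for name, value in losses.items())

# Early in training the ITG term contributes; later it is dropped.
early = total_loss({"itc": 0.5, "itm": 0.4, "itg": 0.9}, step=100, total_steps=1000)
late = total_loss({"itc": 0.5, "itm": 0.4, "itg": 0.9}, step=900, total_steps=1000)
# early -> 1.8, late -> 0.9
```

In a real training loop the same idea could be applied by multiplying the ITG loss tensor by the scheduled weight before backpropagation, so the ITG head still exists but stops receiving gradient signal after the warmup phase.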