Thanks for your great work.
I would like to know how you trained the models in Table 2 other than your own LLaVA-1.5 model (LLaVA-1.5 (Ours)).
For example, did you also re-train LLaVA-1.5 with the Qwen2-7B large language model (LLM)? If so, what data did you use for the pretraining and instruction-tuning stages?
Another example:
For the Money model with the Qwen-7B LLM, where do the results in Table 2 come from? Did you re-train the Money model?