
Questions about Table 2 in the paper #6

@weiaicunzai

Description


Thanks for your great work.
I would like to know how you trained the models in Table 2 other than your own LLaVA-1.5 model (LLaVA-1.5 (Ours)).

For example, did you also re-train the LLaVA-1.5 model with the Qwen2-7B large language model (LLM)? If so, what data did you use for the pretraining stage and the instruction-tuning stage?


And another example:
For the Monkey model with the Qwen-7B LLM, where do the results in Table 2 come from? Did you re-train the Monkey model?
