Hello.
Thank you for the great work on Tag2Text.
I'm trying to reproduce the captioning performance on the NoCaps validation set (Table 3) as reported in the Tag2Text paper (https://arxiv.org/pdf/2303.05657), but I'm running into some issues.
Could you please clarify the following points?
- During inference for NoCaps captioning, did you extract tags from the captions of the NoCaps validation set and use them as user-specified tags? (I ask because the paper states, "For captioning vision-language models, we parse the caption and classify them into synonyms to obtain image tags." A rough sketch of what I understand by this is given after this list.) Or did you simply use image-only inputs, without user-specified tags?
- If you used pre-extracted tags, would it be possible to share the tag file?
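For reference, here is a minimal sketch of how I currently interpret "parse the caption and classify them into synonyms to obtain image tags". Everything in it is an assumption on my part, not your actual pipeline: the synonym map, the tag vocabulary (a tiny stand-in; the full Tag2Text vocabulary would be used in practice), and the word-matching logic are all illustrative placeholders.

```python
# Hypothetical sketch (my assumption, not the authors' code): extract
# user-specified tags from a reference caption by matching its words
# against a tag vocabulary through a synonym map.
import re

# Assumed synonym map: surface form -> canonical tag (illustrative only).
SYNONYMS = {
    "puppy": "dog",
    "kitten": "cat",
    "automobile": "car",
}

# Tiny stand-in for the tag vocabulary (illustrative only).
TAG_VOCAB = {"dog", "cat", "car", "person", "tree"}

def caption_to_tags(caption: str) -> list[str]:
    """Map caption words to canonical tags and keep those in the vocabulary."""
    words = re.findall(r"[a-z]+", caption.lower())
    canonical = {SYNONYMS.get(w, w) for w in words}
    return sorted(canonical & TAG_VOCAB)

if __name__ == "__main__":
    # Illustrative caption, not taken from the actual NoCaps data.
    print(caption_to_tags("A puppy sits next to a red automobile."))
    # -> ['car', 'dog']
```

Is this roughly how the tags were obtained for the NoCaps evaluation, or does the actual procedure differ (e.g., in the parser or the synonym classification)?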
This information would be extremely helpful for accurate reproduction of your reported results.
Thank you in advance.
Best regards,