Training adapter models in a multimodal setting #635
loukasilias started this conversation in General

Hello,

I have a model that takes text and images as inputs: I pass the text through BERT and the images through a Vision Transformer, then train the multimodal model end-to-end. Is it possible to add an adapter to BERT and train the multimodal model in the same way, i.e. without changing the training code?

Thank you,
Loukas

Replies: 1 comment
Hi @loukasilias,

This depends on your training script, but in theory you can add adapters to your BERT model easily, with very few changes to the training code (see our docs). You would need to add an adapter to the BERT model and then activate it for training, which freezes the pre-trained weights so that only the adapter is updated; see the sketch below.
Hope this helps!