Is it possible to load a haystack-generated model in transformers #5517
Unanswered
Avs-safety asked this question in Questions
Replies: 1 comment 1 reply
-
There already exists a convert_to_transformers() method for this. It converts the reader's underlying model into a transformers model, which you can then save together with the tokenizer:

```python
transformer_model = reader.inferencer.model.convert_to_transformers()[0]
reader.inferencer.processor.tokenizer.save_pretrained(your_save_path)
transformer_model.save_pretrained(your_save_path)
```
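Once exported this way, the directory should be loadable with the standard transformers APIs. A minimal sketch, assuming the export above succeeded, that the prediction head is extractive QA, and that your_save_path points at the saved directory (the sample question and context are illustrative):

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

# Load the exported model and tokenizer from the save directory.
tokenizer = AutoTokenizer.from_pretrained(your_save_path)
model = AutoModelForQuestionAnswering.from_pretrained(your_save_path)

# Build a standard extractive QA pipeline on top of them.
qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
print(qa(question="Who wrote Hamlet?",
         context="Hamlet is a tragedy written by William Shakespeare."))
```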
-
Hi,
I've fine-tuned a BERT model on my own SQuAD-like data using Haystack, which produces several files (e.g. language_model.bin, prediction_head_0.bin, etc.). Does anyone know if there is a way to load the model via a transformers pipeline? It doesn't work when I try unless I rename some of the files (e.g. language_model.bin to model.bin), and I assume the renamed model then ignores the prediction head.
For example, loading via a call along the lines of the sketch below.
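(The original snippet did not survive the page, so this is only a hypothetical reconstruction of the failing attempt; model_dir is a placeholder for the directory containing the Haystack output files.)

```python
from transformers import pipeline

# Hypothetical failing call: pointing the pipeline at the raw Haystack
# output directory, whose file layout transformers does not recognize.
qa = pipeline("question-answering", model="model_dir", tokenizer="model_dir")
```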
Appreciate any help!