KeyError: "Seq2SeqGenerator doesn't have input converter registered for google/t5-v1_1-base. Provide custom converter for google/t5-v1_1-base in Seq2SeqGenerator initialization" #4474
-
Could someone kindly show me how to pass the `input_converter` argument when initializing the `Seq2SeqGenerator`? I have tried referring to the docs, but it is not clear to me. Here is the code I am using:

```python
from haystack.utils.preprocessing import Callable

generator = Seq2SeqGenerator(model_name_or_path="google/t5-v1_1-base", input_converter=)

# Retrieve related documents from retriever
retrieved_docs = retriever.retrieve(query=query)

# Now generate answer from query and retrieved documents
answer = generator.predict(
```

I am trying to use one of the Hugging Face models as an answer generator.
-
Hello @Chance-Obondo!

As explained in the `Seq2SeqGenerator` API reference, you should provide an `input_converter`: a Callable that prepares the model input for the underlying language model. For an example, see `haystack/haystack/nodes/answer_generator/transformers.py`, line 487 in 3d3e9c9. (I recognize that the docs are not very clear about this point.)

`google/t5-v1_1-base` is not fine-tuned on downstream tasks, so you will probably get garbage out. You should try models that are fine-tuned on downstream tasks, such as `valhalla/t5-base-squad` (only an example, probably not the best model).

Example
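A minimal converter could look something like the sketch below. It assumes the converter is called as `(tokenizer, query, documents, top_k)` and must return tokenized tensors that the model's `generate()` can consume (mirroring the built-in ELI5/BART converter); the `T5QAConverter` name and the `question: ... context: ...` prompt format are illustrative choices, not part of Haystack's API.

```python
from typing import List, Optional

from haystack.nodes import Seq2SeqGenerator
from haystack.schema import Document


class T5QAConverter:
    """Illustrative input converter (hypothetical, not shipped with Haystack):
    builds a single "question: ... context: ..." prompt from the query and the
    retrieved documents, then tokenizes it for the underlying seq2seq model."""

    def __call__(self, tokenizer, query: str, documents: List[Document], top_k: Optional[int] = None):
        # Join the retrieved documents into one context string
        context = " ".join(doc.content for doc in documents)
        prompt = f"question: {query} context: {context}"
        # Return tensors that the model's generate() can consume
        return tokenizer([prompt], truncation=True, padding=True, return_tensors="pt")


generator = Seq2SeqGenerator(
    model_name_or_path="valhalla/t5-base-squad",  # a model fine-tuned for QA, as suggested above
    input_converter=T5QAConverter(),
)

# `retriever` and `query` are assumed to exist, as in the question above
retrieved_docs = retriever.retrieve(query=query)
result = generator.predict(query=query, documents=retrieved_docs, top_k=1)
print(result["answers"][0].answer)
```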
-
Thank you @anakin87! We'll deprecate this class, as everyone should move to PromptNode by now.
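For reference, a minimal PromptNode setup could look like the sketch below; the model name and prompt text are placeholders, and exact constructor arguments may differ between Haystack 1.x versions.

```python
from haystack.nodes import PromptNode

# Any instruction-tuned seq2seq model works here; flan-t5 is just a placeholder choice
prompt_node = PromptNode(model_name_or_path="google/flan-t5-base", max_length=100)

# PromptNode can be called directly with a free-form prompt
result = prompt_node("Answer the question. Question: What is the capital of Kenya? Answer:")
print(result)
```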