Hello,
I am using vLLM to run Llama models in a RAG pipeline, but I keep hitting a Runnable error. This is my vLLM model initialization:
```python
from vllm import LLM

llm_vllm = LLM(
    model="Llama-2-7b-chat-hf",
    device="cuda",
)
```
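Calling this object directly through vLLM's own API should work; the failure below only appears when composing it into a LangChain chain. A minimal sketch of direct use, reusing `llm_vllm` from above (the prompt text and sampling values are just illustrative):

```python
from vllm import SamplingParams

# Direct offline generation through vLLM's own API, no LangChain involved.
params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm_vllm.generate(["What is retrieval-augmented generation?"], params)
print(outputs[0].outputs[0].text)
```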
When I try to create a chain with:

```python
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

chain = (
    {"context": retriever, "question": RunnablePassthrough()} | prompt | llm_vllm | StrOutputParser()
)
response = chain.invoke(user_question)
```

I get the following error:

```
TypeError: Expected a Runnable, callable or dict. Instead got an unsupported type: <class 'vllm.entrypoints.llm.LLM'>
```
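The root cause is that `vllm.entrypoints.llm.LLM` does not implement LangChain's `Runnable` interface, so LCEL's `|` composition rejects it. LangChain ships its own vLLM wrapper in `langchain-community`, which is a `Runnable` and drops straight into the chain. A minimal sketch, assuming `langchain-community` is installed and reusing `retriever`, `prompt`, and `user_question` from above; the sampling parameters are illustrative:

```python
from langchain_community.llms import VLLM
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

# This wrapper builds the vLLM engine internally and exposes a LangChain Runnable.
llm_vllm = VLLM(
    model="Llama-2-7b-chat-hf",
    max_new_tokens=256,
    temperature=0.7,
)

chain = (
    {"context": retriever, "question": RunnablePassthrough()} | prompt | llm_vllm | StrOutputParser()
)
response = chain.invoke(user_question)
```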
Similarly, if I use:

```python
from langchain.chains.question_answering import load_qa_chain

chain = load_qa_chain(llm_vllm, chain_type="stuff")
```

I get an error: llm instance of Runnable expected.
Is there any solution for that?
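One workaround, if you want to keep the raw `vllm.LLM` object rather than switch to the `langchain-community` wrapper, is to wrap the generate call in a `RunnableLambda`, which LCEL accepts anywhere a `Runnable` is expected. A minimal sketch; the conversion from LangChain's `PromptValue` to plain text is an assumption about how the chain feeds the model:

```python
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser
from vllm import SamplingParams

def vllm_generate(prompt_value) -> str:
    # The prompt template emits a PromptValue; vLLM wants a plain string.
    text = prompt_value.to_string() if hasattr(prompt_value, "to_string") else str(prompt_value)
    outputs = llm_vllm.generate([text], SamplingParams(max_tokens=256))
    return outputs[0].outputs[0].text

chain = (
    {"context": retriever, "question": RunnablePassthrough()} | prompt | RunnableLambda(vllm_generate) | StrOutputParser()
)
response = chain.invoke(user_question)
```

For `load_qa_chain`, which validates that `llm` is a LangChain language model, the `langchain_community.llms.VLLM` wrapper above is the more direct fit.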
Replies: 1 comment
Duplicate of this issue: #5572