"raise NotImplementedError()" while using "with_structured_output" - LLM- "Ollama- llama2"[ "working with Self Rag"] #22195
-
To extend your local Ollama model to use the `with_structured_output` method, you can define a Pydantic model describing the expected output and pass it to `with_structured_output`.
Here is an example using your provided code:

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_community.llms import Ollama


# Data model
class GradeDocuments(BaseModel):
    """Binary score for relevance check on retrieved documents."""

    binary_score: str = Field(description="Documents are relevant to the question, 'yes' or 'no'")


# LLM with function call
llm = Ollama(model="gemma:2b")
structured_llm_grader = llm.with_structured_output(GradeDocuments)

# Prompt
system = """You are a grader assessing relevance of a retrieved document to a user question. \n
It does not need to be a stringent test. The goal is to filter out erroneous retrievals. \n
If the document contains keyword(s) or semantic meaning related to the user question, grade it as relevant. \n
Give a binary score 'yes' or 'no' to indicate whether the document is relevant to the question."""
grade_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        ("human", "Retrieved document: \n\n {document} \n\n User question: {question}"),
    ]
)

retrieval_grader = grade_prompt | structured_llm_grader
question = "agent memory"
docs = retriever.get_relevant_documents(question)
doc_txt = docs[1].page_content
print(retrieval_grader.invoke({"question": question, "document": doc_txt}))
```

This should help you avoid the `NotImplementedError`.
-
@dosu even the code you provided is throwing the same error...
-
Same here, I get `raise NotImplementedError()`. I also tested the example here: https://python.langchain.com/v0.1/docs/modules/model_io/chat/structured_output/
Edit:
-
You can use
-
I was able to get your code to work by switching the import to `ChatOllama` from the `langchain_ollama` package. For example:

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_ollama import ChatOllama
from langchain_core.prompts import ChatPromptTemplate


# Data model
class GradeDocuments(BaseModel):
    """Binary score for relevance check on retrieved documents."""

    binary_score: str = Field(description="Documents are relevant to the question, 'yes' or 'no'")


# LLM with function call
llm = ChatOllama(model="mistral-nemo:12b", temperature=0)
structured_llm_grader = llm.with_structured_output(GradeDocuments)

# Prompt
system = """You are a grader assessing relevance of a retrieved document to a user question. \n
It does not need to be a stringent test. The goal is to filter out erroneous retrievals. \n
If the document contains keyword(s) or semantic meaning related to the user question, grade it as relevant. \n
Give a binary score 'yes' or 'no' to indicate whether the document is relevant to the question."""
grade_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        ("human", "Retrieved document: \n\n {document} \n\n User question: {question}"),
    ]
)

retrieval_grader = grade_prompt | structured_llm_grader
response = retrieval_grader.invoke({"question": "What is the answer to life, the universe, and everything?", "document": "The answer to life is 42."})
print(type(response))
print(response)
```
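As a follow-up note: `ChatOllama` from the `langchain-ollama` partner package implements `with_structured_output`, unlike the base `Ollama` LLM wrapper, which is why this variant works. If your local model still struggles to produce structured output, one possible fallback (my own sketch, not from this thread; the prompt wording and model name are assumptions) is to ask for JSON explicitly and parse it with `PydanticOutputParser`, which works with any chat model:

```python
# Fallback sketch (assumption, not from this thread): request JSON from the model
# and parse it with PydanticOutputParser instead of relying on with_structured_output.
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_ollama import ChatOllama


class GradeDocuments(BaseModel):
    """Binary score for relevance check on retrieved documents."""

    binary_score: str = Field(description="Documents are relevant to the question, 'yes' or 'no'")


parser = PydanticOutputParser(pydantic_object=GradeDocuments)

# Inject the parser's format instructions into the system prompt so the model
# knows to answer with JSON matching the schema.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a grader assessing document relevance. {format_instructions}"),
        ("human", "Retrieved document: \n\n {document} \n\n User question: {question}"),
    ]
).partial(format_instructions=parser.get_format_instructions())

llm = ChatOllama(model="llama2", temperature=0)  # model name is just an example
retrieval_grader = prompt | llm | parser

print(retrieval_grader.invoke({"question": "agent memory", "document": "Agents can use memory."}))
```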
-
Description
I am experimenting with Self-RAG and encounter a "NotImplementedError". I am using a local Ollama model as the LLM. How do I extend my LLM to use "with_structured_output", since it is only implemented for a few models?
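For context, here is a minimal sketch of the failing call (my reconstruction, not the original example code; the model name is an assumption). The default `with_structured_output` on the base language model class simply raises `NotImplementedError`, so the plain `Ollama` LLM wrapper fails here:

```python
# Minimal reconstruction of the failure (assumed model name; not the OP's exact code).
from langchain_community.llms import Ollama
from langchain_core.pydantic_v1 import BaseModel, Field


class GradeDocuments(BaseModel):
    """Binary score for relevance check on retrieved documents."""

    binary_score: str = Field(description="'yes' or 'no'")


llm = Ollama(model="llama2")
# The base LLM class does not override with_structured_output,
# so this call raises NotImplementedError().
structured_llm_grader = llm.with_structured_output(GradeDocuments)
```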