Example Code

```python
from enum import Enum
import ast
import json
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_community.llms import Ollama

model = Ollama(base_url='http://155.198.89.235:11434', model="llama3")

class Prediction(Enum):
    increase = "Increase"
    decrease = "Decrease"
    nochange = "No Change"

class Article(BaseModel):
    prediction: Prediction = Field(description="What can be inferred about natural gas prices from the article? increase, decrease or no change? Do not add square brackets.")
    year: str = Field(description="An estimate of when these changes will happen.")

# NGID, title, subtitle, body and date are defined elsewhere in the script
year = NGID['_id'][:4]
query = 'The article headline is "' + title + '". The article subtitle is "' + subtitle + '". The body of the article is as follows: "' + body + '". The article was published on ' + date + '.'

parser = JsonOutputParser(pydantic_object=Article)
prompt = PromptTemplate(
    template="Answer the user query in python dictionary format only.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)
chain = prompt | model | parser
dic = chain.invoke({"query": query})
```

Description

I'm trying to use the JSON output parser to give me a prediction and a year, but it sometimes returns outputs that contain extra information or are simply not in dictionary format, e.g. the error below:

langchain_core.exceptions.OutputParserException: Invalid json output: Here is the answer in Python dictionary format:
Is there a way to use the output-fixing parser or the retry parser with the JSON parser?
To handle issues with extra information or incorrect dictionary formats in the output of the `JsonOutputParser`, you can wrap it with an `OutputFixingParser` or a `RetryOutputParser`. Here is an example code snippet demonstrating this:

```python
from enum import Enum
from langchain_core.output_parsers import JsonOutputParser
from langchain.output_parsers.fix import OutputFixingParser
from langchain.output_parsers.retry import RetryOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_community.llms import Ollama

model = Ollama(base_url='http://155.198.89.235:11434', model="llama3")

class Prediction(Enum):
    increase = "Increase"
    decrease = "Decrease"
    nochange = "No Change"

class Article(BaseModel):
    prediction: Prediction = Field(description="What can be inferred about natural gas prices from the article? increase, decrease or no change? Do not add square brackets.")
    year: str = Field(description="An estimate of when these changes will happen.")

# NGID, title, subtitle, body and date come from your own data-loading code
year = NGID['_id'][:4]
query = 'The article headline is "' + title + '". The article subtitle is "' + subtitle + '". The body of the article is as follows: "' + body + '". The article was published on ' + date + '.'

# Step 1: Create an instance of JsonOutputParser
json_parser = JsonOutputParser(pydantic_object=Article)

# Step 2: Wrap it in an OutputFixingParser using the from_llm method
output_fixing_parser = OutputFixingParser.from_llm(
    llm=model,
    parser=json_parser,
    max_retries=3  # You can set the maximum number of retries
)

# Alternatively, create a RetryOutputParser using the from_llm method.
# Note that RetryOutputParser cannot simply be piped at the end of a chain:
# its parse() method raises NotImplementedError, and it must be called via
# parse_with_prompt(completion, prompt_value) instead.
retry_parser = RetryOutputParser.from_llm(
    llm=model,
    parser=json_parser,
    max_retries=3
)

prompt = PromptTemplate(
    template="Answer the user query in python dictionary format only.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": json_parser.get_format_instructions()},
)

chain = prompt | model | output_fixing_parser
dic = chain.invoke({"query": query})
print(dic)
```
This setup will help handle issues with extra information or incorrect dictionary formats in the output by retrying the parsing process using the language model [1][2][3][4].
This is not an issue with the `JsonOutputParser` itself, but with the LLM: the model is wrapping the JSON in extra prose. You can filter the JSON out of the output, and you may also get better results by forcing the model to use JSON mode.
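One minimal way to "filter out the output" is to pull the first JSON object out of whatever text the model returns before handing it to the parser. This is a sketch, not part of the original reply: the `extract_json` helper and its regex-based extraction are assumptions, and it only handles a single top-level object with no stray braces in string values. (With the `Ollama` wrapper, JSON mode can be requested by passing `format="json"` when constructing the model.)

```python
import json
import re

def extract_json(text: str) -> dict:
    """Pull the first {...} block out of an LLM reply that may contain extra prose."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        raise ValueError("No JSON object found in model output")
    return json.loads(match.group(0))

# Example: the kind of failing output shown in the question
reply = 'Here is the answer in Python dictionary format:\n{"prediction": "Increase", "year": "2025"}'
print(extract_json(reply))  # → {'prediction': 'Increase', 'year': '2025'}
```

A helper like this can be dropped into the chain as a `RunnableLambda` between the model and the parser, so the parser only ever sees the JSON fragment.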
To make the check for the keys `pricechange` and `year` case-insensitive in the `CustomRetryOutputParser`, you can modify the `parse_with_prompt` method to convert the keys to lowercase before performing the check. Here is an example of how you can achieve this:
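The example itself is missing from this excerpt, so the following is only a minimal sketch of the key-normalization step described above, applied to the plain dict the parser would have just produced (`CustomRetryOutputParser` and the key names come from the thread; the helper name is hypothetical):

```python
def check_keys_case_insensitive(parsed: dict) -> dict:
    """Lowercase the keys of a parsed dict, then verify the required keys exist."""
    lowered = {key.lower(): value for key, value in parsed.items()}
    missing = {"pricechange", "year"} - lowered.keys()
    if missing:
        raise ValueError(f"Missing required keys: {sorted(missing)}")
    return lowered

# Keys that differ only in case now pass the check
print(check_keys_case_insensitive({"PriceChange": "Increase", "Year": "2025"}))
# → {'pricechange': 'Increase', 'year': '2025'}
```

Inside a custom `parse_with_prompt`, a check like this would run after the inner JSON parser succeeds, so a retry is only triggered when a required key is genuinely absent rather than merely capitalized differently.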