-
Hello, @braisdev! I'm here to help you with your coding questions and bug fixes. Let's work together to resolve the issue you're experiencing.

The error occurs because the structured JSON output from the responder LLM does not include the `role` and `content` keys that the message-conversion code expects. Here is the relevant part of the code that expects those keys:

```python
def _convert_dict_to_message(_dict: Mapping[str, Any]) -> BaseMessage:
    role = _dict["role"]
    if role == "user":
        return HumanMessage(content=_dict["content"])
    elif role == "assistant":
        content = _dict.get("content") or ""
        if _dict.get("function_call"):
            _dict["function_call"]["arguments"] = json.dumps(
                _dict["function_call"]["arguments"]
            )
            additional_kwargs = {"function_call": dict(_dict["function_call"])}
        else:
            additional_kwargs = {}
        return AIMessage(content=content, additional_kwargs=additional_kwargs)
    elif role == "system":
        return SystemMessage(content=_dict["content"])
    elif role == "function":
        return FunctionMessage(content=_dict["content"], name=_dict["name"])
    else:
        return ChatMessage(content=_dict["content"], role=role)
```

To fix the issue, you need to ensure that the JSON response from the LLM includes the `role` and `content` keys. Here is an example of how you can modify the `_create_chat_result` method to skip messages that are missing them:

```python
def _create_chat_result(self, response: Mapping[str, Any]) -> ChatResult:
    generations = []
    for res in response["choices"]:
        message_dict = res.get("message", {})
        if "role" not in message_dict or "content" not in message_dict:
            logger.error("Missing 'role' or 'content' in LLM response: %s", message_dict)
            continue  # Skip this message if required keys are missing
        message = _convert_dict_to_message(message_dict)
        gen = ChatGeneration(
            message=message,
            generation_info=dict(finish_reason=res.get("finish_reason")),
        )
        generations.append(gen)
    return ChatResult(generations=generations)
```

Additionally, you can use the `_parse_lc_message` helper to convert a `BaseMessage` back into a dictionary that always carries the role and text:

```python
def _parse_lc_message(message: BaseMessage) -> Dict[str, Any]:
    keys = ["function_call", "tool_calls", "tool_call_id", "name"]
    parsed = {"text": message.content, "role": _parse_lc_role(message.type)}
    parsed.update(
        {
            key: cast(Any, message.additional_kwargs.get(key))
            for key in keys
            if message.additional_kwargs.get(key) is not None
        }
    )
    return parsed
```

This function ensures that the optional keys are included only when they are actually present on the message. Moreover, you can leverage the `StructuredOutputParser` to enforce a response schema on the LLM output. Here is an example of how to define the schema and use the parser:

```python
from langchain.output_parsers.structured import StructuredOutputParser, ResponseSchema

response_schemas = [
    ResponseSchema(name="role", description="The role of the message", type="string"),
    ResponseSchema(name="content", description="The content of the message", type="string"),
]
parser = StructuredOutputParser.from_response_schemas(response_schemas)

# Get format instructions to ensure the LLM response adheres to the schema
format_instructions = parser.get_format_instructions()

# Use the parser to parse the LLM response
parsed_response = parser.parse(llm_response_text)
```

By using the `StructuredOutputParser` with these response schemas, you can ensure that the LLM output always includes the expected `role` and `content` keys before it is handed to the next node in the graph.
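The skip-on-missing-keys pattern can also be exercised in a self-contained sketch. Note that `extract_valid_messages` is a hypothetical helper (plain dicts stand in for the OpenAI response objects), not part of LangChain:

```python
import logging

logger = logging.getLogger(__name__)

def extract_valid_messages(response: dict) -> list:
    """Keep only choices whose message carries both 'role' and 'content'."""
    valid = []
    for res in response.get("choices", []):
        message = res.get("message", {})
        if "role" not in message or "content" not in message:
            # Log and skip instead of letting a KeyError abort the graph run.
            logger.error("Missing 'role' or 'content' in LLM response: %s", message)
            continue
        valid.append(message)
    return valid

response = {
    "choices": [
        {"message": {"role": "assistant", "content": "Hello!"}},
        {"message": {"answer": "structured output without role/content"}},
    ]
}
print(extract_valid_messages(response))
# → [{'role': 'assistant', 'content': 'Hello!'}]
```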
-
Hi @braisdev, were you able to resolve this issue?
-
Description
Hello,
I’m currently working on building a Reflexion Agent using LangGraph and Langchain.
The agent is designed to iteratively refine its response through a cycle between an initial responder and a reviser, using Tavily as a real-time search tool to return a polished and accurate answer.
I'm leveraging OpenAI's structured output feature, recently supported by LangChain via `json_schema` with `strict=true`, and attempting to integrate this into my graph.
Issue:
The initial responder successfully generates a structured JSON output as expected. However, the process fails immediately afterward with an error that I haven't been able to resolve:
Error Context:
The error appears to occur because the output from the responder LLM, which is in a structured JSON format, does not include the `role` and `content` keys that the next node in the graph expects.
I attached minimal reproducible code with just the "draft" node, but here's the code ready to be deployed: https://github.com/braisdev/basic-reflexion-agent
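For reference, the mismatch can be reproduced without LangChain at all. In this minimal sketch (with hypothetical names), any conversion step that indexes into `role`/`content` raises a `KeyError` when handed a structured payload:

```python
def convert(message_dict: dict) -> tuple:
    # Mirrors what the next node does: it assumes 'role' and 'content' exist.
    return message_dict["role"], message_dict["content"]

# A structured-output payload shaped like a Reflexion draft answer.
structured = {"answer": "...", "reflection": {"missing": "", "superfluous": ""}}

try:
    convert(structured)
except KeyError as exc:
    print(f"KeyError: {exc}")  # → KeyError: 'role'
```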
System Info