docs/concepts/structured_outputs/ #28978
Replies: 4 comments 2 replies
-
```python
model = ChatGroq(model="llama3-8b-8192")

# Invoke the model to produce structured output that matches the schema
structured_output = model_with_structured_output.invoke(...)
```
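For reference, a minimal end-to-end sketch of the flow this snippet appears to be truncated from. The `ResponseFormatter` schema and the prompt below are illustrative assumptions, not part of the original comment:

```python
from pydantic import BaseModel, Field
from langchain_groq import ChatGroq


class ResponseFormatter(BaseModel):
    """Schema for the structured answer (illustrative only)."""
    answer: str = Field(description="The answer to the user's question")
    followup_question: str = Field(description="A follow-up question the user could ask")


model = ChatGroq(model="llama3-8b-8192")
# Bind the schema so the model returns a ResponseFormatter instance
model_with_structured_output = model.with_structured_output(ResponseFormatter)

structured_output = model_with_structured_output.invoke("What is the powerhouse of the cell?")
print(structured_output)
```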
-
Can we also get the response metadata that a plain invoke returns on the AIMessage, together with the structured schema? For example:

```python
response_metadata={
    'token_usage': {
        'completion_tokens': 107, 'prompt_tokens': 409, 'total_tokens': 516,
        'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0,
                                      'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0},
        'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}
    },
    'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_xxxxxxxx',
    'finish_reason': 'stop', 'logprobs': None
},
id='run-xxxxxxxxxxxxxx',
usage_metadata={'input_tokens': 409, 'output_tokens': 107, 'total_tokens': 516}
```
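One option, sketched here on the assumption that `with_structured_output(..., include_raw=True)` is acceptable for your use case: with `include_raw=True` the result is a dict containing both the raw AIMessage (which carries `response_metadata` and `usage_metadata`) and the parsed object. The `Person` schema and prompt below are illustrative only.

```python
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI


class Person(BaseModel):
    """Basic facts about a person (illustrative schema)."""
    name: str = Field(description="The person's name")
    age: int = Field(description="The person's age")


model = ChatOpenAI(model="gpt-4o-mini")
structured_model = model.with_structured_output(Person, include_raw=True)

result = structured_model.invoke("Invent a fictional person and describe them.")
print(result["parsed"])                 # the parsed Person instance
print(result["raw"].response_metadata)  # token usage, model name, finish reason, ...
print(result["raw"].usage_metadata)     # input/output/total token counts
```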
-
Using a prompt template together with tool calling is giving me an error. It also seems that the LLM is not able to read the input prompt that was provided.

```python
import dotenv
import os
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.messages import HumanMessage, SystemMessage
from pydantic import BaseModel, Field

dotenv.load_dotenv()


class ResponseFormatter(BaseModel):
    """Always use this tool to structure your response to the user."""
    language: str = Field(description="The language of the translation")
    text: str = Field(description="The text to be translated")
    translation: str = Field(description="The translation of the text")


open_ai = ChatOpenAI(model="gpt-4o-mini")
open_ai_tools = open_ai.bind_tools([ResponseFormatter])

system_template = """
Translate the following from English into {language}:
"""
prompt_template = ChatPromptTemplate.from_messages(
    [("system", system_template), ("user", "{text}")]
)

prompt = prompt_template.invoke({"language": "German", "text": "hi!"})
print(prompt)

ai_msg = open_ai_tools.invoke(prompt)
print(ai_msg)
print(ai_msg.tool_calls[0]["args"])

pydantic_object = ResponseFormatter.model_validate(ai_msg.tool_calls[0]["args"])
print(pydantic_object)
```

Output:
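A likely cause, offered as a guess rather than a confirmed diagnosis: if the model answers in plain text instead of calling `ResponseFormatter`, then `ai_msg.tool_calls` is empty and `ai_msg.tool_calls[0]` raises an IndexError. Forcing the tool choice (supported by `bind_tools` in langchain-openai) makes the call deterministic. This sketch reuses `open_ai`, `prompt`, and `ResponseFormatter` from the snippet above:

```python
# Force the model to call ResponseFormatter so tool_calls is never empty.
open_ai_forced = open_ai.bind_tools([ResponseFormatter], tool_choice="ResponseFormatter")

ai_msg = open_ai_forced.invoke(prompt)
if ai_msg.tool_calls:
    pydantic_object = ResponseFormatter.model_validate(ai_msg.tool_calls[0]["args"])
    print(pydantic_object)
else:
    # Defensive fallback in case the model still answers directly
    print("Model did not call the tool:", ai_msg.content)
```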
-
When I tried to use the with_structured_output(include_raw=True) method on the LLM after binding the tools, I found that the response returned by the LLM no longer contained tool calls.

Using with_structured_output:

```python
from pydantic import BaseModel, Field
from langchain.prompts import ChatPromptTemplate
from langchain_core.messages import HumanMessage
from langchain_core.messages.tool import ToolCall
from agent_project.agent.llm import chatGPT_llm
from agent_project.agent.tools import load_robot, move_robot_to_target, create_cube, clear_env
from langchain.output_parsers import PydanticOutputParser
from typing import List, Dict
from typing_extensions import Annotated, TypedDict


class CustomOutput(BaseModel):
    task: str = Field(description="The task you are performing")


prompt_template = ChatPromptTemplate.from_template(
    """You are an intelligent assistant who must use tools to solve problems.
{input}
"""
)

llm = chatGPT_llm(model_name="gpt-4o-mini", temperature=0)
simulation_tools = [load_robot, move_robot_to_target, create_cube, clear_env]
llm_with_tools = llm.bind_tools(simulation_tools)
llm_with_structure_output = llm_with_tools.with_structured_output(CustomOutput, include_raw=True)
chain = prompt_template | llm_with_structure_output

# Run the chain and get the output
result = chain.invoke({
    "input": r"Please help me add a robotic arm to the simulation environment at (1,1,0) "
             r"with the file path 'F:\sw\urdf_files\minicobo_v1\urdf\minicobo_v1.4.urdf'."
})
print(result)
```

Output:

Without with_structured_output:

Output:

I would like to know why an LLM that is bound to tools no longer calls those tools once structured output is used, and how to solve this.
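A hedged explanation and workaround, not a confirmed diagnosis: in the tool-calling implementation, with_structured_output binds the output schema as its own tool and steers the model toward calling it, so the previously bound simulation tools are effectively overridden and no ordinary tool calls come back. One alternative is to bind the schema as just another tool next to the simulation tools and parse it only when the model chooses it. The sketch below is self-contained; `load_robot` here is a hypothetical stand-in for the real tool from `agent_project`.

```python
from pydantic import BaseModel, Field
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


class ResponsePlan(BaseModel):
    """Call this to return the final structured answer to the user."""
    task: str = Field(description="The task you are performing")


@tool
def load_robot(urdf_path: str, x: float, y: float, z: float) -> str:
    """Load a robot from a URDF file at the given position (hypothetical stand-in)."""
    return f"Loaded {urdf_path} at ({x}, {y}, {z})"


llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
# Bind the schema alongside the ordinary tools instead of layering
# with_structured_output on top of bind_tools.
llm_with_tools = llm.bind_tools([load_robot, ResponsePlan])

ai_msg = llm_with_tools.invoke(
    "Please add a robotic arm to the simulation environment at (1, 1, 0)."
)
for call in ai_msg.tool_calls:
    if call["name"] == "ResponsePlan":
        print(ResponsePlan.model_validate(call["args"]))  # structured final answer
    else:
        print(call)  # dispatch this call to the real simulation tool
```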
-
docs/concepts/structured_outputs/
Overview
https://python.langchain.com/docs/concepts/structured_outputs/