How to get confidence score and with openai model with structured output #30491
AsmaaMHadir announced in Ask Dosu (Archived)
Replies: 1 comment
To access the response metadata and get the log probabilities when using an OpenAI model with structured output in LangChain, two things are needed: the model must be asked to return logprobs (e.g. `ChatOpenAI(..., logprobs=True)`), and the raw `AIMessage` must be kept alongside the parsed output. By default, `with_structured_output` returns only the parsed Pydantic object, which has no `response_metadata`; pass `include_raw=True` so the raw message is returned as well:

```python
# Assuming PydanticClass and calculate_weighted_confidence_score
# are defined elsewhere, and llm was created with logprobs enabled,
# e.g. ChatOpenAI(model="gpt-4o-mini", logprobs=True)
llm_w_structured_output = llm.with_structured_output(
    PydanticClass, include_raw=True
)
response = llm_w_structured_output.invoke(question)

# With include_raw=True, response is a dict with keys
# "raw" (the AIMessage), "parsed", and "parsing_error"
log_probs = response["raw"].response_metadata["logprobs"]["content"]

# Calculate the confidence score using your existing function
confidence_score = calculate_weighted_confidence_score(
    response["raw"].response_metadata
)
```
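The helper `calculate_weighted_confidence_score` referenced above is user-defined and not shown in the thread. As one minimal sketch (an assumption, not the asker's actual function), a confidence score can be derived from the logprobs entries by converting each token's log probability back to a probability and averaging:

```python
import math

def confidence_from_logprobs(logprob_entries):
    """Mean token probability from OpenAI-style logprobs entries.

    Each entry is assumed to be a dict like
    {"token": "...", "logprob": -0.01, ...}, the shape found in
    response_metadata["logprobs"]["content"].
    """
    if not logprob_entries:
        return 0.0
    # exp(logprob) recovers the per-token probability in [0, 1]
    probs = [math.exp(entry["logprob"]) for entry in logprob_entries]
    return sum(probs) / len(probs)

# Example: two near-certain tokens and one uncertain token
entries = [
    {"token": "Hello", "logprob": -0.01},
    {"token": ",", "logprob": -0.02},
    {"token": "world", "logprob": -1.0},
]
score = confidence_from_logprobs(entries)
```

A mean of token probabilities is only one possible aggregation; a weighted or minimum-probability variant may be more appropriate depending on how cautious the score should be.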
When I invoke an OpenAI model without structured output, I have no problem getting its confidence score. However, if I run the LLM with structured output, I have no idea how to access the response metadata to get the logprobs and calculate the confidence score.