UQLM: Uncertainty Quantification for Language Models #31172
dylanbouchard started this conversation in Show and tell
My team just released UQLM: Uncertainty Quantification for Language Models, a Python library built on top of LangChain that enables generation-time, zero-resource hallucination detection using state-of-the-art uncertainty quantification techniques.
UQLM offers a versatile suite of response-level scorers, each producing a confidence score that indicates the likelihood of errors or hallucinations. The scorers fall into four main types (a usage sketch follows the list):
🎯 Black-Box Scorers: Assess uncertainty by sampling multiple responses and measuring their consistency; compatible with any LLM.
🎲 White-Box Scorers: Use token probabilities for faster, more cost-effective uncertainty estimation.
⚖️ LLM-as-a-Judge Scorers: Employ LLMs to evaluate response reliability, customizable through prompt engineering.
🔀 Ensemble Scorers: Combine multiple scorers for robust and flexible uncertainty/confidence estimates.
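To make the workflow concrete, here is a minimal black-box sketch adapted from the repo's quickstart. The class and argument names (`BlackBoxUQ`, `scorers=["semantic_negentropy"]`, `generate_and_score`) reflect the README at the time of writing; check the repo for the current interface. Any LangChain chat model can stand in for `ChatOpenAI`.

```python
# Minimal sketch of the black-box workflow, adapted from the repo's
# quickstart; verify names against the current README before relying on it.
import asyncio

from langchain_openai import ChatOpenAI  # any LangChain chat model works here
from uqlm import BlackBoxUQ


async def main():
    # Requires OPENAI_API_KEY in the environment; temperature > 0 so that
    # repeated sampling actually produces varied responses to compare.
    llm = ChatOpenAI(model="gpt-4o-mini", temperature=1.0)

    # Black-box scorer: generates several candidate responses per prompt and
    # scores their mutual consistency; no token probabilities needed.
    bbuq = BlackBoxUQ(llm=llm, scorers=["semantic_negentropy"], use_best=True)

    results = await bbuq.generate_and_score(
        prompts=["When was the Declaration of Independence signed?"],
        num_responses=5,  # more samples -> steadier consistency estimate
    )
    print(results.to_df())  # responses alongside per-prompt confidence scores


asyncio.run(main())
```

The other scorer families follow the same pattern; per the README, `WhiteBoxUQ`, `LLMPanel`, and `UQEnsemble` expose an analogous `generate_and_score` interface.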
Check it out, share feedback if you have any, and reach out if you are interested in contributing! Links below:
🔗 GitHub Repo
🔗 Associated Research Paper