UQLM (Uncertainty Quantification for Language Models) is a Python package for UQ-based LLM hallucination detection
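The UQLM entry describes UQ-based hallucination detection only in general terms. As a rough illustration of the underlying idea (not UQLM's actual API), here is a minimal sketch of black-box sampling consistency, where low agreement across repeated generations flags a possible hallucination; the `generate` callable is a hypothetical stand-in for any LLM client.

```python
# Minimal sketch of sampling-based consistency scoring, one common black-box
# UQ approach to hallucination detection. This is NOT the UQLM API; `generate`
# is a hypothetical stand-in for any LLM client.
import random
from difflib import SequenceMatcher
from itertools import combinations
from statistics import mean
from typing import Callable


def consistency_score(prompt: str, generate: Callable[[str], str], n: int = 5) -> float:
    """Sample n responses and return their mean pairwise similarity in [0, 1].

    Low consistency across samples is a crude signal of possible hallucination.
    """
    responses = [generate(prompt) for _ in range(n)]
    pairs = combinations(responses, 2)
    return mean(SequenceMatcher(None, a, b).ratio() for a, b in pairs)


if __name__ == "__main__":
    # Toy generator that "hallucinates" by occasionally changing its answer.
    answers = ["Paris", "Paris", "Lyon", "Paris", "Marseille"]

    def demo_generate(_prompt: str) -> str:
        return random.choice(answers)

    print(consistency_score("What is the capital of France?", demo_generate))
```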
An up-to-date curated list of state-of-the-art research, papers, and resources on hallucinations in large vision-language models
[ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation
[ACL 2024] ANAH & [NeurIPS 2024] ANAH-v2 & [ICLR 2025] Mask-DPO
A novel alignment framework that leverages image retrieval to mitigate hallucinations in Vision Language Models.
[ICLR 2025] Data-Augmented Phrase-Level Alignment for Mitigating Object Hallucination
✨ Official code for our paper: "Uncertainty-o: One Model-agnostic Framework for Unveiling Epistemic Uncertainty in Large Multimodal Models".
[CVPR 2025 Workshop] PAINT (Paying Attention to INformed Tokens) is a plug-and-play framework that intervenes in the self-attention of the LLM and selectively boosts the attention assigned to informed visual tokens to mitigate hallucinations in Vision Language Models
Official PyTorch implementation of "LPOI: Listwise Preference Optimization for Vision Language Models" (ACL 2025 Main)
Agentic-AI framework w/o the headaches
Noetic Geodesic Framework for AI Reasoning
[NAACL Findings 2025] Code and data of "Mitigating Hallucinations in Multimodal Spatial Relations through Constraint-Aware Prompting"
Fully automated LLM evaluator
Unofficial implementation of Microsoft's Claimify paper: extracts specific, verifiable, decontextualized claims from LLM Q&A to be used for hallucination, groundedness, relevancy, and truthfulness detection
Production firewall & hygiene layer for AI JSON. Stop shipping brittle parsing glue. AIS turns noisy / malformed / risky model JSON into clean, validated, policy-compliant objects — and only bills when an LLM rescue is actually needed.
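The entry above sketches the general "JSON hygiene" idea of turning malformed model JSON into validated objects. The snippet below is an illustrative sketch of that idea using Pydantic, not the AIS product's API; the `Answer` schema and the fence-stripping repair step are assumptions.

```python
# Illustrative sketch of JSON hygiene: validate model output against a schema
# and attempt a cheap local repair before escalating. This is not the AIS
# product's API; the schema and the repair step are assumptions.
import re

from pydantic import BaseModel, ValidationError


class Answer(BaseModel):  # hypothetical target schema
    claim: str
    confidence: float


def clean_model_json(raw: str) -> Answer | None:
    """Strip common wrappers (markdown fences) and validate against the schema."""
    # Drop ```json ... ``` fences if the model wrapped its output.
    stripped = re.sub(r"^```(?:json)?|```$", "", raw.strip(), flags=re.MULTILINE).strip()
    try:
        return Answer.model_validate_json(stripped)
    except ValidationError:
        return None  # in a real pipeline, this is where an "LLM rescue" would kick in


print(clean_model_json('```json\n{"claim": "Water boils at 100 C", "confidence": 0.9}\n```'))
```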
This repository contains all code to support the paper: "On the Importance of Text Preprocessing for Multimodal Representation Learning and Pathology Report Generation".
Detecting Hallucinations in LLMs
[ACL findings 2025] "Retrieval Visual Contrastive Decoding to Mitigate Object Hallucinations in Large Vision-Language Models"
MedRAG-2 is an enhanced Retrieval-Augmented Generation pipeline that addresses LLM hallucinations by redesigning the prompts of the MedRAG framework, enforcing strict grounding, validating structured output with Pydantic schemas, and applying cross-encoder-based re-ranking for improved retrieval precision. Built for KAUST's RAG course.
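The MedRAG-2 entry mentions cross-encoder based re-ranking as the step that improves retrieval precision. Below is a minimal sketch of that step, assuming the sentence-transformers library and a public MS MARCO cross-encoder checkpoint; the query and passages are made-up examples, and this is not MedRAG-2's own code.

```python
# Minimal sketch of cross-encoder re-ranking. Assumes the sentence-transformers
# library and a public MS MARCO cross-encoder checkpoint; query and passages
# are made-up examples, not MedRAG-2 data.
from sentence_transformers import CrossEncoder

query = "What is the first-line treatment for type 2 diabetes?"
passages = [
    "Metformin is recommended as first-line pharmacologic therapy for type 2 diabetes.",
    "Type 1 diabetes is managed with insulin replacement.",
    "Lifestyle modification is advised alongside pharmacotherapy.",
]

# The cross-encoder scores each (query, passage) pair jointly, which is slower
# than a bi-encoder retriever but usually more precise for the final ranking.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = reranker.predict([(query, p) for p in passages])

# Keep the top-scoring passages for the generation prompt.
for score, passage in sorted(zip(scores, passages), reverse=True):
    print(f"{score:.3f}  {passage}")
```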