UQLM (Uncertainty Quantification for Language Models) is a Python package for UQ-based LLM hallucination detection
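To make the black-box UQ idea concrete, here is a minimal sketch of consistency-based hallucination flagging: sample several responses to the same prompt and score their mutual agreement, treating low agreement as a warning sign. This is a generic illustration, not the UQLM API; `ask_llm`, the similarity measure, and the threshold are placeholder assumptions.

```python
from difflib import SequenceMatcher
from statistics import mean

def ask_llm(prompt: str, temperature: float = 1.0) -> str:
    """Hypothetical stand-in for an LLM call; replace with your client."""
    raise NotImplementedError

def consistency_score(prompt: str, num_samples: int = 5) -> float:
    """Average pairwise similarity of sampled responses (0 = inconsistent, 1 = consistent)."""
    samples = [ask_llm(prompt, temperature=1.0) for _ in range(num_samples)]
    pairs = [(a, b) for i, a in enumerate(samples) for b in samples[i + 1:]]
    return mean(SequenceMatcher(None, a, b).ratio() for a, b in pairs)

def flag_hallucination(prompt: str, threshold: float = 0.6) -> bool:
    """Flag the answer as a likely hallucination when samples disagree."""
    return consistency_score(prompt) < threshold
```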
An up-to-date curated list of state-of-the-art research, papers, and resources on hallucinations in large vision-language models
[ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation
[ACL 2024] ANAH & [NeurIPS 2024] ANAH-v2 & [ICLR 2025] Mask-DPO
A novel alignment framework that leverages image retrieval to mitigate hallucinations in Vision Language Models.
✨ Official code for our paper: "Uncertainty-o: One Model-agnostic Framework for Unveiling Epistemic Uncertainty in Large Multimodal Models".
[ICLR 2025] Data-Augmented Phrase-Level Alignment for Mitigating Object Hallucination
[CVPR 2025 Workshop] PAINT (Paying Attention to INformed Tokens) is a plug-and-play framework that intervenes in the LLM's self-attention and selectively boosts attention to visually informed tokens to mitigate hallucinations in Vision Language Models
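For intuition, here is a minimal numpy sketch of the general attention-boosting idea (not the PAINT implementation; the boost factor and the choice of "informed" visual token indices are assumptions): upweight the attention paid to selected visual tokens and renormalize each row.

```python
import numpy as np

def boost_visual_attention(attn: np.ndarray, visual_idx: list[int], boost: float = 1.5) -> np.ndarray:
    """Scale attention weights on selected visual tokens and renormalize each row.

    attn: (num_queries, num_keys) softmax-normalized attention weights.
    visual_idx: key positions corresponding to the selected visual tokens.
    """
    boosted = attn.copy()
    boosted[:, visual_idx] *= boost
    return boosted / boosted.sum(axis=-1, keepdims=True)

# Toy example: 2 query tokens attending over 4 key tokens, keys 1 and 2 are visual.
attn = np.array([[0.40, 0.10, 0.10, 0.40],
                 [0.25, 0.25, 0.25, 0.25]])
print(boost_visual_attention(attn, visual_idx=[1, 2]))
```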
Agentic-AI framework w/o the headaches
[NAACL Findings 2025] Code and data of "Mitigating Hallucinations in Multimodal Spatial Relations through Constraint-Aware Prompting"
Fully automated LLM evaluator
Official PyTorch implementation of "LPOI: Listwise Preference Optimization for Vision Language Models" (ACL 2025 Main)
This repository contains all code to support the paper: "On the Importance of Text Preprocessing for Multimodal Representation Learning and Pathology Report Generation".
Detecting Hallucinations in LLMs
[ACL Findings 2025] "Retrieval Visual Contrastive Decoding to Mitigate Object Hallucinations in Large Vision-Language Models"
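As background, the contrastive-decoding family such methods build on adjusts next-token logits by penalizing what a distorted or less-informed forward pass also predicts. A generic sketch follows; the weighting `alpha` and the two-pass setup are assumptions, not the paper's exact formulation.

```python
import numpy as np

def contrastive_logits(informed: np.ndarray, distorted: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Amplify the informed distribution and penalize tokens the distorted pass also favors."""
    return (1.0 + alpha) * informed - alpha * distorted

def sample_token(informed: np.ndarray, distorted: np.ndarray, alpha: float = 1.0) -> int:
    """Softmax-sample a token id from the contrastively adjusted logits."""
    logits = contrastive_logits(informed, distorted, alpha)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))
```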
An interactive Python chatbot demonstrating real-time contextual hallucination detection in Large Language Models using the "Lookback Lens" method. This project implements the attention-based ratio feature extraction and a trained classifier to identify when an LLM deviates from the provided context during generation.
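A compact sketch of a lookback-ratio-style feature plus a trained classifier, as described above (illustrative assumptions throughout: the attention tensor layout, the synthetic training data, and the scikit-learn classifier are not this repository's code).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def lookback_ratio(attn: np.ndarray, context_len: int) -> np.ndarray:
    """Per-head ratio of attention mass on context tokens vs. all prior tokens.

    attn: (num_heads, num_prev_tokens) attention weights for one generated token.
    Returns a (num_heads,) feature vector in [0, 1].
    """
    context_mass = attn[:, :context_len].sum(axis=-1)
    total_mass = attn.sum(axis=-1)
    return context_mass / np.clip(total_mass, 1e-12, None)

# Train a simple classifier on lookback-ratio features labeled faithful (0) or hallucinated (1).
rng = np.random.default_rng(0)
X = rng.random((200, 32))                # e.g. 32 heads' lookback ratios per generated span
y = (X.mean(axis=1) < 0.5).astype(int)   # synthetic labels just to make the example runnable
clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:5]))
```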