Official PyTorch implementation of "LPOI: Listwise Preference Optimization for Vision Language Models" (ACL 2025 Main).
## Abstract

Aligning large VLMs with human preferences is a challenging task, as methods like RLHF and DPO often overfit to textual information or exacerbate hallucinations. Although augmenting negative image samples partially addresses these pitfalls, no prior work has employed listwise preference optimization for VLMs, due to the complexity and cost of constructing listwise image samples. In this work, we propose LPOI, the first object-aware listwise preference optimization developed for reducing hallucinations in VLMs. LPOI identifies and masks a critical object in the image, and then interpolates the masked region between the positive and negative images to form a sequence of incrementally more complete images. The model is trained to rank these images in ascending order of object visibility, effectively reducing hallucinations while retaining visual fidelity. LPOI requires no extra annotations beyond standard pairwise preference data, as it automatically constructs the ranked lists through object masking and interpolation. Comprehensive experiments on MMHalBench, AMBER, and Object HalBench confirm that LPOI outperforms existing preference optimization methods in reducing hallucinations and enhancing VLM performance.
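As a rough illustration of the list construction described above, the sketch below builds a ranked list of images by blanking out the critical object region and blending it back in at increasing visibility levels. It is not the repository's implementation: it assumes the critical object is already localized as a binary mask, and the function name, fill strategy, and list length are arbitrary choices for this example.

```python
# Illustrative sketch only (not the repository's implementation): build a ranked
# list of images with incrementally more visible object content, as described in
# the abstract. Assumes the critical object is already given as a binary mask.
import numpy as np
from PIL import Image


def build_ranked_list(positive: Image.Image, object_mask: np.ndarray, num_levels: int = 5):
    """Return `num_levels` images ordered by ascending object visibility.

    positive:    the original image, in which the critical object is fully visible.
    object_mask: H x W boolean array, True inside the critical object region.
    """
    pos = np.asarray(positive.convert("RGB")).astype(np.float32)  # H x W x 3
    mask = object_mask[..., None].astype(np.float32)              # H x W x 1

    # Negative image: blank out the object region (here with the image's mean color),
    # so the critical object is not visible at all.
    fill = pos.mean(axis=(0, 1), keepdims=True)
    negative = pos * (1.0 - mask) + fill * mask

    # Interpolate between the negative and the positive image. Outside the mask the
    # two images are identical, so only the object region changes with alpha.
    ranked = []
    for alpha in np.linspace(0.0, 1.0, num_levels):
        blended = (1.0 - alpha) * negative + alpha * pos
        ranked.append(Image.fromarray(np.clip(blended, 0, 255).astype(np.uint8)))
    return ranked  # ranked[0]: fully masked negative, ranked[-1]: original positive
```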
We used `transformers==4.45.0` for the Idefics-2 model and `transformers==4.43.0` for the LLaVA-v1.5 models. Install them using:

```bash
pip install transformers==4.45.0 # for Idefics-2
pip install transformers==4.43.0 # for LLaVA-v1.5
```
To train each model, first download the required files:

- `sample_10k_llava.json`: Download here
- `images.zip`: Download here
- `ours_cmask_until_list5.zip`: Download here

Then prepare the `data` folder (the expected layout is shown below):

- Move `sample_10k_llava.json` to the `data` folder.
- Unzip `images.zip` and `ours_cmask_until_list5.zip`, then move the `images` and `ours_cmask_until_list5` folders into `data`.
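After these steps, the `data` folder should look roughly like this (layout inferred from the setup steps above):

```
data/
├── sample_10k_llava.json
├── images/
└── ours_cmask_until_list5/
```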
To train the Idefics-2 model, run:

```bash
python lpoi.py
```

To train the LLaVA-v1.5-7B model, run:

```bash
python lpoi_llava.py
```

To train the LLaVA-v1.5-13B model, run:

```bash
python lpoi_llava_13b.py
```
We used a single GPU with 48GB of memory to train the Idefics-2 and LLaVA-v1.5-7B models, and two GPUs with 48GB of memory each to train the LLaVA-v1.5-13B model.
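The exact training objective is defined in the paper and implemented in the scripts above. As a rough, generic illustration of listwise preference optimization, a Plackett-Luce-style ranking loss over per-image scores could look like the sketch below; the score definition (policy vs. reference log-probabilities of the response given each image) and `beta` are assumptions of this sketch, not necessarily the repository's exact formulation.

```python
# Illustrative sketch only: a Plackett-Luce-style listwise ranking loss over
# per-image preference scores. Not necessarily the exact LPOI objective; the
# score definition and `beta` are assumptions of this example.
import torch


def listwise_ranking_loss(policy_logps: torch.Tensor,
                          ref_logps: torch.Tensor,
                          beta: float = 0.1) -> torch.Tensor:
    """policy_logps, ref_logps: (batch, list_len) log-probabilities of the response
    conditioned on each image in the list, ordered from least to most visible object.
    The loss is minimized when scores increase with object visibility."""
    scores = beta * (policy_logps - ref_logps)  # (B, L), DPO-style implicit rewards
    loss = scores.new_zeros(scores.size(0))
    # Plackett-Luce factorization: at each step, the most-preferred remaining item
    # (the most visible one, at index k) should win the softmax over items 0..k.
    for k in range(scores.size(1) - 1, 0, -1):
        loss = loss - torch.log_softmax(scores[:, : k + 1], dim=-1)[:, k]
    return loss.mean()
```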
After training, move the checkpoint to the `checkpoints` folder.
To generate using the Idefics-2 model, run:

```bash
python generate-idefics-lpoi-amber.py
```
We used a single GPU with 48GB memory for generation.
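For a quick sanity check outside the AMBER pipeline, a trained checkpoint can be loaded with `transformers` as sketched below, assuming the checkpoint directory is loadable with `from_pretrained`; the checkpoint and image paths are placeholders.

```python
# Minimal sketch: load a trained Idefics-2 checkpoint and caption one image.
# The checkpoint and image paths are placeholders; the AMBER evaluation itself
# is handled by generate-idefics-lpoi-amber.py.
import torch
from PIL import Image
from transformers import AutoProcessor, Idefics2ForConditionalGeneration

checkpoint = "checkpoints/idefics2-lpoi"  # placeholder path
processor = AutoProcessor.from_pretrained(checkpoint)
model = Idefics2ForConditionalGeneration.from_pretrained(
    checkpoint, torch_dtype=torch.float16
).to("cuda")

image = Image.open("data/images/example.jpg")  # placeholder image
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ],
}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to("cuda")

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```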
If you want to use the same checkpoints we trained, you can download them here:
| Model | Checkpoints |
|---|---|
| Idefics2-8B | download |
| LLaVA-v1.5-7B | download |
| LLaVA-v1.5-13B | download |
Move the downloaded checkpoint to the `checkpoints` folder.
Please make sure to clone the AMBER dataset, using:

```bash
git clone https://github.com/junyangwang0410/AMBER.git
```

Adjust the paths in `generate-idefics-lpoi-amber.py` based on where you save the AMBER dataset.
Cite our paper if you use this code 😊:

```bibtex
@inproceedings{pesaranzadeh2025lpoi,
    title = "LPOI: Listwise Preference Optimization for Vision Language Models",
    author = "Pesaran zadeh, Fatemeh and Oh, Yoojin and Kim, Gunhee",
    booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics",
    year = "2025",
}
```