
LPOI

Official PyTorch implementation of "LPOI: Listwise Preference Optimization for Vision Language Models" (ACL 2025 Main)

[Figure: image samples]

Abstract

Aligning large VLMs with human preferences is a challenging task, as methods like RLHF and DPO often overfit to textual information or exacerbate hallucinations. Although augmenting negative image samples partially addresses these pitfalls, no prior work has employed listwise preference optimization for VLMs, due to the complexity and cost of constructing listwise image samples. In this work, we propose LPOI, the first object-aware listwise preference optimization developed for reducing hallucinations in VLMs. LPOI identifies and masks a critical object in the image, and then interpolates the masked region between the positive and negative images to form a sequence of incrementally more complete images. The model is trained to rank these images in ascending order of object visibility, effectively reducing hallucinations while retaining visual fidelity. LPOI requires no extra annotations beyond standard pairwise preference data, as it automatically constructs the ranked lists through object masking and interpolation. Comprehensive experiments on MMHalBench, AMBER, and Object HalBench confirm that LPOI outperforms existing preference optimization methods in reducing hallucinations and enhancing VLM performance.
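To make the list construction described above concrete, here is a minimal sketch (not the exact pipeline in this repository): given a positive image and a copy with the critical object masked out, the masked region is blended back in at increasing ratios, yielding a list of images in ascending order of object visibility. File paths, the mask format, and the number of levels are illustrative assumptions.

import numpy as np
from PIL import Image

def build_ranked_list(pos_path, masked_path, mask_path, n_levels=5):
    # Positive image, its object-masked counterpart, and a binary object mask.
    pos = np.asarray(Image.open(pos_path).convert("RGB"), dtype=np.float32)
    neg = np.asarray(Image.open(masked_path).convert("RGB"), dtype=np.float32)
    mask = np.asarray(Image.open(mask_path).convert("L"), dtype=np.float32) / 255.0
    mask = mask[..., None]  # broadcast over the RGB channels

    images = []
    for k in range(n_levels):
        alpha = k / (n_levels - 1)  # 0.0 = object fully masked, 1.0 = fully visible
        blended = neg * (1 - alpha * mask) + pos * (alpha * mask)
        images.append(Image.fromarray(blended.astype(np.uint8)))
    return images  # ascending order of object visibility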

Running the Code

Installation

We used transformers==4.45.0 for the Idefics-2 model and transformers==4.43.0 for the LLaVA-v1.5 models.

Install the version that matches the model you plan to train:

pip install transformers==4.45.0 # for Idefics-2
pip install transformers==4.43.0 # for LLaVA-v1.5
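
You can confirm which version is active in your current environment with a quick check:

import transformers
print(transformers.__version__)  # expect 4.45.0 for Idefics-2, 4.43.0 for LLaVA-v1.5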

Training

To train each model, first download the required files:

Preparing the Data

  1. Move sample_10k_llava.json to the data folder.
  2. Unzip images.zip and ours_cmask_until_list5.zip, then move the images and ours_cmask_until_list5 folders into the data folder.
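
Before launching training, you can sanity-check the layout with a short script like the following (a rough sketch; it only verifies that the files and folders are in place, and assumes the JSON top level is a list of samples):

import json, os

data_dir = "data"
with open(os.path.join(data_dir, "sample_10k_llava.json")) as f:
    samples = json.load(f)
print(f"Loaded {len(samples)} preference samples")

for folder in ("images", "ours_cmask_until_list5"):
    print(folder, "exists:", os.path.isdir(os.path.join(data_dir, folder)))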

Running Training

To train the Idefics-2 model, run:

python lpoi.py 

To train the LLaVA-v1.5-7B model, run:

python lpoi_llava.py

To train the LLaVA-v1.5-13B model, run:

python lpoi_llava_13b.py

We used a single GPU with 48GB of memory to train the Idefics-2 and LLaVA-v1.5-7B models, and two GPUs with 48GB of memory each to train the LLaVA-v1.5-13B model.
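
You can check whether your machine meets these memory requirements with a quick PyTorch snippet:

import torch

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GB")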

Generating

After training, move the checkpoint to the checkpoints folder.

To generate using the Idefics-2 model, run:

python generate-idefics-lpoi-amber.py

We used a single GPU with 48GB memory for generation.

If you want to use the same checkpoints we trained, you can download them here:

Model Checkpoints
Idefics2-8B: download
LLaVA-v1.5-7B: download
LLaVA-v1.5-13B: download

Move the downloaded checkpoint to the checkpoints folder.
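
For reference, loading a checkpoint from the checkpoints folder and generating follows the standard transformers API. Below is a minimal sketch (the checkpoint path, image, and prompt are placeholders, and it assumes the checkpoint is saved in the standard transformers format):

import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

ckpt = "checkpoints/idefics2-lpoi"  # placeholder path to your downloaded or trained checkpoint
processor = AutoProcessor.from_pretrained(ckpt)
model = AutoModelForVision2Seq.from_pretrained(ckpt, torch_dtype=torch.float16, device_map="cuda")

image = Image.open("example.jpg")  # placeholder image
messages = [{"role": "user",
             "content": [{"type": "image"},
                         {"type": "text", "text": "Describe this image."}]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(images=[image], text=prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(output, skip_special_tokens=True)[0])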

Note

Please make sure to clone the AMBER dataset using:

git clone https://github.com/junyangwang0410/AMBER.git

Adjust the paths in generate-idefics-lpoi-amber.py based on where you save the AMBER dataset.

Cite our paper if you use this code 😊:

@inproceedings{pesaranzadeh2025lpoi,
  title = "LPOI: Listwise Preference Optimization for Vision Language Models",
  author = "Pesaran zadeh, Fatemeh  and Oh, Yoojin  and Kim, Gunhee",
  booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics",
  year = "2025",
}
