[Model] Support VLMs with transformers backend #20543


Open · wants to merge 3 commits into base: main

Conversation

@zucchini-nlp (Contributor) commented Jul 7, 2025

Same as #13754 but has all commits squashed and signed off. Rebasing caused too much trouble while resolving merge conflicts.

cc @Isotr0py

zucchini-nlp and others added 3 commits July 7, 2025 07:05
Signed-off-by: raushan <raushan@huggingface.co>
Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: Isotr0py <2037008807@qq.com>

github-actions bot commented Jul 7, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify bot added labels: documentation, ci/build, multi-modality (#4194) — Jul 7, 2025
@gemini-code-assist bot (Contributor) left a comment

Summary of Changes

Hello @zucchini-nlp, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces support for Vision Language Models (VLMs) within vLLM by utilizing the Transformers backend. It modifies the model loading process to accommodate multimodal models and adds necessary components for processing multimodal inputs, including images. The changes are accompanied by new tests to ensure the correct functionality of VLMs.

Highlights

  • Vision Language Model Support: Extends vLLM to support Vision Language Models (VLMs) using the Transformers backend. This allows vLLM to leverage existing Transformers implementations for VLMs, broadening the range of supported models (see the usage sketch after this list).
  • Transformers Backend Enhancement: Modifies the model loading logic to correctly identify and load both causal language models (TransformersForCausalLM) and multimodal language models (TransformersForMultimodalLM) from Transformers.
  • Multimodal Input Processing: Introduces new classes and functions to handle multimodal inputs, including image processing and embedding, enabling VLMs to process both text and image data.
  • Testing: Adds new tests to verify the functionality of VLMs with the Transformers backend, ensuring compatibility with various multimodal models.
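
As a quick orientation, here is a minimal usage sketch of what this PR enables. It assumes the backend is selected with vLLM's existing `model_impl="transformers"` opt-in and uses an illustrative Llava checkpoint and prompt format; neither is confirmed by this PR.

```python
from PIL import Image
from vllm import LLM, SamplingParams

# Assumption: the existing Transformers-backend opt-in also routes
# composite multimodal models to the new TransformersForMultimodalLM path.
llm = LLM(model="llava-hf/llava-1.5-7b-hf", model_impl="transformers")

image = Image.open("example.jpg")  # any local test image
outputs = llm.generate(
    {
        "prompt": "USER: <image>\nWhat is in this picture? ASSISTANT:",
        "multi_modal_data": {"image": image},
    },
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```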
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request introduces support for Vision Language Models (VLMs) using the transformers backend, which is a significant and valuable addition. The implementation is comprehensive, covering model loading, a new TransformersForMultimodalLM class, and multimodal data processing. The code is generally well-structured. I've identified a few areas for improvement, mainly concerning maintainability and the robustness of the VLM support for models beyond the ones tested. My feedback includes a minor documentation fix, a suggestion to refactor duplicated code, and a note on some hardcoded logic that might need to be generalized in the future.

Comment on lines +328 to +331:

```python
# HF processor will return `mm_token_type_ids` from which
# we can infer mm_placeholders. Until then hardcode to make code run
# Below tested on Llava. Prompts and `mm_token_type_ids` are always bs=1
mm_positions = torch.where(mm_token_type_ids == 1)[1]
```
@gemini-code-assist bot commented (severity: high):

The comment here indicates that the logic for inferring mm_placeholders is hardcoded and has only been tested on Llava. This approach is brittle and may not generalize to other vision-language models, potentially leading to incorrect behavior or silent failures when new models are used with this backend.

For long-term maintainability and correctness, it would be more robust to implement a more generic mechanism or a model-specific registration system for this kind of logic. While this might be a temporary solution, it introduces significant technical debt.
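
For readers following along, here is a small self-contained illustration of what the hardcoded line computes, using a toy tensor in place of the real processor output:

```python
import torch

# Toy stand-in for the HF processor output: 1 marks image-placeholder
# positions in the prompt, 0 marks text positions; batch size is 1,
# matching the assumption noted in the code comment.
mm_token_type_ids = torch.tensor([[0, 0, 1, 1, 1, 0, 0]])

# torch.where returns (row_indices, col_indices); indexing with [1]
# keeps only the column positions where image tokens sit.
mm_positions = torch.where(mm_token_type_ids == 1)[1]
print(mm_positions)  # tensor([2, 3, 4])
```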


!!! note
    vLLM may not fully optimise the Transformers implementation so you may see degraded performance if comparing a native model to a Transformers model in vLLM.

!!! note
    In case of vision language models if you are loading with `dtype="auto"`, vLLM loads the whole model with config's `dtype` if it exists. In contrast the native Trasnformers will respect the `dtype` attribute of each backbone in the model. That might cause a slight difference in performance.
@gemini-code-assist bot commented (severity: medium):

There is a typo in Trasnformers. It should be Transformers.

Suggested change:

```diff
- In case of vision language models if you are loading with `dtype="auto"`, vLLM loads the whole model with config's `dtype` if it exists. In contrast the native Trasnformers will respect the `dtype` attribute of each backbone in the model. That might cause a slight difference in performance.
+ In case of vision language models if you are loading with `dtype="auto"`, vLLM loads the whole model with config's `dtype` if it exists. In contrast the native Transformers will respect the `dtype` attribute of each backbone in the model. That might cause a slight difference in performance.
```
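
To make the dtype note above concrete, here is a hypothetical inspection of where the two dtype sources live; the checkpoint name is illustrative, and which `torch_dtype` fields a given config actually stores varies by model:

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("llava-hf/llava-1.5-7b-hf")  # illustrative model

# With dtype="auto", vLLM uses the top-level config dtype for the whole model:
print(getattr(cfg, "torch_dtype", None))

# Native Transformers may instead respect each backbone's own dtype:
print(getattr(cfg.text_config, "torch_dtype", None))
print(getattr(cfg.vision_config, "torch_dtype", None))
```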

Comment on lines +226 to +232:

```python
# Check if text-config is `self`. If not most probably it is
# a composite config, i.e. mutlimodal
if model_config.hf_config.get_text_config(
) != model_config.hf_config:
    architectures[i] = "TransformersForMultimodalLM"
else:
    architectures[i] = "TransformersForCausalLM"
```
@gemini-code-assist bot commented (severity: medium):

This block of logic for determining the transformer architecture is duplicated from lines 209-215. To improve maintainability and prevent potential inconsistencies, I recommend extracting this logic into a local helper function within resolve_transformers_arch.

For example:

```python
def _get_transformers_arch():
    # Check if text-config is `self`. If not, it is a composite config, i.e. multimodal
    if model_config.hf_config.get_text_config() != model_config.hf_config:
        return "TransformersForMultimodalLM"
    else:
        return "TransformersForCausalLM"

# ... then call _get_transformers_arch() in both places.
```
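
As a standalone illustration of the detection logic quoted above: `get_text_config()` on a composite (multimodal) config returns its nested text sub-config, while a text-only config returns itself. The checkpoint names below are illustrative:

```python
from transformers import AutoConfig

text_cfg = AutoConfig.from_pretrained("gpt2")                    # text-only
mm_cfg = AutoConfig.from_pretrained("llava-hf/llava-1.5-7b-hf")  # composite

# Text-only: get_text_config() is the config itself -> TransformersForCausalLM
print(text_cfg.get_text_config() != text_cfg)  # False

# Composite: get_text_config() is the nested text_config -> TransformersForMultimodalLM
print(mm_cfg.get_text_config() != mm_cfg)      # True
```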


mergify bot commented Jul 8, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @zucchini-nlp.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify bot added labels: needs-rebase, new-model — Jul 8, 2025
Labels
ci/build · documentation · multi-modality (#4194) · needs-rebase · new-model
Projects
Status: In Progress
4 participants