[Model] Support VLMs with transformers backend #20543
Conversation
Signed-off-by: raushan <raushan@huggingface.co>
Signed-off-by: Isotr0py <2037008807@qq.com>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a limited subset runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
Summary of Changes
Hello @zucchini-nlp, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces support for Vision Language Models (VLMs) within vLLM by utilizing the Transformers backend. It modifies the model loading process to accommodate multimodal models and adds necessary components for processing multimodal inputs, including images. The changes are accompanied by new tests to ensure the correct functionality of VLMs.
Highlights
- Vision Language Model Support: Extends vLLM to support Vision Language Models (VLMs) using the Transformers backend. This allows vLLM to leverage existing Transformers implementations for VLMs, broadening the range of supported models.
- Transformers Backend Enhancement: Modifies the model loading logic to correctly identify and load both causal language models (`TransformersForCausalLM`) and multimodal language models (`TransformersForMultimodalLM`) from Transformers; see the sketch after this list.
- Multimodal Input Processing: Introduces new classes and functions to handle multimodal inputs, including image processing and embedding, enabling VLMs to process both text and image data.
- Testing: Adds new tests to verify the functionality of VLMs with the Transformers backend, ensuring compatibility with various multimodal models.
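The check behind that architecture selection is small enough to show in isolation. The sketch below is a simplified, standalone illustration (not the PR's exact code); the model names are only examples, and it relies on the `get_text_config()` method that recent versions of `transformers` expose on `PretrainedConfig`.

```python
# Simplified sketch of the backend-class selection: composite (multimodal)
# configs have a nested text config, while plain causal-LM configs are their
# own text config. Model names below are illustrative only.
from transformers import AutoConfig, PretrainedConfig


def pick_transformers_arch(hf_config: PretrainedConfig) -> str:
    # For multimodal configs, get_text_config() returns the nested text
    # backbone config, which compares unequal to the top-level config.
    if hf_config.get_text_config() != hf_config:
        return "TransformersForMultimodalLM"
    return "TransformersForCausalLM"


vlm_cfg = AutoConfig.from_pretrained("llava-hf/llava-1.5-7b-hf")
lm_cfg = AutoConfig.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
print(pick_transformers_arch(vlm_cfg))  # TransformersForMultimodalLM
print(pick_transformers_arch(lm_cfg))   # TransformersForCausalLM
```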
Code Review
This pull request introduces support for Vision Language Models (VLMs) using the transformers backend, which is a significant and valuable addition. The implementation is comprehensive, covering model loading, a new `TransformersForMultimodalLM` class, and multimodal data processing. The code is generally well-structured. I've identified a few areas for improvement, mainly concerning maintainability and the robustness of the VLM support for models beyond the ones tested. My feedback includes a minor documentation fix, a suggestion to refactor duplicated code, and a note on some hardcoded logic that might need to be generalized in the future.
# HF processor will return `mm_token_type_ids` from which
# we can infer mm_placeholders. Until then hardcode to make code run
# Below tested on Llava. Prompts and `mm_token_type_ids` are always bs=1
mm_positions = torch.where(mm_token_type_ids == 1)[1]
The comment here indicates that the logic for inferring `mm_placeholders` is hardcoded and has only been tested on Llava. This approach is brittle and may not generalize to other vision-language models, potentially leading to incorrect behavior or silent failures when new models are used with this backend.
For long-term maintainability and correctness, it would be more robust to implement a more generic mechanism or a model-specific registration system for this kind of logic. While this might be a temporary solution, it introduces significant technical debt.
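To make the suggestion concrete, here is a purely hypothetical sketch of such a registration system; none of these names exist in vLLM, and the Llava resolver simply mirrors the hardcoded logic quoted above.

```python
# Hypothetical sketch only: a per-architecture registry for turning processor
# outputs into multimodal placeholder positions. Not a real vLLM API.
from typing import Callable, Dict

import torch

_PLACEHOLDER_RESOLVERS: Dict[str, Callable[[torch.Tensor], torch.Tensor]] = {}


def register_placeholder_resolver(arch: str):
    """Register a resolver for the given Hugging Face architecture name."""
    def wrapper(fn: Callable[[torch.Tensor], torch.Tensor]):
        _PLACEHOLDER_RESOLVERS[arch] = fn
        return fn
    return wrapper


@register_placeholder_resolver("LlavaForConditionalGeneration")
def _llava_resolver(mm_token_type_ids: torch.Tensor) -> torch.Tensor:
    # Same behaviour as the hardcoded line above: token type id 1 marks
    # image tokens, and prompts arrive with batch size 1.
    return torch.where(mm_token_type_ids == 1)[1]


def infer_mm_positions(arch: str, mm_token_type_ids: torch.Tensor) -> torch.Tensor:
    # Fall back to the Llava-style behaviour if no resolver was registered.
    resolver = _PLACEHOLDER_RESOLVERS.get(arch, _llava_resolver)
    return resolver(mm_token_type_ids)
```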
!!! note
    vLLM may not fully optimise the Transformers implementation so you may see degraded performance if comparing a native model to a Transformers model in vLLM.

!!! note
    In case of vision language models if you are loading with `dtype="auto"`, vLLM loads the whole model with config's `dtype` if it exists. In contrast the native Trasnformers will respect the `dtype` attribute of each backbone in the model. That might cause a slight difference in performance.
There is a typo in `Trasnformers`. It should be `Transformers`.
- In case of vision language models if you are loading with `dtype="auto"`, vLLM loads the whole model with config's `dtype` if it exists. In contrast the native Trasnformers will respect the `dtype` attribute of each backbone in the model. That might cause a slight difference in performance.
+ In case of vision language models if you are loading with `dtype="auto"`, vLLM loads the whole model with config's `dtype` if it exists. In contrast the native Transformers will respect the `dtype` attribute of each backbone in the model. That might cause a slight difference in performance.
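Separately from the typo, readers who want deterministic precision for a VLM can sidestep the `dtype="auto"` behaviour described in that note by passing an explicit dtype. The snippet below is a minimal usage sketch, assuming the Transformers backend is selected through vLLM's `model_impl` argument; the model name is only an example.

```python
# Minimal sketch: pin the dtype explicitly instead of relying on dtype="auto",
# so the whole model runs in a known precision. Model name is illustrative.
from vllm import LLM

llm = LLM(
    model="llava-hf/llava-1.5-7b-hf",
    model_impl="transformers",  # request the Transformers backend
    dtype="bfloat16",           # explicit override instead of "auto"
)
```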
# Check if text-config is `self`. If not most probably it is
# a composite config, i.e. mutlimodal
if model_config.hf_config.get_text_config(
) != model_config.hf_config:
    architectures[i] = "TransformersForMultimodalLM"
else:
    architectures[i] = "TransformersForCausalLM"
This block of logic for determining the transformer architecture is duplicated from lines 209-215. To improve maintainability and prevent potential inconsistencies, I recommend extracting this logic into a local helper function within `resolve_transformers_arch`.
For example:
def _get_transformers_arch():
    # Check if text-config is `self`. If not, it is a composite config, i.e. multimodal
    if model_config.hf_config.get_text_config() != model_config.hf_config:
        return "TransformersForMultimodalLM"
    else:
        return "TransformersForCausalLM"

# ... then call _get_transformers_arch() in both places.
This pull request has merge conflicts that must be resolved before it can be merged.
Same as #13754 but with all commits squashed and signed off. Rebasing caused too much trouble while resolving merge conflicts.
cc @Isotr0py