LFM2 #20797
Conversation
…ormers >= 4.54.0.dev0
…usal_conv1d kernel
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a small subset of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. 🚀
Summary of Changes
Hello @paulpak58, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces the LFM2 model architecture into vLLM, including necessary components for hybrid attention and convolution-based layers. It also incorporates changes to handle CUDA versions, transformers library updates, and integration with the vLLM v1 framework.
Highlights
- `CMakeLists.txt`: Added logic to ignore `nvToolsExt` for CUDA 12.9 by creating an imported interface library if it's not already a target.
- `vllm/config.py`: Modified `get_num_layers_by_block_type` to handle hybrid models, specifically addressing an attribute change in `transformers` library versions >= 4.54.0.dev0. It now checks for `layer_types` in `hf_text_config` if `layers_block_type` is not found in `hf_config`. Also, the logic for summing the block types has been updated to account for 'full_attention' blocks when the block type is 'attention'.
- `vllm/model_executor/layers/conv.py`: Introduced a new `ShortConv` custom operator implementing a short convolution layer, with forward passes for both native and CUDA execution. The CUDA forward pass includes logic for prefill and decode stages, using `causal_conv1d_fn` and `causal_conv1d_update` respectively (a simplified sketch of this prefill/decode pattern appears after this list). The class also defines a `get_state_shape` method.
- `vllm/model_executor/models/conv_cache.py`: Introduced a `ConvCacheParams` dataclass and a `ConvCacheManager` class to manage the convolution state cache. The `ConvCacheManager` inherits from `ConstantSizeCache` and provides methods for copying the cache, retrieving tensors for the current run, and providing CUDA graph capture inputs.
- `vllm/model_executor/models/lfm2.py`: Added new modules and classes for the LFM2 model architecture, including `LFM2MLP`, `LFM2Attention`, `LFM2AttentionDecoderLayer`, `LFM2ShortConvDecoderLayer`, `LFM2Model`, and `LFM2ForCausalLM`. These components define the layers and overall structure of the LFM2 model, supporting hybrid attention and convolution-based layers. The `LFM2ForCausalLM` class integrates the LFM2 model with the vLLM framework, including cache management and logits processing.
- `vllm/model_executor/models/registry.py`: Registered `LFM2ForCausalLM` in the model registry, associating it with the 'lfm2' identifier.
- `vllm/transformers_utils/configs/ovis.py`: Wrapped the `AutoConfig.register` call for `AIMv2Config` in a try-except block to prevent errors if `AutoConfig` is not available.
- `vllm/utils/__init__.py`: Added `conv` to the `LayerBlockType` enum.
- `vllm/v1/attention/backends/mamba_attn.py`: Added a `get_short_conv_chunk_size` function and updated `Mamba2AttentionMetadataBuilder` to support `ShortConvSpec`.
- `vllm/v1/core/single_type_kv_cache_manager.py`: Added `ShortConvSpec` to the dictionary of KV cache specs.
- `vllm/v1/kv_cache_interface.py`: Added a `ShortConvSpec` dataclass to define the specification for the short convolution KV cache.
- `vllm/v1/worker/gpu_model_runner.py`: Imported `ShortConv`, updated `initialize_attn_backend` to support `ShortConvSpec`, and modified `_reshape_kv_cache_tensors` and `get_kv_cache_spec` to handle short convolution layers.
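For readers new to the prefill/decode split mentioned in the `conv.py` highlight, here is a minimal, self-contained PyTorch sketch of a short depthwise causal convolution with a rolling state cache. It illustrates the general pattern only: the class name `ShortConvSketch` and its methods are hypothetical and do not reproduce the actual `ShortConv` operator or the `causal_conv1d_fn`/`causal_conv1d_update` kernel signatures used in this PR.

```python
import torch
import torch.nn.functional as F


class ShortConvSketch(torch.nn.Module):
    """Illustrative depthwise causal conv with a rolling state cache (hypothetical)."""

    def __init__(self, dim: int, kernel_size: int = 3):
        super().__init__()
        self.kernel_size = kernel_size
        # One 1D filter per channel (depthwise convolution).
        self.weight = torch.nn.Parameter(torch.randn(dim, kernel_size) * 0.02)

    def prefill(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # x: (batch, seq_len, dim). Left-pad so the convolution is causal.
        x_bdt = x.transpose(1, 2)                          # (batch, dim, seq_len)
        padded = F.pad(x_bdt, (self.kernel_size - 1, 0))
        out = F.conv1d(padded, self.weight.unsqueeze(1), groups=x_bdt.shape[1])
        # The last (kernel_size - 1) inputs become the decode-time state.
        conv_state = x_bdt[:, :, -(self.kernel_size - 1):]
        return out.transpose(1, 2), conv_state

    def decode_step(self, x_t: torch.Tensor, conv_state: torch.Tensor):
        # x_t: (batch, dim), one new token per request.
        window = torch.cat([conv_state, x_t.unsqueeze(-1)], dim=-1)  # (batch, dim, k)
        out = (window * self.weight).sum(dim=-1)                     # (batch, dim)
        return out, window[:, :, 1:]                                 # slide the state window


# Usage sketch: prefill a 5-token prompt, then decode one additional token.
conv = ShortConvSketch(dim=8)
prompt = torch.randn(2, 5, 8)
prefill_out, state = conv.prefill(prompt)
next_token = torch.randn(2, 8)
decode_out, state = conv.decode_step(next_token, state)
```

Prefill processes the whole prompt with a left-padded convolution and stores the trailing inputs as state; each decode step then consumes one new token per request and slides that state window forward.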
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
- Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.
Code Review
This pull request introduces support for the LFM2 model, a new hybrid architecture. The changes are comprehensive, affecting model definition, caching mechanisms, and the core engine logic. The implementation cleverly reuses existing infrastructure for Mamba-like layers to handle the new convolution layers, which is a great approach.
I've provided several suggestions to enhance code clarity, maintainability, and robustness. These include refactoring duplicated code, simplifying complex conditions, and improving exception handling. Overall, this is a solid contribution.
```python
try:
    AutoConfig.register("aimv2", AIMv2Config)
except Exception as e:
    pass
```
Using a bare `except Exception: pass` is generally discouraged as it can silently swallow any and all exceptions, potentially hiding bugs or important issues during initialization. If the goal is to handle cases where a config is already registered, it would be better to catch a more specific exception, like `ValueError` or whatever `AutoConfig.register` raises in that case. If a broad exception is truly needed, it should at least log the exception details for debugging purposes.
Suggested change:

```python
try:
    AutoConfig.register("aimv2", AIMv2Config)
except Exception as e:
    # It's possible the config is already registered in some environments.
    # Log the exception for debugging purposes.
    logger.debug("Failed to register 'aimv2' config: %s", e)
```
```python
return sum(
    1 for t in layers_block_type_value[start:end]
    if (t == "full_attention" and "attention" == block_type.value) or (t == block_type.value)
)
```
This conditional logic for counting layer types is a bit complex and hard to read. It can be simplified by handling the special case for `attention` layers separately, which would make the code more readable and easier to maintain.
Suggested change:

```python
if block_type == LayerBlockType.attention:
    return sum(t in ("attention", "full_attention")
               for t in layers_block_type_value[start:end])
return sum(t == block_type.value
           for t in layers_block_type_value[start:end])
```
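As a quick illustration of what the simplified counting does, here is a small, self-contained example using a made-up `layers_block_type_value` list (not taken from the PR):

```python
# Hypothetical layer layout for a small hybrid model (illustration only).
layers_block_type_value = ["full_attention", "conv", "conv", "full_attention"]
start, end = 0, 4

# Attention layers: both "attention" and "full_attention" entries count.
num_attention = sum(t in ("attention", "full_attention")
                    for t in layers_block_type_value[start:end])
assert num_attention == 2

# Convolution layers: plain equality against the block type is enough.
num_conv = sum(t == "conv" for t in layers_block_type_value[start:end])
assert num_conv == 2
```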
```python
return contextualized_states

num_prefills = attn_metadata.num_prefills  # request count
num_decodes = attn_metadata.num_decode_tokens  # token count (=request)
```
The variable name `num_decodes` is a bit misleading, as it stores the number of decode tokens, not requests. In the decode phase this is usually one token per request, but for clarity and consistency with `num_prefill_tokens`, it would be better to name it `num_decode_tokens`. This would improve readability for future maintainers.
Suggested change:

```python
num_decode_tokens = attn_metadata.num_decode_tokens  # token count (=request)
```
```python
return contextualized_states

def get_state_shape(self) -> tuple[tuple[int, ...], tuple[int, ...]]:
```
The return type hint `tuple[tuple[int, ...], tuple[int, ...]]` indicates a tuple containing two tuples of integers. However, the function returns a tuple containing only one tuple: `(conv_state_shape,)`.
To match the implementation and the expected usage with `MambaSpec`-like structures, the type hint should be `tuple[tuple[int, ...], ...]`, which correctly represents a tuple containing one or more tuples of integers.
Suggested change:

```python
def get_state_shape(self) -> tuple[tuple[int, ...], ...]:
```
```python
class ShortConvSpec(KVCacheSpec):
    """Nearly identical to MambaSpec above."""
    shapes: tuple[tuple[int, ...], ...]
    dtype: torch.dtype
    page_size_padded: Optional[int] = None

    def __post_init__(self):
        self.num_elements = sum(prod(shape) for shape in self.shapes)

    @property
    def type_id(self) -> str:
        return f"short_conv_{self.shapes}_{self.dtype}"

    @property
    def page_size_bytes(self) -> int:
        page_size = self.num_elements * get_dtype_size(self.dtype)
        if self.page_size_padded is not None:
            assert self.page_size_padded >= page_size
            return self.page_size_padded
        return page_size

    def max_memory_usage_bytes(self, vllm_config: VllmConfig) -> int:
        # We allocate 1 block for each request now, so max_memory_usage_bytes
        # is the same as page_size_bytes.
        # Need to update this when supporting prefix caching.
        return self.page_size_bytes
```
The `ShortConvSpec` class is almost identical to `MambaSpec`. This code duplication can lead to maintenance issues in the future. Consider creating a common base class to encapsulate the shared logic, and have both `MambaSpec` and `ShortConvSpec` inherit from it, overriding only the `type_id` property. This would make the code more DRY (Don't Repeat Yourself) and easier to maintain.
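A minimal sketch of what such a shared base could look like, built only from the fields visible in the diff above. The base-class name `ConstantStateSpec` is hypothetical and not part of this PR; `KVCacheSpec` and `get_dtype_size` are assumed to be the symbols already used in `vllm/v1/kv_cache_interface.py`.

```python
from dataclasses import dataclass
from math import prod
from typing import Optional

import torch

# KVCacheSpec and get_dtype_size are assumed to be the existing symbols
# already imported/defined in vllm/v1/kv_cache_interface.py.


@dataclass
class ConstantStateSpec(KVCacheSpec):  # hypothetical base-class name
    """Shared logic for fixed-size per-request state specs (illustrative)."""
    shapes: tuple[tuple[int, ...], ...]
    dtype: torch.dtype
    page_size_padded: Optional[int] = None

    def __post_init__(self):
        self.num_elements = sum(prod(shape) for shape in self.shapes)

    @property
    def page_size_bytes(self) -> int:
        page_size = self.num_elements * get_dtype_size(self.dtype)
        if self.page_size_padded is not None:
            assert self.page_size_padded >= page_size
            return self.page_size_padded
        return page_size


@dataclass
class ShortConvSpec(ConstantStateSpec):
    # Only the type-specific identifier differs between the two specs.
    @property
    def type_id(self) -> str:
        return f"short_conv_{self.shapes}_{self.dtype}"
```

`MambaSpec` would inherit the same way, overriding only its own `type_id` string.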
```python
elif has_conv_layer:
    if self.vllm_config.speculative_config is not None:
        raise NotImplementedError(
            "ShortConv's with speculative decoding is not supported yet.")
    if not self.vllm_config.model_config.enforce_eager:
        raise NotImplementedError(
            "ShortConv's with cuda graph is not supported yet.")
    if self.vllm_config.cache_config.enable_prefix_caching:
        raise NotImplementedError(
            "Prefix caching is not supported for ShortConv's yet.")
    max_model_len = self.vllm_config.model_config.max_model_len

    page_size_padded = self._maybe_pad_fixed_state_page_size(
        attn_layers, short_conv_layers, kv_cache_spec, ShortConvSpec,
        max_model_len, block_size)

    # Set block_size to max_model_len, so that mamba model will always
    # have only one block in the KV cache.
    for layer_name, short_conv_module in short_conv_layers.items():
        kv_cache_spec[layer_name] = ShortConvSpec(
            shapes=short_conv_module.get_state_shape(),
            dtype=self.kv_cache_dtype,
            block_size=max_model_len,
            page_size_padded=page_size_padded)
```
The logic for checking feature support (speculative decoding, CUDA graph, prefix caching) is duplicated for `mamba` and `conv` layers. This could be refactored into a helper function to reduce code duplication and improve maintainability. A single function could take the layer type name as an argument and raise the appropriate `NotImplementedError`.
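One way such a helper could be shaped, following the checks quoted above. The method name `_assert_fixed_state_layer_supported` is hypothetical and not part of this PR:

```python
def _assert_fixed_state_layer_supported(self, layer_kind: str) -> None:
    """Raise if a currently unsupported feature is enabled for a fixed-state
    layer type such as Mamba or ShortConv (illustrative helper only)."""
    if self.vllm_config.speculative_config is not None:
        raise NotImplementedError(
            f"{layer_kind} layers with speculative decoding are not "
            "supported yet.")
    if not self.vllm_config.model_config.enforce_eager:
        raise NotImplementedError(
            f"{layer_kind} layers with CUDA graph are not supported yet.")
    if self.vllm_config.cache_config.enable_prefix_caching:
        raise NotImplementedError(
            f"Prefix caching is not supported for {layer_kind} layers yet.")
```

Both the Mamba and ShortConv branches could then call `self._assert_fixed_state_layer_supported("Mamba")` or `self._assert_fixed_state_layer_supported("ShortConv")` before building their cache specs.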
This pull request has merge conflicts that must be resolved before it can be merged.
Essential Elements of an Effective PR Description Checklist
- (Optional) Documentation update, such as `supported_models.md` and `examples` for a new model.

Purpose

Test Plan

Test Result

(Optional) Documentation Update