!!! important

    We have started the process of deprecating V0. Please read [RFC #18571](https://github.com/vllm-project/vllm/issues/18571) for more details.

V1 is now enabled by default for all supported use cases, and we will gradually enable it for every use case we plan to support. Please share any feedback on [GitHub](https://github.com/vllm-project/vllm) or in the [vLLM Slack](https://inviter.co/vllm-slack).
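
If you need to compare engines or temporarily fall back to V0 while it remains available, the engine version can be selected with the `VLLM_USE_V1` environment variable before vLLM is loaded. A minimal sketch with the offline `LLM` API (the model name is only an example):

```python
import os

# Select the engine explicitly before vLLM is imported and the engine is built:
# "1" uses the V1 engine (the default); "0" falls back to V0 while it still exists.
os.environ["VLLM_USE_V1"] = "1"

from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # example model; any supported model works
outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=16))
print(outputs[0].outputs[0].text)
```
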
This living user guide outlines a few known **important changes and limitations** introduced by vLLM V1. The team has been actively working to make V1 the default engine, so this guide will be updated continually as more features gain support in vLLM V1.

## Current Status

For each item, our progress towards V1 support falls into one of the following states:

- **🚀 Optimized**: Nearly fully optimized, with no further work currently planned.
- **🟢 Functional**: Fully operational, with ongoing optimizations.
- **🚧 WIP**: Under active development.
- **🟡 Planned**: Scheduled for future implementation (some may have open PRs/RFCs).
- **🟠 Delayed**: Temporarily dropped in V1 but planned to be re-introduced later.
- **🔴 Deprecated**: Not planned for V1 unless there is strong demand.

vLLM V1 currently excludes model architectures with the `SupportsV0Only` protocol, and the majority fall into the following categories:

**Embedding Models**

The initial support will be provided by [PR #16188](https://github.com/vllm-project/vllm/pull/16188).

Later, we will consider using the [hidden states processor](https://github.com/vllm-project/vllm/issues/12249), which is based on the [global logits processor](https://github.com/vllm-project/vllm/pull/13360), to enable simultaneous generation and embedding using the same engine instance in V1.

**Mamba Models**

Models using selective state-space mechanisms instead of standard transformer attention (e.g., `MambaForCausalLM`, `JambaForCausalLM`) will be supported via [PR #19327](https://github.com/vllm-project/vllm/pull/19327).

**Encoder-Decoder Models**

vLLM V1 is currently optimized for decoder-only transformers. Models requiring cross-attention between separate encoder and decoder are not yet supported (e.g., `BartForConditionalGeneration`, `MllamaForConditionalGeneration`).

For a complete list of supported models, see the [list of supported models](https://docs.vllm.ai/en/latest/models/supported_models.html).

| Feature / Model | Status |
|-----------------|--------|
| **GPU <> CPU KV Cache Swapping** | <nobr>🔴 Deprecated</nobr> |

!!! note

    vLLM V1’s unified scheduler treats both prompt and output tokens the same way by using a simple dictionary (e.g., `{request_id: num_tokens}`) to dynamically allocate a fixed token budget per request, enabling features like chunked prefills, prefix caching, and speculative decoding without a strict separation between prefill and decode phases.
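
To make the scheduling idea concrete, here is an illustrative sketch of a token-budget allocation over such a dictionary. This is not vLLM's actual scheduler code; the budget value and request IDs are invented for the example.

```python
# Illustrative sketch only, NOT vLLM's real scheduler. Each request reports how many
# tokens it wants to process this step, regardless of whether they are prompt
# (prefill) tokens or output (decode) tokens.

TOKEN_BUDGET = 8192  # hypothetical per-step token budget


def schedule_step(num_tokens_needed: dict[str, int]) -> dict[str, int]:
    """Assign tokens to requests until the step budget is exhausted.

    `num_tokens_needed` maps request_id -> tokens the request still wants
    (remaining prompt tokens for a prefill, or 1+ for decode / spec decode).
    """
    remaining = TOKEN_BUDGET
    allocation: dict[str, int] = {}
    for request_id, needed in num_tokens_needed.items():
        if remaining == 0:
            break
        granted = min(needed, remaining)  # long prefills are chunked automatically
        allocation[request_id] = granted
        remaining -= granted
    return allocation


# Two decodes make progress while a long prefill is chunked to the leftover budget.
print(schedule_step({"req-decode-1": 1, "req-decode-2": 1, "req-prefill": 10000}))
# -> {'req-decode-1': 1, 'req-decode-2': 1, 'req-prefill': 8190}
```
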
#### Semantic Changes to Logprobs
vLLM V1 supports logprobs and prompt logprobs. However, there are some important semantic
differences compared to V0:
Currently prompt logprobs are only supported when prefix caching is turned off via `--no-enable-prefix-caching`. In a future release, prompt logprobs will be compatible with prefix caching, but a recomputation will be triggered to recover the full prompt logprobs even upon a prefix cache hit. See details in [RFC #13414](https://github.com/vllm-project/vllm/issues/13414).
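
With the offline `LLM` API, the same constraint maps to the `enable_prefix_caching` engine argument. A minimal sketch (the model name and top-k values are arbitrary):

```python
from vllm import LLM, SamplingParams

# Prompt logprobs currently require prefix caching to be disabled
# (the offline equivalent of the --no-enable-prefix-caching server flag).
llm = LLM(model="facebook/opt-125m", enable_prefix_caching=False)

params = SamplingParams(
    max_tokens=8,
    logprobs=5,         # top-5 logprobs for each sampled output token
    prompt_logprobs=5,  # top-5 logprobs for each prompt token
)

output = llm.generate(["The capital of France is"], params)[0]
print(output.prompt_logprobs)      # per-prompt-token logprobs
print(output.outputs[0].logprobs)  # per-output-token logprobs
```
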
#### WIP Features
These features are already supported in vLLM V1, but their optimization is still in progress.

- **Spec Decode**: Currently, only ngram-based speculative decoding is supported in V1 (a configuration sketch follows this list). There will be follow-up work to support other types of spec decode (e.g., see [PR #13933](https://github.com/vllm-project/vllm/pull/13933)). We will prioritize support for EAGLE and MTP over draft-model-based spec decode.
- **Multimodal Models**: V1 is almost fully compatible with V0, except that interleaved modality input is not supported yet. See [here](https://github.com/orgs/vllm-project/projects/8) for the status of upcoming features and optimizations.
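
A sketch of enabling ngram-based spec decode with the offline `LLM` API is shown below. The spec-decode configuration surface has changed across vLLM releases, so treat the `speculative_config` keys and the model name as assumptions to verify against your installed version:

```python
from vllm import LLM, SamplingParams

# Sketch: ngram-based speculative decoding, where draft tokens are proposed by
# matching n-grams in the prompt rather than by a separate draft model.
llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # example target model
    speculative_config={
        "method": "ngram",            # n-gram prompt-lookup drafting
        "num_speculative_tokens": 5,  # draft tokens proposed per step
        "prompt_lookup_max": 4,       # longest n-gram to match in the prompt
    },
)

outputs = llm.generate(
    ["The future of AI is"],
    SamplingParams(temperature=0.0, max_tokens=64),
)
print(outputs[0].outputs[0].text)
```
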
#### Deprecated Features
As part of the major architectural rework in vLLM V1, several legacy features have been deprecated.
**Structured Output features**
- **Request-level Structured Output Backend**: Deprecated; alternative backends (outlines, guidance) with fallbacks are now supported (see the sketch below).
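
For reference, the replacement is backend selection at engine construction with per-request guided parameters. A minimal sketch with the offline `LLM` API; the `guided_decoding_backend` argument, the `GuidedDecodingParams` class, and the model name are assumptions to check against your installed version:

```python
from vllm import LLM, SamplingParams
from vllm.sampling_params import GuidedDecodingParams

# Sketch only: the structured-output backend is chosen once when the engine is
# built (with fallbacks), not per request as in the deprecated V0 option.
llm = LLM(model="Qwen/Qwen2.5-1.5B-Instruct", guided_decoding_backend="guidance")

params = SamplingParams(
    max_tokens=16,
    guided_decoding=GuidedDecodingParams(choice=["positive", "negative"]),
)
output = llm.generate(["Classify the sentiment: 'I love this movie!' ->"], params)[0]
print(output.outputs[0].text)  # constrained to one of the two choices
```
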