-|`LlamaModel`, `LlamaForCausalLM`, `MistralModel`, etc. | Llama-based |`intfloat/e5-mistral-7b-instruct`, etc. | ✅︎ | ✅︎ ||
-|`Qwen2Model`, `Qwen2ForCausalLM`| Qwen2-based |`ssmits/Qwen2-7B-Instruct-embed-base` (see note), `Alibaba-NLP/gte-Qwen2-7B-instruct` (see note), etc. | ✅︎ | ✅︎ ||
-|`Qwen3Model`, `Qwen3ForCausalLM`| Qwen3-based |`Qwen/Qwen3-Embedding-0.6B`, etc. | ✅︎ | ✅︎ ||
+|`LlamaModel`, `LlamaForCausalLM`, `MistralModel`, etc. | Llama-based |`intfloat/e5-mistral-7b-instruct`, etc. | ✅︎ | ✅︎ |✅︎|
+|`Qwen2Model`, `Qwen2ForCausalLM`| Qwen2-based |`ssmits/Qwen2-7B-Instruct-embed-base` (see note), `Alibaba-NLP/gte-Qwen2-7B-instruct` (see note), etc. | ✅︎ | ✅︎ |✅︎|
+|`Qwen3Model`, `Qwen3ForCausalLM`| Qwen3-based |`Qwen/Qwen3-Embedding-0.6B`, etc. | ✅︎ | ✅︎ |✅︎|
|`RobertaModel`, `RobertaForMaskedLM`| RoBERTa-based |`sentence-transformers/all-roberta-large-v1`, etc. ||||

!!! note
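For example, one of the Llama-based embedding models in the table above can be run offline through vLLM's pooling path. A minimal sketch, assuming a recent vLLM build where the `LLM.embed` API is available:

```python
# Minimal sketch: offline embedding inference (assumes vLLM is installed
# and the model weights can be downloaded).
from vllm import LLM

# task="embed" routes the model through the pooling runner instead of
# the generation runner.
llm = LLM(model="intfloat/e5-mistral-7b-instruct", task="embed")

# LLM.embed returns one output object per prompt; .outputs.embedding is
# a list of floats with the model's hidden size.
(output,) = llm.embed("What is the capital of France?")
print(len(output.outputs.embedding))
```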
@@ -442,9 +442,10 @@ Specified using `--task reward`.
| Architecture | Models | Example HF Models |[LoRA][lora-adapter]|[PP][distributed-serving]|[V1](gh-issue:8779)|
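Reward models are served through the same pooling path. A minimal sketch, assuming the `internlm/internlm2-1_8b-reward` checkpoint and the offline `LLM.encode` API:

```python
# Minimal sketch: offline reward-model inference. The model name is an
# assumption taken from vLLM's reward-model documentation.
from vllm import LLM

llm = LLM(
    model="internlm/internlm2-1_8b-reward",
    task="reward",
    trust_remote_code=True,
)

# LLM.encode returns the raw pooling output; for reward models this is
# the reward data produced by the model's value head.
(output,) = llm.encode("The answer to 2 + 2 is 4.")
print(output.outputs.data)
```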
If your model is not in the above list, we will try to automatically convert the model using [as_classification_model][vllm.model_executor.models.adapters.as_classification_model]. By default, the class probabilities are extracted from the softmaxed hidden state corresponding to the last token.
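As an illustration of that conversion, a sequence-classification checkpoint built on a causal-LM backbone can be loaded with `task="classify"`. A minimal sketch, where the model name is an assumption:

```python
# Minimal sketch: a Qwen2-based classifier loaded via the automatic
# as_classification_model conversion described above.
from vllm import LLM

llm = LLM(model="jason9693/Qwen2.5-1.5B-apeach", task="classify")

# .outputs.probs holds the softmaxed class probabilities taken from the
# hidden state of the last token.
(output,) = llm.classify("vLLM makes serving pooling models easy.")
print(output.outputs.probs)
```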
@@ -471,7 +472,7 @@ Specified using `--task score`.
| Architecture | Models | Example HF Models |[V1](gh-issue:8779)|
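A minimal sketch of the cross-encoder scoring API, assuming the `BAAI/bge-reranker-v2-m3` checkpoint:

```python
# Minimal sketch: cross-encoder scoring of a query/document pair.
from vllm import LLM

llm = LLM(model="BAAI/bge-reranker-v2-m3", task="score")

# LLM.score runs the pair through the cross-encoder and returns a single
# relevance score per pair.
(output,) = llm.score(
    "What is the capital of France?",
    "Paris is the capital of France.",
)
print(output.outputs.score)
```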