
Commit 07b8fae

[Doc] correct LoRA capitalization (#20135)

Signed-off-by: kyolebu <kyu@redhat.com>

1 parent 5623088 · commit 07b8fae

File tree

2 files changed: +2 −2 lines changed


docs/README.md

Lines changed: 1 addition & 1 deletion
@@ -40,7 +40,7 @@ vLLM is flexible and easy to use with:
 - OpenAI-compatible API server
 - Support NVIDIA GPUs, AMD CPUs and GPUs, Intel CPUs, Gaudi® accelerators and GPUs, IBM Power CPUs, TPU, and AWS Trainium and Inferentia Accelerators.
 - Prefix caching support
-- Multi-lora support
+- Multi-LoRA support

 For more information, check out the following:
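The multi-LoRA support mentioned in the README diff refers to serving several LoRA adapters on one base model. A minimal sketch of such an invocation, using vLLM's `--enable-lora` and `--lora-modules` flags (the base model and adapter paths below are illustrative placeholders, not part of this commit):

```shell
# Hedged sketch: serve one base model with two named LoRA adapters.
# Model ID and adapter paths are placeholders for illustration only.
vllm serve meta-llama/Llama-2-7b-hf \
  --enable-lora \
  --lora-modules sql-lora=/path/to/sql_lora_adapter chat-lora=/path/to/chat_lora_adapter
```

Clients of the OpenAI-compatible server can then select an adapter by passing its name (e.g. `sql-lora`) as the `model` field of a request.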

docs/models/supported_models.md

Lines changed: 1 addition & 1 deletion
@@ -427,7 +427,7 @@ Specified using `--task embed`.
 See [relevant issue on HF Transformers](https://github.com/huggingface/transformers/issues/34882).

 !!! note
-    `jinaai/jina-embeddings-v3` supports multiple tasks through lora, while vllm temporarily only supports text-matching tasks by merging lora weights.
+    `jinaai/jina-embeddings-v3` supports multiple tasks through LoRA, while vllm temporarily only supports text-matching tasks by merging LoRA weights.

 !!! note
     The second-generation GTE model (mGTE-TRM) is named `NewModel`. The name `NewModel` is too generic, you should set `--hf-overrides '{"architectures": ["GteNewModel"]}'` to specify the use of the `GteNewModel` architecture.
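The `--hf-overrides` note above can be combined with the `--task embed` flag this doc section documents. A minimal sketch of the full serve command (the model ID is an illustrative placeholder for an mGTE-TRM checkpoint, not stated in the diff):

```shell
# Hedged sketch: force the GteNewModel architecture for an mGTE-TRM
# embedding model. The model ID is a placeholder; --task embed and
# --hf-overrides come from the documentation text above.
vllm serve Alibaba-NLP/gte-multilingual-base \
  --task embed \
  --hf-overrides '{"architectures": ["GteNewModel"]}'
```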

0 commit comments