[Platform][Worker][ModelRunner] Add LoRA & Multi-LoRA support #521
What this PR does / why we need it?
According to the RFC "[RFC]: Join the MultiLora and MultiLora Dynammic Serving feature develop" (#396) and the vLLM Ascend Roadmap Q2 2025 (#448), this PR adds the relevant code to support (1) Multi-LoRA and (2) Multi-LoRA Dynamic Serving.
The LoRA reference implementation is linked here: LoRA reference
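For context, the sketch below shows how Multi-LoRA is typically exercised through vLLM's public offline API (`LLM` with `enable_lora=True` plus a per-request `LoRARequest`); the model name and adapter path are placeholders and not part of this PR:

```python
# Minimal Multi-LoRA sketch using vLLM's public API.
# The model name and adapter path below are placeholders.
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

llm = LLM(model="meta-llama/Llama-2-7b-hf", enable_lora=True, max_loras=2)
sampling_params = SamplingParams(temperature=0.0, max_tokens=64)

# Each request can carry its own adapter; requests targeting different
# adapters can be batched together (Multi-LoRA).
outputs = llm.generate(
    ["Translate to SQL: how many users signed up today?"],
    sampling_params,
    lora_request=LoRARequest("sql_adapter", 1, "/path/to/sql-lora"),
)
print(outputs[0].outputs[0].text)
```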
Does this PR introduce any user-facing change?
The following OpenAI-compatible HTTP APIs will be supported (a client-side sketch follows the list):
/v1/load_lora_adapter
/v1/unload_lora_adapter
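As a rough client-side sketch of the dynamic-serving flow, assuming an OpenAI-compatible vLLM server on localhost:8000 started with `--enable-lora` and `VLLM_ALLOW_RUNTIME_LORA_UPDATING=True` (the adapter name and path are placeholders):

```python
# Hypothetical client for runtime LoRA loading/unloading against a
# vLLM OpenAI-compatible server; names and paths are placeholders.
import requests

BASE = "http://localhost:8000"

# Register a new adapter at runtime.
resp = requests.post(
    f"{BASE}/v1/load_lora_adapter",
    json={"lora_name": "sql_adapter", "lora_path": "/path/to/sql-lora"},
)
print(resp.status_code, resp.text)

# Unload the adapter once it is no longer needed.
resp = requests.post(
    f"{BASE}/v1/unload_lora_adapter",
    json={"lora_name": "sql_adapter"},
)
print(resp.status_code, resp.text)
```

Gating runtime adapter updates behind the `VLLM_ALLOW_RUNTIME_LORA_UPDATING` environment variable keeps the endpoints disabled by default.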
How was this patch tested?
```bash
git clone https://github.com/vllm-project/vllm.git
cd vllm/examples/offline_inference/ && python3 multilora_inference.py
```