Including models other than HF models in the vLLM backend #2593
HalteroXHunter
asked in Q&A
Replies: 1 comment
-
Trying to run a local model as well, but I can only find HF configs. Any answers?
-
I wanted to ask about the configuration enabled by the vLLM backend: https://github.com/triton-inference-server/vllm_backend
I want to load a locally trained model that is not on the HF Hub. Is it possible to point to models that are not on HF in the model.json file?
vllm/vllm/engine/arg_utils.py, line 11 (commit ee8217e)
Or does the vLLM backend only support HF models?
Thanks in advance!
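For anyone finding this later: vLLM's model engine argument (the one defined in arg_utils.py above) accepts a local directory path as well as a Hugging Face Hub ID, so pointing model.json at a local checkpoint directory should be worth trying. Below is a minimal sketch, assuming the directory contains HF-format files (config.json, tokenizer, weights); the path is a placeholder, and the other fields are just common engine arguments that the backend forwards to vLLM, not a complete or authoritative list.

```json
{
    "model": "/path/to/my-local-model",
    "disable_log_requests": true,
    "gpu_memory_utilization": 0.9
}
```

As a sanity check outside Triton, if standalone vLLM can load the directory (for example via LLM(model="/path/to/my-local-model")), the same path should work from the backend's model.json.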