Replies: 1 comment
Hi @Kaszebe, try changing the port number to 8000.
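By default, vLLM's OpenAI-compatible server listens on port 8000, while port 3000 is Open WebUI's own web interface. Below is a minimal sketch for checking that the vLLM endpoint is actually reachable from the host; it assumes the container publishes the default port (e.g. it was started with `-p 8000:8000`), and the host/port are illustrative:

```python
# Minimal check that a vLLM OpenAI-compatible server is reachable.
# Assumes the container publishes vLLM's default port 8000 to the host;
# the base URL below is illustrative.
import json
import urllib.request

VLLM_BASE_URL = "http://localhost:8000/v1"  # the API base URL a client like Open WebUI would use

def list_vllm_models(base_url: str = VLLM_BASE_URL) -> list[str]:
    """Return the model IDs vLLM is currently serving via GET /v1/models."""
    with urllib.request.urlopen(f"{base_url}/models", timeout=5) as resp:
        payload = json.load(resp)
    return [m["id"] for m in payload.get("data", [])]

if __name__ == "__main__":
    try:
        print("vLLM is serving:", list_vllm_models())
    except OSError as exc:
        print("Could not reach vLLM on port 8000:", exc)
```

If this prints a model ID, the same base URL (http://localhost:8000/v1) is what Open WebUI should be pointed at.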
Can anyone tell me how to connect vLLM to Open WebUI?
I think vLLM is installed in a container on my Pop!_OS server. Opening http://localhost:3000/admin/settings brings up Open WebUI in a browser window.
However, I cannot upload a GGUF file from my oobabooga /models folder, nor can I figure out how to get it working with Open WebUI.
I'm wondering whether this is because vLLM is not connected to Open WebUI, or whether this is a question I should be posting on the Open WebUI discussion board instead.
If you can't tell, I'm a bit of a noob. I had to get ChatGPT to help me install vLLM in a Docker container.
The reason I want to run vLLM is that I have 4 GPUs and plan on buying 3 more. This is for work (I am a copywriter) and for learning. What I have is overkill (maybe), but I am spending my spare time reading, researching, and tinkering.
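For context on what "connecting" the two means: Open WebUI can use vLLM as an OpenAI-compatible backend, typically by adding an OpenAI-style API connection in Open WebUI's admin settings with a base URL such as http://localhost:8000/v1. As a rough illustration (not the asker's setup), the sketch below sends the same kind of chat-completion request Open WebUI issues once that connection works; the model name is a placeholder for whatever model the vLLM container was launched to serve.

```python
# Illustrative only: the kind of request an OpenAI-compatible client such as
# Open WebUI sends to vLLM once connected. The base URL and model name are
# placeholders, not values taken from the thread.
import json
import urllib.request

VLLM_BASE_URL = "http://localhost:8000/v1"

def chat(prompt: str, model: str = "my-model") -> str:
    """Send one chat-completion request to vLLM's OpenAI-compatible endpoint."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        f"{VLLM_BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        reply = json.load(resp)
    return reply["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("Say hello in one sentence."))
```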