# Open WebUI

[Open WebUI](https://github.com/open-webui/open-webui) is an extensible, feature-rich,
and user-friendly self-hosted AI platform designed to operate entirely offline.
It supports various LLM runners like Ollama and OpenAI-compatible APIs,
with built-in RAG capabilities, making it a powerful AI deployment solution.

To get started with Open WebUI using vLLM, follow these steps:

1. Install [Docker](https://docs.docker.com/engine/install/).

2. Start the vLLM server with a supported chat completion model:

    ```console
    vllm serve Qwen/Qwen3-0.6B-Chat
    ```

    !!! note
        When starting the vLLM server, be sure to specify the host and port using the `--host` and `--port` flags.
        For example:

        ```console
        python -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 8000
        ```
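
    Before connecting Open WebUI, you can confirm the vLLM server is reachable. This is a quick sanity check, assuming the server listens on the default `localhost:8000` (adjust the address if you passed different `--host`/`--port` values):

    ```console
    # List the models exposed by the vLLM OpenAI-compatible server;
    # the served model name should appear in the JSON response.
    curl http://localhost:8000/v1/models
    ```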

3. Start the Open WebUI Docker container:

    ```console
    docker run -d \
        --name open-webui \
        -p 3000:8080 \
        -v open-webui:/app/backend/data \
        -e OPENAI_API_BASE_URL=http://0.0.0.0:8000/v1 \
        --restart always \
        ghcr.io/open-webui/open-webui:main
    ```
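
    Note that `0.0.0.0` inside the container refers to the container itself, not the machine running vLLM. If vLLM runs directly on the Docker host (an assumption about your setup), one option is Docker's `host.docker.internal` alias, mapped to the host gateway:

    ```console
    # Variant of the command above that points the container at a
    # vLLM server running on the Docker host machine.
    docker run -d \
        --name open-webui \
        -p 3000:8080 \
        -v open-webui:/app/backend/data \
        --add-host=host.docker.internal:host-gateway \
        -e OPENAI_API_BASE_URL=http://host.docker.internal:8000/v1 \
        --restart always \
        ghcr.io/open-webui/open-webui:main
    ```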

4. Open it in the browser: <http://open-webui-host:3000/>

    At the top of the page, you should see the model `Qwen/Qwen3-0.6B-Chat`.

    
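
Open WebUI talks to vLLM through the same OpenAI-compatible API that you can also call directly. As a final check, here is a minimal sketch assuming the `openai` Python package is installed and the server from step 2 is reachable at `localhost:8000`:

```python
from openai import OpenAI

# vLLM's OpenAI-compatible server does not require a real API key by
# default, but the client library expects one to be set.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-0.6B-Chat",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```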