Commit 260127e

[Docs] Add intro and fix 1-2-3 list in frameworks/open-webui.md (#19199)
Signed-off-by: windsonsea <haifeng.yao@daocloud.io>
1 parent d0dc4cf commit 260127e

File tree: 2 files changed, +33 −17 lines changed
docs/assets/deployment/open_webui.png (−10.4 KB)

frameworks/open-webui.md: 33 additions & 17 deletions

@@ -1,26 +1,42 @@
 # Open WebUI
 
-1. Install the [Docker](https://docs.docker.com/engine/install/)
+[Open WebUI](https://github.com/open-webui/open-webui) is an extensible, feature-rich,
+and user-friendly self-hosted AI platform designed to operate entirely offline.
+It supports various LLM runners like Ollama and OpenAI-compatible APIs,
+with built-in RAG capabilities, making it a powerful AI deployment solution.
 
-2. Start the vLLM server with the supported chat completion model, e.g.
+To get started with Open WebUI using vLLM, follow these steps:
 
-   ```bash
-   vllm serve qwen/Qwen1.5-0.5B-Chat
-   ```
+1. Install the [Docker](https://docs.docker.com/engine/install/).
 
-1. Start the [Open WebUI](https://github.com/open-webui/open-webui) docker container (replace the vllm serve host and vllm serve port):
+2. Start the vLLM server with a supported chat completion model:
 
-   ```bash
-   docker run -d -p 3000:8080 \
-      --name open-webui \
-      -v open-webui:/app/backend/data \
-      -e OPENAI_API_BASE_URL=http://<vllm serve host>:<vllm serve port>/v1 \
-      --restart always \
-      ghcr.io/open-webui/open-webui:main
-   ```
+   ```console
+   vllm serve Qwen/Qwen3-0.6B-Chat
+   ```
 
-1. Open it in the browser: <http://open-webui-host:3000/>
+   !!! note
+       When starting the vLLM server, be sure to specify the host and port using the `--host` and `--port` flags.
+       For example:
 
-   On the top of the web page, you can see the model `qwen/Qwen1.5-0.5B-Chat`.
+       ```console
+       python -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 8000
+       ```
 
-![](../../assets/deployment/open_webui.png)
+3. Start the Open WebUI Docker container:
+
+   ```console
+   docker run -d \
+      --name open-webui \
+      -p 3000:8080 \
+      -v open-webui:/app/backend/data \
+      -e OPENAI_API_BASE_URL=http://0.0.0.0:8000/v1 \
+      --restart always \
+      ghcr.io/open-webui/open-webui:main
+   ```
+
+4. Open it in the browser: <http://open-webui-host:3000/>
+
+   At the top of the page, you should see the model `Qwen/Qwen3-0.6B-Chat`.
+
+![Web portal of model Qwen/Qwen3-0.6B-Chat](../../assets/deployment/open_webui.png)
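The link between steps 2 and 3 of the updated guide is the `OPENAI_API_BASE_URL` value: it must combine the vLLM server's host, its port, and the `/v1` path prefix. A minimal shell sketch of composing and probing that URL before starting the container (the `VLLM_HOST`/`VLLM_PORT` variable names are illustrative, not vLLM or Open WebUI settings, and the `curl` probe is an extra sanity check not part of the guide):

```shell
#!/bin/sh
# Compose the base URL Open WebUI will be given via OPENAI_API_BASE_URL,
# from the host/port that were passed to vLLM with --host/--port.
# VLLM_HOST and VLLM_PORT are illustrative names, not real settings.
VLLM_HOST="127.0.0.1"
VLLM_PORT="8000"
OPENAI_API_BASE_URL="http://${VLLM_HOST}:${VLLM_PORT}/v1"
echo "$OPENAI_API_BASE_URL"

# Probe the OpenAI-compatible /models endpoint; a running vLLM server
# answers with a model list whose id matches the `vllm serve` argument.
if curl -sf "${OPENAI_API_BASE_URL}/models" >/dev/null 2>&1; then
  echo "vLLM endpoint reachable"
else
  echo "vLLM endpoint not reachable"
fi
```

If the probe fails, recheck the `--host`/`--port` flags from the note in step 2 before debugging the container itself.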
