
Commit 1015296

[doc][mkdocs] fix the duplicate Supported features sections in GPU docs (#19606)
Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>
1 parent ce9dc02 commit 1015296
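
Background on the fix: the `# --8<-- [start:...]` / `# --8<-- [end:...]` lines added below are pymdownx-snippets section markers. Presumably the duplication arose because each included `*.inc.md` file carried its own literal `## Supported features` heading; with named sections, the parent installation page can instead include exactly one copy of that content where it wants it. A minimal sketch of such an include, assuming MkDocs with `pymdownx.snippets` enabled (the parent-page path and heading here are illustrative and not part of this commit):

```markdown
<!-- Illustrative parent page (e.g. docs/getting_started/installation/gpu.md); not part of this commit. -->
<!-- pymdownx.snippets pulls in only the named section from the include file: -->

## Supported features

--8<-- "docs/getting_started/installation/gpu/cuda.inc.md:supported-features"
```

Because the `end:build-image-from-source` marker is added just before the new section, the supported-features text also no longer rides along when only the build instructions are included.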

File tree

3 files changed: +12 −3 lines


docs/getting_started/installation/gpu/cuda.inc.md

Lines changed: 4 additions & 1 deletion

@@ -254,7 +254,10 @@ The latest code can contain bugs and may not be stable. Please use it with caution.
 
 See [deployment-docker-build-image-from-source][deployment-docker-build-image-from-source] for instructions on building the Docker image.
 
-## Supported features
+# --8<-- [end:build-image-from-source]
+# --8<-- [start:supported-features]
 
 See [feature-x-hardware][feature-x-hardware] compatibility matrix for feature support information.
+
+# --8<-- [end:supported-features]
 # --8<-- [end:extra-information]

docs/getting_started/installation/gpu/rocm.inc.md

Lines changed: 4 additions & 1 deletion

@@ -217,7 +217,10 @@ docker run -it \
 
 Where the `<path/to/model>` is the location where the model is stored, for example, the weights for llama2 or llama3 models.
 
-## Supported features
+# --8<-- [end:build-image-from-source]
+# --8<-- [start:supported-features]
 
 See [feature-x-hardware][feature-x-hardware] compatibility matrix for feature support information.
+
+# --8<-- [end:supported-features]
 # --8<-- [end:extra-information]

docs/getting_started/installation/gpu/xpu.inc.md

Lines changed: 4 additions & 1 deletion

@@ -63,7 +63,8 @@ $ docker run -it \
 vllm-xpu-env
 ```
 
-## Supported features
+# --8<-- [end:build-image-from-source]
+# --8<-- [start:supported-features]
 
 XPU platform supports **tensor parallel** inference/serving and also supports **pipeline parallel** as a beta feature for online serving. We require Ray as the distributed runtime backend. For example, a reference execution like following:
 
@@ -78,4 +79,6 @@ python -m vllm.entrypoints.openai.api_server \
 ```
 
 By default, a ray instance will be launched automatically if no existing one is detected in the system, with `num-gpus` equals to `parallel_config.world_size`. We recommend properly starting a ray cluster before execution, referring to the <gh-file:examples/online_serving/run_cluster.sh> helper script.
+
+# --8<-- [end:supported-features]
 # --8<-- [end:extra-information]
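
The XPU hunk above elides the reference command it refers to (old lines 70-77 are not shown in this diff). Purely as an illustration of the kind of Ray-backed tensor-/pipeline-parallel launch that paragraph describes (the model name and flag values below are placeholders and assumptions, not the command from `xpu.inc.md`), it could look roughly like this:

```bash
# Illustrative sketch only; the model and flag values are assumptions, not taken from xpu.inc.md.
# Tensor parallelism across 4 devices and 2 pipeline stages, with Ray as the distributed backend;
# vLLM starts a local Ray instance automatically if none is already running.
python -m vllm.entrypoints.openai.api_server \
    --model meta-llama/Llama-3.1-8B-Instruct \
    --dtype bfloat16 \
    --distributed-executor-backend ray \
    --tensor-parallel-size 4 \
    --pipeline-parallel-size 2
```

For multi-node setups, the paragraph recommends starting the Ray cluster yourself beforehand, for example with the <gh-file:examples/online_serving/run_cluster.sh> helper it links to.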
