
Commit 2e5ac2a

Browse files
hmellorChen-zexi
authored andcommitted
[Doc] Fix some MkDocs snippets used in the installation docs (vllm-project#20572)
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
1 parent 3fb267f · commit 2e5ac2a

File tree

8 files changed (+10 −26 lines)

docs/getting_started/installation/cpu/apple.inc.md

Lines changed: 0 additions & 3 deletions
@@ -54,9 +54,6 @@ If the build has error like the following snippet where standard C++ headers can
 ```
 
 # --8<-- [end:build-wheel-from-source]
-# --8<-- [start:set-up-using-docker]
-
-# --8<-- [end:set-up-using-docker]
 # --8<-- [start:pre-built-images]
 
 # --8<-- [end:pre-built-images]
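
For context: the `--8<--` lines in these files are section markers for the PyMdown Extensions snippets preprocessor that the MkDocs site uses. A `[start:name]`/`[end:name]` pair delimits a named region that other pages can include by name, so the empty `set-up-using-docker` pair deleted above defined a section with no content. A minimal sketch of a marker pair in an include file (the body text here is illustrative, not from this commit):

```markdown
# --8<-- [start:build-wheel-from-source]
Platform-specific build-from-source instructions go here.
# --8<-- [end:build-wheel-from-source]
```

The same empty pair is removed from the ARM, s390x, and x86 include files below.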

docs/getting_started/installation/cpu/arm.inc.md

Lines changed: 0 additions & 3 deletions
@@ -28,9 +28,6 @@ ARM CPU backend currently supports Float32, FP16 and BFloat16 datatypes.
 Testing has been conducted on AWS Graviton3 instances for compatibility.
 
 # --8<-- [end:build-wheel-from-source]
-# --8<-- [start:set-up-using-docker]
-
-# --8<-- [end:set-up-using-docker]
 # --8<-- [start:pre-built-images]
 
 # --8<-- [end:pre-built-images]

docs/getting_started/installation/cpu/s390x.inc.md

Lines changed: 0 additions & 3 deletions
@@ -56,9 +56,6 @@ Execute the following commands to build and install vLLM from the source.
 ```
 
 # --8<-- [end:build-wheel-from-source]
-# --8<-- [start:set-up-using-docker]
-
-# --8<-- [end:set-up-using-docker]
 # --8<-- [start:pre-built-images]
 
 # --8<-- [end:pre-built-images]

docs/getting_started/installation/cpu/x86.inc.md

Lines changed: 0 additions & 3 deletions
@@ -31,9 +31,6 @@ vLLM initially supports basic model inferencing and serving on x86 CPU platform,
 - If you want to force enable AVX512_BF16 for the cross-compilation, please set environment variable `VLLM_CPU_AVX512BF16=1` before the building.
 
 # --8<-- [end:build-wheel-from-source]
-# --8<-- [start:set-up-using-docker]
-
-# --8<-- [end:set-up-using-docker]
 # --8<-- [start:pre-built-images]
 
 See [https://gallery.ecr.aws/q9t5s3a7/vllm-cpu-release-repo](https://gallery.ecr.aws/q9t5s3a7/vllm-cpu-release-repo)

docs/getting_started/installation/gpu.md

Lines changed: 2 additions & 2 deletions
@@ -46,11 +46,11 @@ vLLM is a Python library that supports the following GPU variants. Select your G
 
 === "AMD ROCm"
 
-    There is no extra information on creating a new Python environment for this device.
+    --8<-- "docs/getting_started/installation/gpu/rocm.inc.md:set-up-using-python"
 
 === "Intel XPU"
 
-    There is no extra information on creating a new Python environment for this device.
+    --8<-- "docs/getting_started/installation/gpu/xpu.inc.md:set-up-using-python"
 
 ### Pre-built wheels
 
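The two added lines use the scoped form of the snippets include syntax, where `path/to/file.md:section-name` pulls in only the region between the matching `[start:section-name]` and `[end:section-name]` markers rather than the whole file. A sketch of how the include sits inside an MkDocs content tab, assuming the standard four-space indentation that keeps included content inside the tab (the path is the real one from this diff):

```markdown
=== "AMD ROCm"

    --8<-- "docs/getting_started/installation/gpu/rocm.inc.md:set-up-using-python"
```

Together with the rocm.inc.md change below, the tab now renders the sentence maintained once in the include file instead of a copy hard-coded in gpu.md.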
docs/getting_started/installation/gpu/cuda.inc.md

Lines changed: 0 additions & 4 deletions
@@ -232,9 +232,6 @@ pip install -e .
 ```
 
 # --8<-- [end:build-wheel-from-source]
-# --8<-- [start:set-up-using-docker]
-
-# --8<-- [end:set-up-using-docker]
 # --8<-- [start:pre-built-images]
 
 See [deployment-docker-pre-built-image][deployment-docker-pre-built-image] for instructions on using the official Docker image.
@@ -261,4 +258,3 @@ See [deployment-docker-build-image-from-source][deployment-docker-build-image-fr
 See [feature-x-hardware][feature-x-hardware] compatibility matrix for feature support information.
 
 # --8<-- [end:supported-features]
-# --8<-- [end:extra-information]

docs/getting_started/installation/gpu/rocm.inc.md

Lines changed: 6 additions & 4 deletions
@@ -2,6 +2,9 @@
 
 vLLM supports AMD GPUs with ROCm 6.3.
 
+!!! tip
+    [Docker](#set-up-using-docker) is the recommended way to use vLLM on ROCm.
+
 !!! warning
     There are no pre-built wheels for this device, so you must either use the pre-built Docker image or build vLLM from source.
 
@@ -14,6 +17,8 @@ vLLM supports AMD GPUs with ROCm 6.3.
 # --8<-- [end:requirements]
 # --8<-- [start:set-up-using-python]
 
+There is no extra information on creating a new Python environment for this device.
+
 # --8<-- [end:set-up-using-python]
 # --8<-- [start:pre-built-wheels]
 
@@ -123,9 +128,7 @@ Currently, there are no pre-built ROCm wheels.
 - For MI300x (gfx942) users, to achieve optimal performance, please refer to [MI300x tuning guide](https://rocm.docs.amd.com/en/latest/how-to/tuning-guides/mi300x/index.html) for performance optimization and tuning tips on system and workflow level.
   For vLLM, please refer to [vLLM performance optimization](https://rocm.docs.amd.com/en/latest/how-to/tuning-guides/mi300x/workload.html#vllm-performance-optimization).
 
-## Set up using Docker (Recommended)
-
-# --8<-- [end:set-up-using-docker]
+# --8<-- [end:build-wheel-from-source]
 # --8<-- [start:pre-built-images]
 
 The [AMD Infinity hub for vLLM](https://hub.docker.com/r/rocm/vllm/tags) offers a prebuilt, optimized
@@ -227,4 +230,3 @@ Where the `<path/to/model>` is the location where the model is stored, for examp
 See [feature-x-hardware][feature-x-hardware] compatibility matrix for feature support information.
 
 # --8<-- [end:supported-features]
-# --8<-- [end:extra-information]
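
Two fixes happen in this file: the literal `## Set up using Docker (Recommended)` heading is replaced by a `!!! tip` admonition near the top, and the region that presumably opens earlier with `# --8<-- [start:build-wheel-from-source]` is now closed by the matching `end:build-wheel-from-source` marker instead of the mismatched `end:set-up-using-docker`. A sketch of the corrected pairing (surrounding content elided):

```markdown
# --8<-- [start:build-wheel-from-source]
...build-from-source instructions...
# --8<-- [end:build-wheel-from-source]
# --8<-- [start:pre-built-images]
```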

docs/getting_started/installation/gpu/xpu.inc.md

Lines changed: 2 additions & 4 deletions
@@ -14,6 +14,8 @@ vLLM initially supports basic model inference and serving on Intel GPU platform.
 # --8<-- [end:requirements]
 # --8<-- [start:set-up-using-python]
 
+There is no extra information on creating a new Python environment for this device.
+
 # --8<-- [end:set-up-using-python]
 # --8<-- [start:pre-built-wheels]
 
@@ -43,9 +45,6 @@ VLLM_TARGET_DEVICE=xpu python setup.py install
 type is supported on Intel Data Center GPU, not supported on Intel Arc GPU yet.
 
 # --8<-- [end:build-wheel-from-source]
-# --8<-- [start:set-up-using-docker]
-
-# --8<-- [end:set-up-using-docker]
 # --8<-- [start:pre-built-images]
 
 Currently, there are no pre-built XPU images.
@@ -86,4 +85,3 @@ By default, a ray instance will be launched automatically if no existing one is
 XPU platform uses **torch-ccl** for torch<2.8 and **xccl** for torch>=2.8 as distributed backend, since torch 2.8 supports **xccl** as built-in backend for XPU.
 
 # --8<-- [end:distributed-backend]
-# --8<-- [end:extra-information]
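
The trailing `# --8<-- [end:extra-information]` deletions here and in the CUDA and ROCm files follow one pattern: each include file now ends at its last named section. Presumably the consuming pages now pull in those sections individually with scoped includes, along the lines of the gpu.md change above (this consumer line is hypothetical, not part of this commit):

```markdown
--8<-- "docs/getting_started/installation/gpu/xpu.inc.md:distributed-backend"
```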
