Commit b4bab81

Remove unnecessary explicit title anchors and use relative links instead (#20620)
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
1 parent b91cb3f · commit b4bab81


86 files changed (+75, −147 lines)

docs/README.md

Lines changed: 1 addition & 1 deletion
@@ -48,4 +48,4 @@ For more information, check out the following:
 - [vLLM announcing blog post](https://vllm.ai) (intro to PagedAttention)
 - [vLLM paper](https://arxiv.org/abs/2309.06180) (SOSP 2023)
 - [How continuous batching enables 23x throughput in LLM inference while reducing p50 latency](https://www.anyscale.com/blog/continuous-batching-llm-inference) by Cade Daniel et al.
-- [vLLM Meetups][meetups]
+- [vLLM Meetups](community/meetups.md)

docs/api/README.md

Lines changed: 1 addition & 1 deletion
@@ -64,7 +64,7 @@ vLLM provides experimental support for multi-modal models through the [vllm.mult
 Multi-modal inputs can be passed alongside text and token prompts to [supported models][supported-mm-models]
 via the `multi_modal_data` field in [vllm.inputs.PromptType][].

-Looking to add your own multi-modal model? Please follow the instructions listed [here][supports-multimodal].
+Looking to add your own multi-modal model? Please follow the instructions listed [here](../contributing/model/multimodal.md).

 - [vllm.multimodal.MULTIMODAL_REGISTRY][]
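
For context on the `multi_modal_data` field mentioned in this hunk, here is a minimal sketch of passing an image alongside a text prompt. It is not part of the commit; the model name, prompt template, and image path are illustrative assumptions.

```python
# Hedged sketch: pass an image through the multi_modal_data field.
# The model and prompt template are assumptions for illustration;
# consult the supported-models list for models that accept image inputs.
from vllm import LLM
from PIL import Image

llm = LLM(model="llava-hf/llava-1.5-7b-hf")  # assumed multi-modal model
image = Image.open("example.jpg")            # assumed local image file

outputs = llm.generate({
    "prompt": "USER: <image>\nWhat is shown in this image?\nASSISTANT:",
    "multi_modal_data": {"image": image},
})
print(outputs[0].outputs[0].text)
```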

docs/community/contact_us.md

Lines changed: 0 additions & 1 deletion
@@ -1,6 +1,5 @@
 ---
 title: Contact Us
 ---
-[](){ #contactus }

 --8<-- "README.md:contact-us"

docs/community/meetups.md

Lines changed: 0 additions & 1 deletion
@@ -1,7 +1,6 @@
 ---
 title: Meetups
 ---
-[](){ #meetups }

 We host regular meetups in San Francisco Bay Area every 2 months. We will share the project updates from the vLLM team and have guest speakers from the industry to share their experience and insights. Please find the materials of our previous meetups below:

docs/configuration/conserving_memory.md

Lines changed: 1 addition & 1 deletion
@@ -33,7 +33,7 @@ Quantized models take less memory at the cost of lower precision.
 Statically quantized models can be downloaded from HF Hub (some popular ones are available at [Red Hat AI](https://huggingface.co/RedHatAI))
 and used directly without extra configuration.

-Dynamic quantization is also supported via the `quantization` option -- see [here][quantization-index] for more details.
+Dynamic quantization is also supported via the `quantization` option -- see [here](../features/quantization/README.md) for more details.

 ## Context length and batch size
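
For context on the `quantization` option referenced in the changed line, a minimal sketch follows. It is not part of the commit; the model name and the "fp8" method are illustrative assumptions.

```python
# Hedged sketch: request dynamic quantization through the `quantization`
# engine option. Model and method are illustrative; see the quantization
# docs for methods supported on your hardware.
from vllm import LLM

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # assumed model
    quantization="fp8",                        # assumed method
)
```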

docs/configuration/engine_args.md

Lines changed: 2 additions & 3 deletions
@@ -1,12 +1,11 @@
 ---
 title: Engine Arguments
 ---
-[](){ #engine-args }

 Engine arguments control the behavior of the vLLM engine.

-- For [offline inference][offline-inference], they are part of the arguments to [LLM][vllm.LLM] class.
-- For [online serving][serving-openai-compatible-server], they are part of the arguments to `vllm serve`.
+- For [offline inference](../serving/offline_inference.md), they are part of the arguments to [LLM][vllm.LLM] class.
+- For [online serving](../serving/openai_compatible_server.md), they are part of the arguments to `vllm serve`.

 You can look at [EngineArgs][vllm.engine.arg_utils.EngineArgs] and [AsyncEngineArgs][vllm.engine.arg_utils.AsyncEngineArgs] to see the available engine arguments.
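
For context on the two usages described in the changed bullets, a minimal sketch of passing engine arguments to the `LLM` class for offline inference, with the equivalent `vllm serve` flags noted in comments. It is not part of the commit; the model and values are illustrative assumptions.

```python
# Hedged sketch: engine arguments as LLM constructor kwargs (offline).
# The equivalent online form would be flags on `vllm serve`, e.g.
# `vllm serve <model> --max-model-len 8192 --gpu-memory-utilization 0.8`.
from vllm import LLM

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # assumed model
    max_model_len=8192,                        # --max-model-len 8192
    gpu_memory_utilization=0.8,                # --gpu-memory-utilization 0.8
)
```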

docs/configuration/model_resolution.md

Lines changed: 1 addition & 1 deletion
@@ -20,4 +20,4 @@ model = LLM(
 )
 ```

-Our [list of supported models][supported-models] shows the model architectures that are recognized by vLLM.
+Our [list of supported models](../models/supported_models.md) shows the model architectures that are recognized by vLLM.

docs/configuration/serve_args.md

Lines changed: 1 addition & 2 deletions
@@ -1,7 +1,6 @@
 ---
 title: Server Arguments
 ---
-[](){ #serve-args }

 The `vllm serve` command is used to launch the OpenAI-compatible server.

@@ -13,7 +12,7 @@ To see the available CLI arguments, run `vllm serve --help`!
 ## Configuration file

 You can load CLI arguments via a [YAML](https://yaml.org/) config file.
-The argument names must be the long form of those outlined [above][serve-args].
+The argument names must be the long form of those outlined [above](serve_args.md).

 For example:

docs/contributing/benchmarks.md

Lines changed: 0 additions & 1 deletion
@@ -1,7 +1,6 @@
 ---
 title: Benchmark Suites
 ---
-[](){ #benchmarks }

 vLLM contains two sets of benchmarks:

docs/contributing/dockerfile/dockerfile.md

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 # Dockerfile

 We provide a <gh-file:docker/Dockerfile> to construct the image for running an OpenAI compatible server with vLLM.
-More information about deploying with Docker can be found [here][deployment-docker].
+More information about deploying with Docker can be found [here](../../deployment/docker.md).

 Below is a visual representation of the multi-stage Dockerfile. The build graph contains the following nodes:
