Commit 3ed94f9

[Docs] Enhance Anyscale documentation, add quickstart links for vLLM (#21018)
Signed-off-by: Ricardo Decal <rdecal@anyscale.com>
1 parent fa83956 commit 3ed94f9

File tree: 1 file changed (+11, -2 lines)


docs/deployment/frameworks/anyscale.md

Lines changed: 11 additions & 2 deletions
@@ -3,6 +3,15 @@
 [](){ #deployment-anyscale }
 
 [Anyscale](https://www.anyscale.com) is a managed, multi-cloud platform developed by the creators of Ray.
-It hosts Ray clusters inside your own AWS, GCP, or Azure account, delivering the flexibility of open-source Ray
-without the operational overhead of maintaining Kubernetes control planes, configuring autoscalers, or managing observability stacks.
+
+Anyscale automates the entire lifecycle of Ray clusters in your AWS, GCP, or Azure account, delivering the flexibility of open-source Ray
+without the operational overhead of maintaining Kubernetes control planes, configuring autoscalers, managing observability stacks, or manually managing head and worker nodes with helper scripts like <gh-file:examples/online_serving/run_cluster.sh>.
+
 When serving large language models with vLLM, Anyscale can rapidly provision [production-ready HTTPS endpoints](https://docs.anyscale.com/examples/deploy-ray-serve-llms) or [fault-tolerant batch inference jobs](https://docs.anyscale.com/examples/ray-data-llm).
+
+## Production-ready vLLM on Anyscale quickstarts
+
+- [Offline batch inference](https://console.anyscale.com/template-preview/llm_batch_inference?utm_source=vllm_docs)
+- [Deploy vLLM services](https://console.anyscale.com/template-preview/llm_serving?utm_source=vllm_docs)
+- [Curate a dataset](https://console.anyscale.com/template-preview/audio-dataset-curation-llm-judge?utm_source=vllm_docs)
+- [Finetune an LLM](https://console.anyscale.com/template-preview/entity-recognition-with-llms?utm_source=vllm_docs)
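
For orientation, the service-deployment path these docs point at is Ray Serve's LLM integration, which runs vLLM as the inference engine behind an OpenAI-compatible route. A Serve config along these lines is a minimal sketch, not part of this commit: it assumes a recent Ray with `ray[serve,llm]` installed, and the model id, model source, and replica counts are illustrative placeholders.

```yaml
# Sketch of a Ray Serve config serving an LLM with vLLM via
# ray.serve.llm's OpenAI-compatible app builder.
# All values below are placeholders, not recommendations.
applications:
- name: llm_app
  route_prefix: /
  import_path: ray.serve.llm:build_openai_app
  args:
    llm_configs:
    - model_loading_config:
        model_id: my-llm                          # name clients send in requests
        model_source: Qwen/Qwen2.5-0.5B-Instruct  # Hugging Face model to load
      deployment_config:
        autoscaling_config:
          min_replicas: 1
          max_replicas: 2
```

A config like this can be deployed to a local Ray cluster with `serve deploy` or run as a managed Anyscale service; either way, clients then talk to it through the standard OpenAI chat-completions API.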
