
Commit 15f016f

tests_gaudi: Fix vllm readme
Signed-off-by: vbedida79 <veenadhari.bedida@intel.com>
1 parent: 8d5c358

File tree

1 file changed: +2 −5 lines

tests/gaudi/l2/README.md

Lines changed: 2 additions & 5 deletions
````diff
@@ -75,8 +75,6 @@ Welcome to HCCL demo
 [BENCHMARK] Algo Bandwidth : 147.548069 GB/s
 ####################################################################################################
 ```
-<<<<<<< HEAD
-=======
 
 ## vLLM
 vLLM is a serving engine for LLM's. The following workloads deploys a VLLM server with an LLM using Intel Gaudi. Refer to [Intel Gaudi vLLM fork](https://github.com/HabanaAI/vllm-fork.git) for more details.
@@ -85,7 +83,7 @@ Build the workload container image:
 ```
 git clone https://github.com/HabanaAI/vllm-fork.git --branch v1.18.0
 
-cd vllm/
+cd vllm-fork/
 
 $ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/gaudi/l2/vllm_buildconfig.yaml
 
@@ -174,5 +172,4 @@ sh-5.1# curl http://vllm-workload.gaudi-validation.svc.cluster.local:8000/v1/com
 "max_tokens": 10
 }'
 {"id":"cmpl-9a0442d0da67411081837a3a32a354f2","object":"text_completion","created":1730321284,"model":"meta-llama/Llama-3.1-8B","choices":[{"index":0,"text":" group of individual stars that forms a pattern or figure","logprobs":null,"finish_reason":"length","stop_reason":null}],"usage":{"prompt_tokens":5,"total_tokens":15,"completion_tokens":10}}
-```
->>>>>>> 46ef40e (tests_gaudi: Added L2 vllm workload)
+```
````
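The `cd` fix in the second hunk follows from git's default behavior: `git clone <url>` checks out into a directory named after the repository, so cloning `vllm-fork.git` creates `vllm-fork/`, not `vllm/`. A minimal sketch of that naming rule (the helper below is illustrative only, not part of the repo):

```python
def default_clone_dir(url: str) -> str:
    """Approximate git's default checkout directory name for a clone URL."""
    name = url.rstrip("/").split("/")[-1]
    # git strips a trailing .git suffix when naming the checkout directory
    if name.endswith(".git"):
        name = name[: -len(".git")]
    return name

# The URL from the README clones into vllm-fork/, matching the fix
print(default_clone_dir("https://github.com/HabanaAI/vllm-fork.git"))
```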

0 commit comments