Commit bad40ff

Merge pull request #291 from vbedida79/patch-310724-5
workloads_opea: minor fixes for readme links and shell script
2 parents 23b323f + 6718cfc commit bad40ff

File tree

2 files changed: +4 additions, -4 deletions

workloads/opea/chatqna/README.md (3 additions & 3 deletions)

@@ -11,8 +11,8 @@ The workload is based on the [OPEA ChatQnA Application](https://github.com/opea-
 
 **Note**: Refer to [documentation](https://docs.openshift.com/container-platform/4.16/storage/index.html) for setting up other types of persistent storages.
 * Provisioned Intel Gaudi accelerator on RHOCP cluster. Follow steps [here](/gaudi/README.md)
-* RHOAI is installed. Follow steps [here](../inference/README.md/#install-rhoai)
-* The Intel Gaudi AI accelerator is enabled with RHOAI. Follow steps [here]((../inference/README.md/#enable-intel-gaudi-ai-accelerator-with-rhoai))
+* RHOAI is installed. Follow steps [here](/e2e/inference/README.md/#install-rhoai)
+* The Intel Gaudi AI accelerator is enabled with RHOAI. Follow steps [here](/e2e/inference/README.md/#enable-intel-gaudi-ai-accelerator-with-rhoai)
 * Minio based S3 service ready for RHOAI. Follow steps [here](https://ai-on-openshift.io/tools-and-applications/minio/minio/#create-a-matching-data-connection-for-minio)
 
 ## Deploy Model Serving for OPEA ChatQnA Microservices with RHOAI
@@ -33,7 +33,7 @@ The workload is based on the [OPEA ChatQnA Application](https://github.com/opea-
 
 ### Launch the Model Serving with Intel Gaudi AI Accelerator
 
-* Click on the Settings and choose ```ServingRuntime```. Copy or import the [tgi_gaudi_servingruntime.yaml](tgi-gaudi-servingruntime.yaml). The [tgi-gaudi](https://github.com/huggingface/tgi-gaudi) serving runtime is used. Follow the image below.
+* Click on the Settings and choose ```ServingRuntime```. Copy or import the [tgi_gaudi_servingruntime.yaml](tgi_gaudi_servingruntime.yaml). The [tgi-gaudi](https://github.com/huggingface/tgi-gaudi) serving runtime is used. Follow the image below.
 
 ![Alt text](/docs/images/tgi-serving-runtime.png)
 
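Besides pointing at the wrong directory, the second removed line contained a doubled-parenthesis link, `[here]((...))`, which Markdown renders with a stray literal parenthesis. A quick grep of the kind sketched below (not part of this commit; the sample line is copied from the removed text) can catch that class of typo before it lands:

```shell
# Hypothetical check, not part of the commit: flag Markdown links with a
# doubled opening parenthesis such as [text]((url)), which render broken.
line='* The Intel Gaudi AI accelerator is enabled with RHOAI. Follow steps [here]((../inference/README.md/#enable-intel-gaudi-ai-accelerator-with-rhoai))'
if printf '%s\n' "$line" | grep -q ']((' ; then
    echo "malformed link found"
fi
```

In a repository checkout the same pattern could be run over all Markdown files, e.g. `grep -rn ']((' --include='*.md' .`.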

workloads/opea/chatqna/create_megaservice_container.sh (1 addition & 1 deletion)

@@ -7,7 +7,7 @@ namespace="opea-chatqna"
 repo="https://github.com/opea-project/GenAIExamples.git"
 yaml_url="https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/workloads/opea/chatqna/chatqna_megaservice_buildconfig.yaml"
 
-oc $namespace &&
+oc project $namespace &&
 git clone --depth 1 --branch $tag $repo &&
 cd GenAIExamples/ChatQnA/deprecated/langchain/docker &&
 oc extract secret/knative-serving-cert -n istio-system --to=. --keys=tls.crt &&
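The removed line, `oc $namespace &&`, passes the namespace name to `oc` as if it were a subcommand, which fails; `oc project $namespace` actually switches the active project so the later `oc` commands run against it. Because the script chains every step with `&&`, that first failure aborts the rest of the chain. A minimal sketch of this fail-fast pattern, with hypothetical stand-in functions since the real `oc` and `git` steps need a live cluster:

```shell
# Sketch of the script's fail-fast && chain. switch_project and
# clone_examples are hypothetical stand-ins for `oc project` and
# `git clone`, which require a cluster and network access.
namespace="opea-chatqna"

switch_project() {           # stand-in for: oc project "$1"
    [ -n "$1" ] && echo "Now using project \"$1\""
}

clone_examples() {           # stand-in for: git clone --depth 1 --branch $tag $repo
    echo "Cloning GenAIExamples"
}

# Each step runs only if the previous one exited 0; the first failure
# stops the chain, so later steps never run against the wrong project.
switch_project "$namespace" &&
clone_examples &&
echo "build steps run here"
```

This is why the original typo mattered: with `oc $namespace`, the very first command in the chain exits non-zero and the whole script silently does nothing useful.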
