Commit 22bcdf8

inference: update README.md
Signed-off-by: uMartinXu <martin.xu@intel.com>
1 parent ec5fb71 commit 22bcdf8

File tree

1 file changed: +4 -2 lines changed


e2e/inference/README.md

Lines changed: 4 additions & 2 deletions
```diff
@@ -1,12 +1,14 @@
 ## Intel AI Inference E2E Solution for OpenShift
 
 ### Overview
-Intel AI inference e2e solution for OCP is built upon Intel® dGPU provisioning for OpenShift and Intel® Xeon® processors. Two AI inferencing modes are supported:
+Intel AI inference e2e solution for OCP is built upon Intel® dGPU provisioning for OpenShift and Intel® Xeon® processors. The two following AI inference modes are used to test with the Intel Data Center GPU Card provisioning:
 * **Interactive Mode**
-[Open Data Hub (ODH)](https://github.com/opendatahub-io) and [Red Hat OpenShift Data Science (RHODS)](https://www.redhat.com/en/technologies/cloud-computing/openshift/openshift-data-science) provides Intel OpenVINO™ based [Jupiter Notebook](https://jupyter.org/) to help users interactively debug the inferencing applications or optimize the models with OCP using Intel Data Center GPU cards and Intel Xeon processors.
+[Open Data Hub (ODH)](https://github.com/opendatahub-io) and [Red Hat OpenShift Data Science (RHODS)](https://www.redhat.com/en/technologies/cloud-computing/openshift/openshift-data-science) provides Intel® OpenVINO™ based [Jupiter Notebook](https://jupyter.org/) to help users interactively debug the inferencing applications or optimize the models with OCP using Intel Data Center GPU cards and Intel Xeon processors.
 * **Deployment Mode**
 [Intel OpenVINO™ Toolkit](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html) and [Operator](https://github.com/openvinotoolkit/operator) provide the [OpenVINO Model Server (OVMS)](https://github.com/openvinotoolkit/model_server) for users to deploy their inferencing workload using Intel Data Center GPU cards and Intel Xeon processors on OCP cluster in cloud or edge environment.
 
+`note: The verification on this mode is ongoing`
+
 ### Deploy Intel AI Inference E2E Solution
 
 * **Install RHODS on OpenShift**
```
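The Deployment Mode described in this README change serves models through OVMS, which exposes a TensorFlow-Serving-compatible REST API. As a minimal client-side sketch (the service host, port, model name, and input values below are hypothetical examples, not taken from this commit):

```python
import json

def build_predict_request(instances):
    """Wrap raw input data in the TF Serving REST 'instances' format
    accepted by OVMS's v1 predict endpoint."""
    return {"instances": instances}

def predict_url(host, port, model_name):
    """Build the /v1/models/<name>:predict endpoint URL for an OVMS
    service. Host, port, and model name are caller-supplied."""
    return f"http://{host}:{port}/v1/models/{model_name}:predict"

# Hypothetical service name and model; a real deployment would use the
# OVMS service exposed on the OCP cluster.
url = predict_url("ovms-service", 8080, "resnet")
body = json.dumps(build_predict_request([[0.1, 0.2, 0.3]]))
print(url)
print(body)
```

The request body would then be POSTed to `url` with an HTTP client; the JSON-serialized `instances` wrapper is what the TF-Serving-style endpoint expects.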
