## Intel AI Inference E2E Solution for OpenShift
### Overview
The Intel AI inference e2e solution for OCP is built upon Intel® dGPU provisioning for OpenShift and Intel® Xeon® processors. The following two AI inference modes are used to test with the Intel Data Center GPU Card provisioning:
* **Interactive Mode**
[Open Data Hub (ODH)](https://github.com/opendatahub-io) and [Red Hat OpenShift Data Science (RHODS)](https://www.redhat.com/en/technologies/cloud-computing/openshift/openshift-data-science) provide an Intel® OpenVINO™ based [Jupyter Notebook](https://jupyter.org/) to help users interactively debug inferencing applications or optimize models on OCP using Intel Data Center GPU cards and Intel Xeon processors.
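
  As a minimal sketch of what such a notebook cell could look like (the model path, input shape, and installed packages below are assumptions, not part of this solution), the OpenVINO Python runtime can load an IR model and run it on the Intel Data Center GPU:

  ```python
  # Hypothetical notebook cell: load an OpenVINO IR model and run one inference
  # on the Intel dGPU. The model path and input shape are placeholders.
  import numpy as np
  from openvino.runtime import Core

  core = Core()
  print(core.available_devices)  # e.g. ['CPU', 'GPU'] when the dGPU is exposed to the pod

  model = core.read_model("models/resnet50.xml")            # IR files produced by the OpenVINO conversion tools
  compiled = core.compile_model(model, device_name="GPU")   # use "CPU" to run on the Xeon processors instead

  dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
  result = compiled([dummy_input])[compiled.output(0)]      # single synchronous inference
  print(result.shape)
  ```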
* **Deployment Mode**
[Intel OpenVINO™ Toolkit](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html) and the [Operator](https://github.com/openvinotoolkit/operator) provide the [OpenVINO Model Server (OVMS)](https://github.com/openvinotoolkit/model_server) for users to deploy their inferencing workloads using Intel Data Center GPU cards and Intel Xeon processors on an OCP cluster in cloud or edge environments.
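
  As a minimal sketch of how a client could query such a deployment (the service host, port, model name, and input shape below are assumptions that depend on the actual ModelServer configuration), OVMS exposes a TensorFlow Serving-compatible REST API that can be called, for example, from Python:

  ```python
  # Hypothetical in-cluster client for a model served by OVMS.
  # Service host, port, model name, and input shape are placeholders.
  import numpy as np
  import requests

  # TensorFlow Serving-compatible REST endpoint exposed by OVMS
  url = "http://ovms-sample.ovms-demo.svc.cluster.local:8080/v1/models/resnet50:predict"

  payload = {"instances": np.random.rand(1, 3, 224, 224).astype(np.float32).tolist()}
  response = requests.post(url, json=payload, timeout=30)
  response.raise_for_status()

  predictions = response.json()["predictions"]
  print(len(predictions))
  ```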