**e2e/inference/README.md**

# Intel AI Inference End-to-End Solution

## Overview

The Intel AI inference end-to-end solution with RHOCP is based on Intel® Data Center GPU Flex Series provisioning, the Intel® OpenVINO™ toolkit, and [Red Hat OpenShift AI](https://www.redhat.com/en/technologies/cloud-computing/openshift/openshift-ai) (RHOAI) on RHOCP. Two AI inference modes are verified with Intel® Xeon® processors and the Intel Data Center GPU Flex Series on RHOCP.
* Interactive mode – RHOAI provides OpenVINO based Jupyter Notebooks for users to interactively debug the inference applications or [optimize the models](https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) on RHOCP using data center GPU cards or Intel Xeon processors.
* Deployment mode – [OpenVINO Model Server](https://github.com/openvinotoolkit/model_server) (OVMS) can be used to deploy the inference workloads in data center and edge computing environments on RHOCP.
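For deployment mode, the OpenVINO operator exposes a model-server custom resource. The sketch below is illustrative only: the CR kind, `apiVersion`, and field names are assumptions modeled on the OpenVINO operator and may not match your operator version, and the model path points at a public example bucket.

```yaml
# Sketch only: kind, apiVersion, and field names are assumptions based on the
# OpenVINO operator; consult the operator documentation for the real schema.
apiVersion: intel.com/v1alpha1
kind: ModelServer
metadata:
  name: ovms-sample
spec:
  image_name: openvino/model_server:latest
  deployment_parameters:
    replicas: 1
  models_settings:
    single_model_mode: true
    model_name: resnet                        # hypothetical model name
    model_path: gs://ovms-public-eu/resnet50  # example public model bucket
```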
The OpenVINO operator is published on the Red Hat Ecosystem Catalog.
### Install using CLI (To be added)
### Install using Web Console
Follow this [link](https://github.com/openvinotoolkit/operator/blob/v1.1.0/docs/operator_installation.md#operator-instalation) to install the operator via the web console.
## Work with Interactive Mode
To enable the interactive mode, the OpenVINO notebook CR needs to be created and integrated with RHOAI.
1. Click on the `create Notebook` option from the web console and follow these [steps](https://github.com/openvinotoolkit/operator/blob/main/docs/notebook_in_rhods.md#integration-with-openshift-data-science-and-open-data-hub) to create the notebook CR.
## See Also
[GPU accelerated demo with OpenVINO](https://www.youtube.com/watch?v=3fTz_k4JT2A)
**kmmo/README.md**

# Setting up Out of Tree Drivers

## Introduction

[Kernel module management (KMM) operator](https://github.com/rh-ecosystem-edge/kernel-module-management) manages the deployment and lifecycle of out-of-tree kernel modules on RHOCP.
In this release, the KMM operator is used to manage and deploy the Intel® Data Center GPU driver container image on the RHOCP cluster.
Intel data center GPU driver container images are released from [Intel Data Center GPU Driver for OpenShift Project](https://github.com/intel/intel-data-center-gpu-driver-for-openshift/tree/main/release#intel-data-center-gpu-driver-container-images-for-openshift-release).
## KMM operator working mode

**Pre-build mode** - This is the default and recommended mode. The KMM operator uses [this pre-built and certified Intel Data Center GPU driver container image](https://catalog.redhat.com/software/containers/intel/intel-data-center-gpu-driver-container/6495ee55c8b2461e35fb8264), published on the Red Hat Ecosystem Catalog, to provision Intel Data Center GPUs on a RHOCP cluster.

**On-premises build mode** - Users can optionally build and deploy their own driver container images on-premises through the KMM operator.

Follow the installation guide below to install the KMM operator via CLI or web console.
[Install from CLI](https://docs.openshift.com/container-platform/4.14/hardware_enablement/kmm-kernel-module-management.html#kmm-install-using-cli_kernel-module-management-operator)

[Install from web console](https://docs.openshift.com/container-platform/4.14/hardware_enablement/kmm-kernel-module-management.html#kmm-install-using-web-console_kernel-module-management-operator)

23
-
# Canary deployment with KMM
23
+
##Canary deployment with KMM
24
24
Canary deployment is enabled by default to deploy the driver container image only on specific node(s) to ensure the initial deployment succeeds prior to rollout to all the eligible nodes in the cluster. This safety mechanism can reduce risk and prevent a deployment from adversely affecting the entire cluster.

## Set alternative firmware path at runtime with KMM

Follow the steps below to set the alternative firmware path at runtime.
1. Update KMM operator `ConfigMap` to set `worker.setFirmwareClassPath` to `/var/lib/firmware`
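This step might look like the following sketch. The `ConfigMap` name, namespace, and data key shown here are assumptions based on upstream KMM operator defaults, not values confirmed by this project; verify them against your cluster before applying.

```yaml
# Sketch only: name, namespace, and the controller_config.yaml key are
# assumed from upstream KMM defaults; only worker.setFirmwareClassPath
# and its value come from the step above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kmm-operator-manager-config   # assumed name
  namespace: openshift-kmm            # assumed namespace
data:
  controller_config.yaml: |
    worker:
      setFirmwareClassPath: /var/lib/firmware
```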
**machine_configuration/README.md**

# Setting up Machine Configuration

## Introduction

Machine configuration operations are used to configure [Red Hat Enterprise Linux CoreOS (RHCOS)](https://docs.openshift.com/container-platform/4.14/architecture/architecture-rhcos.html) on each node in a RHOCP cluster.
The [Machine config operator](https://github.com/openshift/machine-config-operator) (MCO) is provided by Red Hat to manage the operating system and machine configuration. In this project, cluster administrators use the MCO to configure and update the kernel to provision Intel hardware features on the worker nodes.
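A kernel tweak delivered through the MCO typically takes the form of a `MachineConfig` object. The sketch below is illustrative: the specific kernel argument is an assumption for demonstration, not a setting taken from this project's configuration.

```yaml
# Illustrative sketch: the kernel argument shown is an assumption, not this
# project's published configuration; substitute what your hardware needs.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker   # apply to worker nodes
  name: 99-worker-intel-kernel-args
spec:
  kernelArguments:
    - i915.force_probe=*   # example: force-enable the i915 driver
```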
If the configuration cannot be set as the default setting, we recommend using so…
**nfd/README.md**

# Setting up Node Feature Discovery

[Node Feature Discovery (NFD) Operator](https://docs.openshift.com/container-platform/4.14/hardware_enablement/psap-node-feature-discovery-operator.html) manages the deployment and lifecycle of the NFD add-on to detect hardware features and system configuration, such as PCI cards, kernel, operating system version, etc.
Follow the guide below to install the NFD operator using CLI or web console.
[Install from the CLI](https://docs.openshift.com/container-platform/4.14/hardware_enablement/psap-node-feature-discovery-operator.html#install-operator-cli_node-feature-discovery-operator)

[Install from the web console](https://docs.openshift.com/container-platform/4.14/hardware_enablement/psap-node-feature-discovery-operator.html#install-operator-web-console_node-feature-discovery-operator)

## Configure NFD Operator
Note: As RHOCP cluster administrator, you might need to merge the NFD operator config from the following Custom Resources (CRs) with other NFD operator configs that are already applied on your cluster.
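Before merging, it helps to see the shape of such a CR. The fragment below is a sketch only: the worker-config values (PCI device class, vendor labeling) are assumptions chosen to illustrate detecting Intel devices (PCI vendor ID 8086), not this project's published CR.

```yaml
# Sketch of an NFD operator CR; workerConfig values are illustrative
# assumptions, not this project's actual configuration.
apiVersion: nfd.openshift.io/v1
kind: NodeFeatureDiscovery
metadata:
  name: nfd-instance
  namespace: openshift-nfd
spec:
  workerConfig:
    configData: |
      sources:
        pci:
          deviceClassWhitelist:
            - "0380"      # display controllers, e.g. GPUs
          deviceLabelFields:
            - vendor      # label nodes by PCI vendor, e.g. 8086 for Intel
```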
**one_click/README.md**

Red Hat [Ansible](https://www.ansible.com/) and Operator technologies are used f…

The Ansible playbooks referenced here can be used by cluster administrators to customize their own playbooks.
```{note}
It is recommended to start from [Get started](/README.md#getting-started) to get familiar with the installation and configuration of the general operator before composing the first playbook.
```
## Reference Playbook – Intel Data Center GPU Provisioning
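A custom playbook built on the referenced ones might be skeletal like the sketch below. Everything here is an assumption for illustration (play structure, the `kubernetes.core.k8s` module usage, and the operator subscription details); see the project's own playbooks for the real provisioning flow.

```yaml
# Sketch only: an assumed task creating an OLM Subscription for one of the
# prerequisite operators; not the project's actual reference playbook.
- name: Provision Intel Data Center GPU prerequisites
  hosts: localhost
  tasks:
    - name: Install the NFD operator via an OLM Subscription
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: operators.coreos.com/v1alpha1
          kind: Subscription
          metadata:
            name: nfd
            namespace: openshift-nfd
          spec:
            channel: stable
            name: nfd
            source: redhat-operators
            sourceNamespace: openshift-marketplace
```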