Commit bbfb5f0

Merge pull request #270 from chaitanya1731/readme-patch
documentation: Updated Readme Formatting
2 parents 77d2461 + 71704d4 commit bbfb5f0

5 files changed: +27 -23 lines changed


e2e/inference/README.md (5 additions & 2 deletions)

````diff
@@ -1,4 +1,6 @@
-# Overview
+# Intel AI Inference End-to-End Solution
+
+## Overview
 Intel AI inference end-to-end solution with RHOCP is based on the Intel® Data Center GPU Flex Series provisioning, Intel® OpenVINO™, and [Red Hat OpenShift AI](https://www.redhat.com/en/technologies/cloud-computing/openshift/openshift-ai) (RHOAI) on RHOCP. There are two AI inference modes verified with Intel® Xeon® processors and Intel Data Center GPU Flex Series with RHOCP.
 * Interactive mode – RHOAI provides OpenVINO based Jupyter Notebooks for users to interactively debug the inference applications or [optimize the models](https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) on RHOCP using data center GPU cards or Intel Xeon processors.
 * Deployment mode – [OpenVINO Model Sever](https://github.com/openvinotoolkit/model_server) (OVMS) can be used to deploy the inference workloads in data center and edge computing environments on RHOCP.
@@ -26,6 +28,7 @@ The OpenVINO operator is published at [Red Hat Ecosystem Catalog](https://catalo
 ### Install using CLI (To be added)
 ### Install using Web Console
 Follow this [link](https://github.com/openvinotoolkit/operator/blob/v1.1.0/docs/operator_installation.md#operator-instalation) to install the operator via the web console.
+
 ## Work with Interactive Mode
 To enable the interactive mode, the OpenVINO notebook CR needs to be created and integrated with RHOAI.
 1. Click on the `create Notebook` option from the web console and follow these [steps](https://github.com/openvinotoolkit/operator/blob/main/docs/notebook_in_rhods.md#integration-with-openshift-data-science-and-open-data-hub) to create the notebook CR.
@@ -62,4 +65,4 @@ Follow the [link](https://github.com/openvinotoolkit/operator/blob/main/docs/not
 
 
 ## See Also
-[GPU accelerated demo with OpenVINO](https://www.youtube.com/watch?v=3fTz_k4JT2A)
+[GPU accelerated demo with OpenVINO](https://www.youtube.com/watch?v=3fTz_k4JT2A)
````

kmmo/README.md (9 additions & 9 deletions)

````diff
@@ -1,29 +1,29 @@
 # Setting up Out of Tree Drivers
 
-# Introduction
+## Introduction
 [Kernel module management (KMM) operator](https://github.com/rh-ecosystem-edge/kernel-module-management) manages the deployment and lifecycle of out-of-tree kernel modules on RHOCP.
 
 In this release, KMM operator is used to manage and deploy the Intel® Data Center GPU driver container image on the RHOCP cluster.
 
 Intel data center GPU driver container images are released from [Intel Data Center GPU Driver for OpenShift Project](https://github.com/intel/intel-data-center-gpu-driver-for-openshift/tree/main/release#intel-data-center-gpu-driver-container-images-for-openshift-release).
 
-# KMM operator working mode
+## KMM operator working mode
 - **Pre-build mode** - This is the default and recommended mode. KMM Operator uses [this pre-built and certified Intel Data Center GPU driver container image](https://catalog.redhat.com/software/containers/intel/intel-data-center-gpu-driver-container/6495ee55c8b2461e35fb8264), which is published on the Red Hat Ecosystem Catalog to provision Intel Data Center GPUs on a RHOCP cluster.
 - **On-premises build mode** - Users can optionally build and deploy their own driver container images on-premises through the KMM operator.
 
-# Prerequisites
+## Prerequisites
 - Provisioned RHOCP cluster. Follow steps [here](/README.md#provisioning-rhocp-cluster).
 - Setup node feature discovery. Follow steps [here](/nfd/README.md).
 
-# Install KMM operator
+## Install KMM operator
 Follow the installation guide below to install the KMM operator via CLI or web console.
 - [Install from CLI](https://docs.openshift.com/container-platform/4.14/hardware_enablement/kmm-kernel-module-management.html#kmm-install-using-cli_kernel-module-management-operator)
 - [Install from web console](https://docs.openshift.com/container-platform/4.14/hardware_enablement/kmm-kernel-module-management.html#kmm-install-using-web-console_kernel-module-management-operator)
 
-# Canary deployment with KMM
+## Canary deployment with KMM
 Canary deployment is enabled by default to deploy the driver container image only on specific node(s) to ensure the initial deployment succeeds prior to rollout to all the eligible nodes in the cluster. This safety mechanism can reduce risk and prevent a deployment from adversely affecting the entire cluster.
 
-# Set alternative firmware path at runtime with KMM
+## Set alternative firmware path at runtime with KMM
 Follow the steps below to set the alternative firmware path at runtime.
 
 1. Update KMM operator `ConfigMap` to set `worker.setFirmwareClassPath` to `/var/lib/firmware`
@@ -38,7 +38,7 @@ $ oc get pods -n openshift-kmm | grep -i "kmm-operator-controller-" | awk '{prin
 
 For more details, see [link.](https://openshift-kmm.netlify.app/documentation/firmwares/#setting-the-kernels-firmware-search-path)
 
-# Deploy Intel Data Center GPU Driver with pre-build mode
+## Deploy Intel Data Center GPU Driver with pre-build mode
 Follow the steps below to deploy the driver container image with pre-build mode.
 1. Find all nodes with an Intel Data Center GPU card using the following command:
 ```
@@ -65,7 +65,7 @@ $ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-
 intel.feature.node.kubernetes.io/dgpu-canary: 'true'
 ```
 
-# Verification
+## Verification
 To verify that the drivers have been loaded, follow the steps below:
 1. List the nodes labeled with `kmm.node.kubernetes.io/openshift-kmm.intel-dgpu.ready` using the command shown below:
 ```
@@ -99,4 +99,4 @@ The label shown above indicates that the KMM operator has successfully deployed
 ```
 c. Run dmesg to ensure there are no errors in the kernel message log.
 
-# See Also
+## See Also
````
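
The firmware path change tracked in this file (`worker.setFirmwareClassPath` set to `/var/lib/firmware`) boils down to a small edit of the KMM operator configuration. A minimal sketch is shown below, assuming a default OpenShift KMM installation; the ConfigMap name and data key are assumptions, so verify them on your cluster before applying.

```yaml
# Sketch only: the ConfigMap name and data key assume a default OpenShift KMM
# install; confirm with `oc get configmap -n openshift-kmm` before applying.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kmm-operator-manager-config   # assumed name
  namespace: openshift-kmm
data:
  controller_config.yaml: |           # assumed key
    worker:
      setFirmwareClassPath: /var/lib/firmware
```

After updating the ConfigMap, the controller pod matched by the `kmm-operator-controller-` grep in the hunk header above typically has to be restarted so the new setting takes effect.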

machine_configuration/README.md (4 additions & 4 deletions)

````diff
@@ -1,6 +1,6 @@
 # Setting up Machine Configuration
 
-# Introduction
+## Introduction
 Machine configuration operation is used to configure [Red Hat Enterprise Linux CoreOS (RHCOS)](https://docs.openshift.com/container-platform/4.14/architecture/architecture-rhcos.html) on each node in a RHOCP cluster.
 
 [Machine config operator](https://github.com/openshift/machine-config-operator) (MCO) is provided by Red Hat to manage the operating system and machine configuration. In this project through the MCO, cluster administrators can configure and update the kernel to provision Intel Hardware features on the worker nodes.
@@ -15,11 +15,11 @@ If the configuration cannot be set as the default setting, we recommend using so
 
 Any contribution in this area is welcome.
 
-# Prerequisites
+## Prerequisites
 - Provisioned RHOCP cluster. Follow steps [here](/README.md#provisioning-rhocp-cluster).
 - Setup node feature discovery (NFD). Follow steps [here](/nfd/README.md).
 
-# Machine Configuration for Provisioning Intel® QAT
+## Machine Configuration for Provisioning Intel® QAT
 
 * Turn on `intel_iommu` kernel parameter and load `vfio_pci` at boot for QAT provisioning
 
@@ -42,5 +42,5 @@ $ lsmod | grep vfio_pci
 ```
 Ensure that `vfio_pci` driver is present.
 
-# See Also
+## See Also
 - [Red Hat OpenShift Container Platform Day-2 operations](https://www.ibm.com/cloud/architecture/content/course/red-hat-openshift-container-platform-day-2-ops/)
````
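
The QAT bullet in this file (turn on the `intel_iommu` kernel parameter and load `vfio_pci` at boot) is the kind of change that is typically delivered as a `MachineConfig` targeting worker nodes. The sketch below is illustrative only, not the repository's actual manifest; the object name and the modules-load.d file path are hypothetical.

```yaml
# Illustrative sketch, not the project's shipped MachineConfig:
# the object name and file path are hypothetical.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 100-worker-intel-qat          # hypothetical name
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  kernelArguments:
    - intel_iommu=on                  # enable the IOMMU for QAT provisioning
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/modules-load.d/vfio-pci.conf   # load vfio_pci at boot
          mode: 420
          overwrite: true
          contents:
            source: data:,vfio_pci
```

Applying a `MachineConfig` triggers a rolling reboot of the targeted nodes, which is why the README follows up with `lsmod | grep vfio_pci` to confirm the driver is present once the nodes come back.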

nfd/README.md (6 additions & 6 deletions)

````diff
@@ -1,15 +1,15 @@
 # Setting up Node Feature Discovery
 [Node Feature Discovery (NFD) Operator](https://docs.openshift.com/container-platform/4.14/hardware_enablement/psap-node-feature-discovery-operator.html) manages the deployment and lifecycle of the NFD add-on to detect hardware features and system configuration, such as PCI cards, kernel, operating system version, etc.
 
-# Prerequisites
+## Prerequisites
 - Provisioned RHOCP cluster. Follow steps [here](/README.md#provisioning-rhocp-cluster).
 
-# Install NFD Operator
+## Install NFD Operator
 Follow the guide below to install the NFD operator using CLI or web console.
 - [Install from the CLI](https://docs.openshift.com/container-platform/4.14/hardware_enablement/psap-node-feature-discovery-operator.html#install-operator-cli_node-feature-discovery-operator)
 - [Install from the web console](https://docs.openshift.com/container-platform/4.14/hardware_enablement/psap-node-feature-discovery-operator.html#install-operator-web-console_node-feature-discovery-operator)
 
-# Configure NFD Operator
+## Configure NFD Operator
 Note: As RHOCP cluster administrator, you might need to merge the NFD operator config from the following Custom Resources (CRs) with other NFD operator configs that are already applied on your cluster.
 
 1. Create `NodeFeatureDiscovery` CR instance.
@@ -22,7 +22,7 @@ $ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-
 $ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/nfd/node-feature-rules-openshift.yaml
 ```
 
-# Verification
+## Verification
 Use the command shown below to verify whether the nodes are labeled properly by NFD:
 ```
 $ oc describe node node_name | grep intel.feature.node.kubernetes.io
@@ -33,11 +33,11 @@ intel.feature.node.kubernetes.io/dgpu-canary=true
 intel.feature.node.kubernetes.io/gpu=true
 ```
 
-# Labels Table
+## Labels Table
 | Label | Intel hardware feature |
 | ----- | ---------------------- |
 | `intel.feature.node.kubernetes.io/gpu=true` | Intel® Data Center GPU Flex Series or Intel® Data Center GPU Max Series |
 | `intel.feature.node.kubernetes.io/sgx=true` | Intel® SGX |
 | `intel.feature.node.kubernetes.io/qat=true` | Intel® QAT |
 
-# See Also
+## See Also
````
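
The labels in the table above (for example `intel.feature.node.kubernetes.io/gpu=true`) come from the rules applied through `nfd/node-feature-rules-openshift.yaml`. As a rough illustration of the shape of such a rule, the sketch below is an assumption for this writeup, not the repository's actual rule; the rule name and PCI match values would need to be checked against the real manifest.

```yaml
# Illustrative NodeFeatureRule sketch: rule name and PCI match values are
# assumptions; see nfd/node-feature-rules-openshift.yaml for the real rules.
apiVersion: nfd.k8s-sigs.io/v1alpha1
kind: NodeFeatureRule
metadata:
  name: intel-dgpu-example            # hypothetical name
spec:
  rules:
    - name: "intel.dgpu"
      labels:
        intel.feature.node.kubernetes.io/gpu: "true"
      matchFeatures:
        - feature: pci.device
          matchExpressions:
            vendor: {op: In, value: ["8086"]}   # Intel PCI vendor ID
            class: {op: In, value: ["0380"]}    # assumed display-controller class
```

Once a rule like this is active, the `oc describe node ... | grep intel.feature.node.kubernetes.io` check from the Verification hunk should list the corresponding labels.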

one_click/README.md (3 additions & 2 deletions)

````diff
@@ -5,8 +5,9 @@ Red Hat [Ansible](https://www.ansible.com/) and Operator technologies are used f
 
 The referenced Ansible playbooks here can be used by the cluster administrators to customize their own playbooks.
 
->[!NOTE]
-> It is recommended to start from [Get started](/README.md#getting-started) to get familiar with the installation and configuration of the general operator before composing the first playbook.
+```{note}
+It is recommended to start from [Get started](/README.md#getting-started) to get familiar with the installation and configuration of the general operator before composing the first playbook.
+```
 
 ## Reference Playbook – Intel Data Center GPU Provisioning
 
````
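
The playbooks referenced in this file essentially automate the same `oc apply` steps shown in the other diffs. A minimal, hypothetical task list is sketched below; it assumes the `kubernetes.core` collection is installed and `KUBECONFIG` points at the cluster, and it simply reuses the NFD manifests referenced earlier rather than mirroring the project's actual playbook layout.

```yaml
# Hypothetical sketch, not the project's actual playbook: assumes the
# kubernetes.core collection and a kubeconfig with cluster-admin access.
- name: Apply Intel GPU provisioning prerequisites
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Apply the NFD CRs referenced in nfd/README.md
      kubernetes.core.k8s:
        state: present
        src: "{{ item }}"
      loop:
        - nfd/node-feature-discovery-openshift.yaml
        - nfd/node-feature-rules-openshift.yaml
```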