
Commit 1562aea

Merge pull request #77 from intel/hershpa-patch-1
kmmo: Update README.md
2 parents c79080b + 744918f

File tree

1 file changed: +4 −4 lines changed


kmmo/README.md

Lines changed: 4 additions & 4 deletions
@@ -8,13 +8,13 @@ In this release, KMM operator is used to manage and deploy the Intel® Data Cent
 Intel data center GPU driver container images are released from [Intel Data Center GPU Driver for OpenShift Project](https://github.com/intel/intel-data-center-gpu-driver-for-openshift/tree/main/release#intel-data-center-gpu-driver-container-images-for-openshift-release).

 # KMM operator working mode
-- **Pre-build mode** - This is the default and recommended mode. KMM Operator uses this pre-built and certified Intel Data Center GPU driver container image, which is published on the Red Hat Container Catalog to provision Intel Data Center GPUs on a RHOCP cluster.
+- **Pre-build mode** - This is the default and recommended mode. KMM Operator uses this pre-built and certified Intel Data Center GPU driver container image, which is published on the Red Hat Ecosystem Catalog to provision Intel Data Center GPUs on a RHOCP cluster.
 - **On-premises build mode** - Users can optionally build and deploy their own driver container images on-premises through the KMM operator.

 # Prerequisites
 - Provisioned RHOCP 4.12 cluster. Follow steps [here](/README.md#provisioning-rhocp-cluster).
 - Setup node feature discovery. Follow steps [here](/nfd/README.md).
-- Setup machine configuration Follow steps [here](/machine_configuration/README.md).
+- Setup machine configuration. Follow steps [here](/machine_configuration/README.md).

 # Install KMM operator
 Follow the installation guide below to install the KMM operator via CLI or web console.
@@ -43,7 +43,7 @@ $ oc label node <node_name> intel.feature.node.kubernetes.io/dgpu-canary=true

 3. Use pre-build mode to deploy the driver container.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/1.0.0/kmmo/intel-dgpu.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/kmmo/intel-dgpu.yaml
 ```

 4. After the driver is verified on the cluster through the canary deployment, simply remove the line shown below from the [`intel-dgpu.yaml`](/kmmo/intel-dgpu.yaml) file and reapply the yaml file to deploy the driver to the entire cluster. As a cluster administrator, you can also select another deployment policy.
@@ -85,4 +85,4 @@ The label shown above indicates that the KMM operator has successfully deployed
 ```
 c. Run dmesg to ensure there are no errors in the kernel message log.

-# See Also
+# See Also
