
Commit ccc5e06

RUN-16796 markdown and other fixes
1 parent 9b1b04a commit ccc5e06

2 files changed: +13 −6 lines changed

docs/admin/runai-setup/cluster-setup/cluster-prerequisites.md

Lines changed: 12 additions & 5 deletions
@@ -1,4 +1,11 @@
-Below are the prerequisites of a cluster installed with Run:ai.
+---
+title: Prerequisites in a nutshell
+summary: This article outlines the required prerequisites for a Run:ai installation.
+authors:
+- Jason Novich
+- Yaron Goldberg
+date: 2024-Apr-8
+---
 
 ## Prerequisites in a Nutshell
 
@@ -63,7 +70,6 @@ For an up-to-date end-of-life statement of Kubernetes see [Kubernetes Release Hi
 
 #### Pod Security Admission
 
-
 Run:ai version 2.15 and above supports `restricted` policy for [Pod Security Admission](https://kubernetes.io/docs/concepts/security/pod-security-admission/){target=_blank} (PSA) on OpenShift only. Other Kubernetes distributions are only supported with `Privileged` policy.
 
 For Run:ai on OpenShift to run with PSA `restricted` policy:
@@ -75,8 +81,9 @@ For Run:ai on OpenShift to run with PSA `restricted` policy:
     pod-security.kubernetes.io/enforce=privileged
     pod-security.kubernetes.io/warn=privileged
     ```
+
 2. The workloads submitted through Run:ai should comply with the restrictions of PSA `restricted` policy, which are dropping all Linux capabilities and setting `runAsNonRoot` to `true`. This can be done and enforced using [Policies](../../workloads/policies/policies.md).
-
+
 ### NVIDIA
 
 Run:ai has been certified on **NVIDIA GPU Operator** 22.9 to 23.9. Older versions (1.10 and 1.11) have a documented [NVIDIA issue](https://github.com/NVIDIA/gpu-feature-discovery/issues/26){target=_blank}.
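For reference, the two steps in the hunk above amount to something like the following manifest. This is a sketch only: the namespace name `runai` and the pod spec are hypothetical placeholders, and the labels must be applied to the actual Run:ai namespaces in your cluster.

```yaml
# Step 1: label the Run:ai namespaces with the `privileged` PSA level.
# The namespace name "runai" here is a placeholder.
apiVersion: v1
kind: Namespace
metadata:
  name: runai
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/warn: privileged
---
# Step 2: a hypothetical workload pod that satisfies the `restricted`
# policy constraints named in the diff: all Linux capabilities dropped
# and runAsNonRoot set to true.
apiVersion: v1
kind: Pod
metadata:
  name: example-workload
spec:
  containers:
  - name: main
    image: registry.example.com/example:latest   # placeholder image
    securityContext:
      runAsNonRoot: true
      capabilities:
        drop: ["ALL"]
```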
@@ -123,7 +130,7 @@ Follow the [Getting Started guide](https://docs.nvidia.com/datacenter/cloud-nati
 
 === "RKE2"
     * Follow the [Getting Started guide](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/getting-started.html#rancher-kubernetes-engine-2){target=blank} to install the NVIDIA GPU Operator.
-* Make sure to specify the `CONTAINERD_CONFIG` option exactly with the value specified in the document `/var/lib/rancher/rke2/agent/etc/containerd/config.toml.tmpl` even though the file may not exist in your system.
+    * Make sure to specify the `CONTAINERD_CONFIG` option exactly with the value specified in the document `/var/lib/rancher/rke2/agent/etc/containerd/config.toml.tmpl` even though the file may not exist in your system.
 
 <!--
 === "RKE2"
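The `CONTAINERD_CONFIG` option mentioned in this hunk is typically passed to the GPU Operator Helm chart as a toolkit environment variable. A hedged sketch of the corresponding values fragment, based on NVIDIA's RKE2 guidance; verify the exact keys and values against the Getting Started guide linked above:

```yaml
# Sketch only -- confirm against the NVIDIA GPU Operator docs for RKE2.
toolkit:
  env:
    - name: CONTAINERD_CONFIG
      value: /var/lib/rancher/rke2/agent/etc/containerd/config.toml.tmpl
```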
@@ -302,7 +309,7 @@ However, for the URL to be accessible outside the cluster you must configure you
     -H 'Host: <host-name>'
     ```
 
-# Hardware Requirements
+## Hardware Requirements
 
 (see picture below)
 
docs/admin/workloads/inference-overview.md

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 ---
 title: Inference overview
-summary: This article describes inference worloads.
+summary: This article describes inference workloads.
 authors:
 - Jason Novich
 date: 2024-Mar-29
