Commit f4cd3cf

Merge pull request #917 from run-ai/new-node-doc
nodes
1 parent eee6a72 commit f4cd3cf

File tree

4 files changed: +108 −4 lines


docs/admin/aiinitiatives/org/scheduling-rules.md

Lines changed: 2 additions & 2 deletions
@@ -3,8 +3,8 @@ This article explains the procedure of configuring and managing Scheduling rules

There are 3 types of rules:

* **Workload time limit** - This rule limits the duration of a workload run time. Workload run time is calculated as the total time in which the workload was in status “Running”.

- * **Idle GPU time limit** - This rule limits the total GPU time of a workload. Workload idle time is counted since the first time the workload was in status Running and the GPU was idle.
- For fractional workloads, workloads running on a MIG slice, multi GPU or multi-node workloads, each GPU idle second is calculated as follows: __<requires explanation about how it is calculated__
+ * **Idle GPU time limit** - This rule limits the total GPU idle time of a workload. Workload idle time is counted from the first time the workload is in status Running and the GPU is idle. Idleness is calculated using the `runai_gpu_idle_seconds_per_workload` metric. This metric determines the total duration of zero GPU utilization within each 30-second interval. If the GPU remains idle throughout the 30-second window, 30 seconds are added to the idleness sum; otherwise, the idleness count is reset.

* **Node type (Affinity)** - This rule limits a workload to run on specific node types. Node type is a node affinity applied to the node. Run:ai labels the nodes with the appropriate affinity and indicates to the scheduler where it is allowed to schedule the workload.

Adding a scheduling rule to a project
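To make the idleness rule concrete, here is a minimal sketch (explicitly not Run:ai's actual implementation) of the 30-second-window accumulation described in the Idle GPU time limit text above, assuming per-window GPU utilization samples are already available:

```python
# Minimal sketch, not Run:ai's implementation: accumulate idle GPU seconds from
# per-30-second-window utilization samples, resetting the count whenever the
# GPU shows any activity within a window.
WINDOW_SECONDS = 30

def accumulate_idle_seconds(window_utilizations):
    """window_utilizations: GPU utilization (0-100) for consecutive 30s windows."""
    idle_seconds = 0
    for utilization in window_utilizations:
        if utilization == 0:
            idle_seconds += WINDOW_SECONDS   # idle for the whole window
        else:
            idle_seconds = 0                 # any activity resets the count
    return idle_seconds

# Example: three idle windows, then activity, then two idle windows -> 60 seconds
print(accumulate_idle_seconds([0, 0, 0, 45, 0, 0]))  # -> 60
```

With this logic, idle seconds only accumulate across consecutive fully idle windows, which matches the reset behavior described in the added text.
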
docs/admin/aiinitiatives/resources/nodes.md

Lines changed: 104 additions & 0 deletions
@@ -0,0 +1,104 @@

This article explains the procedure for managing Nodes.

Nodes are Kubernetes elements automatically discovered by the Run:ai platform. Once a node is discovered, an associated instance is created in the Nodes table, administrators can view the node’s relevant information, and the Run:ai scheduler can use the node for scheduling.

## Nodes table

The Nodes table can be found under Nodes in the Run:ai platform.

The Nodes table displays a list of predefined nodes available to users in the Run:ai platform.

!!! Note
    * It is not possible to create additional nodes, or to edit or delete existing nodes.
    * Only users with relevant permissions can view the table.

![](img/node-list.png)

The Nodes table consists of the following columns:

| Column | Description |
| :---- | :---- |
| Node | The Kubernetes name of the node |
| Status | The state of the node. Nodes in the Ready state are eligible for scheduling. If the state is Not ready, the main reason appears in parentheses on the right side of the state field. Hovering over the state lists the reasons why a node is Not ready. |
| Node pool | The name of the associated node pool. By default, every node in the Run:ai platform is associated with the default node pool, unless another node pool is associated with it |
| GPU type | The GPU model, for example, H100 or V100 |
| GPU devices | The number of GPU devices installed on the node. Clicking this field pops up a dialog with details per GPU (described below in this article) |
| Free GPU devices | The current number of fully vacant GPU devices |
| GPU memory | The total amount of GPU memory installed on this node. For example, if the number is 640GB and the number of GPU devices is 8, then each GPU is installed with 80GB of memory (assuming the node is assembled of homogeneous GPU devices) |
| Allocated GPUs | The total allocation of GPU devices in units of GPUs (decimal number). For example, if 3 GPUs are 50% allocated, the field shows the value 1.50 (see the sketch below this table). This value represents the portion of GPU memory consumed by all running pods using this node |
| Used GPU memory | The actual amount of memory (in GB or MB) used by pods running on this node |
| GPU compute utilization | The average compute utilization of all GPU devices in this node |
| GPU memory utilization | The average memory utilization of all GPU devices in this node |
| CPU (Cores) | The number of CPU cores installed on this node |
| CPU memory | The total amount of CPU memory installed on this node |
| Allocated CPU (Cores) | The number of CPU cores allocated by pods running on this node (decimal number, e.g. a pod allocating 350 milli-cores shows an allocation of 0.35 cores) |
| Allocated CPU memory | The total amount of CPU memory allocated by pods running on this node (in GB or MB) |
| Used CPU memory | The total amount of CPU memory actually used by pods running on this node. Pods may allocate memory but not use all of it, or go beyond their CPU memory allocation if using Limit > Request for CPU memory (burstable workload) |
| CPU compute utilization | The utilization of all CPU compute resources on this node (percentage) |
| CPU memory utilization | The utilization of all CPU memory resources on this node (percentage) |
| Used swap CPU memory | The amount of CPU memory (in GB or MB) used for GPU swap memory (* future) |
| Pod(s) | List of pods running on this node; click the field to view details (described below in this article) |
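As a point of reference for the Allocated GPUs column above, the decimal value can be thought of as a sum of per-pod GPU fractions. The snippet below is an illustrative sketch only; the pod fractions are made-up numbers, not a real API call:

```python
# Illustrative sketch only: the Allocated GPUs column is a decimal sum of the
# GPU fractions allocated by pods running on the node.
pod_gpu_fractions = [0.5, 0.5, 0.5]   # e.g. three pods, each allocated 50% of a GPU
allocated_gpus = sum(pod_gpu_fractions)
print(f"Allocated GPUs: {allocated_gpus:.2f}")  # -> Allocated GPUs: 1.50
```
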
### GPU devices for node

Click one of the values in the GPU devices column to view the list of GPU devices and their parameters.

| Column | Description |
| :---- | :---- |
| Index | The GPU index, read from the GPU hardware. The same index is used when accessing the GPU directly |
| Used memory | The amount of memory used by pods and drivers using the GPU (in GB or MB) |
| Compute utilization | The portion of time the GPU is being used by applications (percentage) |
| Memory utilization | The portion of the GPU memory that is being used by applications (percentage) |
| Idle time | The elapsed time since the GPU was last used (i.e. the GPU has been idle for ‘Idle time’) |

### Pods associated with node

Click one of the values in the Pod(s) column to view the list of pods and their parameters.

!!! Note
    This column is viewable only if your role in the Run:ai platform gives you read access to workloads. Even if you are allowed to view workloads, you can only view the workloads within your allowed scope. This means there might be more pods running on this node than appear in the list you are viewing.

| Column | Description |
| :---- | :---- |
| Pod | The Kubernetes name of the pod. Usually the name of the pod is made up of the name of the parent workload (if there is one) and an index unique to that pod instance within the workload |
| Status | The state of the pod. In a steady state, this should be Running, together with the amount of time the pod has been running |
| Project | The Run:ai project name the pod belongs to. Clicking this field takes you to the Projects table filtered by this project name |
| Workload | The workload name the pod belongs to. Clicking this field takes you to the Workloads table filtered by this workload name |
| Image | The full path of the image used by the main container of this pod |
| Creation time | The pod’s creation date and time |

### Customizing the table view

* Filter - Click ADD FILTER, select the column to filter by, and enter the filter values
* Search - Click SEARCH and type the value to search by
* Sort - Click each column header to sort by
* Column selection - Click COLUMNS and select the columns to display in the table
* Download table - Click MORE and then click Download as CSV
* Show/Hide details - Click to view additional information on the selected row

### Show/Hide details
Click a row in the Nodes table and then click the Show details button at the upper right side of the action bar. The details screen appears, presenting the following metrics graphs:
* GPU utilization
  A per-GPU graph and an average-of-all-GPUs graph, shown on the same chart over an adjustable period, allow you to see the trends of GPU compute utilization (percentage of GPU compute) in this node.
* GPU memory utilization
  A per-GPU graph and an average-of-all-GPUs graph, shown on the same chart over an adjustable period, allow you to see the trends of GPU memory usage (percentage of the GPU memory) in this node.
* CPU compute utilization
  An average of all CPU cores’ compute utilization in a single graph, over an adjustable period, allows you to see the trends of CPU compute utilization (percentage of CPU compute) in this node.
* CPU memory utilization
  The utilization of all CPU memory in a single graph, over an adjustable period, allows you to see the trends of CPU memory utilization (percentage of CPU memory) in this node.
* CPU memory usage
  The usage of all CPU memory in a single graph, over an adjustable period, allows you to see the trends of CPU memory usage (in GB or MB of CPU memory) in this node.

* For GPU charts - Click the GPU legend on the right-hand side of the chart to activate or deactivate any of the GPU lines.
* You can click the date picker to change the presented period
* You can use your mouse to mark a sub-period in the graph for zooming in, and use the ‘Reset zoom’ button to go back to the preset period
* Changes in the period affect all graphs on this screen.

## Using API
Go to the [Nodes](https://app.run.ai/api/docs#tag/Nodes) API reference to view the available actions.
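Beyond the interactive reference, nodes can also be queried programmatically. The snippet below is a hypothetical sketch only: the base URL, endpoint path, response fields, and authentication shown are assumptions for illustration; consult the linked Nodes API reference for the actual endpoints, parameters, and schema.

```python
# Hypothetical sketch: endpoint path, placeholders, and response shape are
# assumptions, not the documented Run:ai API. See the Nodes API reference.
import requests

BASE_URL = "https://<company-name>.run.ai"   # placeholder tenant URL
TOKEN = "<api-token>"                        # placeholder bearer token
CLUSTER_ID = "<cluster-uuid>"                # placeholder cluster identifier

response = requests.get(
    f"{BASE_URL}/api/v1/clusters/{CLUSTER_ID}/nodes",   # assumed endpoint shape
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
response.raise_for_status()

# Assuming the response is a JSON list of node objects; field names are
# illustrative, the real schema is in the API docs.
for node in response.json():
    print(node.get("name"), node.get("status"))
```
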

mkdocs.yml

Lines changed: 2 additions & 2 deletions
@@ -216,8 +216,8 @@ nav:
  - 'Projects' : 'admin/aiinitiatives/org/projects.md'
  - 'Departments' : 'admin/aiinitiatives/org/departments.md'
  - 'Scheduling Rules' : 'admin/aiinitiatives/org/scheduling-rules.md'
- # - 'Managing your resources' :
- #   - 'Nodes' : 'admin/aiinitiatives/resources/nodes.md'
+ - 'Managing your resources' :
+   - 'Nodes' : 'admin/aiinitiatives/resources/nodes.md'
  # - 'Node Pools' : 'admin/aiinitiatives/resources/node-pools.md'
  - 'Review your performance' :
  # - 'Overview' : 'admin/admin-ui-setup/overview.md'
