Commit 262a30e
RUN-126616 completed article.
1 parent 08f16c8 commit 262a30e

1 file changed: 56 additions, 26 deletions
---
title: GPU Memory Swap
summary: This article describes the Run:ai memory swap feature and includes configuration information.
authors:
- Jamie Weider
- Hagay Sharon
date: 2024-Jun-26
---

To ensure efficient and effective usage of an organization's resources, Run:ai provides multiple features on multiple layers to help administrators and practitioners maximize their existing GPU resource utilization.

Run:ai's *GPU memory swap* feature helps administrators and AI practitioners further increase the utilization of existing GPU hardware by improving GPU sharing between AI initiatives and stakeholders. This is done by extending the GPU physical memory to the CPU memory, which is typically an order of magnitude larger than that of the GPU.

Extending the GPU physical memory helps the Run:ai system place more workloads on the same physical GPU hardware and provide smooth workload context switching between GPU memory and CPU memory, eliminating the need to kill workloads when the memory requirement is larger than what the GPU physical memory can provide, as long as each single workload requires no more than the size of the GPU physical memory.

## Benefits of GPU memory swap

There are several use cases where GPU memory swap can benefit and improve the user experience and the system's overall utilization:

### Sharing a GPU between multiple interactive workloads (notebooks)

AI practitioners use notebooks to develop and test new AI models and to improve existing ones. While developing or testing an AI model, notebooks use GPU resources intermittently, yet the GPU resources they request are pre-allocated by the notebook and cannot be used by other workloads once reserved. To overcome this inefficiency, Run:ai introduced *Dynamic Fractions* and *Node Level Scheduler*.

When one or more workloads require more than their requested GPU resources, there's a high probability that not all workloads can run on a single GPU, because the total memory required is larger than the physical size of the GPU memory.

With *GPU memory swap*, several workloads can run on the same GPU, even if the sum of their used memory is larger than the size of the physical GPU memory. *GPU memory swap* can swap workloads in and out interchangeably, allowing multiple workloads to each use the full amount of GPU memory. The most common scenario is for one workload to run on the GPU (for example, an interactive notebook), while other notebooks are either idle or using the CPU to develop new code (while not using the GPU). From a user-experience point of view, swapping in and out is a smooth process, since the notebooks do not notice that they are being swapped in and out of GPU memory. On rare occasions, when multiple notebooks need to access the GPU simultaneously, slower workload execution may be experienced.

The assumption is that notebooks only use the GPU intermittently; therefore, with high probability, only one workload (for example, an interactive notebook) will use the GPU at a time. The more notebooks the system puts on a single GPU, the higher the chance that more than one notebook will require the GPU resources at the same time. Administrators have a significant role here in fine-tuning the number of notebooks running on the same GPU, based on specific usage patterns and required SLAs.

### Sharing a GPU between "frontend" interactive workloads and "background" training workloads

A single GPU can be shared between an interactive frontend workload (for example, a Jupyter notebook, an image-recognition service, or an LLM service) and a backend training process that is not time-sensitive or delay-sensitive. At times when the inference/interactive workload uses the GPU, both the training and inference/interactive workloads share the GPU resources, each running part of the time swapped in to the GPU memory, and swapped out into the CPU memory the rest of the time.

Whenever the inference/interactive workload stops using the GPU, the swap mechanism swaps out the inference/interactive workload's GPU data to the CPU memory. In Kubernetes terms, the pod is still alive and running using the CPU. This allows the training workload to run faster when the inference/interactive workload is not using the GPU, and slower when it is, thus sharing the same resource between multiple workloads, fully utilizing the GPU at all times, and maintaining uninterrupted service for both workloads.

### Serving inference warm models with GPU memory swap

Running multiple inference models is a demanding task, and you will need to ensure that your SLA is met. You need to provide high performance and low latency, while maximizing GPU utilization. This becomes even more challenging when the exact model usage patterns are unpredictable. You must plan for the agility of inference services and strive to keep models on standby in a ready state rather than an idle state.

Run:ai's *GPU memory swap* feature enables you to load multiple models to a single GPU, where each can use up to the full amount of GPU memory. Using a load balancer, the administrator can control to which server each inference request is sent. The GPU can then be loaded with multiple models, where the model in use is loaded into the GPU memory and the rest of the models are swapped out to the CPU memory. The swapped models are stored as warm models, ready to be loaded when required. *GPU memory swap* always maintains the context of the workload on the GPU, so it can easily and quickly switch between models, unlike cold models that must be loaded completely from scratch.

## Configuring memory swap

**Prerequisites**: Before configuring *GPU memory swap*, the administrator must configure the *Dynamic Fractions* feature, and optionally configure the *Node Level Scheduler* feature. Both of these configurations are designed to maximize performance within a single node.

To enable *GPU memory swap* in a Run:ai cluster, the administrator must update the `runaiconfig` file with the following parameters:

``` yaml
spec:
  global:
    core:
      swap:
        enabled: true
        limits:
          cpuRam: 100Gi
```

The example above uses `100Gi` as the size of the swap file.

You can also use the `patch` command from your terminal:

```bash
kubectl patch -n runai runaiconfigs.run.ai/runai --type='merge' --patch '{"spec":{"global":{"core":{"swap":{"enabled": true}}}}}'
```
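After applying either method, you can read back the swap settings from the `runaiconfig` resource to confirm they took effect. This is plain `kubectl` with JSONPath output, assuming the same `runai` namespace and resource name used in the patch command above:

```bash
# Read back the current swap configuration (requires cluster access)
kubectl get runaiconfigs.run.ai runai -n runai -o jsonpath='{.spec.global.core.swap}'
```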

To make a workload swappable, a number of conditions must be met:

1. The workload MUST use Dynamic Fractions. This means the workload's memory request is less than a full GPU, but it may add a GPU memory limit to allow the workload to effectively use the full GPU memory. If regular fractions are used instead of Dynamic Fractions for that workload, the swap logic assumes the workload prefers NOT to be swapped out, and therefore none of the other workloads on the same GPU are swapped either.

2. The administrator must label each node on which they want to provide GPU memory swap with `run.ai/swap-enabled=true`; this enables the feature on that node. Enabling the feature creates a local swap file in the CPU memory to serve the swapped memory from all GPUs on that node. The administrator sets the size of the CPU swap file as a value in the `runaiconfig` file.

3. Optionally configure *Node Level Scheduler*. Using the node level scheduler can help in the following ways:

    * The Node Level Scheduler automatically spreads workloads between the different GPUs on a node, ensuring maximum workload performance and GPU utilization.
    * In scenarios where interactive notebooks are involved, if the CPU reserved memory for GPU swap is full, the Node Level Scheduler preempts the GPU process of that workload and potentially routes the workload to another GPU to run.
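As a sketch of condition 2 above, labeling a node could look like this (the node name is a placeholder):

```bash
# Enable GPU memory swap on one node (node name is a placeholder)
kubectl label node gpu-node-1 run.ai/swap-enabled=true

# List all swap-enabled nodes
kubectl get nodes -l run.ai/swap-enabled=true
```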

### Configure `system reserved` GPU Resources

Swappable workloads require reserving a small part of the GPU for non-swappable allocations such as binaries and the GPU context. To avoid out-of-memory (OOM) errors caused by these non-swappable memory regions, the system reserves 2 GiB of GPU RAM by default, effectively truncating the total size of the GPU memory. For example, a 16 GiB T4 will appear as 14 GiB on a swap-enabled node.

The exact reserved size is application-dependent, and 2 GiB is a safe assumption for 2-3 applications sharing and swapping on a GPU. This value can be changed by editing the `runaiconfig` specification as follows:

``` yaml
spec:
  global:
    core:
      swap:
        limits:
          reservedGpuRam: 2Gi
```

You can also use the `patch` command from your terminal:

```bash
kubectl patch -n runai runaiconfigs.run.ai/runai --type='merge' --patch '{"spec":{"global":{"core":{"swap":{"limits":{"reservedGpuRam": <quantity>}}}}}}'
```
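As a quick sanity check of the default reservation described above, the visible-memory arithmetic for the T4 example is simply:

```bash
#!/bin/sh
# Effective GPU memory visible on a swap-enabled node (figures from the text)
GPU_MEM_GIB=16      # physical memory of a T4
RESERVED_GIB=2      # default reservedGpuRam
echo "Visible GPU memory: $((GPU_MEM_GIB - RESERVED_GIB)) GiB"   # prints: Visible GPU memory: 14 GiB
```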

This configuration is in addition to the *Dynamic Fractions* configuration and the optional *Node Level Scheduler* configuration.

## Preventing your workloads from getting swapped

If you prefer that your workloads not be swapped into CPU memory, you can specify an anti-affinity to the `run.ai/swap-enabled=true` node label when submitting your workloads, and the Scheduler will ensure not to use swap-enabled nodes.
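One way such an anti-affinity might look in a pod spec is sketched below; the pod name and image are placeholders, and the `DoesNotExist` operator keeps the pod off any node carrying the label:

``` yaml
# Hypothetical pod spec fragment: schedule only on nodes WITHOUT the swap label
apiVersion: v1
kind: Pod
metadata:
  name: no-swap-workload      # placeholder name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: run.ai/swap-enabled
                operator: DoesNotExist
  containers:
    - name: main
      image: python:3.11      # placeholder image
```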

## What happens when the CPU swap file is exhausted?

CPU memory is limited, and a single CPU node serves multiple GPUs (usually between 2 and 8 GPUs per node). Taking 80 GB of GPU memory as an example, each swapped workload consumes up to 80 GB (though it usually consumes less), and each GPU may be shared between 2-4 workloads. It is easy to see how the swap file can rapidly become very large. Therefore, administrators are given a way to limit the size of the CPU memory reserved for swapped GPU memory on each swap-enabled node.

Limiting the CPU reserved memory means there may be scenarios where GPU memory cannot be swapped out to the CPU reserved RAM. Whenever the CPU memory reserved for swapped GPU memory is exhausted, the workloads currently running are not swapped out; instead, the *Node Level Scheduler* and *Dynamic Fractions* logic takes over and provides GPU resource optimization. For more information, see [Dynamic Fractions]() and [Node Level Scheduler]().
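To see why the limit matters, here is a back-of-the-envelope sizing sketch. The GPU count, memory size, and workloads-per-GPU figures are taken from the ranges mentioned above and are illustrative, not Run:ai defaults:

```bash
#!/bin/sh
# Worst-case CPU swap demand per node (illustrative figures from the text):
# 8 GPUs per node, 80 GB memory per GPU, up to 4 workloads sharing each GPU,
# each of which may have swapped out up to a full GPU's worth of memory.
GPUS_PER_NODE=8
GPU_MEM_GB=80
WORKLOADS_PER_GPU=4

WORST_CASE_GB=$((GPUS_PER_NODE * GPU_MEM_GB * WORKLOADS_PER_GPU))
echo "Worst-case swap demand: ${WORST_CASE_GB} GB"   # prints: Worst-case swap demand: 2560 GB

# A cpuRam limit (for example, the 100Gi configured earlier) caps the actual swap file size.
```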
<!-- TODO add links to docs in section above -->
