From 91e74aadbc1f531ad76c13792a6d54efb9239bb6 Mon Sep 17 00:00:00 2001
From: JamieWeider72 <147967555+JamieWeider72@users.noreply.github.com>
Date: Wed, 7 Aug 2024 15:31:11 +0300
Subject: [PATCH] GPU memory swap update

---
 docs/Researcher/scheduling/gpu-memory-swap.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/Researcher/scheduling/gpu-memory-swap.md b/docs/Researcher/scheduling/gpu-memory-swap.md
index 8ab1329954..a379851f1f 100644
--- a/docs/Researcher/scheduling/gpu-memory-swap.md
+++ b/docs/Researcher/scheduling/gpu-memory-swap.md
@@ -64,7 +64,7 @@ The example above uses `100Gi` as the size of the swap memory.
 You can also use the `patch` command from your terminal:
 
 ``` yaml
-kubectl patch -n runai runaiconfigs.run.ai/runai --type='merge' --patch '{"spec":{"global":{"core":{"swap":{"enabled": true}}}}}'
+kubectl patch -n runai runaiconfigs.run.ai/runai --type='merge' --patch '{"spec":{"global":{"core":{"swap":{"enabled": true, "limits": {"cpuRam": "100Gi"}}}}}}'
 ```
 
 To make a workload swappable, a number of conditions must be met: