From 34ef4dceb6eca3d8e2fd4ec30dfb2e977ef2989a Mon Sep 17 00:00:00 2001
From: JamieWeider72 <147967555+JamieWeider72@users.noreply.github.com>
Date: Tue, 23 Jul 2024 13:16:18 +0300
Subject: [PATCH 1/5] Update hotfixes-2-16.md

---
 docs/home/changelog/hotfixes-2-16.md | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/docs/home/changelog/hotfixes-2-16.md b/docs/home/changelog/hotfixes-2-16.md
index eb6b8d392d..a07194f9e3 100644
--- a/docs/home/changelog/hotfixes-2-16.md
+++ b/docs/home/changelog/hotfixes-2-16.md
@@ -8,6 +8,12 @@

 The following is a list of the known and fixed issues for Run:ai V2.16.

+## Version 2.16.57
+
+| Internal ID | Description |
+|--|--|
+| RUN-20388 | Fixed an issue where cluster-sync caused a memory leak. |
+
 ## Version 2.16.25

 | Internal ID | Description |

From 9cbe0927c92a83a9c52e06908b34630d4fde6fa9 Mon Sep 17 00:00:00 2001
From: JamieWeider72 <147967555+JamieWeider72@users.noreply.github.com>
Date: Thu, 25 Jul 2024 14:10:44 +0300
Subject: [PATCH 2/5] Update whats-new-2-18.md

---
 docs/home/whats-new-2-18.md | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/docs/home/whats-new-2-18.md b/docs/home/whats-new-2-18.md
index ec555741a0..1f75cc4c56 100644
--- a/docs/home/whats-new-2-18.md
+++ b/docs/home/whats-new-2-18.md
@@ -22,19 +22,19 @@ date: 2024-June-14

 * Added new *Data sources* of type *Secret* to the workload form. *Data sources* of type *Secret* are used to hide 3rd party access credentials when submitting workloads. For more information, see [Submitting Workloads](../admin/workloads/submitting-workloads.md#how-to-submit-a-workload).

-* Added new graphs for *Inference* workloads. The new graphs provide more information for *Inference* workloads to help analyze performance of the workloads. New graphs include Latency, Throughput, and number of replicas. For more information, see [Workloads View](../admin/workloads/README.md#workloads-view).
+* Added new graphs for *Inference* workloads. The new graphs help analyze the performance of *Inference* workloads and include Latency, Throughput, and number of replicas. For more information, see [Workloads View](../admin/workloads/README.md#workloads-view). (Requires minimum cluster version v2.18).

-* Added latency metric for autoscaling. This feature allows automatic scale-up/down the number of replicas of a Run:ai inference workload based on the threshold set by the ML Engineer. This ensures that response time is kept under the target SLA.
+* Added a latency metric for autoscaling. This feature automatically scales the number of replicas of a Run:ai inference workload up or down, based on the latency threshold set by the ML Engineer. This ensures that response time is kept under the target SLA. (Requires minimum cluster version v2.18).

 * Improved autoscaling for inference models by taking the ChatBot UI out of the model images. By moving the ChatBot UI to predefined *Environments*, autoscaling is more accurate because it takes all types of requests (API and ChatBot UI) into account. A ChatBot UI environment preset provided by Run:ai allows AI practitioners to easily connect it to workloads.

-* Added more precision to trigger auto-scaling to zero. Now users can configure a precise consecutive idle threshold custom setting to trigger Run:ai inference workloads to scale-to-zero.
+* Added more precision for triggering auto-scaling to zero.
Users can now configure a custom consecutive idle-time threshold that triggers Run:ai inference workloads to scale to zero. (Requires minimum cluster version v2.18).

 * Added Hugging Face catalog integration of community models. Run:ai has added Hugging Face integration directly to the inference workload form, providing the ability to select models (vLLM models) from Hugging Face. This allows organizations to quickly experiment with the latest open source community language models. For more information on how Hugging Face is integrated, see [Hugging Face](../admin/workloads/submitting-workloads.md).

-* Improved access permissions to external tools. This improvement now allows more granular control over which personas can access external tools (external URLs) such as Jupyter Notebooks, Chatbot UI, and others. For configuration information, see [Submitting workloads](../admin/workloads/submitting-workloads.md).
+* Improved access permissions to external tools. This improvement allows more granular control over which personas can access external tools (external URLs) such as Jupyter Notebooks, ChatBot UI, and others. For configuration information, see [Submitting workloads](../admin/workloads/submitting-workloads.md). (Requires minimum cluster version v2.18).

-* Added a new API for submitting Run:ai inference workloads. This API allows users to easily submit inference workloads. This new API provides a consistent user experience for workload submission which maintains data integrity across all the user interfaces in the Run:ai platform.
+* Added a new API for submitting Run:ai inference workloads. The new API provides a consistent user experience for workload submission and maintains data integrity across all the user interfaces in the Run:ai platform. (Requires minimum cluster version v2.18).

 #### Command Line Interface

@@ -47,11 +47,11 @@ date: 2024-June-14

 * Improved usability and performance

     This is an early access feature available for customers to use; however, be aware that there may be functional gaps versus the legacy CLI.

-    For more information about installing and using the Improved CLI, see [Improved CLI](../Researcher/cli-reference/new-cli/runai.md).
+    For more information about installing and using the Improved CLI, see [Improved CLI](../Researcher/cli-reference/new-cli/runai.md). (Requires minimum cluster version v2.18).

 #### GPU memory swap

-* Added new GPU to CPU memory swap. To ensure efficient usage of an organization’s resources, Run:ai provides multiple features on multiple layers to help administrators and practitioners maximize their existing GPUs resource utilization. Run:ai’s GPU memory swap feature helps administrators and AI practitioners to further increase the utilization of existing GPU HW by improving GPU sharing between AI initiatives and stakeholders. This is done by expending the GPU physical memory to the CPU memory which is typically an order of magnitude larger than that of the GPU. For more information see, [GPU Memory Swap](../Researcher/scheduling/gpu-memory-swap.md).
+* Added new GPU-to-CPU memory swap. To ensure efficient usage of an organization’s resources, Run:ai provides multiple features on multiple layers to help administrators and practitioners maximize the utilization of their existing GPUs.
Run:ai’s GPU memory swap feature helps administrators and AI practitioners further increase the utilization of existing GPU hardware by improving GPU sharing between AI initiatives and stakeholders. This is done by extending the GPU physical memory to the CPU memory, which is typically an order of magnitude larger than that of the GPU. For more information, see [GPU Memory Swap](../Researcher/scheduling/gpu-memory-swap.md). (Requires minimum cluster version v2.18).

 #### YAML Workload Reference table

@@ -69,19 +69,19 @@ date: 2024-June-14

 #### Data Sources

-* Added *Data Volumes* new feature. Data Volumes are snapshots of datasets stored in Kubernetes Persistent Volume Claims (PVCs). They act as a central repository for training data, and offer several key benefits.
+* Added the new *Data Volumes* feature. Data Volumes are snapshots of datasets stored in Kubernetes Persistent Volume Claims (PVCs). They act as a central repository for training data and offer several key benefits:

     * Managed with dedicated permissions—Data Admins, a new role within Run:ai, have exclusive control over data volume creation, data population, and sharing.
     * Shared between multiple scopes—unlike other Run:ai data sources, data volumes can be shared across projects, departments, or clusters. This promotes data reuse and collaboration within your organization.
     * Coupled to workloads in the submission process—similar to other Run:ai data sources, data volumes can be easily attached to AI workloads during submission, specifying the data path within the workload environment.

-    For more information, see [Data Volumes](../developer/admin-rest-api/data-volumes.md).
+    For more information, see [Data Volumes](../developer/admin-rest-api/data-volumes.md). (Requires minimum cluster version v2.18).

-* Added new data source of type *Secret*. Run:ai now allows you to configure a *Credential* as a data source. A *Data source* of type *Secret* is best used in workloads so that access to 3rd party interfaces and storage used in containers, keep access credentials hidden. For more information, see [Secrets as a data source](../Researcher/user-interface/workspaces/create/create-ds.md/#create-a-secret-as-data-source).
+* Added a new data source of type *Secret*. Run:ai now allows you to configure a *Credential* as a data source. A *Data source* of type *Secret* is best used in workloads so that the access credentials for 3rd party interfaces and storage used in containers are kept hidden. For more information, see [Secrets as a data source](../Researcher/user-interface/workspaces/create/create-ds.md/#create-a-secret-as-data-source).

 #### Credentials

-* Added new *Generic secret* to *Credentials*. *Credentials* had been used only for access to data sources (S3, Git, etc.). However, AI practitioners need to use secrets to access sensitive data (interacting with 3rd party APIs, or other services) without having to put their credentials in their source code. *Generic secrets* leverage multiple key value pairs which helps reduce the number of Kubernetes resources and simplifies resource management by reducing the overhead associated with maintaining multiple Secrets. *Generic secrets* are best used as a data source of type *Secret* so that they can be used in containers to keep access credentials hidden.
+* Added new *Generic secret* to *Credentials*. Previously, *Credentials* were used only for access to data sources (S3, Git, etc.).
However, AI practitioners need to use secrets to access sensitive data (for example, when interacting with 3rd party APIs or other services) without having to put their credentials in their source code. *Generic secrets* hold multiple key-value pairs, which reduces the number of Kubernetes resources and simplifies resource management by reducing the overhead of maintaining multiple Secrets. *Generic secrets* are best used as a data source of type *Secret* so that they can be used in containers while keeping access credentials hidden. (Requires minimum cluster version v2.18).

 #### Single Sign On

From 452100e2e8dac0c6be985906a08601b53a6b6cd3 Mon Sep 17 00:00:00 2001
From: JamieWeider72 <147967555+JamieWeider72@users.noreply.github.com>
Date: Tue, 30 Jul 2024 09:57:36 +0300
Subject: [PATCH 3/5] Update whats-new-2-18.md

---
 docs/home/whats-new-2-18.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/docs/home/whats-new-2-18.md b/docs/home/whats-new-2-18.md
index 1f75cc4c56..193e81ea08 100644
--- a/docs/home/whats-new-2-18.md
+++ b/docs/home/whats-new-2-18.md
@@ -99,6 +99,10 @@ date: 2024-June-14

 * System administrators will need to configure the email notifications. For more information, see [System notifications](../admin/runai-setup/notifications/notifications.md).

+#### Policy for Distributed and Inference workloads in the API
+
+* Added a new API for creating distributed training workload policies and inference workload policies. These policies allow you to set defaults, enforce rules, and impose setup on distributed training and inference workloads. For distributed policies, the worker and the master may require different rules due to their different specifications. The capability is currently available via the API only; documentation on submitting policies will follow shortly.
+
 ## Deprecation Notifications

 The [existing notifications feature](https://docs.run.ai/v2.10/admin/researcher-setup/email-messaging/), which requires cluster configuration, is being deprecated in favor of an improved Notification System. If you have been using the existing notifications feature in the cluster, you can continue to use it for the next **two** versions. It is recommended that you change to the new notifications system in the Control Plane for better control and improved message granularity.

From 01e6b386cca04e0220feaf15657eda8d8cbfca09 Mon Sep 17 00:00:00 2001
From: JamieWeider72 <147967555+JamieWeider72@users.noreply.github.com>
Date: Tue, 30 Jul 2024 11:41:40 +0300
Subject: [PATCH 4/5] RUN-12616 Added new known limitation

---
 docs/home/whats-new-2-18.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/docs/home/whats-new-2-18.md b/docs/home/whats-new-2-18.md
index be0926b93b..0bede7215c 100644
--- a/docs/home/whats-new-2-18.md
+++ b/docs/home/whats-new-2-18.md
@@ -99,6 +99,10 @@ date: 2024-June-14

 * System administrators will need to configure the email notifications. For more information, see [System notifications](../admin/runai-setup/notifications/notifications.md).

+#### Policy for distributed and inference workloads in the API
+
+Added a new API for creating distributed training workload policies and inference workload policies. These policies allow you to set defaults, enforce rules, and impose setup on distributed training and inference workloads. For distributed policies, the worker and the master may require different rules due to their different specifications. The capability is currently available via the API only; documentation on submitting policies will follow shortly.
+
 ## Deprecation Notifications

 The [existing notifications feature](https://docs.run.ai/v2.10/admin/researcher-setup/email-messaging/), which requires cluster configuration, is being deprecated in favor of an improved Notification System. If you have been using the existing notifications feature in the cluster, you can continue to use it for the next **two** versions. It is recommended that you change to the new notifications system in the Control Plane for better control and improved message granularity.

From 6f9dcd9defb6c2d7016f23e0a3230094126913ab Mon Sep 17 00:00:00 2001
From: JamieWeider72 <147967555+JamieWeider72@users.noreply.github.com>
Date: Tue, 30 Jul 2024 13:39:00 +0300
Subject: [PATCH 5/5] RUN-19295 added policy for workloads in api

---
 docs/home/whats-new-2-18.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/home/whats-new-2-18.md b/docs/home/whats-new-2-18.md
index 6a4c840d3c..26dc05a307 100644
--- a/docs/home/whats-new-2-18.md
+++ b/docs/home/whats-new-2-18.md
@@ -101,7 +101,7 @@ date: 2024-June-14

 #### Policy for distributed and inference workloads in the API

-Added a new API for creating distributed training workload policies and inference workload policies. These policies allow you to set defaults, enforce rules, and impose setup on distributed training and inference workloads. For distributed policies, the worker and the master may require different rules due to their different specifications. The capability is currently available via the API only; documentation on submitting policies will follow shortly.
+* Added a new API for creating distributed training workload policies and inference workload policies. These policies allow you to set defaults, enforce rules, and impose setup on distributed training and inference workloads. For distributed policies, the worker and the master may require different rules due to their different specifications. The capability is currently available via the API only; documentation on submitting policies will follow shortly.
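For readers who want to try the inference features described in PATCH 2 (the new submission API, the latency autoscaling metric, and scale-to-zero), the following is a minimal sketch of submitting an inference workload through the REST API. The endpoint path, the payload fields (including the autoscaling keys), and the token handling are illustrative assumptions only; check the published Run:ai API reference for the actual schema.

```python
import requests

BASE_URL = "https://mycompany.run.ai"  # hypothetical tenant URL
TOKEN = "<bearer-token>"  # obtained through your API authentication flow

# Hypothetical payload: an inference workload that scales on the new latency
# metric and is allowed to scale down to zero replicas when idle.
workload = {
    "name": "llm-inference",
    "projectId": "1",
    "clusterId": "<cluster-uuid>",
    "spec": {
        "image": "<inference-image>",
        "compute": {"gpuDevicesRequest": 1},
        "autoscaling": {
            "metric": "latency",     # assumed metric name
            "metricThreshold": 200,  # assumed to be in milliseconds
            "minReplicas": 0,        # scale-to-zero when idle
            "maxReplicas": 4,
        },
    },
}

resp = requests.post(
    f"{BASE_URL}/api/v1/workloads/inferences",  # assumed endpoint path
    json=workload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```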
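The *Generic secret* credential in PATCH 2 corresponds to a single Kubernetes Secret carrying several key-value pairs, rather than one Secret per credential. A minimal sketch of that idea using the standard Kubernetes Python client, with hypothetical resource names and keys:

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

# One Secret holding multiple key-value pairs reduces the number of
# Kubernetes resources that must be created and maintained.
secret = client.V1Secret(
    metadata=client.V1ObjectMeta(
        name="team-credentials",       # hypothetical name
        namespace="runai-my-project",  # hypothetical project namespace
    ),
    type="Opaque",
    string_data={
        "HF_TOKEN": "<hugging-face-token>",
        "WANDB_API_KEY": "<wandb-key>",
        "S3_ACCESS_KEY": "<s3-access-key>",
    },
)
client.CoreV1Api().create_namespaced_secret(namespace="runai-my-project", body=secret)
```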
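PATCH 3 through PATCH 5 describe the new policy API before its documentation is published, so the sketch below is only a guess at its shape: a distributed-training policy whose defaults pre-fill values and whose rules constrain what may be submitted, with the worker and the master specified separately because they may need different rules. The endpoint path and every field name are assumptions; the forthcoming policy documentation is the authority.

```python
import requests

BASE_URL = "https://mycompany.run.ai"  # hypothetical tenant URL
TOKEN = "<bearer-token>"

# Hypothetical policy body: "defaults" pre-fill values at submission time,
# "rules" constrain what a researcher is allowed to set. Worker and master
# sections are separate because they may require different rules.
policy = {
    "meta": {"name": "distributed-training-policy", "scope": "project"},
    "policy": {
        "defaults": {
            "worker": {"compute": {"gpuDevicesRequest": 1}},
            "master": {"compute": {"gpuDevicesRequest": 0}},
        },
        "rules": {
            "worker": {"compute": {"gpuDevicesRequest": {"max": 8}}},
            "master": {"image": {"required": True}},
        },
    },
}

resp = requests.post(
    f"{BASE_URL}/api/v1/policy/distributed",  # assumed endpoint path
    json=policy,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
```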