diff --git a/docs/Researcher/cli-reference/runai-submit-dist-TF.md b/docs/Researcher/cli-reference/runai-submit-dist-TF.md
index 377c66ac58..ed5baaff8a 100644
--- a/docs/Researcher/cli-reference/runai-submit-dist-TF.md
+++ b/docs/Researcher/cli-reference/runai-submit-dist-TF.md
@@ -1,7 +1,5 @@
 ## Description
 
-:octicons-versions-24: Version 2.10 and later.
-
 Submit a distributed TensorFlow training run:ai job to run.
 
 !!! Note
diff --git a/docs/Researcher/cli-reference/runai-submit-dist-pytorch.md b/docs/Researcher/cli-reference/runai-submit-dist-pytorch.md
index c65f3a7835..d6be6fe8a6 100644
--- a/docs/Researcher/cli-reference/runai-submit-dist-pytorch.md
+++ b/docs/Researcher/cli-reference/runai-submit-dist-pytorch.md
@@ -1,7 +1,5 @@
 ## Description
 
-:octicons-versions-24: Version 2.10 and later.
-
 Submit a distributed PyTorch training run:ai job to run.
 
 !!! Note
diff --git a/docs/Researcher/scheduling/using-node-pools.md b/docs/Researcher/scheduling/using-node-pools.md
index 693b9bb168..4d4e3542d1 100644
--- a/docs/Researcher/scheduling/using-node-pools.md
+++ b/docs/Researcher/scheduling/using-node-pools.md
@@ -1,7 +1,5 @@
 # Introduction
 
-:octicons-versions-24: Version 2.8 and up.
-
 Node pools assist in managing heterogeneous resources effectively. A node pool is a set of nodes grouped into a bucket of resources using a predefined (e.g. GPU-Type) or administrator-defined label (key & value). Typically, those nodes share a common feature or property, such as GPU type or another hardware capability (such as Infiniband connectivity), or represent a proximity group (i.e. nodes interconnected via a local ultra-fast switch). Such nodes would typically be used by researchers to run specific workloads on specific resource types, or by MLOps engineers to run specific inference workloads that require specific node types.
@@ -84,8 +82,6 @@ To download the Node-Pools table to a CSV:
 ## Multiple Node Pools Selection
 
-:octicons-versions-24: Version 2.9 and up
-
 Starting with version 2.9, the Run:ai system supports scheduling workloads to a node pool using a **list of prioritized node pools**. The scheduler will try to schedule the workload to the most prioritized node pool first; if that fails, it will try the second one, and so forth. If the scheduler has tried the entire list and failed to schedule the workload, it will start from the most prioritized node pool again. This pattern maximizes the odds that a workload will eventually be scheduled.
 
 ### Defining Project level 'default node pool priority list'
diff --git a/docs/Researcher/user-interface/workspaces/overview.md b/docs/Researcher/user-interface/workspaces/overview.md
index 1f962f3659..af4143831a 100644
--- a/docs/Researcher/user-interface/workspaces/overview.md
+++ b/docs/Researcher/user-interface/workspaces/overview.md
@@ -2,8 +2,6 @@
 # Getting familiar with workspaces
 
-:octicons-versions-24: Version 2.9
-
 A workspace is a simplified tool for researchers to conduct experiments, build AI models, access standard MLOps tools, and collaborate with their peers.
 
 Run:ai workspaces abstract complex concepts related to running containerized workloads in a Kubernetes environment. Aspects such as networking, storage, and secrets are built from predefined, abstracted setups that ease and streamline the researcher's AI model development.
diff --git a/docs/admin/researcher-setup/cluster-wide-pvc.md b/docs/admin/researcher-setup/cluster-wide-pvc.md
index 944219a0fa..9a62ddc6d6 100644
--- a/docs/admin/researcher-setup/cluster-wide-pvc.md
+++ b/docs/admin/researcher-setup/cluster-wide-pvc.md
@@ -1,7 +1,5 @@
 # Cluster wide PVCs
 
-:octicons-versions-24: Version 2.10 and later.
-
 A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod: Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory); claims can request a specific size and access modes. For more information about PVCs, see [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/){target=_blank}.
 
 PVCs are namespace-specific. If your PVC relates to all run:ai Projects, do the following to propagate the PVC to all Projects:
diff --git a/docs/admin/runai-setup/config/allow-external-access-to-containers.md b/docs/admin/runai-setup/config/allow-external-access-to-containers.md
index 18811c834c..e6e6d82627 100644
--- a/docs/admin/runai-setup/config/allow-external-access-to-containers.md
+++ b/docs/admin/runai-setup/config/allow-external-access-to-containers.md
@@ -23,8 +23,6 @@ See [https://kubernetes.io/docs/concepts/services-networking/service](https://ku
 ## Workspaces configuration
 
-:octicons-versions-24: Version 2.9 and up
-
 Version 2.9 introduces [Workspaces](../../../Researcher/user-interface/workspaces/overview.md), which allow the Researcher to build AI models interactively.
 
 Workspaces allow the Researcher to launch tools such as Visual Studio Code, TensorFlow, TensorBoard, etc. These tools require access to the container. Access is provided via URLs.
diff --git a/docs/developer/cluster-api/submit-cron-yaml.md b/docs/developer/cluster-api/submit-cron-yaml.md
index 9e9c74ad4f..b0fbf6e1af 100644
--- a/docs/developer/cluster-api/submit-cron-yaml.md
+++ b/docs/developer/cluster-api/submit-cron-yaml.md
@@ -1,7 +1,5 @@
 # Submit a Cron job via YAML
 
-:octicons-versions-24: Version 2.10 and later.
-
 The cron command-line utility is a job scheduler typically used to set up and maintain software environments at scheduled intervals. Run:ai now supports submitting jobs with cron using a YAML file.
 
 To submit a job using cron, run the following command:
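The last hunk above touches a page that ends by pointing at a cron submission command. As a purely illustrative sketch of the kind of YAML involved (a plain Kubernetes `batch/v1` CronJob with hypothetical names and image, not the Run:ai-specific schema that the page itself goes on to document), such a manifest could look like:

```yaml
# Hypothetical example: a standard Kubernetes batch/v1 CronJob.
# The name, image, and command below are placeholders, and the
# Run:ai-specific fields documented on the page are omitted.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: scheduled-training        # hypothetical job name
spec:
  schedule: "0 2 * * *"           # standard cron syntax: every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: trainer
            image: tensorflow/tensorflow:latest   # hypothetical image
            command: ["python", "train.py"]
          restartPolicy: OnFailure
```

A manifest like this would typically be submitted with `kubectl apply -f <file>.yaml`; the exact command and fields for Run:ai are the ones given on the page itself.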