no-octicons #937

Merged · 1 commit · Aug 8, 2024
2 changes: 0 additions & 2 deletions docs/Researcher/cli-reference/runai-submit-dist-TF.md
@@ -1,7 +1,5 @@
 ## Description
 
-:octicons-versions-24: Version 2.10 and later.
-
 Submit a distributed TensorFlow training run:ai job to run.
 
 !!! Note
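For context, a submission with the command documented in this file might be sketched as follows. This is a hypothetical example: the flag names (`--name`, `--workers`, `-g`, `-i`) and the image are assumptions, not taken from the CLI reference this PR edits.

```shell
# Hypothetical sketch of a distributed TensorFlow submission; flag names
# and image are assumptions, not verified against the runai CLI reference.
JOB_NAME="tf-dist-example"
WORKERS=2
CMD="runai submit-dist tf --name ${JOB_NAME} --workers ${WORKERS} -g 1 -i tensorflow/tensorflow:latest-gpu"
# Print the composed command rather than executing it, so the sketch
# runs without a Run:ai cluster.
echo "${CMD}"
```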
2 changes: 0 additions & 2 deletions docs/Researcher/cli-reference/runai-submit-dist-pytorch.md
@@ -1,7 +1,5 @@
 ## Description
 
-:octicons-versions-24: Version 2.10 and later.
-
 Submit a distributed PyTorch training run:ai job to run.
 
 !!! Note
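Analogously, a PyTorch submission with this command might look like the sketch below; again, the specific flags are assumptions, not verified against the edited reference.

```shell
# Hypothetical sketch of a distributed PyTorch submission; flag names
# and image are assumptions, not verified against the runai CLI reference.
CMD="runai submit-dist pytorch --name pt-dist-example --workers 3 -g 1 -i pytorch/pytorch:latest"
# Printed, not executed, so the sketch runs without a cluster.
echo "${CMD}"
```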
4 changes: 0 additions & 4 deletions docs/Researcher/scheduling/using-node-pools.md
@@ -1,7 +1,5 @@
 # Introduction
 
-:octicons-versions-24: Version 2.8 and up.
-
 Node pools assist in managing heterogeneous resources effectively.
 A node pool is a set of nodes grouped into a bucket of resources using a predefined (e.g. GPU-Type) or administrator-defined label (key & value). Typically, those nodes share a common feature or property, such as GPU type or other HW capability (such as Infiniband connectivity) or represent a proximity group (i.e. nodes interconnected via a local ultra-fast switch). Those nodes would typically be used by researchers to run specific workloads on specific resource types, or by MLops engineers to run specific Inference workloads that require specific node types.
 
@@ -84,8 +82,6 @@ To download the Node-Pools table to a CSV:
 
 ## Multiple Node Pools Selection
 
-:octicons-versions-24: Version 2.9 and up
-
 Starting version 2.9, Run:ai system supports scheduling workloads to a node pool using a **list of prioritized node pools**. The scheduler will try to schedule the workload to the most prioritized node pool first, if it fails, it will try the second one and so forth. If the scheduler tried the entire list and failed to schedule the workload, it will start from the most prioritized node pool again. This pattern allows for maximizing the odds that a workload will be scheduled.
 
 ### Defining Project level 'default node pool priority list'
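The prioritized-list behavior described above might be exercised from the CLI roughly as follows. This is a sketch under assumptions: the `--node-pools` flag, its space-separated value, and the pool names are all hypothetical, not taken from this PR.

```shell
# Hypothetical sketch: submitting with a prioritized node-pool list.
# The --node-pools flag, its space-separated format, and the pool names
# are assumptions, not taken from the Run:ai docs edited here.
POOLS="a100-pool v100-pool"   # scheduler tries a100-pool first, then v100-pool
CMD="runai submit my-job -i ubuntu -g 1 --node-pools '${POOLS}'"
# Printed, not executed, so the sketch runs without a cluster.
echo "${CMD}"
```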
2 changes: 0 additions & 2 deletions docs/Researcher/user-interface/workspaces/overview.md
@@ -2,8 +2,6 @@
 
 # Getting familiar with workspaces
 
-:octicons-versions-24: Version 2.9
-
 Workspace is a simplified tool for researchers to conduct experiments, build AI models, access standard MLOps tools, and collaborate with their peers.
 
 Run:ai workspaces abstract complex concepts related to running containerized workloads in a Kubernetes environment. Aspects such as networking, storage, and secrets, are built from predefined abstracted setups, that ease and streamline the researcher's AI model development.
2 changes: 0 additions & 2 deletions docs/admin/researcher-setup/cluster-wide-pvc.md
@@ -1,7 +1,5 @@
 # Cluster wide PVCs
 
-:octicons-versions-24: Version 2.10 and later.
-
 A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes. For more information about PVCs, see [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/){target=_blank}.
 
 PVCs are namespace-specific. If your PVC relates to all run:ai Projects, do the following to propagate the PVC to all Projects:
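The size and access-mode requests mentioned in this file can be illustrated with a minimal, generic Kubernetes PVC. The names here are placeholders, and nothing Run:ai-specific is shown.

```yaml
# Generic Kubernetes PVC illustrating the size and access-mode requests
# described above; names are placeholders, not from the Run:ai docs.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteMany        # access mode requested by the claim
  resources:
    requests:
      storage: 10Gi        # size requested by the claim
```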
@@ -23,8 +23,6 @@ See [https://kubernetes.io/docs/concepts/services-networking/service](https://kubernetes.io/docs/concepts/services-networking/service)
 
 ## Workspaces configuration
 
-:octicons-versions-24: Version 2.9 and up
-
 Version 2.9 introduces [Workspaces](../../../Researcher/user-interface/workspaces/overview.md) which allow the Researcher to build AI models interactively.
 
 Workspaces allow the Researcher to launch tools such as Visual Studio code, TensorFlow, TensorBoard etc. These tools require access to the container. Access is provided via URLs.
2 changes: 0 additions & 2 deletions docs/developer/cluster-api/submit-cron-yaml.md
@@ -1,7 +1,5 @@
 # Submit a Cron job via YAML
 
-:octicons-versions-24: Version 2.10 and later.
-
 The cron command-line utility is a job scheduler typically used to set up and maintain software environments at scheduled intervals. Run:ai now supports submitting jobs with cron using a YAML file.
 
 To submit a job using cron, run the following command:
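The YAML referred to in this file (which the diff truncates) would presumably resemble a standard Kubernetes CronJob. The sketch below is an assumption: the `schedulerName` value and the `project` label are hypothetical Run:ai-specific fields, not taken from this PR.

```yaml
# Hypothetical sketch of a cron-scheduled job; the Run:ai-specific fields
# (schedulerName, the project label) are assumptions, not from this PR.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-cron
  labels:
    project: team-a                      # assumed Run:ai project label
spec:
  schedule: "0 * * * *"                  # run hourly
  jobTemplate:
    spec:
      template:
        spec:
          schedulerName: runai-scheduler # assumed scheduler name
          containers:
            - name: main
              image: ubuntu
              command: ["echo", "hello"]
          restartPolicy: OnFailure
```

Such a file would typically be applied with `kubectl apply -f <file>.yaml`.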