diff --git a/docs/Researcher/cli-reference/runai-submit-dist-TF.md b/docs/Researcher/cli-reference/runai-submit-dist-TF.md index 10de6137c8..55673681da 100644 --- a/docs/Researcher/cli-reference/runai-submit-dist-TF.md +++ b/docs/Researcher/cli-reference/runai-submit-dist-TF.md @@ -342,7 +342,7 @@ runai submit-dist tf --name distributed-job --workers=2 -g 1 \ #### --node-pools `` > Instructs the scheduler to run this workload using specific set of nodes which are part of a [Node Pool](../../Researcher/scheduling/the-runai-scheduler.md#). You can specify one or more node pools to form a prioritized list of node pools that the scheduler will use to find one node pool that can provide the workload's specification. To use this feature your Administrator will need to label nodes as explained here: [Limit a Workload to a Specific Node Group](../../admin/researcher-setup/limit-to-node-group.md) or use existing node labels, then create a node-pool and assign the label to the node-pool. -> This flag can be used in conjunction with node-type and Project-based affinity. In this case, the flag is used to refine the list of allowable node groups set from a node-pool. For more information see: [Working with Projects](../../admin/admin-ui-setup/project-setup.md). +> This flag can be used in conjunction with node-type and Project-based affinity. In this case, the flag is used to refine the list of allowable node groups set from a node-pool. For more information see: [Working with Projects](../../admin/aiinitiatives/org/projects.md). #### --node-type `` diff --git a/docs/Researcher/cli-reference/runai-submit-dist-mpi.md b/docs/Researcher/cli-reference/runai-submit-dist-mpi.md index f0d6c8c6c2..664d252aa4 100644 --- a/docs/Researcher/cli-reference/runai-submit-dist-mpi.md +++ b/docs/Researcher/cli-reference/runai-submit-dist-mpi.md @@ -341,7 +341,7 @@ You can start an unattended mpi training Job of name dist1, based on Project *te #### --node-pools `` > Instructs the scheduler to run this workload using specific set of nodes which are part of a [Node Pool](../../Researcher/scheduling/the-runai-scheduler.md#). You can specify one or more node pools to form a prioritized list of node pools that the scheduler will use to find one node pool that can provide the workload's specification. To use this feature your Administrator will need to label nodes as explained here: [Limit a Workload to a Specific Node Group](../../admin/researcher-setup/limit-to-node-group.md) or use existing node labels, then create a node-pool and assign the label to the node-pool. -> This flag can be used in conjunction with node-type and Project-based affinity. In this case, the flag is used to refine the list of allowable node groups set from a node-pool. For more information see: [Working with Projects](../../admin/admin-ui-setup/project-setup.md). +> This flag can be used in conjunction with node-type and Project-based affinity. In this case, the flag is used to refine the list of allowable node groups set from a node-pool. For more information see: [Working with Projects](../../admin/aiinitiatives/org/projects.md).
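The following invocation is illustrative only: the job name, image, and pool names are placeholders, and passing the prioritized pool list as a single space-separated, quoted string is an assumption; run `runai submit-dist mpi --help` to confirm the exact syntax for your CLI version:

```shell
# Ask the scheduler to try 'pool-a' first and fall back to 'pool-b'
runai submit-dist mpi --name dist1 --workers=2 -g 1 \
    -i gcr.io/run-ai-demo/quickstart-distributed \
    --node-pools "pool-a pool-b"
```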
#### --node-type `` diff --git a/docs/Researcher/cli-reference/runai-submit-dist-pytorch.md b/docs/Researcher/cli-reference/runai-submit-dist-pytorch.md index 393c7b2610..e6f4d17f3c 100644 --- a/docs/Researcher/cli-reference/runai-submit-dist-pytorch.md +++ b/docs/Researcher/cli-reference/runai-submit-dist-pytorch.md @@ -349,7 +349,7 @@ runai submit-dist pytorch --name distributed-job --workers=2 -g 1 \ #### --node-pools `` > Instructs the scheduler to run this workload using specific set of nodes which are part of a [Node Pool](../../Researcher/scheduling/the-runai-scheduler.md#). You can specify one or more node pools to form a prioritized list of node pools that the scheduler will use to find one node pool that can provide the workload's specification. To use this feature your Administrator will need to label nodes as explained here: [Limit a Workload to a Specific Node Group](../../admin/researcher-setup/limit-to-node-group.md) or use existing node labels, then create a node-pool and assign the label to the node-pool. -> This flag can be used in conjunction with node-type and Project-based affinity. In this case, the flag is used to refine the list of allowable node groups set from a node-pool. For more information see: [Working with Projects](../../admin/admin-ui-setup/project-setup.md). +> This flag can be used in conjunction with node-type and Project-based affinity. In this case, the flag is used to refine the list of allowable node groups set from a node-pool. For more information see: [Working with Projects](../../admin/aiinitiatives/org/projects.md). #### --node-type `` diff --git a/docs/Researcher/cli-reference/runai-submit-dist-xgboost.md b/docs/Researcher/cli-reference/runai-submit-dist-xgboost.md index 59060b5404..db0f01da73 100644 --- a/docs/Researcher/cli-reference/runai-submit-dist-xgboost.md +++ b/docs/Researcher/cli-reference/runai-submit-dist-xgboost.md @@ -333,7 +333,7 @@ runai submit-dist xgboost --name distributed-job --workers=2 -g 1 \ #### --node-pools `` > Instructs the scheduler to run this workload using specific set of nodes which are part of a [Node Pool](../../Researcher/scheduling/the-runai-scheduler.md#). You can specify one or more node pools to form a prioritized list of node pools that the scheduler will use to find one node pool that can provide the workload's specification. To use this feature your Administrator will need to label nodes as explained here: [Limit a Workload to a Specific Node Group](../../admin/researcher-setup/limit-to-node-group.md) or use existing node labels, then create a node-pool and assign the label to the node-pool. -> This flag can be used in conjunction with node-type and Project-based affinity. In this case, the flag is used to refine the list of allowable node groups set from a node-pool. For more information see: [Working with Projects](../../admin/admin-ui-setup/project-setup.md). +> This flag can be used in conjunction with node-type and Project-based affinity. In this case, the flag is used to refine the list of allowable node groups set from a node-pool. For more information see: [Working with Projects](../../admin/aiinitiatives/org/projects.md). 
#### --node-type `` diff --git a/docs/Researcher/cli-reference/runai-submit.md b/docs/Researcher/cli-reference/runai-submit.md index 4426884676..dbf88b582c 100644 --- a/docs/Researcher/cli-reference/runai-submit.md +++ b/docs/Researcher/cli-reference/runai-submit.md @@ -407,7 +407,7 @@ runai submit --job-name-prefix -i gcr.io/run-ai-demo/quickstart -g 1 #### --node-pools `` > Instructs the scheduler to run this workload using specific set of nodes which are part of a [Node Pool](../../Researcher/scheduling/the-runai-scheduler.md#). You can specify one or more node pools to form a prioritized list of node pools that the scheduler will use to find one node pool that can provide the workload's specification. To use this feature your Administrator will need to label nodes as explained here: [Limit a Workload to a Specific Node Group](../../admin/researcher-setup/limit-to-node-group.md) or use existing node labels, then create a node-pool and assign the label to the node-pool. -> This flag can be used in conjunction with node-type and Project-based affinity. In this case, the flag is used to refine the list of allowable node groups set from a node-pool. For more information see: [Working with Projects](../../admin/admin-ui-setup/project-setup.md). +> This flag can be used in conjunction with node-type and Project-based affinity. In this case, the flag is used to refine the list of allowable node groups set from a node-pool. For more information see: [Working with Projects](../../admin/aiinitiatives/org/projects.md). #### --node-type `` diff --git a/docs/Researcher/scheduling/the-runai-scheduler.md b/docs/Researcher/scheduling/the-runai-scheduler.md index 804fea145a..91830918fd 100644 --- a/docs/Researcher/scheduling/the-runai-scheduler.md +++ b/docs/Researcher/scheduling/the-runai-scheduler.md @@ -22,13 +22,13 @@ Projects are quota entities that associate a Project name with a **deserved** GP A Researcher submitting a workload must associate a Project with any workload request. The Run:ai scheduler will then compare the request against the current allocations and the Project's deserved quota and determine whether the workload can be allocated with resources or whether it should remain in a pending state. -For further information on Projects and how to configure them, see: [Working with Projects](../../admin/admin-ui-setup/project-setup.md) +For further information on Projects and how to configure them, see: [Working with Projects](../../admin/aiinitiatives/org/projects.md) ### Departments A *Department* is the second hierarchy of resource allocation above *Project*. A Department quota supersedes a Project quota in the sense that if the sum of Project quotas for Department A exceeds the Department quota -- the scheduler will use the Department quota rather than the Projects' quota. 
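For example (illustrative numbers): if Department A has a quota of 10 GPUs and contains two Projects with a quota of 6 GPUs each, the scheduler guarantees at most 10 GPUs to those Projects combined, not 12.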
-For further information on Departments and how to configure them, see: [Working with Departments](../../admin/admin-ui-setup/department-setup.md) +For further information on Departments and how to configure them, see: [Working with Departments](../../admin/aiinitiatives/org/departments.md) ### Pods diff --git a/docs/Researcher/user-interface/workspaces/create/workspace-v2.md b/docs/Researcher/user-interface/workspaces/create/workspace-v2.md index 4e4184c22a..4c48f4ec05 100644 --- a/docs/Researcher/user-interface/workspaces/create/workspace-v2.md +++ b/docs/Researcher/user-interface/workspaces/create/workspace-v2.md @@ -9,7 +9,7 @@ date: 2024-Jan-7 A Workspace is assigned to a project and is affected by the project’s quota just like any other workload. A workspace is shared with all project members for collaboration. !!! Note - * You must have at least one project configured in the system. To configure a project, see [Creating a project](../../../../admin/admin-ui-setup/project-setup.md#create-a-project). + * You must have at least one project configured in the system. To configure a project, see [Creating a project](../../../../admin/aiinitiatives/org/projects.md#adding-a-new-project). * You must have at least 1 researcher assigned to the project. Use the *Jobs form* below if you have not enabled the *Workloads* feature. diff --git a/docs/admin/admin-ui-setup/overview.md b/docs/admin/admin-ui-setup/overview.md index 5037122619..5dc8fbd9f1 100644 --- a/docs/admin/admin-ui-setup/overview.md +++ b/docs/admin/admin-ui-setup/overview.md @@ -5,7 +5,6 @@ Run:ai provides a single user interface that, depending on your role, serves bot The control-plane part of the tool allows the administrator to: * Analyze cluster status using [dashboards](dashboard-analysis.md). -* Manage Run:ai metadata such as [users](admin-ui-users.md), [departments](department-setup.md), and [projects](project-setup.md). * View Job details to be able to help researchers solve Job-related issues. The researcher workbench part of the tool allows Researchers to submit, delete and pause [Jobs](jobs.md), view Job logs etc. diff --git a/docs/admin/aiinitiatives/org/departments.md b/docs/admin/aiinitiatives/org/departments.md new file mode 100644 index 0000000000..0459b22ca0 --- /dev/null +++ b/docs/admin/aiinitiatives/org/departments.md @@ -0,0 +1,177 @@ + +This article explains the procedure for managing departments. + +Departments are a grouping of projects. By grouping projects into a department, you can set quota limitations to a set of projects, create policies that are applied to the department, and create assets that can be scoped to the whole department or a partial group of descendant projects. + +For example, in an academic environment, a department can be the Physics Department grouping various projects (AI Initiatives) within the department, or grouping projects where each project represents a single student. + +## Departments + +The Departments table can be found under Departments in the Run:ai platform. + +!!! Note + Departments are disabled by default. If you cannot see Departments in the menu, then it must be enabled by your Administrator, under General Settings → Resources → Departments. + +The Departments table lists all departments defined for a specific cluster and allows you to manage them. You can switch between clusters by selecting your cluster using the filter at the top.
+ +![](img/department-list.png) + +The Departments table consists of the following columns: + +| Column | Description | +| :---- | :---- | +| Department | The name of the department | +| Node pool(s) with quota | The node pools associated with this department. By default, all node pools within a cluster are associated with each department. Administrators can change the node pools’ quota parameters for a department. Click the values under this column to view the list of node pools with their parameters (as described below) | +| GPU quota | GPU quota associated with the department | +| Total GPUs for projects | The sum of all projects’ GPU quotas associated with this department | +| Project(s) | List of projects associated with this department | +| Subject(s) | The users, SSO groups, or applications with access to the department. Click the values under this column to view the list of subjects with their parameters (as described below). This column is only viewable if your role in the Run:ai platform allows you those permissions. | +| Allocated GPUs | The total number of GPUs allocated by successfully scheduled workloads in projects associated with this department | +| GPU allocation ratio | The ratio of Allocated GPUs to GPU quota. This number reflects how well the department’s GPU quota is utilized by its descendant projects. A number higher than 100% means the department is using over-quota GPUs. A number lower than 100% means not all projects are utilizing their quotas. A quota becomes allocated once a workload is successfully scheduled. | +| Creation time | The timestamp for when the department was created | +| Workload(s) | The list of workloads under projects associated with this department. Click the values under this column to view the list of workloads with their resource parameters (as described below) | +| Cluster | The cluster that the department is associated with | + +### Customizing the table view + +* Filter - Click ADD FILTER, select the column to filter by, and enter the filter values +* Search - Click SEARCH and type the value to search by +* Sort - Click each column header to sort by +* Column selection - Click COLUMNS and select the columns to display in the table +* Download table - Click MORE and then click Download as CSV + +### Node pools with quota associated with the department + +Click one of the values of Node pool(s) with quota column, to view the list of node pools and their parameters + +| Column | Description | +| :---- | :---- | +| Node pool | The name of the node pool is given by the administrator during node pool creation. All clusters have a default node pool created automatically by the system and named ‘default’. | +| GPU quota | The amount of GPU quota the administrator dedicated to the department for this node pool (floating number, e.g. 2.3 means 230% of a GPU capacity) | +| CPU (Cores) | The amount of CPU (cores) quota the administrator has dedicated to the department for this node pool (floating number, e.g. 1.3 Cores = 1300 milli-cores). The ‘unlimited’ value means the CPU (Cores) quota is not bounded and workloads using this node pool can use as many CPU (Cores) resources as they need (if available) | +| CPU memory | The amount of CPU memory quota the administrator has dedicated to the department for this node pool (floating number, in MB or GB). The ‘unlimited’ value means the CPU memory quota is not bounded and workloads using this node pool can use as much CPU memory resource as they need (if available).
| Allocated GPUs | The total amount of GPUs allocated by workloads using this node pool under projects associated with this department. The number of allocated GPUs may temporarily surpass the GPU quota of the department if over-quota is used. | +| Allocated CPU (Cores) | The total amount of CPUs (cores) allocated by workloads using this node pool under all projects associated with this department. The number of allocated CPUs (cores) may temporarily surpass the CPUs (Cores) quota of the department if over-quota is used. | +| Allocated CPU memory | The actual amount of CPU memory allocated by workloads using this node pool under all projects associated with this department. The number of Allocated CPU memory may temporarily surpass the CPU memory quota if over-quota is used. | + +### Subjects authorized for the department + +Click one of the values of the Subject(s) column, to view the list of subjects and their parameters. This column is only viewable if your role in the Run:ai system affords you those permissions. + +| Column | Description | +| :---- | :---- | +| Subject | A user, SSO group, or application assigned with a role in the scope of this department | +| Type | The type of subject assigned to the access rule (user, SSO group, or application). | +| Scope | The scope of this department within the organizational tree. Click the name of the scope to view the organizational tree diagram. You can only view the parts of the organizational tree for which you have permission to view. | +| Role | The role assigned to the subject, in this department’s scope | +| Authorized by | The user who granted the access rule | +| Last updated | The last time the access rule was updated | + +!!! Note + A role given in a certain scope means the role applies to this scope and any descendant scopes in the organizational tree. + +## Adding a new department + +To create a new Department: + +1. Click +NEW DEPARTMENT +2. Select a __scope__. + By default, the field contains the scope of the current UI context cluster, viewable at the top left side of your screen. You can change the current UI context cluster by clicking the ‘Cluster: cluster-name’ field and applying another cluster as the UI context. Alternatively, you can choose another cluster within the ‘+ New Department’ form by clicking the organizational tree icon on the right side of the scope field, opening the organizational tree and selecting one of the available clusters. +3. Enter a __name__ for the department. Department names must start with a letter and can only contain lower case Latin letters, numbers or a hyphen ('-'). +4. Under __Quota Management__, select a quota for the department. The Quota management section may contain different fields depending on pre-created system configuration. Possible system configurations are: + * Existence of Node Pools + * CPU Quota - Allow setting a quota for CPU resources. + +When no node pools are configured, you can set the following quota parameters: + +* __GPU Devices__ + The number of GPUs you want to allocate for this department (decimal number). This quota is consumed by the department’s subordinated projects. +* __CPUs (cores)__ (when CPU quota is set) + The number of CPU cores you want to allocate for this department (decimal number). This quota is consumed by the department’s subordinated projects +* __CPU memory__ (when CPU quota is set) + The amount of CPU memory you want to allocate for this department (in Megabytes or Gigabytes).
This quota is consumed by the department’s subordinated projects + +When node pools are enabled, it is possible to set the above quota parameters __for each node-pool separately__. + +* In addition, you can decide whether to allow a department to go over-quota. Allowing over-quota at the department level means that one department can receive more resources than its quota when not required by other departments. If the over-quota is disabled, workloads running under subordinated projects are not able to use more resources than the department’s quota, but each project can still go over-quota (if enabled at the project level) up to the department’s quota. + +Unlimited CPU (Cores) and CPU memory quotas are an exception: in this case, workloads of subordinated projects can consume available resources up to the physical limitation of the cluster or any of the node pools. + +Example of Quota management: + +![](img/quota-mgmt.png) + +5. Click CREATE DEPARTMENT + +## Adding an access rule to a department + +To create a new access rule for a department: + +1. Select the department you want to add an access rule for +2. Click ACCESS RULES +3. Click +ACCESS RULE +4. Select a subject +5. Select or enter the subject identifier: + * User Email for a local user created in Run:ai or for an SSO user as recognized by the IDP + * Group name as recognized by the IDP + * Application name as created in Run:ai +6. Select a role +7. Click SAVE RULE +8. Click CLOSE + +## Deleting an access rule from a department + +To delete an access rule from a department: + +1. Select the department you want to remove an access rule from +2. Click ACCESS RULES +3. Find the access rule you would like to delete +4. Click on the trash icon +5. Click CLOSE + +## Editing a department + +1. Select the Department you want to edit +2. Click EDIT +3. Update the Department and click SAVE + +## Viewing a department’s policy + +To view the policy of a department: + +1. Select the department for which you want to view its policies. + This option is only active if the department has defined policies in place. +2. Click VIEW POLICY and select the workload type for which you want to view the policies: + a. Workspace workload type policy with its set of rules + b. Training workload type policy with its set of rules +3. In the Policy form, view the workload rules that are enforcing your department for the selected workload type as well as the defaults: + * Parameter - The workload submission parameter that the Rule and Default are applied to + * Type (applicable for data sources only) - The data source type (Git, S3, nfs, pvc etc.) + * Default - The default value of the Parameter + * Rule - Set up constraints on workload policy fields + * Source - The origin of the applied policy (cluster, department or project) + + +!!! Note + * The policy affecting the department consists of rules and defaults. Some of these rules and defaults may be derived from the policies of a parent cluster (source). You can see the source of each rule in the policy form. + * A policy set for a department affects all subordinated projects and their workloads, according to the policy workload type. + +## Deleting a department + +1. Select the department you want to delete +2. Click DELETE +3. On the dialog, click DELETE to confirm the deletion + +!!!
Note + Deleting a department permanently deletes its subordinated projects and any assets created in the scope of this department or its subordinated projects, such as compute resources, environments, data sources, templates, and credentials. However, workloads running within the department’s subordinated projects keep running, and the policies defined for this department or its subordinated projects remain intact. + +## Reviewing a department + +1. Select the department you want to review +2. Click REVIEW +3. Review and click CLOSE + +## Using API + +Go to the [Departments](https://app.run.ai/api/docs#tag/Departments) API reference to view the available actions. + diff --git a/docs/admin/aiinitiatives/org/img/department-list.png b/docs/admin/aiinitiatives/org/img/department-list.png new file mode 100644 index 0000000000..a951460872 Binary files /dev/null and b/docs/admin/aiinitiatives/org/img/department-list.png differ diff --git a/docs/admin/aiinitiatives/org/img/project-list.png b/docs/admin/aiinitiatives/org/img/project-list.png new file mode 100644 index 0000000000..80682654fd Binary files /dev/null and b/docs/admin/aiinitiatives/org/img/project-list.png differ diff --git a/docs/admin/aiinitiatives/org/img/quota-mgmt.png b/docs/admin/aiinitiatives/org/img/quota-mgmt.png new file mode 100644 index 0000000000..b309f49454 Binary files /dev/null and b/docs/admin/aiinitiatives/org/img/quota-mgmt.png differ diff --git a/docs/admin/aiinitiatives/org/projects.md b/docs/admin/aiinitiatives/org/projects.md new file mode 100644 index 0000000000..6010d5b94b --- /dev/null +++ b/docs/admin/aiinitiatives/org/projects.md @@ -0,0 +1,231 @@ + +This article explains the procedure to manage Projects. + +Researchers submit AI workloads. To streamline resource allocation and prioritize work, Run:ai introduces the concept of Projects. Projects are the tool for implementing resource allocation policies as well as the segregation between different initiatives. A project may represent a team, an individual, or an initiative that shares resources or has a specific resource quota. Projects may be aggregated in Run:ai [departments](departments.md). + +For example, you may have several people involved in a specific face-recognition initiative collaborating under one project named “face-recognition-2024”. Alternatively, you can have a project per person in your team, where each member receives their own quota. + +## Projects table + +The Projects table can be found under Projects in the Run:ai platform. + +The Projects table provides a list of all projects defined for a specific cluster, and allows you to manage them. You can switch between clusters by selecting your cluster using the filter at the top. + +![](img/project-list.png) + +The Projects table consists of the following columns: + +| Column | Description | +| :---- | :---- | +| Project | The name of the project | +| Department | The name of the parent department. Several projects may be grouped under a department. | +| Status | The Project creation status. Projects are manifested as Kubernetes namespaces. The project status represents the Namespace creation status. | +| Node pool(s) with quota | The node pools associated with the project. By default, a new project is associated with all node pools within its associated cluster. Administrators can change the node pools’ quota parameters for a project.
Click the values under this column to view the list of node pools with their parameters (as described below) | +| Subject(s) | The users, SSO groups, or applications with access to the project. Click the values under this column to view the list of subjects with their parameters (as described below). This column is only viewable if your role in the Run:ai platform allows you those permissions. | +| Allocated GPUs | The total number of GPUs allocated by successfully scheduled workloads under this project | +| GPU allocation ratio | The ratio of Allocated GPUs to GPU quota. This number reflects how well the project’s GPU quota is utilized by its descendant workloads. A number higher than 100% indicates the project is using over-quota GPUs. | +| GPU quota | The GPU quota allocated to the project. This number represents the sum of all node pools’ GPU quota allocated to this project. | +| Allocated CPUs (Cores) | The total number of CPU cores allocated by workloads submitted within this project. (This column is only available if the CPU Quota setting is enabled, as described below). | +| Allocated CPU Memory | The total amount of CPU memory allocated by successfully scheduled workloads under this project. (This column is only available if the CPU Quota setting is enabled, as described below). | +| CPU quota (Cores) | CPU quota allocated to this project. (This column is only available if the CPU Quota setting is enabled, as described below). This number represents the sum of all node pools’ CPU quota allocated to this project. The ‘unlimited’ value means the CPU (cores) quota is not bounded and workloads using this project can use as many CPU (cores) resources as they need (if available). | +| CPU memory quota | CPU memory quota allocated to this project. (This column is only available if the CPU Quota setting is enabled, as described below). This number represents the sum of all node pools’ CPU memory quota allocated to this project. The ‘unlimited’ value means the CPU memory quota is not bounded and workloads using this Project can use as much CPU memory resources as they need (if available). | +| CPU allocation ratio | The ratio of Allocated CPUs (cores) to CPU quota (cores). This number reflects how much the project’s ‘CPU quota’ is utilized by its descendant workloads. A number higher than 100% indicates the project is using over-quota CPU cores. | +| CPU memory allocation ratio | The ratio of Allocated CPU memory to CPU memory quota. This number reflects how well the project’s ‘CPU memory quota’ is utilized by its descendant workloads. A number higher than 100% indicates the project is using over-quota CPU memory. | +| Node affinity of training workloads | The list of Run:ai node-affinities. Any training workload submitted within this project must specify one of those Run:ai node affinities, otherwise it is not submitted. | +| Node affinity of interactive workloads | The list of Run:ai node-affinities. Any interactive (workspace) workload submitted within this project must specify one of those Run:ai node affinities, otherwise it is not submitted. | +| Idle time limit of training workloads | The time in days:hours:minutes after which the project stops a training workload not using its allocated GPU resources. | +| Idle time limit of preemptible workloads | The time in days:hours:minutes after which the project stops a preemptible interactive (workspace) workload not using its allocated GPU resources.
| Idle time limit of non-preemptible workloads | The time in days:hours:minutes after which the project stops a non-preemptible interactive (workspace) workload not using its allocated GPU resources. | +| Interactive workloads time limit | The duration in days:hours:minutes after which the project stops an interactive (workspace) workload | +| Training workloads time limit | The duration in days:hours:minutes after which the project stops a training workload | +| Creation time | The timestamp for when the project was created | +| Workload(s) | The list of workloads associated with the project. Click the values under this column to view the list of workloads with their resource parameters (as described below). | +| Cluster | The cluster that the project is associated with | + +### Node pools with quota associated with the project + +Click one of the values of Node pool(s) with quota column, to view the list of node pools and their parameters + +| Column | Description | +| :---- | :---- | +| Node pool | The name of the node pool is given by the administrator during node pool creation. All clusters have a default node pool created automatically by the system and named ‘default’. | +| GPU quota | The amount of GPU quota the administrator dedicated to the project for this node pool (floating number, e.g. 2.3 means 230% of GPU capacity). | +| CPU (Cores) | The amount of CPUs (cores) quota the administrator has dedicated to the project for this node pool (floating number, e.g. 1.3 Cores = 1300 milli-cores). The ‘unlimited’ value means the CPU (Cores) quota is not bounded and workloads using this node pool can use as many CPU (Cores) resources as they require (if available). | +| CPU memory | The amount of CPU memory quota the administrator has dedicated to the project for this node pool (floating number, in MB or GB). The ‘unlimited’ value means the CPU memory quota is not bounded and workloads using this node pool can use as much CPU memory resource as they need (if available). | +| Allocated GPUs | The actual amount of GPUs allocated by workloads using this node pool under this project. The number of allocated GPUs may temporarily surpass the GPU quota if over-quota is used. | +| Allocated CPU (Cores) | The actual amount of CPUs (cores) allocated by workloads using this node pool under this project. The number of allocated CPUs (cores) may temporarily surpass the CPUs (Cores) quota if over-quota is used. | +| Allocated CPU memory | The actual amount of CPU memory allocated by workloads using this node pool under this Project. The number of Allocated CPU memory may temporarily surpass the CPU memory quota if over-quota is used. | +| Order of priority | The default order in which the Scheduler uses node-pools to schedule a workload. This is used only if the order of priority of node pools is not set in the workload during submission, either by an admin policy or the user. An empty value means the node pool is not part of the project’s default list, but can still be chosen by an admin policy or the user during workload submission | + +### Subjects authorized for the project + +Click one of the values in the Subject(s) column, to view the list of subjects and their parameters. This column is only viewable if your role in the Run:ai system affords you those permissions.
| Column | Description | +| :---- | :---- | +| Subject | A user, SSO group, or application assigned with a role in the scope of this Project | +| Type | The type of subject assigned to the access rule (user, SSO group, or application) | +| Scope | The scope of this project in the organizational tree. Click the name of the scope to view the organizational tree diagram. You can only view the parts of the organizational tree for which you have permission to view. | +| Role | The role assigned to the subject, in this project’s scope | +| Authorized by | The user who granted the access rule | +| Last updated | The last time the access rule was updated | + +### Workloads associated with the project + +Click one of the values of Workload(s) column, to view the list of workloads and their parameters + +| Column | Description | +| :---- | :---- | +| Workload | The name of the workload, given during its submission. Optionally, an icon describing the type of workload is also visible | +| Type | The type of the workload, e.g. Workspace, Training, Inference | +| Status | The state of the workload and time elapsed since the last status change | +| Created by | The subject that created this workload | +| Running / requested pods | The number of running pods out of the number of requested pods for this workload. For example, a distributed workload requesting 4 pods may be in a state where only 2 are running and 2 are pending | +| Creation time | The date and time the workload was created | +| GPU compute request | The amount of GPU compute requested (floating number, represents either a portion of the GPU compute, or the number of whole GPUs requested) | +| GPU memory request | The amount of GPU memory requested (floating number, can either be presented as a portion of the GPU memory, an absolute memory size in MB or GB, or a MIG profile) | +| CPU memory request | The amount of CPU memory requested (floating number, presented as an absolute memory size in MB or GB) | +| CPU compute request | The amount of CPU compute requested (floating number, represents the number of requested Cores) | + +### Customizing the table view + +* Filter - Click ADD FILTER, select the column to filter by, and enter the filter values +* Search - Click SEARCH and type the value to search by +* Sort - Click each column header to sort by +* Column selection - Click COLUMNS and select the columns to display in the table +* Download table - Click MORE and then click Download as CSV + +## Adding a new project + +To create a new Project: + +1. Click +NEW PROJECT +2. Select a scope. You can only view clusters for which you have permission, within the scope of the roles assigned to you +3. Enter a name for the project + Project names must start with a letter and can only contain lower case Latin letters, numbers or a hyphen ('-') +4. Namespace associated with Project + Each project has an associated (Kubernetes) namespace in the cluster. + All workloads under this project use this namespace. + a. By default, Run:ai creates a namespace based on the Project name (in the form of `runai-`) + b. Alternatively, you can choose an existing namespace created for you by the cluster administrator +5. In the Quota management section, you can set the quota parameters and prioritize resources + * Order of priority + This column is displayed only if more than one node pool exists. The default order in which the Scheduler uses node pools to schedule a workload.
This means the Scheduler first tries to allocate resources using the highest priority node pool, then the next in priority, until it reaches the lowest priority node pool in the list, and then the Scheduler starts from the highest again. The Scheduler uses the Project list of prioritized node pools, only if the order of priority of node pools is not set in the workload during submission, either by an admin policy or by the user. An empty value means the node pool is not part of the Project’s default node pool priority list, but a node pool can still be chosen by the admin policy or a user during workload submission + * Node pool + This column is displayed only if more than one node pool exists. It represents the name of the node pool. + * GPU devices + The number of GPUs you want to allocate for this project in this node pool (decimal number). + * CPUs (Cores) + This column is displayed only if CPU quota is enabled via the General settings. + Represents the number of CPU cores you want to allocate for this project in this node pool (decimal number). + * CPU memory + This column is displayed only if CPU quota is enabled via the General settings. + The amount of CPU memory you want to allocate for this project in this node pool (in Megabytes or Gigabytes). + * Over quota / Over quota priority + If over-quota priority is enabled via the General settings then over-quota priority is presented, otherwise over-quota is presented + * Over quota + When enabled, the project can use non-guaranteed overage resources above its quota in this node pool. The amount of the non-guaranteed overage resources for this project is calculated proportionally to the project quota in this node pool. When disabled, the project cannot use more resources than the guaranteed quota in this node pool. + * Over quota priority + Represents a weight used to calculate the amount of non-guaranteed overage resources a project can get on top of its quota in this node pool. All unused resources are split between projects that require the use of overage resources: + * Medium + The default value. The Admin can change the default to any of the following values: High, Low, Lowest, or None. + * None + When set, the project cannot use more resources than the guaranteed quota in this node pool. + * Lowest + Over-quota priority ‘lowest’ has a unique behavior: because its weight is 0, it can only use over-quota (unused overage) resources if no other project needs them, and any project with a higher over-quota priority can take the overage resources at any time. + +!!! Note + Setting the quota to 0 (either GPU, CPU, or CPU memory) and the over-quota to ‘disabled’ or over-quota priority to ‘none’ means the project is blocked from using those resources on this node pool. + +When no node pools are configured, you can set the same parameters, but they apply to the whole project instead of per node pool. + +After node pools are created, you can set the above parameters __for each node-pool separately__. + +![](img/quota-mgmt.png) + + +6. Set Scheduling rules as required. You can have a scheduling rule for: + * Idle GPU timeout + Preempt a workload that does not use GPUs for more than a specified duration. You can apply a single rule per workload type - Preemptive Workspaces, Non-preemptive Workspaces, and Training. + + !!! Note + To make ‘Idle GPU timeout’ effective, it must be set to a shorter duration than the workload duration of the same workload type. + + * Workspace duration + Preempt workspaces after a specified duration.
This applies to both preemptive and non-preemptive Workspaces. + * Training duration + Preempt a training workload after a specified duration. + * Node type (Affinity) + Node type is used to select a group of nodes, usually with specific characteristics such as a hardware feature, storage type, fast networking interconnection, etc. The scheduler uses node type as an indication of which nodes should be used for your workloads, within this project. + Node type is a label in the form of `run.ai/type` and a value (e.g. `run.ai/type = dgx200`) that the administrator uses to tag a set of nodes. Adding the node type to the project’s scheduling rules enables the user to submit workloads with any node type label/value pairs in this list, according to the workload type - Workspace or Training. The Scheduler then schedules workloads using a node selector, targeting nodes tagged with the Run:ai node type label/value pair. Node pools and a node type can be used in conjunction with each other. For example, specifying a node pool and a smaller group of nodes from that node pool that includes a fast SSD memory or other unique characteristics. + +7. Click CREATE PROJECT + +## Adding an access rule to a project + +To create a new access rule for a project: + +1. Select the project you want to add an access rule for +2. Click ACCESS RULES +3. Click +ACCESS RULE +4. Select a subject +5. Select or enter the subject identifier: + 1. User Email for a local user created in Run:ai or for an SSO user as recognized by the IDP + 2. Group name as recognized by the IDP + 3. Application name as created in Run:ai +6. Select a role +7. Click SAVE RULE +8. Click CLOSE + +## Deleting an access rule from a project + +To delete an access rule from a project: + +1. Select the project you want to remove an access rule from +2. Click ACCESS RULES +3. Find the access rule you want to delete +4. Click on the trash icon +5. Click CLOSE + +## Editing a project + +To edit a project: + +1. Select the project you want to edit +2. Click EDIT +3. Update the Project and click SAVE + +## Viewing a project’s policy + +To view the policy of a project: + +1. Select the project for which you want to view its policies. This option is only active for projects with defined policies in place. +2. Click VIEW POLICY and select the workload type for which you want to view the policies: + a. Workspace workload type policy with its set of rules + b. Training workload type policy with its set of rules +3. In the Policy form, view the workload rules that are enforcing your project for the selected workload type as well as the defaults: + * Parameter - The workload submission parameter that Rules and Defaults are applied to + * Type (applicable for data sources only) - The data source type (Git, S3, nfs, pvc etc.) + * Default - The default value of the Parameter + * Rule - Set up constraints on workload policy fields + * Source - The origin of the applied policy (cluster, department or project) + +!!! Note + The policy affecting the project consists of rules and defaults. Some of these rules and defaults may be derived from policies of a parent cluster and/or department (source). You can see the source of each rule in the policy form. + +## Deleting a project + +To delete a project: + +1. Select the project you want to delete +2. Click DELETE +3. On the dialog, click DELETE to confirm the deletion + +!!!
Note + Deleting a project does not delete its associated namespace, any of the workloads running in this namespace, or the policies defined for this project. However, any assets created in the scope of this project, such as compute resources, environments, data sources, templates and credentials, are permanently deleted from the system. + +## Using API + +Go to the [Projects](https://app.run.ai/api/docs#tag/Projects) API reference to view the available actions. + diff --git a/docs/admin/aiinitiatives/resources/node-pools.md b/docs/admin/aiinitiatives/resources/node-pools.md new file mode 100644 index 0000000000..e69de29bb2 diff --git a/docs/admin/aiinitiatives/resources/nodes.md b/docs/admin/aiinitiatives/resources/nodes.md new file mode 100644 index 0000000000..e69de29bb2 diff --git a/docs/admin/researcher-setup/limit-to-node-group.md b/docs/admin/researcher-setup/limit-to-node-group.md index 7747405504..b4726477d0 100644 --- a/docs/admin/researcher-setup/limit-to-node-group.md +++ b/docs/admin/researcher-setup/limit-to-node-group.md @@ -90,6 +90,6 @@ See the [runai submit](../../Researcher/cli-reference/runai-submit.md) documenta Node Pools are automatically assigned to all Projects and Departments with zero resource allocation as default. Allocating resources to a node pool can be done for each Project and Department. Submitting a workload to a node pool that has zero allocation for a specific project (or department) results in that workload running as an over-quota workload. -To assign and configure specific node affinity groups or node pools to a Project see [working with Projects](../admin-ui-setup/project-setup.md). +To assign and configure specific node affinity groups or node pools to a Project, see [working with Projects](../aiinitiatives/org/projects.md). When the command-line interface flag is used in conjunction with Project-based affinity, the flag is used to refine the list of allowable node groups set in the Project. \ No newline at end of file diff --git a/docs/admin/researcher-setup/researcher-setup-intro.md b/docs/admin/researcher-setup/researcher-setup-intro.md index fa77b5c9e4..19f402ce3d 100644 --- a/docs/admin/researcher-setup/researcher-setup-intro.md +++ b/docs/admin/researcher-setup/researcher-setup-intro.md @@ -14,7 +14,7 @@ Run:ai CLI needs to be installed on the Researcher's machine. This [document](cl ## Provide the Researcher with a GPU Quota -To submit workloads with Run:ai, the Researcher must be provided with a _Project_ that contains a GPU quota. Please see [Working with Projects](../admin-ui-setup/project-setup.md) document on how to create Projects and set a quota. +To submit workloads with Run:ai, the Researcher must be provided with a _Project_ that contains a GPU quota. Please see the [Working with Projects](../aiinitiatives/org/projects.md) document on how to create Projects and set a quota. ## Provide access to the Run:ai User Interface diff --git a/docs/admin/runai-setup/cluster-setup/cluster-install.md b/docs/admin/runai-setup/cluster-setup/cluster-install.md index f3eac335f2..6089856027 100644 --- a/docs/admin/runai-setup/cluster-setup/cluster-install.md +++ b/docs/admin/runai-setup/cluster-setup/cluster-install.md @@ -88,6 +88,6 @@ To perform these tasks. See [Set Node Roles](../config/node-roles.md). ## Next Steps * Set up Run:ai Users [Working with Users](../../admin-ui-setup/admin-ui-users.md). -* Set up Projects for Researchers [Working with Projects](../../admin-ui-setup/project-setup.md).
+* Set up Projects for Researchers [Working with Projects](../../aiinitiatives/org/projects.md). * Set up Researchers to work with the Run:ai Command-line interface (CLI). See [Installing the Run:ai Command-line Interface](../../researcher-setup/cli-install.md) on how to install the CLI for users. * Review [advanced setup and maintenance](../config/overview.md) scenarios. diff --git a/docs/admin/runai-setup/cluster-setup/customize-cluster-install.md b/docs/admin/runai-setup/cluster-setup/customize-cluster-install.md index 8a26760765..94a8f58d77 100644 --- a/docs/admin/runai-setup/cluster-setup/customize-cluster-install.md +++ b/docs/admin/runai-setup/cluster-setup/customize-cluster-install.md @@ -62,6 +62,6 @@ There are a couple of use cases that customers will want to disable this feature Follow these steps to achieve this: 1. Disable the namespace creation functionality. See the `runai-operator.config.project-controller.createNamespaces` flag above. -2. [Create a Project](../../admin-ui-setup/project-setup.md#create-a-project) using the Run:ai User Interface. +2. [Create a Project](../../aiinitiatives/org/projects.md#adding-a-new-project) using the Run:ai User Interface. 3. Create the namespace if needed by running: `kubectl create ns `. The suggested Run:ai default is `runai-`. 4. Label the namespace to connect it to the Run:ai Project by running `kubectl label ns runai/queue=`, where `` is the name of the project you have created in the Run:ai user interface above and `` is the name you chose for your namespace. diff --git a/docs/admin/runai-setup/cluster-setup/dgx-bundle.md b/docs/admin/runai-setup/cluster-setup/dgx-bundle.md index 781ccf03e6..774d46ce90 100644 --- a/docs/admin/runai-setup/cluster-setup/dgx-bundle.md +++ b/docs/admin/runai-setup/cluster-setup/dgx-bundle.md @@ -67,7 +67,7 @@ Post installation, you will want to: * (Mandatory) Set up [Researcher Access Control](../authentication/researcher-authentication.md). Without this, the Job Submit form will not work. * Set up Run:ai Users [Working with Users](../../admin-ui-setup/admin-ui-users.md). -* Set up Projects for Researchers [Working with Projects](../../admin-ui-setup/project-setup.md). +* Set up Projects for Researchers [Working with Projects](../../aiinitiatives/org/projects.md). ## Troubleshooting diff --git a/docs/admin/runai-setup/config/node-affinity-with-cloud-node-pools.md b/docs/admin/runai-setup/config/node-affinity-with-cloud-node-pools.md index 31f77feef5..f1c465a18f 100644 --- a/docs/admin/runai-setup/config/node-affinity-with-cloud-node-pools.md +++ b/docs/admin/runai-setup/config/node-affinity-with-cloud-node-pools.md @@ -1,6 +1,6 @@ # Node affinity with cloud node pools -Run:ai allows for [node affinity](../../admin-ui-setup/project-setup.md#other-project-properties). Node affinity is the ability to assign a Project to run on specific nodes. +Run:ai allows for [node affinity](../../aiinitiatives/org/projects.md). Node affinity is the ability to assign a Project to run on specific nodes. To use the node affinity feature, You will need to label the target nodes with the label `run.ai/node-type`. Most cloud clusters allow configuring node labels for the node pools in the cluster. This guide shows how to apply this configuration to different cloud providers. To make the node affinity work with node pools on various cloud providers, we need to make sure the node pools are configured with the appropriate Kubernetes label (`run.ai/type=`). 
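Where the cloud provider's node-pool settings are not available, the same label can also be applied directly with kubectl for a quick test. This is a sketch only: the node name is a placeholder, and the `dgx200` value is the illustrative value used in the Projects documentation above.

```shell
# Tag a node so a Project's node-type (affinity) scheduling rule can match it
kubectl label node <node-name> run.ai/type=dgx200
```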
diff --git a/docs/admin/runai-setup/self-hosted/k8s/project-management.md b/docs/admin/runai-setup/self-hosted/k8s/project-management.md index 5917bc2be5..96293260f8 100644 --- a/docs/admin/runai-setup/self-hosted/k8s/project-management.md +++ b/docs/admin/runai-setup/self-hosted/k8s/project-management.md @@ -3,7 +3,7 @@ title: Self Hosted installation over Kubernetes - Create Projects --- ## Introduction -The Administrator creates Run:ai Projects via the [Run:ai user interface](../../../admin-ui-setup/project-setup.md#create-a-project). When enabling [Researcher Authentication](../../authentication/researcher-authentication.md) you also assign users to Projects. +The Administrator creates Run:ai Projects via the [Run:ai user interface](../../../aiinitiatives/org/projects.md#adding-a-new-project). When enabling [Researcher Authentication](../../authentication/researcher-authentication.md) you also assign users to Projects. Run:ai Projects are implemented as Kubernetes namespaces. When creating a new Run:ai Project, Run:ai does the following automatically: diff --git a/docs/admin/runai-setup/self-hosted/ocp/project-management.md b/docs/admin/runai-setup/self-hosted/ocp/project-management.md index 7ef4285d0d..6d6ed97d39 100644 --- a/docs/admin/runai-setup/self-hosted/ocp/project-management.md +++ b/docs/admin/runai-setup/self-hosted/ocp/project-management.md @@ -3,7 +3,7 @@ title: Self Hosted installation over OpenShift - Create Projects --- ## Introduction -The Administrator creates Run:ai Projects via the [Run:ai User Interface](../../../admin-ui-setup/project-setup.md#create-a-project). When enabling [Researcher Authentication](../../authentication/researcher-authentication.md) you also assign users to Projects. +The Administrator creates Run:ai Projects via the [Run:ai User Interface](../../../aiinitiatives/org/projects.md#adding-a-new-project). When enabling [Researcher Authentication](../../authentication/researcher-authentication.md) you also assign users to Projects. Run:ai Projects are implemented as Kubernetes namespaces. When creating a new Run:ai Project, Run:ai does the following automatically: diff --git a/docs/home/whats-new-2-13.md b/docs/home/whats-new-2-13.md index 9ddca470a2..32a09a3cd3 100644 --- a/docs/home/whats-new-2-13.md +++ b/docs/home/whats-new-2-13.md @@ -60,7 +60,7 @@ This version contains features and fixes from previous versions starting with 2. **Projects** -* Improved the **Projects** UI for ease of use. **Projects** follows UI upgrades and changes that are designed to make setting up of components and assets easier for administrators and researchers. To configure a project, see [Projects](../admin/admin-ui-setup/project-setup.md). +* Improved the **Projects** UI for ease of use. **Projects** follows UI upgrades and changes that are designed to make setting up of components and assets easier for administrators and researchers. To configure a project, see [Projects](../admin/aiinitiatives/org/projects.md). **Dashboards** diff --git a/docs/home/whats-new-2-16.md b/docs/home/whats-new-2-16.md index 1d07afb153..f00f2aa528 100644 --- a/docs/home/whats-new-2-16.md +++ b/docs/home/whats-new-2-16.md @@ -14,7 +14,7 @@ date: 2023-Dec-4 #### Jobs, Workloads, and Workspaces -* Added the capability view and edit policies directly in the project submission form. Pressing on *Policy* will open a window that displays the effective policy. For more information, see [Viewing Project Policies](../admin/admin-ui-setup/project-setup.md#viewing-project-policies). 
+* Added the capability to view and edit policies directly in the project submission form. Pressing on *Policy* will open a window that displays the effective policy. For more information, see [Viewing Project Policies](../admin/aiinitiatives/org/projects.md#viewing-a-projects-policy). * Running machine learning workloads effectively on Kubernetes can be difficult, but Run:ai makes it easy. The new *Workloads* experience introduces a simpler and more efficient way to manage machine learning workloads, which will appeal to data scientists and engineers alike. The *Workloads* experience provides a fast, reliable, and easy to use unified interface. @@ -60,7 +60,7 @@ date: 2023-Dec-4 * Added new *Policy Manager. The new *Policy Manager* provides administrators the ability to impose restrictions and default vaules on system resources. The new *Policy Manager* provides a YAML editor for configuration of the policies. Administrators can easily add both *Workspace* or *Training* policies. The editor makes it easy to see the configuration that has been applied and provides a quick and easy method to edit the policies. The new *Policy Editor* brings other important policy features such as the ability to see non-compliant resources in workloads. For more information, see [Policies](../admin/workloads/policies/README.md#policies). -* Added a new policy manager. Enabling the *New Policy Manager* provides new tools to discover how resources are not compliant. Non-compliant resources and will appear greyed out and cannot be selected. To see how a resource is not compliant, press on the clipboard icon in the upper right hand corner of the resource. Policies can also be applied to specific scopes within the Run:ai platform. For more information, see [Viewing Project Policies](../admin/admin-ui-setup/project-setup.md#viewing-project-policies). +* Added a new policy manager. Enabling the *New Policy Manager* provides new tools to discover how resources are not compliant. Non-compliant resources will appear greyed out and cannot be selected. To see how a resource is not compliant, press on the clipboard icon in the upper right hand corner of the resource. Policies can also be applied to specific scopes within the Run:ai platform. For more information, see [Viewing Project Policies](../admin/aiinitiatives/org/projects.md#viewing-a-projects-policy). ### Control and Visibility diff --git a/docs/home/whats-new-2-17.md b/docs/home/whats-new-2-17.md index 62518e4292..8ff030690a 100644 --- a/docs/home/whats-new-2-17.md +++ b/docs/home/whats-new-2-17.md @@ -19,7 +19,7 @@ date: 2024-Apr-14 * Added the *GPU Resource Optimization* feature to the UI. Now you can enable and configure *GPU Portion (Fraction) limit* and *GPU Memory Limit* from the UI. For more information, see [Compute resources UI with Dynamic Fractions](../Researcher/scheduling/dynamic-gpu-fractions.md#compute-reources-ui-with-dynamic-fractions-support). -* Added the ability to set Run:ai as the default scheduler for any project or namespace. This provides the administrator the ability to ensure that all workloads in a project or namespace are scheduled using the Run:ai scheduler. For more information, see [Setting Run:ai as default scheduler](../admin/admin-ui-setup/project-setup.md). +* Added the ability to set Run:ai as the default scheduler for any project or namespace. This provides the administrator the ability to ensure that all workloads in a project or namespace are scheduled using the Run:ai scheduler.
For more information, see [Setting Run:ai as default scheduler](../admin/aiinitiatives/org/projects.md). #### Jobs, Workloads, and Workspaces diff --git a/docs/snippets/common-submit-cli-commands.md b/docs/snippets/common-submit-cli-commands.md index 6a97c74ebc..3a689895e6 100644 --- a/docs/snippets/common-submit-cli-commands.md +++ b/docs/snippets/common-submit-cli-commands.md @@ -272,7 +272,7 @@ #### --node-pools `` > Instructs the scheduler to run this workload using specific set of nodes which are part of a [Node Pool](../Researcher/scheduling/the-runai-scheduler.md#). You can specify one or more node pools to form a prioritized list of node pools that the scheduler will use to find one node pool that can provide the workload's specification. To use this feature your Administrator will need to label nodes as explained here: [Limit a Workload to a Specific Node Group](../admin/researcher-setup/limit-to-node-group.md) or use existing node labels, then create a node-pool and assign the label to the node-pool. -> This flag can be used in conjunction with node-type and Project-based affinity. In this case, the flag is used to refine the list of allowable node groups set from a node-pool. For more information see: [Working with Projects](../admin/admin-ui-setup/project-setup.md). +> This flag can be used in conjunction with node-type and Project-based affinity. In this case, the flag is used to refine the list of allowable node groups set from a node-pool. For more information see: [Working with Projects](../admin/aiinitiatives/org/projects.md). #### --node-type `` diff --git a/docs/admin/admin-ui-setup/department-setup.md b/graveyard/department-setup.md similarity index 100% rename from docs/admin/admin-ui-setup/department-setup.md rename to graveyard/department-setup.md diff --git a/docs/admin/admin-ui-setup/project-setup.md b/graveyard/project-setup.md similarity index 100% rename from docs/admin/admin-ui-setup/project-setup.md rename to graveyard/project-setup.md diff --git a/mkdocs.yml b/mkdocs.yml index c53db7b3a7..5ea4ae4086 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -111,6 +111,8 @@ plugins: 'admin/runai-setup/authentication/sso.md' : 'admin/runai-setup/authentication/authentication-overview.md' 'admin/researcher-setup/cli-troubleshooting.md' : 'admin/troubleshooting/troubleshooting.md' 'developer/deprecated/inference/submit-via-yaml.md' : 'developer/cluster-api/other-resources.md' + 'admin/admin-ui-setup/project-setup.md' : 'admin/aiinitiatives/org/projects.md' + 'admin/admin-ui-setup/department-setup.md' : 'admin/aiinitiatives/org/departments.md' nav: - Home: - 'Overview': 'index.md' @@ -213,11 +215,17 @@ nav: - 'Submitting Workloads' : 'admin/workloads/submitting-workloads.md' - 'Managing AI Intiatives' : - 'Overview' : 'admin/aiinitiatives/overview.md' + - 'Managing your Organization' : + - 'Projects' : 'admin/aiinitiatives/org/projects.md' + - 'Departments' : 'admin/aiinitiatives/org/departments.md' + # - 'Managing your resources' : + # - 'Nodes' : 'admin/aiinitiatives/resources/nodes.md' + # - 'Node Pools' : 'admin/aiinitiatives/resources/node-pools.md' - 'User Interface' : - 'Overview' : 'admin/admin-ui-setup/overview.md' - 'Users' : 'admin/admin-ui-setup/admin-ui-users.md' - - 'Projects' : 'admin/admin-ui-setup/project-setup.md' - - 'Departments' : 'admin/admin-ui-setup/department-setup.md' +# - 'Projects' : 'admin/admin-ui-setup/project-setup.md' +# - 'Departments' : 'admin/admin-ui-setup/department-setup.md' - 'Dashboard Analysis' : 
'admin/admin-ui-setup/dashboard-analysis.md' - 'Jobs' : 'admin/admin-ui-setup/jobs.md' - 'Credentials' : 'admin/admin-ui-setup/credentials-setup.md'