From 5094e881544e4009b51428111d7b724ce73ecd2b Mon Sep 17 00:00:00 2001
From: Sherin
Date: Thu, 5 Jun 2025 09:49:45 +0300
Subject: [PATCH 1/2] Updated S3 data source limitation for inference

---
 docs/Researcher/workloads/inference/custom-inference.md | 3 +++
 docs/platform-admin/workloads/assets/datasources.md     | 3 +++
 2 files changed, 6 insertions(+)

diff --git a/docs/Researcher/workloads/inference/custom-inference.md b/docs/Researcher/workloads/inference/custom-inference.md
index 85837712f1..1edab64f19 100644
--- a/docs/Researcher/workloads/inference/custom-inference.md
+++ b/docs/Researcher/workloads/inference/custom-inference.md
@@ -128,6 +128,9 @@ To add a new custom inference workload:
       Once created, the new data source will be automatically selected.
 
     * Optional: Modify the data target location for the selected data source(s).
+    !!! Note
+        S3 data sources are not supported for inference workloads.
+
 11. __Optional - General settings__:
     * Set the __timeframe for auto-deletion__ after workload completion or failure. The time after which a completed or failed workload is deleted; if this field is set to 0 seconds, the workload will be deleted automatically.
     * Set __annotations(s)__
diff --git a/docs/platform-admin/workloads/assets/datasources.md b/docs/platform-admin/workloads/assets/datasources.md
index 21f90ac386..e377587cd7 100644
--- a/docs/platform-admin/workloads/assets/datasources.md
+++ b/docs/platform-admin/workloads/assets/datasources.md
@@ -111,6 +111,9 @@ After the data source is created, check its status to monitor its proper creation
 The [S3 bucket](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-s3-bucket.html){target=_blank} data source enables the mapping of a remote S3 bucket into the workload’s file system. Similar to a PVC, this mapping remains accessible across different workload executions, extending beyond the lifecycle of individual pods.
 However, unlike PVCs, data stored in an S3 bucket resides remotely, which may lead to decreased performance during the execution of heavy machine learning workloads. As part of the Run:ai connection to the S3 bucket, you can create [credentials](./credentials.md) in order to access and map private buckets.
 
+!!! Note
+    S3 data sources are not supported for custom inference workloads.
+
 1. Select the __cluster__ under which to create this data source
 2. Select a [scope](./overview.md#asset-scope)
 3. Enter a __name__ for the data source. The name must be unique.

From 465c347c6e0c231a016e32dd10fbee7bdef7a909 Mon Sep 17 00:00:00 2001
From: Sherin
Date: Thu, 5 Jun 2025 10:04:16 +0300
Subject: [PATCH 2/2] Update custom-inference.md

---
 docs/Researcher/workloads/inference/custom-inference.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/Researcher/workloads/inference/custom-inference.md b/docs/Researcher/workloads/inference/custom-inference.md
index 1edab64f19..a63c4b4dcf 100644
--- a/docs/Researcher/workloads/inference/custom-inference.md
+++ b/docs/Researcher/workloads/inference/custom-inference.md
@@ -122,6 +122,7 @@ To add a new custom inference workload:
         * __NoSchedule__ - No new pods will be scheduled on the tainted node unless they have a matching toleration. Pods currently running on the node will not be evicted.
         * __PreferNoSchedule__ - The control plane will try to avoid placing a pod that does not tolerate the taint on the node, but it is not guaranteed.
         * __Any__ - All effects above match.
+
 10. Optional: Select __data sources__ for your inference workload
 
     Select a data source or click __+NEW DATA SOURCE__ to add a new data source to the gallery. If there are issues with the connectivity to the cluster, or issues while creating the data source, the data source won't be available for selection. For a step-by-step guide on adding data sources to the gallery, see [data sources](../assets/datasources.md).
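A series like the one above can be applied on top of the docs repository with `git am`. A minimal, self-contained sketch follows; the repository layout, file names, and commit message here are illustrative stand-ins, not taken from the patches themselves:

```shell
# Sketch: exporting and applying a patch series with `git format-patch` / `git am`
# in throwaway repositories. All paths and names below are hypothetical.
set -eu
tmp=$(mktemp -d)
cd "$tmp"

# Create a source repo with one commit and export it as a patch series.
git init -q src && cd src
git config user.email dev@example.com && git config user.name Dev
echo "S3 data sources are not supported for inference workloads." > note.md
git add note.md && git commit -qm "Add S3 limitation note"
git format-patch -1 -o ../patches >/dev/null
cd ..

# Apply the series to a second repo, preserving author, date, and message.
git init -q dst && cd dst
git config user.email dev@example.com && git config user.name Dev
git commit -qm "init" --allow-empty
git am ../patches/*.patch
git log --oneline
```

`git am` (unlike `git apply`) reads the mailbox headers, so the `From:`, `Date:`, and `Subject:` lines of each patch become the commit's author, date, and message.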