diff --git a/docs/Researcher/workloads/inference/custom-inference.md b/docs/Researcher/workloads/inference/custom-inference.md
index 85837712f1..a63c4b4dcf 100644
--- a/docs/Researcher/workloads/inference/custom-inference.md
+++ b/docs/Researcher/workloads/inference/custom-inference.md
@@ -122,12 +122,16 @@ To add a new custom inference workload:
     * __NoSchedule__ - No new pods will be scheduled on the tainted node unless they have a matching toleration. Pods currently running on the node will not be evicted.
     * __PreferNoSchedule__ - The control plane will try to avoid placing a pod that does not tolerate the taint on the node, but this is not guaranteed.
     * __Any__ - All of the effects above match.
+
 10. Optional: Select __data sources__ for your inference workload

     Select a data source or click __+NEW DATA SOURCE__ to add a new data source to the gallery. If there are issues with connectivity to the cluster, or issues while creating the data source, the data source won't be available for selection. For a step-by-step guide on adding data sources to the gallery, see [data sources](../assets/datasources.md).
     Once created, the new data source is automatically selected.

     * Optional: Modify the data target location for the selected data source(s).
+    !!! Note
+        S3 data sources are not supported for inference workloads.
+
 11. __Optional - General settings__:
     * Set the __timeframe for auto-deletion__ after workload completion or failure. This is the time after which a completed or failed workload is deleted; if this field is set to 0 seconds, the workload is deleted immediately upon completion or failure.
     * Set __annotation(s)__
diff --git a/docs/platform-admin/workloads/assets/datasources.md b/docs/platform-admin/workloads/assets/datasources.md
index 21f90ac386..e377587cd7 100644
--- a/docs/platform-admin/workloads/assets/datasources.md
+++ b/docs/platform-admin/workloads/assets/datasources.md
@@ -111,6 +111,9 @@ After the data source is created, check its status to monitor its proper creation
 The [S3 bucket](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-s3-bucket.html){target=_blank} data source enables the mapping of a remote S3 bucket into the workload's file system. Similar to a PVC, this mapping remains accessible across different workload executions, extending beyond the lifecycle of individual pods. However, unlike PVCs, data stored in an S3 bucket resides remotely, which may lead to decreased performance during the execution of heavy machine learning workloads.

 As part of the Run:ai connection to the S3 bucket, you can create [credentials](./credentials.md) in order to access and map private buckets.
+!!! Note
+    S3 data sources are not supported for custom inference workloads.
+
 1. Select the __cluster__ under which to create this data source
 2. Select a [scope](./overview.md#asset-scope)
 3. Enter a __name__ for the data source. The name must be unique.
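
For context on the taint effects described in the first hunk: these map to standard Kubernetes taints and tolerations, not to anything Run:ai-specific. Below is a minimal sketch of a taint and a matching pod toleration; the node name, taint key/value, pod name, and image are hypothetical examples, not values from the docs above.

```yaml
# Taint a node so only pods tolerating dedicated=inference are scheduled on it
# (standard kubectl syntax; node name and key/value are hypothetical):
#
#   kubectl taint nodes gpu-node-1 dedicated=inference:NoSchedule

# A pod whose toleration matches the taint above, so it can still be scheduled
# on the tainted node.
apiVersion: v1
kind: Pod
metadata:
  name: inference-example          # hypothetical name
spec:
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "inference"
      effect: "NoSchedule"         # omit "effect" to match any effect
  containers:
    - name: server
      image: inference-image:latest   # hypothetical image
```

Omitting `effect` from a toleration makes it match taints with any effect, which corresponds to the __Any__ option listed in the first hunk.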