
Commit b46be37

Updated mounting file systems in job docs. (#192)
2 parents: 824a153 + bfa5063

3 files changed: +62 −0 lines changed

docs/source/user_guide/jobs/infra_and_runtime.rst

Lines changed: 42 additions & 0 deletions
@@ -120,6 +120,48 @@ see also `ADS Logging <../logging/logging.html>`_.
 
 With logging configured, you can call the :py:meth:`~ads.jobs.DataScienceJobRun.watch` method to stream the logs.
 
+Mounting File Systems
+---------------------
+
+Data Science Job supports mounting multiple types of file systems;
+see `Data Science Job Mounting File Systems <place_holder>`_. A maximum of 5 file systems can be
+mounted for each Data Science Job. You can specify the list of file systems to be mounted
+by calling :py:meth:`~ads.jobs.DataScienceJob.with_storage_mount()`. For each file system to be mounted,
+pass a dictionary with `src` and `dest` as keys. For example, to mount OCI File Storage, pass
+*<mount_target_ip_address>@<export_path>* as the value of `src`. The value of
+`dest` must be the folder to which you want to mount the file system. See the example below.
+
+.. tabs::
+
+  .. code-tab:: python
+    :caption: Python
+
+    from ads.jobs import DataScienceJob
+
+    infrastructure = (
+        DataScienceJob()
+        .with_log_group_id("<log_group_ocid>")
+        .with_log_id("<log_ocid>")
+        .with_storage_mount(
+            {
+                "src" : "<mount_target_ip_address>@<export_path>",
+                "dest" : "<destination_directory_name>"
+            }
+        )
+    )
+
+  .. code-tab:: yaml
+    :caption: YAML
+
+    kind: infrastructure
+    type: dataScienceJob
+    spec:
+      logGroupId: <log_group_ocid>
+      logId: <log_ocid>
+      storageMount:
+      - src: <mount_target_ip_address>@<export_path>
+        dest: <destination_directory_name>
+
 Runtime
 =======

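Note: the snippet above mounts a single file system. Since the docs state that up to 5 file systems can be mounted per job, a minimal sketch of mounting more than one follows, assuming with_storage_mount() accepts one src/dest dictionary per mount as separate arguments (the exact signature is not shown in this diff); the export paths and destination names are illustrative placeholders.

    # Sketch only: assumes with_storage_mount() takes several
    # {"src": ..., "dest": ...} dictionaries, one per file system.
    from ads.jobs import DataScienceJob

    infrastructure = (
        DataScienceJob()
        .with_log_group_id("<log_group_ocid>")
        .with_log_id("<log_ocid>")
        .with_storage_mount(
            {
                "src": "<mount_target_ip_address>@<export_path_1>",
                "dest": "<first_destination_directory>",
            },
            {
                "src": "<mount_target_ip_address>@<export_path_2>",
                "dest": "<second_destination_directory>",
            },
        )
    )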
docs/source/user_guide/jobs/tabs/infra_config.rst

Lines changed: 10 additions & 0 deletions
@@ -22,6 +22,13 @@
         .with_shape_config_details(memory_in_gbs=16, ocpus=1)
         # Minimum/Default block storage size is 50 (GB).
         .with_block_storage_size(50)
+        # A maximum of 5 file systems can be mounted for a job.
+        .with_storage_mount(
+            {
+                "src" : "<mount_target_ip_address>@<export_path>",
+                "dest" : "<destination_directory_name>"
+            }
+        )
     )
 
   .. code-tab:: yaml
@@ -40,3 +47,6 @@
         ocpus: 1
       shapeName: VM.Standard.E3.Flex
       subnetId: <subnet_ocid>
+      storageMount:
+      - src: <mount_target_ip_address>@<export_path>
+        dest: <destination_directory_name>

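Note: infra_config.rst only builds the infrastructure object. A minimal sketch of how such an infrastructure (with its storage mount) is typically attached to a job and run with ADS is shown below; Job, PythonRuntime, with_infrastructure, with_runtime, create, run and watch are standard ads.jobs APIs, while <job_name> and <path_to_script> are illustrative placeholders.

    # Sketch: wiring the mounted infrastructure into a runnable job.
    from ads.jobs import DataScienceJob, Job, PythonRuntime

    job = (
        Job(name="<job_name>")
        .with_infrastructure(
            DataScienceJob()
            .with_log_group_id("<log_group_ocid>")
            .with_log_id("<log_ocid>")
            .with_storage_mount(
                {
                    "src": "<mount_target_ip_address>@<export_path>",
                    "dest": "<destination_directory_name>",
                }
            )
        )
        .with_runtime(PythonRuntime().with_source("<path_to_script>"))
    )

    job.create()     # create the job definition
    run = job.run()  # start a job run
    run.watch()      # stream logs, as described in infra_and_runtime.rst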
docs/source/user_guide/jobs/tabs/quick_start_job.rst

Lines changed: 10 additions & 0 deletions
@@ -24,6 +24,13 @@
         .with_shape_config_details(memory_in_gbs=16, ocpus=1)
         # Minimum/Default block storage size is 50 (GB).
         .with_block_storage_size(50)
+        # A maximum of 5 file systems can be mounted for a job.
+        .with_storage_mount(
+            {
+                "src" : "<mount_target_ip_address>@<export_path>",
+                "dest" : "<destination_directory_name>"
+            }
+        )
     )
     .with_runtime(
         PythonRuntime()
@@ -59,6 +66,9 @@
         ocpus: 1
       shapeName: VM.Standard.E3.Flex
       subnetId: <subnet_ocid>
+      storageMount:
+      - src: <mount_target_ip_address>@<export_path>
+        dest: <destination_directory_name>
   runtime:
     kind: runtime
     type: python

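Note: the quick-start diff does not show how a mounted file system is accessed from inside the job. A minimal sketch follows, assuming the mount is exposed under /mnt/<destination_directory_name> inside the job run (the mount-point location is an assumption, not stated in this diff; check the Data Science service documentation for the actual path).

    # Sketch of a job script reading from a mounted file system.
    # ASSUMPTION: the mount appears under /mnt/<destination_directory_name>.
    import os

    mount_dir = "/mnt/<destination_directory_name>"

    # List whatever the mounted file system exposes.
    for name in os.listdir(mount_dir):
        print(os.path.join(mount_dir, name))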