Do a dry run to inspect how the yaml translates to Job and Job Runs:
.. code-block:: bash

    ads opctl run -f train.yaml --dry-run
.. include:: ../_test_and_submit.rst
**Monitoring the workload logs**

Ray is a framework for distributed computing in Python specialized in ML workloads.
This documentation shows how to create a container and a ``yaml`` spec to run a ``Ray``
code sample in a distributed fashion.
``Ray`` offers a core package to execute Python workloads in a distributed manner,
potentially across a cluster of machines (set up through ``Ray`` itself), as well as
extensions for more traditional ML computation, such as Hyperparameter Optimization.
.. toctree::