@@ -61,41 +61,42 @@ that information directly from Slurm at run time.
Using Slurm's "direct launch" functionality
-------------------------------------------

- Assuming that Slurm installed its Open MPI plugin, you can use
+ Assuming that Slurm was configured with its PMIx plugin, you can use
``srun`` to "direct launch" Open MPI applications without the use of
Open MPI's ``mpirun`` command.

- .. note:: Using direct launch can be *slightly* faster when launching
-    very, very large MPI processes (i.e., thousands or millions of MPI
-    processes in a single job). But it has significantly fewer
-    features than Open MPI's ``mpirun``.
+ First, you must ensure that Slurm was built and installed with PMIx
+ support. This can be determined as shown below:

- First, you must ensure that Slurm was built and installed with PMI-2
- support.
+ .. code-block:: sh
+
+    shell$ srun --mpi=list
+    MPI plugin types are...
+    none
+    pmi2
+    pmix
+    specific pmix plugin versions available: pmix_v4

- .. note:: Please ask your friendly neighborhood Slurm developer to
-    support PMIx. PMIx is the current generation of run-time
-    support API; PMI-2 is the legacy / antiquated API. Open MPI
-    *only* supports PMI-2 for Slurm.
+ The output from ``srun`` may vary somewhat depending on the version of
+ Slurm installed. If PMIx is not present in the output, then you will not
+ be able to use ``srun`` to launch Open MPI applications.

- Next, ensure that Open MPI was configured ``--with-pmi=DIR``, where
- ``DIR`` is the path to the directory where Slurm's ``pmi2.h`` is
- located.
+ .. note:: PMI-2 is not supported in Open MPI 5.0.0 and later releases.

- Open MPI applications can then be launched directly via the ``srun``
- command. For example:
+ Provided the Slurm installation includes the PMIx plugin, Open MPI applications
+ can then be launched directly via the ``srun`` command. For example:

.. code-block:: sh

-    shell$ srun -N 4 mpi-hello-world
+    shell$ srun -N 4 --mpi=pmix mpi-hello-world

Or you can use ``sbatch`` with a script:

.. code-block:: sh

   shell$ cat my_script.sh
   #!/bin/sh
-    srun mpi-hello-world
+    srun --mpi=pmix mpi-hello-world
   shell$ sbatch -N 4 my_script.sh
   srun: jobid 1235 submitted
   shell$
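
If passing ``--mpi=pmix`` on every invocation becomes tedious, Slurm itself
can usually be told to default to PMIx. The sketch below relies on two Slurm
settings rather than on anything in Open MPI: the ``SLURM_MPI_TYPE``
environment variable understood by ``srun``, and ``MpiDefault`` in
``slurm.conf``. Confirm both against your site's Slurm documentation before
depending on them.

.. code-block:: sh

   # Per-shell default: SLURM_MPI_TYPE is srun's environment-variable
   # equivalent of the --mpi option, so --mpi=pmix can be omitted.
   shell$ export SLURM_MPI_TYPE=pmix
   shell$ srun -N 4 mpi-hello-world

   # Cluster-wide default (administrators only): set the default MPI
   # plugin in slurm.conf and restart the Slurm daemons.
   #   MpiDefault=pmix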