@@ -80,15 +80,78 @@ running Open MPI's ``configure`` script.
.. _label-install-packagers-dso-or-not:
- Components ("plugins"): DSO or no?
- ----------------------------------
+ Components ("plugins"): static or DSO?
+ --------------------------------------
Open MPI contains a large number of components (sometimes called
"plugins") to effect different types of functionality in MPI. For
example, some components effect Open MPI's networking functionality:
they may link against specialized libraries to provide
highly optimized network access.
+ Open MPI can build its components as Dynamic Shared Objects (DSOs) or
+ include them statically in its core libraries (regardless of whether
+ those libraries are built as shared or static libraries).
+
+ .. note:: As of Open MPI |ompi_ver|, ``configure``'s global default is
+    to build all components as static (i.e., part of the Open
+    MPI core libraries, not as DSOs). Prior to Open MPI v5.0.0,
+    the global default behavior was to build most components as
+    DSOs.
+
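+ For example, a packager could select either behavior at ``configure``
+ time (a minimal sketch; the prefix and trailing options are
+ illustrative placeholders, and only ``--enable-mca-dso`` comes from
+ the discussion below):
+
+ .. code-block:: sh
+
+    # Default in this release: components built into the core libraries
+    ./configure --prefix=/opt/openmpi ...
+
+    # Force all components to be built as DSOs (the pre-v5.0.0 default)
+    ./configure --prefix=/opt/openmpi --enable-mca-dso ...
+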
+ Why build components as DSOs?
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ There are advantages to building components as DSOs:
+
+ * Open MPI's core libraries |mdash| and therefore MPI applications
+   |mdash| will have very few dependencies. For example, if you build
+   Open MPI with support for a specific network stack, the libraries in
+   that network stack will be dependencies of the DSOs, not Open MPI's
+   core libraries (or MPI applications).
+
+ * Removing Open MPI functionality that you do not want is as simple as
+   removing a DSO from ``$libdir/open-mpi`` (see the sketch below).
+
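+ For instance (a hypothetical sketch; the component filename shown
+ here and the installation path vary by version and configuration):
+
+ .. code-block:: sh
+
+    # List the installed component DSOs
+    ls $libdir/open-mpi/
+
+    # Remove one component, e.g. a hypothetical mca_btl_tcp.so, to
+    # disable that functionality
+    rm $libdir/open-mpi/mca_btl_tcp.so
+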
+ Why build components as part of Open MPI's core libraries?
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ The biggest advantage to building the components as part of Open MPI's
+ core libraries comes when running at (very) large scale with Open MPI
+ installed on a network filesystem (vs. being installed on a local
+ filesystem).
+
+ For example, consider launching a single MPI process on each of 1,000
+ nodes. In this scenario, the following is accessed from the network
+ filesystem:
+
+ #. The MPI application
+ #. The core Open MPI libraries and their dependencies (e.g.,
+    ``libmpi``)
+
+    * Depending on your configuration, this is probably on the order of
+      10-20 library files.
+
+ #. All DSO component files and their dependencies
+
+    * Depending on your configuration, this can be 200+ component
+      files.
+
+ If all components are physically located in the libraries, then the
+ third step loads zero DSO component files. When using a networked
+ filesystem while launching at scale, this can translate to large
+ performance savings: in the example above, 1,000 nodes each opening
+ 200+ DSO files means 200,000+ additional file opens hitting the
+ network filesystem at launch time.
+
+ .. note:: If not using a networked filesystem, or if not launching at
+    scale, loading a large number of DSO files may not consume a
+    noticeable amount of time during MPI process launch. Put
+    simply: loading DSOs as individual files generally only
+    matters when using a networked filesystem while launching at
+    scale.
+
+ Direct controls for building components as DSOs or not
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
Open MPI |ompi_ver| has two ``configure``-time defaults regarding the
treatment of components that may be of interest to packagers:
@@ -151,3 +214,72 @@ binary package, and can install the additional "accelerator" Open MPI
binary sub-package if they actually have accelerator hardware
installed (which will cause the installation of additional
dependencies).
+
+ .. _label-install-packagers-gnu-libtool-dependency-flattening:
+
+ GNU Libtool dependency flattening
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ When compiling Open MPI's components statically as part of Open MPI's
+ core libraries, `GNU Libtool <https://www.gnu.org/software/libtool/>`_
+ |mdash| which is used as part of Open MPI's build system |mdash| will
+ attempt to "flatten" dependencies.
+
+ For example, the :ref:`ompi_info(1) <man1-ompi_info>` command links
+ against the Open MPI core library ``libopen-pal``. This library will
+ have dependencies on various HPC-class network stack libraries. For
+ simplicity, the discussion below assumes that Open MPI was built with
+ support for `Libfabric <https://libfabric.org/>`_ and `UCX
+ <https://openucx.org/>`_, and therefore ``libopen-pal`` has direct
+ dependencies on ``libfabric`` and ``libucx``.
+
+ In this scenario, GNU Libtool will automatically attempt to "flatten"
+ these dependencies by linking :ref:`ompi_info(1) <man1-ompi_info>`
+ directly to ``libfabric`` and ``libucx`` (vs. letting ``libopen-pal``
+ pull the dependencies in at run time).
+
+ * In some environments (e.g., Ubuntu 22.04), the compiler and/or
+   linker will automatically utilize the linker CLI flag
+   ``-Wl,--as-needed``, which will effectively cause these dependencies
+   to *not* be flattened: :ref:`ompi_info(1) <man1-ompi_info>` will
+   *not* have direct dependencies on either ``libfabric`` or
+   ``libucx``.
+
+ * In other environments (e.g., Fedora 38), the compiler and linker
+   will *not* utilize the ``-Wl,--as-needed`` linker CLI flag. As
+   such, :ref:`ompi_info(1) <man1-ompi_info>` will show direct
+   dependencies on ``libfabric`` and ``libucx``.
+
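+ One way to see which behavior your build produced is to inspect the
+ resulting binary's direct shared library dependencies (a generic
+ sketch; the installation path is an illustrative placeholder):
+
+ .. code-block:: sh
+
+    # If flattening occurred, libfabric / libucx will appear as direct
+    # dependencies of ompi_info
+    ldd /opt/openmpi/bin/ompi_info | grep -E 'libfabric|libucx'
+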
+ **Just to be clear:** these flattened dependencies *are not a
+ problem*. Open MPI will function correctly with or without the
+ flattened dependencies. There is no performance impact associated
+ with having |mdash| or not having |mdash| the flattened dependencies.
+ We mention this situation here in the documentation simply because it
+ surprised some Open MPI downstream packagers to see that
+ :ref:`ompi_info(1) <man1-ompi_info>` in Open MPI |ompi_ver| had more
+ shared library dependencies than it did in prior Open MPI releases.
+
+ If packagers want :ref:`ompi_info(1) <man1-ompi_info>` to not have
+ these flattened dependencies, use either of the following mechanisms:
+
+ #. Use ``--enable-mca-dso`` to force all components to be built as
+    DSOs (this was actually the default behavior before Open MPI v5.0.0).
+
+ #. Add ``LDFLAGS=-Wl,--as-needed`` to the ``configure`` command line
+    when building Open MPI (see the sketch below).
+
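+ For example (a minimal sketch; the prefix is an illustrative
+ placeholder, and ``LDFLAGS=-Wl,--as-needed`` is the mechanism named
+ above):
+
+ .. code-block:: sh
+
+    # Mechanism 2: ask the linker to drop unused direct dependencies
+    ./configure --prefix=/opt/openmpi LDFLAGS=-Wl,--as-needed ...
+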
+ .. note:: The Open MPI community specifically chose not to
+    automatically utilize this linker flag for the following
+    reasons:
+
+    #. Having the flattened dependencies does not cause any
+       correctness or performance problems.
+    #. There are multiple mechanisms (see above) for users or
+       packagers to change this behavior, if desired.
+    #. Certain environments have chosen to have |mdash| or
+       not have |mdash| this flattened dependency behavior.
+       It is not Open MPI's place to override these choices.
+    #. In general, Open MPI's ``configure`` script only
+       utilizes compiler and linker flags if they are
+       *needed*. All other flags should be the user's /
+       packager's choice.