@@ -4,9 +4,9 @@ Open MPI v5.0.x series
 This file contains all the NEWS updates for the Open MPI v5.0.x
 series, in reverse chronological order.
 
-Open MPI version 5.0.0rc11
+Open MPI version 5.0.0rc12
 --------------------------
-:Date: 6 April 2023
+:Date: 19 May 2023
 
 .. admonition :: MPIR API has been removed
    :class: warning
@@ -40,28 +40,26 @@ Open MPI version 5.0.0rc11
 libraries, rather than linked into the Open MPI core libraries.
 
 
-- Changes since rc10:
-
-  - The ``HAN`` collective is now enabled by default. This replaces ``tuned`` as the
-    default out-of-the-box collective component for Open MPI.
-  - Various fixes to make v5.0 ABI compatible with v4.1 compiled programs.
-  - Fixed support for ``OFI`` on RHEL7 and Libfabric < 1.9.
-  - Added the MCA option ``--mca ompi_pml_base_check_pml 0|1`` to skip
-    ``PML`` transport validation across processes. This can speed up launch
-    times for users who know their cluster will always choose the same
-    ``PML`` transport. Default: verify PML selections.
-  - Use Libfabric 1.18 if available when using the ``OFI`` transport.
-  - Implemented ``ompi_info`` color coding.
-  - Added ``MPI_SESSION_NULL`` to the Fortran bindings. Thanks to Jan Fecht for the fix.
-  - Fixed a bug where CUDA-aware MPI was broken when using the ``OB1`` transport.
-  - Many other bug fixes and cleanups.
-  - Many documentation updates.
-    Thanks to Nick Papior for the contributions.
-
+- Changes since rc11:
+  - accelerator/rocm: Added SYNC_MEMOPS support.
+  - Updated PMIx, PRRTE, and OAC submodule pointers.
+  - Fixed ``mca_btl_ofi_flush()`` in multi-threaded environments.
+  - smcuda: Fixed an edge case when using ``--enable-mca-dso``.
+  - Fixed an ``MPI_Session_init`` bug when all previous sessions are finalized.
+  - Fixed an mpi4py hang in ``intercomm_create_from_groups``.
+  - Fixed a finalize segfault with OSHMEM 4.1.5.
+  - Updated FAQ content.
+  - Improved AVX* detection; fixes an op/avx link failure with the nvhpc compiler.
+  - Fixed incorrect results with pml/ucx when using the Intel compiler.
+  - Fixed a segfault when broadcasting large MPI structs.
+  - Added platform files for Google Cloud HPC.
+  - UCC/HCOLL: Fixed waitall for non-blocking collectives.
+  - Check for ``MPI_T.3`` (not ``MPI_T.5``); fixes the pre-built docs check.
+
 - All other notable updates for v5.0.0:
+  - Updated PMIx to the ``v4.2`` branch - current hash: ``1492c0b3``.
+  - Updated PRRTE to the ``v3.0`` branch - current hash: ``4636ea79dc``.
 
-  - Updated PMIx to the ``v4.2`` branch - current hash: ``7d45393``.
-  - Updated PRRTE to the ``v3.0`` branch - current hash: ``20ee752``.
 
 
 - New Features: