Clarify synchronization behavior of neighborhood collectives #863

Open
@gcorbin

Description

Problem

The table in Appendix A.2 (Summary of the Semantics of all Operation-Related MPI Procedures) lists remark 18

Based on their semantics, when called using an intra-communicator, MPI_ALLGATHER, MPI_ALLTOALL, and their V and W variants, MPI_ALLREDUCE, MPI_REDUCE_SCATTER, and MPI_REDUCE_SCATTER_BLOCK must synchronize (i.e., S1/S2 instead of W1/W2) provided that all counts and the size of all datatypes are larger than zero

also for the neighborhood collectives. Although not technically wrong (*), the remark is irrelevant and confusing (**) because it does not mention neighborhood collectives at all.

(*): Neighborhood collectives can be emulated by MPI_ALLTOALLW with some counts equal to zero; thus, synchronization across the whole communicator is not guaranteed (see the sketch below).

(**): If not read carefully, the remark could be misunderstood to mean that synchronization is guaranteed for neighborhood collectives.
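
For illustration only (not a proposed addition to the text), here is a minimal C sketch of footnote (*): a neighbor exchange on a 1-D, non-periodic Cartesian topology written as MPI_ALLTOALLW, with zero counts for every non-neighbor. Because some counts are zero, the precondition of remark 18 ("all counts ... larger than zero") does not hold, so no synchronization across the whole communicator is implied; e.g., ranks 0 and size-1 exchange no data and need not synchronize with each other. The 1-D layout and all variable names are illustrative assumptions, not taken from the standard.

```c
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int size, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* 1-D, non-periodic Cartesian topology: each process has at most
       two neighbors, rank-1 and rank+1. */
    int dims[1] = { size }, periods[1] = { 0 };
    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 0, &cart);

    /* One MPI_INT slot per peer; counts are zero except for the neighbors. */
    int *sendbuf = calloc(size, sizeof(int));
    int *recvbuf = calloc(size, sizeof(int));
    int *counts  = calloc(size, sizeof(int));   /* used as send- and recvcounts */
    int *displs  = malloc(size * sizeof(int));  /* byte displacements */
    MPI_Datatype *types = malloc(size * sizeof(MPI_Datatype));

    for (int i = 0; i < size; ++i) {
        displs[i] = i * (int) sizeof(int);
        types[i]  = MPI_INT;
    }
    if (rank > 0)        counts[rank - 1] = 1;  /* left neighbor  */
    if (rank < size - 1) counts[rank + 1] = 1;  /* right neighbor */

    /* Same communication pattern as a neighborhood all-to-all on 'cart',
       but written as MPI_ALLTOALLW with some counts equal to zero.
       Remark 18 therefore does not apply, and the call need not act as a
       synchronization across the whole communicator. */
    MPI_Alltoallw(sendbuf, counts, displs, types,
                  recvbuf, counts, displs, types, cart);

    free(sendbuf); free(recvbuf); free(counts); free(displs); free(types);
    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}
```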

Proposal

Remove remark 18 from all table entries belonging to neighborhood collective calls.

Changes to the Text

Remove remark 18) from the last column of the table in Appendix A.2, page 881 (in version 4.1), in all rows that describe neighborhood collectives.

Impact on Implementations

None.

Impact on Users

None.

References and Pull Requests

Metadata

Labels

chap-terms: MPI Terms and Conventions Chapter Committee
chap-topologies: Process Topologies Chapter Committee
mpi-6: For inclusion in the MPI 5.1 or 6.0 standard
wg-collectives: Collectives Working Group
wg-hardware-topologies: Hardware Topologies Working Group
wg-terms: Semantic Terms Working Group
