
New subsection 5: Local use of general datatypes, advice to users. #861

@softwaretraff

Description

Problem

It may be convenient to be able to copy MPI-typed data locally on a process, for instance a matrix column (described as an MPI vector datatype) to a matrix row (MPI contiguous datatype), or to perform more sophisticated rearrangements of data described by MPI datatypes. There is no specific functionality in MPI for process-local, typed memory copy. Possibly none is needed: a process-local copy can be done over the MPI_COMM_SELF communicator. Add a small "advice to users" somewhere.

Proposal

Comment on the issue of process-local, correctly MPI-typed memory copy. Suggest a solution as "advice to users", possibly in a new subsection in Chapter 5.

Changes to the Text

After Subsection 5.1.11 (possibly as a new Subsection 5.1.12): Process-local use of general datatypes.

MPI derived datatypes can likewise describe the structure of data locally on the MPI processes. The need to copy possibly non-consecutive data from one process-local data buffer to another, e.g., a column of a 2-dimensional matrix to a row of another 2-dimensional matrix, may arise in applications. Such a copy can be effected by having the process communicate with itself over the MPI_COMM_SELF communicator. Either collective or point-to-point communication (MPI_Sendrecv()) can be used.

Example: A column of an nxn matrix can be described by a vector datatype coltype with n blocks of one element each and stride n. Such a column can be copied into a contiguous vector described by a contiguous rowtype of n elements using, for instance, a collective MPI_Allgather() operation:

MPI_Allgather(matrix,1,coltype,vector,1,rowtype,MPI_COMM_SELF);
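
For concreteness, a complete, self-contained version of this example might look as follows (a minimal sketch, assuming a row-major nxn matrix of doubles with n = 4; the names coltype, rowtype, matrix, and vector are illustrative, not prescribed by the proposed text):

#include <stdio.h>
#include <mpi.h>

#define N 4

int main(int argc, char *argv[])
{
  double matrix[N][N], vector[N];
  MPI_Datatype coltype, rowtype;
  int i, j;

  MPI_Init(&argc, &argv);

  /* Fill the matrix with recognizable values: matrix[i][j] = i*N+j */
  for (i = 0; i < N; i++)
    for (j = 0; j < N; j++)
      matrix[i][j] = i * N + j;

  /* A column of the row-major NxN matrix: N blocks of one element, stride N */
  MPI_Type_vector(N, 1, N, MPI_DOUBLE, &coltype);
  MPI_Type_commit(&coltype);

  /* A row: N contiguous elements */
  MPI_Type_contiguous(N, MPI_DOUBLE, &rowtype);
  MPI_Type_commit(&rowtype);

  /* Copy column 0 into the contiguous vector; column j would start
     at &matrix[0][j] instead */
  MPI_Allgather(matrix, 1, coltype, vector, 1, rowtype, MPI_COMM_SELF);

  for (i = 0; i < N; i++)
    printf("%.0f ", vector[i]); /* prints: 0 4 8 12 */
  printf("\n");

  MPI_Type_free(&coltype);
  MPI_Type_free(&rowtype);
  MPI_Finalize();
  return 0;
}

On MPI_COMM_SELF the communicator contains exactly one process, so the gather degenerates into a local typed copy; the type signatures of coltype and rowtype match (n doubles each), as MPI requires.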

Impact on Implementations

None, except the quality-of-implementation expectation that collectives and MPI_Sendrecv are fast with different datatypes on MPI_COMM_SELF.

Impact on Users

None. May help advanced users who have encountered this problem.

References and Pull Requests

The issue is discussed, with some benchmarking results, in:

Jesper Larsson Träff, Ioannis Vardas:
Library Development with MPI: Attributes, Request Objects, Group Communicator Creation, Local Reductions, and Datatypes. EuroMPI 2023: 5:1-5:10

In the paper, MPI_Sendrecv is recommended. It seems, however, that for some (perhaps many) MPI libraries, MPI_Allgather or MPI_Alltoall perform better.
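
For comparison, the point-to-point variant recommended in the paper can be sketched as follows (same coltype, rowtype, matrix, and vector as in the example above; the tag value 0 is arbitrary):

/* Same local copy with point-to-point communication: the process
   sends to and receives from itself (rank 0 in MPI_COMM_SELF) */
MPI_Sendrecv(matrix, 1, coltype, 0, 0,
             vector, 1, rowtype, 0, 0,
             MPI_COMM_SELF, MPI_STATUS_IGNORE);

A send to self within MPI_Sendrecv is legal and does not deadlock, since the send and the receive are executed concurrently within the single call.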

Metadata

Labels: chap-datatypes (Datatypes Chapter Committee), mpi-6 (For inclusion in the MPI 5.1 or 6.0 standard)

Status: To Do