## Background information
`MPI_Test()` and `MPI_Request_get_status()` return `flag = true` immediately after a second call to `MPI_Start()` on a persistent request that originates from a call to `MPI_Psend_init()`.
However, MPI 4.1, p. 114 states:

> Once an MPI_PSEND_INIT call has been made, the user may start the operation with a call to a starting procedure and complete the operation with a number of MPI_PREADY calls equal to the requested number of send partitions followed by a call to a completing procedure.
Hence, after `MPI_Start()` the operation cannot complete before the required number of `MPI_Pready()` calls have been made.

Note: for a request originating from `MPI_Precv_init()` the state is correctly reported with `flag = true`.
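To illustrate the lifecycle the quoted text prescribes, here is a minimal sketch of the send side (`NPART`, `COUNT`, `buf`, `dest`, `tag`, and `comm` are illustrative placeholders, not taken from the test below):

```c
#include <mpi.h>

/* Minimal sketch of the send-side lifecycle prescribed by MPI 4.1 p. 114.
 * NPART, COUNT, buf, dest, tag, and comm are illustrative placeholders. */
void psend_lifecycle(double *buf, int dest, int tag, MPI_Comm comm) {
    enum { NPART = 4, COUNT = 8 };  /* buf must hold NPART * COUNT doubles */
    MPI_Request req;

    MPI_Psend_init(buf, NPART, COUNT, MPI_DOUBLE, dest, tag, comm,
                   MPI_INFO_NULL, &req);
    MPI_Start(&req);                    /* starting procedure */
    for (int p = 0; p < NPART; p++) {
        MPI_Pready(p, req);             /* one call per send partition */
    }
    /* Only now, after all NPART MPI_Pready() calls, may the operation
     * complete; MPI_Test()/MPI_Request_get_status() must not report
     * flag = true between MPI_Start() and the last MPI_Pready(). */
    MPI_Wait(&req, MPI_STATUS_IGNORE);  /* completing procedure */
    MPI_Request_free(&req);
}
```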
## What version of Open MPI are you using?
```
$ git describe
v5.0.2-13-g23b994a9dd
```
```
$ git submodule status
d1c1ed0c2e64f19ad49291241a43630ea7fdce28 3rd-party/openpmix (v4.2.8)
e383f5ad70c2633420d3425e9fb67e69b6bfd9c4 3rd-party/prrte (v3.0.3)
c1cfc910d92af43f8c27807a9a84c9c13f4fbc65 config/oac (remotes/origin/HEAD)
```
Built with:

```
./configure --prefix=$HOME/bin/mpi/openmpi/main --with-pmix=internal
```
## Please describe the system on which you are running
- Operating system/version (`uname -a`): `Linux 6.6.9-arch1-1 #1 SMP PREEMPT_DYNAMIC Tue, 02 Jan 2024 02:28:28 +0000 x86_64 GNU/Linux`
- Computer hardware: notebook / Intel x86_64
## Details of the problem
Below are a test code and the output I get:
```c
/*
 * Copyright (c) 2024 Christoph Niethammer <niethammerer@hlrs.de>
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    MPI_Comm comm = MPI_COMM_WORLD;
    int comm_size = 1;
    int comm_rank = 0;
    MPI_Comm_size(comm, &comm_size);
    MPI_Comm_rank(comm, &comm_rank);

    double send_buf;
    double recv_buf;
    int tag = 4711;
    MPI_Request prequest;
    MPI_Status status;

    if (comm_size != 2) { /* the test needs exactly one sender and one receiver */
        MPI_Abort(comm, 1);
    }

    /* one partition holding a single MPI_DOUBLE */
    if (0 == comm_rank) {
        MPI_Psend_init(&send_buf, 1, 1, MPI_DOUBLE, 1, tag, comm, MPI_INFO_NULL, &prequest);
    } else if (1 == comm_rank) {
        MPI_Precv_init(&recv_buf, 1, 1, MPI_DOUBLE, 0, tag, comm, MPI_INFO_NULL, &prequest);
    }

    int flag = -333;
    int pflag = 1023;
#define LOGS(STR) printf("[%d/%d] %s\n", comm_rank, comm_size, STR); fflush(stdout);
#define LOGSD(STR, D) printf("[%d/%d] %s %d\n", comm_rank, comm_size, STR, D); fflush(stdout);

    /* inactive request: flag = true is correct here */
    flag = -1;
    MPI_Request_get_status(prequest, &flag, &status);
    LOGSD("MPI_Request_get_status flag:", flag);

    // First round
    MPI_Start(&prequest);
    LOGS("MPI_Start 1");
    /* active, no send partition marked ready yet: completion must not be reported */
    flag = -2;
    MPI_Request_get_status(prequest, &flag, &status);
    LOGSD("MPI_Request_get_status flag:", flag);
    MPI_Test(&prequest, &flag, &status);
    LOGSD("MPI_test flag:", flag);
    if (comm_rank == 0) {
        MPI_Pready(0, prequest);
        LOGS("MPI_Pready 1");
    } else if (comm_rank == 1) {
        MPI_Parrived(prequest, 0, &pflag);
        LOGSD("MPI_Parrived 1", pflag);
    }
    flag = -3;
    MPI_Request_get_status(prequest, &flag, &status);
    LOGSD("MPI_Request_get_status flag:", flag);
    MPI_Wait(&prequest, &status);
    LOGS("MPI_Wait 1");
    flag = -4;
    MPI_Request_get_status(prequest, &flag, &status);
    LOGSD("MPI_Request_get_status flag:", flag);

    // Second round
    MPI_Start(&prequest);
    LOGS("MPI_Start 2");
    /* again active with no partition ready: flag should be false on the sender */
    flag = -5;
    MPI_Request_get_status(prequest, &flag, &status);
    LOGSD("MPI_Request_get_status flag:", flag);
    MPI_Test(&prequest, &flag, &status);
    LOGSD("MPI_test flag:", flag);
    if (comm_rank == 0) {
        MPI_Pready(0, prequest);
        LOGS("MPI_Pready 2");
    } else if (comm_rank == 1) {
        MPI_Parrived(prequest, 0, &pflag);
        LOGSD("MPI_Parrived 2", pflag);
    }
    flag = -6;
    MPI_Request_get_status(prequest, &flag, &status);
    LOGSD("MPI_Request_get_status flag:", flag);
    MPI_Wait(&prequest, &status);
    LOGS("MPI_Wait 2");
    flag = -7;
    MPI_Request_get_status(prequest, &flag, &status);
    LOGSD("MPI_Request_get_status flag:", flag);

    MPI_Request_free(&prequest); /* release the persistent request */
    MPI_Finalize();
    return 0;
}
```
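For reference, the test can be built with something like the following (assuming the source is saved as `part_persistent.c`):

```
$ mpicc part_persistent.c -o part_persistent
```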
Output for the sender side (rank 0):

```
$ mpirun -n 2 part_persistent | grep -E "\[0"
[0/2] MPI_Request_get_status flag: 1
[0/2] MPI_Start 1
[0/2] MPI_Request_get_status flag: 0
[0/2] MPI_test flag: 0
[0/2] MPI_Pready 1
[0/2] MPI_Request_get_status flag: 0
[0/2] MPI_Wait 1
[0/2] MPI_Request_get_status flag: 1
[0/2] MPI_Start 2
[0/2] MPI_Request_get_status flag: 1
[0/2] MPI_test flag: 1
[0/2] MPI_Pready 2
[0/2] MPI_Request_get_status flag: 1
[0/2] MPI_Wait 2
[0/2] MPI_Request_get_status flag: 1
```

Note how, after `MPI_Start 2`, both `MPI_Request_get_status` and `MPI_Test` already report `flag: 1`, although no `MPI_Pready()` call has been made for that round yet; the first round behaves as expected (`flag: 0`).
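As a sketch (not part of the attached test), the reproducer could be made self-verifying by asserting on rank 0, right after the second `MPI_Start()` and before `MPI_Pready()`, that the request is still pending:

```c
/* Sketch: insert on rank 0 directly after the second MPI_Start(),
 * before any MPI_Pready() of that round. Per MPI 4.1 p. 114 the
 * operation cannot be complete here, so the flag should still be 0. */
int gflag = -1;
MPI_Status gstatus;
MPI_Request_get_status(prequest, &gflag, &gstatus);
if (gflag) {
    fprintf(stderr, "BUG: partitioned send reported complete before MPI_Pready()\n");
    MPI_Abort(comm, 2);
}
```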