Error with combination of MPI_Win_create_dynamic() and MPI_Get() using osc rdma #10328

Open
@joaobfernandes0

Description

Thank you for taking the time to submit an issue!

Background information

What version of Open MPI are you using? (e.g., v3.0.5, v4.0.2, git branch name and hash, etc.)

v4.1.1

Describe how Open MPI was installed (e.g., from a source/distribution tarball, from a git clone, from an operating system distribution package, etc.)

$ ompi_info
               Package: Open MPI haroldo@headnode0 Distribution
                Open MPI: 4.1.1
  Open MPI repo revision: v4.1.1
   Open MPI release date: Apr 24, 2021
                Open RTE: 4.1.1
  Open RTE repo revision: v4.1.1
   Open RTE release date: Apr 24, 2021
                    OPAL: 4.1.1
      OPAL repo revision: v4.1.1
       OPAL release date: Apr 24, 2021
                 MPI API: 3.1.0
            Ident string: 4.1.1
                  Prefix: /opt/npad/shared/libraries/openmpi/4.1.1-gnu-8
 Configured architecture: x86_64-pc-linux-gnu
          Configure host: headnode0
           Configured by: haroldo
           Configured on: Wed Dec  8 01:32:00 UTC 2021
          Configure host: headnode0
  Configure command line: '--prefix=/opt/npad/shared/libraries/openmpi/4.1.1-gnu-8'
                          '--with-slurm' '--with-pmi' '--with-verbs'
                          '--with-ucx' '--enable-openib-rdmacm-ibaddr'
                Built by: haroldo
                Built on: Wed Dec  8 01:37:44 UTC 2021
              Built host: headnode0
              C bindings: yes
            C++ bindings: no
             Fort mpif.h: yes (all)
            Fort use mpi: yes (full: ignore TKR)
       Fort use mpi size: deprecated-ompi-info-value
        Fort use mpi_f08: yes
 Fort mpi_f08 compliance: The mpi_f08 module is available, but due to
                          limitations in the gfortran compiler and/or Open
                          MPI, does not support the following: array
                          subsections, direct passthru (where possible) to
                          underlying Open MPI's C functionality
  Fort mpi_f08 subarrays: no
           Java bindings: no
  Wrapper compiler rpath: runpath
              C compiler: gcc
     C compiler absolute: /usr/bin/gcc
  C compiler family name: GNU
      C compiler version: 8.5.0
            C++ compiler: g++
   C++ compiler absolute: /usr/bin/g++
           Fort compiler: gfortran
       Fort compiler abs: /usr/bin/gfortran
         Fort ignore TKR: yes (!GCC$ ATTRIBUTES NO_ARG_CHECK ::)
   Fort 08 assumed shape: yes
      Fort optional args: yes
          Fort INTERFACE: yes
    Fort ISO_FORTRAN_ENV: yes
       Fort STORAGE_SIZE: yes
      Fort BIND(C) (all): yes
      Fort ISO_C_BINDING: yes
 Fort SUBROUTINE BIND(C): yes
       Fort TYPE,BIND(C): yes
 Fort T,BIND(C,name="a"): yes
            Fort PRIVATE: yes
          Fort PROTECTED: yes
           Fort ABSTRACT: yes
       Fort ASYNCHRONOUS: yes
          Fort PROCEDURE: yes
         Fort USE...ONLY: yes
           Fort C_FUNLOC: yes
 Fort f08 using wrappers: yes
         Fort MPI_SIZEOF: yes
             C profiling: yes
           C++ profiling: no
   Fort mpif.h profiling: yes
  Fort use mpi profiling: yes
   Fort use mpi_f08 prof: yes
          C++ exceptions: no
          Thread support: posix (MPI_THREAD_MULTIPLE: yes, OPAL support: yes,
                          OMPI progress: no, ORTE progress: yes, Event lib:
                          yes)
           Sparse Groups: no
  Internal debug support: no
  MPI interface warnings: yes
     MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
              dl support: yes
   Heterogeneous support: no
 mpirun default --prefix: no
       MPI_WTIME support: native
     Symbol vis. support: yes
   Host topology support: yes
            IPv6 support: no
      MPI1 compatibility: no
          MPI extensions: affinity, cuda, pcollreq
   FT Checkpoint support: no (checkpoint thread: no)
   C/R Enabled Debugging: no

Please describe the system on which you are running

  • Operating system/version: Rocky Linux 8.5 (Green Obsidian)
  • Computer hardware: 4 Nodes with 2xCPU Intel Xeon Sixteen-Core E5-2698v3
  • Network type: Infiniband (Mellanox)

Details of the problem

Hello everyone,

I'm trying to run an MPI code that does an MPI_Get() on a dynamic window (MPI_Win_create_dynamic + MPI_Win_attach). I have two scenarios. In the first, I run my code with the following configuration:

btl_openib_allow_ib = 1
btl_openib_if_include = mlx4_0:1
osc = rdma 
orte_base_help_aggregate = 0 
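
For reference, these are MCA parameters; an equivalent way to pass them on the command line (a sketch, with the process count and executable name as placeholders) is:

mpirun -n 2 \
    --mca btl_openib_allow_ib 1 \
    --mca btl_openib_if_include mlx4_0:1 \
    --mca osc rdma \
    --mca orte_base_help_aggregate 0 \
    ./a.out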

With this configuration, the execution gets stuck in MPI_Get(), sometimes generating the following error:

[service3:92500] *** An error occurred in MPI_Rget
[service3:92500] *** reported by process [415891457,0]
[service3:92500] *** on win rdma window 4
[service3:92500] *** MPI_ERR_RMA_RANGE: invalid RMA address range
[service3:92500] *** MPI_ERRORS_ARE_FATAL (processes in this win will now abort,
[service3:92500] ***    and potentially your MPI job)

I can work around this by changing to osc = ucx. The code then runs fine, but that brings me back to another problem I have described in #9580.

So my goal is to make MPI_Get() work with osc = rdma. I'm using the following code to test MPI_Get():

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
	MPI_Init(&argc, &argv);

	int size, rank;
	MPI_Comm_size(MPI_COMM_WORLD, &size);
	MPI_Comm_rank(MPI_COMM_WORLD, &rank);

	int *value = malloc(sizeof(int));
	int get_value;
	MPI_Aint *addr;
	MPI_Aint get_addr;
	MPI_Win win_addr;
	MPI_Win win_value;

	/* win_addr publishes the address of the attached buffer;
	 * win_value is the dynamic window the buffer is attached to. */
	MPI_Win_allocate(sizeof(MPI_Aint), sizeof(MPI_Aint), MPI_INFO_NULL, MPI_COMM_WORLD, &addr, &win_addr);
	MPI_Win_create_dynamic(MPI_INFO_NULL, MPI_COMM_WORLD, &win_value);
	MPI_Win_attach(win_value, value, sizeof(int));

	if (rank == 0) {
		MPI_Win_lock(MPI_LOCK_SHARED, rank, 0, win_value);
		MPI_Win_lock(MPI_LOCK_EXCLUSIVE, rank, 0, win_addr);
		*value = 3366;
		MPI_Get_address(value, addr);
		printf("ID %i dynamic window with value = %i and addr %lu\n", rank, *value, (unsigned long)*addr);
		MPI_Win_unlock(rank, win_addr);
		MPI_Win_unlock(rank, win_value);
	}

	MPI_Barrier(MPI_COMM_WORLD);

	if (rank != 0) {
		/* First fetch rank 0's attached address from win_addr ... */
		printf("ID %i MPI_Get addr ...\n", rank);
		MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win_addr);
		MPI_Get(&get_addr, 1, MPI_AINT, 0, 0, 1, MPI_AINT, win_addr);
		MPI_Win_unlock(0, win_addr);

		/* ... then use that address as the displacement into the
		 * dynamic window. */
		printf("ID %i MPI_Get value ...\n", rank);
		MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win_value);
		MPI_Get(&get_value, 1, MPI_INT, 0, get_addr, 1, MPI_INT, win_value);
		MPI_Win_unlock(0, win_value);

		printf("ID %i MPI_Get completed on dynamic window with value = %i and addr %lu\n", rank, get_value, (unsigned long)get_addr);
	}

	MPI_Win_free(&win_addr);
	MPI_Win_detach(win_value, value);
	MPI_Win_free(&win_value);
	free(value);
	MPI_Finalize();

	return EXIT_SUCCESS;
}
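
To reproduce, I build and launch it roughly like this (a sketch; the source file name and process count are placeholders):

mpicc dynamic_get.c -o dynamic_get
mpirun -n 2 --mca osc rdma ./dynamic_get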

I'm combining MPI_Win_create_dynamic() and MPI_Get(), and in this case I pass the target's memory address as the displacement parameter of MPI_Get(). Again, this works with ucx but not with rdma. I suspect the rdma osc component does not handle the displacement correctly when it denotes the target address.
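
This usage follows the MPI standard: for a window created with MPI_Win_create_dynamic, the displacement unit is 1 and the effective window base is MPI_BOTTOM, so target_disp is the absolute address at the target as returned by MPI_Get_address. A minimal sketch of that convention, with MPI_Aint_add used to form offsets into the attached region (the function and variable names here are illustrative, not from my test program):

#include <mpi.h>

/* Fetch the i-th int of a buffer that the target previously attached
 * with MPI_Win_attach. 'remote_base' is the target-side address that
 * was published via MPI_Get_address. */
void get_ith_int(MPI_Win win, MPI_Aint remote_base, int i, int *out) {
	/* For dynamic windows the displacement is the absolute target
	 * address; offsets must be formed with MPI_Aint_add, not '+'. */
	MPI_Aint disp = MPI_Aint_add(remote_base, (MPI_Aint)(i * sizeof(int)));
	MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
	MPI_Get(out, 1, MPI_INT, 0, disp, 1, MPI_INT, win);
	MPI_Win_unlock(0, win);
}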
