
Remove get_c_op_from_fortran Auxiliary function #105


Closed
wants to merge 1 commit

Conversation

adit4443ya
Collaborator

Towards #21 #101

@adit4443ya adit4443ya requested a review from gxyd April 11, 2025 11:52
@gxyd gxyd (Collaborator) left a comment

We probably need to fall back to the standard way of getting get_c_mpi_sum() in the C wrapper for this.

@@ -20,7 +22,7 @@ module mpi

     integer, parameter :: MPI_COMM_WORLD = -1000
     real(8), parameter :: MPI_IN_PLACE = -1002
-    integer, parameter :: MPI_SUM = -2300
+    integer, parameter :: MPI_SUM = MPI_SUM_value
Collaborator

I don't really like this approach of setting the value of MPI_SUM depending on the implementation; implementations have their nuances, e.g. they might change the value of MPI_SUM from version to version.

This approach isn't something that will work for us. We definitely do need to set it to some arbitrary value, but that value shouldn't be used as the criterion to identify the operation to perform.

Collaborator Author

@adit4443ya adit4443ya Apr 11, 2025

> they might change the value of MPI_SUM from version to version etc.

Does it change? Can you check with other versions?
I thought that, since these are predefined constants, they are not meant to be changed.
Even if that's the case, can't we keep updating the values as we switch to newer versions?

If not, then we can discard this approach.

Collaborator Author

So I checked with MPICH 4.1.1; it was consistent for the constant values like MPI_SUM, MPI_DOUBLE, MPI_INT, etc.
Let me check with OpenMPI for the same.

Collaborator Author

Yes, I checked with OpenMPI 5.0.3 too; it's the same.

Collaborator Author

These are the logs supporting this.

For OpenMPI:

$ conda activate mpi
$ mpiexec -version
mpiexec (Open MPI) 5.0.6

Report bugs to https://www.open-mpi.org/community/help/
$ mpif90 ../src/mre.f90 -o a && ./a
 MPI Operations:
 MPI_MAX:           1
 MPI_MIN:           2
 MPI_SUM:           3
 MPI Datatypes:
 MPI_INT:          39
 MPI_LONG:          41
 MPI_FLOAT:          45
 MPI_DOUBLE:          46
 MPI_LONG_DOUBLE:          47
 MPI_LONG_LONG_INT:          43
 MPI_REAL:          13
$ conda activate mpi_o
$ mpiexec -version
mpiexec (Open MPI) 5.0.3

Report bugs to https://www.open-mpi.org/community/help/
$ mpif90 ../src/mre.f90 -o a && ./a
 MPI Operations:
 MPI_MAX:           1
 MPI_MIN:           2
 MPI_SUM:           3
 MPI Datatypes:
 MPI_INT:          39
 MPI_LONG:          41
 MPI_FLOAT:          45
 MPI_DOUBLE:          46
 MPI_LONG_DOUBLE:          47
 MPI_LONG_LONG_INT:          43
 MPI_REAL:          13

For MPICH:

$ conda activate mpich_o
$ mpif90 ../src/mre.f90 -o a && ./a
MPI Operations:
MPI_MAX:  1476395009
MPI_MIN:  1476395010
MPI_SUM:  1476395011
MPI Datatypes:
MPI_INT:  1275069445
MPI_LONG:  1275070471
MPI_FLOAT:  1275069450
MPI_DOUBLE:  1275070475
MPI_LONG_DOUBLE:  1275072524
MPI_LONG_LONG_INT:  1275070473
MPI_REAL:  1275069468
$ mpiexec --version
HYDRA build details:
   Version:                                 4.1.1
   Release Date:                            Mon Mar  6 14:14:15 CST 2023
$ conda activate mpich
$ mpif90 ../src/mre.f90 -o a && ./a
MPI Operations:
MPI_MAX:  1476395009
MPI_MIN:  1476395010
MPI_SUM:  1476395011
MPI Datatypes:
MPI_INT:  1275069445
MPI_LONG:  1275070471
MPI_FLOAT:  1275069450
MPI_DOUBLE:  1275070475
MPI_LONG_DOUBLE:  1275072524
MPI_LONG_LONG_INT:  1275070473
MPI_REAL:  1275069468
$ which mpiexec
/home/aditya-trivedi/conda_root/envs/mpich/bin/mpiexec
$ mpiexec --version
HYDRA build details:
   Version:                                 4.3.0
   Release Date:                            Mon Feb  3 09:09:47 AM CST 2025

Collaborator

My concern isn't limited to just the (possibly) changing values of constants with different versions. Making the value depend on the implementation isn't fundamentally the right thing to do, considering that we can have a better solution instead.

I've made a PR here: #107, which intends to do the work that this PR is doing with a different approach (the same as what we've already done with other constants like MPI_COMM_WORLD etc.).

I'll let @certik take a call here.

Collaborator

The approach in #107 is fine.

For this PR I would need to understand the details; I am not quite sure what is going on, so I don't have an opinion.

Collaborator Author

Let's go with the #107 approach then.

Collaborator

Perfect. I apologize that the effort here seems wasted, but it isn't; once we understand more of the MPI work, we may be able to reflect better on this approach later.

For now, I'll merge #107.

Collaborator Author

Yes, I always learn from this: different approaches have their own merits and demerits.
Thanks for bringing up that approach!

@adit4443ya adit4443ya requested review from gxyd and certik April 11, 2025 13:03
@gxyd gxyd (Collaborator) commented Apr 12, 2025

As PR #107 is merged, we can close this PR following the discussion on thread #105 (comment).

@gxyd gxyd closed this Apr 12, 2025

3 participants