Initial implementation of SUMMA like MatrixMult #136
Conversation
@astroC86 good start!
As you will see, I left a number of comments, many of them stylistic (I still have to review test_matrixmult), but in some cases I am not entirely sure I can follow the code (especially as I am not sure if this is meant to match the algorithm in your GSoC proposal Appendix)...
In general, I think it would be important for you to add a well-written docstring to the SUMMAMatrixMult method and comments in both the code and the example; after that I will do another full review and we will hopefully be closer to a final version that we can merge into the pylops-mpi library 🤓
Finally, whilst I think that this algorithm is very interesting and worth having, we should make sure to understand whether it is really the SUMMA algorithm from https://www.netlib.org/lapack/lawnspdf/lawn96.pdf (I am not so sure, as there each matrix is blocked over both rows and columns). We should either refer to a paper that implements your algorithm, or give it a different name (not SUMMA) and explain it in some detail in the Notes section of the class docstring.
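For reference, here is a minimal serial sketch of the blocking scheme used by the classic SUMMA algorithm from the LAWN 96 paper, where both operands are partitioned over a 2-D grid of blocks; the 2x2 grid and the 4x4 matrices are illustrative assumptions, not code from this PR:

```python
import numpy as np

# Serial sketch of classic SUMMA blocking: both A and B are partitioned over
# a 2-D grid of blocks, and C[i, j] accumulates A[i, k] @ B[k, j] over k.
# On a real process grid, blk(A, i, k) would be broadcast along process row i
# and blk(B, k, j) along process column j before each update.
p = 2          # 2 x 2 process grid (assumption for illustration)
n = 4          # global matrix size, divisible by p here
b = n // p     # block size

A = np.arange(n * n, dtype=float).reshape(n, n)
B = np.ones((n, n))
C = np.zeros((n, n))

def blk(M, i, j):
    # view of the (i, j) block of M
    return M[i * b:(i + 1) * b, j * b:(j + 1) * b]

for k in range(p):
    for i in range(p):
        for j in range(p):
            blk(C, i, j)[:] += blk(A, i, k) @ blk(B, k, j)

assert np.allclose(C, A @ B)
```

The key contrast with the 1-D blocking discussed in this PR is that SUMMA never keeps a full row or column of either operand on one process: each process owns a single 2-D tile of A, B, and C.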
super().__init__(shape=shape, dtype=np.dtype(dtype), base_comm=base_comm)

def _matvec(self, x: DistributedArray) -> DistributedArray:
Add some comments to the code; so far it is very hard to follow...
f"Rank {rank}: z_dist shape {z_dist.local_array.shape} != expected {expected_z_shape}" | ||
) | ||
|
||
# Verify adjoint result values |
I think we should follow what is done in all our tests: use .asarray() on y_dist and z_dist, and compare with C_true and Z_true only for rank 0.
See for example pylops-mpi/tests/test_blockdiag.py, line 97 in 97f8f5a:

if rank == 0:
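A hedged sketch of that pattern, modeled on tests/test_blockdiag.py; y_dist, z_dist, C_true and Z_true are taken from the discussion above and are assumed names, not actual code from this PR:

```python
import numpy as np
from mpi4py import MPI

rank = MPI.COMM_WORLD.Get_rank()

y = y_dist.asarray()   # gather the full distributed result on every rank
z = z_dist.asarray()
if rank == 0:
    # compare against the dense references only once, on rank 0
    np.testing.assert_allclose(y, C_true.ravel(), rtol=1e-14)
    np.testing.assert_allclose(z, Z_true.ravel(), rtol=1e-14)
```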
This won't be possible, because the output is replicated across processes: when we call .asarray() (an allgather), we won't end up with
C_0||C_1||C_2||C_3
but instead with
C_0||C_1||C_0||C_1||C_2||C_3||C_2||C_3
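A toy mpi4py illustration of this duplication (run with 4 ranks); the two-rank subgroups and the block values are assumptions chosen purely to make the effect visible:

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

group = rank // 2                          # ranks {0,1} and {2,3} form subgroups
subcomm = comm.Split(color=group, key=rank)

# every rank in a subgroup holds the same local blocks:
# group 0 holds C_0||C_1 -> [0, 1]; group 1 holds C_2||C_3 -> [2, 3]
local = np.array([2 * group, 2 * group + 1], dtype=float)

gathered = np.concatenate(comm.allgather(local))
# gathered == [0, 1, 0, 1, 2, 3, 2, 3]: each block appears once per rank,
# i.e. C_0||C_1||C_0||C_1||C_2||C_3||C_2||C_3, not C_0||C_1||C_2||C_3
```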
I know, I have been thinking a bit about this and I have a working solution 😄 I think we should add an _allgather_subcomm method to DistributedArray and have asarray return the effective array (C_0||C_1||C_2||C_3) instead of the replicated array (C_0||C_1||C_0||C_1||C_2||C_3||C_2||C_3) when a subcommunicator is used. However, to avoid breaking backward compatibility, we could add an optional parameter (masked, or something along these lines) that, if True, returns the effective array and, if False (the default), returns the full replicated array. Since it is a minor change, I am going to push this code here together with what I worked out in the example to compare the full matrix on rank 0; then you can do the same for the tests.
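A hedged sketch of how the tests could use the proposed parameter once _allgather_subcomm lands; the masked name and its semantics are the ones suggested above and are not yet part of the DistributedArray interface:

```python
import numpy as np

y = y_dist.asarray()                 # default: replicated array,
                                     # C_0||C_1||C_0||C_1||C_2||C_3||C_2||C_3
y_eff = y_dist.asarray(masked=True)  # effective array gathered over the
                                     # subcommunicator: C_0||C_1||C_2||C_3
if rank == 0:
    np.testing.assert_allclose(y_eff, C_true.ravel(), rtol=1e-14)
```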