Python array API support #1683
Conversation
Would it be a lot of work to put the weighting hierarchy in its own PR? I think there are a bunch of questions to be discussed there, and it should be checked thoroughly whether this really works, independent of the Python Array API generalizations. What certainly needs clarification (if not changes) is the point about the methods being "mutually exclusive". It makes some sense from the definition side (we don't want conflicting semantics between e.g. distance and norm). But surely, from the use side we should then have all of them available, as far as mathematically possible? In the most typical use case (say, …
About the fact that the weighting refactoring should be in its own PR: I chose to add it here because, for me, the Python array API allows decoupling the backend classes (which will only add a few attributes to the Weighting) and the abstract class. As I am doing it for the TensorSpace and TensorSpaceElement classes, I thought it would fit :)
I did not mean to click "ready for review"
Well, for one thing, I don't think the current state actually works correctly for a simple inner-product space. In the …
Pretty sure all that could be fixed, but the thing is, it's not so clear-cut to say the functionality remains unchanged (and whether that even makes sense) given that the decision logic is set up in a different way now, with a dict-of-methods instead of inheritance. And this would be best investigated in a separate PR.
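For readers following along, here is a rough, purely illustrative sketch (not ODL's actual code) of the design difference under discussion: with a dict of methods, missing operations can be derived where mathematically possible instead of being fixed by a subclass hierarchy.

```python
import numpy as np

class DictWeighting:
    """Illustrative stand-in: one class holding a dict of methods."""

    def __init__(self, methods):
        # e.g. {'inner': ...}, {'norm': ...} or {'dist': ...}
        self.methods = dict(methods)

    def inner(self, x, y):
        return self.methods['inner'](x, y)

    def norm(self, x):
        if 'norm' in self.methods:
            return self.methods['norm'](x)
        # Derived from the inner product when only that is given.
        return float(np.sqrt(self.methods['inner'](x, x).real))

    def dist(self, x, y):
        if 'dist' in self.methods:
            return self.methods['dist'](x, y)
        # Derived from the norm (which may itself be derived).
        return self.norm(x - y)

w = DictWeighting({'inner': lambda x, y: np.vdot(y, x)})
print(w.norm(np.array([3.0, 4.0])))  # 5.0
```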
Okay, got you :) I should have checked the behaviour better. However, I don't really understand the point about splitting the PRs. I looked at your pytorch backend PR: the commits are mixed and I think it's still readable. I am also not sure that you ran the tests that would have spotted the errors that we all make while coding. I told you this is a WIP, you told me to make a PR, and now you say that it's not thoroughly checked. I guess we should align in terms of how we push changes to this repo. How about discussing live on Friday?
No critique meant. We're moving in the right direction, I just keep underestimating the amount of individual changes and possible implications. The more we separate concerns, the better we can ensure that each change is safe and won't cause more problems further down the road than it fixes.
…g argument is provided
… class and added device, impl and dtype as posargs
…ta attribute and the associated data property out of the backend
…ss. Note: the copy() method without dunders remains in the backend as numpy and pytorch have different ways to copy data (copy vs clone) and the __clone__ calls clone anyway
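As a side note, the copy-vs-clone split mentioned here boils down to something like the following hedged sketch (the helper name is illustrative):

```python
import numpy as np

def backend_copy(arr):
    # PyTorch tensors expose .clone(); NumPy(-like) arrays expose .copy().
    if hasattr(arr, 'clone'):
        return arr.clone()
    return arr.copy()

print(backend_copy(np.arange(3)))  # independent copy of the NumPy array
```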
… the abstract TensorSpaceElement class.
…oat and complex python types
…API accepts int, float and complex in addition to str for the dtype argument, I changed the attribute to reflect that
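A minimal sketch of what accepting Python scalar types for dtype can look like, assuming NumPy-style dtype resolution (not necessarily ODL's implementation):

```python
import numpy as np

def normalize_dtype(dtype):
    # np.dtype understands Python scalar types (float -> float64,
    # complex -> complex128, int -> platform default int) as well as
    # strings such as 'float32'.
    return np.dtype(dtype)

assert normalize_dtype(float) == np.dtype('float64')
assert normalize_dtype('float32') == np.dtype('float32')
```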
…_namespace.equal function when the weight is an array
… If an array is provided, the shape of the Weighting becomes the shape of the array. This is helpful for checking that a weighting defined by an array has a shape matching that of the space it is passed to.
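A hedged sketch of that shape check; the class and function names are illustrative, not ODL's actual API:

```python
import numpy as np

class ArrayWeighting:
    def __init__(self, weights):
        self.weights = np.asarray(weights)
        # The weighting takes its shape from the weight array.
        self.shape = self.weights.shape

def check_weighting_shape(space_shape, weighting):
    if weighting.shape != tuple(space_shape):
        raise ValueError(
            f'weighting shape {weighting.shape} does not match '
            f'space shape {tuple(space_shape)}'
        )
```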
This is done by using np as a fallback when the provided array is a list or a tuple, and otherwise looking up the array_backend using the array API utility functions.
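A minimal sketch of that lookup, assuming the array-api-compat helper array_namespace; the function name get_namespace is illustrative:

```python
import numpy as np
from array_api_compat import array_namespace

def get_namespace(arr):
    # Lists and tuples carry no array namespace, so fall back to NumPy.
    if isinstance(arr, (list, tuple)):
        return np
    return array_namespace(arr)

xp = get_namespace([1.0, 2.0])   # -> NumPy fallback
```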
Ah, now I get it! Yes, …
…e odl's namespace.
…ckward transforms. In astra_setup, we modify several things. 1) API change: a keyword is now required. 2) For 2D tomography, ASTRA on the GPU emulates a 3D geometry, so the projection coordinates need to be processed accordingly.
This is to avoid confusion between impl for TensorSpace and impl for the ASTRA CPU or CUDA version.
This involves removing np calls and conforming to the new astra_setup API.
This involves getting rid of the astra.data3d calls and using the astra.experimental.direct_FP3D / BP3D functions.
… the resize_array method.
…rward and backward calls. I found that having two functions to maintain was more error-prone than having just one function with if/else logic. Also, I made sure that the CPU calls are compatible with PyTorch by doing an explicit CPU conversion.
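The explicit CPU conversion mentioned here can be as simple as the following sketch (helper name illustrative); ASTRA's CPU code path expects NumPy arrays in host memory:

```python
import numpy as np

def as_cpu_numpy(arr):
    # PyTorch tensors: detach from autograd and move to host memory first.
    if type(arr).__module__.startswith('torch'):
        return arr.detach().cpu().numpy()
    return np.asarray(arr)
```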
…spaces are implemented using numpy.
This remains an ongoing TODO.
…array API compatible
So far, we only had logic for int, float and objects with an __array__ attribute. We forgot to support lists/tuples. This commit fixes it.
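Roughly, the dispatch described above looks like the following sketch (names illustrative, not ODL's implementation); the list/tuple branch is the one this commit adds:

```python
import numpy as np

def coerce_input(inp, shape, dtype):
    # Scalars are broadcast to the full shape of the space.
    if isinstance(inp, (int, float, complex)):
        return np.full(shape, inp, dtype=dtype)
    # Lists and tuples are converted explicitly (the previously missing case).
    if isinstance(inp, (list, tuple)):
        return np.asarray(inp, dtype=dtype).reshape(shape)
    # Anything exposing __array__ already behaves like an array.
    if hasattr(inp, '__array__'):
        return np.asarray(inp, dtype=dtype)
    raise TypeError(f'cannot convert {type(inp).__name__!r} to an element')
```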
Although all the tests passed, when running the doctests I realised that there was a problem with in-place updates (sigh) for the .element method. The problem was that although we did the elementwise operation in place, the call to element bounced to dlpack_transfer even though it was not needed. The aim of dlpack_transfer is to ensure backend compatibility and handle device copies, but it is not necessary if the array already has the space's dtype and the space's device. So we would ALWAYS copy the data when calling element, which both destroys the in-place update and is unnecessarily slow. Even though this fixes the problem of in-place updates of the arrays, since we always wrap the array into a space whose type is inferred from the output of the computation, we do not have an in-place update of ODL objects. I also enhanced the error message of element and separated the list/tuple condition from the int/float/complex condition. The other changes make sure that the doctests expect the right output when printed.
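A minimal sketch of the fast path described above: skip the DLPack transfer when the incoming array already has the space's dtype and device. The dlpack_transfer name comes from the commit message; the SpaceInfo stand-in and the surrounding logic are illustrative.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SpaceInfo:
    dtype: object
    device: object
    xp: object   # the space's array namespace, e.g. numpy

def dlpack_transfer(arr, space):
    # Cross-backend / cross-device copy via the DLPack protocol.
    return space.xp.from_dlpack(arr)

def to_space_array(arr, space):
    # Fast path: the array already matches the space, so wrap it as-is
    # instead of copying; this is what preserves in-place updates.
    if arr.dtype == space.dtype and getattr(arr, 'device', None) == space.device:
        return arr
    return dlpack_transfer(arr, space)
```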
It caused circular import errors. It is now imported from itertools.
This aims to test the new functionalities of ODL, especially the odl namespace functions implementing the Python array API and the multi-backend/multi-device support.
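A pytest-style sketch of the kind of multi-backend test meant here; the odl.rn(..., impl=...) signature follows this PR's description and may differ from the final API:

```python
import pytest
import odl

IMPLS = ['numpy']
try:
    import torch  # noqa: F401
    IMPLS.append('pytorch')
except ImportError:
    pass

@pytest.mark.parametrize('impl', IMPLS)
def test_element_roundtrip(impl):
    space = odl.rn(3, impl=impl)
    x = space.element([1.0, 2.0, 3.0])
    assert x in space
```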
… Tensor and a backend-specific array. @leftaroundabout what do you think about this?
The aim of this pull request is to make ODL multi-backend through the Python Array API.
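As a hedged usage sketch of that goal (the keyword names impl and device are taken from this PR's description and may change):

```python
import odl

# NumPy backend (the current default).
space_np = odl.rn(3, impl='numpy')
x = space_np.element([1.0, 2.0, 3.0])

# With PyTorch installed, the same space could live on another backend/device:
# space_pt = odl.rn(3, impl='pytorch', device='cuda')
```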