In addition to the `take` translation I added in my previous PR, there are some more functions that might be good to add. At least, I am using these myself. I can make a PR:
- `split`. The syntax is different for numpy and tensorflow/torch: the former wants the number of splits or an array of split locations, whereas tensorflow/torch want either the number of splits or an array of split sizes. We can go from one format to the other using `np.diff` (see the sketch after this list).
- `diff`. This is implemented in tensorflow as `tf.experimental.numpy.diff`, and not implemented at all for torch. This also means I don't know what the cleanest way is to implement `split` mentioned above. Maybe just using `np.diff` and then converting to an array of the right backend if necessary?
- `linalg.norm`. This seems to work with tensorflow, but for torch we need to do `_SUBMODULE_ALIASES["torch", "linalg.norm"] = "torch"`.
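To illustrate the `split` conversion, here is a minimal sketch (using plain numpy and torch directly, not any autoray machinery) of turning numpy-style split locations into the split sizes that torch/tensorflow expect, via `np.diff`:

```python
import numpy as np
import torch

x = np.arange(10)
locations = [2, 5, 8]              # numpy-style: indices to split at

np.split(x, locations)             # pieces of length 2, 3, 3, 2

# convert split locations -> split sizes for the torch/tensorflow style
sizes = np.diff([0, *locations, len(x)]).tolist()   # [2, 3, 3, 2]
torch.split(torch.arange(10), sizes)
```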
I didn't check these things for any other libraries.
Maybe a bit of an overly ambitious idea, but have you ever thought about baking in support for JIT? Right now it seems that for TensorFlow everything works with eager execution, and I'm not sure you can compile the computation graphs resulting from a series of `ar.do` calls.
- PyTorch also supports JIT to some extent with TorchScript.
- Numpy doesn't have JIT, but there is Numba.
- CuPy has an interface with Numba that does seem to allow JIT.
- JAX has support for JIT.
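As a rough sketch of what this could look like for the JAX case, and assuming autoray's backend inference also works on the tracer objects JAX passes through a function under `jax.jit`, one could hopefully just wrap a backend-agnostic function directly:

```python
import jax
import jax.numpy as jnp
from autoray import do

def fn(x):
    # written backend-agnostically: dispatches on the type of x
    return do('sum', do('tanh', x) ** 2)

x = jnp.ones(10)
fn(x)                  # eager, dispatches to jax.numpy

jitted = jax.jit(fn)   # assumption: dispatch also works during tracing
jitted(x)
```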
Another thing is gradients. Several of these libraries have automatic gradients, and having an autoray interface for doing computations with automatic gradients would be fantastic as well (although probably also ambitious).
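For what it's worth, for the torch case at least, the existing dispatch already seems to compose with autograd without any extra machinery, as in this small sketch:

```python
import torch
from autoray import do

def loss(x):
    # backend-agnostic function, dispatched to torch because x is a tensor
    return do('sum', do('tanh', x) ** 2)

x = torch.randn(5, requires_grad=True)
loss(x).backward()   # torch's autograd tracks through the dispatched ops
print(x.grad)
```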
If you think these things are doable at all, I wouldn't mind spending some time to try to figure out how this could work.
Less ambitiously, you did mention in #3 that something along the lines of

```python
with set_backend(like):
    ...
```

would be pretty nice. I can try to do this. This probably comes down to checking for a global flag in `ar.do` after the line `if like is None:`.
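A minimal sketch of how that could look, assuming a module-level variable (here `_DEFAULT_BACKEND`, a hypothetical name) that `do` consults when `like` is not given:

```python
import contextlib

_DEFAULT_BACKEND = None   # hypothetical module-level flag


@contextlib.contextmanager
def set_backend(like):
    """Temporarily set a global default backend for ``do`` calls."""
    global _DEFAULT_BACKEND
    old = _DEFAULT_BACKEND
    _DEFAULT_BACKEND = like
    try:
        yield
    finally:
        _DEFAULT_BACKEND = old


# then inside ``do``, roughly:
#
#     if like is None:
#         if _DEFAULT_BACKEND is not None:
#             backend = _DEFAULT_BACKEND
#         else:
#             backend = infer_backend(args[0])
```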