Commit 1321368

Clear most of the README content and point to the docs
1 parent 1344568 commit 1321368

File tree: 1 file changed, +1 −391 lines

README.md

Lines changed: 1 addition & 391 deletions
@@ -6,394 +6,4 @@ NumPy, CuPy, PyTorch, Dask, and JAX are supported. If you want support for other
libraries, or if you encounter any issues, please [open an
issue](https://github.com/data-apis/array-api-compat/issues).

Note that some of the functionality in this library is backwards incompatible
with the corresponding wrapped libraries. The end goal is to eventually make
each array library itself fully compatible with the array API, but this
requires making backwards incompatible changes in many cases, so this will
take some time.

Currently all libraries here are implemented against the [2022.12
version](https://data-apis.org/array-api/2022.12/) of the standard.

## Install

`array-api-compat` is available on both [PyPI](https://pypi.org/project/array-api-compat/)

```
python -m pip install array-api-compat
```

and [Conda-forge](https://anaconda.org/conda-forge/array-api-compat)

```
conda install --channel conda-forge array-api-compat
```

## Usage

The typical usage of this library will be to get the corresponding array API
compliant namespace from the input arrays using `array_namespace()`, like

```py
def your_function(x, y):
    xp = array_api_compat.array_namespace(x, y)
    # Now use xp as the array library namespace
    return xp.mean(x, axis=0) + 2*xp.std(y, axis=0)
```

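The dispatch behind `array_namespace()` can be illustrated with a simplified, hypothetical sketch built on the standard's `__array_namespace__` protocol. This is not the library's actual implementation, which also recognizes NumPy, CuPy, PyTorch, Dask, and JAX arrays that predate the protocol and returns the wrapped compat namespaces for them:

```py
# Hypothetical, simplified sketch of namespace dispatch via the standard
# __array_namespace__ protocol. All input arrays must agree on a single
# namespace, which is then returned to the caller.
def simple_array_namespace(*xs):
    namespaces = {
        x.__array_namespace__()
        for x in xs
        if hasattr(x, "__array_namespace__")
    }
    if len(namespaces) != 1:
        raise TypeError("expected arrays from exactly one array API namespace")
    return namespaces.pop()
```
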
If you wish to have library-specific code-paths, you can import the
corresponding wrapped namespace for each library, like

```py
import array_api_compat.numpy as np
```

```py
import array_api_compat.cupy as cp
```

```py
import array_api_compat.torch as torch
```

```py
import array_api_compat.dask as da
```

> [!NOTE]
> There is no `array_api_compat.jax` submodule. JAX support is contained
> in JAX itself in the `jax.experimental.array_api` module. array-api-compat simply
> wraps that submodule. The main JAX support in this module consists of
> supporting it in the [helper functions](#helper-functions) defined below.

Each will include all the functions from the normal NumPy/CuPy/PyTorch/dask.array
namespace, except that functions that are part of the array API are wrapped so
that they have the correct array API behavior. In each case, the array object
used will be the same array object from the wrapped library.

## Difference between `array_api_compat` and `array_api_strict`

`array_api_strict` is a strict minimal implementation of the array API standard,
formerly known as `numpy.array_api` (see
[NEP 47](https://numpy.org/neps/nep-0047-array-api-standard.html)). For
example, `array_api_strict` does not include any functions that are not part of
the array API specification, and will explicitly disallow behaviors that are
not required by the spec (e.g., [cross-kind type
promotions](https://data-apis.org/array-api/latest/API_specification/type_promotion.html)).
(`cupy.array_api` is similar to `array_api_strict`.)

`array_api_compat`, on the other hand, is just an extension of the
corresponding array library namespaces with changes needed to be compliant
with the array API. It includes all additional library functions not mentioned
in the spec, and allows any library behaviors not explicitly disallowed by it,
such as cross-kind casting.

In particular, unlike `array_api_strict`, this package does not use a separate
`Array` object, but rather just uses the corresponding array library array
objects (`numpy.ndarray`, `cupy.ndarray`, `torch.Tensor`, etc.) directly. This
is because those are the objects that are going to be passed as inputs to
functions by end users. This does mean that a few behaviors cannot be wrapped
(see below), but most of the array API is functional, so this does not affect
most things.

Array consuming library authors coding against the array API may wish to test
against `array_api_strict` to ensure they are not using functionality outside
of the standard, but to prefer this implementation for the default behavior
for end users.

## Helper Functions

In addition to the wrapped library namespaces and functions in the array API
specification, there are several helper functions included here that aren't
part of the specification but which are useful for using the array API:

- `is_array_api_obj(x)`: Return `True` if `x` is an array API compatible array
  object.

- `is_numpy_array(x)`, `is_cupy_array(x)`, `is_torch_array(x)`,
  `is_dask_array(x)`, `is_jax_array(x)`: return `True` if `x` is an array from
  the corresponding library. These functions do not import the underlying
  library if it has not already been imported, so they are cheap to use.

- `array_namespace(*xs)`: Get the corresponding array API namespace for the
  arrays `xs`. For example, if the arrays are NumPy arrays, the returned
  namespace will be `array_api_compat.numpy`. Note that this function will
  also work for namespaces that aren't supported by this compat library but
  which do support the array API (i.e., arrays that have the
  `__array_namespace__` attribute).

- `device(x)`: Equivalent to
  [`x.device`](https://data-apis.org/array-api/latest/API_specification/generated/signatures.array_object.array.device.html)
  in the array API specification. Included because `numpy.ndarray` does not
  include the `device` attribute and this library does not wrap or extend the
  array object. Note that for NumPy and Dask, `device(x)` is always `"cpu"`.

- `to_device(x, device, /, *, stream=None)`: Equivalent to
  [`x.to_device`](https://data-apis.org/array-api/latest/API_specification/generated/signatures.array_object.array.to_device.html).
  Included because neither NumPy's, CuPy's, Dask's, nor PyTorch's array objects
  include this method. For NumPy, this function effectively does nothing since
  the only supported device is the CPU, but for CuPy, this method supports
  CuPy CUDA
  [Device](https://docs.cupy.dev/en/stable/reference/generated/cupy.cuda.Device.html)
  and
  [Stream](https://docs.cupy.dev/en/stable/reference/generated/cupy.cuda.Stream.html)
  objects. For PyTorch, this is the same as
  [`x.to(device)`](https://pytorch.org/docs/stable/generated/torch.Tensor.to.html)
  (the `stream` argument is not supported in PyTorch).

- `size(x)`: Equivalent to
  [`x.size`](https://data-apis.org/array-api/latest/API_specification/generated/array_api.array.size.html#array_api.array.size),
  i.e., the number of elements in the array. Included because PyTorch's
  `Tensor` defines `size` as a method which returns the shape, and this cannot
  be wrapped because this compat library doesn't wrap or extend the array
  objects.

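To illustrate the flavor of these helpers, here are simplified, hypothetical sketches (not the library's actual implementations). The `is_*_array` checks stay cheap by consulting `sys.modules` rather than importing, and `device(x)`/`size(x)` can be approximated with attribute fallbacks:

```py
import math
import sys

def is_numpy_array_sketch(x):
    # If numpy was never imported, x cannot be a numpy array, so we can
    # answer False without triggering a (potentially slow) import.
    np = sys.modules.get("numpy")
    return np is not None and isinstance(x, np.ndarray)

def device_sketch(x):
    # numpy.ndarray has no .device attribute; report "cpu" in that case.
    return getattr(x, "device", "cpu")

def size_sketch(x):
    # Element count as the product of the shape, sidestepping the fact
    # that torch.Tensor.size() is a method rather than an attribute.
    return math.prod(x.shape)
```
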
## Known Differences from the Array API Specification

There are some known differences between this library and the array API
specification:

### NumPy and CuPy

- The array methods `__array_namespace__`, `device` (for NumPy), `to_device`,
  and `mT` are not defined. This reuses `np.ndarray` and `cp.ndarray` and we
  don't want to monkeypatch or wrap them. The helper functions `device()` and
  `to_device()` are provided to work around these missing methods (see above).
  `x.mT` can be replaced with `xp.linalg.matrix_transpose(x)`.
  `array_namespace(x)` should be used instead of `x.__array_namespace__`.

- Value-based casting for scalars will be in effect unless explicitly disabled
  with the environment variable `NPY_PROMOTION_STATE=weak` or
  `np._set_promotion_state('weak')` (requires NumPy 1.24 or newer; see [NEP
  50](https://numpy.org/neps/nep-0050-scalar-promotion.html) and
  https://github.com/numpy/numpy/issues/22341).

- `asarray()` does not support `copy=False`.

- Functions which are not wrapped may not have the same type annotations
  as the spec.

- Functions which are not wrapped may not use positional-only arguments.

The minimum supported NumPy version is 1.21. However, this older version of
NumPy has a few issues:

- `unique_*` will not compare NaNs as unequal.
- `finfo()` has no `smallest_normal`.
- No `from_dlpack` or `__dlpack__`.
- `argmax()` and `argmin()` do not have `keepdims`.
- `qr()` doesn't support matrix stacks.
- `asarray()` doesn't support `copy=True` (as noted above, `copy=False` is not
  supported even in the latest NumPy).
- Type promotion behavior will be value based for 0-D arrays (and there is no
  `NPY_PROMOTION_STATE=weak` to disable this).

If any of these are an issue, it is recommended to bump your minimum NumPy
version.

### PyTorch

- Like NumPy/CuPy, we do not wrap the `torch.Tensor` object. It is missing the
  `__array_namespace__` and `to_device` methods, so the corresponding helper
  functions `array_namespace()` and `to_device()` in this library should be
  used instead (see above).

- The `x.size` attribute on `torch.Tensor` is a function that behaves
  differently from
  [`x.size`](https://data-apis.org/array-api/draft/API_specification/generated/array_api.array.size.html)
  in the spec. Use the `size(x)` helper function as a portable workaround (see
  above).

- PyTorch does not have unsigned integer types other than `uint8`, and no
  attempt is made to implement them here.

- PyTorch has type promotion semantics that differ from the array API
  specification for 0-D tensor objects. The array functions in this wrapper
  library do work around this, but the operators on the Tensor object do not,
  as no operators or methods on the Tensor object are modified. If this is a
  concern, use the functional form instead of the operator form, e.g., `add(x,
  y)` instead of `x + y`.

- [`unique_all()`](https://data-apis.org/array-api/latest/API_specification/generated/array_api.unique_all.html#array_api.unique_all)
  is not implemented, because `torch.unique` does not support returning the
  `indices` array. The other
  [`unique_*`](https://data-apis.org/array-api/latest/API_specification/set_functions.html)
  functions are implemented.

- Slices do not support negative steps.

- [`std()`](https://data-apis.org/array-api/latest/API_specification/generated/array_api.std.html#array_api.std)
  and
  [`var()`](https://data-apis.org/array-api/latest/API_specification/generated/array_api.var.html#array_api.var)
  do not support floating-point `correction`.

- The `stream` argument of the `to_device()` helper (see above) is not
  supported.

- As with NumPy, type annotations and positional-only arguments may not
  exactly match the spec for functions that are not wrapped at all.

The minimum supported PyTorch version is 1.13.

### JAX

Unlike the other libraries supported here, JAX array API support is contained
entirely in the JAX library. The JAX array API support is tracked at
https://github.com/google/jax/issues/18353.

### Dask

If you're using Dask with NumPy, many of the same limitations that apply to
NumPy will also apply to Dask. Besides those differences, other limitations
include missing sort functionality (no `sort` or `argsort`), and limited
support for the optional `linalg` and `fft` extensions.

In particular, the `fft` namespace is not compliant with the array API spec.
Any functions that you find under the `fft` namespace are the original,
unwrapped functions under
[`dask.array.fft`](https://docs.dask.org/en/latest/array-api.html#fast-fourier-transforms),
which may or may not be array API compliant. Use at your own risk!

For `linalg`, several methods are missing, for example:

- `cross`
- `det`
- `eigh`
- `eigvalsh`
- `matrix_power`
- `pinv`
- `slogdet`
- `matrix_norm`
- `matrix_rank`

Other methods may only be partially implemented or return incorrect results
at times.

The minimum supported Dask version is 2023.12.0.

## Vendoring

This library supports vendoring as an installation method. To vendor the
library, simply copy `array_api_compat` into the appropriate place in the
library, like

```
cp -R array_api_compat/ mylib/vendored/array_api_compat
```

You may also rename it to something else if you like (nothing in the code
references the name "array_api_compat").

Alternatively, the library may be installed as a dependency from PyPI.

## Implementation Notes

As noted before, the goal of this library is to reuse the NumPy and CuPy array
objects, rather than wrapping or extending them. This means that the functions
need to accept and return `np.ndarray` for NumPy and `cp.ndarray` for CuPy.

Each namespace (`array_api_compat.numpy`, `array_api_compat.cupy`, and
`array_api_compat.torch`) is populated with the normal library namespace (like
`from numpy import *`). Then specific functions are replaced with wrapped
variants.

Since NumPy and CuPy are nearly identical in behavior, most wrapping logic can
be shared between them. Wrapped functions that have the same logic between
NumPy and CuPy are in `array_api_compat/common/`.
These functions are defined like

```py
# In array_api_compat/common/_aliases.py

def acos(x, /, xp):
    return xp.arccos(x)
```

The `xp` argument refers to the original array namespace (either `numpy` or
`cupy`). Then in the specific `array_api_compat/numpy/` and
`array_api_compat/cupy/` namespaces, the `@get_xp` decorator is applied to
these functions, which automatically removes the `xp` argument from the
function signature and replaces it with the corresponding array library, like

```py
# In array_api_compat/numpy/_aliases.py

from ..common import _aliases

import numpy as np

acos = get_xp(np)(_aliases.acos)
```

This `acos` now has the signature `acos(x, /)` and calls `numpy.arccos`.

Similarly, for CuPy:

```py
# In array_api_compat/cupy/_aliases.py

from ..common import _aliases

import cupy as cp

acos = get_xp(cp)(_aliases.acos)
```

Since NumPy and CuPy are nearly identical in their behaviors, this allows
writing the wrapping logic for both libraries only once.

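Conceptually, `get_xp` amounts to partially applying the trailing `xp` argument. A minimal, hypothetical sketch of this idea (the real decorator also rewrites the function's signature and docstring), using Python's `math` module as a stand-in namespace:

```py
import functools
import math

def get_xp_sketch(xp):
    # Return a decorator that binds the wrapped function's trailing
    # `xp` argument to the given array namespace.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, xp=xp, **kwargs)
        return wrapper
    return decorator

def acos(x, /, xp):
    # Delegate to the namespace; NumPy/CuPy spell this `arccos`,
    # while the stand-in math module spells it `acos`.
    return xp.acos(x)

# The bound function now effectively has the signature acos(x).
acos_bound = get_xp_sketch(math)(acos)
```
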

PyTorch uses a similar layout in `array_api_compat/torch/`, but it differs
enough from NumPy/CuPy that very few common wrappers for those libraries are
reused.

See https://numpy.org/doc/stable/reference/array_api.html for a full list of
changes from the base NumPy (the differences for CuPy are nearly identical). A
corresponding document does not yet exist for PyTorch, but you can examine the
various comments in the
[implementation](https://github.com/data-apis/array-api-compat/blob/main/array_api_compat/torch/_aliases.py)
to see what functions and behaviors have been wrapped.

## Releasing

To release, first note that CuPy must be tested manually (it isn't tested on
CI). Use the script

```
./test_cupy.sh
```

on a machine with a CUDA GPU.

Once you are ready to release, create a PR with a release branch, so that you
can verify that CI is passing. You must edit

```
array_api_compat/__init__.py
```

and update the version (the version is not computed from the tag because that
would break vendorability). You should also edit

```
CHANGELOG.md
```

with the changes for the release.

Then create a tag

```
git tag -a <version>
```

and push it to GitHub

```
git push origin <version>
```

Check that the `publish distributions` action works. Note that this action
will run even if the other CI fails, so you must make sure that CI is passing
*before* tagging.

This does mean you can ignore CI failures, but ideally you should fix any
failures or update the `*-xfails.txt` files before tagging, so that CI and the
CuPy tests pass. Otherwise it will be hard to tell what things are breaking in
the future. It's also a good idea to remove any xpasses from those files (but
be aware that some xfails are from flaky failures, so unless you know the
underlying issue has been fixed, an xpass test is probably still an xfail).
See the documentation for more details: https://data-apis.org/array-api-compat/
