
Commit 1265310 (1 parent: 211d313)

Update docstring in api.py for open_mfdataset(), clarifying the ``chunks`` argument (#9121)

Per this discussion: #9119

File tree

1 file changed: +2 additions, -1 deletion


xarray/backends/api.py

Lines changed: 2 additions & 1 deletion
@@ -863,7 +863,8 @@ def open_mfdataset(
     In general, these should divide the dimensions of each dataset. If int, chunk
     each dimension by ``chunks``. By default, chunks will be chosen to load entire
     input files into memory at once. This has a major impact on performance: please
-    see the full documentation for more details [2]_.
+    see the full documentation for more details [2]_. This argument is evaluated
+    on a per-file basis, so chunk sizes that span multiple files will be ignored.
     concat_dim : str, DataArray, Index or a Sequence of these or None, optional
     Dimensions to concatenate files along. You only need to provide this argument
     if ``combine='nested'``, and if any of the dimensions along which you want to
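
The docstring change says ``chunks`` is evaluated per file, so a chunk size that would span file boundaries is not honored. A minimal sketch of that behavior (not xarray's actual implementation; ``effective_chunks`` is a hypothetical helper for illustration): each file is chunked independently, so its own dimension length caps the effective chunk size.

```python
def effective_chunks(file_dim_sizes, requested_chunk):
    """Return the chunk size actually used for each input file.

    Hypothetical illustration: chunking is applied per file, so a
    requested chunk larger than a file's dimension is clipped to that
    file's length rather than spanning into the next file.
    """
    return [min(size, requested_chunk) for size in file_dim_sizes]

# Three files, each with 10 steps along a dimension: a requested chunk
# of 25 cannot span files, so each file is chunked at its own length.
print(effective_chunks([10, 10, 10], 25))  # [10, 10, 10]

# A chunk of 5 divides each file cleanly and is used as-is.
print(effective_chunks([10, 10, 10], 5))   # [5, 5, 5]
```

This is why the docstring recommends chunk sizes that divide the dimensions of each dataset: chunks aligned to file boundaries behave the same whether applied per file or to the combined result.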
