
Commit 83b0599

Micro optimize dataset.isel for speed on large datasets
This targets an optimization for datasets with many "scalar" variables (that is, variables without any dimensions). This can happen when you have many pieces of small metadata that relate to various facts about an experimental condition. For example, we have about 80 of these in our datasets (and I want to increase this number). Our datasets are quite large (on the order of 1 TB uncompressed), so we often have one dimension in the tens of thousands. However, indexing into the dataset has become quite slow. We therefore often "carefully slice out the metadata we need" before doing anything with our dataset, but that isn't quite possible when you want to orchestrate things with a parent application.

These optimizations are likely "minor", but considering the results of the benchmark, I think they are quite worthwhile:

* main (as of #9001): 2.5k its/s
* With #9002: 4.2k its/s
* With this pull request (on top of #9002): 6.1k its/s

Thanks for considering.
1 parent 50f8726 commit 83b0599
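
A minimal sketch of the scenario the commit message describes: one long dimension plus many zero-dimensional "metadata" variables, with repeated `isel` calls timed. The variable names, counts, and slice sizes below are illustrative assumptions, not taken from the commit or its benchmark.

```python
# Illustrative benchmark sketch; sizes and names are assumptions.
import timeit

import numpy as np
import xarray as xr

n_scalars = 80        # roughly the "about 80" metadata variables mentioned above
n_time = 50_000       # one dimension in the tens of thousands

data_vars = {"signal": ("time", np.random.rand(n_time))}
# Scalar variables have no dimensions, so isel should not need to touch them.
data_vars.update({f"meta_{i}": ((), float(i)) for i in range(n_scalars)})

ds = xr.Dataset(data_vars, coords={"time": np.arange(n_time)})

# Time many small isel calls, the operation the commit message benchmarks.
n_calls = 1_000
seconds = timeit.timeit(lambda: ds.isel(time=slice(0, 10)), number=n_calls)
print(f"{n_calls / seconds:.0f} isel calls per second")
```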

File tree

1 file changed: +14 −4 lines


xarray/core/dataset.py

Lines changed: 14 additions & 4 deletions
@@ -2980,20 +2980,30 @@ def isel(
         coord_names = self._coord_names.copy()
 
         indexes, index_variables = isel_indexes(self.xindexes, indexers)
+        all_keys = set(indexers.keys())
 
         for name, var in self._variables.items():
             # preserve variable order
             if name in index_variables:
                 var = index_variables[name]
-            else:
-                var_indexers = {k: v for k, v in indexers.items() if k in var.dims}
-                if var_indexers:
+                dims.update(zip(var.dims, var.shape))
+            # Fastpath, skip all of this for variables with no dimensions
+            # Keep the result cached for future dictionary update
+            elif var_dims := var.dims:
+                # Large datasets with alot of metadata may have many scalars
+                # without any relevant dimensions for slicing.
+                # Pick those out quickly and avoid paying the cost below
+                # of resolving the var_indexers variables
+                if var_indexer_keys := all_keys.intersection(var_dims):
+                    var_indexers = {k: indexers[k] for k in var_indexer_keys}
                     var = var.isel(var_indexers)
                     if drop and var.ndim == 0 and name in coord_names:
                         coord_names.remove(name)
                         continue
+                    # Update our reference to `var_dims` after the call to isel
+                    var_dims = var.dims
+                dims.update(zip(var_dims, var.shape))
             variables[name] = var
-            dims.update(zip(var.dims, var.shape))
 
         return self._construct_direct(
             variables=variables,
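
The core of the change is the fast path for variables whose dims do not overlap the indexers. As a standalone illustration (not xarray's internal API; `variables` and `indexers` here are simplified stand-ins), the pattern looks roughly like this:

```python
# Simplified stand-ins for xarray's internal structures (hypothetical names).
indexers = {"time": slice(0, 10)}
all_keys = set(indexers)

variables = {
    "signal": ("time",),                       # a dimensioned variable
    **{f"meta_{i}": () for i in range(80)},    # many scalar variables
}

for name, dims in variables.items():
    # An empty dims tuple is falsy, so scalar variables skip all of the
    # indexer resolution below.
    if var_dims := dims:
        # Intersect once with a precomputed key set instead of building a
        # dict comprehension over every indexer for every variable.
        if var_indexer_keys := all_keys.intersection(var_dims):
            var_indexers = {k: indexers[k] for k in var_indexer_keys}
            print(f"{name}: would call .isel({var_indexers})")
```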
