This repository contains benchmarks of Zarr V3 implementations.
> [!NOTE]
> Contributions are welcome for additional benchmarks, more implementations, or otherwise cleaning up this repository.
> Also consider restarting development of the official zarr benchmark repository: https://github.com/zarr-developers/zarr-benchmark
- `zarrs/zarrs` via `zarrs/zarrs_tools`
  - Read executable: `zarrs_benchmark_read_sync`
  - Round trip executable: `zarrs_reencode`
- `google/tensorstore`
- `zarr-developers/zarr-python`
  - With and without the `ZarrsCodecPipeline` from `zarrs/zarrs-python`
  - With and without `dask`
Implementation versions are listed in the benchmark charts.
> [!WARNING]
> Python benchmarks are subject to the overheads of Python and may not be using an optimal API/parameters.
> Please open a PR if you can improve these benchmarks.
- `pydeps`: install Python dependencies (recommended to activate a venv first)
- `zarrs_tools`: install `zarrs_tools` (set `CARGO_HOME` to override the installation dir)
- `generate_data`: generate benchmark data
- `benchmark_read_all`: run the read-all benchmark
- `benchmark_read_chunks`: run the chunk-by-chunk read benchmark
- `benchmark_roundtrip`: run the round trip benchmark
- `benchmark_all`: run all benchmarks
All datasets are `uint16` arrays.
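Because every element is a 2-byte `uint16`, the uncompressed size of any of these arrays is just the element count times two. A minimal sketch (the `uncompressed_size_bytes` helper and the example shape are illustrative, not the benchmark's actual shape):

```python
import math

def uncompressed_size_bytes(shape, bytes_per_element=2):
    """Uncompressed array size: element count times element width.

    uint16 elements are 2 bytes each.
    """
    return math.prod(shape) * bytes_per_element

# Illustrative shape only (the real benchmark shape is defined by the
# data generation step): a 1024^3 uint16 array occupies 2 * 1024^3 bytes.
print(uncompressed_size_bytes((1024, 1024, 1024)))  # 2147483648
```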
| Name | Chunk / Shard Shape | Inner Chunk Shape | Compression | Size |
|---|---|---|---|---|
| Uncompressed | | | None | 8.00 GB |
| Compressed | | | blosclz 9 + bitshuffling | 659 MB |
| Compressed + Sharded | | | blosclz 9 + bitshuffling | 1.20 GB |
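The sizes in the table imply the compression ratios directly: the sharded dataset compresses less effectively than the plain compressed one, which is the usual trade-off for independently addressable inner chunks. A quick check (assuming decimal units, 1 GB = 10^9 bytes):

```python
# Dataset sizes from the table above, in decimal units (1 GB = 1e9 bytes).
uncompressed = 8.00e9
compressed = 659e6
compressed_sharded = 1.20e9

print(round(uncompressed / compressed, 1))          # 12.1x
print(round(uncompressed / compressed_sharded, 1))  # 6.7x
```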
- AMD Ryzen 5900X
- 64GB DDR4 3600MHz (16-19-19-39)
- 2TB Samsung 990 Pro
- Arch Linux (in Windows 11 WSL2, swap disabled, 32GB available memory)
This benchmark measures the time and peak memory usage to "round trip" a dataset (potentially chunk-by-chunk).
- The disk cache is cleared between each measurement
- These are best of 3 measurements
Table of raw measurements (benchmarks_roundtrip.md)
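The "best of N with cache clearing" methodology can be sketched as a small timing harness. This is a hypothetical illustration, not the repository's actual harness (`best_of` and `clear_cache` are assumed names):

```python
import time

def best_of(n, run, clear_cache=None):
    """Return the minimum wall-clock time (seconds) over n runs of `run`,
    optionally clearing caches before each run, as the benchmarks do with
    the disk cache."""
    times = []
    for _ in range(n):
        if clear_cache is not None:
            # On Linux the page cache can be dropped with (requires root):
            #   sync; echo 3 > /proc/sys/vm/drop_caches
            clear_cache()
        start = time.perf_counter()
        run()
        times.append(time.perf_counter() - start)
    return min(times)
```

Taking the minimum of several runs reduces noise from scheduling and background I/O, which is why "best of 3" is reported rather than the mean.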
This benchmark measures the minimum time and peak memory usage to read a dataset chunk-by-chunk into memory.
- The disk cache is cleared between each measurement
- These are best of 1 measurements
Table of raw measurements (benchmarks_read_chunks.md)
> [!NOTE]
> `zarr-python` benchmarks with sharding are not visible in this plot.
This benchmark measures the minimum time and peak memory usage to read an entire dataset into memory.
- The disk cache is cleared between each measurement
- These are best of 3 measurements
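Peak memory can be approximated from within a Python process using only the standard library; this is one possible approach, not necessarily how these benchmarks measure it (`peak_rss_bytes` is an assumed helper name):

```python
import resource

def peak_rss_bytes():
    """Peak resident set size of the current process so far.

    On Linux, ru_maxrss is reported in kibibytes (macOS reports bytes),
    so this conversion assumes a Linux host like the benchmark system.
    """
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss * 1024
```

Note this only captures the measuring process itself; benchmarks of external executables would instead need to sample the child process's memory.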