- Agglomerate mappings can now also be read from the new zarr3-based
format and from remote object storage.
- AgglomerateFileKey, which identifies an agglomerate file, now holds a
LayerAttachment (which can specify a remote URI), so we can use
VaultPaths to access remote agglomerate files
- The interface of the AgglomerateService methods changed (they now take
the new AgglomerateFileKey)
- AgglomerateService is no longer injected; instead, it is explicitly
created in BinaryDataServiceHolder so we can pass it the
sharedChunkContentsCache
- AgglomerateService now delegates to either Hdf5AgglomerateService
(essentially the old code) or ZarrAgglomerateService (which duplicates a
lot of code from the former, but unifying the two at this time would
sacrifice performance for hdf5)
- DatasetArray has a new public method ReadAsMultiArray, which does not
apply any axis-order transformation but instead yields a MultiArray in
the native order of the DatasetArray
- Removed the unused route agglomerateIdsForAllSegmentIds
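Conceptually, applying an agglomerate mapping is an elementwise id lookup: each segment id in a loaded bucket is replaced by the agglomerate id it belongs to. A minimal Python sketch of that idea (not the project's Scala code; the array name `segment_to_agglomerate` and all data here are made up for illustration):

```python
import numpy as np

# Hypothetical lookup: agglomerate id per segment id, indexed by segment id.
segment_to_agglomerate = np.array([0, 7, 7, 9, 9, 9], dtype=np.uint64)

# A small bucket of raw segment ids, as loaded from the segmentation layer.
bucket = np.array([[1, 2], [3, 5]], dtype=np.uint64)

# Applying the mapping is an elementwise lookup via integer-array indexing.
mapped = segment_to_agglomerate[bucket]

print(mapped.tolist())  # [[7, 7], [9, 9]]
```

In the actual service this lookup table is read from the hdf5 or zarr3 agglomerate file rather than held in memory as a plain array.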
### URL of deployed dev instance (used for testing):
- https://zarragglomerates.webknossos.xyz
### Steps to test:
- With the test dataset from
https://www.notion.so/scalableminds/Test-Datasets-c0563be9c4a4499dae4e16d9b2497cfb?source=copy_link#209b51644c6380ac85e0f6b0c7e339cf
select the agglomerate mapping; it should be displayed correctly
- Import an agglomerate skeleton; it should look plausible
- Do some proofreading (splits + merges); it should work
- Also test that the old format still works (e.g. use the older
test-agglomerate-file dataset with hdf5 agglomerate files)
### TODOs:
<details>
<summary>Backend</summary>
- [x] open zarr agglomerate as zarr array and read contents
- [x] read MultiArray without caring about AxisOrder
- [x] test with 2D
- [x] Read agglomerate arrays with correct types
- [x] Re-Implement public functions of agglomerate service
- [x] applyAgglomerate
- [x] generate agglomerate skeleton
- [x] largest agglomerate id
- [x] generate agglomerate graph
- [x] segmentIdsForAgglomerateId
- [x] agglomerateIdsForSegmentIds
- [x] positionForSegmentId
- [x] Investigate zarr streaming in the tests (reproduced with the
test-dataset: ids were wrong, also in normal data loading)
- [x] Create indirection for selecting the zarr agglomerates OR hdf5
agglomerates
- [x] Reduce code duplication between hdf5 and zarr
- [x] Error handling (index lookups are always wrapped in tryo; abstraction?)
- [x] Read remote (Build VaultPath for URI)
- [x] Discover files?
- [x] Adapt requests to specify which agglomerate file should be read
from (type? full path?)
- [x] Caching / Speedup (added some caching but did not test on
large-scale DS. will be follow-up)
- [x] Clear caches on layer/DS reload
- [x] Make sure the agglomerate zarr directories don’t blow up dataset
exploring
- [x] Code Cleanup
</details>
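For the "read MultiArray without caring about AxisOrder" item above, a minimal numpy sketch of the distinction between an axis-ordered read and a read in stored order (the shapes, axis names, and transposition shown here are assumed purely for illustration):

```python
import numpy as np

# A 3D chunk as stored on disk, here assumed to be laid out in (z, y, x) order.
stored = np.arange(24, dtype=np.uint8).reshape(2, 3, 4)  # z=2, y=3, x=4

# A regular read would apply the dataset's axis order, e.g. transpose to (x, y, z).
axis_ordered = stored.transpose(2, 1, 0)

# A ReadAsMultiArray-style read skips that step and yields the data
# exactly in the order of the underlying array.
as_stored = stored

print(axis_ordered.shape)  # (4, 3, 2)
print(as_stored.shape)     # (2, 3, 4)
```

Skipping the transposition is fine for agglomerate arrays, which are one-dimensional lookup tables where axis order carries no meaning.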
<details>
<summary>Frontend</summary>
- [x] Starting proofreading with a split action doesn’t properly flush
updateMappingName before calling minCut route → fixed by
#8676
</details>
### Issues:
- contributes to #8618
- contributes to #8567
------
- [x] Updated
[changelog](../blob/master/CHANGELOG.unreleased.md#unreleased)
- [x] Removed dev-only changes like prints and application.conf edits
- [x] Considered [common edge
cases](../blob/master/.github/common_edge_cases.md)
- [x] Needs datastore update after deployment