Replies: 2 comments
-
Good point, that is missing from our documentation. https://planetarycomputer.microsoft.com/docs/quickstarts/storage/ shows writing a COG to blob storage, but I'll update that to show Zarr as well. It should be something like:

```python
store = adlfs.AzureBlobFileSystem(account_name, credential=sas_token).get_mapper("<container-name>/path")
ds.to_zarr(store)
```
-
This works!! One minor capitalization detail: the class is `adlfs.AzureBlobFileSystem`, with a capital `S` in `System`. And then the written results can be re-read via `xr.open_zarr`. So awesome, thanks so much! I'm happy to help with docs PRs if there's a public repo for them. Thanks again, the platform is super exciting!
-
Sorry if I'm missing something, but is there a way to write xarray Zarr stores back to Blob Storage?

The docs have great examples of reading xarray/Zarr from storage (which works great), but I haven't been able to find writes, e.g. here:
https://planetarycomputer.microsoft.com/docs/quickstarts/storage/

My understanding is that the `dataset.to_zarr()` interface should be particularly great in this context, because it can chunk and parallelize the writes back to cloud storage, in a cloud-storage-optimized format, for datasets larger than Hub/VM RAM. But I'm getting exceptions trying to write to either `https://...blob.core.windows` or `abfs://` URLs. It seems like these optimizations are a little more straightforward via the `.to_zarr()` interface than via the manual `blob_client.upload_blob()` example, but I could totally be missing something. Any thoughts greatly appreciated, thank you!