to_zarr to s3 with asynchronous=False #5869
Replies: 4 comments 3 replies
-
I'm afraid this misunderstands the situation. The story around the appropriate value for retries is fraught; it is hard to find a default that satisfies most people. For the moment, the retries value is a class attribute, so you could change it directly on the class.
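A minimal sketch of what that could look like, assuming s3fs is the filesystem involved (the value 10 is only an example, not a recommendation):

```python
import s3fs

# retries is a class attribute on S3FileSystem, so overriding it at the
# class level changes the retry count used when S3 calls fail or are
# throttled; pick a value suited to your workload.
s3fs.S3FileSystem.retries = 10
```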
-
PS: better documentation of this would be welcome.
-
Just hit this now - @raybellwaves - did you find what a good rate was?
-
Thanks - I was writing from a cluster of 100 machines with 4 CPUs each, in the same region as the S3 bucket.
-
I'm using https://aws.amazon.com/batch/ to run multiple jobs that call to_zarr. One issue I'm having is hitting the S3 request-rate limit. With to_zarr("s3://bucket/file.zarr", mode="w") I would like to pass asynchronous=False to the s3fs config. I'm guessing I could do this with backend_kwargs, but I'm not sure of the syntax.
For reference, here's the default behaviour of to_zarr when storing to S3. Note I ran this locally and not on AWS Batch, which may add an extra layer of debugging. Running to_zarr.py under viztracer (viztracer to_zarr.py), I see calls to fsspec/asyn.py.
cc @martindurant
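One way to make the s3fs configuration explicit is to build the store by hand and pass it to to_zarr; this is a sketch rather than something taken from the thread, and the bucket path and dataset are placeholders:

```python
import s3fs
import xarray as xr

# Build the filesystem yourself so its options (asynchronous, retries,
# credentials, ...) are set explicitly rather than left to defaults.
fs = s3fs.S3FileSystem(asynchronous=False)

# Wrap the target location in a mutable mapping that zarr can write to.
store = s3fs.S3Map(root="bucket/file.zarr", s3=fs)

# Toy dataset standing in for the real one.
ds = xr.Dataset({"a": ("x", list(range(10)))})
ds.to_zarr(store, mode="w")
```

Recent xarray releases also accept a storage_options dict on to_zarr, forwarded to fsspec when the store is given as a URL; it is worth checking whether the installed version supports it before relying on that route.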