I'm working on a new backend to read output from the Parflow hydrologic model. I've been able to get things generally working by following the docs on implementing a new backend, but I'm running into some trouble getting lazy loading to work, mostly due to the way that Parflow writes output. Parflow writes output in a Parflow-specific binary format: each timestep of a simulation is written out to its own file, where each file contains a single 3D field. Basically, the way I'm doing the read right now is to defer reading of the actual binary data to a lower-level library, loop over the files, and then concatenate them together, more or less like the sketch below:
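Something like this minimal sketch (the variable and dimension names are placeholders, and `read_parflow_file` stands in for the lower-level reader):

```python
import numpy as np
import xarray as xr

def read_parflow_file(path):
    """Stand-in for the lower-level Parflow binary reader: returns
    one timestep's 3D field as a numpy array (placeholder data here)."""
    return np.zeros((4, 5, 6))

def open_parflow_run(file_paths):
    # One file per timestep: read each field eagerly, wrap it in a
    # Dataset, and concatenate along a new "time" dimension.
    datasets = []
    for i, path in enumerate(file_paths):
        data = read_parflow_file(path)  # eager: full array in memory
        ds = xr.Dataset(
            {"pressure": (("z", "y", "x"), data)},  # placeholder names
            coords={"time": i},
        )
        datasets.append(ds)
    # xr.concat receives in-memory numpy arrays, so the result is
    # fully loaded -- this is the step that defeats lazy loading.
    return xr.concat(datasets, dim="time")
```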
Per this issue, the concat operation is not lazy, which means that my backend isn't very usable for large datasets (particularly long time periods). I'm wondering if there is a "right" way to implement this backend so that it supports lazy loading?
-
Okay, after a quick side chat with @jhamman this was a trivial fix. All I had to do was expose `self.read_parflow_file` so that you can use `xr.open_dataset` with the actual files that contain the data. Then, when using the metadata file to read the full dataset, I change the eager per-file read to a lazy `xr.open_dataset` call on each file. 🚀
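A minimal sketch of that change, assuming the backend is registered under `engine="parflow"` and that `chunks={}` is what hands each variable to dask so the concat only builds a task graph instead of loading data:

```python
import xarray as xr

def open_parflow_run_lazy(file_paths):
    # Open each per-timestep file through the backend itself;
    # chunks={} wraps every variable in a dask array, so the
    # concat below stays lazy until values are actually needed.
    datasets = [
        xr.open_dataset(path, engine="parflow", chunks={})
        for path in file_paths
    ]
    return xr.concat(datasets, dim="time")
```

Equivalently, `xr.open_mfdataset(file_paths, engine="parflow", combine="nested", concat_dim="time")` should do the loop and the lazy concat in one call (again assuming the engine name).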