I'm new to Xarray (using it inside Jupyter notebooks). I love it, and up to now everything has worked like a charm, except when I started to look at how much RAM my functions use (e.g. with htop), which is confusing me, and I haven't found anything related on the web. I have a .nc file with monthly data from which I want to compute yearly means, taking into account month lengths, masking NaN values, and also using only specific months, which requires the use of groupby and resample. A basic function to compute a weighted yearly mean from monthly data looks like this:
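(A minimal sketch of such a function, assuming a Dataset with a monthly `time` coordinate; the function name `weighted_yearly_mean` and its signature are illustrative, following the weighted-mean pattern from the xarray docs.)

```python
import xarray as xr

def weighted_yearly_mean(ds, var):
    # Weight each month by its length in days (28-31).
    month_length = ds.time.dt.days_in_month

    # Normalize the weights within each year so they sum to 1.
    weights = month_length.groupby("time.year") / month_length.groupby("time.year").sum()

    # Track which months actually contain data, so NaNs do not bias the mean.
    ones = xr.where(ds[var].notnull(), 1.0, 0.0)

    # Weighted yearly sums; NaNs are skipped in the numerator and the
    # denominator renormalizes by the weights that were actually used.
    numerator = (ds[var] * weights).resample(time="YS").sum()
    denominator = (ones * weights).resample(time="YS").sum()
    return numerator / denominator
```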
Any idea how to ensure that the memory is freed up, or am I overlooking something and this behavior is actually expected? Thank you! (I also asked this question on Stack Exchange; as soon as I get a solution here or there, I will add it to my post.)
Got a tip on Stack Overflow: it seems like the groupby and resampling operations keep working on my initial dataset `ds`. Deleting it afterwards frees up the space, so I just insert a `ds = ds.copy()` at the beginning of the function to avoid the problem. Of course, I'm still open to suggestions regarding a better workflow / optimized memory use for this type of task.
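For reference, the workaround amounts to something like this sketch (file and variable names are made up):

```python
import xarray as xr

def weighted_yearly_mean(ds, var):
    # Shallow copy: subsequent groupby/resample operations then hold
    # references to the copy rather than to the caller's dataset.
    ds = ds.copy()
    month_length = ds.time.dt.days_in_month
    weights = month_length.groupby("time.year") / month_length.groupby("time.year").sum()
    ones = xr.where(ds[var].notnull(), 1.0, 0.0)
    numerator = (ds[var] * weights).resample(time="YS").sum()
    denominator = (ones * weights).resample(time="YS").sum()
    return numerator / denominator

ds = xr.open_dataset("monthly_data.nc")   # hypothetical file name
yearly = weighted_yearly_mean(ds, "t2m")  # hypothetical variable name
del ds  # deleting the original now actually releases its memory
```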
A couple of suggestions: