
Use @lock for adding moving window results to cumulative current array #106

@vlandau

Description


This will add some overhead, but it brings significant memory savings: the cumulative current storage array won't need an extra dimension of length n = <number of threads> and can instead be stored as a plain matrix. I imagine this should be done in addition to #79, where we'd use a hierarchical parallel processing framework, with chunks of the landscape parallelized across processes and individual moving window solves parallelized via multi-threading.
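
Roughly, the locked accumulation would look something like the sketch below. This is only to illustrate the idea; `accumulate_windows!`, `solve_window`, and the index bookkeeping are placeholders, not the actual Omniscape internals.

```julia
using Base.Threads

# Sketch of locked accumulation: each thread solves its own moving window,
# then takes a shared lock only for the in-place addition into the single
# cumulative matrix. `windows` is a vector of window descriptors and
# `solve_window` is a stand-in for the per-window solve.
function accumulate_windows!(cumulative::Matrix{Float64},
                             windows::AbstractVector,
                             solve_window::Function)
    lk = ReentrantLock()
    @threads for w in windows
        result, rows, cols = solve_window(w)  # thread-local solve; no lock needed
        # Base.@lock acquires lk, runs the block, and releases lk in a finally
        # clause. Only this addition is serialized, so `cumulative` can stay a
        # plain Matrix instead of a 3-d array with one slice per thread.
        Base.@lock lk begin
            @views cumulative[rows, cols] .+= result
        end
    end
    return cumulative
end
```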

To start, I'll do some basic benchmarking at different numbers of threads with the @lock method to see what level of overhead we're talking about here. It might be minimal, but I imagine it will grow as the number of threads increases, since there will be a "longer line" of threads waiting to write to the array.
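
Since Julia fixes the thread count at startup, the simplest comparison is probably a small script run repeatedly with different `-t` values (e.g. `julia -t 1`, `julia -t 4`, `julia -t 8`). Something like the rough harness below, where the window solve is faked with a random matrix; the names and sizes are arbitrary, not benchmark settings from this repo.

```julia
using Base.Threads

# Toy workload: each "window" is a random matrix added into the shared
# cumulative array under the lock. Returns elapsed wall-clock seconds.
function timed_accumulate(n_windows::Int, window_size::Int)
    cumulative = zeros(window_size, window_size)
    lk = ReentrantLock()
    t = @elapsed @threads for i in 1:n_windows
        result = rand(window_size, window_size)  # stand-in for a window solve
        Base.@lock lk begin
            cumulative .+= result
        end
    end
    return t
end

println("threads = ", nthreads(), ", time = ", timed_accumulate(1_000, 200), " s")
```

Timing this against the current per-thread-slice approach at each thread count should give a first sense of how the locking overhead scales.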

Labels: enhancement (New feature or request)
