
Optimize memory usage of forward modelling of prism layer #563

@santisoler

Description of the desired feature:

The current implementation of the forward modelling of the gravity field due to a prism layer converts the layer, represented as a regular xarray.Dataset with top and bottom surfaces, into an explicit collection of prisms. For small problems this is not a big deal, but for large problems the list of prisms can take a significant amount of memory, and allocating it also adds non-negligible computation time.
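As a concrete illustration, the snippet below builds a small prism layer and runs the existing forward model. The `prism_layer` and `prism_gravity` functions and the `.prism_layer.gravity()` accessor method are real Harmonica API; the size estimate at the end is only a rough sketch of the up-front allocation this issue wants to avoid (assuming roughly 6 floats of boundaries per prism), not the library's exact internal code.

```python
import numpy as np
import verde as vd
import harmonica as hm

# Small synthetic layer; for real terrain corrections the grids are much larger
region = (0, 10e3, 0, 10e3)
easting, northing = vd.grid_coordinates(region, spacing=500)
surface = 100 * np.sin(easting / 1e3) * np.cos(northing / 1e3) + 500
layer = hm.prism_layer(
    (easting[0, :], northing[:, 0]),
    surface=surface,
    reference=0,
    properties={"density": 2670 * np.ones_like(surface)},
)

# Forward modelling on a grid of observation points at 1000 m height
coordinates = vd.grid_coordinates(region, spacing=1e3, extra_coords=1000)
g_z = layer.prism_layer.gravity(coordinates, field="g_z")

# Before calling prism_gravity, the layer is expanded into an explicit
# collection of prisms: roughly 6 boundary values (west, east, south, north,
# bottom, top) per prism, allocated up front for the whole layer.
n_prisms = layer.northing.size * layer.easting.size
print("approx. prism boundaries array:", n_prisms * 6 * 8 / 1e6, "MB")
```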

This implementation was chosen initially because it was quite easy to convert the layer into a collection of prisms and then pass them to the prism_gravity function.

Alternatively, we could have underlying Numba code that takes the coordinates of the centers of the prisms (the 1d coordinate arrays in the xarray.Dataset) along with the top and bottom arrays, and performs the forward modelling by "building" the boundaries of each prism on the fly. This way we wouldn't need to pre-allocate any list of prisms.
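A minimal sketch of that idea, assuming a regular grid spacing, could look like the function below. Here `prism_kernel_gz` is a hypothetical placeholder for the jitted analytic kernel of a single prism that Harmonica already has internally (the real kernel and its signature differ); Numba allows passing a jitted function as an argument to another jitted function.

```python
import numpy as np
from numba import jit, prange


@jit(nopython=True, parallel=True)
def prism_layer_gravity(
    obs_e, obs_n, obs_u, easting, northing, top, bottom, density, prism_kernel_gz
):
    """
    Forward model a prism layer without allocating a list of prisms.

    easting and northing are the 1d coordinates of the prism centers, while
    top, bottom and density are 2d arrays with shape
    (northing.size, easting.size). prism_kernel_gz is a hypothetical jitted
    kernel for the vertical gravity effect of a single prism.
    """
    # Regular spacing is assumed so boundaries can be rebuilt from the centers
    d_east = easting[1] - easting[0]
    d_north = northing[1] - northing[0]
    result = np.zeros(obs_e.size)
    for l in prange(obs_e.size):
        for i in range(northing.size):
            for j in range(easting.size):
                # Build the prism boundaries on the fly instead of reading
                # them from a pre-allocated array of prisms
                west = easting[j] - d_east / 2
                east = easting[j] + d_east / 2
                south = northing[i] - d_north / 2
                north = northing[i] + d_north / 2
                result[l] += density[i, j] * prism_kernel_gz(
                    obs_e[l], obs_n[l], obs_u[l],
                    west, east, south, north, bottom[i, j], top[i, j],
                )
    return result
```

With this approach the memory footprint is just the arrays already stored in the Dataset plus the output array, regardless of how many prisms the layer contains.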

Are you willing to help implement and maintain this feature?

I might if I have the time. But if anyone is struggling with computing terrain corrections for large models and is annoyed by the amount of memory that is currently required, feel free to tackle it!

Labels: enhancement (Idea or request for a new feature)
