docs/papers/joss/paper.md (+70 −15)
@@ -50,7 +50,7 @@ authors:
  - name: Toby Bischoff
    orcid: XXXX
    affiliation: 1
  - name: Lenka Novak
    orcid: XXXX
    affiliation: 1
  - name: Daniel (Zhengyu) Huang
@@ -99,15 +99,16 @@ Earth system model dynamical cores are traditionally hard-coded to specific equa
`ClimaCore.jl` aims to provide a more flexible approach, inspired by other mathematical software libraries for constructing spatial discretizations of partial differential equations (PDEs), such as PETSc [@petsc-web-page; @petsc-user-ref; @petsc-efficient], libCEED [@libceed-joss-paper; @libceed-user-manual], MFEM [@MFEMlibrary; @mfem-paper], deal.II [@dealII92], Firedrake [@firedrake], and FeniCS [@FeniCS].

However, ESMs tend to have some specific properties, some of which can leverage modern heterogeneous architectures (including CPUs and GPUs) or modern ML/AI tools, such that there are advantages to developing a new library.

Firstly, ESMs often use a very skewed aspect ratio: when performing global simulations, it is common to use a resolution of O(100 km) in the horizontal, compared with O(100 m) in the vertical. This leads to several other design considerations:

- Implicit-explicit (IMEX) timestepping is typically used, with only the vertical components of the governing equations handled implicitly; these are known as horizontally-explicit, vertically-implicit (HEVI) schemes (see the sketch after this list).
- Distributed-memory parallelism is used only in the horizontal direction, which avoids the need for communication inside the implicit step.
- Meshes are not fully unstructured; instead, 3D meshes are constructed by extruding the 2D horizontal mesh.
- Different discretizations may be used in each dimension: for example, our current atmosphere model uses a spectral element discretization in the horizontal, with a staggered finite difference discretization in the vertical.
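
As a minimal illustration of a HEVI scheme (a sketch of the simplest IMEX splitting, not necessarily the specific integrator used with ClimaCore), split the semi-discretized system into horizontal tendencies $H$ and vertical tendencies $V$:

$$\frac{\partial u}{\partial t} = H(u) + V(u), \qquad u^{n+1} = u^n + \Delta t \left[ H(u^n) + V(u^{n+1}) \right].$$

Because only $V$ is treated implicitly, the implicit solve couples degrees of freedom within a single vertical column only, and therefore requires no horizontal (distributed) communication.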

Secondly, we aim to support both local high-resolution (box) configurations and global lower-resolution (spherical) simulations, using a unified equation set and discretization. Specifically, we define the equations using a local Cartesian coordinate system: in a box configuration this corresponds to the usual global coordinate system, while on a sphere we use latitude, longitude, and altitude. This means that `z` refers to the Cartesian vertical coordinate in the box, or the altitude above the surface on the sphere. Similarly, for vector quantities, `u`, `v`, and `w` refer to the Cartesian components in the box, or the zonal, meridional, and radial components on the sphere. Additionally, for our atmosphere model we make use of the so-called "vector-invariant form", which specifies the equations directly in covariant and contravariant components (with respect to the mesh elements).

@@ -122,13 +123,17 @@ However, ESMs tend to have some specific properties, some of which can leverage

# Introduction

<!-- from README -->
`ClimaCore.jl` is the dynamical core (_dycore_) of the atmosphere and land models, providing discretization tools to solve the governing equations of the ESM component models.

`ClimaCore.jl`'s high-level API facilitates modularity and composition of differential operators and the definition of flexible discretizations. This, in turn, is coupled with low-level APIs that support different data layouts, specialized implementations, and flexible models for threading, to better address high-performance optimization, data storage, and scalability challenges on modern HPC architectures. `ClimaCore.jl` is designed to be performance portable and can be used in a distributed setting with CPU and GPU clusters.

## Why Julia?

Julia is a compiled, dynamic programming language with great potential in numerical analysis and applied sciences. One of its defining features is multiple dispatch, in which the method of a function to be called is chosen dynamically based on the run-time types of its arguments. Multiple dispatch both increases run-time speed and allows users to easily extend existing functions to suit their own needs. Another of Julia's useful qualities is array broadcasting, which facilitates efficient operations on large data structures without extra memory allocations. Julia balances a minimal barrier to entry with superior performance compared to similarly easy-to-use languages, such as Python or Matlab.
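
A toy example of both features (plain Julia; the functions are illustrative and not part of ClimaCore):

```julia
# Multiple dispatch: the method that runs is chosen from the run-time
# types of the arguments.
area(r::Real) = π * r^2                # method for a scalar radius
area(rs::AbstractVector) = area.(rs)   # method for a vector of radii

area(2.0)              # dispatches to the scalar method
area([1.0, 2.0, 3.0])  # dispatches to the vector method

# Broadcasting: `@.` fuses all elementwise operations into a single loop,
# allocating no intermediate arrays.
radii = [1.0, 2.0, 3.0]
scaled = @. 2 * area(radii) + 1
```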

## Technical aims and current support

* Support both large-eddy simulation (LES) and general circulation model (GCM) configurations for the atmosphere.
* A suite of tools for constructing space discretizations.
* Horizontal spatial discretization:
@@ -151,6 +156,14 @@ However, ESMs tend to have some specific properties, some of which can leverage

To construct a spatial discretization, ClimaCore's API requires the following elements (a minimal usage sketch follows the list):

- `Domain`: defines the bounding region of a domain. It can be an `IntervalDomain` (1D), a `RectangleDomain` (2D), or a `SphereDomain` (2D, which can be extruded to 3D).
- `Mesh`: a division of a domain into elements.
- `Topology`: determines the ordering and connections between elements of a mesh.
- `Space`: represents a discretized function space over some domain. Currently, spectral element discretizations (both continuous and discontinuous Galerkin) and a staggered finite difference discretization are supported, as well as the combination of the two in what we call a hybrid space.
- `Field`: on a given `Space`, we can construct a `Field`, which can represent either a scalar-valued field, a vector-valued field, or a combination of the two. A field is simply the association of a space and the values at each node in the space.
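
To make this concrete, here is a minimal sketch of a single-column discretization (constructor names and keyword arguments follow the ClimaCore API at the time of writing and may differ between versions; for 1D finite difference spaces the `Topology` is built internally):

```julia
import ClimaCore: Domains, Meshes, Spaces, Fields, Geometry

# Domain: a vertical interval from z = 0 m to z = 10 m, with named boundaries.
zdomain = Domains.IntervalDomain(
    Geometry.ZPoint(0.0),
    Geometry.ZPoint(10.0);
    boundary_names = (:bottom, :top),
)

# Mesh: divide the domain into 16 elements.
zmesh = Meshes.IntervalMesh(zdomain; nelems = 16)

# Space: a staggered finite difference space with values at cell centers.
zspace = Spaces.CenterFiniteDifferenceSpace(zmesh)

# Field: for example, the field of coordinates at each node of the space.
zcoord = Fields.coordinate_field(zspace)
```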

### Composable Spatial Operators

Operators compute spatial derivative operations:

- For performance reasons, we need to be able to "fuse" multiple operators and function applications.
- Julia provides a tool for this: broadcasting, with a very flexible API.

We can think of operators as "pseudo-functions": they cannot be called directly, but act similarly to functions in the context of broadcasting. They are matrix-free, in the sense that the action of the operator is defined directly on a field, without explicitly assembling the matrix representing the discretized operator. ClimaCore.jl supports spectral element operators for the horizontal direction and finite difference operators for the vertical direction.
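
For instance, here is a hedged sketch of a vertical (face-to-center) divergence fused with pointwise arithmetic in a single broadcast; `w_flux` is assumed to be a face-valued scalar field from a setup like the one above, `tendency` a center-valued field, and the operator names follow the ClimaCore API at the time of writing:

```julia
import ClimaCore: Operators, Geometry

# Face-to-center divergence; boundary conditions are supplied as part of
# the operator definition.
divf2c = Operators.DivergenceF2C(;
    bottom = Operators.SetValue(Geometry.WVector(0.0)),
    top = Operators.SetValue(Geometry.WVector(0.0)),
)

# One fused traversal: wrap each value as a vertical vector, apply the
# operator, and negate the result, without materializing temporaries.
@. tendency = -divf2c(Geometry.WVector(w_flux))
```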

### Other operations

These include direct stiffness summation (DSS), limiters, and remapping.

Since, in a finite element representation, a given field is defined independently on each subdomain (element), it might have discontinuous values across element boundaries. When we use a _continuous Galerkin_ (CG) spectral element discretization, we must ensure that the state variable is continuous across element boundaries. Therefore, we apply a so-called Direct Stiffness Summation (DSS) operator, which enforces this continuity by removing the redundant multiplicity of degrees of freedom across element boundaries and corners.
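
In ClimaCore this is exposed as an in-place operation on a field (a sketch; `field` is assumed to live on a continuous Galerkin spectral element space):

```julia
import ClimaCore: Spaces

# Combine shared nodal values across element faces and corners (weighted
# by the local quadrature weights), restoring continuity in place.
Spaces.weighted_dss!(field)
```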

For atmospheric model applications, it may be necessary to ensure monotonicity or positivity of some quantities (e.g., moisture). For this reason, ClimaCore.jl supports a class of so-called _flux limiters_, which identify values that violate the prescribed constraints and bring them to the closest admissible value.
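
A hedged sketch of the limiter workflow (the names follow the ClimaCore documentation at the time of writing and may differ between versions; `ρq` is a density-weighted tracer field and `ρ` the density field):

```julia
import ClimaCore: Limiters

# Construct a quasi-monotone limiter, compute element-wise bounds from the
# pre-advection state, then project the advected field back into bounds.
limiter = Limiters.QuasiMonotoneLimiter(ρq)
Limiters.compute_bounds!(limiter, ρq, ρ)
Limiters.apply_limiter!(ρq, ρ, limiter)
```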

In ESMs, for postprocessing and visualization purposes, it is often necessary to map data from a spherical grid (in spherical or Cartesian coordinates) to a latitude-longitude grid. To achieve this, ClimaCore.jl uses the external software package _TempestRemap_, a consistent and monotone remapping package for arbitrary grid geometry [@TempestRemap1; @TempestRemap2].

Remapping follows the conservative, consistent, and (if required) monotone method detailed in [Ullrich and Taylor 2015](https://journals.ametsoc.org/view/journals/mwre/143/6/mwr-d-14-00343.1.xml). This is a linear remapping operator, which obtains the target field by multiplying the source field with a sparse matrix of source-to-target weights ($\psi^t_i = R_{ij} \psi^s_j$). The weights for Cartesian domains are generated in ClimaCore, while for the equiangular cubed sphere we use TempestRemap to generate them. The sparse matrix multiply is not yet parallelized, but will be soon. If monotonicity is not required, this method can capitalize on the high order of our CG discretization.
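
Conceptually, the remap step is a single sparse matrix-vector product; a generic sketch (not the ClimaCore/TempestRemap implementation):

```julia
using SparseArrays

# R: an n_target × n_source matrix of precomputed remapping weights;
# each row summing to 1 makes the remap consistent.
R = sparse([1, 1, 2, 2], [1, 2, 2, 3], [0.5, 0.5, 0.25, 0.75], 2, 3)

ψ_source = [1.0, 2.0, 4.0]
ψ_target = R * ψ_source   # ψ_target[i] = Σ_j R[i, j] * ψ_source[j]
```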

### ODE API compatibility [Dennis]

## Low-level API [Charlie]

ClimaCore has layered abstractions for data layouts to flexibly and efficiently read and write data of different sizes. The space and field abstraction layers, which contain local geometry and field variable values, sit on top of this layer, so that all fields leverage the flexibility of the data layouts. In addition, operators for slicing data in different ways (for example, column-wise and level-wise) are supported for fields. These layers separate concerns about which variables are stored, and in what shape, from the variables that users interact with. One benefit is that memory layouts and access patterns can be changed internally, which can be very helpful for improving performance.
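
For example, column-wise slicing and access to the underlying layout are exposed through the field API (a sketch; the function names follow the ClimaCore API at the time of writing, and `field`, with nodal indices `i`, `j` inside horizontal element `h`, is assumed from an extruded 3D setup):

```julia
import ClimaCore: Fields

# A view of a single vertical column of a 3D field, selected by nodal
# indices (i, j) within horizontal element h.
col = Fields.column(field, i, j, h)

# The raw data layout object behind a field, e.g. for low-level kernels.
data = Fields.field_values(field)
```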

# Parallelism

## ClimaComms [Sriharsha]

## GPU

<!-- Acknowledgement of any financial support. -->

# Acknowledgements

We acknowledge contributions from several others who played a role in the evolution of this library, especially contributors to and users of an earlier iteration of this effort, [ClimateMachine.jl](https://github.com/CliMA/ClimateMachine.jl) [@climate_machine_zenodo]. The development of this package was supported by the generosity of Eric and Wendy Schmidt by recommendation of the Schmidt Futures program, and by the Defense Advanced Research Projects Agency (Agreement No. HR00112290030).