docs/src/single_column_prospect.md
```
start_date: "20070701"
site_latitude: 17.0
site_longitude: -149.0
```
By this point, the first four entries are intuitive. We need to dispatch over each of these methods to set up the forcing for each component of the model. To obtain the observations, note that instead of directly specifying a file we must specify a `start_date`, `site_latitude`, and `site_longitude`. This is because we now use `ClimaArtifacts.jl` to store the data, which ensures reproducibility of our simulations and results. `start_date` should be in YYYYMMDD format, `site_latitude` should be in degrees between -90 and 90, and `site_longitude` should be in degrees between -180 and 180.
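As a quick sanity check on these settings, the snippet below sketches the constraints in code. It is a minimal standalone example, not part of the package, and the helper name `check_site_settings` is hypothetical.

```julia
using Dates

# Hypothetical helper illustrating the constraints described above; not part of the package.
function check_site_settings(start_date::AbstractString, site_latitude, site_longitude)
    # `start_date` must parse as a date in YYYYMMDD format, e.g. "20070701".
    Date(start_date, dateformat"yyyymmdd")
    # Latitude and longitude are given in degrees.
    @assert -90 <= site_latitude <= 90 "site_latitude must lie in [-90, 90]"
    @assert -180 <= site_longitude <= 180 "site_longitude must lie in [-180, 180]"
    return nothing
end

check_site_settings("20070701", 17.0, -149.0)  # values from the example configuration above
```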
!!! note
    Depending on the amount of smoothing and the data resolution, points near the domain boundaries will throw index errors. With the default settings, users should stay at least 5 grid points away from the poles (1° for ERA5 data), accounting for smoothing (4 points) and gradients (one extra point).
The data is generated by downloading from ECMWF; further documentation on downloading ERA5 data can be found both on the ECMWF pages and in `ClimaArtifacts.jl`. Note that the profiles, surface temperature, and surface fluxes cannot be obtained from a single request, so we need three files in total for all the data. We include a script at `src/utils/era5_observations_to_forcing_file.jl` which extracts the profiles and computes the tendencies needed for the simulation from the raw ERA5 reanalysis files. We store the observations directly into an artifact, `era5_hourly_atmos_processed`, to eliminate the need to reprocess specific sites and locations. This setup means that users are free to choose sites globally at any time at which ERA5 data is available. Unfortunately, global hourly reanalysis is too large to store in an artifact, so we currently only provide support for the first 5 days of July 2007 in the tropical Pacific, stored in `era5_hourly_atmos_raw`, which is only available on the `clima` and Caltech `HPC` servers.
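For example, a configuration inside the currently supported window might look like the following; the latitude and longitude values here are illustrative tropical-Pacific coordinates, not a specifically tested site.

```
start_date: "20070702"   # within the first 5 days of July 2007 covered by era5_hourly_atmos_raw
site_latitude: 10.0      # degrees, tropical Pacific (illustrative)
site_longitude: -155.0   # degrees (illustrative)
```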
#### Running the Reanalysis-driven case at different times and locations
You need three separate files with specific variables and a specific naming convention for the data processing script to work.
1. Hourly profiles with variables, following the ERA5 naming convention, including `t`, `q`, `u`, `v`, `w`, `z`, `clwc`, and `ciwc`. This file should be stored in the appropriate artifacts directory with the naming scheme `"forcing_and_cloud_hourly_profiles_$(start_date).nc"`, where `start_date` specifies the date the data starts on, formatted YYYYMMDD. We require the `clwc` and `ciwc` profiles because these are typical targets for calibration, even though they are not needed to run the simulation directly (a sketch for checking these variables is shown below).
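The following is a minimal sketch, assuming NCDatasets.jl is available, of how one might verify that a raw profiles file contains the required variables before processing; the `input_data_dir` path is a placeholder.

```julia
using NCDatasets

input_data_dir = "/path/to/era5_hourly_atmos_raw"  # placeholder for the raw-data directory
start_date = "20070701"
profiles_path = joinpath(input_data_dir, "forcing_and_cloud_hourly_profiles_$(start_date).nc")

required_vars = ("t", "q", "u", "v", "w", "z", "clwc", "ciwc")
NCDataset(profiles_path) do ds
    missing_vars = [v for v in required_vars if !haskey(ds, v)]
    isempty(missing_vars) ||
        error("Profiles file is missing variables: $(join(missing_vars, ", "))")
end
```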
Generate an external forcing file for multi-day single column model runs, reusing daily forcing files if they already exist.
# Arguments
- `parsed_args`: Dictionary containing simulation parameters, including `start_date`, `t_end`, `site_latitude`, and `site_longitude`
- `forcing_file_path`: Path where the concatenated forcing file will be saved
- `FT`: Floating point type for the simulation
# Keyword Arguments
- `smooth_amount`: Amount of spatial smoothing to apply (default: 4 points, i.e. 1° on each side for ERA5 data)
- `time_resolution`: Time resolution in seconds for accumulated variables (defined in ERA5 docs; 3600 for hourly data; 86400 for daily and monthly data)
- `input_data_dir`: Directory containing the raw ERA5 data files (defaults to the artifact directory)
- `output_data_dir`: Directory where individual daily forcing files are stored
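The docstring above does not show the function's name, so the call below uses a placeholder, `generate_multiday_external_forcing_file`, together with example argument values; it is only a sketch of how the documented signature might be used.

```julia
# Placeholder function name; the real name is defined alongside the docstring above.
parsed_args = Dict(
    "start_date" => "20070701",
    "t_end" => "2days",          # the format of `t_end` is an assumption
    "site_latitude" => 17.0,
    "site_longitude" => -149.0,
)
forcing_file_path = joinpath("output", "era5_forcing_20070701.nc")  # example output path

generate_multiday_external_forcing_file(
    parsed_args,
    forcing_file_path,
    Float32;                  # floating point type for the simulation
    smooth_amount = 4,        # default: 4 points, 1° on each side
    time_resolution = 3600,   # hourly ERA5 data
)
```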