This command sets up the configuration file. For more help on the command line arguments for `init`, see
```console
python probtest.py init --help
```
The `--template-name` argument can be used to specify the template from which the configuration file is created. One of the templates provided by probtest is `templates/ICON.jinja`, which is used as the default if no other template name is given. The init command replaces all placeholder values in the template with the values given as command line arguments. All other probtest commands can then read from the configuration file. The name of the configuration file to use is read from the `PROBTEST_CONFIG` environment variable. If this is not set explicitly, probtest will look for a file called `probtest.json` in the current directory.
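For example, to make subsequent probtest commands read from a configuration file other than `./probtest.json` (the path below is purely illustrative):

```shell
# Point probtest at a custom configuration file; all commands will read it.
export PROBTEST_CONFIG=/path/to/my_config.json
```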
The configuration file produced by `init` may not fit perfectly to where you want your probtest files to be. In that case, you can edit the file manually after creation. Alternatively, you can pass arguments to probtest commands on the command line; these take precedence over the defaults in the configuration file. For more help on the options of a specific command, see
```console
python probtest.py {command} --help
```
### Example: Check the output of an ICON experiment with a test build compared to a reference build
Objective: Run an `exp_name` ICON experiment with a test build and check whether the test output lies within a perturbed ensemble of the reference build. This is used in particular to validate a GPU build against a CPU reference.
Note: if the experiment does not generate all of the files listed in the `file-id`s above, you will receive a message that certain `file-id` patterns do not match any file. You can remove those files from the `file-id`s.
With everything set up properly, the chain of commands can be invoked to run the CPU reference binary (`run-ensemble`), generate the statistics files used for probtest comparisons (`stats`), and generate tolerances from these files (`tolerance`). Note the `--ensemble` option, which takes precedence over the default `False` from the configuration and makes probtest process the model output from each ensemble member generated by `run-ensemble`. These commands will generate a number of files:
- `stats_ref.csv`: contains the post-processed output from the unperturbed reference run
- `stats_{member_num}.csv`: contains the post-processed output from the perturbed reference runs (only needed temporarily to generate the tolerance file)
- `exp_name_tolerance.csv`: contains tolerance ranges computed from the stats files
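The chain that produces these files can be sketched roughly as follows. This is a sketch, not an exact recipe: the command names come from the text above, but the exact flag syntax (in particular the value form of `--ensemble`) and any additional required arguments depend on your configuration file.

```console
# Sketch only: run the perturbed reference ensemble, post-process it,
# and derive tolerances (the --ensemble value syntax is an assumption).
python probtest.py run-ensemble
python probtest.py stats --ensemble True
python probtest.py tolerance
```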
These can then be used to compare against the output of a test binary (usually a GPU binary).
For that, manually run the `exp_name.run` experiment with the test binary to produce the test output. To check whether the data from the test binary validate against the reference build, first run the experiment with the test build. Run your test simulation without probtest:
```console
cd icon-base-dir/test-build
sbatch run/exp_name.run
```
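Then use probtest to generate the stats file for this output. The sketch below is an assumption: the `stats` command and the `--model-output-dir` option are named in the text, but the output directory path is purely illustrative and depends on your setup.

```console
# Sketch only: post-process the test-binary output into a stats file.
# The model-output path is an illustrative assumption.
python probtest.py stats --model-output-dir icon-base-dir/test-build/experiments/exp_name
```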
Note how `--model-output-dir` is set to take precedence over the default, which points to the reference binary output, so that it now points to the test binary output. This command will generate the following file:
- `stats_exp_name.csv`: contains the post-processed model output of the test binary
Now all files needed to perform a probtest check are available: the reference file `stats_ref.csv`, the test file `stats_exp_name.csv`, as well as the tolerance range `exp_name_tolerance.csv`. Providing these files to `check` will perform the check:
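A sketch of the invocation (the `check` command and both input-file options are named below; whether the tolerance file is picked up from the configuration or must be passed explicitly is an assumption):

```console
# Sketch only: compare test stats against the reference within tolerances.
python probtest.py check --input-file-ref stats_ref.csv --input-file-cur stats_exp_name.csv
```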
Note that the reference (`--input-file-ref`) and test (`--input-file-cur`) stats files need to be set by command line arguments. This is because the defaults stored in the `ICON.jinja` template point to two files from the ensemble as a sanity check.
## Developing probtest
#### Testing with [pytest](https://docs.pytest.org/en/8.2.x/)
Our tests are executed using `pytest`, ensuring a consistent and efficient testing process. Each test dynamically generates its necessary test data, allowing for flexible and isolated testing scenarios. Reference data, crucial for validating the outcomes of our tests and detecting any deviations in probtest's results, is maintained in the [tests/data](tests/data) directory. This approach guarantees that our tests are both comprehensive and reliable, safeguarding the integrity of our codebase.
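To run the test suite locally (assuming all Python requirements, including `pytest`, are installed), the standard pytest invocation from the repository root should suffice:

```console
pytest
```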
### Formatting probtest source code
The probtest source code is formatted using multiple formatters. Please install the pre-commit hooks (after installing all Python requirements, including the `pre-commit` package):
```console
pre-commit install
```
This hook will be executed automatically whenever you commit. It will check your files and format them according to its rules. If files have to be formatted, committing will fail. Just stage and commit again to finalize the commit. You can also run the following command to trigger the pre-commit action without actually committing:

```console
pre-commit run --all-files
```
If you are using VSCode with the settings provided by this repository in `.vscode/settings.json`, formatting is already enabled on save.