Commit b035c85

iluise and mlangguth89 authored
integrate IFS scores from Quaver into FastEvaluation (ecmwf#600)
* first interface
* working version
* save json
* add omegaconf
* address comment and clean up interface
* add config
* update scoring class
* Fix to allow for channel-selection in get_data and efficiency improvement to plot_data.
* Avoid circular dependency issues with the to_list function.
* Fix data selection issues.
* Enable proper handling of lists from omegaconf.
* update to mlangguth89 fork
* refactor forecast step
* ruffed
* add printing summary
* add ZarrData class
* adjust size of the plots
* attempt to solve sorting issue
* Rename model to run in config and in code.
* Fixes to Michael's review comments.
* Ruffed code.
* resync with mlangguth89 + add plot titles
* revert mixed
* remove plot config + style addition to evaluation package
* ruffed
* add option to comment out plotting
* resync utils to develop

---------

Co-authored-by: Michael <m.langguth@fz-juelich.de>
1 parent: e1747c8

File tree

1 file changed: +6, −4 lines changed

packages/evaluate/src/weathergen/evaluate/plot_inference.py

Lines changed: 6 additions & 4 deletions
@@ -59,6 +59,7 @@
     scores_dict = defaultdict(lambda: defaultdict(dict))

     for run_id, run in runs.items():
+
         plotter = Plotter(cfg, run_id)
         _logger.info(f"RUN {run_id}: Getting data...")

@@ -68,11 +69,12 @@
             _logger.info(f"RUN {run_id}: Processing stream {stream}...")

             stream_dict = run["streams"][stream]
+
+            if stream_dict.get("plotting"):
+                _logger.info(f"RUN {run_id}: Plotting stream {stream}...")
+                plots = plot_data(cfg, run_id, stream, stream_dict)

-            _logger.info(f"RUN {run_id}: Plotting stream {stream}...")
-            plots = plot_data(cfg, run_id, stream, stream_dict)
-
-            if stream_dict.get("evaluation", None):
+            if stream_dict.get("evaluation"):
                 _logger.info(f"Retrieve or compute scores for {run_id} - {stream}...")

                 metrics_to_compute = []

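With this change, a stream is only plotted when its configuration contains a truthy "plotting" entry, so plotting can be switched off per stream by commenting that entry out (the "add option to comment out plotting" item in the commit message); evaluation is gated the same way via the "evaluation" entry. Below is a minimal, self-contained sketch of that gating logic, not the FastEvaluation implementation itself: the "plotting"/"evaluation" keys, the per-run stream loop, and the log messages mirror the diff, while the run id, stream names, and config values are illustrative assumptions.

# Minimal sketch of config-gated plotting/evaluation per stream, assuming an
# omegaconf config shaped like run -> streams -> stream settings. Run id,
# stream names, and config contents are made-up examples.
from omegaconf import OmegaConf

runs = OmegaConf.create(
    {
        "run_abc123": {  # hypothetical run id
            "streams": {
                "ERA5": {
                    "plotting": {"samples": 1},           # truthy -> stream gets plotted
                    "evaluation": {"metrics": ["rmse"]},  # truthy -> scores are computed
                },
                "SYNOP": {
                    # "plotting" entry commented out -> no plots for this stream
                    "evaluation": {"metrics": ["rmse", "bias"]},
                },
            }
        }
    }
)

for run_id, run in runs.items():
    for stream, stream_dict in run["streams"].items():
        if stream_dict.get("plotting"):
            print(f"RUN {run_id}: Plotting stream {stream}...")   # plot_data(...) in the real code
        if stream_dict.get("evaluation"):
            print(f"Retrieve or compute scores for {run_id} - {stream}...")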