Changes from 5 commits
36 changes: 16 additions & 20 deletions climada/engine/unsequa/input_var.py
@@ -246,9 +246,8 @@ def haz(haz_list, n_ev=None, bounds_int=None, bounds_frac=None, bounds_freq=None
The frequency of all events is multiplied by a number
sampled uniformly from a distribution with (min, max) = bounds_freq
HL: sample uniformly from hazard list
- From the provided list of hazard is elements are uniformly
- sampled. For example, Hazards outputs from dynamical models
- for different input factors.
+ Uniformly sample one element from the provided list of hazards.
+ For example, Hazards outputs from dynamical models for different input factors.
Member:
This would be incorrect phrasing. In general, more than one sample is drawn in total, although for each global sample, one element is chosen. Maybe:

Hazard is uniformly sampled from the provided list of hazards.

Collaborator (Author):

I have rephrased so that this is clearer.


If a bounds is None, this parameter is assumed to have no uncertainty.

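The behaviour debated in this thread — one hazard chosen per global sample, many draws in total — can be sketched as follows. This is an illustrative stand-in, not the CLIMADA implementation; `sample_hl` and the string hazards are hypothetical names.

```python
import random

# Stand-ins for Hazard objects, e.g. outputs from different dynamical-model runs.
haz_list = ["haz_model_a", "haz_model_b", "haz_model_c"]

def sample_hl(haz_list, hl):
    # HL is a continuous uncertainty parameter on [0, len(haz_list));
    # truncating it to an integer selects one list element, so each
    # element is chosen with equal probability.
    return haz_list[int(hl)]

# For each global sample, one element is chosen; over many samples,
# more than one element is drawn in total.
hl = random.uniform(0, len(haz_list))
chosen = sample_hl(haz_list, hl)
```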
@@ -310,8 +309,8 @@ def exp(exp_list, bounds_totval=None, bounds_noise=None):
with (min, max) = bounds_noise. EN is the value of the seed
for the uniform random number generator.
EL: sample uniformly from exposure list
- From the provided list of exposure is elements are uniformly
- sampled. For example, LitPop instances with different exponents.
+ Uniformly sample one element from the provided list of exposures.
+ For example, LitPop instances with different exponents.

If a bounds is None, this parameter is assumed to have no uncertainty.

@@ -376,9 +375,8 @@ def impfset(
sampled uniformly from a distribution with
(min, max) = bounds_int
IL: sample uniformly from impact function set list
- From the provided list of impact function sets elements are uniformly
- sampled. For example, impact functions obtained from different
- calibration methods.
+ Uniformly sample one element from the provided list of impact function sets.
+ For example, impact functions obtained from different calibration methods.


If a bounds is None, this parameter is assumed to have no uncertainty.
@@ -468,8 +466,8 @@ def ent(
with (min, max) = bounds_noise. EN is the value of the seed
for the uniform random number generator.
EL: sample uniformly from exposure list
- From the provided list of exposure is elements are uniformly
- sampled. For example, LitPop instances with different exponents.
+ Uniformly sample one element from the provided list of exposures.
+ For example, LitPop instances with different exponents.
MDD: scale the mdd (homogeneously)
The value of mdd at each intensity is multiplied by a number
sampled uniformly from a distribution with
@@ -483,9 +481,8 @@ def ent(
sampled uniformly from a distribution with
(min, max) = bounds_int
IL: sample uniformly from impact function set list
- From the provided list of impact function sets elements are uniformly
- sampled. For example, impact functions obtained from different
- calibration methods.
+ Uniformly sample one element from the provided list of impact function sets.
+ For example, impact functions obtained from different calibration methods.

If a bounds is None, this parameter is assumed to have no uncertainty.

@@ -566,7 +563,7 @@ def ent(
bounds_noise=bounds_noise,
exp_list=exp_list,
meas_set=meas_set,
- **kwargs
+ **kwargs,
),
_ent_unc_dict(
bounds_totval=bounds_totval,
@@ -616,8 +613,8 @@ def entfut(
with (min, max) = bounds_noise. EN is the value of the seed
for the uniform random number generator.
EL: sample uniformly from exposure list
- From the provided list of exposure is elements are uniformly
- sampled. For example, LitPop instances with different exponents.
+ Uniformly sample one element from the provided list of exposures.
+ For example, LitPop instances with different exponents.
MDD: scale the mdd (homogeneously)
The value of mdd at each intensity is multiplied by a number
sampled uniformly from a distribution with
@@ -631,9 +628,8 @@ def entfut(
sampled uniformly from a distribution with
(min, max) = bounds_impfi
IL: sample uniformly from impact function set list
- From the provided list of impact function sets elements are uniformly
- sampled. For example, impact functions obtained from different
- calibration methods.
+ Uniformly sample one element from the provided list of impact function sets.
+ For example, impact functions obtained from different calibration methods.

If a bounds is None, this parameter is assumed to have no uncertainty.

@@ -706,7 +702,7 @@ def entfut(
impf_set_list=impf_set_list,
exp_list=exp_list,
meas_set=meas_set,
- **kwargs
+ **kwargs,
),
_entfut_unc_dict(
bounds_eg=bounds_eg,
69 changes: 36 additions & 33 deletions doc/user-guide/climada_engine_unsequa.ipynb
@@ -12,7 +12,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "This is a tutorial for the unsequa module in CLIMADA. A detailled description can be found in [Kropf (2021)](https://eartharxiv.org/repository/view/3123/)."
+ "This is a tutorial for the unsequa module in CLIMADA. A detailed description can be found in [Kropf et al. (2022)](https://doi.org/10.5194/gmd-15-7177-2022)."
]
},
{
@@ -31,7 +31,7 @@
"\n",
"In this module, it is possible to perform global uncertainty analysis, as well as a sensitivity analysis. The word global is meant as opposition to the 'one-factor-at-a-time' (OAT) strategy. The OAT strategy, which consists in analyzing the effect of varying one model input factor at a time while keeping all other fixed, is popular among modellers, but has major shortcomings [Saltelli (2010)](https://www.sciencedirect.com/science/article/abs/pii/S1364815210001180), [Saltelli(2019)](http://www.sciencedirect.com/science/article/pii/S1364815218302822) and should not be used.\n",
"\n",
- "A rough schemata of how to perform uncertainty and sensitivity analysis (taken from [Kropf(2021)](https://eartharxiv.org/repository/view/3123/))"
+ "A rough schema of how to perform uncertainty and sensitivity analysis (taken from [Kropf et al. (2022)](https://doi.org/10.5194/gmd-15-7177-2022))."
]
},
{
@@ -50,7 +50,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "1. [Kropf, C.M. et al. Uncertainty and sensitivity analysis for global probabilistic weather and climate risk modelling: an implementation in the CLIMADA platform (2021)](https://eartharxiv.org/repository/view/3123/)\n",
+ "1. [Kropf, C.M. et al. Uncertainty and sensitivity analysis for probabilistic weather and climate-risk modelling: an implementation in CLIMADA v.3.1.0. Geoscientific Model Development, 15, 7177–7201 (2022)](https://doi.org/10.5194/gmd-15-7177-2022).\n",
"2. [Pianosi, F. et al. Sensitivity analysis of environmental models: A systematic review with practical workflow. Environmental Modelling & Software 79, 214–232 (2016)](https://www.sciencedirect.com/science/article/pii/S1364815216300287).\n",
"3. [Douglas-Smith, D., Iwanaga, T., Croke, B. F. W. & Jakeman, A. J. Certain trends in uncertainty and sensitivity analysis: An overview of software tools and techniques. Environmental Modelling & Software 124, 104588 (2020)](https://doi.org/10.1007/978-1-4899-7547-8_5)\n",
"4. [Knüsel, B. Epistemological Issues in Data-Driven Modeling in Climate Research. (ETH Zurich, 2020)](https://www.research-collection.ethz.ch/handle/20.500.11850/399735)\n",
@@ -542,12 +542,12 @@
"source": [
"| Attribute | Type | Description |\n",
"| --- | --- | --- |\n",
- "| sampling_method | str | The sampling method as defined in [SALib](https://salib.readthedocs.io/en/latest/api.html). Possible choices: 'saltelli', 'fast_sampler', 'latin', 'morris', 'dgsm', 'ff'|\n",
+ "| sampling_method | str | The sampling method as defined in [SALib](https://salib.readthedocs.io/en/latest/api.html). Possible choices: 'saltelli', 'fast_sampler', 'latin', 'morris', 'dgsm', 'ff', 'finite_diff'|\n",
"| sampling_kwargs | dict | Keyword arguments for the sampling_method. |\n",
"| n_samples | int | Effective number of samples (number of rows of samples_df)|\n",
"| param_labels | list(str) | Name of all the uncertainty input parameters|\n",
"| problem_sa | dict | The description of the uncertainty variables and their distribution as used in [SALib](https://salib.readthedocs.io/en/latest/basics.html). |\n",
- "| sensitivity_method | str | Sensitivity analysis method from [SALib.analyse](https://salib.readthedocs.io/en/latest/api.html) Possible choices: 'fast', 'rbd_fact', 'morris', 'sobol', 'delta', 'ff'. Note that in Salib, sampling methods and sensitivity analysis methods should be used in specific pairs.|\n",
+ "| sensitivity_method | str | Sensitivity analysis method from [SALib.analyse](https://salib.readthedocs.io/en/latest/api.html) Possible choices: 'sobol', 'fast', 'rbd_fast', 'morris', 'dgsm', 'ff', 'pawn', 'rhdm', 'rsa', 'discrepancy', 'hdmr'. Note that in Salib, sampling methods and sensitivity analysis methods should be used in specific pairs.|\n",
Member:
I do not see it in this PR, but were the allowed pairing options updated in the respective .py files?

Collaborator (Author):
Yes, they are up to date. I think this was done in the unsequa update PR but I forgot to update this tutorial back then.

"| sensitivity_kwargs | dict | Keyword arguments for sensitivity_method. |\n",
"| unit | str | Unit of the exposures value |"
]
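As a reminder of what the variance-based ('sobol'-type) methods in the table estimate, here is a minimal numpy sketch of a first-order sensitivity index, deliberately independent of SALib. The toy model and the quantile-binning estimator are illustrative assumptions, not the SALib implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy model with two independent uniform inputs: y = x1 + 2*x2.
x1 = rng.uniform(0, 1, n)
x2 = rng.uniform(0, 1, n)
y = x1 + 2.0 * x2

# First-order index S1 of x1: the share of output variance explained by
# x1 alone, Var(E[y | x1]) / Var(y). Analytically (1/12) / (5/12) = 0.2.
nbins = 50
edges = np.quantile(x1, np.linspace(0, 1, nbins + 1))
idx = np.clip(np.digitize(x1, edges) - 1, 0, nbins - 1)
cond_means = np.array([y[idx == b].mean() for b in range(nbins)])
s1 = cond_means.var() / y.var()  # close to the analytic value 0.2
```

SALib's paired samplers and analyzers (e.g. Sobol' sequence sampling with `SALib.analyze.sobol`) compute the same quantity with purpose-built estimators.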
@@ -2466,7 +2466,7 @@
},
{
"cell_type": "code",
- "execution_count": 51,
+ "execution_count": null,
"metadata": {},
"outputs": [],
"source": [
@@ -2475,10 +2475,10 @@
"haz.basin = [\"NA\"] * haz.size\n",
"\n",
"# apply climate change factors\n",
- "haz_26 = haz.apply_climate_scenario_knu(ref_year=2050, rcp_scenario=26)\n",
- "haz_45 = haz.apply_climate_scenario_knu(ref_year=2050, rcp_scenario=45)\n",
- "haz_60 = haz.apply_climate_scenario_knu(ref_year=2050, rcp_scenario=60)\n",
- "haz_85 = haz.apply_climate_scenario_knu(ref_year=2050, rcp_scenario=85)\n",
+ "haz_26 = haz.apply_climate_scenario_knu(target_year=2050, scenario=\"2.6\")\n",
+ "haz_45 = haz.apply_climate_scenario_knu(target_year=2050, scenario=\"4.5\")\n",
+ "haz_60 = haz.apply_climate_scenario_knu(target_year=2050, scenario=\"6.0\")\n",
+ "haz_85 = haz.apply_climate_scenario_knu(target_year=2050, scenario=\"8.5\")\n",
"\n",
"# pack future hazard sets into dictionary - we want to sample from this dictionary later\n",
"haz_fut_list = [haz_26, haz_45, haz_60, haz_85]\n",
@@ -2489,7 +2489,7 @@
},
{
"cell_type": "code",
- "execution_count": 52,
+ "execution_count": null,
"metadata": {},
"outputs": [],
"source": [
@@ -2501,7 +2501,7 @@
"\n",
"def exp_base_func(x_exp, exp_base):\n",
" exp = exp_base.copy()\n",
- " exp.gdf[\"value\"] *= x_exp\n",
+ " exp.data[\"value\"] *= x_exp\n",
" return exp\n",
"\n",
"\n",
@@ -2821,7 +2821,7 @@
},
{
"cell_type": "code",
- "execution_count": 61,
+ "execution_count": null,
"metadata": {
"ExecuteTime": {
"end_time": "2023-08-03T12:00:12.180767Z",
@@ -2844,7 +2844,7 @@
"\n",
" entity = Entity.from_excel(ENT_DEMO_TODAY)\n",
" entity.exposures.ref_year = 2018\n",
- " entity.exposures.gdf[\"value\"] *= x_ent\n",
+ " entity.exposures.data[\"value\"] *= x_ent\n",
" return entity\n",
"\n",
"\n",
@@ -2954,7 +2954,7 @@
},
{
"cell_type": "code",
- "execution_count": 64,
+ "execution_count": null,
"metadata": {
"ExecuteTime": {
"end_time": "2023-08-03T12:00:12.959984Z",
@@ -3070,7 +3070,7 @@
],
"source": [
"ent_avg = ent_today_iv.evaluate()\n",
- "ent_avg.exposures.gdf.head()"
+ "ent_avg.exposures.data.head()"
]
},
{
@@ -5320,7 +5320,7 @@
},
{
"cell_type": "code",
- "execution_count": 77,
+ "execution_count": null,
"metadata": {},
"outputs": [],
"source": [
@@ -5335,7 +5335,7 @@
"\n",
"def exp_func(cnt, x_exp, exp_list=exp_list):\n",
" exp = exp_list[int(cnt)].copy()\n",
- " exp.gdf[\"value\"] *= x_exp\n",
+ " exp.data[\"value\"] *= x_exp\n",
" return exp\n",
"\n",
"\n",
@@ -5523,7 +5523,7 @@
"source": [
"Loading Hazards or Exposures from file is a rather lengthy operation. Thus, we want to minimize the reading operations, ideally reading each file only once. Simultaneously, Hazard and Exposures can be large in memory, and thus we would like to have at most one of each loaded at a time. Thus, we do not want to use the list capacity from the helper method InputVar.exposures and InputVar.hazard.\n",
"\n",
- "For demonstration purposes, we will use below as exposures files the litpop for three countries, and for tha hazard files the winter storms for the same three countries. Note that this does not make a lot of sense for an uncertainty analysis. For your use case, please replace the set of exposures and/or hazard files with meaningful sets, for instance sets of exposures for different resolutions or hazards for different model runs.\n"
+ "For demonstration purposes, we will use below as exposures files the litpop for three countries, and for the hazard files the winter storms for the same three countries. Note that this does not make a lot of sense for an uncertainty analysis. For your use case, please replace the set of exposures and/or hazard files with meaningful sets, for instance sets of exposures for different resolutions or hazards for different model runs.\n"
]
},
{
@@ -5600,17 +5600,18 @@
"def exp_func(f_exp, x_exp, filename_list=f_exp_list):\n",
" filename = filename_list[int(f_exp)]\n",
" global exp_base\n",
- " if \"exp_base\" in globals():\n",
- " if isinstance(exp_base, Exposures):\n",
- " if exp_base.gdf[\"filename\"] != str(filename):\n",
- " exp_base = Exposures.from_hdf5(filename)\n",
- " exp_base.gdf[\"filename\"] = str(filename)\n",
+ " if (\n",
+ " \"exp_base\" in globals()\n",
+ " and isinstance(exp_base, Exposures)\n",
+ " and exp_base.description == str(filename)\n",
+ " ):\n",
+ " pass  # if correct file is already loaded in memory, we do not need to reload it\n",
+ " else:\n",
" exp_base = Exposures.from_hdf5(filename)\n",
- " exp_base.gdf[\"filename\"] = str(filename)\n",
+ " exp_base.description = str(filename)\n",
"\n",
" exp = exp_base.copy()\n",
- " exp.gdf[\"value\"] *= x_exp\n",
+ " exp.data[\"value\"] *= x_exp\n",
" return exp\n",
"\n",
"\n",
@@ -5624,14 +5625,16 @@
"def haz_func(f_haz, i_haz, filename_list=f_haz_list):\n",
" filename = filename_list[int(f_haz)]\n",
" global haz_base\n",
- " if \"haz_base\" in globals():\n",
- " if isinstance(haz_base, Hazard):\n",
- " if haz_base.filename != str(filename):\n",
- " haz_base = Hazard.from_hdf5(filename)\n",
- " haz_base.filename = str(filename)\n",
+ " if (\n",
+ " \"haz_base\" in globals()\n",
+ " and isinstance(haz_base, Hazard)\n",
+ " and hasattr(haz_base, \"description\")\n",
+ " and haz_base.description == str(filename)\n",
+ " ):\n",
+ " pass\n",
+ " else:\n",
" haz_base = Hazard.from_hdf5(filename)\n",
- " haz_base.filename = str(filename)\n",
+ " setattr(haz_base, \"description\", str(filename))\n",
"\n",
" haz = copy.deepcopy(haz_base)\n",
" haz.intensity *= i_haz\n",
@@ -5707,7 +5710,7 @@
"source": [
"# Ordering of the samples by hazard first and exposures second\n",
"output_imp = calc_imp.make_sample(N=2**2, sampling_kwargs={\"skip_values\": 2**3})\n",
- "output_imp.order_samples(by=[\"f_haz\", \"f_exp\"])"
+ "output_imp.order_samples(by_parameters=[\"f_haz\", \"f_exp\"])"
]
},
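Ordering the samples "by hazard first and exposures second" can be sketched in plain Python. The sample dictionaries below are illustrative; only the parameter names `f_haz`/`f_exp` mirror the tutorial.

```python
# Sort samples so that all rows with the same f_haz are contiguous, and
# within each f_haz group the f_exp values are contiguous too.
samples = [
    {"f_haz": 1, "f_exp": 0, "x_exp": 0.9},
    {"f_haz": 0, "f_exp": 1, "x_exp": 1.1},
    {"f_haz": 1, "f_exp": 1, "x_exp": 1.0},
    {"f_haz": 0, "f_exp": 0, "x_exp": 0.95},
]
ordered = sorted(samples, key=lambda s: (s["f_haz"], s["f_exp"]))
# Grouping like this minimizes how often a new hazard (and, within each
# hazard group, a new exposures) file must be loaded when the samples are
# evaluated in order.
```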
{