Dear all,

I’ve been working with phased execution (pre_run, run, post_run) for global sensitivity analysis and encountered an issue when executing the post_run phase independently.
The analysis uses a quasi-Monte Carlo method with a Halton sequence and variance-based decomposition (VBD) enabled. The problem appears only when the phases are executed separately with VBD active.
To clarify, here’s an example that corresponds to the error:
Example:
Number of variables (M) = 2
Number of samples (N) = 5
Dakota generates N × (M + 2) = 5 × (2 + 2) = 20 total input samples
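The sample budget above follows Saltelli's scheme for variance-based decomposition, which scales as N × (M + 2). A minimal Python sketch of that arithmetic (the function name is mine, just for illustration; M and N are this example's values):

```python
def saltelli_sample_count(num_vars: int, num_samples: int) -> int:
    """Total model evaluations required by Saltelli's VBD scheme:
    N base samples plus N resamples for each of the M variables,
    i.e. N * (M + 2)."""
    return num_samples * (num_vars + 2)

M, N = 2, 5  # this example's variable and sample counts
print(saltelli_sample_count(M, N))  # 5 * (2 + 2) = 20
```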
The following error occurs when running only the post_run phase:
Error in Analyzer::compute_vbd_stats_with_Saltelli(): expected 20 responses; received 5
When all three phases (pre_run, run, and post_run) are executed together in a single Dakota input file with one analysis driver, everything works correctly: the output table is generated as expected (all 20 simulations) and the Sobol indices are estimated.
Stepping through the phased execution of the workflow, when I run the post_run phase using the existing output table (produced by the previous run phase), Dakota fails with the error above. The output file is present, correctly formatted, and all paths and filenames are consistent with the full-phase execution.
Each phase in my setup is configured as follows:
pre_run: Generates the input parameter permutations and writes them to file.
run: Prepares the model inputs, executes the simulation model for each input permutation, and writes the postprocessed output data.
post_run: Estimates the Sobol indices from the postprocessed data in the output table.
This leads me to believe that the post_run phase may not be correctly resolving the output table independently when VBD is enabled.
I've attached the Dakota input file (02_sa.in.txt) used for the post_run phase and the output table produced by the run phase (dndc_corn.dat.txt).
To run this example, ensure all files are in the same directory and then follow these steps:
Download the two attached files (02_sa.in.txt and dndc_corn.dat.txt)
Create an empty file named 01_run_drive.py
Execute Dakota using the command: dakota -i 02_sa.in.txt
Any insights or suggestions would be greatly appreciated!
Best regards,
Francesco
Attachments: 02_sa.in.txt, dndc_corn.dat.txt