I am attempting to run Dakota in serial mode, using a sampling method to evaluate function responses. Each evaluation requires two codes to be run in parallel. Each code needs only one processor, but the two codes must sit on separate processors for the coupling to work properly. Normally, a single run of these two coupled codes would be invoked with a command of the form:
srun --multi-prog -n 2 -K1 -W1 srun.conf
where srun.conf designates the command to be run on each processor (i.e., code_execution_path args).
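For illustration, a two-task srun.conf for this kind of coupled pair would look something like the following (the executable paths and input arguments here are placeholders, not my actual codes):

0 /path/to/code_A input_A.dat
1 /path/to/code_B input_B.dat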
Obviously, I can't use this command directly in Dakota, since Dakota needs to be able to choose which processors each evaluation runs on. Therefore, I've been using a multi-instruction mpirun command in my analysis_driver file to invoke each evaluation, using this evaluation tiling example as a template. The run command in my driver.py file takes the form:
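Schematically, it follows the multi-instruction (MPMD) mpirun pattern from the tiling example, along these lines (the executable paths are placeholders, and I've left out the relative host/tile placement arguments the example adds on top):

mpirun -np 1 /path/to/code_A input_A.dat : -np 1 /path/to/code_B input_B.dat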
I've tried running with both dynamic and static scheduling configurations. The interface section of my Dakota input file looks like this:
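In outline, it follows the tiling example's interface block, something like this (the file names and concurrency value are placeholders):

interface
  fork
    analysis_drivers = 'driver.py'
    parameters_file  = 'params.in'
    results_file     = 'results.out'
    file_tag
    file_save
  asynchronous
    evaluation_concurrency = 9            # placeholder value
    local_evaluation_scheduling static    # also tried dynamic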
The job submission script to our HPC looks like so:
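In outline (the job name, module name, and task counts are placeholders):

#!/bin/bash
#SBATCH --job-name=dakota_coupled    # placeholder name
#SBATCH --nodes=1                    # placeholder allocation
#SBATCH --ntasks=18                  # placeholder: 2 tasks per evaluation
#SBATCH --time=02:00:00

module load dakota                   # site-specific module name

dakota -input dakota_study.in -output dakota_study.out -error errfile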
When I submit the job script to the HPC, the Dakota pre-processing of the input decks for each evaluation code works fine, but then the following is written to errfile, repeated over 18 lines:

srun: error: Unable to create step for job xxxxxx: Requested node configuration is not available
I think the problem lies in the invocation command I'm using within my analysis driver file, so my question is: can you see anything obviously wrong with my inputs here?