Dakota complains about analytic gradients when specified #143
-
Using Dakota v6.18, I have this very simple problem that I wish to run:
which runs the following Python script:

    from sys import argv


    def parabola(x: float) -> tuple[float, float]:
        # Return the objective value and its analytic gradient.
        return (x - 2.2) ** 2 + 1.6, 2 * (x - 2.2)


    if __name__ == "__main__":
        input_file = argv[1]
        output_file = argv[2]

        # Read the Dakota parameters file: "<count> variables", then one
        # "<value> <descriptor>" line per variable.
        with open(input_file, 'r') as fin:
            num_vars = int(fin.readline().split()[0])
            input_vars = {}
            for _ in range(num_vars):
                value, name = fin.readline().split()
                input_vars[name] = float(value)

        # Write the Dakota results file: the function value, then the gradient.
        with open(output_file, 'w') as fout:
            output = parabola(**input_vars)
            fout.write(f"{output[0]} objective_function_1\n")
            fout.write(f"[ {output[1]} ]")

but then, when I run it, Dakota complains.
Why is Dakota expecting zero gradients?
-
Dakota has an optimization called the active set vector (ASV). On each evaluation it asks your driver only for the information that the method (optpp_q_newton here) currently needs, which can be any combination of the function value, the gradient, and the Hessian of each response.
See here for more information.
You may want to disable the active set vector using the interface keyword

deactivate active_set_vector

which will cause Dakota to request and expect everything you said your driver can return (function values and analytic gradients) for every evaluation.
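Alternatively, the driver itself can honor the ASV. The sketch below is not from this thread and is only a guess at how the posted script could be adapted; it assumes Dakota's default annotated parameters-file format, in which a "functions" block follows the variables block and each of its lines begins with an ASV code (1 = value requested, 2 = gradient, 4 = Hessian, summed for combinations, e.g. 3 = value plus gradient).

```python
from sys import argv


def parabola(x: float) -> tuple[float, float]:
    # Objective value and its analytic gradient.
    return (x - 2.2) ** 2 + 1.6, 2 * (x - 2.2)


if __name__ == "__main__":
    input_file, output_file = argv[1], argv[2]

    with open(input_file, "r") as fin:
        # Variables block: "<count> variables", then "<value> <descriptor>" lines.
        num_vars = int(fin.readline().split()[0])
        input_vars = {}
        for _ in range(num_vars):
            value, name = fin.readline().split()
            input_vars[name] = float(value)

        # Functions block (assumed default format): "<count> functions",
        # then one "<asv_code> ASV_i:<label>" line per response.
        num_fns = int(fin.readline().split()[0])
        asv = [int(fin.readline().split()[0]) for _ in range(num_fns)]

    value, gradient = parabola(**input_vars)

    with open(output_file, "w") as fout:
        # Write only what this evaluation's ASV asked for; Dakota rejects
        # results files with extra or missing data.
        if asv[0] & 1:
            fout.write(f"{value} objective_function_1\n")
        if asv[0] & 2:
            fout.write(f"[ {gradient} ]\n")
```

With that in place, the gradient line is written only on evaluations where Dakota actually requested it, so the results file always matches the active set and the complaint about gradients should go away whether or not the active set vector is deactivated.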