Multi-fidelity BO #892
-
Hi, I am doing multi-fidelity BO using the Knowledge Gradient acquisition function. I am running 100 trials and recommending a new point every 10 trials. What I notice is that the recommendations don't necessarily get better (or approach the optimum) over the trials. Is there an explanation for this? Since the recommendation is based on maximizing the posterior mean, does this mean I should be looking at the performance of the GP in order to improve the accuracy?
Replies: 1 comment
-
That's a great place to start. KG is more susceptible to model mismatch than other acquisition functions (it looks ahead, so it "trusts" the model more), so performance can suffer quite a bit if the model fit is poor. The other thing to make sure of is that the recommendations are obtained as the maximizer of the posterior mean, not as the maximizer of the KG acquisition function.
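As a reference point, here is a minimal sketch of how such a recommendation step could look in a recent BoTorch version: fix the fidelity feature to the target fidelity and maximize the posterior mean, rather than reusing the KG maximizer. The model, data, column layout (fidelity as the last input column), and all parameter values below are illustrative assumptions, not taken from the original post; a real multi-fidelity setup would typically use a dedicated multi-fidelity GP instead of the `SingleTaskGP` stand-in.

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from gpytorch.mlls import ExactMarginalLogLikelihood
from botorch.acquisition import PosteriorMean
from botorch.acquisition.fixed_feature import FixedFeatureAcquisitionFunction
from botorch.optim import optimize_acqf

d = 2  # number of design dimensions (assumption); the last input column is the fidelity
# Synthetic training data for illustration only: [design dims | fidelity]
train_X = torch.rand(20, d + 1, dtype=torch.double)
train_Y = train_X[:, :d].sum(dim=-1, keepdim=True) + 0.1 * torch.randn(20, 1, dtype=torch.double)

# Stand-in surrogate; a real multi-fidelity setup would use a multi-fidelity GP here.
model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

# Recommendation step: maximize the posterior mean with the fidelity fixed to the
# target fidelity, instead of taking the maximizer of the KG acquisition function.
pm_at_target = FixedFeatureAcquisitionFunction(
    acq_function=PosteriorMean(model),
    d=d + 1,       # total input dimension, including the fidelity column
    columns=[d],   # index of the fidelity column (assumed to be last)
    values=[1.0],  # evaluate recommendations at the highest fidelity
)
bounds = torch.stack(
    [torch.zeros(d, dtype=torch.double), torch.ones(d, dtype=torch.double)]
)
recommendation, _ = optimize_acqf(
    acq_function=pm_at_target,
    bounds=bounds,  # bounds over the design dimensions only (fidelity is fixed)
    q=1,
    num_restarts=10,
    raw_samples=256,
)
print(recommendation)
```

The key point is that `optimize_acqf` is run on `PosteriorMean` (at the target fidelity), which matches the recommendation rule described above; the KG acquisition function is only used to choose where to evaluate next, not to report the current best guess.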