Sounds like you're interested in an active-learning-type approach, where the goal is to learn the underlying function as well as possible rather than to optimize it. We do not have many acquisition functions defined for this setting, but qNegIntegratedPosteriorVariance might be interesting to you. If you're still interested in optimizing the function but want to put additional emphasis on exploring the whole space (which should improve global model accuracy), artificially inflating the predictive variance would likely help the acquisition functions focus a bit more on this. You could use UCB with a large beta, or augment the posterior variance of any other acquisition function using a custom …
I want to improve the global accuracy of the GP model, and I would like to know how I can apply BoTorch's routines to sequentially augment the experimental design points. I know BoTorch efficiently addresses the exploitation-exploration tradeoff, but I am rather interested in how I can efficiently reduce the exploration part. I appreciate any ideas and comments, and thank you in advance.