Replies: 4 comments 1 reply
-
If and when we look into new acquisition functions, these could serve as points of inspiration:
-
See also: scikit-optimize/scikit-optimize#460. And finally: remember that a greedy A-optimum is already implemented: scikit-optimize/scikit-optimize#432
-
These seem like some interesting new acquisition functions that could be looked into:
-
Suggested solution: change the acquisition functions so that they always return a tuple, with the second entry (the gradient) being None if they don't calculate a gradient.
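A minimal sketch of what that convention could look like. The function name, the `xi` parameter, and the dummy model are illustrative assumptions, not the actual ProcessOptimizer API; the point is only the `(values, grad)` return shape with `grad=None` when no analytic gradient is computed.

```python
import math
import numpy as np


def expected_improvement(X, model, y_opt=0.0, xi=0.01):
    """Hypothetical acquisition function following the proposed convention:
    always return a (values, grad) tuple, grad=None if not computed.

    `model` is any object with predict(X, return_std=True)."""
    mu, std = model.predict(X, return_std=True)
    improve = y_opt - xi - mu
    z = improve / std
    # Standard-normal CDF and PDF via math.erf, to stay dependency-free.
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    pdf = np.exp(-0.5 * z**2) / math.sqrt(2.0 * math.pi)
    ei = improve * cdf + std * pdf
    return ei, None  # no analytic gradient implemented


class DummyModel:
    """Stand-in surrogate: mean = first coordinate, std = 1 everywhere."""

    def predict(self, X, return_std=True):
        X = np.asarray(X, dtype=float)
        return X.ravel(), np.ones(X.shape[0])


values, grad = expected_improvement(np.array([[0.0], [1.0]]), DummyModel())
```

Callers can then unpack uniformly and check `grad is None` instead of special-casing which acquisition functions support gradients.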
-
Hi,
Our acquisition functions have implemented the ability to return the gradient (https://github.com/novonordisk-research/ProcessOptimizer/blob/762732e84e07859f9460903ffb62584d7059f052/ProcessOptimizer/acquisition.py#L117C1-L119C34). I do not see the necessity of returning said gradient. If I'm mistaken, then disregard this issue.
Alternatively, I propose that we change the acquisition functions (and the corresponding tests).
Without gradients, implementing new acquisition functions will be easier.
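For the caller side, one way to keep both proposals compatible is to fall back to a numerical gradient whenever an acquisition function returns `grad=None`. The sketch below is a generic gradient-descent loop, not ProcessOptimizer's actual optimizer; `acq_func`, `model`, and all parameter names are assumptions for illustration.

```python
import numpy as np


def minimize_acq(acq_func, x0, model, lr=0.1, steps=100, eps=1e-6):
    """Sketch of a caller honoring the (values, grad) convention:
    use the analytic gradient when provided, otherwise approximate it
    with forward finite differences."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        value, grad = acq_func(x[None, :], model)
        if grad is None:
            # Fallback: forward finite-difference gradient.
            grad = np.empty_like(x)
            for i in range(x.size):
                x_eps = x.copy()
                x_eps[i] += eps
                v_eps, _ = acq_func(x_eps[None, :], model)
                grad[i] = (v_eps[0] - value[0]) / eps
        else:
            grad = np.asarray(grad).ravel()
        x = x - lr * grad
    return x
```

With this fallback in place, a new acquisition function only has to return `(values, None)` to work, and an analytic gradient becomes an optional performance improvement rather than a requirement.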