Handling Equality Constraints in Ax Optimization #177
Quick question regarding the second assignment (multi-objective optimization). I have the equality constraint

```
EA + EB + EC + AA + AB + AC == 1
```

Since equality constraints aren't supported, I tried replacing it with two inequalities:

```
EA + EB + EC + AA + AB + AC <= 1
EA + EB + EC + AA + AB + AC >= 1
```

However, this approach didn't work: the algorithm exhausted the maximum number of allowed draws (10000) without finding even one valid candidate, i.e. it ran into a computational issue. To work around this and still enforce the original constraint, I fixed one of the variables by computing it from the others:

```python
EA = parameterization["EA"]
EB = parameterization["EB"]
EC = parameterization["EC"]
AA = parameterization["AA"]
AB = parameterization["AB"]
AC = total - (EA + EB + EC + AA + AB)  # Enforcing: EA + EB + EC + AA + AB + AC == total
```

My understanding is that by fixing AC, we are no longer optimizing this variable.
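For concreteness, the workaround can be sketched as a small helper. This is a minimal sketch: the parameter names follow the question, while the function name `complete_composition`, the `total` default, and the infeasibility guard are illustrative assumptions, not part of the Ax API.

```python
# Hypothetical sketch of the "hidden variable" workaround: search over five
# free parameters and derive the sixth so the sum constraint holds exactly.
def complete_composition(parameterization, total=1.0):
    """Derive AC so that EA + EB + EC + AA + AB + AC == total."""
    EA = parameterization["EA"]
    EB = parameterization["EB"]
    EC = parameterization["EC"]
    AA = parameterization["AA"]
    AB = parameterization["AB"]
    AC = total - (EA + EB + EC + AA + AB)
    if AC < 0.0:
        # The derived component fell below its lower bound. In practice this
        # case is excluded up front with a linear *inequality* constraint on
        # the free parameters: EA + EB + EC + AA + AB <= total.
        raise ValueError("infeasible: derived AC is negative")
    return {"EA": EA, "EB": EB, "EC": EC, "AA": AA, "AB": AB, "AC": AC}

composition = complete_composition(
    {"EA": 0.2, "EB": 0.1, "EC": 0.15, "AA": 0.25, "AB": 0.1}
)
print(sum(composition.values()))  # 1.0 up to floating-point rounding
```

Note that only the five free parameters carry bounds the optimizer sees; the guard on the derived value is what replaces AC's own box constraint.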
Replies: 1 comment
There are a few threads related to this that might help clarify. Part of the trouble is the optimization of the acquisition function: after the model is fit, finding the maximum of the acquisition function is its own hurdle.

The issue with an equality constraint is that the feasible region is a line in a plane, a polygon in 3D, a volume in 4D, etc. This makes it difficult for optimization algorithms (especially ones that rely on random sampling) to handle, because it's a needle-in-a-haystack problem - this is why you get the "random draws" error. You'd need to sample an infinite number of random points in a 3D space to get points that land exactly on a plane in that space.

When you use the "hidden variable" reparameterization into a linear inequality constraint, the search space gets distorted (e.g., an isosceles triangle in 3D turns into a right triangle in 2D), but you've removed that extra degree of freedom and usually haven't made any "wild" changes to the problem structure. There are some implications to this: facebook/Ax#727 (comment), facebook/Ax#903.

Likewise, there is no longer a length scale specific to the hidden variable, so from an interpretability or domain-knowledge perspective, that becomes a bit difficult. Similar to the "max draws" issue above, if the parameter you choose to hide has a very narrow range, the allowable search space becomes a very thin "slice" that is also hard to optimize over. In that sense, one could argue for hiding the variable with the largest range (e.g., aluminum for an aluminum alloy optimization), or a parameter whose interpretability someone cares less about (though that raises the question of whether the parameter should be included in the first place).

Supporting linear equality constraints isn't impossible - it just has to be handled carefully due to the issues mentioned above.
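The needle-in-a-haystack point can be demonstrated directly: uniform random draws in the unit cube essentially never land exactly on the plane x + y + z == 1, which is why rejection sampling exhausts its draw budget. This is a hypothetical sketch of that behavior in plain Python, not Ax's actual sampler:

```python
import random

random.seed(0)
hits = 0
for _ in range(10_000):
    x, y, z = (random.random() for _ in range(3))
    if x + y + z == 1.0:  # exact equality, as the constraint demands
        hits += 1
print(hits)  # 0: no draw satisfies the equality exactly
```

The two-inequalities trick (`<= 1` and `>= 1`) doesn't help here, since their intersection is that same measure-zero plane.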
For example, there is a way to transform an isosceles triangle in 3D into 2D without distorting it, but this needs to be done consistently, in a way the optimization algorithm can handle. Direct support of (true) linear equality constraints is a feature request on Ax. The priority might be lower because there's such a simple workaround.
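A minimal sketch of that undistorted transform, assuming the plane x + y + z == 1 in 3D: project points onto an orthonormal basis of the plane. Unlike simply dropping one coordinate (the "hidden variable" trick), this map preserves pairwise distances, i.e. it is an isometry.

```python
import math

# Two orthonormal vectors spanning the plane whose normal is (1, 1, 1).
u = (1 / math.sqrt(2), -1 / math.sqrt(2), 0.0)
v = (1 / math.sqrt(6), 1 / math.sqrt(6), -2 / math.sqrt(6))

def to_2d(p):
    """Coordinates of a 3D point in the (u, v) basis of the plane."""
    return (sum(pi * ui for pi, ui in zip(p, u)),
            sum(pi * vi for pi, vi in zip(p, v)))

# Two vertices of the simplex {x + y + z == 1, x, y, z >= 0}.
p1, p2 = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
d3 = math.dist(p1, p2)              # distance in the original 3D space
d2 = math.dist(to_2d(p1), to_2d(p2))  # distance after the 2D projection
print(round(d3, 6), round(d2, 6))   # equal: the map is distance-preserving
```

By contrast, dropping the z coordinate would map these vertices to (1, 0) and (0, 1), squashing the equilateral simplex face into a right triangle, which is the distortion mentioned above.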