Hello! I am getting familiar with your library. Currently I am running an optimization problem with integer parameters, using CMAwM initialized as follows:
```python
import numpy as np
from cmaes import CMAwM

dim = 3
bounds = np.tile([1, 1000], (dim, 1))  # each parameter is an integer in [1, 1000]
steps = np.ones(dim)                   # step of 1 marks every dimension as integer
optimizer = CMAwM(
    mean=np.array([500.0, 500.0, 500.0]),
    sigma=200.0,
    population_size=30,
    bounds=bounds,
    steps=steps,
)

evals = 0
while True:
    solutions = []
    for _ in range(optimizer.population_size):
        x_for_eval, x_for_tell = optimizer.ask()
        # opt_func is my black-box objective (not shown)
        value = opt_func([int(max(1, x)) for x in x_for_eval])
        evals += 1
        solutions.append((x_for_tell, value))
        if evals % 300 == 0:
            print(f"{evals:5d} {value:10.5f}")
    optimizer.tell(solutions)
    if optimizer.should_stop():
        break
```
The problem is convergence: at some point CMAwM has clearly reached the optimum, but none of the stopping criteria is able to detect it. Here is the last generation before I stopped it manually, where the columns are (iteration, loss : params):
```
1111 -0.018747633421025346 : 263.0,644.0,1.0
1112 -0.018748337067435318 : 263.0,757.0,1.0
1113 -0.018747687184231726 : 263.0,683.0,1.0
1114 -0.01875012061466767 : 263.0,970.0,1.0
1115 -0.018730090191708456 : 263.0,1000.0,1.0
1116 -0.01874865437954894 : 263.0,796.0,1.0
1117 0.09455581734628662 : 263.0,1000.0,10.0
1118 -0.018749928371640554 : 263.0,973.0,1.0
1119 -0.01875086049838118 : 263.0,947.0,1.0
1120 -0.01874757010203166 : 263.0,652.0,1.0
1121 -0.018750017606867325 : 263.0,892.0,1.0
1122 -0.018519725974662556 : 262.0,1000.0,1.0
1123 -0.018730090191708456 : 263.0,1000.0,1.0
1124 -0.018750851828410502 : 263.0,931.0,1.0
1125 -0.018749568457314694 : 263.0,872.0,1.0
1126 -0.018750609988362414 : 263.0,959.0,1.0
1127 -0.018748871262269175 : 263.0,828.0,1.0
1128 -0.018748023773340648 : 263.0,721.0,1.0
1129 -0.018730090191708456 : 263.0,1000.0,1.0
1130 -0.018750855477153848 : 263.0,948.0,1.0
1131 -0.018750867286653677 : 263.0,945.0,1.0
1132 -0.01874851260608938 : 263.0,493.0,1.0
1133 -0.018730090191708456 : 263.0,1000.0,1.0
1134 -0.018749694748425028 : 263.0,878.0,1.0
1135 -0.018730090191708456 : 263.0,1000.0,1.0
1136 -0.018750212772197267 : 263.0,900.0,1.0
1137 -0.018747848631223123 : 263.0,620.0,1.0
1138 -0.018750727966726312 : 263.0,921.0,1.0
1139 -0.018730090191708456 : 263.0,1000.0,1.0
1140 -0.018730090191708456 : 263.0,1000.0,1.0
```
which results in

```
>>> optimizer._funhist_values
array([-0.01875088,  0.67681892, -0.01875086,  0.12615959, -0.01875088,
        0.11907546, -0.01875087,  0.23133862, -0.01875087, -0.01874699,
       -0.01875087,  0.11549962, -0.01875088, -0.018543  , -0.01875087,
       -0.01854317, -0.01875088,  0.11178636, -0.01875058,  0.36232975,
       -0.01875062, -0.01854576, -0.01875088,  0.14818344, -0.01875087,
        0.09455582])
```
Here you can see that the optimum was reached a long time ago. I looked at the should_stop method (Line 485 in 291376e):

```python
def should_stop(self) -> bool:
```
A single entry in _funhist_values that deviates on even one of the evaluations prevents the optimizer from stopping. Another problem is that the algorithm tries the same points many times (I see at least 9 evaluations of one combination of parameters). Looking at the solutions, it's obvious the algorithm in fact makes continuous guesses, which is probably the reason for the many identical integer x_for_eval combinations. Waiting tens of generations until all estimates land on the same point and no point in the previous n generations was further than _tolfun from the optimum seems very inefficient to me, and so does evaluating the function at the same point multiple times.
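For my own understanding, here is the first criterion paraphrased as standalone code (this is my reading of the source, not the library's exact implementation; if I read tell() correctly, _funhist_values stores the best and the median value of each recent generation):

```python
import numpy as np

def tolfun_satisfied(funhist_values: np.ndarray, tolfun: float) -> bool:
    # All best/median values recorded over the recent history window must
    # lie within tolfun of each other before the optimizer may stop, so a
    # single outlier generation blocks stopping for a whole window.
    return float(np.max(funhist_values) - np.min(funhist_values)) < tolfun
```

With the history above, entries like 0.67681892 keep the max-min range near 0.7, so the criterion stays false even though the best values have been stable around -0.018751 for many generations.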
Am I doing something wrong? What is the best course of action in my case? Regarding the stopping criteria, I guess I should just write my own criterion; see the sketch below.
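Something along these lines is what I have in mind (a sketch; tol and patience are numbers I would tune for my problem, not library parameters):

```python
import numpy as np

class BestValueStagnation:
    """Stop once the best value seen has not improved by more than
    tol for patience consecutive generations."""

    def __init__(self, tol: float = 1e-6, patience: int = 10) -> None:
        self.tol = tol
        self.patience = patience
        self.best = np.inf
        self.stale = 0

    def update(self, generation_best: float) -> bool:
        if generation_best < self.best - self.tol:
            # Meaningful improvement: remember it and reset the counter.
            self.best = generation_best
            self.stale = 0
        else:
            self.stale += 1
        return self.stale >= self.patience
```

In the loop above I would create stagnation = BestValueStagnation() before the loop and then break on stagnation.update(min(v for _, v in solutions)) right after optimizer.tell(solutions).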
Isn't this behavior of the first should_stop criterion basically making it very unreliable, though? I see that some other criteria depend on sigma, so with a better sigma they may work, but I don't want to make assumptions about sigma.
Also, is there a way to avoid evaluating the function at the same point many times? I can make a dict and feed cached values to the algorithm, but I would expect the algorithm to know that it has already tried this point and take care of it for me. Thank you for your attention!
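P.S. To be concrete, the dict workaround I mean is something like this (cached_opt_func is just a name I made up; opt_func is my objective from the snippet above):

```python
cache: dict[tuple[int, ...], float] = {}

def cached_opt_func(x_for_eval) -> float:
    # Snap to the integer grid first, so every continuous guess that maps
    # to the same integer point reuses the stored value instead of
    # re-evaluating the objective.
    key = tuple(int(max(1, x)) for x in x_for_eval)
    if key not in cache:
        cache[key] = opt_func(list(key))
    return cache[key]
```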