DO NOT MERGE: An attempt at restarting the controller manager without restarting the whole operator binary #1117
Conversation
…e whole operator binary + a very nasty way of seeing if it works - the controller manager should restart every minute :)
Skipping CI for Draft Pull Request.
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: metlos. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Thanks a lot for this experimental PR, it looks very promising.
debug("doStart: about to invoke mgr.Start()") | ||
if startErr := mgr.Start(s.runningContext); startErr != nil { | ||
debug("doStart: mgr.Start() returned error. Sending it.") | ||
err <- startErr |
if the manager cannot be started, then we can just simply force-kill, right?
Suggested change:
```go
err <- startErr
os.Exit(1)
```
There is nothing we could try to recover. Or is there any other cleanup we would need to do in this case?
I have one more note/question - I understand the usage of the channel in this case, but I don't understand why we need to use the channel for all the other error cases returned from this method. Is it only to unify the error handling, or is there another reason?
I am not a fan of panicking in "library" code.
As for the error handling - yes, the error is always returned on the channel so that the caller doesn't need special logic for when the "setup" part of the start fails, as opposed to a failure that happens asynchronously during the startup and lifetime of the controller manager.
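To illustrate that point, here is a minimal, hypothetical sketch of the single-error-channel pattern; the simplified StartManager shape and the final nil send are illustrative only, not the code from this PR:

```go
package restart

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/manager"
)

// StartManager is a simplified stand-in for the type under review; only the
// field needed for this sketch is shown.
type StartManager struct {
	runningContext context.Context
}

// doStart pushes every failure into the same channel, whether it happens
// synchronously while kicking the manager off or asynchronously during its
// lifetime, so the caller only has to read from one place.
func (s *StartManager) doStart(mgr manager.Manager, errCh chan<- error) {
	// mgr.Start blocks until the context is cancelled or the manager fails.
	if startErr := mgr.Start(s.runningContext); startErr != nil {
		errCh <- startErr
		return
	}
	// A nil send tells the caller the manager stopped cleanly.
	errCh <- nil
}
```

With this shape the caller simply drains one channel, regardless of whether the failure was a startup problem or something that happened much later.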
pkg/restart/restart.go (outdated)
```go
// now, let's see what caused the cancellation

if !errors.Is(context.Cause(s.runningContext), restartNeededError) {
```
Checking the cause of the cancel function and comparing it is a bit weird to me - it reminds me of catching exceptions in Java. Not sure if this is a best practice in Go, though.
Instead of using a "CancelCause" context you could use the standard "Cancel" context and add another field to the StartManager stating whether the restart was triggered (in the same way as you do it with the running field), and have another case for when the root context itself is cancelled:
```go
case <-ctx.Done():
	debug("Start: context cancelled, cancellation from above")
	<-finished
	debug("Start: quitting")
```
errors.Is() is indeed a function defined in Go itself and is used quite extensively. What I like about context.Cause() is that, at least for my way of thinking, sending "messages" is more understandable than setting flags somehow "on the side".
Also, I am not 100% certain about the Go memory model and what happens if you cancel the context in one thread and handle the cancellation in another - would the update to the flag be visible across threads?
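For reference, a self-contained sketch of the cause-based style under discussion; the sentinel mirrors the restartNeededError from the diff, while the surrounding function is purely illustrative:

```go
package restart

import (
	"context"
	"errors"
)

// restartNeededError mirrors the sentinel used in the diff; the wiring around
// it here is illustrative only.
var restartNeededError = errors.New("restart needed")

// waitAndClassify shows the "message via cause" idea: whoever requests the
// restart cancels with a cause, and the receiver inspects that cause instead
// of checking a flag set somewhere on the side.
func waitAndClassify(parent context.Context) (restart bool) {
	ctx, cancel := context.WithCancelCause(parent)

	// Somewhere else, a restart is requested by cancelling with a cause.
	go cancel(restartNeededError)

	<-ctx.Done()
	// Decide between "restart" and "real shutdown" from the recorded cause.
	return errors.Is(context.Cause(ctx), restartNeededError)
}
```

On the memory-model question: cancelling a context closes its Done channel, and the Go memory model guarantees that writes made before the close are visible to a goroutine whose receive completed because of that close, so a flag written before cancelling would also be visible; the cause-based style just makes that hand-off explicit.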
Codecov Report
Attention: Patch coverage is …
Additional details and impacted files:
```
@@            Coverage Diff             @@
##           master    #1117      +/-   ##
==========================================
- Coverage   83.75%   83.72%   -0.03%
==========================================
  Files          82       83       +1
  Lines        7929     8038     +109
==========================================
+ Hits         6641     6730      +89
- Misses       1091     1107      +16
- Partials      197      201       +4
```
This is not meant for merging.
It is an experiment at soft-restarting the controller manager within the process of the host operator so that we don't have to rely on external tooling (i.e. ksctl) to manually restart the operator when the set of registered members changes. The operator itself could detect these occasions and "heal" itself by soft-restarting so that further reconciliations work with the up-to-date set of members. In the longer term this could also give us the opportunity to consolidate the explicit set of member clusters established at operator start with the "cluster cache" used in the toolchaincluster controller and health checks.
Let's discuss here whether this potential approach has some merit and whether this solution is actually what we'd want.
At the moment, the restarts are simply triggered every minute. This is of course not how it would work for real - we would request a restart only when a new member is detected or when an existing one is removed.
The frequent restarts are put in place so that we can judge the effects on the memory consumption and/or see any leaks that might be happening in the controller manager due to the way we restart it.
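As a rough sketch of how such a soft-restart loop could be wired up (this is not the PR's implementation; Run, the setup callback, and the requestRestart channel are hypothetical names), the restart request is turned into a context cancellation with a cause and the manager is rebuilt from scratch on every iteration:

```go
package restart

import (
	"context"
	"errors"

	ctrl "sigs.k8s.io/controller-runtime"
)

var restartNeededError = errors.New("restart needed")

// Run keeps (re)building and (re)starting a controller manager until the
// parent context is cancelled for any reason other than a requested restart.
// The setup callback stands in for whatever registers the controllers.
func Run(parent context.Context, requestRestart <-chan struct{}, setup func(ctrl.Manager) error) error {
	for {
		ctx, cancel := context.WithCancelCause(parent)

		// Translate a restart request into a cancellation with a cause. In the
		// experiment this is driven by a one-minute ticker; for real it would
		// fire when the set of member clusters changes.
		go func() {
			select {
			case <-requestRestart:
				cancel(restartNeededError)
			case <-ctx.Done():
			}
		}()

		mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
		if err != nil {
			cancel(nil)
			return err
		}
		if err := setup(mgr); err != nil {
			cancel(nil)
			return err
		}

		// Start blocks until ctx is cancelled or the manager fails.
		if err := mgr.Start(ctx); err != nil {
			cancel(nil)
			return err
		}
		cancel(nil)

		// A requested restart loops around and rebuilds the manager (and its
		// caches) from scratch; any other cancellation is a real shutdown.
		if !errors.Is(context.Cause(ctx), restartNeededError) {
			return nil
		}
	}
}
```

Rebuilding the manager on every iteration also discards its caches, which is exactly why the one-minute restart cadence described above makes for a useful leak stress test.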
I ran the experimental version of the host-operator in a cluster and continuously measured the memory consumption for 5 hours after creating 5000 user signups in the cluster. This means that a non-trivial number of objects needed to be loaded into the cache managed by the controller manager, so we should see the effects of any leaks if there were any.
So, in the experiment, the host-operator was reconciling 5000 user signups, restarting every minute, for 5 hours. The results are compiled in this spreadsheet: https://docs.google.com/spreadsheets/d/1maSd_KUUYBwA3U1d3UdSMPaQkEwXYmbhWg4duCBvfO0/edit?gid=0#gid=0.
The memory consumption seems quite stable to me:
