[29/n] blueprint planner logic for mupdate overrides #8456
```rust
let mut sleds_with_override = BTreeSet::new();
for sled_id in self.input.all_sled_ids(SledFilter::InService) {
```
Thinking about this, I'm wondering what would happen here if a sled goes away in the middle of this process, disappearing from inventory. In that case the `remove_mupdate_override` field never gets cleared from the blueprint.
We would do:
- expunge the sled in the first blueprint
- when executed, the sled policy will be updated to `Expunged` in the planning input
- then next planning cycle it'll no longer be in the `InService` set
So I think we'll eventually converge -- it'll just take a couple cycles. (A TODO is to add a test for this.)
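A minimal sketch of why this converges, using stand-in types rather than the real planning input (`SledPolicy`, `PlanningInput`, and the method names here are illustrative):

```rust
use std::collections::{BTreeMap, BTreeSet};

// Stand-ins for the real planning-input types.
#[derive(Clone, Copy, PartialEq)]
enum SledPolicy {
    InService,
    Expunged,
}

struct PlanningInput {
    sleds: BTreeMap<u32, SledPolicy>,
    inventory: BTreeSet<u32>, // sleds that appeared in the latest inventory
}

impl PlanningInput {
    fn in_service_sled_ids(&self) -> impl Iterator<Item = u32> + '_ {
        self.sleds
            .iter()
            .filter(|(_, policy)| **policy == SledPolicy::InService)
            .map(|(id, _)| *id)
    }
}

// Collect the in-service sleds whose override state we can act on this
// cycle. A sled missing from inventory is simply skipped; once it is
// expunged, later planning inputs drop it from the in-service set, so the
// loop converges after a couple of cycles.
fn sleds_with_override(input: &PlanningInput) -> BTreeSet<u32> {
    input
        .in_service_sled_ids()
        .filter(|sled_id| input.inventory.contains(sled_id))
        .collect()
}

fn main() {
    let mut input = PlanningInput {
        sleds: BTreeMap::from([(1, SledPolicy::InService), (2, SledPolicy::InService)]),
        inventory: BTreeSet::from([1]), // sled 2 disappeared mid-process
    };
    // Sled 2 is skipped this cycle; its override stays in the blueprint.
    assert_eq!(sleds_with_override(&input), BTreeSet::from([1]));

    // Expunging sled 2 flips its policy in the next planning input...
    input.sleds.insert(2, SledPolicy::Expunged);
    // ...so subsequent cycles no longer wait on it.
    assert_eq!(sleds_with_override(&input), BTreeSet::from([1]));
}
```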
Tested this with the new `inventory-hidden` and `inventory-visible` subcommands in reconfigurator-cli.
```rust
let old_image_source = self.zones.set_zone_image_source(
    &zone_id,
    BlueprintZoneImageSource::InstallDataset,
)?;
```
RFD 556 says:

> Wherever the planner uses the target release, it is instead ignored if its generation number is not greater than `min_release_generation` (if set).
As discussed in Tuesday's watercooler it's a bit more complex than that -- what we want to do is to only use the install dataset on sleds that have been mupdated, since on other sleds the install dataset may be woefully out of date.
I want to make the claim that this code may actually be sufficient as it stands. I don't think we need any redirects beyond this one (which is admittedly edge-triggered), as long as we prevent new zones from being set up at all while the system is recovering from the mupdate.
We decided to not proceed with adding new zones until the mupdate override has been completely cleared.
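A minimal sketch of the combined decision, with illustrative names rather than the real planner API: the install-dataset redirect applies only to sleds that actually reported a mupdate override (other sleds' install datasets may be arbitrarily stale), and zone additions are gated off entirely until no such sleds remain.

```rust
use std::collections::BTreeSet;

type SledId = u32;

// Only sleds that reported a mupdate override get their zones pointed at
// the install dataset.
fn sleds_to_redirect(
    in_service: &BTreeSet<SledId>,
    reported_override: &BTreeSet<SledId>,
) -> BTreeSet<SledId> {
    in_service.intersection(reported_override).copied().collect()
}

// No new zones are planned while any override remains to be cleared.
fn may_add_zones(reported_override: &BTreeSet<SledId>) -> bool {
    reported_override.is_empty()
}

fn main() {
    let in_service = BTreeSet::from([1, 2, 3]);
    let mupdated = BTreeSet::from([2]);
    assert_eq!(sleds_to_redirect(&in_service, &mupdated), BTreeSet::from([2]));
    assert!(!may_add_zones(&mupdated)); // still recovering: no new zones
}
```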
```rust
// override that was set in the above branch. We can remove the
// override from the blueprint.
self.set_remove_mupdate_override(None);
// TODO: change zone sources from InstallDataset to Artifact
```
I'm not sure we'll need to do this here; the normal upgrade path should change zone sources already, right? (It just needs to not do that while a mupdate override is in place.)
Although maybe sled-agent should do something like "if I'm changing from install dataset with hash X to artifact with hash X, don't actually bounce the zone".
> I'm not sure we'll need to do this here; the normal upgrade path should change zone sources already, right? (It just needs to not do that while a mupdate override is in place.)

Good question -- we do this one zone at a time currently, and I guess this would be an opportunity to do a bulk replace. (But why not always do a level-triggered bulk replace?)

> Although maybe sled-agent should do something like "if I'm changing from install dataset with hash X to artifact with hash X, don't actually bounce the zone".

Yeah, this is reasonable.
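A sketch of that sled-agent idea, with hypothetical types (the real image-source types don't necessarily carry hashes this way): if the old and new sources resolve to the same image hash, the zone's bits are unchanged and a restart can be skipped.

```rust
// Hypothetical image-source representation, for illustration only.
#[derive(Clone, PartialEq)]
enum ImageSource {
    InstallDataset { hash: [u8; 32] },
    Artifact { hash: [u8; 32] },
}

impl ImageSource {
    fn hash(&self) -> [u8; 32] {
        match self {
            ImageSource::InstallDataset { hash } | ImageSource::Artifact { hash } => *hash,
        }
    }
}

// Changing from install dataset with hash X to artifact with hash X
// leaves the zone's contents identical, so don't bounce it.
fn needs_zone_bounce(old: &ImageSource, new: &ImageSource) -> bool {
    old.hash() != new.hash()
}

fn main() {
    let h = [0u8; 32];
    let old = ImageSource::InstallDataset { hash: h };
    let new = ImageSource::Artifact { hash: h };
    assert!(!needs_zone_bounce(&old, &new));
}
```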
Resolved in #8486, and TODO removed.
```rust
);
}

// TODO: Do the same for RoT/SP/host OS.
```
I think this will be as simple as:

- clear any `PendingMgsUpdate`s for this sled
- change the host phase 2 in the `OmicronSledConfig` to the equivalent of `InstallDataset` for zones (this doesn't exist yet but will be coming soon)
What I'm less sure about is what happens if there are `PendingMgsUpdate`s in the current target blueprint concurrently with a mupdate happening to that sled. Maybe wicket and Nexus end up dueling? If the mupdate completes and changes the contents of any of the target slots, Nexus's prechecks should start failing; but if the mupdate happens to not change the target slots, maybe the prechecks still pass and Nexus starts trying to update it again as soon as it comes online?
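A sketch of the first bullet above, using simplified stand-in types: any pending MGS update targeting the mupdated sled is dropped from the blueprint, so Nexus stops driving an update that the mupdate has just superseded.

```rust
use std::collections::BTreeMap;

type SledId = u32;

// Stand-in for the real PendingMgsUpdate type.
#[derive(Debug)]
struct PendingMgsUpdate {
    artifact_hash: [u8; 32],
}

// Drop any pending MGS update aimed at the sled that was just mupdated.
// The planner won't schedule a replacement while the override is in place.
fn clear_pending_mgs_update(
    pending: &mut BTreeMap<SledId, PendingMgsUpdate>,
    mupdated_sled: SledId,
) -> Option<PendingMgsUpdate> {
    pending.remove(&mupdated_sled)
}

fn main() {
    let mut pending =
        BTreeMap::from([(7, PendingMgsUpdate { artifact_hash: [0; 32] })]);
    assert!(clear_pending_mgs_update(&mut pending, 7).is_some());
    assert!(pending.is_empty());
}
```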
I haven't done this yet -- worth discussing in the watercooler tomorrow?
```rust
// If do_plan_mupdate_override returns Waiting, we don't plan *any*
// additional steps until the system has recovered.
self.do_plan_add()?;
self.do_plan_decommission()?;
```
I think we could still decommission things if there's a mupdate override in place? This only acts on sleds or disks that an operator has explicitly told us are gone, and is basically a followup to `do_plan_expunge()`. (Maybe this step should be ordered before `do_plan_add()` anyway? I don't think there are any dependencies between them...)
Yep -- done.
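A condensed sketch of the resulting step order (method names abbreviated from this PR; the waiting logic is simplified): decommission runs before the mupdate-override gate, and everything that adds zones stays behind it.

```rust
#[derive(PartialEq)]
enum StepResult {
    Waiting,
    Continue,
}

struct Planner {
    recovering_from_mupdate: bool,
}

impl Planner {
    fn do_plan_expunge(&mut self) {}
    // Decommission only acts on sleds/disks an operator has explicitly
    // declared gone, so it is safe to run even while recovering.
    fn do_plan_decommission(&mut self) {}
    fn do_plan_mupdate_override(&mut self) -> StepResult {
        if self.recovering_from_mupdate {
            StepResult::Waiting
        } else {
            StepResult::Continue
        }
    }
    fn do_plan_add(&mut self) {}

    fn do_plan(&mut self) {
        self.do_plan_expunge();
        self.do_plan_decommission();
        if self.do_plan_mupdate_override() == StepResult::Waiting {
            // Don't plan any additional steps until the system recovers.
            return;
        }
        self.do_plan_add();
    }
}

fn main() {
    Planner { recovering_from_mupdate: true }.do_plan();
    Planner { recovering_from_mupdate: false }.do_plan();
}
```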
```rust
// generation table -- one of the invariants of the target
// release generation is that it only moves forward.
//
// In this case we warn but set the value.
```
This doesn't seem right; we should probably bail out of planning entirely in this case, right? This seems like an "I don't know what's going on in the world" kind of thing that in a simpler system we'd assert on?
Yeah -- done.
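A sketch of the bail-out behavior agreed on above (the error type and function are illustrative): a recorded minimum generation ahead of the current target release generation means the planner's view of the world is inconsistent, so it refuses to plan at all instead of warning and continuing.

```rust
#[derive(Debug)]
struct PlanError(String);

// The target release generation may only move forward; if the recorded
// minimum ever exceeds it, something is deeply wrong, so bail out of
// planning entirely rather than warn and set the value anyway.
fn check_target_release_generation(
    target_generation: u64,
    min_generation: u64,
) -> Result<(), PlanError> {
    if min_generation > target_generation {
        return Err(PlanError(format!(
            "target release generation {target_generation} is behind \
             recorded minimum {min_generation}; refusing to plan"
        )));
    }
    Ok(())
}

fn main() {
    assert!(check_target_release_generation(5, 3).is_ok());
    assert!(check_target_release_generation(2, 3).is_err());
}
```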
In #8456 we'll block the `do_plan_add` step on whether the system is currently recovering from a mupdate override. But there's no reason to block the `do_plan_decommission` step on that. This is easiest expressed by moving decommission to before add.
`HostPhase2DesiredContents` is analogous to `OmicronZoneImageSource`, but for OS images: either keep the current contents of the boot disk or set it to a specific artifact from the TUF repo depot. "Keep the current contents" should show up in three cases, just like `OmicronZoneImageSource::InstallDataset`:

1. It's the default value for deserializing, so we can load old configs that didn't have this value
2. RSS uses it (no TUF repo depot involved at this point)
3. The planner will use this variant as a part of removing a mupdate override (this work is still in this PR itself: #8456 (comment))
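A sketch of the shape being described (variant names follow the comment above, but details like the hash type are illustrative; the real definition lives in omicron and may differ):

```rust
/// Illustrative sketch of a HostPhase2DesiredContents-like type.
#[derive(Clone, Debug, Default)]
enum HostPhase2DesiredContents {
    /// Keep the current contents of the boot disk. As the default, old
    /// configs without this field deserialize to it; RSS uses it, and the
    /// planner uses it when removing a mupdate override.
    #[default]
    CurrentContents,
    /// Write a specific artifact from the TUF repo depot.
    Artifact { hash: [u8; 32] },
}

fn main() {
    // Deserializing an old config falls back to the default variant.
    let desired = HostPhase2DesiredContents::default();
    println!("{desired:?}");
    let _pinned = HostPhase2DesiredContents::Artifact { hash: [0; 32] };
}
```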
@jgallagher this is ready for you to look at again -- have added clearing the pending MGS update. I'm going to try and land this simultaneously with the sled-agent changes to clear the mupdate override, though, because by itself it will cause the planner to not do anything after the mupdate occurs.
```rust
Entry::Occupied(entry) => Some(Box::new(entry.remove())),
};

// TODO: Do the same for host OS.
```
```rust
}

// Now we need to determine whether to also perform other actions like
// updating or adding zones. We have to be careful here:
```
This is an excellent comment; thanks!
Start implementing the blueprint planner logic for mupdate overrides.
There's still a lot of work to do:

- Move zone image sources back to artifacts once we know they've been distributed to a sled. (How do we know they've been distributed to a sled?) Done in [24/n] [reconfigurator-planning] support no-op image source updates #8486.

Depends on: