
CA-408339: Respect xenopsd's NUMA-placement-policy default #6368


Merged: 2 commits merged into xapi-project:master on Mar 19, 2025

Conversation

@robhoes (Member) commented Mar 18, 2025

Xenopsd has an experimental feature that aims to optimise NUMA
placement. This used to be configured by adding `numa-placement=true` to
the file `/etc/xenopsd.conf`, which tells xenopsd to enable the feature.
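
For reference, the legacy setting is just a key/value line in xenopsd's config
file; a minimal example (other settings omitted):

```
# /etc/xenopsd.conf
numa-placement=true
```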

Later, an actual API was added to configure this:
`host.set_numa_affinity_policy`. The expectation was that, while this
new API should be preferred, the old xenopsd.conf option would still
work for backwards compatibility reasons. This is particularly important
for hosts that are upgraded to the new version.

Unfortunately, while code exists in xenopsd to read and use the config
option when it starts up, when xapi starts up immediately after xenopsd,
it always overrides the NUMA config based on its own DB field. The field
type actually has a "default" option, but this gets translated to "any"
(= no NUMA). By default, this means that the experimental feature
is disabled, no matter what the config file says, and can only be
enabled through the API.
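
To make the problem concrete, here is an illustrative OCaml sketch (not the
actual xapi code; the type and constructor names are stand-ins) of the
translation described above, where the DB field's "default" case collapses
into "any":

```ocaml
(* Stand-in types for xapi's DB enum and xenopsd's policy type. *)
type db_policy = Any_db | Best_effort_db | Default_policy_db

type xenops_policy = Any | Best_effort

(* The translation described above: "default" is mapped to Any, so xenopsd's
   own config-file default never takes effect. *)
let to_xenops_policy = function
  | Best_effort_db -> Best_effort
  | Any_db | Default_policy_db -> Any
```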

The fix is for xapi not to assign a default value itself, but to let
xenopsd decide on the default policy; xenopsd uses its config file
to do so, as before.
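
Sticking with the same stand-in types, the shape of the fix is to stop forcing
a concrete policy for the "default" case and to leave the decision to xenopsd
(which then falls back to its `numa-placement` config option, as in the diff
excerpt further down); this is a sketch of the idea, not the merged code:

```ocaml
type db_policy = Any_db | Best_effort_db | Default_policy_db

type xenops_policy = Any | Best_effort

(* Return None for "default", so that xapi does not override the default that
   xenopsd derives from /etc/xenopsd.conf. *)
let to_xenops_policy_opt = function
  | Best_effort_db -> Some Best_effort
  | Any_db -> Some Any
  | Default_policy_db -> None
```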

Signed-off-by: Rob Hoes <rob.hoes@citrix.com>
@robhoes requested review from edwintorok and psafont on March 18, 2025 15:48
The review comments below refer to this hunk from the diff:

```ocaml
    if !Xenopsd.numa_placement_compat then Best_effort else Any ;
  info "Default NUMA affinity policy is '%s'"
    Xenops_server.(string_of_numa_affinity_policy !default_numa_affinity_policy) ;
  Xenops_server.numa_placement := !Xenops_server.default_numa_affinity_policy ;
```
Contributor commented:

I would suggest wrapping these lines in a function like `Xenops_server.init_numa_affinity_policy !Xenopsd.numa_placement_compat`. Then the references in `Xenops_server` will not need to be exposed.
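
A sketch of what the suggested wrapper might look like inside `Xenops_server`
(hypothetical: the comment only names the function, and the reply below
explains why the references cannot be fully hidden):

```ocaml
(* Hypothetical body for Xenops_server.init_numa_affinity_policy: the compat
   flag from xenopsd.conf is passed in, and the policy refs are set in one
   place, mirroring the hunk quoted above. *)
let init_numa_affinity_policy compat =
  default_numa_affinity_policy := (if compat then Best_effort else Any) ;
  info "Default NUMA affinity policy is '%s'"
    (string_of_numa_affinity_policy !default_numa_affinity_policy) ;
  numa_placement := !default_numa_affinity_policy
```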

robhoes (Member, Author) replied:

Unfortunately, `Xenops_server.numa_placement` is actually used outside `Xenops_server`, so we can't hide it.

@edwintorok added this pull request to the merge queue Mar 19, 2025
Merged via the queue into xapi-project:master with commit 29ab6b6 Mar 19, 2025
17 checks passed