sync master to feature/guest-vm-service-aware #6383

Merged
Changes from all commits (89 commits)
0e98902
CP-52074: Add systemctl enable and disable API
BengangY Dec 20, 2024
cbab3b6
CP-52074: Add enable and disable ssh API on host
BengangY Jan 2, 2025
944a91d
CP-52074: Add enable and disable ssh API on pool
BengangY Jan 2, 2025
90e9602
CP-52074: Add API for start/stop systemd service sshd (#6198)
BengangY Jan 14, 2025
43729fa
IH-533: Remove usage of forkexecd daemon to execute processes
freddy77 Mar 17, 2024
28b6d33
Merge branch 'master' into private/bengangy/sync-master-to-configure-ssh
BengangY Feb 7, 2025
6a3e8c0
sync master to configure ssh (#6282)
BengangY Feb 7, 2025
387c7f3
CP-53161: Append `baggage` to the env vars list passed by `forkexecd`
GabrielBuica Jan 21, 2025
dd57f59
CP-53161: Link traces even when `smapi` component is experimental
GabrielBuica Jan 21, 2025
8bb3d19
Design proposal for supported image formats
gthvn1 Feb 17, 2025
1972d6d
CP-53161: `observer.py` adds the `baggage` to `smapi`'s attributes
GabrielBuica Feb 5, 2025
87a4a5a
CP-53161: Pass `baggage` back into `xapi` from `smapi`.
GabrielBuica Feb 5, 2025
d4cc475
CA-405864: Drop usage of init.d functions
liulinC Mar 4, 2025
40833fb
(docs) Describe the flows of setting NUMA node affinity in Xen by Xen…
bernhardkaindl Feb 27, 2025
e900040
(doc) Describe how xc_domain_claim_pages() is used to claim pages
bernhardkaindl Feb 27, 2025
76b46f6
CA-407687/XSI-1834: get_subject_information_from_identifier should
liulinC Mar 7, 2025
76ad85f
CA-407687/XSI-1834: get_subject_information_from_identifier should (#…
liulinC Mar 10, 2025
71f8d16
CA-408126 - rrd: Do not lose ds_min/max when adding to the RRD
Mar 10, 2025
182528a
CA-408126 - rrd: Do not lose ds_min/max when adding to the RRD (#6349)
edwintorok Mar 10, 2025
b0d2248
Change Ocaml version in readme
BengangY Mar 11, 2025
7d1bd4a
Change Ocaml version in readme (#6350)
psafont Mar 11, 2025
9b1b8d4
CA-403851 stop management server in Pool.eject ()
Mar 6, 2025
b9c8154
Design proposal for supported image formats (#6308)
lindig Mar 11, 2025
dafcaab
(doc) Describe how xc_domain_claim_pages() is used to claim pages (#6…
psafont Mar 11, 2025
d24d122
(docs) Describe the flows of setting NUMA node affinity in Xen by xen…
psafont Mar 11, 2025
d02e3b5
CA-403851 stop management server in Pool.eject () (#6346)
lindig Mar 11, 2025
5c45898
doc: Update mermaid
Vincent-lau Mar 11, 2025
989e349
doc: Update xapi storage layer code links
Vincent-lau Mar 11, 2025
118fd8e
CA-405864: Fix shellcheck warnings
liulinC Mar 11, 2025
74aeb77
Revert "CA-403851 stop management server in Pool.eject ()"
Vincent-lau Mar 12, 2025
03c1780
Revert "CA-403851 stop management server in Pool.eject ()" (#6352)
Vincent-lau Mar 12, 2025
996b9a7
doc: xapi storage layer update to latest code
Vincent-lau Mar 11, 2025
51b800b
doc: xapi storage layer doc change notice
Vincent-lau Mar 11, 2025
3a8199e
CP-53827, xenctrlext: add domain_claim_pages call
psafont Mar 5, 2025
a810aeb
CP-53827, xenopsd: claim pages for domain on pre_build phase
psafont Mar 7, 2025
555119c
CI: fix compile_commands.json caching
edwintorok Mar 12, 2025
8c10110
ci: regenerate stubs for codechecker
edwintorok Mar 12, 2025
80956ca
CI: fix compile_commands.json caching (#6356)
edwintorok Mar 12, 2025
657aa71
CA-405864: Drop usage of init.d functions (#6339)
liulinC Mar 13, 2025
4a81f7e
Merge branch 'master' into private/bengangy/sync-master-to-ssh
BengangY Mar 13, 2025
0b79d88
Resolve build failure in message_forwarding.ml
BengangY Mar 13, 2025
32ded62
Sync master to feature/configure-ssh (#6357)
BengangY Mar 13, 2025
bb672a9
CP-48676: Reuse pool sessions on slave logins.
snwoods Jan 30, 2025
9aa9e7e
CP-48676: Don't check resuable pool session validity by default
snwoods Sep 23, 2024
e42cca1
CP-48676: Disable pool session reuse by default
snwoods Mar 13, 2025
5ec5732
doc: xapi storage layer doc update (#6351)
lindig Mar 13, 2025
37b5f70
xapi-stdext-date: replace all usages to use clock instead
psafont Mar 13, 2025
94d24a9
xapi-stdext-date: replace all usages to use clock instead (#6358)
psafont Mar 14, 2025
e5c2612
CP-53827, xenopsd: claim pages for domain on pre_build phase (#6355)
psafont Mar 14, 2025
82064d3
CP-53161: `Baggage` threaded through `smapi` and back into `xapi` (#6…
edwintorok Mar 14, 2025
17514dc
CP-48676: Reuse pool sessions on slave logins (#6258)
edwintorok Mar 14, 2025
20d6e34
Merge branch 'master' into private/bengangy/merge-ssh-to-master
BengangY Mar 17, 2025
18ba7e7
Update datamodel_lifecycle.ml
BengangY Mar 17, 2025
18e93ff
Merge feature/configure-ssh to master (#6361)
BengangY Mar 17, 2025
c1f02ed
CA-408126 follow-up: Fix negative ds_min and RRD values in historical…
Mar 14, 2025
6e04ead
Add opam local switch in gitignore
gthvn1 Mar 17, 2025
fef11a3
CA-408126 follow-up: Fix negative ds_min and RRD values in historical…
last-genius Mar 17, 2025
6ba6544
xenopsd: start vncterm for PVH guests
psafont Mar 17, 2025
0d1592d
Add opam local switch in gitignore (#6364)
psafont Mar 17, 2025
24e8927
xenopsd: make vncterm less errorprone
psafont Mar 17, 2025
6d776fe
xenopsd: start vncterm for PVH guests (#6363)
psafont Mar 17, 2025
05441c4
Define SR_CACHING capability
MarkSymsCtx Mar 17, 2025
189c82c
IH-533: Remove usage of forkexecd daemon to execute processes (#5995)
Vincent-lau Mar 18, 2025
660df40
CP-52365 fix up driver-tool invocations
Mar 18, 2025
4becc09
Update datamodel_lifecycle.ml
robhoes Mar 18, 2025
e39ab6f
CP-52365 fix up driver-tool invocations (#6367)
lindig Mar 18, 2025
3608e9d
CA-408339: Respect xenopsd's NUMA-placement-policy default
robhoes Mar 18, 2025
752a186
Use records when accumulating events
Mar 19, 2025
20aad9c
Remove mutable last_generation from Xapi_event
Mar 19, 2025
29ab6b6
CA-408339: Respect xenopsd's NUMA-placement-policy default (#6368)
edwintorok Mar 19, 2025
aff5883
Factor out event reification
Mar 19, 2025
cf8ff83
Use record type for individual event entries
Mar 19, 2025
8e0b253
xenctrlext: do not truncate the amount of memory in claims to 32 bits
psafont Mar 19, 2025
b050c78
xenctrlext: do not truncate the amount of memory in claims to 32 bits…
psafont Mar 19, 2025
8bafeda
CA-407177: Fix swtpm's use of SHA1 on XS9
rosslagerwall Mar 19, 2025
c94207e
forkexecd: do not tie vfork_helper to the forkexec package
psafont Mar 20, 2025
cecdcba
opam: add missing dependencies to packages
psafont Mar 20, 2025
7ab059a
Fix building of all opam packages for xs-opam's CI (#6377)
psafont Mar 20, 2025
60919b6
Define SR_CACHING capability (#6365)
psafont Mar 20, 2025
5ddf28c
Refactor Xapi_event (redux) (#6370)
psafont Mar 20, 2025
c81536f
CA-407177: Fix swtpm's use of SHA1 on XS9 (#6375)
psafont Mar 20, 2025
b39d80e
ci: url of XS_SR_ERRORCODES.xml
psafont Mar 20, 2025
e53aec6
ci: url of XS_SR_ERRORCODES.xml (#6380)
psafont Mar 20, 2025
bec4ca6
CA-404460: Expose Stunnel_verify_error for mismatched certificate
gangj Mar 20, 2025
659284e
CA-404460: Expose Stunnel_verify_error for corrupted certificate
gangj Mar 20, 2025
95d888e
CA-404460: Fix the exposing of Stunnel_verify_error in check_error
gangj Mar 20, 2025
15df700
CA-404460: expose ssl_verify_error during updates syncing
gangj Mar 20, 2025
afe37ec
CA-404460: Expose Stunnel_verify_error for mismatched or corrupted ce…
gangj Mar 21, 2025
e7602a7
Merge branch 'master' into private/changleli/sync-master-to-service-a…
changlei-li Mar 24, 2025
17 changes: 0 additions & 17 deletions .github/workflows/codechecker.yml
@@ -27,39 +27,22 @@ jobs:
- name: Checkout code
uses: actions/checkout@v4

- name: Restore cache for compile_commands.json
uses: actions/cache/restore@v4
id: cache-cmds
with:
path: compile_commands.json
key: compile_commands.json-v1-${{ hashFiles('**/dune') }}

- name: Setup XenAPI environment
if: steps.cache-cmds.outputs.cache-hit != 'true'
uses: ./.github/workflows/setup-xapi-environment
with:
xapi_version: ${{ env.XAPI_VERSION }}

- name: Install dune-compiledb to generate compile_commands.json
if: steps.cache-cmds.outputs.cache-hit != 'true'
run: |
opam pin add -y ezjsonm https://github.com/mirage/ezjsonm/releases/download/v1.3.0/ezjsonm-1.3.0.tbz
opam pin add -y dune-compiledb https://github.com/edwintorok/dune-compiledb/releases/download/0.6.0/dune-compiledb-0.6.0.tbz

- name: Trim dune cache
if: steps.cache-cmds.outputs.cache-hit != 'true'
run: opam exec -- dune cache trim --size=2GiB

- name: Generate compile_commands.json
if: steps.cache-cmds.outputs.cache-hit != 'true'
run: opam exec -- make compile_commands.json

- name: Save cache for cmds.json
uses: actions/cache/save@v4
with:
path: compile_commands.json
key: ${{ steps.cache-cmds.outputs.cache-primary-key }}

- name: Upload compile commands json
uses: actions/upload-artifact@v4
with:
2 changes: 1 addition & 1 deletion .github/workflows/setup-xapi-environment/action.yml
@@ -18,7 +18,7 @@ runs:
shell: bash
run: |
mkdir -p /opt/xensource/sm
wget -O /opt/xensource/sm/XE_SR_ERRORCODES.xml https://raw.githubusercontent.com/xapi-project/sm/master/drivers/XE_SR_ERRORCODES.xml
wget -O /opt/xensource/sm/XE_SR_ERRORCODES.xml https://raw.githubusercontent.com/xapi-project/sm/master/libs/sm/core/XE_SR_ERRORCODES.xml

- name: Load environment file
id: dotenv
1 change: 1 addition & 0 deletions .gitignore
@@ -6,6 +6,7 @@ _coverage/
*.install
*.swp
compile_flags.txt
_opam

# tests
xapi-db.xml
4 changes: 2 additions & 2 deletions Makefile
@@ -153,7 +153,7 @@ DUNE_IU_PACKAGES1+=xapi-client xapi-schema xapi-consts xapi-cli-protocol xapi-da
DUNE_IU_PACKAGES1+=xen-api-client xen-api-client-lwt rrdd-plugin rrd-transport
DUNE_IU_PACKAGES1+=gzip http-lib pciutil sexpr stunnel uuid xml-light2 zstd xapi-compression safe-resources
DUNE_IU_PACKAGES1+=message-switch message-switch-cli message-switch-core message-switch-lwt
DUNE_IU_PACKAGES1+=message-switch-unix xapi-idl forkexec xapi-forkexecd xapi-storage xapi-storage-script xapi-storage-cli
DUNE_IU_PACKAGES1+=message-switch-unix xapi-idl xapi-forkexecd xapi-storage xapi-storage-script xapi-storage-cli
DUNE_IU_PACKAGES1+=xapi-nbd varstored-guard xapi-log xapi-open-uri xapi-tracing xapi-tracing-export xapi-expiry-alerts cohttp-posix
DUNE_IU_PACKAGES1+=xapi-rrd xapi-inventory clock xapi-sdk
DUNE_IU_PACKAGES1+=xapi-stdext-date xapi-stdext-encodings xapi-stdext-pervasives xapi-stdext-std xapi-stdext-threads xapi-stdext-unix xapi-stdext-zerocheck xapi-tools
@@ -173,7 +173,7 @@ DUNE_IU_PACKAGES3=-j $(JOBS) --destdir=$(DESTDIR) --prefix=$(OPTDIR) --libdir=$(
install-dune3:
dune install $(DUNE_IU_PACKAGES3)

DUNE_IU_PACKAGES4=-j $(JOBS) --destdir=$(DESTDIR) --prefix=$(PREFIX) --libdir=$(LIBDIR) --libexecdir=/usr/libexec --mandir=$(MANDIR) vhd-tool
DUNE_IU_PACKAGES4=-j $(JOBS) --destdir=$(DESTDIR) --prefix=$(PREFIX) --libdir=$(LIBDIR) --libexecdir=/usr/libexec --mandir=$(MANDIR) vhd-tool forkexec

install-dune4:
dune install $(DUNE_IU_PACKAGES4)
2 changes: 1 addition & 1 deletion README.markdown
@@ -32,7 +32,7 @@ To build xen-api from source, we recommend using [opam](https://opam.ocaml.org/d
- Run that line, e.g.:

```bash
export OCAML_VERSION_FULL="4.14.1"
export OCAML_VERSION_FULL="4.14.2"
```

4) Setup opam with your environment (i.e. switch).
76 changes: 76 additions & 0 deletions doc/content/design/sm-supported-image-formats.md
@@ -0,0 +1,76 @@
---
title: Add supported image formats in sm-list
layout: default
design_doc: true
revision: 2
status: proposed
---

# Introduction

At XCP-ng, we are enhancing support for QCOW2 images in SMAPI. The primary
motivation for this change is to overcome the 2TB size limitation imposed
by the VHD format. By adding support for QCOW2, a Storage Repository (SR) will
be able to host disks in VHD and/or QCOW2 formats, depending on the SR type.
In the future, additional formats—such as VHDx—could also be supported.

We need a mechanism to expose to end users which image formats are supported
by a given SR. The proposal is to extend the SM API object with a new field
that clients (such as XenCenter, XenOrchestra, etc.) can use to determine the
available formats.

# Design Proposal

To expose the available image formats to clients (e.g., XenCenter, XenOrchestra, etc.),
we propose adding a new field called `supported-image-formats` to the Storage Manager (SM)
module. This field will be included in the output of the `SM.get_all_records` call.

The `supported-image-formats` field will be populated by retrieving information
from the SMAPI drivers. Specifically, each driver will update its `DRIVER_INFO`
dictionary with a new key, `supported_image_formats`, which will contain a list
of strings representing the supported image formats
(for example: `["vhd", "raw", "qcow2"]`).

The first entry of the list designates the driver's preferred VDI format. That
means that when migrating a VDI, the destination storage repository will
attempt to create a VDI in this preferred format. If the preferred format cannot
be used (e.g., due to size limitations), an error will be generated.

If a driver does not provide this information (as is currently the case with existing
drivers), the default value will be an empty array. This signifies that it is the
driver that decides which format it will use. This ensures that the modification
remains compatible with both current and future drivers.
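The mechanism above can be sketched in Python, the language SMAPI drivers are
written in. This is an illustrative sketch only, not actual driver code: the
`DRIVER_INFO` dictionary and `supported_image_formats` key come from this
proposal, while the helper functions and their names are hypothetical.

```python
# Hypothetical sketch of a driver advertising its image formats.
# Only DRIVER_INFO and the 'supported_image_formats' key are from the
# proposal; the helpers below are illustrative.

DRIVER_INFO = {
    'name': 'Local EXT3 VHD',
    # First entry is the driver's preferred VDI format.
    'supported_image_formats': ['vhd', 'raw', 'qcow2'],
}

def get_supported_formats(driver_info):
    """A missing key yields an empty list: the driver itself decides
    which format it will use (the legacy behaviour)."""
    return driver_info.get('supported_image_formats', [])

def preferred_format(driver_info):
    """The preferred format is the first list entry, if any."""
    formats = get_supported_formats(driver_info)
    return formats[0] if formats else None

print(get_supported_formats({}))      # legacy driver -> []
print(preferred_format(DRIVER_INFO))  # -> 'vhd'
```

A migration target would consult `preferred_format()` first and fall back to
an error if that format cannot be used.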

With this new information, listing all parameters of the SM object with:

```bash
# xe sm-list params=all
```

will output something like:

```
uuid ( RO) : c6ae9a43-fff6-e482-42a9-8c3f8c533e36
name-label ( RO) : Local EXT3 VHD
name-description ( RO) : SR plugin representing disks as VHD files stored on a local EXT3 filesystem, created inside an LVM volume
type ( RO) : ext
vendor ( RO) : Citrix Systems Inc
copyright ( RO) : (C) 2008 Citrix Systems Inc
required-api-version ( RO) : 1.0
capabilities ( RO) [DEPRECATED] : SR_PROBE; SR_SUPPORTS_LOCAL_CACHING; SR_UPDATE; THIN_PROVISIONING; VDI_ACTIVATE; VDI_ATTACH; VDI_CLONE; VDI_CONFIG_CBT; VDI_CREATE; VDI_DEACTIVATE; VDI_DELETE; VDI_DETACH; VDI_GENERATE_CONFIG; VDI_MIRROR; VDI_READ_CACHING; VDI_RESET_ON_BOOT; VDI_RESIZE; VDI_SNAPSHOT; VDI_UPDATE
features (MRO) : SR_PROBE: 1; SR_SUPPORTS_LOCAL_CACHING: 1; SR_UPDATE: 1; THIN_PROVISIONING: 1; VDI_ACTIVATE: 1; VDI_ATTACH: 1; VDI_CLONE: 1; VDI_CONFIG_CBT: 1; VDI_CREATE: 1; VDI_DEACTIVATE: 1; VDI_DELETE: 1; VDI_DETACH: 1; VDI_GENERATE_CONFIG: 1; VDI_MIRROR: 1; VDI_READ_CACHING: 1; VDI_RESET_ON_BOOT: 2; VDI_RESIZE: 1; VDI_SNAPSHOT: 1; VDI_UPDATE: 1
configuration ( RO) : device: local device path (required) (e.g. /dev/sda3)
driver-filename ( RO) : /opt/xensource/sm/EXTSR
required-cluster-stack ( RO) :
supported-image-formats ( RO) : vhd, raw, qcow2
```

This change impacts the SM data model, and as such, the XAPI database version will
be incremented.

# Impact

- **Data Model:** A new field (`supported-image-formats`) is added to the SM records.
- **Client Awareness:** Clients like the `xe` CLI will now be able to query and display the supported image formats for a given SR.
- **Database Versioning:** The XAPI database version will be updated to reflect this change.

157 changes: 157 additions & 0 deletions doc/content/lib/xenctrl/xc_domain_claim_pages.md
@@ -0,0 +1,157 @@
---
title: xc_domain_claim_pages()
description: Stake a claim for further memory for a domain, and release it too.
---

## Purpose

The purpose of `xc_domain_claim_pages()` is to attempt to
stake a claim on an amount of memory for a given domain, guaranteeing that
memory allocations up to the claimed amount will succeed.

The domain can still attempt to allocate beyond the claim, but those
allocations are not guaranteed to succeed and will fail once the domain's
memory reaches its `max_mem` value.

Each domain can have only one claim, keyed by its domid.
Destroying the domain also releases its claim.

Depending on the given size argument, the domain's claim
can be set initially, updated to the given amount, or reset to no claim (0).

## Management of claims

- The stake is centrally managed by the Xen hypervisor using a
[Hypercall](https://wiki.xenproject.org/wiki/Hypercall).
- Claims are not reflected in the amount of free memory reported by Xen.

## Reporting of claims

- `xl claims` reports the outstanding claims of the domains:
> [!info] Sample output of `xl claims`:
> ```js
> Name ID Mem VCPUs State Time(s) Claimed
> Domain-0 0 2656 8 r----- 957418.2 0
> ```
- `xl info` reports the host-wide outstanding claims:
> [!info] Sample output from `xl info | grep outstanding`:
> ```js
> outstanding_claims : 0
> ```

## Tracking of claims

Xen only tracks:
- the outstanding claims of each domain and
- the outstanding host-wide claims.

Claiming zero pages effectively cancels the domain's outstanding claim
and is always successful.
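The tracking rules above can be modelled as a small Python sketch. This is a
toy model of the described semantics, not Xen code; the class and method names
are invented for illustration.

```python
class ClaimTracker:
    """Toy model of Xen's per-domain and host-wide claim tracking."""

    def __init__(self):
        self.claims = {}  # domid -> outstanding claimed pages

    def claim_pages(self, domid, nr_pages):
        # A new claim replaces the domain's previous claim;
        # claiming zero pages cancels it and is always successful.
        if nr_pages == 0:
            self.claims.pop(domid, None)
        else:
            self.claims[domid] = nr_pages

    def allocate(self, domid, nr_pages):
        # Allocations draw down the domain's outstanding claim.
        left = self.claims.get(domid, 0)
        remaining = max(0, left - nr_pages)
        if remaining == 0:
            self.claims.pop(domid, None)  # claim fully consumed -> reset
        else:
            self.claims[domid] = remaining

    def outstanding(self, domid=None):
        if domid is None:
            return sum(self.claims.values())  # host-wide, as in `xl info`
        return self.claims.get(domid, 0)     # per-domain, as in `xl claims`
```

For example, after `claim_pages(1, 1000)` and `allocate(1, 400)`, the
outstanding claim of domain 1 is 600 pages, and `claim_pages(1, 0)` cancels
the rest.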

> [!info]
> - Allocations against an outstanding claim are expected to always succeed.
> - Each such allocation reduces the domain's outstanding claim by the allocated amount.
> - Freeing memory of the domain increases the domain's outstanding claim again:
>   - But once a domain has fully consumed its claim, the claim is reset.
>   - After the reset, freed memory is no longer added back to the outstanding claim!
>   - The domain would have to stake a new claim to have guaranteed spare memory again.

> [!warning] The domain's `max_mem` value is used to deny memory allocation
> If an allocation would cause the domain to exceed its `max_mem`
> value, it will always fail.


## Implementation

Function signature of the libXenCtrl function to call the Xen hypercall:

```c
long xc_memory_op(libxc_handle, XENMEM_claim_pages, struct xen_memory_reservation *)
```

`struct xen_memory_reservation` is populated as follows:

```c
struct xen_memory_reservation {
.nr_extents = nr_pages, /* number of pages to claim */
.extent_order = 0, /* an order 0 means: 4k pages, only 0 is allowed */
.mem_flags = 0, /* no flags, only 0 is allowed (at the moment) */
.domid = domid /* numerical domain ID of the domain */
};
```

### Concurrency

Xen protects the consistency of a domain's stake
using the domain's `page_alloc_lock` and Xen's global `heap_lock`.
These spin-locks prevent any "time-of-check to time-of-use" races.
As the hypercall needs to take those spin-locks, it cannot be preempted.
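The reason the locks matter is that the free-memory check and the claim update
must be one atomic step. A hedged Python analogue (a toy using a mutex in
place of Xen's spin-locks; the names and host-wide-only accounting are
simplifications):

```python
import threading

class Heap:
    """Toy analogue of the heap_lock-protected check-and-claim: the
    free-page check and the claim update happen under one lock, so no
    other caller can stake a claim between the check and the update."""

    def __init__(self, free_pages):
        self.lock = threading.Lock()
        self.free_pages = free_pages
        self.outstanding_claims = 0  # host-wide outstanding claims

    def claim(self, nr_pages):
        with self.lock:
            unclaimed_free = self.free_pages - self.outstanding_claims
            if nr_pages > unclaimed_free:
                return -1   # non-zero: claim denied
            self.outstanding_claims += nr_pages
            return 0        # 0: claim staked

heap = Heap(free_pages=1000)
assert heap.claim(600) == 0
assert heap.claim(600) == -1  # only 400 unclaimed pages remain
```

Without the lock, two concurrent claims could both pass the check and
together over-commit the heap; with it, the second claim is denied.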

### Return value

The call returns 0 if the hypercall successfully claimed the requested amount
of memory, else it returns non-zero.

## Current users

### <tt>libxl</tt> and the <tt>xl</tt> CLI

If the `struct xc_dom_image` passed by `libxl` to the
[libxenguest](https://github.com/xen-project/xen/tree/master/tools/libs/guest)
functions
[meminit_hvm()](https://github.com/xen-project/xen/blob/de0254b9/tools/libs/guest/xg_dom_x86.c#L1348-L1649)
and
[meminit_pv()](https://github.com/xen-project/xen/blob/de0254b9/tools/libs/guest/xg_dom_x86.c#L1183-L1333)
has its `claim_enabled` field set, these functions first attempt to claim the
to-be-allocated memory using a call to `xc_domain_claim_pages()`, before
allocating and populating the domain's main system memory with
[xc_populate_physmap()](https://github.com/xen-project/xen/blob/de0254b9/xen/common/memory.c#L159-L314),
which calls the hypercall to allocate and populate that memory.
If the claim fails, they do not attempt to continue and return the error code
of `xc_domain_claim_pages()`.

Both functions also (unconditionally) reset the claim upon return.

The `xl` CLI uses this functionality (unless disabled in `xl.conf`)
to make domain builds fail early rather than run out of memory inside
the `meminit_hvm` and `meminit_pv` calls: they immediately return an
error instead.

This means that in case the claim fails, `xl` avoids:
- The effort of allocating the memory, thereby not blocking it for other domains.
- The effort of potentially needing to scrub the memory after the build failure.

### xenguest

While [xenguest](../../../xenopsd/walkthroughs/VM.build/xenguest) calls the
[libxenguest](https://github.com/xen-project/xen/tree/master/tools/libs/guest)
functions
[meminit_hvm()](https://github.com/xen-project/xen/blob/de0254b9/tools/libs/guest/xg_dom_x86.c#L1348-L1649)
and
[meminit_pv()](https://github.com/xen-project/xen/blob/de0254b9/tools/libs/guest/xg_dom_x86.c#L1183-L1333)
like `libxl` does, it does not set
[struct xc_dom_image.claim_enabled](https://github.com/xen-project/xen/blob/de0254b9/tools/include/xenguest.h#L186),
so it does not enable the initial call to `xc_domain_claim_pages()`
that would claim the amount of memory these functions then
attempt to allocate and populate for the domain.

#### Future design ideas for improved NUMA support

For improved support for [NUMA](../../../toolstack/features/NUMA/), `xenopsd`
may want to call an updated version of this function when assigning a NUMA
node to a new domain, so that the domain holds a stake on that node's memory
before `xenguest` allocates for the domain.

Further, as PV drivers `unmap` and `free` memory for grant tables to Xen and
then re-allocate memory for those grant tables, `xenopsd` may want to try to
stake a very small claim for the domain on the NUMA node of the domain so that
Xen can increase this claim when the PV drivers `free` this memory and re-use
the resulting claimed amount for allocating the grant tables. This would ensure
that the grant tables are then allocated on the local NUMA node of the domain,
avoiding remote memory accesses when accessing the grant tables from inside
the domain.

Note: If the corresponding backend process in Dom0 runs on another
NUMA node, it would still access the domain's grant tables from a remote NUMA
node. However, this would enable a future improvement for Dom0: preferring to
run the corresponding backend process on the same or a neighbouring NUMA node.