Commit f94f4a9

yamahata authored and bonzini committed
KVM: TDX: Support per-VM KVM_CAP_MAX_VCPUS extension check
Change to report the KVM_CAP_MAX_VCPUS extension from globally to per-VM to allow userspace to query the maximum vCPUs for a TDX guest by checking the KVM_CAP_MAX_VCPUS extension on a per-VM basis.

Today KVM x86 reports KVM_MAX_VCPUS as the maximum vCPUs for all guests globally, and userspace, i.e. Qemu, queries the KVM_CAP_MAX_VCPUS extension globally, not on a per-VM basis.

TDX has its own limit on the maximum vCPUs it can support for all TDX guests, in addition to KVM_MAX_VCPUS. The TDX module reports this limit via the MAX_VCPU_PER_TD global metadata. Different modules may report different values. In practice, the reported value reflects the maximum number of logical CPUs that ALL the platforms the module supports can possibly have. Note that some old modules may not support this metadata, in which case the limit is U16_MAX.

Always reporting KVM_MAX_VCPUS in the KVM_CAP_MAX_VCPUS extension check is not enough for TDX. To accommodate TDX, change to report the KVM_CAP_MAX_VCPUS extension on a per-VM basis. Specifically, override kvm->max_vcpus in tdx_vm_init() for TDX guests, and report kvm->max_vcpus in the KVM_CAP_MAX_VCPUS extension check.

Change to report "the number of logical CPUs the platform has" as the maximum vCPUs for TDX guests. Simply forwarding the MAX_VCPU_PER_TD reported by the TDX module would result in an unpredictable ABI, because the value reported to userspace would depend on the whims of TDX modules. This works in practice because the MAX_VCPU_PER_TD reported by the TDX module will never be smaller than the value reported to userspace. But to make sure KVM never reports an unsupported value, sanity check that the MAX_VCPU_PER_TD reported by the TDX module is not smaller than the number of logical CPUs the platform has; otherwise refuse to use TDX.

Note, when creating a TDX guest, TDX actually requires the "maximum vCPUs for _this_ TDX guest" as an input to initialize the TDX guest. But a TDX guest's maximum vCPUs is not part of TDREPORT, thus not part of attestation, so there is no need to allow userspace to explicitly _configure_ the maximum vCPUs on a per-VM basis. KVM will simply use kvm->max_vcpus as the input when initializing the TDX guest.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
1 parent 8d032b6 commit f94f4a9

File tree

3 files changed: +54 −0 lines changed


arch/x86/kvm/vmx/main.c

Lines changed: 1 addition & 0 deletions
@@ -7,6 +7,7 @@
 #include "pmu.h"
 #include "posted_intr.h"
 #include "tdx.h"
+#include "tdx_arch.h"
 
 static __init int vt_hardware_setup(void)
 {

arch/x86/kvm/vmx/tdx.c

Lines changed: 51 additions & 0 deletions
@@ -376,6 +376,19 @@ int tdx_vm_init(struct kvm *kvm)
 	kvm->arch.has_protected_state = true;
 	kvm->arch.has_private_mem = true;
 
+	/*
+	 * TDX has its own limit of maximum vCPUs it can support for all
+	 * TDX guests in addition to KVM_MAX_VCPUS.  TDX module reports
+	 * such limit via the MAX_VCPU_PER_TD global metadata.  In
+	 * practice, it reflects the number of logical CPUs that ALL
+	 * platforms that the TDX module supports can possibly have.
+	 *
+	 * Limit TDX guest's maximum vCPUs to the number of logical CPUs
+	 * the platform has.  Simply forwarding the MAX_VCPU_PER_TD to
+	 * userspace would result in an unpredictable ABI.
+	 */
+	kvm->max_vcpus = min_t(int, kvm->max_vcpus, num_present_cpus());
+
 	/* Place holder for TDX specific logic. */
 	return __tdx_td_init(kvm);
 }
@@ -695,6 +708,7 @@ static int __init __do_tdx_bringup(void)
 
 static int __init __tdx_bringup(void)
 {
+	const struct tdx_sys_info_td_conf *td_conf;
 	int r;
 
 	/*
@@ -727,6 +741,43 @@ static int __init __tdx_bringup(void)
 	if (!(tdx_sysinfo->features.tdx_features0 & MD_FIELD_ID_FEATURES0_TOPOLOGY_ENUM))
 		goto get_sysinfo_err;
 
+	/*
+	 * TDX has its own limit of maximum vCPUs it can support for all
+	 * TDX guests in addition to KVM_MAX_VCPUS.  Userspace needs to
+	 * query TDX guest's maximum vCPUs by checking the
+	 * KVM_CAP_MAX_VCPUS extension on per-VM basis.
+	 *
+	 * TDX module reports such limit via the MAX_VCPU_PER_TD global
+	 * metadata.  Different modules may report different values.
+	 * Some old modules may also not support this metadata (in which
+	 * case this limit is U16_MAX).
+	 *
+	 * In practice, the reported value reflects the maximum logical
+	 * CPUs that ALL the platforms that the module supports can
+	 * possibly have.
+	 *
+	 * Simply forwarding the MAX_VCPU_PER_TD to userspace could
+	 * result in an unpredictable ABI.  KVM instead always advertises
+	 * the number of logical CPUs the platform has as the maximum
+	 * vCPUs for TDX guests.
+	 *
+	 * Make sure MAX_VCPU_PER_TD reported by TDX module is not
+	 * smaller than the number of logical CPUs, otherwise KVM will
+	 * report an unsupported value to userspace.
+	 *
+	 * Note, a platform with TDX enabled in the BIOS cannot support
+	 * physical CPU hotplug, and TDX requires the BIOS has marked
+	 * all logical CPUs in MADT table as enabled.  Just use
+	 * num_present_cpus() for the number of logical CPUs.
+	 */
+	td_conf = &tdx_sysinfo->td_conf;
+	if (td_conf->max_vcpus_per_td < num_present_cpus()) {
+		pr_err("Disable TDX: MAX_VCPU_PER_TD (%u) smaller than number of logical CPUs (%u).\n",
+		       td_conf->max_vcpus_per_td, num_present_cpus());
+		r = -EINVAL;
+		goto get_sysinfo_err;
+	}
+
 	/*
 	 * Leave hardware virtualization enabled after TDX is enabled
 	 * successfully.  TDX CPU hotplug depends on this.

arch/x86/kvm/x86.c

Lines changed: 2 additions & 0 deletions
@@ -4720,6 +4720,8 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		break;
 	case KVM_CAP_MAX_VCPUS:
 		r = KVM_MAX_VCPUS;
+		if (kvm)
+			r = kvm->max_vcpus;
 		break;
 	case KVM_CAP_MAX_VCPU_ID:
 		r = KVM_MAX_VCPU_IDS;
