
Commit 2c3412e

rpedgeco authored and bonzini committed
KVM: x86/mmu: Prevent aliased memslot GFNs
Add a few sanity checks to prevent memslot GFNs from ever having alias bits set.

Like other CoCo technologies, TDX has the concept of private and shared memory. For TDX, the private and shared mappings are managed on separate EPT roots. The private half is managed indirectly through calls into a protected runtime environment called the TDX module, while the shared half is managed within KVM in normal page tables.

For TDX, the shared half will be mapped in the higher alias, with a "shared bit" set in the GPA. However, KVM will still manage it with the same memslots as the private half. This means memslot lookups and zapping operations will be provided with a GFN without the shared bit set. If these memslot GFNs ever had the bit that selects between the two aliases set, it could lead to unexpected behavior in the complicated code that directs faulting or zapping operations between the roots that map the two aliases.

As a safety measure, prevent memslots from being set at a GFN range that contains the alias bit. Also, check for aliased GFNs in kvm_faultin_pfn() on the fault path. This latter check does less today, as the alias bits are specifically stripped from the GFN being checked; however, future code could possibly call into the fault handler in a way that skips this stripping. Since kvm_faultin_pfn() now has many references to vcpu->kvm, extract it to a local variable.

Link: https://lore.kernel.org/kvm/ZpbKqG_ZhCWxl-Fc@google.com/
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Message-ID: <20240718211230.1492011-19-rick.p.edgecombe@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
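The aliasing scheme the commit message describes can be sketched in a few lines of standalone C. This is not KVM code: the bit position (bit 47) and the helper names are assumptions chosen for illustration; the real alias mask is per-VM and comes from kvm_gfn_direct_bits().

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t gfn_t;

/* Hypothetical alias position: the TDX "shared bit" as seen in the GFN. */
#define SHARED_BIT ((gfn_t)1 << 47)

/* The shared mapping of a private GFN lives in the higher alias. */
static gfn_t shared_alias(gfn_t gfn) { return gfn | SHARED_BIT; }

/* Memslots are keyed on the GFN with the alias bit stripped. */
static gfn_t memslot_gfn(gfn_t gfn) { return gfn & ~SHARED_BIT; }

/* An alias bit leaking into a memslot GFN is exactly what the new
 * sanity checks are there to catch. */
static bool is_gfn_alias(gfn_t gfn) { return gfn & SHARED_BIT; }
```

So a shared-alias GPA and its private counterpart resolve to the same memslot GFN, and only the stripped form is a valid lookup key.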
1 parent df4af9f commit 2c3412e

File tree: 3 files changed, 15 additions(+), 3 deletions(-)

arch/x86/kvm/mmu.h

Lines changed: 5 additions & 0 deletions
@@ -313,4 +313,9 @@ static inline bool kvm_is_addr_direct(struct kvm *kvm, gpa_t gpa)
 
 	return !gpa_direct_bits || (gpa & gpa_direct_bits);
 }
+
+static inline bool kvm_is_gfn_alias(struct kvm *kvm, gfn_t gfn)
+{
+	return gfn & kvm_gfn_direct_bits(kvm);
+}
 #endif
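The new helper can be exercised outside the kernel with a mocked-up struct kvm. The struct layout and the kvm_gfn_direct_bits() stand-in below are assumptions for illustration (the real mask is derived from per-VM state; it is zero on VMs without a private/shared split):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t gfn_t;

/* Mock: the real struct kvm is far larger; only the alias mask matters here. */
struct kvm {
	gfn_t gfn_direct_bits;	/* stand-in for the per-VM alias mask */
};

static gfn_t kvm_gfn_direct_bits(struct kvm *kvm)
{
	return kvm->gfn_direct_bits;
}

/* Same body as the helper added to arch/x86/kvm/mmu.h above. */
static bool kvm_is_gfn_alias(struct kvm *kvm, gfn_t gfn)
{
	return gfn & kvm_gfn_direct_bits(kvm);
}
```

Note that on a VM whose mask is zero the helper is always false, so the new sanity checks are no-ops outside TDX-style VMs.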

arch/x86/kvm/mmu/mmu.c

Lines changed: 7 additions & 3 deletions
@@ -4391,8 +4391,12 @@ static int kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu,
 				  struct kvm_page_fault *fault, unsigned int access)
 {
 	struct kvm_memory_slot *slot = fault->slot;
+	struct kvm *kvm = vcpu->kvm;
 	int ret;
 
+	if (KVM_BUG_ON(kvm_is_gfn_alias(kvm, fault->gfn), kvm))
+		return -EFAULT;
+
 	/*
 	 * Note that the mmu_invalidate_seq also serves to detect a concurrent
 	 * change in attributes.  is_page_fault_stale() will detect an
@@ -4406,7 +4410,7 @@ static int kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu,
 	 * Now that we have a snapshot of mmu_invalidate_seq we can check for a
 	 * private vs. shared mismatch.
 	 */
-	if (fault->is_private != kvm_mem_is_private(vcpu->kvm, fault->gfn)) {
+	if (fault->is_private != kvm_mem_is_private(kvm, fault->gfn)) {
 		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
 		return -EFAULT;
 	}
@@ -4468,7 +4472,7 @@ static int kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu,
 	 * *guaranteed* to need to retry, i.e. waiting until mmu_lock is held
 	 * to detect retry guarantees the worst case latency for the vCPU.
 	 */
-	if (mmu_invalidate_retry_gfn_unsafe(vcpu->kvm, fault->mmu_seq, fault->gfn))
+	if (mmu_invalidate_retry_gfn_unsafe(kvm, fault->mmu_seq, fault->gfn))
 		return RET_PF_RETRY;
 
 	ret = __kvm_mmu_faultin_pfn(vcpu, fault);
@@ -4488,7 +4492,7 @@ static int kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu,
 	 * overall cost of failing to detect the invalidation until after
 	 * mmu_lock is acquired.
 	 */
-	if (mmu_invalidate_retry_gfn_unsafe(vcpu->kvm, fault->mmu_seq, fault->gfn)) {
+	if (mmu_invalidate_retry_gfn_unsafe(kvm, fault->mmu_seq, fault->gfn)) {
 		kvm_mmu_finish_page_fault(vcpu, fault, RET_PF_RETRY);
 		return RET_PF_RETRY;
 	}

arch/x86/kvm/x86.c

Lines changed: 3 additions & 0 deletions
@@ -13010,6 +13010,9 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 	if ((new->base_gfn + new->npages - 1) > kvm_mmu_max_gfn())
 		return -EINVAL;
 
+	if (kvm_is_gfn_alias(kvm, new->base_gfn + new->npages - 1))
+		return -EINVAL;
+
 	return kvm_alloc_memslot_metadata(kvm, new);
 }
 
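The x86.c hunk checks only the memslot's last GFN, which is sufficient: the alias bit is a high bit, so any GFN with it set is numerically larger than any GFN without it, and every GFN in the slot is at most base_gfn + npages - 1. A small sketch (assumed bit position, illustrative function name) makes the reasoning concrete:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t gfn_t;

/* Hypothetical alias position for illustration. */
#define ALIAS_BIT ((gfn_t)1 << 47)

/* Mirrors the kvm_arch_prepare_memory_region() check: if the slot's
 * highest GFN is below the alias bit, every GFN in the slot is too,
 * so testing the last GFN alone covers the whole range. */
static bool range_has_alias(gfn_t base_gfn, uint64_t npages)
{
	return (base_gfn + npages - 1) & ALIAS_BIT;
}
```

A slot ending just below the bit passes, while a slot whose last GFN crosses into the alias region is rejected.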
