
Commit c30e000

seanjc authored and pbonzini committed
KVM: x86/mmu: Harden new PGD against roots without shadow pages
Harden kvm_mmu_new_pgd() against NULL pointer dereference bugs by sanity checking that the target root has an associated shadow page prior to dereferencing said shadow page.

The code in question is guaranteed to only see roots with shadow pages, as fast_pgd_switch() explicitly frees the current root if it doesn't have a shadow page, i.e. is a PAE root, and that in turn prevents invalid roots from being cached, but that's all very subtle.

Link: https://lore.kernel.org/r/20230729005200.1057358-3-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
1 parent c5f2d56 commit c30e000

File tree

1 file changed

+19
-6
lines changed


arch/x86/kvm/mmu/mmu.c

Lines changed: 19 additions & 6 deletions
@@ -4527,9 +4527,19 @@ static void nonpaging_init_context(struct kvm_mmu *context)
 static inline bool is_root_usable(struct kvm_mmu_root_info *root, gpa_t pgd,
 				  union kvm_mmu_page_role role)
 {
-	return (role.direct || pgd == root->pgd) &&
-	       VALID_PAGE(root->hpa) &&
-	       role.word == root_to_sp(root->hpa)->role.word;
+	struct kvm_mmu_page *sp;
+
+	if (!VALID_PAGE(root->hpa))
+		return false;
+
+	if (!role.direct && pgd != root->pgd)
+		return false;
+
+	sp = root_to_sp(root->hpa);
+	if (WARN_ON_ONCE(!sp))
+		return false;
+
+	return role.word == sp->role.word;
 }
 
 /*
@@ -4649,9 +4659,12 @@ void kvm_mmu_new_pgd(struct kvm_vcpu *vcpu, gpa_t new_pgd)
 	 * If this is a direct root page, it doesn't have a write flooding
 	 * count.  Otherwise, clear the write flooding count.
 	 */
-	if (!new_role.direct)
-		__clear_sp_write_flooding_count(
-				root_to_sp(vcpu->arch.mmu->root.hpa));
+	if (!new_role.direct) {
+		struct kvm_mmu_page *sp = root_to_sp(vcpu->arch.mmu->root.hpa);
+
+		if (!WARN_ON_ONCE(!sp))
+			__clear_sp_write_flooding_count(sp);
+	}
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_new_pgd);
