
Commit df4af9f

rpedgeco authored and bonzini committed
KVM: x86/tdp_mmu: Don't zap valid mirror roots in kvm_tdp_mmu_zap_all()
Don't zap valid mirror roots in kvm_tdp_mmu_zap_all(), which in effect is only direct roots (invalid and valid). For TDX, kvm_tdp_mmu_zap_all() is only called during MMU notifier release. Since, mirrored EPT comes from guest mem, it will never be mapped to userspace, and won't apply. But in addition to be unnecessary, mirrored EPT is cleaned up in a special way during VM destruction. Pass the KVM_INVALID_ROOTS bit into __for_each_tdp_mmu_root_yield_safe() as well, to clean up invalid direct roots, as is the current behavior. While at it, remove an obsolete reference to work item-based zapping. Co-developed-by: Yan Zhao <yan.y.zhao@intel.com> Signed-off-by: Yan Zhao <yan.y.zhao@intel.com> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Message-ID: <20240718211230.1492011-18-rick.p.edgecombe@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
1 parent a89ecbb commit df4af9f

File tree

1 file changed: 9 additions, 7 deletions


arch/x86/kvm/mmu/tdp_mmu.c

Lines changed: 9 additions & 7 deletions
@@ -999,19 +999,21 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm)
 	struct kvm_mmu_page *root;
 
 	/*
-	 * Zap all roots, including invalid roots, as all SPTEs must be dropped
-	 * before returning to the caller. Zap directly even if the root is
-	 * also being zapped by a worker. Walking zapped top-level SPTEs isn't
-	 * all that expensive and mmu_lock is already held, which means the
-	 * worker has yielded, i.e. flushing the work instead of zapping here
-	 * isn't guaranteed to be any faster.
+	 * Zap all direct roots, including invalid direct roots, as all direct
+	 * SPTEs must be dropped before returning to the caller. For TDX, mirror
+	 * roots don't need handling in response to the mmu notifier (the caller).
+	 *
+	 * Zap directly even if the root is also being zapped by a concurrent
+	 * "fast zap". Walking zapped top-level SPTEs isn't all that expensive
+	 * and mmu_lock is already held, which means the other thread has yielded.
 	 *
 	 * A TLB flush is unnecessary, KVM zaps everything if and only the VM
 	 * is being destroyed or the userspace VMM has exited. In both cases,
 	 * KVM_RUN is unreachable, i.e. no vCPUs will ever service the request.
 	 */
 	lockdep_assert_held_write(&kvm->mmu_lock);
-	for_each_tdp_mmu_root_yield_safe(kvm, root)
+	__for_each_tdp_mmu_root_yield_safe(kvm, root, -1,
+					   KVM_DIRECT_ROOTS | KVM_INVALID_ROOTS)
 		tdp_mmu_zap_root(kvm, root, false);
 }
