Commit 9b8615c

KVM: x86: Rely solely on preempted_in_kernel flag for directed yield
Snapshot preempted_in_kernel using kvm_arch_vcpu_in_kernel() so that the
flag is "accurate" (or rather, consistent and deterministic within KVM)
for guests with protected state, and explicitly use preempted_in_kernel
when checking if a vCPU was preempted in kernel mode instead of bouncing
through kvm_arch_vcpu_in_kernel().

Drop the gnarly logic in kvm_arch_vcpu_in_kernel() that redirects to
preempted_in_kernel if the target vCPU is not the "running", i.e. loaded,
vCPU, as the only reason that code existed was for the directed yield
case where KVM wants to check the CPL of a vCPU that may or may not be
loaded on the current pCPU.

Cc: Like Xu <like.xu.linux@gmail.com>
Reviewed-by: Yuan Yao <yuan.yao@intel.com>
Link: https://lore.kernel.org/r/20240110003938.490206-3-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
1 parent 77bcd9e commit 9b8615c

File tree

1 file changed: +2 −6 lines changed

arch/x86/kvm/x86.c

Lines changed: 2 additions & 6 deletions
@@ -5059,8 +5059,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	int idx;
 
 	if (vcpu->preempted) {
-		if (!vcpu->arch.guest_state_protected)
-			vcpu->arch.preempted_in_kernel = !static_call(kvm_x86_get_cpl)(vcpu);
+		vcpu->arch.preempted_in_kernel = kvm_arch_vcpu_in_kernel(vcpu);
 
 		/*
 		 * Take the srcu lock as memslots will be accessed to check the gfn
@@ -13056,7 +13055,7 @@ bool kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu)
 
 bool kvm_arch_vcpu_preempted_in_kernel(struct kvm_vcpu *vcpu)
 {
-	return kvm_arch_vcpu_in_kernel(vcpu);
+	return vcpu->arch.preempted_in_kernel;
 }
 
 bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu)
@@ -13079,9 +13078,6 @@ bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.guest_state_protected)
 		return true;
 
-	if (vcpu != kvm_get_running_vcpu())
-		return vcpu->arch.preempted_in_kernel;
-
 	return static_call(kvm_x86_get_cpl)(vcpu) == 0;
 }
