Commit d0831ed

Revert "KVM: Fix vcpu_array[0] races"
Now that KVM loads from vcpu_array if and only if the target index is valid with respect to online_vcpus, i.e. now that it is safe to erase a not-fully-onlined vCPU entry, revert to storing into vcpu_array before success is guaranteed.

If xa_store() fails, which _should_ be impossible, then putting the vCPU's reference to 'struct kvm' results in a refcounting bug as the vCPU fd has been installed and owns the vCPU's reference.

This was found by inspection, but forcing the xa_store() to fail confirms the problem:

 | Unable to handle kernel paging request at virtual address ffff800080ecd960
 | Call trace:
 |  _raw_spin_lock_irq+0x2c/0x70
 |  kvm_irqfd_release+0x24/0xa0
 |  kvm_vm_release+0x1c/0x38
 |  __fput+0x88/0x2ec
 |  ____fput+0x10/0x1c
 |  task_work_run+0xb0/0xd4
 |  do_exit+0x210/0x854
 |  do_group_exit+0x70/0x98
 |  get_signal+0x6b0/0x73c
 |  do_signal+0xa4/0x11e8
 |  do_notify_resume+0x60/0x12c
 |  el0_svc+0x64/0x68
 |  el0t_64_sync_handler+0x84/0xfc
 |  el0t_64_sync+0x190/0x194
 | Code: b9000909 d503201f 2a1f03e1 52800028 (88e17c08)

Practically speaking, this is a non-issue as xa_store() can't fail, absent a nasty kernel bug. But the code is visually jarring and technically broken.

This reverts commit afb2acb.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Michal Luczaj <mhal@rbox.co>
Cc: Alexander Potapenko <glider@google.com>
Cc: Marc Zyngier <maz@kernel.org>
Reported-by: Will Deacon <will@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20241009150455.1057573-5-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
1 parent 6e2b235 commit d0831ed


virt/kvm/kvm_main.c

Lines changed: 5 additions & 9 deletions
Original file line numberDiff line numberDiff line change
@@ -4128,7 +4128,8 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id)
41284128
}
41294129

41304130
vcpu->vcpu_idx = atomic_read(&kvm->online_vcpus);
4131-
r = xa_reserve(&kvm->vcpu_array, vcpu->vcpu_idx, GFP_KERNEL_ACCOUNT);
4131+
r = xa_insert(&kvm->vcpu_array, vcpu->vcpu_idx, vcpu, GFP_KERNEL_ACCOUNT);
4132+
BUG_ON(r == -EBUSY);
41324133
if (r)
41334134
goto unlock_vcpu_destroy;
41344135

@@ -4143,12 +4144,7 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id)
41434144
kvm_get_kvm(kvm);
41444145
r = create_vcpu_fd(vcpu);
41454146
if (r < 0)
4146-
goto kvm_put_xa_release;
4147-
4148-
if (KVM_BUG_ON(xa_store(&kvm->vcpu_array, vcpu->vcpu_idx, vcpu, 0), kvm)) {
4149-
r = -EINVAL;
4150-
goto kvm_put_xa_release;
4151-
}
4147+
goto kvm_put_xa_erase;
41524148

41534149
/*
41544150
* Pairs with smp_rmb() in kvm_get_vcpu. Store the vcpu
@@ -4163,10 +4159,10 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id)
41634159
kvm_create_vcpu_debugfs(vcpu);
41644160
return r;
41654161

4166-
kvm_put_xa_release:
4162+
kvm_put_xa_erase:
41674163
mutex_unlock(&vcpu->mutex);
41684164
kvm_put_kvm_no_destroy(kvm);
4169-
xa_release(&kvm->vcpu_array, vcpu->vcpu_idx);
4165+
xa_erase(&kvm->vcpu_array, vcpu->vcpu_idx);
41704166
unlock_vcpu_destroy:
41714167
mutex_unlock(&kvm->lock);
41724168
kvm_dirty_ring_free(&vcpu->dirty_ring);
