Commit e9a2bba
Merge tag 'kvm-x86-xen-6.9' of https://github.com/kvm-x86/linux into HEAD
KVM Xen and pfncache changes for 6.9:

 - Rip out the half-baked support for using gfn_to_pfn caches to manage pages
   that are "mapped" into guests via physical addresses.

 - Add support for using gfn_to_pfn caches with only a host virtual address,
   i.e. to bypass the "gfn" stage of the cache. The primary use case is
   overlay pages, where the guest may change the gfn used to reference the
   overlay page, but the backing hva+pfn remains the same.

 - Add an ioctl() to allow mapping Xen's shared_info page using an hva instead
   of a gpa, so that userspace doesn't need to reconfigure and invalidate the
   cache/mapping if the guest changes the gpa (but userspace keeps the
   resolved hva the same).

 - When possible, use a single host TSC value when computing the deadline for
   Xen timers in order to improve the accuracy of the timer emulation.

 - Inject pending upcall events when the vCPU software-enables its APIC to fix
   a bug where an upcall can be lost (and to follow Xen's behavior).

 - Fall back to the slow path instead of warning if "fast" IRQ delivery of Xen
   events fails, e.g. if the guest has aliased xAPIC IDs.

 - Extend gfn_to_pfn_cache's mutex to cover (de)activation (in addition to
   refresh), and drop a now-redundant acquisition of xen_lock (that was
   protecting the shared_info cache) to fix a deadlock due to recursively
   acquiring xen_lock.
2 parents e9025cd + 7a36d68 commit e9a2bba

File tree: 16 files changed (+601, -268 lines)

Documentation/virt/kvm/api.rst

Lines changed: 41 additions & 12 deletions
@@ -372,7 +372,7 @@ The bits in the dirty bitmap are cleared before the ioctl returns, unless
 KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 is enabled. For more information,
 see the description of the capability.

-Note that the Xen shared info page, if configured, shall always be assumed
+Note that the Xen shared_info page, if configured, shall always be assumed
 to be dirty. KVM will not explicitly mark it such.


@@ -5487,8 +5487,9 @@ KVM_PV_ASYNC_CLEANUP_PERFORM
 		__u8 long_mode;
 		__u8 vector;
 		__u8 runstate_update_flag;
-		struct {
+		union {
 			__u64 gfn;
+			__u64 hva;
 		} shared_info;
 		struct {
 			__u32 send_port;
@@ -5516,19 +5517,20 @@ type values:

 KVM_XEN_ATTR_TYPE_LONG_MODE
   Sets the ABI mode of the VM to 32-bit or 64-bit (long mode). This
-  determines the layout of the shared info pages exposed to the VM.
+  determines the layout of the shared_info page exposed to the VM.

 KVM_XEN_ATTR_TYPE_SHARED_INFO
-  Sets the guest physical frame number at which the Xen "shared info"
+  Sets the guest physical frame number at which the Xen shared_info
   page resides. Note that although Xen places vcpu_info for the first
   32 vCPUs in the shared_info page, KVM does not automatically do so
-  and instead requires that KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO be used
-  explicitly even when the vcpu_info for a given vCPU resides at the
-  "default" location in the shared_info page. This is because KVM may
-  not be aware of the Xen CPU id which is used as the index into the
-  vcpu_info[] array, so may know the correct default location.
-
-  Note that the shared info page may be constantly written to by KVM;
+  and instead requires that KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO or
+  KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO_HVA be used explicitly even when
+  the vcpu_info for a given vCPU resides at the "default" location
+  in the shared_info page. This is because KVM may not be aware of
+  the Xen CPU id which is used as the index into the vcpu_info[]
+  array, so may not know the correct default location.
+
+  Note that the shared_info page may be constantly written to by KVM;
   it contains the event channel bitmap used to deliver interrupts to
   a Xen guest, amongst other things. It is exempt from dirty tracking
   mechanisms — KVM will not explicitly mark the page as dirty each
@@ -5537,9 +5539,21 @@ KVM_XEN_ATTR_TYPE_SHARED_INFO
   any vCPU has been running or any event channel interrupts can be
   routed to the guest.

-  Setting the gfn to KVM_XEN_INVALID_GFN will disable the shared info
+  Setting the gfn to KVM_XEN_INVALID_GFN will disable the shared_info
   page.

+KVM_XEN_ATTR_TYPE_SHARED_INFO_HVA
+  If the KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA flag is also set in the
+  Xen capabilities, then this attribute may be used to set the
+  userspace address at which the shared_info page resides, which
+  will always be fixed in the VMM regardless of where it is mapped
+  in guest physical address space. This attribute should be used in
+  preference to KVM_XEN_ATTR_TYPE_SHARED_INFO as it avoids
+  unnecessary invalidation of an internal cache when the page is
+  re-mapped in guest physical address space.
+
+  Setting the hva to zero will disable the shared_info page.
+
 KVM_XEN_ATTR_TYPE_UPCALL_VECTOR
   Sets the exception vector used to deliver Xen event channel upcalls.
   This is the HVM-wide vector injected directly by the hypervisor
@@ -5636,6 +5650,21 @@ KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO
   on dirty logging. Setting the gpa to KVM_XEN_INVALID_GPA will disable
   the vcpu_info.

+KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO_HVA
+  If the KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA flag is also set in the
+  Xen capabilities, then this attribute may be used to set the
+  userspace address of the vcpu_info for a given vCPU. It should
+  only be used when the vcpu_info resides at the "default" location
+  in the shared_info page. In this case it is safe to assume the
+  userspace address will not change, because the shared_info page is
+  an overlay on guest memory and remains at a fixed host address
+  regardless of where it is mapped in guest physical address space,
+  and hence unnecessary invalidation of an internal cache may be
+  avoided if the guest memory layout is modified.
+  If the vcpu_info does not reside at the "default" location then
+  it is not guaranteed to remain at the same host address and
+  hence the aforementioned cache invalidation is required.
+
 KVM_XEN_VCPU_ATTR_TYPE_VCPU_TIME_INFO
   Sets the guest physical address of an additional pvclock structure
   for a given vCPU. This is typically used for guest vsyscall support.

arch/s390/kvm/diag.c

Lines changed: 1 addition & 1 deletion
@@ -102,7 +102,7 @@ static int __diag_page_ref_service(struct kvm_vcpu *vcpu)
 	    parm.token_addr & 7 || parm.zarch != 0x8000000000000000ULL)
 		return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);

-	if (kvm_is_error_gpa(vcpu->kvm, parm.token_addr))
+	if (!kvm_is_gpa_in_memslot(vcpu->kvm, parm.token_addr))
 		return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING);

 	vcpu->arch.pfault_token = parm.token_addr;

arch/s390/kvm/gaccess.c

Lines changed: 7 additions & 7 deletions
@@ -664,7 +664,7 @@ static unsigned long guest_translate(struct kvm_vcpu *vcpu, unsigned long gva,
 	case ASCE_TYPE_REGION1: {
 		union region1_table_entry rfte;

-		if (kvm_is_error_gpa(vcpu->kvm, ptr))
+		if (!kvm_is_gpa_in_memslot(vcpu->kvm, ptr))
 			return PGM_ADDRESSING;
 		if (deref_table(vcpu->kvm, ptr, &rfte.val))
 			return -EFAULT;
@@ -682,7 +682,7 @@ static unsigned long guest_translate(struct kvm_vcpu *vcpu, unsigned long gva,
 	case ASCE_TYPE_REGION2: {
 		union region2_table_entry rste;

-		if (kvm_is_error_gpa(vcpu->kvm, ptr))
+		if (!kvm_is_gpa_in_memslot(vcpu->kvm, ptr))
 			return PGM_ADDRESSING;
 		if (deref_table(vcpu->kvm, ptr, &rste.val))
 			return -EFAULT;
@@ -700,7 +700,7 @@ static unsigned long guest_translate(struct kvm_vcpu *vcpu, unsigned long gva,
 	case ASCE_TYPE_REGION3: {
 		union region3_table_entry rtte;

-		if (kvm_is_error_gpa(vcpu->kvm, ptr))
+		if (!kvm_is_gpa_in_memslot(vcpu->kvm, ptr))
 			return PGM_ADDRESSING;
 		if (deref_table(vcpu->kvm, ptr, &rtte.val))
 			return -EFAULT;
@@ -728,7 +728,7 @@ static unsigned long guest_translate(struct kvm_vcpu *vcpu, unsigned long gva,
 	case ASCE_TYPE_SEGMENT: {
 		union segment_table_entry ste;

-		if (kvm_is_error_gpa(vcpu->kvm, ptr))
+		if (!kvm_is_gpa_in_memslot(vcpu->kvm, ptr))
 			return PGM_ADDRESSING;
 		if (deref_table(vcpu->kvm, ptr, &ste.val))
 			return -EFAULT;
@@ -748,7 +748,7 @@ static unsigned long guest_translate(struct kvm_vcpu *vcpu, unsigned long gva,
 			ptr = ste.fc0.pto * (PAGE_SIZE / 2) + vaddr.px * 8;
 		}
 	}
-	if (kvm_is_error_gpa(vcpu->kvm, ptr))
+	if (!kvm_is_gpa_in_memslot(vcpu->kvm, ptr))
 		return PGM_ADDRESSING;
 	if (deref_table(vcpu->kvm, ptr, &pte.val))
 		return -EFAULT;
@@ -770,7 +770,7 @@ static unsigned long guest_translate(struct kvm_vcpu *vcpu, unsigned long gva,
 		*prot = PROT_TYPE_IEP;
 		return PGM_PROTECTION;
 	}
-	if (kvm_is_error_gpa(vcpu->kvm, raddr.addr))
+	if (!kvm_is_gpa_in_memslot(vcpu->kvm, raddr.addr))
 		return PGM_ADDRESSING;
 	*gpa = raddr.addr;
 	return 0;
@@ -957,7 +957,7 @@ static int guest_range_to_gpas(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
 			return rc;
 		} else {
 			gpa = kvm_s390_real_to_abs(vcpu, ga);
-			if (kvm_is_error_gpa(vcpu->kvm, gpa)) {
+			if (!kvm_is_gpa_in_memslot(vcpu->kvm, gpa)) {
 				rc = PGM_ADDRESSING;
 				prot = PROT_NONE;
 			}

arch/s390/kvm/kvm-s390.c

Lines changed: 2 additions & 2 deletions
@@ -2878,7 +2878,7 @@ static int kvm_s390_vm_mem_op_abs(struct kvm *kvm, struct kvm_s390_mem_op *mop)

 	srcu_idx = srcu_read_lock(&kvm->srcu);

-	if (kvm_is_error_gpa(kvm, mop->gaddr)) {
+	if (!kvm_is_gpa_in_memslot(kvm, mop->gaddr)) {
 		r = PGM_ADDRESSING;
 		goto out_unlock;
 	}
@@ -2940,7 +2940,7 @@ static int kvm_s390_vm_mem_op_cmpxchg(struct kvm *kvm, struct kvm_s390_mem_op *mop)

 	srcu_idx = srcu_read_lock(&kvm->srcu);

-	if (kvm_is_error_gpa(kvm, mop->gaddr)) {
+	if (!kvm_is_gpa_in_memslot(kvm, mop->gaddr)) {
 		r = PGM_ADDRESSING;
 		goto out_unlock;
 	}

arch/s390/kvm/priv.c

Lines changed: 2 additions & 2 deletions
@@ -149,7 +149,7 @@ static int handle_set_prefix(struct kvm_vcpu *vcpu)
 	 * first page, since address is 8k aligned and memory pieces are always
 	 * at least 1MB aligned and have at least a size of 1MB.
 	 */
-	if (kvm_is_error_gpa(vcpu->kvm, address))
+	if (!kvm_is_gpa_in_memslot(vcpu->kvm, address))
 		return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING);

 	kvm_s390_set_prefix(vcpu, address);
@@ -464,7 +464,7 @@ static int handle_test_block(struct kvm_vcpu *vcpu)
 		return kvm_s390_inject_prog_irq(vcpu, &vcpu->arch.pgm);
 	addr = kvm_s390_real_to_abs(vcpu, addr);

-	if (kvm_is_error_gpa(vcpu->kvm, addr))
+	if (!kvm_is_gpa_in_memslot(vcpu->kvm, addr))
 		return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING);
 	/*
 	 * We don't expect errors on modern systems, and do not care

arch/s390/kvm/sigp.c

Lines changed: 1 addition & 1 deletion
@@ -172,7 +172,7 @@ static int __sigp_set_prefix(struct kvm_vcpu *vcpu, struct kvm_vcpu *dst_vcpu,
 	 * first page, since address is 8k aligned and memory pieces are always
 	 * at least 1MB aligned and have at least a size of 1MB.
 	 */
-	if (kvm_is_error_gpa(vcpu->kvm, irq.u.prefix.address)) {
+	if (!kvm_is_gpa_in_memslot(vcpu->kvm, irq.u.prefix.address)) {
 		*reg &= 0xffffffff00000000UL;
 		*reg |= SIGP_STATUS_INVALID_PARAMETER;
 		return SIGP_CC_STATUS_STORED;

arch/x86/include/uapi/asm/kvm.h

Lines changed: 8 additions & 1 deletion
@@ -549,6 +549,7 @@ struct kvm_x86_mce {
 #define KVM_XEN_HVM_CONFIG_EVTCHN_SEND		(1 << 5)
 #define KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG	(1 << 6)
 #define KVM_XEN_HVM_CONFIG_PVCLOCK_TSC_UNSTABLE	(1 << 7)
+#define KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA	(1 << 8)

 struct kvm_xen_hvm_config {
 	__u32 flags;
@@ -567,9 +568,10 @@ struct kvm_xen_hvm_attr {
 		__u8 long_mode;
 		__u8 vector;
 		__u8 runstate_update_flag;
-		struct {
+		union {
 			__u64 gfn;
 #define KVM_XEN_INVALID_GFN ((__u64)-1)
+			__u64 hva;
 		} shared_info;
 		struct {
 			__u32 send_port;
@@ -611,13 +613,16 @@ struct kvm_xen_hvm_attr {
 #define KVM_XEN_ATTR_TYPE_XEN_VERSION		0x4
 /* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG */
 #define KVM_XEN_ATTR_TYPE_RUNSTATE_UPDATE_FLAG	0x5
+/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA */
+#define KVM_XEN_ATTR_TYPE_SHARED_INFO_HVA	0x6

 struct kvm_xen_vcpu_attr {
 	__u16 type;
 	__u16 pad[3];
 	union {
 		__u64 gpa;
 #define KVM_XEN_INVALID_GPA ((__u64)-1)
+		__u64 hva;
 		__u64 pad[8];
 		struct {
 			__u64 state;
@@ -648,6 +653,8 @@ struct kvm_xen_vcpu_attr {
 #define KVM_XEN_VCPU_ATTR_TYPE_VCPU_ID		0x6
 #define KVM_XEN_VCPU_ATTR_TYPE_TIMER		0x7
 #define KVM_XEN_VCPU_ATTR_TYPE_UPCALL_VECTOR	0x8
+/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA */
+#define KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO_HVA	0x9

 /* Secure Encrypted Virtualization command */
 enum sev_cmd_id {

arch/x86/kvm/lapic.c

Lines changed: 4 additions & 1 deletion
@@ -41,6 +41,7 @@
 #include "ioapic.h"
 #include "trace.h"
 #include "x86.h"
+#include "xen.h"
 #include "cpuid.h"
 #include "hyperv.h"
 #include "smm.h"
@@ -502,8 +503,10 @@ static inline void apic_set_spiv(struct kvm_lapic *apic, u32 val)
 	}

 	/* Check if there are APF page ready requests pending */
-	if (enabled)
+	if (enabled) {
 		kvm_make_request(KVM_REQ_APF_READY, apic->vcpu);
+		kvm_xen_sw_enable_lapic(apic->vcpu);
+	}
 }

 static inline void kvm_apic_set_xapic_id(struct kvm_lapic *apic, u8 id)

arch/x86/kvm/x86.c

Lines changed: 60 additions & 8 deletions
@@ -2854,7 +2854,11 @@ static inline u64 vgettsc(struct pvclock_clock *clock, u64 *tsc_timestamp,
 	return v * clock->mult;
 }

-static int do_monotonic_raw(s64 *t, u64 *tsc_timestamp)
+/*
+ * As with get_kvmclock_base_ns(), this counts from boot time, at the
+ * frequency of CLOCK_MONOTONIC_RAW (hence adding gtos->offs_boot).
+ */
+static int do_kvmclock_base(s64 *t, u64 *tsc_timestamp)
 {
 	struct pvclock_gtod_data *gtod = &pvclock_gtod_data;
 	unsigned long seq;
@@ -2873,6 +2877,29 @@ static int do_monotonic_raw(s64 *t, u64 *tsc_timestamp)
 	return mode;
 }

+/*
+ * This calculates CLOCK_MONOTONIC at the time of the TSC snapshot, with
+ * no boot time offset.
+ */
+static int do_monotonic(s64 *t, u64 *tsc_timestamp)
+{
+	struct pvclock_gtod_data *gtod = &pvclock_gtod_data;
+	unsigned long seq;
+	int mode;
+	u64 ns;
+
+	do {
+		seq = read_seqcount_begin(&gtod->seq);
+		ns = gtod->clock.base_cycles;
+		ns += vgettsc(&gtod->clock, tsc_timestamp, &mode);
+		ns >>= gtod->clock.shift;
+		ns += ktime_to_ns(gtod->clock.offset);
+	} while (unlikely(read_seqcount_retry(&gtod->seq, seq)));
+	*t = ns;
+
+	return mode;
+}
+
 static int do_realtime(struct timespec64 *ts, u64 *tsc_timestamp)
 {
 	struct pvclock_gtod_data *gtod = &pvclock_gtod_data;
@@ -2894,18 +2921,42 @@ static int do_realtime(struct timespec64 *ts, u64 *tsc_timestamp)
 	return mode;
 }

-/* returns true if host is using TSC based clocksource */
+/*
+ * Calculates the kvmclock_base_ns (CLOCK_MONOTONIC_RAW + boot time) and
+ * reports the TSC value from which it did so. Returns true if host is
+ * using TSC based clocksource.
+ */
 static bool kvm_get_time_and_clockread(s64 *kernel_ns, u64 *tsc_timestamp)
 {
 	/* checked again under seqlock below */
 	if (!gtod_is_based_on_tsc(pvclock_gtod_data.clock.vclock_mode))
 		return false;

-	return gtod_is_based_on_tsc(do_monotonic_raw(kernel_ns,
-						      tsc_timestamp));
+	return gtod_is_based_on_tsc(do_kvmclock_base(kernel_ns,
+						     tsc_timestamp));
 }

-/* returns true if host is using TSC based clocksource */
+/*
+ * Calculates CLOCK_MONOTONIC and reports the TSC value from which it did
+ * so. Returns true if host is using TSC based clocksource.
+ */
+bool kvm_get_monotonic_and_clockread(s64 *kernel_ns, u64 *tsc_timestamp)
+{
+	/* checked again under seqlock below */
+	if (!gtod_is_based_on_tsc(pvclock_gtod_data.clock.vclock_mode))
+		return false;
+
+	return gtod_is_based_on_tsc(do_monotonic(kernel_ns,
+						 tsc_timestamp));
+}
+
+/*
+ * Calculates CLOCK_REALTIME and reports the TSC value from which it did
+ * so. Returns true if host is using TSC based clocksource.
+ *
+ * DO NOT USE this for anything related to migration. You want CLOCK_TAI
+ * for that.
+ */
 static bool kvm_get_walltime_and_clockread(struct timespec64 *ts,
 					   u64 *tsc_timestamp)
 {
@@ -3152,7 +3203,7 @@ static void kvm_setup_guest_pvclock(struct kvm_vcpu *v,

 	guest_hv_clock->version = ++vcpu->hv_clock.version;

-	mark_page_dirty_in_slot(v->kvm, gpc->memslot, gpc->gpa >> PAGE_SHIFT);
+	kvm_gpc_mark_dirty_in_slot(gpc);
 	read_unlock_irqrestore(&gpc->lock, flags);

 	trace_kvm_pvclock_update(v->vcpu_id, &vcpu->hv_clock);
@@ -4674,7 +4725,8 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		    KVM_XEN_HVM_CONFIG_SHARED_INFO |
 		    KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL |
 		    KVM_XEN_HVM_CONFIG_EVTCHN_SEND |
-		    KVM_XEN_HVM_CONFIG_PVCLOCK_TSC_UNSTABLE;
+		    KVM_XEN_HVM_CONFIG_PVCLOCK_TSC_UNSTABLE |
+		    KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA;
 		if (sched_info_on())
 			r |= KVM_XEN_HVM_CONFIG_RUNSTATE |
 			     KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG;
@@ -12027,7 +12079,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	vcpu->arch.regs_avail = ~0;
 	vcpu->arch.regs_dirty = ~0;

-	kvm_gpc_init(&vcpu->arch.pv_time, vcpu->kvm, vcpu, KVM_HOST_USES_PFN);
+	kvm_gpc_init(&vcpu->arch.pv_time, vcpu->kvm);

 	if (!irqchip_in_kernel(vcpu->kvm) || kvm_vcpu_is_reset_bsp(vcpu))
 		vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
