
Commit 4bbef7e

sean-jc authored and bonzini committed
KVM: SVM: Simplify and harden helper to flush SEV guest page(s)
Rework sev_flush_guest_memory() to explicitly handle only a single page, and harden it to fall back to WBINVD if VM_PAGE_FLUSH fails. Per-page flushing is currently used only to flush the VMSA, and in its current form, the helper is completely broken with respect to flushing actual guest memory, i.e. won't work correctly for an arbitrary memory range.

VM_PAGE_FLUSH takes a host virtual address, and is subject to normal page walks, i.e. will fault if the address is not present in the host page tables or does not have the correct permissions. Current AMD CPUs also do not honor SMAP overrides (undocumented in kernel versions of the APM), so passing in a userspace address is completely out of the question. In other words, KVM would need to manually walk the host page tables to get the pfn, ensure the pfn is stable, and then use the direct map to invoke VM_PAGE_FLUSH. And the latter might not even work, e.g. if userspace is particularly evil/clever and backs the guest with Secret Memory (which unmaps memory from the direct map).

Signed-off-by: Sean Christopherson <seanjc@google.com>
Fixes: add5e2f ("KVM: SVM: Add support for the SEV-ES VMSA")
Reported-by: Mingwei Zhang <mizhang@google.com>
Cc: stable@vger.kernel.org
Signed-off-by: Mingwei Zhang <mizhang@google.com>
Message-Id: <20220421031407.2516575-2-mizhang@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
1 parent 266a19a commit 4bbef7e

File tree

1 file changed (+20, −34 lines)


arch/x86/kvm/svm/sev.c

Lines changed: 20 additions & 34 deletions
@@ -2226,9 +2226,18 @@ int sev_cpu_init(struct svm_cpu_data *sd)
  * Pages used by hardware to hold guest encrypted state must be flushed before
  * returning them to the system.
  */
-static void sev_flush_guest_memory(struct vcpu_svm *svm, void *va,
-				   unsigned long len)
+static void sev_flush_encrypted_page(struct kvm_vcpu *vcpu, void *va)
 {
+	int asid = to_kvm_svm(vcpu->kvm)->sev_info.asid;
+
+	/*
+	 * Note!  The address must be a kernel address, as regular page walk
+	 * checks are performed by VM_PAGE_FLUSH, i.e. operating on a user
+	 * address is non-deterministic and unsafe.  This function deliberately
+	 * takes a pointer to deter passing in a user address.
+	 */
+	unsigned long addr = (unsigned long)va;
+
 	/*
 	 * If hardware enforced cache coherency for encrypted mappings of the
 	 * same physical page is supported, nothing to do.
@@ -2237,40 +2246,16 @@ static void sev_flush_guest_memory(struct vcpu_svm *svm, void *va,
 		return;
 
 	/*
-	 * If the VM Page Flush MSR is supported, use it to flush the page
-	 * (using the page virtual address and the guest ASID).
+	 * VM Page Flush takes a host virtual address and a guest ASID.  Fall
+	 * back to WBINVD if this faults so as not to make any problems worse
+	 * by leaving stale encrypted data in the cache.
 	 */
-	if (boot_cpu_has(X86_FEATURE_VM_PAGE_FLUSH)) {
-		struct kvm_sev_info *sev;
-		unsigned long va_start;
-		u64 start, stop;
+	if (WARN_ON_ONCE(wrmsrl_safe(MSR_AMD64_VM_PAGE_FLUSH, addr | asid)))
+		goto do_wbinvd;
 
-		/* Align start and stop to page boundaries. */
-		va_start = (unsigned long)va;
-		start = (u64)va_start & PAGE_MASK;
-		stop = PAGE_ALIGN((u64)va_start + len);
-
-		if (start < stop) {
-			sev = &to_kvm_svm(svm->vcpu.kvm)->sev_info;
-
-			while (start < stop) {
-				wrmsrl(MSR_AMD64_VM_PAGE_FLUSH,
-				       start | sev->asid);
-
-				start += PAGE_SIZE;
-			}
+	return;
 
-			return;
-		}
-
-		WARN(1, "Address overflow, using WBINVD\n");
-	}
-
-	/*
-	 * Hardware should always have one of the above features,
-	 * but if not, use WBINVD and issue a warning.
-	 */
-	WARN_ONCE(1, "Using WBINVD to flush guest memory\n");
+do_wbinvd:
 	wbinvd_on_all_cpus();
 }
 
@@ -2284,7 +2269,8 @@ void sev_free_vcpu(struct kvm_vcpu *vcpu)
 	svm = to_svm(vcpu);
 
 	if (vcpu->arch.guest_state_protected)
-		sev_flush_guest_memory(svm, svm->sev_es.vmsa, PAGE_SIZE);
+		sev_flush_encrypted_page(vcpu, svm->sev_es.vmsa);
+
 	__free_page(virt_to_page(svm->sev_es.vmsa));
 
 	if (svm->sev_es.ghcb_sa_free)
