
Commit 27bd5fd

nikunjad authored and bonzini committed
KVM: SEV-ES: Prevent MSR access post VMSA encryption
KVM currently allows userspace to read/write MSRs even after the VMSA is encrypted. This can cause unintentional issues if MSR access has side-effects. For example, while migrating a guest, userspace could attempt to migrate MSR_IA32_DEBUGCTLMSR and end up unintentionally disabling LBRV on the target. Fix this by preventing access to those MSRs which are context switched via the VMSA, once the VMSA is encrypted.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
Message-ID: <20240531044644.768-2-ravi.bangoria@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
1 parent b4bd556 commit 27bd5fd
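
For context, the observable effect of this change can be sketched from the userspace side. The following is a minimal, hypothetical example, not part of this commit: it assumes vcpu_fd refers to a vCPU of an SEV-ES guest whose launch has already completed (i.e. the VMSA is encrypted), and the helper name try_set_debugctl plus the locally defined MSR index are illustrative only. With the new check in place, a host-initiated KVM_SET_MSRS for a VMSA-resident MSR such as MSR_IA32_DEBUGCTLMSR is expected to be refused, so the ioctl reports fewer MSRs processed than requested rather than the write taking effect.

/*
 * Hypothetical userspace sketch (not from this commit), assuming vcpu_fd is
 * an already-launched SEV-ES vCPU: a write to MSR_IA32_DEBUGCTLMSR, which is
 * context switched via the VMSA, should now be rejected by svm_set_msr(),
 * so KVM_SET_MSRS reports 0 of 1 MSRs processed.
 */
#include <linux/kvm.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>

#define MSR_IA32_DEBUGCTLMSR 0x000001d9	/* architectural DEBUGCTL index */

static int try_set_debugctl(int vcpu_fd)
{
	struct {
		struct kvm_msrs hdr;
		struct kvm_msr_entry entry;
	} req;
	int processed;

	memset(&req, 0, sizeof(req));
	req.hdr.nmsrs = 1;
	req.entry.index = MSR_IA32_DEBUGCTLMSR;
	req.entry.data = 0;	/* would clear DEBUGCTL if it reached the guest */

	/* KVM_SET_MSRS returns the number of MSRs actually processed. */
	processed = ioctl(vcpu_fd, KVM_SET_MSRS, &req);
	if (processed < 1)
		printf("DEBUGCTL write refused post VMSA encryption (%d of 1 processed)\n",
		       processed);
	return processed;
}

Before this patch, the same host-initiated write could go through and, as the commit message notes for the migration case, unintentionally disable LBRV on the target.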

File tree: 1 file changed, +18 -0 lines changed


arch/x86/kvm/svm/svm.c

Lines changed: 18 additions & 0 deletions
@@ -2822,10 +2822,24 @@ static int svm_get_msr_feature(struct kvm_msr_entry *msr)
 	return 0;
 }
 
+static bool
+sev_es_prevent_msr_access(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+{
+	return sev_es_guest(vcpu->kvm) &&
+	       vcpu->arch.guest_state_protected &&
+	       svm_msrpm_offset(msr_info->index) != MSR_INVALID &&
+	       !msr_write_intercepted(vcpu, msr_info->index);
+}
+
 static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
+	if (sev_es_prevent_msr_access(vcpu, msr_info)) {
+		msr_info->data = 0;
+		return -EINVAL;
+	}
+
 	switch (msr_info->index) {
 	case MSR_AMD64_TSC_RATIO:
 		if (!msr_info->host_initiated &&
@@ -2976,6 +2990,10 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 
 	u32 ecx = msr->index;
 	u64 data = msr->data;
+
+	if (sev_es_prevent_msr_access(vcpu, msr))
+		return -EINVAL;
+
 	switch (ecx) {
 	case MSR_AMD64_TSC_RATIO: