
Commit 79cc6cd

oupton authored and Marc Zyngier committed
KVM: arm64: nv: Clarify safety of allowing TLBI unmaps to reschedule
There's been a decent amount of attention around unmaps of nested MMUs, and TLBI handling is no exception to this.

Add a comment clarifying why it is safe to reschedule during a TLBI unmap, even without a reference on the MMU in progress.

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Link: https://lore.kernel.org/r/20241007233028.2236133-5-oliver.upton@linux.dev
Signed-off-by: Marc Zyngier <maz@kernel.org>
1 parent c268f20 commit 79cc6cd

File tree

1 file changed: +27 -0 lines changed


arch/arm64/kvm/sys_regs.c

Lines changed: 27 additions & 0 deletions
@@ -2989,6 +2989,29 @@ union tlbi_info {
 static void s2_mmu_unmap_range(struct kvm_s2_mmu *mmu,
                                const union tlbi_info *info)
 {
+        /*
+         * The unmap operation is allowed to drop the MMU lock and block, which
+         * means that @mmu could be used for a different context than the one
+         * currently being invalidated.
+         *
+         * This behavior is still safe, as:
+         *
+         * 1) The vCPU(s) that recycled the MMU are responsible for invalidating
+         *    the entire MMU before reusing it, which still honors the intent
+         *    of a TLBI.
+         *
+         * 2) Until the guest TLBI instruction is 'retired' (i.e. increment PC
+         *    and ERET to the guest), other vCPUs are allowed to use stale
+         *    translations.
+         *
+         * 3) Accidentally unmapping an unrelated MMU context is nonfatal, and
+         *    at worst may cause more aborts for shadow stage-2 fills.
+         *
+         * Dropping the MMU lock also implies that shadow stage-2 fills could
+         * happen behind the back of the TLBI. This is still safe, though, as
+         * the L1 needs to put its stage-2 in a consistent state before doing
+         * the TLBI.
+         */
         kvm_stage2_unmap_range(mmu, info->range.start, info->range.size, true);
 }

@@ -3084,6 +3107,10 @@ static void s2_mmu_unmap_ipa(struct kvm_s2_mmu *mmu,
         max_size = compute_tlb_inval_range(mmu, info->ipa.addr);
         base_addr &= ~(max_size - 1);

+        /*
+         * See comment in s2_mmu_unmap_range() for why this is allowed to
+         * reschedule.
+         */
         kvm_stage2_unmap_range(mmu, base_addr, max_size, true);
 }

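To make the locking argument in the new comment concrete, below is a small stand-alone user-space model of the pattern it describes. This is not kernel code: the names (fake_mmu, recycle_mmu, unmap_range, NR_PAGES) are invented for illustration, and a pthread mutex stands in for the MMU lock that the real unmap path may drop when it blocks. It is only a sketch of why extra clearing after a recycle is harmless, under the assumption that whoever recycles the MMU wipes it completely first.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_PAGES 1024

struct fake_mmu {
        pthread_mutex_t lock;           /* stands in for the MMU lock */
        bool mapped[NR_PAGES];          /* stands in for shadow stage-2 entries */
        unsigned long generation;       /* bumped each time the MMU is recycled */
};

/*
 * Point 1) of the comment: whoever recycles the MMU for a new context must
 * invalidate it entirely first, so nothing stale survives the handover.
 */
static void recycle_mmu(struct fake_mmu *mmu)
{
        pthread_mutex_lock(&mmu->lock);
        for (int i = 0; i < NR_PAGES; i++)
                mmu->mapped[i] = false;
        mmu->generation++;
        pthread_mutex_unlock(&mmu->lock);
}

/*
 * TLBI-style unmap of a sub-range. Like the real unmap, it is allowed to
 * drop the lock partway through and "block", during which the MMU may be
 * recycled and reused for a different context.
 */
static void unmap_range(struct fake_mmu *mmu, int start, int nr)
{
        pthread_mutex_lock(&mmu->lock);
        for (int i = start; i < start + nr; i++) {
                mmu->mapped[i] = false;

                /* Rough analogue of a reschedule point every 64 pages. */
                if ((i & 63) == 63) {
                        pthread_mutex_unlock(&mmu->lock);
                        /* recycle_mmu() may run here on another thread. */
                        pthread_mutex_lock(&mmu->lock);
                }
        }
        pthread_mutex_unlock(&mmu->lock);
        /*
         * If the MMU was recycled while the lock was dropped, the remaining
         * clears hit an unrelated context. That is only ever over-invalidation
         * (point 3): extra refill faults later, never a stale translation.
         */
}

int main(void)
{
        struct fake_mmu mmu = { .lock = PTHREAD_MUTEX_INITIALIZER };

        for (int i = 0; i < NR_PAGES; i++)
                mmu.mapped[i] = true;

        unmap_range(&mmu, 0, 256);      /* the "TLBI" unmap */
        recycle_mmu(&mmu);              /* a later reuse of the MMU */
        printf("generation %lu\n", mmu.generation);
        return 0;
}

The model is deliberately minimal and exercises the two operations from a single thread; the property it tries to capture is only that clearing entries in an MMU that has since been recycled can never leave a stale mapping behind, because the recycler already invalidated the whole MMU before reusing it.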