
Commit d4234d1

Authored by Zhenhua Huang, committed by Will Deacon
arm64: mm: Populate vmemmap at the page level if not section aligned
On the arm64 platform with the 4K base page config, SECTION_SIZE_BITS is set to 27, making one section 128M. The corresponding struct pages which vmemmap points to then occupy 2M. Commit c1cc155 ("arm64: MMU initialisation") optimised vmemmap to populate at the PMD section level, which was suitable initially since the hotplug granule was always one section (128M). However, commit ba72b4c ("mm/sparsemem: support sub-section hotplug") introduced a 2M (SUBSECTION_SIZE) hotplug granule, which broke the existing arm64 assumptions.

The first problem is that if start or end is not aligned to a section boundary, such as when a subsection is hot added, populating the entire section is wasteful.

The next problem is that if we hotplug something that spans part of a 128 MiB section (subsections, let's call it memblock1), then hotplug something that spans another part of the same 128 MiB section (subsections, let's call it memblock2), and subsequently unplug memblock1, vmemmap_free() will clear the entire PMD entry, which also backs memblock2 even though memblock2 is still active.

Assuming hotplug/unplug sizes are guaranteed to be symmetric, fix this the same way as x86-64 does: populate at the page level if start/end is not aligned with a section boundary.

Cc: stable@vger.kernel.org # v5.4+
Fixes: ba72b4c ("mm/sparsemem: support sub-section hotplug")
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Zhenhua Huang <quic_zhenhuah@quicinc.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20250304072700.3405036-1-quic_zhenhuah@quicinc.com
Signed-off-by: Will Deacon <will@kernel.org>
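For reference, the arithmetic behind the 128M/2M figures in the commit message can be sketched as follows. This is a standalone illustration, assuming an arm64 4K-page configuration and a 64-byte struct page; the constant names mirror the kernel's but the values are hard-coded here, not taken from kernel headers:

#include <stdio.h>

/* Illustrative constants, assuming arm64 with 4K pages and a 64-byte
 * struct page; the real values come from the kernel configuration. */
#define SECTION_SIZE_BITS	27	/* one section = 128 MiB */
#define PAGE_SHIFT		12	/* 4 KiB base pages */
#define STRUCT_PAGE_SIZE	64UL	/* assumed sizeof(struct page) */

int main(void)
{
	unsigned long section_size = 1UL << SECTION_SIZE_BITS;		/* 128 MiB */
	unsigned long pages_per_section = section_size >> PAGE_SHIFT;	/* 32768 pages */
	unsigned long vmemmap_per_section = pages_per_section * STRUCT_PAGE_SIZE; /* 2 MiB */

	printf("section size: %lu MiB\n", section_size >> 20);
	printf("vmemmap per section: %lu KiB\n", vmemmap_per_section >> 10);
	return 0;
}

The 2 MiB result is exactly one PMD-sized block mapping with 4K pages, which is why populating the vmemmap at the PMD level lined up neatly with section-granular hotplug, and also why a single PMD teardown in vmemmap_free() can take out the vmemmap of every subsection sharing that section.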
1 parent: eed6bfa


arch/arm64/mm/mmu.c

Lines changed: 4 additions & 1 deletion
@@ -1177,8 +1177,11 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 		struct vmem_altmap *altmap)
 {
 	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
+	/* [start, end] should be within one section */
+	WARN_ON_ONCE(end - start > PAGES_PER_SECTION * sizeof(struct page));
 
-	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
+	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) ||
+	    (end - start < PAGES_PER_SECTION * sizeof(struct page)))
 		return vmemmap_populate_basepages(start, end, node, altmap);
 	else
 		return vmemmap_populate_hugepages(start, end, node, altmap);
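To illustrate what the new condition decides, here is a small user-space model of the check, a sketch only: PAGES_PER_SECTION and the 64-byte struct page size are assumed values for arm64 with 4K pages, and use_basepages() is a hypothetical helper written for this example, not a kernel function:

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the kernel constants on arm64 with 4K pages (assumed). */
#define PAGES_PER_SECTION	(1UL << 15)	/* 128 MiB section / 4 KiB page */
#define STRUCT_PAGE_SIZE	64UL		/* stand-in for sizeof(struct page) */

/* Mirrors the decision added by the patch: any vmemmap range smaller than a
 * full section's worth falls back to base-page population. */
static bool use_basepages(unsigned long vmemmap_len)
{
	return vmemmap_len < PAGES_PER_SECTION * STRUCT_PAGE_SIZE;
}

int main(void)
{
	/* A 2 MiB subsection hotplug needs 512 struct pages = 32 KiB of vmemmap. */
	unsigned long subsection_vmemmap = 512 * STRUCT_PAGE_SIZE;
	/* A full 128 MiB section hotplug needs the full 2 MiB of vmemmap. */
	unsigned long section_vmemmap = PAGES_PER_SECTION * STRUCT_PAGE_SIZE;

	printf("subsection hotplug -> basepages? %d\n", use_basepages(subsection_vmemmap)); /* 1 */
	printf("full section hotplug -> basepages? %d\n", use_basepages(section_vmemmap));  /* 0 */
	return 0;
}

Together with the WARN_ON_ONCE above, which flags any [start, end] larger than one section's vmemmap, the only range that still takes the hugepage path is a full section-sized one, so a later partial unplug can no longer free a PMD block mapping that another still-active memblock depends on.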
