Commit e6e07b6

T.J. Mercier authored; akpm00 committed
alloc_tag: handle incomplete bulk allocations in vm_module_tags_populate
alloc_pages_bulk_node() may partially succeed and allocate fewer than the
requested nr_pages. There are several conditions under which this can
occur, but we have encountered the case where CONFIG_PAGE_OWNER is
enabled, causing all bulk allocations to always fall back to single page
allocations due to commit 187ad46 ("mm/page_alloc: avoid page allocator
recursion with pagesets.lock held").

Currently vm_module_tags_populate() immediately fails when
alloc_pages_bulk_node() returns fewer than the requested number of pages.
When this happens, memory allocation profiling gets disabled, for example:

[   14.297583] [9:  modprobe:  465] Failed to allocate memory for allocation tags in the module scsc_wlan. Memory allocation profiling is disabled!
[   14.299339] [9:  modprobe:  465] modprobe: Failed to insmod '/vendor/lib/modules/scsc_wlan.ko' with args '': Out of memory

This patch causes vm_module_tags_populate() to retry bulk allocations for
the remaining memory instead of failing immediately, which avoids the
disablement of memory allocation profiling.

Link: https://lkml.kernel.org/r/20250409225111.3770347-1-tjmercier@google.com
Fixes: 0f9b685 ("alloc_tag: populate memory for module tags as needed")
Signed-off-by: T.J. Mercier <tjmercier@google.com>
Reported-by: Janghyuck Kim <janghyuck.kim@samsung.com>
Acked-by: Suren Baghdasaryan <surenb@google.com>
Cc: Kent Overstreet <kent.overstreet@linux.dev>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
1 parent 0aa8dbe commit e6e07b6

1 file changed: +12, -3 lines

lib/alloc_tag.c

Lines changed: 12 additions & 3 deletions
@@ -422,11 +422,20 @@ static int vm_module_tags_populate(void)
 		unsigned long old_shadow_end = ALIGN(phys_end, MODULE_ALIGN);
 		unsigned long new_shadow_end = ALIGN(new_end, MODULE_ALIGN);
 		unsigned long more_pages;
-		unsigned long nr;
+		unsigned long nr = 0;
 
 		more_pages = ALIGN(new_end - phys_end, PAGE_SIZE) >> PAGE_SHIFT;
-		nr = alloc_pages_bulk_node(GFP_KERNEL | __GFP_NOWARN,
-					   NUMA_NO_NODE, more_pages, next_page);
+		while (nr < more_pages) {
+			unsigned long allocated;
+
+			allocated = alloc_pages_bulk_node(GFP_KERNEL | __GFP_NOWARN,
+					NUMA_NO_NODE, more_pages - nr, next_page + nr);
+
+			if (!allocated)
+				break;
+			nr += allocated;
+		}
+
 		if (nr < more_pages ||
 		    vmap_pages_range(phys_end, phys_end + (nr << PAGE_SHIFT), PAGE_KERNEL,
 				     next_page, PAGE_SHIFT) < 0) {
