
Commit 3b8000a

npiggin authored and torvalds committed
mm/vmalloc: huge vmalloc backing pages should be split rather than compound
Huge vmalloc higher-order backing pages were allocated with __GFP_COMP in order to allow the sub-pages to be refcounted by callers such as "remap_vmalloc_page [sic]" (remap_vmalloc_range).

However a similar problem exists for other struct page fields callers use, for example fb_deferred_io_fault() takes a vmalloc'ed page and not only refcounts it but uses ->lru, ->mapping, ->index. This is not compatible with compound sub-pages, and can cause bad page state issues like

  BUG: Bad page state in process swapper/0  pfn:00743
  page:(____ptrval____) refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x743
  flags: 0x7ffff000000000(node=0|zone=0|lastcpupid=0x7ffff)
  raw: 007ffff000000000 c00c00000001d0c8 c00c00000001d0c8 0000000000000000
  raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
  page dumped because: corrupted mapping in tail page
  Modules linked in:
  CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.18.0-rc3-00082-gfc6fff4a7ce1-dirty #2810
  Call Trace:
    dump_stack_lvl+0x74/0xa8 (unreliable)
    bad_page+0x12c/0x170
    free_tail_pages_check+0xe8/0x190
    free_pcp_prepare+0x31c/0x4e0
    free_unref_page+0x40/0x1b0
    __vunmap+0x1d8/0x420
    ...

The correct approach is to use split high-order pages for the huge vmalloc backing. These allow callers to treat them in exactly the same way as individually-allocated order-0 pages.

Link: https://lore.kernel.org/all/14444103-d51b-0fb3-ee63-c3f182f0b546@molgen.mpg.de/
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Cc: Paul Menzel <pmenzel@molgen.mpg.de>
Cc: Song Liu <songliubraving@fb.com>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
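To make the failure mode concrete, here is an illustrative sketch (not code from the kernel tree; the function name demo_track_vmalloc_page and its arguments are invented for this example) of the kind of caller the commit message describes: driver code that looks up a vmalloc'ed buffer's backing page with vmalloc_to_page() and then uses it as an ordinary order-0 page, taking a reference and touching ->index and ->lru. With __GFP_COMP backing, those fields on tail pages belong to the compound-page machinery, which is what triggers the "corrupted mapping in tail page" report above.

#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <linux/list.h>

/*
 * Illustration only: track one page of a vmalloc'ed buffer the way
 * fb_deferred_io-style code does.  Each of these operations assumes the
 * backing page behaves like an independently allocated order-0 page.
 */
static void demo_track_vmalloc_page(void *vaddr, unsigned long offset,
				    struct list_head *pagelist)
{
	struct page *page = vmalloc_to_page(vaddr + offset);

	get_page(page);				/* per-page refcount */
	page->index = offset >> PAGE_SHIFT;	/* per-page index */
	list_add(&page->lru, pagelist);		/* per-page list linkage */
}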
1 parent d569e86 commit 3b8000a

File tree

1 file changed (+21, -15 lines)


mm/vmalloc.c

Lines changed: 21 additions & 15 deletions
@@ -2653,15 +2653,18 @@ static void __vunmap(const void *addr, int deallocate_pages)
 	vm_remove_mappings(area, deallocate_pages);
 
 	if (deallocate_pages) {
-		unsigned int page_order = vm_area_page_order(area);
-		int i, step = 1U << page_order;
+		int i;
 
-		for (i = 0; i < area->nr_pages; i += step) {
+		for (i = 0; i < area->nr_pages; i++) {
 			struct page *page = area->pages[i];
 
 			BUG_ON(!page);
-			mod_memcg_page_state(page, MEMCG_VMALLOC, -step);
-			__free_pages(page, page_order);
+			mod_memcg_page_state(page, MEMCG_VMALLOC, -1);
+			/*
+			 * High-order allocs for huge vmallocs are split, so
+			 * can be freed as an array of order-0 allocations
+			 */
+			__free_pages(page, 0);
 			cond_resched();
 		}
 		atomic_long_sub(area->nr_pages, &nr_vmalloc_pages);
@@ -2914,12 +2917,7 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 			if (nr != nr_pages_request)
 				break;
 		}
-	} else
-		/*
-		 * Compound pages required for remap_vmalloc_page if
-		 * high-order pages.
-		 */
-		gfp |= __GFP_COMP;
+	}
 
 	/* High-order pages or fallback path if "bulk" fails. */
 
@@ -2933,6 +2931,15 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 		page = alloc_pages_node(nid, gfp, order);
 		if (unlikely(!page))
 			break;
+		/*
+		 * Higher order allocations must be able to be treated as
+		 * indepdenent small pages by callers (as they can with
+		 * small-page vmallocs). Some drivers do their own refcounting
+		 * on vmalloc_to_page() pages, some use page->mapping,
+		 * page->lru, etc.
+		 */
+		if (order)
+			split_page(page, order);
 
 		/*
 		 * Careful, we allocate and map page-order pages, but
@@ -2992,11 +2999,10 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 
 	atomic_long_add(area->nr_pages, &nr_vmalloc_pages);
 	if (gfp_mask & __GFP_ACCOUNT) {
-		int i, step = 1U << page_order;
+		int i;
 
-		for (i = 0; i < area->nr_pages; i += step)
-			mod_memcg_page_state(area->pages[i], MEMCG_VMALLOC,
-					step);
+		for (i = 0; i < area->nr_pages; i++)
+			mod_memcg_page_state(area->pages[i], MEMCG_VMALLOC, 1);
 	}
 
 	/*
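As a summary of the technique the diff applies, here is a minimal, hedged sketch of the split-page pattern on its own (the helper names demo_alloc_split and demo_free_split are made up for illustration): allocate a physically contiguous high-order block without __GFP_COMP, call split_page() so every constituent struct page gets its own refcount and usable fields, then later free each sub-page as an order-0 page.

#include <linux/gfp.h>
#include <linux/mm.h>

/* Allocate 1 << order contiguous pages and split them into order-0 pages. */
static struct page *demo_alloc_split(int nid, gfp_t gfp, unsigned int order)
{
	struct page *page = alloc_pages_node(nid, gfp, order);

	if (!page)
		return NULL;

	if (order)
		split_page(page, order);	/* sub-pages now independent */

	return page;
}

/* Each sub-page is freed individually, exactly like an order-0 allocation. */
static void demo_free_split(struct page *page, unsigned int order)
{
	unsigned int i;

	for (i = 0; i < (1U << order); i++)
		__free_pages(page + i, 0);
}

This is the property the first hunk relies on: because vm_area_alloc_pages() splits its high-order allocations, __vunmap() can walk area->pages[] one entry at a time and free each page with __free_pages(page, 0).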
