
Commit 7b08675

hnaz authored and akpm00 committed
mm: page_alloc: fix CMA and HIGHATOMIC landing on the wrong buddy list
Commit 4b23a68 ("mm/page_alloc: protect PCP lists with a spinlock") bypasses the pcplist on lock contention and returns the page directly to the buddy list of the page's migratetype.

For pages that don't have their own pcplist, such as CMA and HIGHATOMIC, the migratetype is temporarily updated such that the page can hitch a ride on the MOVABLE pcplist. Their true type is later reassessed when flushing in free_pcppages_bulk().

However, when lock contention is detected after the type was already overridden, the bypass will then put the page on the wrong buddy list.

Once on the MOVABLE buddy list, the page becomes eligible for fallbacks and even stealing. In the case of HIGHATOMIC, otherwise ineligible allocations can dip into the highatomic reserves. In the case of CMA, the page can be lost from the CMA region permanently.

Use a separate pcpmigratetype variable for the pcplist override. Use the original migratetype when going directly to the buddy. This fixes the bug and should make the intentions more obvious in the code.

Originally sent here to address the HIGHATOMIC case:
https://lore.kernel.org/lkml/20230821183733.106619-4-hannes@cmpxchg.org/

Changelog updated in response to the CMA-specific bug report.

[mgorman@techsingularity.net: updated changelog]
Link: https://lkml.kernel.org/r/20230911181108.GA104295@cmpxchg.org
Fixes: 4b23a68 ("mm/page_alloc: protect PCP lists with a spinlock")
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reported-by: Joe Liu <joe.liu@mediatek.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
1 parent e72590f · commit 7b08675

File tree

1 file changed: +6 −6 lines changed


mm/page_alloc.c

Lines changed: 6 additions & 6 deletions
@@ -2400,32 +2400,32 @@ void free_unref_page(struct page *page, unsigned int order)
 	struct per_cpu_pages *pcp;
 	struct zone *zone;
 	unsigned long pfn = page_to_pfn(page);
-	int migratetype;
+	int migratetype, pcpmigratetype;
 
 	if (!free_unref_page_prepare(page, pfn, order))
 		return;
 
 	/*
 	 * We only track unmovable, reclaimable and movable on pcp lists.
 	 * Place ISOLATE pages on the isolated list because they are being
-	 * offlined but treat HIGHATOMIC as movable pages so we can get those
-	 * areas back if necessary. Otherwise, we may have to free
+	 * offlined but treat HIGHATOMIC and CMA as movable pages so we can
+	 * get those areas back if necessary. Otherwise, we may have to free
 	 * excessively into the page allocator
 	 */
-	migratetype = get_pcppage_migratetype(page);
+	migratetype = pcpmigratetype = get_pcppage_migratetype(page);
 	if (unlikely(migratetype >= MIGRATE_PCPTYPES)) {
 		if (unlikely(is_migrate_isolate(migratetype))) {
 			free_one_page(page_zone(page), page, pfn, order, migratetype, FPI_NONE);
 			return;
 		}
-		migratetype = MIGRATE_MOVABLE;
+		pcpmigratetype = MIGRATE_MOVABLE;
 	}
 
 	zone = page_zone(page);
 	pcp_trylock_prepare(UP_flags);
 	pcp = pcp_spin_trylock(zone->per_cpu_pageset);
 	if (pcp) {
-		free_unref_page_commit(zone, pcp, page, migratetype, order);
+		free_unref_page_commit(zone, pcp, page, pcpmigratetype, order);
 		pcp_spin_unlock(pcp);
 	} else {
 		free_one_page(zone, page, pfn, order, migratetype, FPI_NONE);
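
For readers following along outside the kernel tree, below is a minimal, self-contained C sketch of the flow the patch establishes: the MOVABLE override lives in a separate pcpmigratetype, while the unmodified migratetype is used when the pcp lock cannot be taken and the page goes straight to the buddy lists. The MODEL_* enum values and model_* helpers are made up for illustration and are not the kernel's definitions.

/*
 * Stand-alone model of the fixed free path. MODEL_* and model_* names are
 * illustrative stand-ins, not kernel definitions.
 */
#include <stdbool.h>
#include <stdio.h>

enum model_migratetype {
	MODEL_UNMOVABLE,
	MODEL_MOVABLE,
	MODEL_RECLAIMABLE,
	MODEL_PCPTYPES,		/* types at or above this have no pcplist of their own */
	MODEL_HIGHATOMIC,
	MODEL_CMA,
};

static void model_free_to_pcplist(int type) { printf("pcplist <- type %d\n", type); }
static void model_free_to_buddy(int type)   { printf("buddy   <- type %d\n", type); }

/*
 * Models free_unref_page() after the patch: the pcplist override is kept in
 * pcpmigratetype; the original migratetype is used for the buddy fallback
 * when the pcp lock is contended.
 */
static void model_free_unref_page(int migratetype, bool pcp_lock_acquired)
{
	int pcpmigratetype = migratetype;

	if (migratetype >= MODEL_PCPTYPES)
		pcpmigratetype = MODEL_MOVABLE;	/* hitch a ride on the MOVABLE pcplist */

	if (pcp_lock_acquired)
		model_free_to_pcplist(pcpmigratetype);
	else
		model_free_to_buddy(migratetype);	/* original type, not the override */
}

int main(void)
{
	/* Contended case for a CMA page: it now lands on the CMA buddy list. */
	model_free_unref_page(MODEL_CMA, false);
	/* Uncontended case: it still rides the MOVABLE pcplist. */
	model_free_unref_page(MODEL_CMA, true);
	return 0;
}

With the pre-patch code, the contended case would have reused the already-overridden MOVABLE type and put the CMA (or HIGHATOMIC) page on the MOVABLE buddy list, which is exactly the misplacement the commit message describes.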
