Commit f945116

hnaz authored and akpm00 committed
mm: page_alloc: remove stale CMA guard code
In the past, movable allocations could be disallowed from CMA through PF_MEMALLOC_PIN. As CMA pages are funneled through the MOVABLE pcplist, this required filtering that cornercase during allocations, such that pinnable allocations wouldn't accidentally get a CMA page.

However, since 8e3560d ("mm: honor PF_MEMALLOC_PIN for all movable pages"), PF_MEMALLOC_PIN automatically excludes __GFP_MOVABLE. Once again, MOVABLE implies CMA is allowed.

Remove the stale filtering code. Also remove a stale comment that was introduced as part of the filtering code, because the filtering let order-0 pages fall through to the buddy allocator. See 1d91df8 ("mm/page_alloc: handle a missing case for memalloc_nocma_{save/restore} APIs") for context. The comment's been obsolete since the introduction of the explicit ALLOC_HIGHATOMIC flag in eb2e2b4 ("mm/page_alloc: explicitly record high-order atomic allocations in alloc_flags").

Link: https://lkml.kernel.org/r/20230824153821.243148-1-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: David Hildenbrand <david@redhat.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Pasha Tatashin <pasha.tatashin@soleen.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
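For context (not part of this commit): the filtering that makes the guard redundant happens at the gfp-mask level. Below is a condensed sketch of current_gfp_context() from include/linux/sched/mm.h as of 8e3560d; comments are added here, and minor details may differ from the exact source:

static inline gfp_t current_gfp_context(gfp_t flags)
{
        unsigned int pflags = READ_ONCE(current->flags);

        if (unlikely(pflags & (PF_MEMALLOC_NOIO | PF_MEMALLOC_NOFS |
                               PF_MEMALLOC_PIN))) {
                /* NOIO implies both no IO and no FS recursion */
                if (pflags & PF_MEMALLOC_NOIO)
                        flags &= ~(__GFP_IO | __GFP_FS);
                else if (pflags & PF_MEMALLOC_NOFS)
                        flags &= ~__GFP_FS;

                /*
                 * Pinned pages must stay put, so a pinning context can
                 * never issue a movable allocation. Stripping
                 * __GFP_MOVABLE also strips CMA eligibility, which is
                 * why MOVABLE once again implies CMA-allowed.
                 */
                if (pflags & PF_MEMALLOC_PIN)
                        flags &= ~__GFP_MOVABLE;
        }
        return flags;
}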
1 parent 12af80f commit f945116

File tree

1 file changed (+4, -17 lines)

mm/page_alloc.c

Lines changed: 4 additions & 17 deletions
@@ -2641,12 +2641,6 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
 	do {
 		page = NULL;
 		spin_lock_irqsave(&zone->lock, flags);
-		/*
-		 * order-0 request can reach here when the pcplist is skipped
-		 * due to non-CMA allocation context. HIGHATOMIC area is
-		 * reserved for high-order atomic allocation, so order-0
-		 * request should skip it.
-		 */
 		if (alloc_flags & ALLOC_HIGHATOMIC)
 			page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
 		if (!page) {
@@ -2780,17 +2774,10 @@ struct page *rmqueue(struct zone *preferred_zone,
 	WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));
 
 	if (likely(pcp_allowed_order(order))) {
-		/*
-		 * MIGRATE_MOVABLE pcplist could have the pages on CMA area and
-		 * we need to skip it when CMA area isn't allowed.
-		 */
-		if (!IS_ENABLED(CONFIG_CMA) || alloc_flags & ALLOC_CMA ||
-		    migratetype != MIGRATE_MOVABLE) {
-			page = rmqueue_pcplist(preferred_zone, zone, order,
-					migratetype, alloc_flags);
-			if (likely(page))
-				goto out;
-		}
+		page = rmqueue_pcplist(preferred_zone, zone, order,
+				migratetype, alloc_flags);
+		if (likely(page))
+			goto out;
 	}
 
 	page = rmqueue_buddy(preferred_zone, zone, order, alloc_flags,
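Why the deleted condition could never be false: ALLOC_CMA is derived from the (already filtered) gfp mask. A sketch of that derivation, roughly as it appears in mm/page_alloc.c around this commit (the helper is named gfp_to_alloc_flags_cma() in kernels of this vintage; treat this as a sketch rather than verbatim source):

static inline unsigned int gfp_to_alloc_flags_cma(gfp_t gfp_mask,
						  unsigned int alloc_flags)
{
#ifdef CONFIG_CMA
	/*
	 * PF_MEMALLOC_PIN already stripped __GFP_MOVABLE earlier, so
	 * any request that is still MOVABLE here may take CMA pages.
	 */
	if (gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
		alloc_flags |= ALLOC_CMA;
#endif
	return alloc_flags;
}

On CONFIG_CMA kernels, migratetype == MIGRATE_MOVABLE therefore implies ALLOC_CMA, and on !CONFIG_CMA kernels the first disjunct was always true; every path through the removed if was taken, so rmqueue() can call rmqueue_pcplist() unconditionally.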
