
Commit 953f064

Li Xinhai authored and torvalds committed
mm/hugetlb: try preferred node first when alloc gigantic page from cma
Since commit cf11e85 ("mm: hugetlb: optionally allocate gigantic hugepages using cma"), a gigantic page could be allocated from a node other than the preferred node, even when pages were available on the preferred node. The reason is that the nid parameter was ignored in alloc_gigantic_page(). Besides, __GFP_THISNODE also needs to be checked if the user requires allocation only from the preferred node.

After this patch, the preferred node is tried first before the other allowed nodes, and no other node is tried at all if __GFP_THISNODE is specified. If the user does not specify a preferred node, the current node is used as the preferred node, which ensures consistent behavior when allocating gigantic and non-gigantic hugetlb pages.

Fixes: cf11e85 ("mm: hugetlb: optionally allocate gigantic hugepages using cma")
Signed-off-by: Li Xinhai <lixinhai.lxh@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Roman Gushchin <guro@fb.com>
Link: https://lkml.kernel.org/r/20200902025016.697260-1-lixinhai.lxh@gmail.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1 parent: 3d321bf · commit: 953f064

1 file changed: +17 −6 lines changed

mm/hugetlb.c

Lines changed: 17 additions & 6 deletions
@@ -1250,21 +1250,32 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 		int nid, nodemask_t *nodemask)
 {
 	unsigned long nr_pages = 1UL << huge_page_order(h);
+	if (nid == NUMA_NO_NODE)
+		nid = numa_mem_id();
 
 #ifdef CONFIG_CMA
 	{
 		struct page *page;
 		int node;
 
-		for_each_node_mask(node, *nodemask) {
-			if (!hugetlb_cma[node])
-				continue;
-
-			page = cma_alloc(hugetlb_cma[node], nr_pages,
-					 huge_page_order(h), true);
+		if (hugetlb_cma[nid]) {
+			page = cma_alloc(hugetlb_cma[nid], nr_pages,
+					huge_page_order(h), true);
 			if (page)
 				return page;
 		}
+
+		if (!(gfp_mask & __GFP_THISNODE)) {
+			for_each_node_mask(node, *nodemask) {
+				if (node == nid || !hugetlb_cma[node])
+					continue;
+
+				page = cma_alloc(hugetlb_cma[node], nr_pages,
+						huge_page_order(h), true);
+				if (page)
+					return page;
+			}
+		}
 	}
 #endif
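To make the resulting allocation order concrete, below is a minimal userspace sketch of the same preferred-node-first logic. It is illustrative only: hugetlb_cma[], try_cma_alloc(), GFP_THISNODE, the numa_mem_id() stand-in, and the node count are simplified assumptions for this sketch, not the kernel API.

/*
 * Standalone sketch of the allocation order the patch establishes:
 * try the preferred node's CMA area first, then the other allowed
 * nodes, unless the caller demanded the preferred node only.
 */
#include <stdbool.h>
#include <stdio.h>

#define MAX_NUMNODES  4
#define NUMA_NO_NODE  (-1)
#define GFP_THISNODE  0x1u	/* stand-in for __GFP_THISNODE */

/* Pretend each node's CMA area either has a free gigantic page or not. */
static bool hugetlb_cma[MAX_NUMNODES] = { false, true, false, true };

static int try_cma_alloc(int node)
{
	return hugetlb_cma[node] ? node : -1;	/* "page" = node id */
}

static int alloc_gigantic_node(unsigned gfp_mask, int nid,
			       const bool *nodemask)
{
	int node, page;

	if (nid == NUMA_NO_NODE)
		nid = 0;	/* stand-in for numa_mem_id() */

	/* Preferred node is always tried first. */
	page = try_cma_alloc(nid);
	if (page >= 0)
		return page;

	/* Fall back to the other allowed nodes, skipping the one
	 * already tried, unless the caller set GFP_THISNODE. */
	if (!(gfp_mask & GFP_THISNODE)) {
		for (node = 0; node < MAX_NUMNODES; node++) {
			if (node == nid || !nodemask[node])
				continue;
			page = try_cma_alloc(node);
			if (page >= 0)
				return page;
		}
	}
	return -1;	/* allocation failed */
}

int main(void)
{
	bool allowed[MAX_NUMNODES] = { true, true, true, true };

	/* Node 0 has no CMA page, so we fall back to node 1. */
	printf("fallback allowed -> node %d\n",
	       alloc_gigantic_node(0, 0, allowed));

	/* With GFP_THISNODE, only node 0 is tried and the call fails. */
	printf("THISNODE only    -> node %d\n",
	       alloc_gigantic_node(GFP_THISNODE, 0, allowed));
	return 0;
}

Note how the fallback loop skips nid: the preferred node has already been tried, so retrying it would only waste a second pass over its CMA area.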