Commit 16e96ba

nhatsmrt authored and akpm00 committed
mm/swap_state: update zswap LRU's protection range with the folio locked
When a folio is swapped in, the protection size of the corresponding zswap LRU is incremented, so that the zswap shrinker is more conservative with its reclaiming action. This field is embedded within the struct lruvec, so updating it requires looking up the folio's memcg and lruvec. However, currently this lookup can happen after the folio is unlocked, for instance if a new folio is allocated, and swap_read_folio() unlocks the folio before returning. In this scenario, there is no stability guarantee for the binding between a folio and its memcg and lruvec:

* A folio's memcg and lruvec can be freed between the lookup and the update, leading to a UAF.
* Folio migration can clear the now-unlocked folio's memcg_data, which directs the zswap LRU protection size update towards the root memcg instead of the original memcg.

This was recently picked up by syzbot thanks to a warning in the inlined folio_lruvec() call.

Move the zswap LRU protection range update above the swap_read_folio() call, and only when a new page is allocated, to prevent this.

[nphamcs@gmail.com: add VM_WARN_ON_ONCE() to zswap_folio_swapin()]
  Link: https://lkml.kernel.org/r/20240206180855.3987204-1-nphamcs@gmail.com
[nphamcs@gmail.com: remove unneeded if (folio) checks]
  Link: https://lkml.kernel.org/r/20240206191355.83755-1-nphamcs@gmail.com
Link: https://lkml.kernel.org/r/20240205232442.3240571-1-nphamcs@gmail.com
Fixes: b5ba474 ("zswap: shrink zswap pool based on memory pressure")
Reported-by: syzbot+17a611d10af7d18a7092@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/000000000000ae47f90610803260@google.com/
Signed-off-by: Nhat Pham <nphamcs@gmail.com>
Reviewed-by: Chengming Zhou <zhouchengming@bytedance.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Yosry Ahmed <yosryahmed@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
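In other words, the fix (shown in full in the diffs below) is to update the zswap LRU protection range while the newly allocated folio is still locked, before swap_read_folio() runs. A condensed sketch of the corrected ordering in the readahead paths, not a complete function:

	folio = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
					&page_allocated, false);
	if (unlikely(page_allocated)) {
		/* The folio is freshly allocated and still locked here, so its
		 * memcg/lruvec binding is stable for the protection update. */
		zswap_folio_swapin(folio);
		/* swap_read_folio() may unlock the folio before returning. */
		swap_read_folio(folio, false, NULL);
	}
	return folio;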
1 parent 7efa6f2 commit 16e96ba

File tree

2 files changed (+9, -8 lines)

mm/swap_state.c

Lines changed: 6 additions & 4 deletions

@@ -680,9 +680,10 @@ struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	/* The page was likely read above, so no need for plugging here */
 	folio = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
 					&page_allocated, false);
-	if (unlikely(page_allocated))
+	if (unlikely(page_allocated)) {
+		zswap_folio_swapin(folio);
 		swap_read_folio(folio, false, NULL);
-	zswap_folio_swapin(folio);
+	}
 	return folio;
 }

@@ -855,9 +856,10 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
 	/* The folio was likely read above, so no need for plugging here */
 	folio = __read_swap_cache_async(targ_entry, gfp_mask, mpol, targ_ilx,
 					&page_allocated, false);
-	if (unlikely(page_allocated))
+	if (unlikely(page_allocated)) {
+		zswap_folio_swapin(folio);
 		swap_read_folio(folio, false, NULL);
-	zswap_folio_swapin(folio);
+	}
 	return folio;
 }

mm/zswap.c

Lines changed: 3 additions & 4 deletions

@@ -377,10 +377,9 @@ void zswap_folio_swapin(struct folio *folio)
 {
 	struct lruvec *lruvec;
 
-	if (folio) {
-		lruvec = folio_lruvec(folio);
-		atomic_long_inc(&lruvec->zswap_lruvec_state.nr_zswap_protected);
-	}
+	VM_WARN_ON_ONCE(!folio_test_locked(folio));
+	lruvec = folio_lruvec(folio);
+	atomic_long_inc(&lruvec->zswap_lruvec_state.nr_zswap_protected);
 }
 
 /*********************************
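For context on why the folio lock matters here: folio_lruvec() resolves the folio's memcg (from folio->memcg_data) and then the per-memcg, per-node lruvec that holds the zswap protection counter. A rough sketch of that lookup, for illustration only; the real helpers live in include/linux/memcontrol.h, and the name folio_lruvec_sketch below is made up:

	/* Illustration only: roughly what folio_lruvec() does. The memcg_data read
	 * and memcg dereference are only stable while the folio is locked, which is
	 * why zswap_folio_swapin() now warns on an unlocked folio. */
	static struct lruvec *folio_lruvec_sketch(struct folio *folio)
	{
		struct mem_cgroup *memcg = folio_memcg(folio);	/* reads folio->memcg_data */

		return mem_cgroup_lruvec(memcg, folio_pgdat(folio));
	}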
