
Commit fdb54d9

xairy authored and akpm00 committed
kasan, slub: fix HW_TAGS zeroing with slub_debug
Commit 946fa0d ("mm/slub: extend redzone check to extra allocated kmalloc space than requested") added precise kmalloc redzone poisoning to the slub_debug functionality.

However, that commit did not account for HW_TAGS KASAN fully initializing the object via its built-in memory initialization feature. Even though HW_TAGS KASAN memory initialization contains special handling for when slub_debug is enabled, it does not account for in-object slub_debug redzones. As a result, HW_TAGS KASAN can overwrite these redzones and cause false-positive slub_debug reports.

To fix the issue, avoid HW_TAGS KASAN memory initialization when slub_debug is enabled altogether. Implement this by moving the __slub_debug_enabled check to slab_post_alloc_hook. Common slab code seems like a more appropriate place for a slub_debug check anyway.

Link: https://lkml.kernel.org/r/678ac92ab790dba9198f9ca14f405651b97c8502.1688561016.git.andreyknvl@google.com
Fixes: 946fa0d ("mm/slub: extend redzone check to extra allocated kmalloc space than requested")
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reported-by: Will Deacon <will@kernel.org>
Acked-by: Marco Elver <elver@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Feng Tang <feng.tang@intel.com>
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: kasan-dev@googlegroups.com
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Peter Collingbourne <pcc@google.com>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
1 parent 05c56e7 commit fdb54d9

2 files changed: +14 −14 lines changed

mm/kasan/kasan.h

Lines changed: 0 additions & 12 deletions
@@ -466,18 +466,6 @@ static inline void kasan_unpoison(const void *addr, size_t size, bool init)
 
 	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
 		return;
-	/*
-	 * Explicitly initialize the memory with the precise object size to
-	 * avoid overwriting the slab redzone. This disables initialization in
-	 * the arch code and may thus lead to performance penalty. This penalty
-	 * does not affect production builds, as slab redzones are not enabled
-	 * there.
-	 */
-	if (__slub_debug_enabled() &&
-	    init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
-		init = false;
-		memzero_explicit((void *)addr, size);
-	}
 	size = round_up(size, KASAN_GRANULE_SIZE);
 
 	hw_set_mem_tag_range((void *)addr, size, tag, init);

mm/slab.h

Lines changed: 14 additions & 2 deletions
@@ -723,6 +723,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 					unsigned int orig_size)
 {
 	unsigned int zero_size = s->object_size;
+	bool kasan_init = init;
 	size_t i;
 
 	flags &= gfp_allowed_mask;
@@ -739,6 +740,17 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	    (s->flags & SLAB_KMALLOC))
 		zero_size = orig_size;
 
+	/*
+	 * When slub_debug is enabled, avoid memory initialization integrated
+	 * into KASAN and instead zero out the memory via the memset below with
+	 * the proper size. Otherwise, KASAN might overwrite SLUB redzones and
+	 * cause false-positive reports. This does not lead to a performance
+	 * penalty on production builds, as slub_debug is not intended to be
+	 * enabled there.
+	 */
+	if (__slub_debug_enabled())
+		kasan_init = false;
+
 	/*
 	 * As memory initialization might be integrated into KASAN,
 	 * kasan_slab_alloc and initialization memset must be
@@ -747,8 +759,8 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	 * As p[i] might get tagged, memset and kmemleak hook come after KASAN.
 	 */
 	for (i = 0; i < size; i++) {
-		p[i] = kasan_slab_alloc(s, p[i], flags, init);
-		if (p[i] && init && !kasan_has_integrated_init())
+		p[i] = kasan_slab_alloc(s, p[i], flags, kasan_init);
+		if (p[i] && init && (!kasan_init || !kasan_has_integrated_init()))
			memset(p[i], 0, zero_size);
 		kmemleak_alloc_recursive(p[i], s->object_size, 1,
 					 s->flags, flags);
