Commit 13f9ca7

debugobjects: Refill per CPU pool more aggressively
Right now the per CPU pools are only refilled when they become empty.
That's suboptimal, especially when there are still non-freed objects in
the to-free list.

Check whether an allocation from the per CPU pool emptied a batch and
try to allocate from the free pool if that still has objects available.

              kmem_cache_alloc()    kmem_cache_free()
Baseline:     295k                  245k
Refill:       225k                  173k

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Zhen Lei <thunder.leizhen@huawei.com>
Link: https://lore.kernel.org/all/20241007164914.439053085@linutronix.de
1 parent a201a96 commit 13f9ca7

File tree

1 file changed: +18 -0 lines changed


lib/debugobjects.c

Lines changed: 18 additions & 0 deletions
@@ -255,6 +255,24 @@ static struct debug_obj *pcpu_alloc(void)
 
 	if (likely(obj)) {
 		pcp->cnt--;
+		/*
+		 * If this emptied a batch try to refill from the
+		 * free pool. Don't do that if this was the top-most
+		 * batch as pcpu_free() expects the per CPU pool
+		 * to be less than ODEBUG_POOL_PERCPU_SIZE.
+		 */
+		if (unlikely(pcp->cnt < (ODEBUG_POOL_PERCPU_SIZE - ODEBUG_BATCH_SIZE) &&
+			     !(pcp->cnt % ODEBUG_BATCH_SIZE))) {
+			/*
+			 * Don't try to allocate from the regular pool here
+			 * to not exhaust it prematurely.
+			 */
+			if (pool_count(&pool_to_free)) {
+				guard(raw_spinlock)(&pool_lock);
+				pool_move_batch(pcp, &pool_to_free);
+				pcpu_refill_stats();
+			}
+		}
 		return obj;
 	}
 
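For illustration only (not part of the commit), here is a minimal user-space
sketch of when the refill condition fires. The constants below are assumed
values for the example; the real ODEBUG_POOL_PERCPU_SIZE and
ODEBUG_BATCH_SIZE are defined in lib/debugobjects.c and may differ.

#include <stdio.h>

/* Assumed example values; the real constants live in lib/debugobjects.c. */
#define ODEBUG_POOL_PERCPU_SIZE	64
#define ODEBUG_BATCH_SIZE	16

/* Mirrors the refill check added to pcpu_alloc() in the diff above. */
static int refill_triggers(unsigned int cnt)
{
	return cnt < (ODEBUG_POOL_PERCPU_SIZE - ODEBUG_BATCH_SIZE) &&
	       !(cnt % ODEBUG_BATCH_SIZE);
}

int main(void)
{
	unsigned int cnt;

	/* Walk the per CPU object count down as successive allocations would. */
	for (cnt = ODEBUG_POOL_PERCPU_SIZE; cnt-- > 0;) {
		if (refill_triggers(cnt))
			printf("cnt == %2u: attempt batch refill from pool_to_free\n", cnt);
	}
	return 0;
}

With these assumed sizes the check fires at cnt == 32, 16 and 0, i.e. at
every batch boundary except the top-most one (48 here), which the
subtraction excludes so that pcpu_free() still finds the per CPU pool
below ODEBUG_POOL_PERCPU_SIZE.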