Commit 6bb05a3

Waiman Long authored and KAGA-KOKO committed
clocksource: Use migrate_disable() to avoid calling get_random_u32() in atomic context
The following bug report happened with a PREEMPT_RT kernel:

    BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48
    in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 2012, name: kwatchdog
    preempt_count: 1, expected: 0
    RCU nest depth: 0, expected: 0
    get_random_u32+0x4f/0x110
    clocksource_verify_choose_cpus+0xab/0x1a0
    clocksource_verify_percpu.part.0+0x6b/0x330
    clocksource_watchdog_kthread+0x193/0x1a0

This happens because clocksource_verify_choose_cpus() is invoked with
preemption disabled. That function calls get_random_u32() to obtain random
numbers for choosing CPUs, which may acquire the batched_entropy_32 local
lock and/or the base_crng.lock spinlock in drivers/char/random.c. On a
PREEMPT_RT kernel both are sleeping locks, so they cannot be acquired in
atomic context.

Fix this by using migrate_disable() instead, which allows smp_processor_id()
to be used reliably without entering atomic context. preempt_disable() is
then called after clocksource_verify_choose_cpus() but before the
clocksource measurement runs, to avoid introducing unexpected latency.

Fixes: 7560c02 ("clocksource: Check per-CPU clock synchronization when marked unstable")
Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Link: https://lore.kernel.org/all/20250131173323.891943-2-longman@redhat.com
1 parent bb2784d


kernel/time/clocksource.c

Lines changed: 4 additions & 2 deletions
@@ -373,17 +373,18 @@ void clocksource_verify_percpu(struct clocksource *cs)
 	cpumask_clear(&cpus_ahead);
 	cpumask_clear(&cpus_behind);
 	cpus_read_lock();
-	preempt_disable();
+	migrate_disable();
 	clocksource_verify_choose_cpus();
 	if (cpumask_empty(&cpus_chosen)) {
-		preempt_enable();
+		migrate_enable();
 		cpus_read_unlock();
 		pr_warn("Not enough CPUs to check clocksource '%s'.\n", cs->name);
 		return;
 	}
 	testcpu = smp_processor_id();
 	pr_info("Checking clocksource %s synchronization from CPU %d to CPUs %*pbl.\n",
 		cs->name, testcpu, cpumask_pr_args(&cpus_chosen));
+	preempt_disable();
 	for_each_cpu(cpu, &cpus_chosen) {
 		if (cpu == testcpu)
 			continue;
@@ -403,6 +404,7 @@ void clocksource_verify_percpu(struct clocksource *cs)
 		cs_nsec_min = cs_nsec;
 	}
 	preempt_enable();
+	migrate_enable();
 	cpus_read_unlock();
 	if (!cpumask_empty(&cpus_ahead))
 		pr_warn("        CPUs %*pbl ahead of CPU %d for clocksource %s.\n",
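The diff above reduces to a general pattern: migrate_disable() keeps the task
on one CPU, so smp_processor_id() stays reliable while sleeping is still
permitted (as get_random_u32() may sleep on PREEMPT_RT), and preempt_disable()
is deferred until the latency-sensitive measurement itself. Below is a minimal
sketch of that pattern, not the patched function; choose_cpus_may_sleep() and
measure_clocksource() are hypothetical stand-ins for
clocksource_verify_choose_cpus() and the measurement loop.

#include <linux/preempt.h>
#include <linux/smp.h>

static void choose_cpus_may_sleep(void);   /* hypothetical; may call get_random_u32() */
static void measure_clocksource(int cpu);  /* hypothetical measurement loop */

static void verify_from_stable_cpu(void)
{
	int testcpu;

	migrate_disable();              /* pinned to this CPU, but may still sleep */
	choose_cpus_may_sleep();        /* sleeping locks are fine here on PREEMPT_RT */
	testcpu = smp_processor_id();   /* valid: migration is disabled */

	preempt_disable();              /* atomic context only around the measurement */
	measure_clocksource(testcpu);
	preempt_enable();

	migrate_enable();
}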
