
Commit 9cc6fea

Merge tag 'core-core-2023-10-29-v2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull core updates from Thomas Gleixner:
 "Two small updates to ptrace_stop():

   - Add a comment to explain that the preempt_disable() before
     unlocking tasklist_lock is not a correctness problem and just
     prevents the tracer from preempting the tracee before the tracee
     schedules out.

   - Make that preempt_disable() conditional on PREEMPT_RT=n.

     RT-enabled kernels cannot disable preemption at this point because
     cgroup_enter_frozen() and sched_submit_work() acquire spinlocks or
     rwlocks, which are substituted by sleeping locks on RT. Acquiring a
     sleeping lock in a preemption-disabled region is obviously not
     possible.

     This obviously brings back the potential slowdown of ptrace() for
     RT-enabled kernels, but that's a price to be paid for latency
     guarantees"

* tag 'core-core-2023-10-29-v2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  signal: Don't disable preemption in ptrace_stop() on PREEMPT_RT
  signal: Add a proper comment about preempt_disable() in ptrace_stop()
2 parents ecb8cd2 + 1aabbc5 commit 9cc6fea
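The RT argument above hinges on one rule of PREEMPT_RT: spinlock_t becomes a sleeping lock, and a sleeping lock must not be taken while preemption is disabled. The following is a minimal illustrative sketch of that constraint in kernel-style C, not code from this commit: example_lock, example_bad_on_rt() and example_guarded() are hypothetical names, and only the IS_ENABLED(CONFIG_PREEMPT_RT) guard mirrors the pattern applied in the diff below.

#include <linux/kconfig.h>
#include <linux/preempt.h>
#include <linux/spinlock.h>

/* Hypothetical lock, used only for this illustration. */
static DEFINE_SPINLOCK(example_lock);

/*
 * Broken on PREEMPT_RT: spinlock_t is a sleeping lock there, and a
 * sleeping lock must not be acquired inside a preempt-disabled region.
 * This is the shape of the problem that an unconditional
 * preempt_disable() creates around cgroup_enter_frozen().
 */
static void example_bad_on_rt(void)
{
	preempt_disable();
	spin_lock(&example_lock);	/* may sleep on RT -> not allowed */
	spin_unlock(&example_lock);
	preempt_enable();
}

/*
 * The pattern the patch uses instead: only disable preemption on
 * non-RT configurations, where spin_lock() never sleeps.
 */
static void example_guarded(void)
{
	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
		preempt_disable();
	spin_lock(&example_lock);
	spin_unlock(&example_lock);
	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
		preempt_enable();
}

IS_ENABLED() resolves at compile time, so on PREEMPT_RT builds the branch and the preempt_disable()/preempt_enable() pair are dropped entirely, while non-RT builds keep the previous behaviour.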

1 file changed: +28 −5 lines


kernel/signal.c

Lines changed: 28 additions & 5 deletions
@@ -2329,15 +2329,38 @@ static int ptrace_stop(int exit_code, int why, unsigned long message,
 		do_notify_parent_cldstop(current, false, why);
 
 	/*
-	 * Don't want to allow preemption here, because
-	 * sys_ptrace() needs this task to be inactive.
+	 * The previous do_notify_parent_cldstop() invocation woke ptracer.
+	 * One a PREEMPTION kernel this can result in preemption requirement
+	 * which will be fulfilled after read_unlock() and the ptracer will be
+	 * put on the CPU.
+	 * The ptracer is in wait_task_inactive(, __TASK_TRACED) waiting for
+	 * this task wait in schedule(). If this task gets preempted then it
+	 * remains enqueued on the runqueue. The ptracer will observe this and
+	 * then sleep for a delay of one HZ tick. In the meantime this task
+	 * gets scheduled, enters schedule() and will wait for the ptracer.
 	 *
-	 * XXX: implement read_unlock_no_resched().
+	 * This preemption point is not bad from a correctness point of
+	 * view but extends the runtime by one HZ tick time due to the
+	 * ptracer's sleep. The preempt-disable section ensures that there
+	 * will be no preemption between unlock and schedule() and so
+	 * improving the performance since the ptracer will observe that
+	 * the tracee is scheduled out once it gets on the CPU.
+	 *
+	 * On PREEMPT_RT locking tasklist_lock does not disable preemption.
+	 * Therefore the task can be preempted after do_notify_parent_cldstop()
+	 * before unlocking tasklist_lock so there is no benefit in doing this.
+	 *
+	 * In fact disabling preemption is harmful on PREEMPT_RT because
+	 * the spinlock_t in cgroup_enter_frozen() must not be acquired
+	 * with preemption disabled due to the 'sleeping' spinlock
+	 * substitution of RT.
 	 */
-	preempt_disable();
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		preempt_disable();
 	read_unlock(&tasklist_lock);
 	cgroup_enter_frozen();
-	preempt_enable_no_resched();
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		preempt_enable_no_resched();
 	schedule();
 	cgroup_leave_frozen(true);

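For quick reading, the executable tail of ptrace_stop() after this merge is reproduced below, taken straight from the hunk above; the long explanatory comment is elided and everything outside the hunk is assumed unchanged.

	/* (explanatory comment from the hunk above elided) */
	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
		preempt_disable();
	read_unlock(&tasklist_lock);
	cgroup_enter_frozen();
	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
		preempt_enable_no_resched();
	schedule();
	cgroup_leave_frozen(true);

Note that preempt_enable_no_resched() is used rather than preempt_enable(): the immediately following schedule() performs the context switch anyway, so triggering a reschedule at the enable point would only add an extra preemption before the tracee parks itself.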