
Commit 73298c7

paulmckrcufbq authored and committed
rcu: Remove references to old grace-period-wait primitives
The rcu_barrier_sched(), synchronize_sched(), and synchronize_rcu_bh()
RCU API members have been gone for many years. This commit therefore
removes non-historical instances of them.

Reported-by: Joe Perches <joe@perches.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
1 parent 81a208c commit 73298c7

2 files changed: 8 additions, 14 deletions

Documentation/RCU/rcubarrier.rst

Lines changed: 1 addition & 4 deletions
@@ -329,10 +329,7 @@ Answer:
 	was first added back in 2005. This is because on_each_cpu()
 	disables preemption, which acted as an RCU read-side critical
 	section, thus preventing CPU 0's grace period from completing
-	until on_each_cpu() had dealt with all of the CPUs. However,
-	with the advent of preemptible RCU, rcu_barrier() no longer
-	waited on nonpreemptible regions of code in preemptible kernels,
-	that being the job of the new rcu_barrier_sched() function.
+	until on_each_cpu() had dealt with all of the CPUs.
 
 	However, with the RCU flavor consolidation around v4.20, this
 	possibility was once again ruled out, because the consolidated
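For context, the pattern rcubarrier.rst is concerned with looks roughly like
the sketch below. This is a minimal illustration of the classic rcu_barrier()
use case, not code from this commit; struct foo, foo_reclaim(), and the other
names are hypothetical. With the consolidated RCU flavor, the single
rcu_barrier() call suffices where older kernels might also have needed the
now-removed rcu_barrier_sched().

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {			/* hypothetical RCU-protected element */
	struct rcu_head rh;
	int data;
};

static void foo_reclaim(struct rcu_head *rh)
{
	kfree(container_of(rh, struct foo, rh));
}

static void foo_remove(struct foo *fp)
{
	/* ... unlink fp from its enclosing data structure ... */
	call_rcu(&fp->rh, foo_reclaim);
}

/* Module-exit path: wait for all queued foo_reclaim() invocations to
 * finish before the module text can safely be unloaded. */
static void foo_cleanup(void)
{
	rcu_barrier();
}

Without the rcu_barrier() call, foo_reclaim() could still be queued when the
module is unloaded, leaving a callback pointing into freed module text.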

include/linux/rcupdate.h

Lines changed: 7 additions & 10 deletions
@@ -806,11 +806,9 @@ do { \
  * sections, invocation of the corresponding RCU callback is deferred
  * until after the all the other CPUs exit their critical sections.
  *
- * In v5.0 and later kernels, synchronize_rcu() and call_rcu() also
- * wait for regions of code with preemption disabled, including regions of
- * code with interrupts or softirqs disabled. In pre-v5.0 kernels, which
- * define synchronize_sched(), only code enclosed within rcu_read_lock()
- * and rcu_read_unlock() are guaranteed to be waited for.
+ * Both synchronize_rcu() and call_rcu() also wait for regions of code
+ * with preemption disabled, including regions of code with interrupts or
+ * softirqs disabled.
 *
 * Note, however, that RCU callbacks are permitted to run concurrently
 * with new RCU read-side critical sections. One way that this can happen
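As a sketch of the consolidated behavior this hunk describes: in v5.0 and
later kernels, a preemption-disabled region is waited on by synchronize_rcu()
itself, with no need for the removed synchronize_sched(). The gp pointer and
the demo_* function names below are illustrative assumptions, not kernel code.

#include <linux/preempt.h>
#include <linux/rcupdate.h>

static int __rcu *gp;		/* hypothetical RCU-protected pointer */

/* Reader: preemption disabled, no explicit rcu_read_lock(). Under the
 * consolidated flavor, this region still counts as a read-side critical
 * section that synchronize_rcu() must wait for. */
static int demo_reader(void)
{
	int *p, v = -1;

	preempt_disable();
	p = rcu_dereference_sched(gp);
	if (p)
		v = *p;
	preempt_enable();
	return v;
}

/* Updater: synchronize_rcu() also waits out preempt-, irq-, and
 * softirq-disabled regions, so synchronize_sched() is unnecessary. */
static void demo_update(int *newp, int **oldp)
{
	*oldp = rcu_dereference_protected(gp, 1);  /* update-side access */
	rcu_assign_pointer(gp, newp);
	synchronize_rcu();	/* old value now safe for the caller to free */
}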
@@ -865,11 +863,10 @@ static __always_inline void rcu_read_lock(void)
 * rcu_read_unlock() - marks the end of an RCU read-side critical section.
 *
 * In almost all situations, rcu_read_unlock() is immune from deadlock.
- * In recent kernels that have consolidated synchronize_sched() and
- * synchronize_rcu_bh() into synchronize_rcu(), this deadlock immunity
- * also extends to the scheduler's runqueue and priority-inheritance
- * spinlocks, courtesy of the quiescent-state deferral that is carried
- * out when rcu_read_unlock() is invoked with interrupts disabled.
+ * This deadlock immunity also extends to the scheduler's runqueue
+ * and priority-inheritance spinlocks, courtesy of the quiescent-state
+ * deferral that is carried out when rcu_read_unlock() is invoked with
+ * interrupts disabled.
 *
 * See rcu_read_lock() for more information.
 */
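The deadlock immunity this hunk describes can be sketched as follows.
Here demo_lock is a hypothetical raw spinlock standing in for any lock
taken with interrupts disabled; it is an illustrative assumption, not
actual kernel code.

#include <linux/irqflags.h>
#include <linux/rcupdate.h>
#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(demo_lock);	/* hypothetical lock */

/* Even if this task was preempted and priority-boosted inside the
 * read-side critical section, invoking rcu_read_unlock() with
 * interrupts disabled defers the quiescent-state report rather than
 * acquiring scheduler locks on the spot, so it cannot deadlock
 * against the runqueue or priority-inheritance spinlocks. */
static void demo_irq_disabled_unlock(void)
{
	unsigned long flags;

	rcu_read_lock();
	raw_spin_lock_irqsave(&demo_lock, flags);
	/* ... access RCU-protected data ... */
	rcu_read_unlock();	/* quiescent state deferred; deadlock-immune */
	raw_spin_unlock_irqrestore(&demo_lock, flags);
}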
