
Commit 3cec274

paulmckrcufbq authored and committed
srcu: Make SRCU-fast also be NMI-safe
BPF uses rcu_read_lock_trace() in NMI context, so srcu_read_lock_fast() must be NMI-safe if it is to have any chance of addressing RCU Tasks Trace use cases. This commit therefore causes srcu_read_lock_fast() and srcu_read_unlock_fast() to use atomic_long_inc() instead of this_cpu_inc() on architectures that support NMIs but do not have NMI-safe implementations of this_cpu_inc(). Note that both x86 and arm64 have NMI-safe implementations of this_cpu_inc(), and thus do not pay the performance penalty inherent in atomic_long_inc().

It is tempting to use this trick to fold srcu_read_lock_nmisafe() into srcu_read_lock(), but this would need careful thought, review, and performance analysis, though those smp_mb() calls might well make performance a non-issue.

Reported-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
1 parent f8b8df1 commit 3cec274
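
For orientation, here is a minimal usage sketch of the SRCU-fast read-side API this commit touches. It is not taken from the patch; the srcu_struct name demo_srcu and the reader function are hypothetical, and the protected accesses are elided.

#include <linux/srcu.h>

/* Illustrative only: demo_srcu and demo_reader() are not from the patch. */
DEFINE_SRCU(demo_srcu);

static void demo_reader(void)
{
	struct srcu_ctr __percpu *scp;

	/* Enter an SRCU-fast read-side critical section. */
	scp = srcu_read_lock_fast(&demo_srcu);

	/* ... accesses protected by demo_srcu go here ... */

	/* The returned pointer must be passed to the matching unlock. */
	srcu_read_unlock_fast(&demo_srcu, scp);
}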

File tree

1 file changed: +24 -10 lines changed


include/linux/srcutree.h

Lines changed: 24 additions & 10 deletions
@@ -231,17 +231,24 @@ static inline struct srcu_ctr __percpu *__srcu_ctr_to_ptr(struct srcu_struct *ss
  * srcu_struct. Returns a pointer that must be passed to the matching
  * srcu_read_unlock_fast().
  *
- * Note that this_cpu_inc() is an RCU read-side critical section either
- * because it disables interrupts, because it is a single instruction,
- * or because it is a read-modify-write atomic operation, depending on
- * the whims of the architecture.
+ * Note that both this_cpu_inc() and atomic_long_inc() are RCU read-side
+ * critical sections either because they disable interrupts, because they
+ * are a single instruction, or because they are a read-modify-write atomic
+ * operation, depending on the whims of the architecture.
+ *
+ * This means that __srcu_read_lock_fast() is not all that fast
+ * on architectures that support NMIs but do not supply NMI-safe
+ * implementations of this_cpu_inc().
  */
 static inline struct srcu_ctr __percpu *__srcu_read_lock_fast(struct srcu_struct *ssp)
 {
 	struct srcu_ctr __percpu *scp = READ_ONCE(ssp->srcu_ctrp);
 
 	RCU_LOCKDEP_WARN(!rcu_is_watching(), "RCU must be watching srcu_read_lock_fast().");
-	this_cpu_inc(scp->srcu_locks.counter); /* Y */
+	if (!IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
+		this_cpu_inc(scp->srcu_locks.counter); /* Y */
+	else
+		atomic_long_inc(raw_cpu_ptr(&scp->srcu_locks)); /* Z */
 	barrier(); /* Avoid leaking the critical section. */
 	return scp;
 }
@@ -252,15 +259,22 @@ static inline struct srcu_ctr __percpu *__srcu_read_lock_fast(struct srcu_struct
  * different CPU than that which was incremented by the corresponding
  * srcu_read_lock_fast(), but it must be within the same task.
  *
- * Note that this_cpu_inc() is an RCU read-side critical section either
- * because it disables interrupts, because it is a single instruction,
- * or because it is a read-modify-write atomic operation, depending on
- * the whims of the architecture.
+ * Note that both this_cpu_inc() and atomic_long_inc() are RCU read-side
+ * critical sections either because they disable interrupts, because they
+ * are a single instruction, or because they are a read-modify-write atomic
+ * operation, depending on the whims of the architecture.
+ *
+ * This means that __srcu_read_unlock_fast() is not all that fast
+ * on architectures that support NMIs but do not supply NMI-safe
+ * implementations of this_cpu_inc().
  */
 static inline void __srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
 {
 	barrier(); /* Avoid leaking the critical section. */
-	this_cpu_inc(scp->srcu_unlocks.counter); /* Z */
+	if (!IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
+		this_cpu_inc(scp->srcu_unlocks.counter); /* Z */
+	else
+		atomic_long_inc(raw_cpu_ptr(&scp->srcu_unlocks)); /* Z */
 	RCU_LOCKDEP_WARN(!rcu_is_watching(), "RCU must be watching srcu_read_unlock_fast().");
 }
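
As a reader aid (not part of the commit): IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE) expands to a compile-time constant, so the compiler keeps only one branch of the new if/else. That is why x86 and arm64, which the commit message notes have NMI-safe this_cpu_inc() implementations, keep the original fast path unchanged. Below is a simplified stand-alone model of the two resulting shapes; the struct and function names are illustrative, not the kernel's.

#include <linux/atomic.h>
#include <linux/percpu.h>

struct demo_ctr {
	atomic_long_t locks;	/* models srcu_ctr::srcu_locks (illustrative) */
};

/* Shape that survives where this_cpu_inc() is NMI-safe (option unset): */
static inline void demo_inc_fast(struct demo_ctr __percpu *scp)
{
	this_cpu_inc(scp->locks.counter);	/* plain per-CPU increment */
}

/* Shape that survives where the NMI-safe fallback is needed: */
static inline void demo_inc_nmisafe(struct demo_ctr __percpu *scp)
{
	atomic_long_inc(raw_cpu_ptr(&scp->locks));	/* atomic RMW on this CPU's counter */
}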