Commit 79443a7
cpufreq/sched: Explicitly synchronize limits_changed flag handling
The handling of the limits_changed flag in struct sugov_policy needs to be
explicitly synchronized to ensure that cpufreq policy limits updates will
not be missed in some cases.

Without that synchronization it is theoretically possible that the
limits_changed update in sugov_should_update_freq() will be reordered with
respect to the reads of the policy limits in cpufreq_driver_resolve_freq()
and in that case, if the limits_changed update in sugov_limits() clobbers
the one in sugov_should_update_freq(), the new policy limits may not take
effect for a long time.

Likewise, the limits_changed update in sugov_limits() may theoretically get
reordered with respect to the updates of the policy limits in
cpufreq_set_policy() and if sugov_should_update_freq() runs between them,
the policy limits change may be missed.

To ensure that the above situations will not take place, add memory
barriers preventing the reordering in question from taking place and add
READ_ONCE() and WRITE_ONCE() annotations around all of the limits_changed
flag updates to prevent the compiler from messing up with that code.

Fixes: 600f5ba ("cpufreq: schedutil: Don't skip freq update when limits change")
Cc: 5.3+ <stable@vger.kernel.org> # 5.3+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Christian Loehle <christian.loehle@arm.com>
Link: https://patch.msgid.link/3376719.44csPzL39Z@rjwysocki.net
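For readers unfamiliar with barrier pairing, the sketch below illustrates the general publish/consume pattern that the fix relies on, translated to userspace C11 atomics, with atomic_thread_fence() standing in for smp_wmb()/smp_mb(). It is a simplified analogy rather than the kernel code: the names limits, limits_changed, publish_limits() and consume_limits() are made up for illustration, and the authoritative ordering constraints are the ones spelled out in the comments added by the diff below.

/*
 * Simplified userspace analogy of the flag/limits barrier pairing
 * (illustrative names only; not the kernel implementation).
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static int limits;                  /* stands in for the policy limits */
static atomic_bool limits_changed;  /* stands in for sg_policy->limits_changed */

/* Writer side: make the limits update visible before setting the flag. */
static void publish_limits(int new_limits)
{
	limits = new_limits;
	atomic_thread_fence(memory_order_release);   /* roughly smp_wmb() */
	atomic_store_explicit(&limits_changed, true, memory_order_relaxed);
}

/* Reader side: clear the flag, then read the limits only after that clear. */
static bool consume_limits(int *out)
{
	if (!atomic_load_explicit(&limits_changed, memory_order_relaxed))
		return false;

	atomic_store_explicit(&limits_changed, false, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);   /* roughly smp_mb() */
	*out = limits;
	return true;
}

int main(void)
{
	int seen;

	publish_limits(42);
	if (consume_limits(&seen))
		printf("saw new limits: %d\n", seen);
	return 0;
}

Without the fences, the compiler or CPU could move the read of limits in consume_limits() ahead of the flag clear, which is the same kind of reordering the commit closes off in the schedutil governor.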
1 parent cfde542 commit 79443a7

File tree

1 file changed: +24 additions, −4 deletions


kernel/sched/cpufreq_schedutil.c

Lines changed: 24 additions & 4 deletions
@@ -81,9 +81,20 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
 	if (!cpufreq_this_cpu_can_update(sg_policy->policy))
 		return false;
 
-	if (unlikely(sg_policy->limits_changed)) {
-		sg_policy->limits_changed = false;
+	if (unlikely(READ_ONCE(sg_policy->limits_changed))) {
+		WRITE_ONCE(sg_policy->limits_changed, false);
 		sg_policy->need_freq_update = true;
+
+		/*
+		 * The above limits_changed update must occur before the reads
+		 * of policy limits in cpufreq_driver_resolve_freq() or a policy
+		 * limits update might be missed, so use a memory barrier to
+		 * ensure it.
+		 *
+		 * This pairs with the write memory barrier in sugov_limits().
+		 */
+		smp_mb();
+
 		return true;
 	}
 
@@ -377,7 +388,7 @@ static inline bool sugov_hold_freq(struct sugov_cpu *sg_cpu) { return false; }
 static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
 {
 	if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min)
-		sg_cpu->sg_policy->limits_changed = true;
+		WRITE_ONCE(sg_cpu->sg_policy->limits_changed, true);
 }
 
 static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
@@ -883,7 +894,16 @@ static void sugov_limits(struct cpufreq_policy *policy)
 		mutex_unlock(&sg_policy->work_lock);
 	}
 
-	sg_policy->limits_changed = true;
+	/*
+	 * The limits_changed update below must take place before the updates
+	 * of policy limits in cpufreq_set_policy() or a policy limits update
+	 * might be missed, so use a memory barrier to ensure it.
+	 *
+	 * This pairs with the memory barrier in sugov_should_update_freq().
+	 */
+	smp_wmb();
+
+	WRITE_ONCE(sg_policy->limits_changed, true);
 }
 
 struct cpufreq_governor schedutil_gov = {
