
Commit a26b24b

Authored by Kan Liang, committed by Ingo Molnar
perf/x86/intel: Use better start period for frequency mode
Frequency mode is the current default mode of Linux perf. A period of 1 is used as the starting period, which is then auto-adjusted on each tick or overflow to meet the frequency target.

The starting period of 1 is too low and may trigger several issues:

- Many HWs do not support a period of 1 well. https://lore.kernel.org/lkml/875xs2oh69.ffs@tglx/
- For an event that occurs frequently, a period of 1 is too far away from the real period. Lots of samples are generated at the beginning, so the distribution of samples may not be even.
- A low starting period for frequently occurring events also challenges virtualization, which has a longer path to handle a PMI.

The limit_period value only checks the minimum acceptable value for the HW. It cannot be used to set the starting period, because some events may need a very low period. Nor can limit_period be set too high, since that doesn't help with events that occur frequently.

It is hard to find a universal starting period for all events. The idea implemented by this patch is to give an estimate only for the popular HW and HW cache events, and to start from the lowest possible recommended value for the rest.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250117151913.3043942-3-kan.liang@linux.intel.com
1 parent 0d39844 commit a26b24b

File tree

1 file changed

+85
-0
lines changed


arch/x86/events/intel/core.c

Lines changed: 85 additions & 0 deletions
@@ -3952,6 +3952,85 @@ static inline bool intel_pmu_has_cap(struct perf_event *event, int idx)
 	return test_bit(idx, (unsigned long *)&intel_cap->capabilities);
 }
 
+static u64 intel_pmu_freq_start_period(struct perf_event *event)
+{
+	int type = event->attr.type;
+	u64 config, factor;
+	s64 start;
+
+	/*
+	 * The 127 is the lowest possible recommended SAV (sample after value)
+	 * for a 4000 freq (default freq), according to the event list JSON file.
+	 * Also, assume the workload is idle 50% time.
+	 */
+	factor = 64 * 4000;
+	if (type != PERF_TYPE_HARDWARE && type != PERF_TYPE_HW_CACHE)
+		goto end;
+
+	/*
+	 * The estimation of the start period in the freq mode is
+	 * based on the below assumption.
+	 *
+	 * For a cycles or an instructions event, 1GHZ of the
+	 * underlying platform, 1 IPC. The workload is idle 50% time.
+	 * The start period = 1,000,000,000 * 1 / freq / 2.
+	 *		    = 500,000,000 / freq
+	 *
+	 * Usually, the branch-related events occur less than the
+	 * instructions event. According to the Intel event list JSON
+	 * file, the SAV (sample after value) of a branch-related event
+	 * is usually 1/4 of an instruction event.
+	 * The start period of branch-related events = 125,000,000 / freq.
+	 *
+	 * The cache-related events occurs even less. The SAV is usually
+	 * 1/20 of an instruction event.
+	 * The start period of cache-related events = 25,000,000 / freq.
+	 */
+	config = event->attr.config & PERF_HW_EVENT_MASK;
+	if (type == PERF_TYPE_HARDWARE) {
+		switch (config) {
+		case PERF_COUNT_HW_CPU_CYCLES:
+		case PERF_COUNT_HW_INSTRUCTIONS:
+		case PERF_COUNT_HW_BUS_CYCLES:
+		case PERF_COUNT_HW_STALLED_CYCLES_FRONTEND:
+		case PERF_COUNT_HW_STALLED_CYCLES_BACKEND:
+		case PERF_COUNT_HW_REF_CPU_CYCLES:
+			factor = 500000000;
+			break;
+		case PERF_COUNT_HW_BRANCH_INSTRUCTIONS:
+		case PERF_COUNT_HW_BRANCH_MISSES:
+			factor = 125000000;
+			break;
+		case PERF_COUNT_HW_CACHE_REFERENCES:
+		case PERF_COUNT_HW_CACHE_MISSES:
+			factor = 25000000;
+			break;
+		default:
+			goto end;
+		}
+	}
+
+	if (type == PERF_TYPE_HW_CACHE)
+		factor = 25000000;
+end:
+	/*
+	 * Usually, a prime or a number with less factors (close to prime)
+	 * is chosen as an SAV, which makes it less likely that the sampling
+	 * period synchronizes with some periodic event in the workload.
+	 * Minus 1 to make it at least avoiding values near power of twos
+	 * for the default freq.
+	 */
+	start = DIV_ROUND_UP_ULL(factor, event->attr.sample_freq) - 1;
+
+	if (start > x86_pmu.max_period)
+		start = x86_pmu.max_period;
+
+	if (x86_pmu.limit_period)
+		x86_pmu.limit_period(event, &start);
+
+	return start;
+}
+
 static int intel_pmu_hw_config(struct perf_event *event)
 {
 	int ret = x86_pmu_hw_config(event);

@@ -3963,6 +4042,12 @@ static int intel_pmu_hw_config(struct perf_event *event)
 	if (ret)
 		return ret;
 
+	if (event->attr.freq && event->attr.sample_freq) {
+		event->hw.sample_period = intel_pmu_freq_start_period(event);
+		event->hw.last_period = event->hw.sample_period;
+		local64_set(&event->hw.period_left, event->hw.sample_period);
+	}
+
 	if (event->attr.precise_ip) {
 		if ((event->attr.config & INTEL_ARCH_EVENT_MASK) == INTEL_FIXED_VLBR_EVENT)
 			return -EINVAL;
