
Commit 99bcd91

ahunter6 authored and Ingo Molnar committed
perf/x86/intel: Fix segfault with PEBS-via-PT with sample_freq
Currently, using PEBS-via-PT with a sample frequency instead of a sample
period causes a segfault. For example:

    BUG: kernel NULL pointer dereference, address: 0000000000000195
    <NMI>
    ? __die_body.cold+0x19/0x27
    ? page_fault_oops+0xca/0x290
    ? exc_page_fault+0x7e/0x1b0
    ? asm_exc_page_fault+0x26/0x30
    ? intel_pmu_pebs_event_update_no_drain+0x40/0x60
    ? intel_pmu_pebs_event_update_no_drain+0x32/0x60
    intel_pmu_drain_pebs_icl+0x333/0x350
    handle_pmi_common+0x272/0x3c0
    intel_pmu_handle_irq+0x10a/0x2e0
    perf_event_nmi_handler+0x2a/0x50

That happens because intel_pmu_pebs_event_update_no_drain() assumes all
the pebs_enabled bits represent counter indexes, which is not always the
case. In this particular case, bits 60 and 61 are set for PEBS-via-PT
purposes.

The behaviour of PEBS-via-PT with a sample frequency is questionable
because, although a PMI is generated (PEBS_PMI_AFTER_EACH_RECORD), the
period is not adjusted anyway.

Putting that aside, fix intel_pmu_pebs_event_update_no_drain() by
passing the mask of counter bits instead of 'size'. Note, prior to the
Fixes commit, 'size' would be limited to the maximum counter index, so
the issue was not hit.

Fixes: 722e42e ("perf/x86: Support counter mask")
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Ian Rogers <irogers@google.com>
Cc: linux-perf-users@vger.kernel.org
Link: https://lore.kernel.org/r/20250508134452.73960-1-adrian.hunter@intel.com
1 parent 82f2b0b · commit 99bcd91

1 file changed, +5 −4 lines: arch/x86/events/intel/ds.c

arch/x86/events/intel/ds.c

Lines changed: 5 additions & 4 deletions
@@ -2465,8 +2465,9 @@ static void intel_pmu_drain_pebs_core(struct pt_regs *iregs, struct perf_sample_
 					  setup_pebs_fixed_sample_data);
 }
 
-static void intel_pmu_pebs_event_update_no_drain(struct cpu_hw_events *cpuc, int size)
+static void intel_pmu_pebs_event_update_no_drain(struct cpu_hw_events *cpuc, u64 mask)
 {
+	u64 pebs_enabled = cpuc->pebs_enabled & mask;
 	struct perf_event *event;
 	int bit;
 
@@ -2477,7 +2478,7 @@ static void intel_pmu_pebs_event_update_no_drain(struct cpu_hw_events *cpuc, int
 	 * It needs to call intel_pmu_save_and_restart_reload() to
 	 * update the event->count for this case.
 	 */
-	for_each_set_bit(bit, (unsigned long *)&cpuc->pebs_enabled, size) {
+	for_each_set_bit(bit, (unsigned long *)&pebs_enabled, X86_PMC_IDX_MAX) {
 		event = cpuc->events[bit];
 		if (event->hw.flags & PERF_X86_EVENT_AUTO_RELOAD)
 			intel_pmu_save_and_restart_reload(event, 0);
@@ -2512,7 +2513,7 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs, struct perf_sample_d
 	}
 
 	if (unlikely(base >= top)) {
-		intel_pmu_pebs_event_update_no_drain(cpuc, size);
+		intel_pmu_pebs_event_update_no_drain(cpuc, mask);
 		return;
 	}
 
@@ -2626,7 +2627,7 @@ static void intel_pmu_drain_pebs_icl(struct pt_regs *iregs, struct perf_sample_d
 			(hybrid(cpuc->pmu, fixed_cntr_mask64) << INTEL_PMC_IDX_FIXED);
 
 	if (unlikely(base >= top)) {
-		intel_pmu_pebs_event_update_no_drain(cpuc, X86_PMC_IDX_MAX);
+		intel_pmu_pebs_event_update_no_drain(cpuc, mask);
 		return;
 	}
 