Commit 31029a8

ring-buffer: Include dropped pages in counting dirty pages
The function ring_buffer_nr_dirty_pages() was created to find out how many
pages are filled in the ring buffer. There are two running counters: one is
incremented whenever a new page is touched (pages_touched) and the other
whenever a page is read (pages_read). The dirty count is the number touched
minus the number read. This is used to determine if a blocked task should be
woken up when the percentage of the ring buffer it is waiting for is hit.

The problem is that this count does not take into account dropped pages
(when new writes overwrite pages that were never read), so the dirty count
is always greater than the real number of unread pages.

This makes the "buffer_percent" file inaccurate: the number of dirty pages
appears to exceed the requested percentage even when it does not, which
causes user space to be woken up more often than it wants to be.

Add a new counter to keep track of lost pages, and include it in the
accounting of dirty pages so that the count is actually accurate.

Link: https://lkml.kernel.org/r/20221021123013.55fb6055@gandalf.local.home
Fixes: 2c2b0a7 ("ring-buffer: Add percentage of ring buffer full to wake up reader")
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
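
To make the arithmetic concrete, here is a minimal userspace model of the
corrected accounting. This is an illustrative sketch, not kernel code: plain
size_t fields stand in for the kernel's local_t counters, the per-CPU buffer
is reduced to the three counters involved, and the numbers in main() are
made up.

#include <stdio.h>

/* Toy stand-in for ring_buffer_per_cpu; only the counters that matter here. */
struct cpu_buffer_model {
	size_t pages_touched;	/* bumped when a writer starts a new page */
	size_t pages_read;	/* bumped when the reader consumes a page */
	size_t pages_lost;	/* new: bumped when an unread page is overwritten */
};

/* Mirrors ring_buffer_nr_dirty_pages() after this patch:
 * dirty = touched - lost - read. */
static size_t nr_dirty_pages(const struct cpu_buffer_model *b)
{
	size_t cnt = b->pages_touched;

	if (cnt < b->pages_lost)	/* WARN_ON_ONCE() in the kernel */
		return 0;
	cnt -= b->pages_lost;

	/* The reader can read an empty page, but not more than that. */
	if (cnt < b->pages_read)
		return 0;

	return cnt - b->pages_read;
}

int main(void)
{
	/* 10 pages written, 2 read, 5 overwritten before being read. */
	struct cpu_buffer_model b = {
		.pages_touched = 10,
		.pages_read = 2,
		.pages_lost = 5,
	};

	/* Before the patch: 10 - 2 = 8 "dirty" pages reported.
	 * After the patch:  10 - 5 - 2 = 3, the real unread count. */
	printf("dirty pages: %zu\n", nr_dirty_pages(&b));
	return 0;
}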
1 parent 42fb0a1 commit 31029a8


kernel/trace/ring_buffer.c

Lines changed: 12 additions & 0 deletions
@@ -519,6 +519,7 @@ struct ring_buffer_per_cpu {
 	local_t				committing;
 	local_t				commits;
 	local_t				pages_touched;
+	local_t				pages_lost;
 	local_t				pages_read;
 	long				last_pages_touch;
 	size_t				shortest_full;
@@ -894,10 +895,18 @@ size_t ring_buffer_nr_pages(struct trace_buffer *buffer, int cpu)
 size_t ring_buffer_nr_dirty_pages(struct trace_buffer *buffer, int cpu)
 {
 	size_t read;
+	size_t lost;
 	size_t cnt;
 
 	read = local_read(&buffer->buffers[cpu]->pages_read);
+	lost = local_read(&buffer->buffers[cpu]->pages_lost);
 	cnt = local_read(&buffer->buffers[cpu]->pages_touched);
+
+	if (WARN_ON_ONCE(cnt < lost))
+		return 0;
+
+	cnt -= lost;
+
 	/* The reader can read an empty page, but not more than that */
 	if (cnt < read) {
 		WARN_ON_ONCE(read > cnt + 1);
@@ -2031,6 +2040,7 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned long nr_pages)
 		 */
 		local_add(page_entries, &cpu_buffer->overrun);
 		local_sub(BUF_PAGE_SIZE, &cpu_buffer->entries_bytes);
+		local_inc(&cpu_buffer->pages_lost);
 	}
 
 	/*
@@ -2515,6 +2525,7 @@ rb_handle_head_page(struct ring_buffer_per_cpu *cpu_buffer,
 		 */
 		local_add(entries, &cpu_buffer->overrun);
 		local_sub(BUF_PAGE_SIZE, &cpu_buffer->entries_bytes);
+		local_inc(&cpu_buffer->pages_lost);
 
 		/*
 		 * The entries will be zeroed out when we move the
@@ -5265,6 +5276,7 @@ rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer)
 	local_set(&cpu_buffer->committing, 0);
 	local_set(&cpu_buffer->commits, 0);
 	local_set(&cpu_buffer->pages_touched, 0);
+	local_set(&cpu_buffer->pages_lost, 0);
 	local_set(&cpu_buffer->pages_read, 0);
 	cpu_buffer->last_pages_touch = 0;
 	cpu_buffer->shortest_full = 0;
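
The consumers of this count are unchanged by the patch: the wake-up path
compares the dirty count against the percentage a waiter requested through
the buffer_percent file. The test is roughly the following (an illustrative
sketch with a hypothetical helper name, not the exact in-tree code):

/* Hypothetical helper showing the watermark test; "full" is the
 * buffer_percent value (0-100) that a waiting reader asked for. */
static bool buffer_percent_hit(struct trace_buffer *buffer, int cpu, int full)
{
	size_t nr_pages = ring_buffer_nr_pages(buffer, cpu);
	size_t dirty = ring_buffer_nr_dirty_pages(buffer, cpu);

	if (!nr_pages || !full)
		return true;

	/* dirty / nr_pages >= full / 100, kept in integer arithmetic */
	return dirty * 100 >= (size_t)full * nr_pages;
}

With pages_lost folded into ring_buffer_nr_dirty_pages(), this test no
longer fires early on buffers whose unread pages were overwritten.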
