
Conversation

@KosmX

KosmX commented May 30, 2025

The buffer size needs to be increasable on higher-end systems (high-polling-rate hardware / high refresh rates) to avoid crashing apps that momentarily cannot keep up with the event loop.

For example, if a mouse has a 1000 Hz polling rate (and one pointer event is 20 bytes), it takes only ~200 ms to overflow the default 4 kB buffer. It is not uncommon for an app to fail to receive events while something is loading. (For me it shows up with Minecraft.)
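A quick sketch of that arithmetic (the sizes here are the figures quoted in this discussion, not values read from any library):

```rust
// Back-of-envelope check of the overflow math above. The buffer size,
// event size, and polling rate are the example figures from this thread,
// not constants taken from libwayland or wayland-rs.
fn ms_to_fill(buffer_bytes: u64, event_bytes: u64, polling_hz: u64) -> u64 {
    let events = buffer_bytes / event_bytes; // events that fit in the buffer
    events * 1000 / polling_hz               // ms until the buffer is full
}

fn main() {
    // 4 kB buffer, 20-byte pointer events, 1000 Hz mouse:
    // 4096 / 20 = 204 events, i.e. ~204 ms of backlog.
    println!("{} ms", ms_to_fill(4096, 20, 1000));
}
```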

I need this patch upstream in Niri

@ids1024
Member

ids1024 commented Jun 16, 2025

I guess we probably need something like this in the rs backend too? fill_incoming_buffers will read until self.in_data is full, which is defined as Buffer::new(2 * MAX_BYTES_OUT). Though remaining bytes would still be stored in the kernel socket buffer up to its limit. I assume libwayland is similar here.

(wl_connection_set_max_buffer_size in https://gitlab.freedesktop.org/wayland/wayland/-/blob/main/src/connection.c?ref_type=heads#L296-315 is where the size of the libwayland ring buffers is set based on this.)

I also see in that function uint32_t max_size_bits; /* 0 for unlimited */. But wl_display_set_default_max_buffer_size and wl_client_set_max_buffer_size don't document anything like that...

@KosmX
Author

KosmX commented Jun 17, 2025

Most of those functions are libwayland-internal and are called automatically when a new client connects, except for wl_display_set_default_max_buffer_size. That one is called by almost all compositors to set a new limit (e.g. 1 MB in Mutter).

Right now I don't have time to dig into the exact protocol and how the buffers are used; I just wanted to fix this bug in Niri (tested, and it works).

If needed, I can dig into this protocol much deeper and add the patch to the Rust backend too after my exams.

@ids1024
Member

ids1024 commented Jun 17, 2025

Yeah, the wl_connection stuff is internal, but it is where the functionality is actually implemented. If the public function treats 0 as "unlimited", an Option<NonZeroUsize> might be a better Rust API, but then it would be good to get libwayland to document this behavior.

It looks like there are 3 public functions related to this in libwayland:

  • wl_display_set_default_max_buffer_size (this one)
  • wl_client_set_max_buffer_size (for a specific client)
  • wl_display_set_max_buffer_size (for libwayland-client)

Though I don't know how commonly the latter two are used.

If the rs backend has the same issue, then we definitely do want a way to address it there too. Probably not that hard, I just need to understand exactly what the libwayland behavior is to match it.
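A sketch of what that Option<NonZeroUsize> mapping could look like on the Rust side, if the 0-means-unlimited behavior were confirmed (the helper names are hypothetical; this is not an existing wayland-backend API):

```rust
use std::num::NonZeroUsize;

/// Hypothetical conversion: map a raw `0 = unlimited` size convention
/// onto an explicit Rust type, so callers cannot pass 0 by accident.
fn max_buffer_size_from_raw(raw: usize) -> Option<NonZeroUsize> {
    NonZeroUsize::new(raw) // None <=> 0 <=> "unlimited"
}

/// Hypothetical inverse: `None` maps back to the raw 0 sentinel.
fn raw_from_max_buffer_size(limit: Option<NonZeroUsize>) -> usize {
    limit.map_or(0, NonZeroUsize::get)
}

fn main() {
    assert_eq!(max_buffer_size_from_raw(0), None); // 0 would mean "unlimited"
    assert_eq!(raw_from_max_buffer_size(NonZeroUsize::new(1 << 20)), 1 << 20);
}
```

The point of the wrapper type is that the "unlimited" case becomes an explicit `None` in signatures instead of a magic zero.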

@KosmX
Author

KosmX commented Aug 23, 2025

If the public function has the behavior that 0 means "unlimited" maybe an Option<NonZeroUsize> would be a better Rust API

There is no such behavior (or I don't see it), and even if there were, I would avoid it: an unlimited buffer plus a frozen app is practically a memory leak. A 1 MB buffer is enough to survive being frozen for about a minute with a 1 kHz polling-rate mouse. If that is not enough, at some point it is better to just kill a dead connection.
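A quick check of that "about a minute" estimate, using the same 20-byte event size assumed earlier in the thread:

```rust
// At 1000 Hz with 20-byte events, a 1 MiB buffer holds
// 1048576 / 20 = 52428 events, i.e. roughly 52 seconds of backlog.
fn main() {
    let seconds = (1024 * 1024 / 20) / 1000;
    println!("{seconds} s");
}
```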

@KosmX
Author

KosmX commented Sep 4, 2025

I looked into the Rust Wayland implementation (the one contained in this repo) and I was kind of horrified.
Unlike libwayland (the C implementation), the Rust implementation has no dynamic buffer allocation, and it does not even use a ring buffer, just a plain byte array: when the first entry is read, the whole array is memcopied down (within the same array, so at least no extra allocation happens on each read).

Allocating bigger buffers for every client with the current behavior would just slow everything down, and may allocate more than we should.

To give the Rust Wayland implementation the same default-buffer-size feature that libwayland has, a lot more work is needed: not just a few exposed functions, but an actual rewrite of the buffer implementation. If possible, I would like to keep that refactor out of this PR (simply to merge it faster, so Niri can use the feature right away).

I see a few options on how we should do the refactoring:

  • Not the easiest, but maybe the most robust: copy the ring-buffer implementation from libwayland and make it growable.
    There are some challenges here; for example, messages are currently serialized directly into the ring buffer, which makes it a bit harder to know when we need to reallocate, but it is doable after all.
  • A maybe simpler way, though I don't think it is a good option: skip the binary ring buffers, serialize/deserialize directly to/from the sockets, and buffer messages instead of binary data. The biggest drawback is that we could not measure how much data we keep buffered, and I think such an implementation would also be slower.
  • We can also just allocate much larger buffers, but with the current copy logic a read would be O(n), where n is the amount of data in the buffer, which is far from ideal.
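A minimal sketch of the first option, a growable byte ring buffer (the names and layout are illustrative, not taken from libwayland or this repo; libwayland's real buffers also carry fds and have a fixed growth policy):

```rust
/// Illustrative growable byte ring buffer. Capacity is kept a power of
/// two so indices can be masked instead of reduced modulo the length.
struct RingBuffer {
    data: Vec<u8>,
    head: usize, // index of the next byte to read
    len: usize,  // number of bytes currently stored
}

impl RingBuffer {
    fn new(capacity: usize) -> Self {
        assert!(capacity.is_power_of_two());
        Self { data: vec![0; capacity], head: 0, len: 0 }
    }

    /// Append bytes, doubling the capacity until they fit.
    fn push(&mut self, bytes: &[u8]) {
        while self.len + bytes.len() > self.data.len() {
            self.grow();
        }
        let mask = self.data.len() - 1;
        for (i, &b) in bytes.iter().enumerate() {
            self.data[(self.head + self.len + i) & mask] = b;
        }
        self.len += bytes.len();
    }

    /// Pop up to `out.len()` bytes. O(bytes read): unlike the current
    /// flat-array scheme, the remaining data is never memmoved.
    fn pop(&mut self, out: &mut [u8]) -> usize {
        let n = out.len().min(self.len);
        let mask = self.data.len() - 1;
        for (i, slot) in out.iter_mut().enumerate().take(n) {
            *slot = self.data[(self.head + i) & mask];
        }
        self.head = (self.head + n) & mask;
        self.len -= n;
        n
    }

    /// Double the capacity, linearizing stored bytes into the new allocation.
    fn grow(&mut self) {
        let mut bigger = vec![0; self.data.len() * 2];
        let mask = self.data.len() - 1;
        for i in 0..self.len {
            bigger[i] = self.data[(self.head + i) & mask];
        }
        self.data = bigger;
        self.head = 0;
    }
}

fn main() {
    let mut rb = RingBuffer::new(4);
    rb.push(&[1, 2, 3, 4, 5]); // forces one doubling to capacity 8
    let mut out = [0u8; 8];
    let n = rb.pop(&mut out);
    assert_eq!(&out[..n], &[1, 2, 3, 4, 5]);
}
```

The growth check is the part the first bullet calls tricky: when messages are serialized straight into the buffer, the required size is not known before serialization starts, so either the message length must be computed up front or the write must be able to restart after a grow.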
