Add BRP watch_id and bevy/unwatch #16407

Open · wants to merge 6 commits into main

Conversation

@villor (Contributor) commented Nov 16, 2024

Objective

BRP supports watching requests via the +watch suffix, but it has some limitations:

  • There is no way to differentiate one watcher from another, other than by the client-provided params. This makes it hard to manage any state needed for ongoing watching requests, and to detect when a watcher is closed.
  • Watching requests cannot be closed or canceled manually, which can be problematic when using transports where multiple requests are communicated over a single connection (see Add WebSocket support for RemoteHttpPlugin #16403).

Solution

  • Add a server-owned watch_id (RemoteWatchingRequestId = u32) used to uniquely identify a specific watcher.
    • Not to be confused with the id field in JSON-RPC 2.0, which is a client-owned id that may or may not exist and might not be unique.
  • Change the +watch request handling to always send an initial response containing the watch_id of that request, for example:
{
  "jsonrpc": "2.0",
  "id": 5,
  "result": {
    "watch_id": 1
  }
}
  • Update the In param for the watching request handlers to also include the watch_id.
  • Add a bevy/unwatch method which can be used to close a specific watcher (or all watchers); see the example request below.
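
As an illustration (not taken verbatim from this PR), an unwatch round trip could look something like the following; the id and watch_id values are made up, and the params/result shapes follow the bevy/unwatch doc comment further down in this diff:

{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "bevy/unwatch",
  "params": {
    "watch_id": 1
  }
}

The server would then answer with a plain JSON-RPC response whose result is null:

{
  "jsonrpc": "2.0",
  "id": 6,
  "result": null
}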

Testing

  • Tested together with Add WebSocket support for RemoteHttpPlugin #16403.
    • ✅ +watch-requests now return their watch_id
    • ✅ bevy/unwatch works both for a specific watch_id and for all watchers
  • I have also used this approach while experimenting with bevy_remote_inspector, where I use the watch_id to detect closed watchers and perform cleanup in a separate system.

@BenjaminBrienen added the C-Feature, D-Straightforward, S-Needs-Review, and A-Networking labels on Nov 17, 2024
@alice-i-cecile (Member) left a comment

LGTM, but if you have time, swapping to a named struct to avoid the complex tuples would be nice.

@@ -348,7 +364,7 @@ pub fn process_remote_get_request(In(params): In<Option<Value>>, world: &World)

/// Handles a `bevy/get+watch` request coming from a client.
pub fn process_remote_get_watching_request(
-    In(params): In<Option<Value>>,
+    In((_, params)): In<RemoteWatchingSystemParams>,
Contributor:

Why are we providing the watch_id if it is not used anywhere in the watch systems?
In case it might be used in the future?

villor (Contributor, Author):

The idea is to give watch handlers a way to identify which watcher they are handling, giving them a unique key for any state/cache that needs to be kept per watcher rather than per method.

As you’ve noticed, it is not used by any of the built-in methods, as they are not stateful. But I have encountered the need for watcher state while building some custom methods.

I’m not completely certain this is the best solution, though; there are other ways. For example, each watcher could have its own SystemState, allowing the use of Local instead of managing state with ids and hash maps. Another benefit is that cleanup would be automatic.
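
As a hypothetical sketch of that first idea (not code from this PR): a custom watch handler could key its state on the watch_id via a resource-backed cache. The WatcherCache resource and frame_counters field below are invented for the example, and the In tuple is assumed to be (watch_id, params), matching how RemoteWatchingSystemParams is destructured in the diff above.

use std::collections::HashMap;

use bevy::prelude::*;
use bevy::remote::BrpResult;
use serde_json::Value;

// Hypothetical per-watcher state, keyed by the server-owned watch_id.
#[derive(Resource, Default)]
struct WatcherCache {
    frame_counters: HashMap<u32, u64>,
}

// Sketch of a stateful custom watch handler: the first element of the In tuple
// is the watch_id introduced by this PR, the second is the usual optional params.
fn process_my_custom_watching_request(
    In((watch_id, _params)): In<(u32, Option<Value>)>,
    mut cache: ResMut<WatcherCache>,
) -> BrpResult<Option<Value>> {
    // State lives per watcher rather than per method.
    let frames = cache.frame_counters.entry(watch_id).or_default();
    *frames += 1;
    Ok(Some(Value::from(*frames)))
}

A separate cleanup system could then drop cache entries for watchers that have been closed, which is exactly the kind of bookkeeping the watch_id enables.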

}
}

Ok(Value::Null)
Contributor:

We might want to return an error if a watch_id is provided that doesn't exist

villor (Contributor, Author):

Thanks! That makes sense.
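
For reference, a minimal sketch of that check (hypothetical: the active_watchers set stands in for whatever structure ends up tracking running watchers, and the BrpError/error_codes shapes are assumed from bevy_remote):

use std::collections::HashSet;

use bevy::remote::{error_codes, BrpError, BrpResult};

// Hypothetical guard inside bevy/unwatch handling: reject unknown watch_ids
// instead of silently returning null.
fn validate_watch_id(watch_id: u32, active_watchers: &HashSet<u32>) -> BrpResult<()> {
    if !active_watchers.contains(&watch_id) {
        return Err(BrpError {
            code: error_codes::INVALID_PARAMS,
            message: format!("Unknown watch_id: {watch_id}"),
            data: None,
        });
    }
    Ok(())
}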

//! `params`:
//! - `watch_id`: The ID of the request to unwatch.
//!
//! `result`: null.
Contributor:

We might want to return an error if a watch_id is provided that doesn't exist

@villor (Contributor, Author) commented Dec 23, 2024

Since an in-engine WebSocket transport might be off the table, this companion PR may seem unnecessary as well. However, I believe this functionality is still useful for any third-party stream-based transports, as well as for a potential future binary TCP transport.

While trying to think about ways to solve the connection limit issue with SSE, I realized one option could be to redesign watchers a bit by using a single SSE endpoint as a stream for all watcher responses. This would ensure any number of watchers can be used, even from a web browser.

With the above in mind, as well as the state stuff, I think we should hold off a bit on this PR and try to figure out a better watcher design in general.

@alice-i-cecile added the S-Blocked label and removed the S-Needs-Review label on Dec 23, 2024