Description
Proposal
The standard library currently does not provide a oneshot channel. This ACP advocates for the addition of a oneshot channel to the standard library (under the std::sync module, next to std::sync::mpsc and soon std::sync::mpmc).
Problem statement
There is currently no precise and succinct way to send ownership of a single value from one thread to another while allowing the receiving thread to block until the value arrives.
Motivating examples or use cases
Since this is a rather simple primitive and there are many different use cases for a synchronous oneshot channel, I'll just explain the use case I came across.
Suppose we have a single "processing" thread that other worker threads send values to. The processing thread receives these values and does some work on each one. Once the processing thread is done with a value, it needs to send it back to the originating thread. Additionally, the sending thread needs to be able to block on receiving the response from the processing thread.
For my use case, the processing thread is a background I/O thread that reads from and writes to disk, and the worker threads send requests for the background thread to read into and write from specified buffers. In this particular scenario, the processing thread sends back a unit, but other use cases might involve it returning the result of a computation that only it can perform.
A oneshot channel makes this relatively easy. Each sending thread creates a oneshot channel and sends the Sender along with its value, keeping the Receiver for itself. It can then block on that Receiver until the processing thread has finished and sends back a response.
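To make the pattern concrete, here is a rough sketch. It is written against a hypothetical oneshot module with the channel/Sender/Receiver API proposed in the solution sketch below (nothing like this exists in std today), and the request payload types are made up for illustration.

use std::sync::mpsc;
use std::thread;

fn main() {
    // Requests carry the value to process plus a oneshot Sender for the reply.
    let (req_tx, req_rx) = mpsc::channel::<(Vec<u8>, oneshot::Sender<usize>)>();

    // The single "processing" thread.
    let processor = thread::spawn(move || {
        for (data, reply) in req_rx {
            // Do the work only this thread can do, then send the result back
            // to the originating thread. `send` consumes the Sender.
            let _ = reply.send(data.len());
        }
    });

    // A worker thread: create a oneshot channel, hand the Sender to the
    // processor along with the value, and block on the Receiver for the reply.
    let worker = thread::spawn(move || {
        let (reply_tx, reply_rx) = oneshot::channel();
        req_tx.send((vec![1, 2, 3], reply_tx)).unwrap();
        assert_eq!(reply_rx.recv().unwrap(), 3);
    });

    worker.join().unwrap();
    processor.join().unwrap();
}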
Solution sketch
The API would be very similar to that of the other channels mentioned above.
#![feature(oneshot_channel)] // or just `#![feature(oneshot)]`?

pub fn channel<T>() -> (Sender<T>, Receiver<T>) { ... }

pub struct Sender<T> { ... }

impl<T> Sender<T> {
    // `send` consumes the Sender, so only one value can ever be sent.
    // Some sort of error handling; the exact error type is left open here.
    pub fn send(self, t: T) -> Result<(), SendError> { ... }
}

pub struct Receiver<T> { ... }

impl<T> Receiver<T> {
    // `recv` consumes the Receiver, so the value can only be received once.
    // Some sort of error handling; the exact error type is left open here.
    pub fn recv(self) -> Result<T, RecvError> { ... }
}
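For illustration, minimal usage of this hypothetical API could look as follows. The key property is that both send and recv take self by value, so the type system rules out a second send or receive; the error behavior when one side is dropped is assumed here to mirror mpsc, though this ACP leaves the exact semantics open.

use std::thread;

fn main() {
    let (tx, rx) = oneshot::channel::<String>();

    thread::spawn(move || {
        // `send` consumes `tx`, so a second send would not even compile.
        let _ = tx.send(String::from("hello"));
    });

    // `recv` consumes `rx` and blocks until the value arrives; presumably it
    // would return an error if the Sender were dropped without sending.
    assert_eq!(rx.recv().unwrap(), "hello");
}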
While other interesting methods could be added, I'll keep this ACP short.
Implementation
We could simply port over the oneshot crate's implementation. I'm not sure whether integrating that code with the standard library internals would enable additional features, but if it did, that would be an added benefit.
Alternatives
There is the oneshot crate, but I think most people would prefer to have this functionality directly in the standard library (especially since a use case of oneshot would likely be alongside std::sync::mpsc, and it feels somewhat weird to use the standard library mpsc but a third-party oneshot).
I can think of a few ways to do this in safe Rust, but the ergonomics of each aren't great.
The first is to just use an mpsc for a single value and then close the channel. This is actually quite a good alternative, and I doubt that a dedicated oneshot channel would outperform mpsc in any way. However, I think one of the main benefits of a oneshot channel is the API: it ensures that only one message can be sent by taking ownership in the interface, preventing accidental multiple sends.
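For reference, a minimal sketch of that alternative. Note that nothing in the types stops a second send; it is only convention that the Sender is used once and then dropped.

use std::sync::mpsc;
use std::thread;

fn main() {
    // Emulating a oneshot channel with mpsc: send one value, then drop the
    // Sender to close the channel. Only convention prevents a second send.
    let (tx, rx) = mpsc::channel::<u32>();

    thread::spawn(move || {
        tx.send(42).unwrap();
        // `tx` is dropped here, closing the channel.
    });

    // Blocks until the value arrives (or errors if the Sender was dropped
    // without sending).
    assert_eq!(rx.recv().unwrap(), 42);
}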
The second is to use an Arc<Mutex<Option<T>>> plus a Condvar and manually synchronize the replace on the sending side with the take on the receiving side under the lock. In my opinion, this is overkill for a single value and error-prone if Condvar and Option<T> are misused.
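A minimal sketch of that approach, to illustrate how much manual coordination is involved and how easy it is to get the wait loop or the take wrong:

use std::sync::{Arc, Condvar, Mutex};
use std::thread;

fn main() {
    // The shared slot: a lock around Option<T> plus a Condvar to signal the
    // receiver once the value has been placed.
    let slot = Arc::new((Mutex::new(None::<u32>), Condvar::new()));
    let sender_slot = Arc::clone(&slot);

    thread::spawn(move || {
        // "Send": replace the None with the value, then wake the receiver.
        let (lock, cvar) = &*sender_slot;
        *lock.lock().unwrap() = Some(42);
        cvar.notify_one();
    });

    // "Receive": wait until the slot is populated, then take the value.
    // If the sending side is dropped without sending, this loop waits
    // forever; there is no built-in disconnection signal.
    let (lock, cvar) = &*slot;
    let mut guard = lock.lock().unwrap();
    while guard.is_none() {
        guard = cvar.wait(guard).unwrap();
    }
    assert_eq!(guard.take(), Some(42));
}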
There is technically a third option, discussed very briefly in this Zulip thread, which involves using OnceLock::wait. However, this approach is also error-prone (it relies on the Arc strong count), not ergonomic, and would probably require unsafe code; even if unsafe were avoided, it would likely not perform well, because the receiving thread would need to spin-wait, potentially indefinitely, for the Arc to drop.
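For completeness, a rough sketch of the happy path of that approach in safe Rust; OnceLock::wait is unstable behind the once_wait feature at the time of writing, and the missing piece is exactly the disconnection handling, which is where the Arc strong count and spin-waiting would come in.

#![feature(once_wait)] // OnceLock::wait is unstable at the time of writing

use std::sync::{Arc, OnceLock};
use std::thread;

fn main() {
    let slot = Arc::new(OnceLock::new());
    let sender_slot = Arc::clone(&slot);

    thread::spawn(move || {
        // "Send" by setting the cell exactly once.
        let _ = sender_slot.set(42u32);
    });

    // "Receive" by blocking until the cell is set. If the sending side is
    // dropped without calling set, this blocks forever; detecting that case
    // is what would require watching the Arc strong count.
    assert_eq!(*slot.wait(), 42);
}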
Links and related work
What happens now?
This issue contains an API change proposal (or ACP) and is part of the libs-api team feature lifecycle. Once this issue is filed, the libs-api team will review open proposals as capability becomes available. Current response times do not have a clear estimate, but may be up to several months.
Possible responses
The libs team may respond in various different ways. First, the team will consider the problem (this doesn't require any concrete solution or alternatives to have been proposed):
- We think this problem seems worth solving, and the standard library might be the right place to solve it.
- We think that this probably doesn't belong in the standard library.
Second, if there's a concrete solution:
- We think this specific solution looks roughly right, approved, you or someone else should implement this. (Further review will still happen on the subsequent implementation PR.)
- We're not sure this is the right solution, and the alternatives or other materials don't give us enough information to be sure about that. Here are some questions we have that aren't answered, or rough ideas about alternatives we'd want to see discussed.