Conversation

@klihub klihub commented Oct 9, 2025

If our connection gets closed before we had a chance to get Configure()'d by the runtime, cancel Start()'s wait for the result by letting it know about the failure. Otherwise the stub might get stuck in Start().
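
Roughly, the idea can be illustrated with the following minimal, self-contained sketch (illustrative only, not the actual stub code; the stand-in errClosed plays the role of ttrpc.ErrClosed, and Start()/Configure() are reduced to the channel handshake):

package main

import (
	"errors"
	"fmt"
	"time"
)

// errClosed stands in for ttrpc.ErrClosed in this sketch.
var errClosed = errors.New("connection closed")

type stub struct {
	cfgErrC chan error // buffered, capacity 1
}

// connClosed reports an early connection loss without ever blocking:
// if an error is already pending in the channel, the extra send is dropped.
func (stub *stub) connClosed() {
	select {
	case stub.cfgErrC <- errClosed:
	default:
	}
}

// start waits for either Configure()'s result or a connection-loss error.
func (stub *stub) start() error {
	return <-stub.cfgErrC
}

func main() {
	s := &stub{cfgErrC: make(chan error, 1)}
	go func() {
		time.Sleep(10 * time.Millisecond)
		s.connClosed() // the connection is lost before Configure() ever runs
	}()
	fmt.Println("Start() returned:", s.start()) // no longer stuck waiting
}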

@klihub klihub requested review from chrishenzie and mikebrow October 9, 2025 07:09
func (stub *stub) connClosed() {
select {
// if our connection gets closed before we get Configure()'d, let Start() know
case stub.cfgErrC <- ttrpc.ErrClosed:

Member

Since we are overloading cfgErrC, is there any chance we end up with two writers to the channel in one Start()?

Contributor

So if connClosed() writes to cfgErrC while Configure() is running, the deferred send in Configure() would block forever and leak a goroutine?

What if we make the send in Configure non-blocking as well? That would ensure whichever function gets there first wins.

defer func() {
	select {
	case stub.cfgErrC <- retErr:
	default:
	}
}()

What do you think?

Member Author
@klihub klihub Oct 9, 2025

I am not sure, because I don't know (and did not try to check or test) how racy a socket close shortly after a ttrpc message has been sent over the same socket can be, with respect to ttrpc delivering the message versus invoking the onClose() callback. So I wanted to err on the side of safety: although cfgErrC is a buffered channel with a capacity of 1, I still do an attempted, non-blocking send here with a select, so we can't get stuck at this point. If the send fails, it means there is already a pending, unreceived error in the channel, which will nudge Start() out of its wait-receive, so no harm is done if the extra ttrpc.ErrClosed attempted here is lost. And if Configure() has not failed yet, or its failure has not been delivered over the channel yet, the channel is empty and the send succeeds, which again nudges Start() out of the wait-receive.

Member Author

So if connClosed() writes to cfgErrC while Configure() is running, the deferred send in Configure() would block forever and leak a goroutine?

Hmm, I think it shouldn't. The channel is buffered with a capacity of 1. We always receive exactly one error, sent by either Configure() or connClosed(), whichever comes first, and we attempt at most two sends. So even if both sends go through, one error stays buffered in the channel, which should not be a problem either: if the stub is ever re-Start()ed (so we go through all of this again), it creates a new cfgErrC channel.

What if we make the send in Configure non-blocking as well? That would ensure whichever function gets there first wins.

defer func() {
	select {
	case stub.cfgErrC <- retErr:
	default:
	}
}()

What do you think?

Yes, I think that is a good idea. I updated the PR accordingly.
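
For reference, the resulting interplay can be shown with a minimal standalone sketch (again not the actual stub code): both sides attempt a non-blocking send into the capacity-1 channel, Start() receives whichever error got there first, and the losing send is simply dropped.

package main

import (
	"errors"
	"fmt"
)

func main() {
	// Buffered with capacity 1, as in the discussion above.
	cfgErrC := make(chan error, 1)

	// trySend mimics the non-blocking sends in Configure()'s deferred
	// function and in connClosed(): it never blocks the caller.
	trySend := func(err error) bool {
		select {
		case cfgErrC <- err:
			return true
		default:
			return false // an error is already pending; drop this one
		}
	}

	// Both sides report, in whichever order the race happens to produce.
	fmt.Println("Configure() send accepted:", trySend(errors.New("configure failed")))
	fmt.Println("connClosed() send accepted:", trySend(errors.New("connection closed")))

	// Start() receives whichever error got there first; neither sender
	// blocked, and anything left over would be discarded anyway, since
	// a re-Start() creates a fresh cfgErrC.
	fmt.Println("Start() sees:", <-cfgErrC)
}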

Member

@mikebrow mikebrow left a comment

LGTM

@klihub klihub force-pushed the fixes/cancel-start-on-early-conn-loss branch from bf9edb1 to d8515b0 Compare October 10, 2025 08:26
If our connection gets closed before we had a chance to get
Configure()'d by the runtime, cancel Start()'s wait for the
result by letting it know about the failure.

Signed-off-by: Krisztian Litkey <krisztian.litkey@intel.com>
@klihub klihub force-pushed the fixes/cancel-start-on-early-conn-loss branch from d8515b0 to 1681e81 Compare October 10, 2025 14:56