[server][dvc] change the client side transfer timeout configurable and close channel once timeout. #1805
Conversation
```diff
       }
-    }, REQUEST_TIMEOUT_IN_MINUTES, TimeUnit.MINUTES);
+    }, blobReceiveTimeoutInMin, TimeUnit.MINUTES);
```
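The hunk above schedules the receive timeout on Netty's event loop. As a rough self-contained sketch of the same pattern, here a plain `ScheduledExecutorService` stands in for the event loop, and `closeChannel` is a hypothetical callback (not a name from this PR) representing closing the Netty channel:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch only: ScheduledExecutorService stands in for Netty's event loop.
class TransferTimeoutSketch {
    // Daemon thread so the scheduler does not keep the JVM alive.
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r);
            t.setDaemon(true);
            return t;
        });
    private final AtomicBoolean completed = new AtomicBoolean(false);

    /** Arm a timeout; if the transfer has not completed when it fires, close the channel. */
    public ScheduledFuture<?> scheduleTimeout(long timeout, TimeUnit unit, Runnable closeChannel) {
        return scheduler.schedule(() -> {
            if (!completed.get()) {
                // Timed out: close the channel so the peer observes a write
                // failure and can release its resources as well.
                closeChannel.run();
            }
        }, timeout, unit);
    }

    /** Call when the last file of the transfer has been received. */
    public void markCompleted() {
        completed.set(true);
    }
}
```

Making the delay a parameter (here `timeout`/`unit`, in the PR `blobReceiveTimeoutInMin`) is what turns the previously hard-coded `REQUEST_TIMEOUT_IN_MINUTES` into a configurable value.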
For connection establishment we should certainly have a timeout, but for the blob transfer itself, shall we just use the server timeout instead of both server and client timeouts, since a misconfiguration can cause weird issues?
It is fine to enforce only the client-side timeout: when the client timeout fires, it closes the channel, the server receives an exception during write, and it can close its channel as well.
Yes, let's only have a configurable timeout on the server side. Removed the timeout logic on the client side.
We should add a test that simulates concurrent blob transfer requests to make sure the global limit works.
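A sketch of what such a test could exercise, assuming the global limit is enforced with a `Semaphore` (the real handler's mechanism may differ; `GlobalLimitSketch`, `handleRequest`, and `maxObserved` are hypothetical names for illustration):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a globally limited request handler, for testing the limit
// under concurrency. Not the actual P2PFileTransferServerHandler code.
class GlobalLimitSketch {
    private final Semaphore globalPermits;                       // global concurrent-transfer limit
    private final AtomicInteger inFlight = new AtomicInteger();  // currently running transfers
    private final AtomicInteger maxObserved = new AtomicInteger();

    public GlobalLimitSketch(int limit) {
        this.globalPermits = new Semaphore(limit);
    }

    /** Simulate handling one blob transfer request; returns false if rejected by the limit. */
    public boolean handleRequest(long workMillis) throws InterruptedException {
        if (!globalPermits.tryAcquire()) {
            return false; // limit reached: reject instead of queueing
        }
        try {
            int current = inFlight.incrementAndGet();
            maxObserved.accumulateAndGet(current, Math::max);
            Thread.sleep(workMillis); // simulated transfer work
            return true;
        } finally {
            inFlight.decrementAndGet();
            globalPermits.release();
        }
    }

    public int maxObserved() {
        return maxObserved.get();
    }
}
```

A test would then fire many concurrent requests and assert that the observed in-flight maximum never exceeds the configured limit.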
LGTM, thanks!
Problem Statement
When onboarding the blob transfer bootstrap feature to a large store (e.g., 10GB per partition, 120GB per host), the transfer time is so long that it triggers a client-side timeout exception. Upon reaching the timeout, a partition cleanup is performed before moving to the next host.
However, during the cleanup, the channels are not closed and Netty keeps receiving transferred files. If files are deleted while checksum validation is in progress, checksum failures occur; these trigger the exceptionCaught method, which eventually closes the channel.
As a result, cleanups are incomplete: some files are deleted, while others that are still being transferred, or that are created after the cleanup begins, remain. This race condition arises because file transfers and cleanups run concurrently.
Ultimately, even if the blob transfer fails and the bootstrap falls back to Kafka ingestion, the incomplete cleanup leaves residual files that corrupt the database.
Solution
Code changes
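The fix described in the problem statement comes down to ordering: close the channel, and wait for the close to complete, before deleting partition files, so no new bytes land after cleanup starts. A minimal sketch of that ordering, with a hypothetical `channelClosed` latch standing in for Netty's close future (class and method names here are illustrative, not the PR's actual code):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: cleanup is gated on confirmed channel closure to avoid the
// race between in-flight file writes and partition file deletion.
class CleanupOrderingSketch {
    private final CountDownLatch channelClosed = new CountDownLatch(1);
    private final AtomicBoolean receiving = new AtomicBoolean(true);

    /** Stop receiving and signal that the channel is fully closed. */
    public void closeChannel() {
        receiving.set(false);      // no more file writes after this point
        channelClosed.countDown(); // analogous to Netty's closeFuture completing
    }

    /** Clean up only after the channel is confirmed closed; returns false if it is still open. */
    public boolean cleanupPartition(long waitMillis) throws InterruptedException {
        if (!channelClosed.await(waitMillis, TimeUnit.MILLISECONDS)) {
            return false; // channel still open: deleting files now would race with writes
        }
        // Safe to delete partition files: no concurrent writes remain.
        return true;
    }
}
```

With this ordering, the timeout path first closes the channel and only then runs the partition cleanup, eliminating the residual-file corruption described above.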
Concurrency-Specific Checks
Both reviewer and PR author to verify:
- Proper synchronization mechanisms (e.g., synchronized, RWLock) are used where needed.
- Thread-safe collections (e.g., ConcurrentHashMap, CopyOnWriteArrayList) are used.

How was this PR tested?
Does this PR introduce any user-facing or breaking changes?