Replies: 2 comments · 2 replies
- @mkeskells can you document your back pressure ideas here?
1 reply
- Perhaps we could write the records to local files and then upload them to permanent storage on a flush. In this way we should avoid the issue where we crash after we have written a record to storage but before Kafka has committed it; in that case I think a restart would result in duplicate records in the storage.
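A minimal sketch of that idea as a Kafka Connect `SinkTask`: records are appended to a local spool file in `put`, and nothing reaches permanent storage until the framework calls `flush`, just before it commits offsets. The class name and the `uploadToPermanentStorage` hook are hypothetical placeholders, not the code from #319:

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Collection;
import java.util.Map;

import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.errors.ConnectException;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

/** Hypothetical sketch: spool records locally, upload only on flush. */
public abstract class LocalSpoolSinkTask extends SinkTask {

    private Path spoolFile;
    private BufferedWriter writer;

    @Override
    public void start(Map<String, String> props) {
        try {
            openSpool();
        } catch (IOException e) {
            throw new ConnectException("Failed to create local spool file", e);
        }
    }

    @Override
    public void put(Collection<SinkRecord> records) {
        try {
            for (SinkRecord record : records) {
                // Placeholder serialisation; a real connector would use
                // its configured output format.
                writer.write(String.valueOf(record.value()));
                writer.newLine();
            }
        } catch (IOException e) {
            throw new ConnectException("Failed to write to local spool file", e);
        }
    }

    @Override
    public void flush(Map<TopicPartition, OffsetAndMetadata> currentOffsets) {
        try {
            writer.flush();
            // Data leaves the local disk only here. A crash before this
            // point loses nothing: offsets were not committed, so Kafka
            // re-delivers the records and we re-spool them locally.
            uploadToPermanentStorage(spoolFile);
            writer.close();
            Files.delete(spoolFile);
            openSpool();
        } catch (IOException e) {
            throw new ConnectException("Failed to upload spool file", e);
        }
    }

    private void openSpool() throws IOException {
        spoolFile = Files.createTempFile("sink-spool-", ".tmp");
        writer = Files.newBufferedWriter(spoolFile);
    }

    /** Hypothetical upload hook; S3, GCS, etc. would plug in here. */
    protected abstract void uploadToPermanentStorage(Path file) throws IOException;
}
```

One caveat: this removes duplicates from crashes before the upload, but a crash between the upload and the offset commit can still replay records, so the upload itself would need to be idempotent (for example, deterministic object names) for full protection.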
- I would like to introduce a feature that ensures we don't buffer too many records in memory before we flush to files.

  There is a basic implementation of this in #319, but it is intermingled with, and depends on, several other features in the same PR.

  I think we need some support for back pressure, and this can evolve into a more sophisticated solution; see the sketch after this comment.

  The goals are:

  It isn't expected that this can be done without some configuration changes.

  Related to #270 and #259.
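Here is a minimal sketch of what basic back pressure could look like in a Kafka Connect `SinkTask`, using the `pause`/`resume` methods that `SinkTaskContext` provides. The class name, the `buffer.max.bytes` config key, and the `estimateSize`/`buffer`/`drainBufferToFiles` hooks are all hypothetical placeholders, not the implementation from #319:

```java
import java.util.Collection;
import java.util.Map;

import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

/** Hypothetical sketch: pause consumption when the buffer is full. */
public abstract class BackPressureSinkTask extends SinkTask {

    // Hypothetical config key; the real name is one of the
    // configuration changes this feature would need.
    public static final String MAX_BUFFERED_BYTES_CONFIG = "buffer.max.bytes";

    private long maxBufferedBytes;
    private long bufferedBytes;
    private boolean paused;

    @Override
    public void start(Map<String, String> props) {
        maxBufferedBytes = Long.parseLong(
                props.getOrDefault(MAX_BUFFERED_BYTES_CONFIG, "67108864")); // 64 MiB
    }

    @Override
    public void put(Collection<SinkRecord> records) {
        for (SinkRecord record : records) {
            bufferedBytes += estimateSize(record);
            buffer(record);
        }
        // Back pressure: stop fetching from Kafka until a flush drains
        // the buffer, instead of growing it without bound.
        if (!paused && bufferedBytes >= maxBufferedBytes) {
            context.pause(assignedPartitions());
            paused = true;
        }
    }

    @Override
    public void flush(Map<TopicPartition, OffsetAndMetadata> currentOffsets) {
        drainBufferToFiles();
        bufferedBytes = 0;
        if (paused) {
            context.resume(assignedPartitions());
            paused = false;
        }
    }

    private TopicPartition[] assignedPartitions() {
        return context.assignment().toArray(new TopicPartition[0]);
    }

    /** Hypothetical hooks that the concrete connector would supply. */
    protected abstract long estimateSize(SinkRecord record);
    protected abstract void buffer(SinkRecord record);
    protected abstract void drainBufferToFiles();
}
```

Pausing the assigned partitions stops the consumer from fetching more records, so memory use stays bounded at roughly the configured limit plus one in-flight batch, without blocking the task thread. Rebalance handling (re-pausing after a partition reassignment) is omitted for brevity.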