Avoid change streams on the storage database #276
Background
The MongoDB storage adapter relied on change streams to detect changes to:
The idea is that each API process is "notified" of changes, so that:
The issue
The issue is that change streams can have high overhead on a cluster. A change stream is effectively reading all changes in the oplog, then filtering them down to the watched collection/pipeline.
This is fine when you only have a small number of open change streams, or low volumes of writes. However, we have cases where there are 100+ change streams open at a time. And even though those collections are not modified often, when you also have a 20k/s write rate (which can happen when reprocessing sync rules), you suddenly end up with 100k document scans/s, even though very few documents are returned.
I cannot find any good documentation on this; the performance impact is not mentioned in the docs. But this script demonstrates the issue quite clearly: https://gist.github.com/rkistner/0d898880b0a0a48d1557f64e01992795
I also suspect that MongoDB has an optimization for this issue in Flex clusters, but that code is private unfortunately.
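For context, here is a minimal sketch of the kind of per-collection watch the old approach boils down to. The connection string, database, and collection names are placeholders, not taken from the adapter:

```ts
import { MongoClient } from 'mongodb';

async function main() {
  const client = new MongoClient('mongodb://localhost:27017');
  const db = client.db('powersync_storage');

  // Each watch() opens its own change stream. Server-side, every oplog entry
  // is scanned and then filtered against this collection/pipeline, so many
  // mostly-idle streams still pay for the cluster's full write rate.
  const stream = db.collection('sync_rules').watch();

  for await (const change of stream) {
    // React to the change, e.g. reload sync rules or recompute checkpoints.
    console.log('sync_rules changed:', change.operationType);
  }
}

main().catch(console.error);
```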
The fix
The fix is to not use watch/change streams in the storage adapter. The actual implementation is different for read and write checkpoints.
Read checkpoints
Diff for this part
We implement this similarly to the NOTIFY functionality we use for Postgres storage: checkpoint events are written to a `checkpoint_events` capped collection, which API processes follow using a tailable cursor. An alternative would be to just poll sync_rules. However, this method has lower latency and reduced overhead when the instance is mostly idle.
Tailable cursors are an under-documented feature, but they do appear to work well for this case. They give functionality similar to change streams, with better efficiency, at the cost of requiring explicit writes to the collection.
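A minimal sketch of the capped collection + tailable cursor pattern, with collection name and size assumed rather than taken from the diff:

```ts
import { MongoClient } from 'mongodb';

async function main() {
  const client = new MongoClient('mongodb://localhost:27017');
  const db = client.db('powersync_storage');

  // One-time setup: a small capped collection acting as the event log.
  // The size here is an arbitrary assumption for the sketch.
  await db.createCollection('checkpoint_events', { capped: true, size: 1024 * 1024 });

  // Writer side: whenever a new checkpoint is committed, append an event.
  await db.collection('checkpoint_events').insertOne({ created_at: new Date() });

  // Reader side: a tailable, awaitData cursor blocks on the server until new
  // documents arrive, instead of scanning the whole oplog like a change stream.
  const cursor = db.collection('checkpoint_events').find({}, {
    tailable: true,
    awaitData: true,
    maxAwaitTimeMS: 10_000
  });

  for await (const event of cursor) {
    // Re-read the latest checkpoint state in response to the notification.
    console.log('checkpoint event', event);
  }
}

main().catch(console.error);
```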
Write checkpoints
Diff for this part
For write checkpoints, we now use the same mechanism as for bucket_data and parameter_data: on each new read checkpoint, we read all the write checkpoints created after the previous read checkpoint (a rough sketch follows at the end of this section).
What makes this a larger change is that:
a. Custom write checkpoints are now persisted in the same batch/transaction as other data, and get a matching op_id.
b. Managed write checkpoints get a `processed_at_lsn` field, populated when a read checkpoint is committed. We may change this to also use an op_id in the future, but that would complicate the current implementation a bit.

This reverts big parts of #230, but does not go back to the old approach. It actually results in less code and a simpler architecture overall.
TODO