
Conversation

bdeggleston (Member)

No description provided.

```java
// ... for correctness vs complex protocols for topology updates. You could make the case that mutable
// state would be a better tradeoff for node replacement, but it seems likely that handling token
// movements will be simpler if we use a copy-on-write pattern for topology changes.
private final ReentrantReadWriteLock shardLock = new ReentrantReadWriteLock();
```
Contributor:

Practically it should be fine to keep this lock unfair, but I wonder if we'll find workloads with high enough read and write query throughput to starve topology changes. It could be worth using StampedLock here; we don't seem to require reentrancy.
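Not from the patch, but a minimal sketch of what the suggestion might look like, assuming the guarded state is a single immutable snapshot reference; the `CopyOnWriteRef` name and shape are hypothetical. Optimistic reads normally don't hold the lock at all, so heavy read traffic is less likely to starve topology updates than with an unfair ReentrantReadWriteLock:

```java
import java.util.concurrent.locks.StampedLock;

// Hypothetical holder for an immutable snapshot guarded by a StampedLock.
// Note that StampedLock is not reentrant, matching the observation above.
final class CopyOnWriteRef<T>
{
    private final StampedLock lock = new StampedLock();
    private T value;

    CopyOnWriteRef(T initial)
    {
        this.value = initial;
    }

    T get()
    {
        long stamp = lock.tryOptimisticRead();
        T snapshot = value;
        if (!lock.validate(stamp))
        {
            // A writer raced us; fall back to a pessimistic read lock.
            stamp = lock.readLock();
            try
            {
                snapshot = value;
            }
            finally
            {
                lock.unlockRead(stamp);
            }
        }
        return snapshot;
    }

    void set(T next)
    {
        long stamp = lock.writeLock();
        try
        {
            value = next;
        }
        finally
        {
            lock.unlockWrite(stamp);
        }
    }
}
```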

Member Author:

I agree this is probably fine, but it would be worth looking at better solutions. Would you mind if we just added a TODO to consider better options here before merge, instead of addressing it in this ticket?

Contributor:

Yeah definitely, fine to defer

```java
private boolean isTrackedReplicationEnabled(String keyspace)
{
    return ClusterMetadata.current().schema.getKeyspaceMetadata(keyspace).useMutationTracking();
}
```
Contributor:

Null check for dropped keyspace?

Member Author:

StreamPlan assumes that you're only sending/requesting streams for keyspaces that exist, and the rest of the class has the same null-unsafety around keyspace lookups. I can add a more descriptive error message if you like, but if the keyspace doesn't exist, other parts of StreamPlan will have already thrown an NPE before getting here.
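For reference, a sketch of the more descriptive variant mentioned above, assuming getKeyspaceMetadata returns a nullable KeyspaceMetadata; the exception message is hypothetical, the lookup and useMutationTracking() call are from the diff:

```java
private boolean isTrackedReplicationEnabled(String keyspace)
{
    KeyspaceMetadata metadata = ClusterMetadata.current().schema.getKeyspaceMetadata(keyspace);
    if (metadata == null)
        // Hypothetical error message; fails fast with context instead of an NPE.
        throw new IllegalStateException("Cannot check mutation tracking for unknown or dropped keyspace: " + keyspace);
    return metadata.useMutationTracking();
}
```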


```java
public long serializedSize(int version)
{
    return 0;
}
```
Contributor:

Intentional?

Member Author:

It is, yes. OutgoingStreamMessage does the same thing, since these messages aren't serialized into a buffer but written directly to the socket.
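For context, the same method with the rationale captured in a comment; the comment text is an addition based on the explanation above, the body is from the diff:

```java
public long serializedSize(int version)
{
    // Like OutgoingStreamMessage, this message is never serialized into a
    // buffer; it is written directly to the socket, so the reported size
    // is unused and zero is returned deliberately.
    return 0;
}
```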

```java
// end-of-stream marker
out.writeBoolean(false);

session.logStreamSent(this);
```
Contributor:

Should we avoid side effects like timeout scheduling during serde?

Member Author:

This is the same pattern used by sstable streaming.
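For readers without the full diff, a rough sketch of the write path the end-of-stream marker belongs to; the loop, field names, and the boolean-prefix framing are assumptions inferred from the excerpt, not the actual patch:

```java
public void serialize(DataOutputPlus out, int version) throws IOException
{
    for (Mutation mutation : mutations)
    {
        // Hypothetical framing inferred from the end-of-stream marker:
        // each mutation is preceded by a `true` continuation flag.
        out.writeBoolean(true);
        Mutation.serializer.serialize(mutation, out, version);
    }

    // end-of-stream marker
    out.writeBoolean(false);

    // The side effect in question: bookkeeping (and possibly timeout
    // scheduling) happens inside the serialization path, mirroring how
    // sstable streaming reports progress while writing to the socket.
    session.logStreamSent(this);
}
```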

```java
                 mutation.getKeyspaceName(),
                 mutation.key().getToken());

mutation.apply();
```
Contributor:

I was thinking we'd receive all the mutation logs, then replay them to apply everything before completing the session, rather than deserializing and applying them one at a time.

Member Author:

I’d thought about doing that as part of this initial implementation, but it wasn’t really clear what would be gained from the additional state changes. IIRC the motivation for staging sstables before making them visible to reads is to prevent data resurrection, but that’s not a concern with read reconciliation. Additionally, I think we’ll only be doing log streaming like this for pending ranges, so any data written here won’t actually be read until the streams complete successfully anyway.
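To make the trade-off concrete, a minimal sketch of the apply-as-you-deserialize approach being defended here; names and framing are hypothetical and match the write-side sketch above:

```java
// Apply each mutation as it arrives. Since these streams only cover pending
// ranges, nothing written here is read until the session completes, so no
// staging step is needed before making the data visible.
while (in.readBoolean()) // matches the end-of-stream marker on the write side
{
    Mutation mutation = Mutation.serializer.deserialize(in, version);
    mutation.apply();
}
```

The alternative raised above would instead buffer the deserialized mutations and replay them in a second pass before completing the session.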
