PeerDAS: Downscore peers that request too many data columns via RPC #15205
base: peerDAS
Conversation
- …sive data column RPC requests
- …yRoot handlers - track total columns (count * columns) for ByRange and the number of requested columns for ByRoot
- …corer - Add comprehensive test suite for configuration behavior: default values, custom config, partial config, and config immutability; fix config handling in the constructor to ensure immutability; initialize config with defaults and selectively override
The criteria chosen for scoring RPC requests and for cutting off bad peers are currently arbitrary. I'm assuming there's already a good idea of what these values should be set to, but if not, I can dig in further.
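For context, here is a sketch of the knobs involved. The field names match the config-handling diff further down, but the struct name, field types, and comments are illustrative assumptions, not the PR's actual definition:

import "time"

// DataColumnRPCRequestScorerConfig is an illustrative stand-in for the new
// scorer's config; only the field names are taken from the diff below.
type DataColumnRPCRequestScorerConfig struct {
	DecayInterval time.Duration // how often the per-peer request counter is decayed
	Decay         uint64        // how much the counter is reduced at each interval
	Threshold     uint64        // request count above which a peer is penalized
	PenaltyFactor float64       // how strongly excess requests lower the peer's score
}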
Questions on changing the general scorer weights, which may not be PeerDAS related.
if config != nil {
	if config.DecayInterval != 0 {
		cfg.DecayInterval = config.DecayInterval
	}
	if config.Decay != 0 {
		cfg.Decay = config.Decay
	}
	if config.Threshold != 0 {
		cfg.Threshold = config.Threshold
	}
	if config.PenaltyFactor != 0 {
		cfg.PenaltyFactor = config.PenaltyFactor
	}
}
Not a direct comment on this PR, but more of a general note: this pattern shows up in quite a few places, and we could consider replacing it with a generic helper like:
func override[T comparable](val T, def T) T {
	var zero T
	if val != zero {
		return val
	}
	return def
}
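For illustration, the nil-guarded block above would then collapse to something like the following sketch (written against the same cfg/config fields shown in the diff, not code from the PR):

if config != nil {
	// override keeps the default whenever the supplied value is the zero value.
	cfg.DecayInterval = override(config.DecayInterval, cfg.DecayInterval)
	cfg.Decay = override(config.Decay, cfg.Decay)
	cfg.Threshold = override(config.Threshold, cfg.Threshold)
	cfg.PenaltyFactor = override(config.PenaltyFactor, cfg.PenaltyFactor)
}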
beacon-chain/p2p/peers/scorers/data_column_rpc_request_scorer.go
for pid := range s.store.Peers() {
	if s.isBadPeerNoLock(pid) != nil {
This accesses the map twice, but it's probably fine.
defer s.store.Unlock()

peerData := s.store.PeerDataGetOrCreate(pid)
peerData.DataColumnRequestCount += uint64(numColumns)
I don't think we can overflow here, but it would be good to double-check.
This now just increments the counter each time it's called instead of taking a number of columns to increment. Do you want an overflow check on peerData.DataColumnRequestCount++?
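If an explicit guard is wanted, a minimal sketch of a saturating increment could look like the following, assuming DataColumnRequestCount stays a uint64 as in the diff above (saturatingInc is a hypothetical helper, not code from the PR):

import "math"

// saturatingInc returns c+1, except when c is already math.MaxUint64,
// in which case it returns c unchanged so the counter never wraps to zero.
func saturatingInc(c uint64) uint64 {
	if c == math.MaxUint64 {
		return c
	}
	return c + 1
}

// Hypothetical call site:
// peerData.DataColumnRequestCount = saturatingInc(peerData.DataColumnRequestCount)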
-	s.setScorerWeight(s.scorers.badResponsesScorer, 0.3)
+	s.setScorerWeight(s.scorers.badResponsesScorer, 0.25)
 	s.scorers.blockProviderScorer = newBlockProviderScorer(store, config.BlockProviderScorerConfig)
 	s.setScorerWeight(s.scorers.blockProviderScorer, 0.0)
 	s.scorers.peerStatusScorer = newPeerStatusScorer(store, config.PeerStatusScorerConfig)
-	s.setScorerWeight(s.scorers.peerStatusScorer, 0.3)
+	s.setScorerWeight(s.scorers.peerStatusScorer, 0.25)
 	s.scorers.gossipScorer = newGossipScorer(store, config.GossipScorerConfig)
-	s.setScorerWeight(s.scorers.gossipScorer, 0.4)
+	s.setScorerWeight(s.scorers.gossipScorer, 0.35)
Why are we changing these values? They don't seem PeerDAS related, no?
I restored the old values and just added a weight of 0.15 for the new scorer, but now they don't add up to 1 anymore. If that's fine, I think we're good to go.
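For reference, restoring the original weights and adding the new scorer at 0.15 gives 0.3 (bad responses) + 0.0 (block provider) + 0.3 (peer status) + 0.4 (gossip) + 0.15 (data column RPC requests) = 1.15 rather than 1.0.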
The latest changes only penalize a peer for requesting recent slots (within the past 16 slots by default). This should avoid penalizing nodes during their initial sync, but a node that consistently requests columns from recent blocks via RPC will be pruned.
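A minimal sketch of that recency filter, assuming slots are plain uint64 values; recencyWindow and isRecent are hypothetical names for illustration, not the PR's actual code:

// recencyWindow is the number of recent slots for which data column requests
// count against a peer (16 by default, per the description above).
const recencyWindow uint64 = 16

// isRecent reports whether a requested slot falls inside the penalized window.
// Older slots are ignored, so a peer backfilling during initial sync is not punished.
func isRecent(currentSlot, requestedSlot uint64) bool {
	return requestedSlot+recencyWindow >= currentSlot
}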
What type of PR is this?
Feature
What does this PR do? Why is it needed?
Downscores peers that request too many data columns via RPC to manage upload bandwidth for nodes.
Which issue(s) does this PR fix?
Part of #14129.
Other notes for review
Acknowledgements