rustfmt: Run on peer_handler.rs #3725

Conversation
👋 Thanks for assigning @TheBlueMatt as a reviewer!
looks good, no code / behavior changes, just formatting. left a minor question
lightning/src/ln/peer_handler.rs (Outdated)

let logger = WithContext::from(&self.logger, peer.their_node_id.map(|p| p.0), None, None);
let logger = WithContext::from(
	&self.logger,
	peer.their_node_id.map(|p| p.0),
Extract into a var to save some lines; there are others like this in the file.
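A minimal sketch of what that extraction might look like, reusing the names from the snippet above:

// Sketch of the suggested extraction; mirrors the call in the snippet above.
let their_node_id = peer.their_node_id.map(|p| p.0);
let logger = WithContext::from(&self.logger, their_node_id, None, None);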
Done.
Addressed most pending feedback.
lightning/src/ln/peer_handler.rs (Outdated)

let logger = WithContext::from(&self.logger, peer.their_node_id.map(|p| p.0), None, None);
let logger = WithContext::from(
	&self.logger,
	peer.their_node_id.map(|p| p.0),
Done.
Force-pushed 5e7f142 to 6403b34.
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##             main    #3725      +/-   ##
==========================================
+ Coverage   89.11%   89.34%   +0.23%
==========================================
  Files         156      156
  Lines      123435   126684    +3249
  Branches   123435   126684    +3249
==========================================
+ Hits       109995   113191    +3196
- Misses      10758    10828      +70
+ Partials     2682     2665      -17
Highlighted a few patterns that could be used to clean up a number of places across the diff.
	None
}
fn handle_reply_channel_range(
	&self, _their_node_id: PublicKey, _msg: msgs::ReplyChannelRange,
All the dummy impls can probably get all their parameters replaced with just _, which would make them more dense and no less readable.
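For illustration, a minimal, self-contained sketch of the suggested pattern; the trait and types here are hypothetical stand-ins, not LDK's actual handler traits:

// Hypothetical trait used only to illustrate the `_` parameter pattern.
trait Handler {
	fn handle_message(&self, node_id: u64, msg: String) -> Option<String>;
}

struct DummyHandler;

impl Handler for DummyHandler {
	// Before: fn handle_message(&self, _node_id: u64, _msg: String) -> Option<String> { None }
	// After: unused parameters collapse to `_`, keeping the dummy impl compact.
	fn handle_message(&self, _: u64, _: String) -> Option<String> {
		None
	}
}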
"no less readable"

Hmm, I tend to disagree on this one. Even if they are not used, it's adding context about what is not being used.
I mean sometimes, sure, but when we're implementing a trait with 20 methods where all of the implementations are just "return nothing", I think we probably don't care too much :p
Force-pushed 6403b34 to 365963c.
MessageSendEvent::SendStfu { ref node_id, ref msg} => {
	let logger = WithContext::from(&self.logger, Some(*node_id), Some(msg.channel_id), None);
MessageSendEvent::SendStfu { ref node_id, ref msg } => {
	let logger = WithContext::from(
Not like it has to happen in this PR, but this really makes it clearer that we need some kind of logger_from_msg utility, maybe in wire.rs.
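A hypothetical sketch of such a helper; the WithContext::from call mirrors the pattern in the snippet above, while the helper's name, signature, and import paths are illustrative assumptions rather than LDK's actual API:

// Illustrative only: wraps the repeated WithContext::from boilerplate at call sites.
// Import paths are assumed here; inside the crate they would be `crate::...` paths.
use bitcoin::secp256k1::PublicKey;
use core::ops::Deref;
use lightning::ln::types::ChannelId;
use lightning::util::logger::{Logger, WithContext};

fn logger_from_msg<'a, L: Deref>(
	logger: &'a L, node_id: &PublicKey, channel_id: Option<ChannelId>,
) -> WithContext<'a, L>
where
	L::Target: Logger,
{
	// Same argument order as the calls above: logger, peer id, channel id, payment hash.
	WithContext::from(logger, Some(*node_id), channel_id, None)
}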
Force-pushed a96f916 to 56cf495.
Excuse the delay here. Addressed most pending feedback. Let me know if there's something particular left to address.
The tests could be cleaned up a bit more #3725 (comment).
Force-pushed 56cf495 to b7dbd05.
Excuse the delay here once more! Now rebased and addressed this remaining comment by pulling out some more handlers into intermediate variables as a fixup. Let me know if there's something else you'd like to see.
Basically just #3725 (comment) but a few other nits that look trivial to fix.
Force-pushed b7dbd05 to 6934b64.
Feel free to squash, IMO
Force-pushed 6934b64 to 8f3fad6.
Squashed without further changes.
Ugh, looks like feedback was missed
Force-pushed 8f3fad6 to 8f48f95.
Force-pushed with the following changes:

diff --git a/lightning/src/ln/peer_handler.rs b/lightning/src/ln/peer_handler.rs
index bcae7690f..73c831884 100644
--- a/lightning/src/ln/peer_handler.rs
+++ b/lightning/src/ln/peer_handler.rs
@@ -4420,48 +4420,40 @@ mod tests {
let act_three_with_init_b = fd_b.outbound_data.lock().unwrap().split_off(0);
- assert!(!peer_a
- .peers
- .read()
- .unwrap()
- .get(&fd_a)
- .unwrap()
- .lock()
- .unwrap()
- .handshake_complete());
+ {
+ let peer_a_lock = peer_a.peers.read().unwrap();
+ let handshake_complete =
+ peer_a_lock.get(&fd_a).unwrap().lock().unwrap().handshake_complete();
+ assert!(!handshake_complete);
+ }
+
assert_eq!(peer_a.read_event(&mut fd_a, &act_three_with_init_b).unwrap(), false);
peer_a.process_events();
- assert!(peer_a
- .peers
- .read()
- .unwrap()
- .get(&fd_a)
- .unwrap()
- .lock()
- .unwrap()
- .handshake_complete());
+
+ {
+ let peer_a_lock = peer_a.peers.read().unwrap();
+ let handshake_complete =
+ peer_a_lock.get(&fd_a).unwrap().lock().unwrap().handshake_complete();
+ assert!(handshake_complete);
+ }
let init_a = fd_a.outbound_data.lock().unwrap().split_off(0);
assert!(!init_a.is_empty());
- assert!(!peer_b
- .peers
- .read()
- .unwrap()
- .get(&fd_b)
- .unwrap()
- .lock()
- .unwrap()
- .handshake_complete());
+ {
+ let peer_b_lock = peer_b.peers.read().unwrap();
+ let handshake_complete =
+ peer_b_lock.get(&fd_b).unwrap().lock().unwrap().handshake_complete();
+ assert!(!handshake_complete);
+ }
+
assert_eq!(peer_b.read_event(&mut fd_b, &init_a).unwrap(), false);
peer_b.process_events();
- assert!(peer_b
- .peers
- .read()
- .unwrap()
- .get(&fd_b)
- .unwrap()
- .lock()
- .unwrap()
- .handshake_complete());
+
+ {
+ let peer_b_lock = peer_b.peers.read().unwrap();
+ let handshake_complete =
+ peer_b_lock.get(&fd_b).unwrap().lock().unwrap().handshake_complete();
+ assert!(handshake_complete);
+ }
// Make sure we're still connected.
@@ -4565,17 +4557,10 @@ mod tests {
}
- assert_eq!(
- peers[0]
- .peers
- .read()
- .unwrap()
- .get(&fd_a)
- .unwrap()
- .lock()
- .unwrap()
- .gossip_broadcast_buffer
- .len(),
- OUTBOUND_BUFFER_LIMIT_DROP_GOSSIP
- );
+ {
+ let peer_a_lock = peers[0].peers.read().unwrap();
+ let buf_len =
+ peer_a_lock.get(&fd_a).unwrap().lock().unwrap().gossip_broadcast_buffer.len();
+ assert_eq!(buf_len, OUTBOUND_BUFFER_LIMIT_DROP_GOSSIP);
+ }
// Check that if a broadcast message comes in from the channel handler (i.e. it is an
@@ -4583,17 +4568,11 @@ mod tests {
cfgs[0].chan_handler.pending_events.lock().unwrap().push(msg_ev);
peers[0].process_events();
- assert_eq!(
- peers[0]
- .peers
- .read()
- .unwrap()
- .get(&fd_a)
- .unwrap()
- .lock()
- .unwrap()
- .gossip_broadcast_buffer
- .len(),
- OUTBOUND_BUFFER_LIMIT_DROP_GOSSIP + 1
- );
+
+ {
+ let peer_a_lock = peers[0].peers.read().unwrap();
+ let buf_len =
+ peer_a_lock.get(&fd_a).unwrap().lock().unwrap().gossip_broadcast_buffer.len();
+ assert_eq!(buf_len, OUTBOUND_BUFFER_LIMIT_DROP_GOSSIP + 1);
+ }
// Finally, deliver all the messages and make sure we got the right count. Note that there
@@ -4605,14 +4584,11 @@ mod tests {
drain_queues!();
- assert!(peers[0]
- .peers
- .read()
- .unwrap()
- .get(&fd_a)
- .unwrap()
- .lock()
- .unwrap()
- .gossip_broadcast_buffer
- .is_empty());
+ {
+ let peer_a_lock = peers[0].peers.read().unwrap();
+ let empty =
+ peer_a_lock.get(&fd_a).unwrap().lock().unwrap().gossip_broadcast_buffer.is_empty();
+ assert!(empty);
+ }
+
assert_eq!(
cfgs[1].routing_handler.chan_anns_recvd.load(Ordering::Relaxed),
🔔 1st Reminder: Hey @TheBlueMatt! This PR has been waiting for your review.
Landing, but will open a quick PR to address the followups.
let their_node_id = peer.their_node_id.map(|p| p.0);
let logger = WithContext::from(&self.logger, their_node_id, None, None);
// `unwrap` SAFETY: `their_node_id` is guaranteed to be `Some` after the handshake
let node_id = peer.their_node_id.unwrap().0;
We map then unwrap the same thing two lines apart. Seems like we can just unwrap.
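A sketch of the simplification being suggested, reusing the lines from the snippet above:

// Sketch: derive the logger's node id from the same unwrap instead of a separate map.
// `unwrap` SAFETY: `their_node_id` is guaranteed to be `Some` after the handshake
let node_id = peer.their_node_id.unwrap().0;
let logger = WithContext::from(&self.logger, Some(node_id), None, None);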
// Check that the peers map is consistent with the
// node_id_to_descriptor map, as this has been broken
// before.
This comment is about the debug_assert, so it should come after the their_node_id wipe.
if let Err(()) = self.message_handler.route_handler.peer_connected(their_node_id, &msg, peer_lock.inbound_connection) {
	log_debug!(logger, "Route Handler decided we couldn't communicate with peer {}", log_pubkey!(their_node_id));
	return Err(PeerHandleError { }.into());
let connection = peer_lock.inbound_connection;
Ha, inbound may be a more readable name :)
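That is, something along the lines of (a sketch based on the line above):

let inbound = peer_lock.inbound_connection;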
@@ -2878,18 +3486,20 @@ impl<Descriptor: SocketDescriptor, CM: Deref, RM: Deref, OM: Deref, L: Deref, CM
	debug_assert!(peer.channel_encryptor.is_ready_for_encryption());
	debug_assert!(peer.their_node_id.is_some());

	loop { // Used as a `goto` to skip writing a Ping message.
	loop {
		// Used as a `goto` to skip writing a Ping message.
This is now ambiguous whether it's referring to the loop or the following line.
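One way to keep it unambiguous would be to leave the comment attached to (or above) the loop itself, for example:

// Used as a `goto` to skip writing a Ping message.
loop {
	// ...
	break;
}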
fn create_network<'a>(
	peer_count: usize, cfgs: &'a Vec<PeerManagerCfg>,
) -> Vec<
	PeerManager<
Could use TestPeer here.
No description provided.