
Commit 248c45a

TheBlueMatt authored and optout21 committed
Do not track HTLC IDs as separate MPP parts which need claiming
When we claim an MPP payment, we need to track which channels have had the preimage durably added to their `ChannelMonitor` to ensure we don't remove the preimage from any `ChannelMonitor`s until all `ChannelMonitor`s have the preimage.

Previously, we tracked each MPP part, down to the HTLC ID, as a part which we needed to get the preimage on disk for. However, this is not necessary - once a `ChannelMonitor` has a preimage, it applies it to all inbound HTLCs with the same payment hash.

Further, this can cause a channel to wait on itself in cases of high-latency synchronous persistence -
 * If we receive an MPP payment for which multiple parts came to us over the same channel,
 * and claim the MPP payment, creating a `ChannelMonitorUpdate` for the first part but enqueueing the remaining HTLC claim(s) in the channel's holding cell,
 * and we receive a `revoke_and_ack` for the same channel before the `ChannelManager::claim_payment` method completes (as each claim waits for the `ChannelMonitorUpdate` persistence),
 * we will cause the `ChannelMonitorUpdate` for that `revoke_and_ack` to go into the blocked set, waiting on the MPP parts to be fully claimed,
 * but when `claim_payment` goes to add the next `ChannelMonitorUpdate` for the MPP claim, it will be placed in the blocked set, since the blocked set is non-empty.

Thus, we'll end up with a `ChannelMonitorUpdate` in the blocked set which is needed to unblock the channel since it is a part of the MPP set which blocked the channel.
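To illustrate the data-structure change this commit makes, here is a minimal standalone sketch. The types and names below are hypothetical stand-ins, not LDK's actual `PendingMPPClaim` (which, per the diff below, tracks `Vec`s of `(PublicKey, OutPoint, ChannelId)`); the point is only that pending-preimage state is tracked per channel, so multiple HTLC parts received over one channel collapse into a single entry and a single durable monitor write completes them all.

```rust
// Standalone sketch (hypothetical types, not LDK's): dedupe MPP-claim tracking
// by channel rather than by (channel, HTLC ID).

use std::collections::BTreeSet;

/// Hypothetical stand-in for (counterparty_node_id, funding_txo, channel_id).
type ChannelKey = (u64, u64, u64);

struct PendingMppClaim {
    channels_without_preimage: BTreeSet<ChannelKey>,
    channels_with_preimage: BTreeSet<ChannelKey>,
}

impl PendingMppClaim {
    /// Build from the claimed HTLC parts; parts received over the same channel
    /// collapse to one entry because one durable preimage write covers them all.
    fn new(parts: impl IntoIterator<Item = (ChannelKey, /* htlc_id */ u64)>) -> Self {
        Self {
            channels_without_preimage: parts.into_iter().map(|(chan, _htlc_id)| chan).collect(),
            channels_with_preimage: BTreeSet::new(),
        }
    }

    /// Called once a channel's monitor durably holds the preimage. Returns true
    /// when every channel involved in the MPP claim has it, i.e. nothing is left
    /// blocking further monitor updates for those channels.
    fn channel_has_preimage(&mut self, chan: ChannelKey) -> bool {
        if self.channels_without_preimage.remove(&chan) {
            self.channels_with_preimage.insert(chan);
        }
        self.channels_without_preimage.is_empty()
    }
}

fn main() {
    let chan_a = (1, 10, 100);
    // Two MPP parts (HTLC IDs 0 and 1) arrived over the same channel.
    let mut claim = PendingMppClaim::new([(chan_a, 0), (chan_a, 1)]);
    // A single monitor persistence for that channel completes the whole claim,
    // rather than leaving a second per-HTLC entry that can never be satisfied.
    assert!(claim.channel_has_preimage(chan_a));
    println!("claim fully persisted: {}", claim.channels_without_preimage.is_empty());
}
```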
1 parent e6784c6 commit 248c45a


4 files changed: +290 -26 lines changed


lightning/src/ln/chanmon_update_fail_tests.rs

Lines changed: 222 additions & 0 deletions
@@ -3860,3 +3860,225 @@ fn test_claim_to_closed_channel_blocks_claimed_event() {
 	nodes[1].chain_monitor.complete_sole_pending_chan_update(&chan_a.2);
 	expect_payment_claimed!(nodes[1], payment_hash, 1_000_000);
 }
+
+#[test]
+#[cfg(all(feature = "std", not(target_os = "windows")))]
+fn test_single_channel_multiple_mpp() {
+	use std::sync::atomic::{AtomicBool, Ordering};
+
+	// Test what happens when we attempt to claim an MPP with many parts that came to us through
+	// the same channel with a synchronous persistence interface which has very high latency.
+	//
+	// Previously, if a `revoke_and_ack` came in while we were still running in
+	// `ChannelManager::claim_payment` we'd end up hanging waiting to apply a
+	// `ChannelMonitorUpdate` until after it completed. See the commit which introduced this test
+	// for more info.
+	let chanmon_cfgs = create_chanmon_cfgs(9);
+	let node_cfgs = create_node_cfgs(9, &chanmon_cfgs);
+	let configs = [None, None, None, None, None, None, None, None, None];
+	let node_chanmgrs = create_node_chanmgrs(9, &node_cfgs, &configs);
+	let mut nodes = create_network(9, &node_cfgs, &node_chanmgrs);
+
+	let node_7_id = nodes[7].node.get_our_node_id();
+	let node_8_id = nodes[8].node.get_our_node_id();
+
+	// Send an MPP payment in six parts along the path shown from top to bottom
+	//        0
+	//  1 2 3 4 5 6
+	//        7
+	//        8
+	//
+	// We can in theory reproduce this issue with fewer channels/HTLCs, but getting this test
+	// robust is rather challenging. We rely on having the main test thread wait on locks held in
+	// the background `claim_funds` thread and unlocking when the `claim_funds` thread completes a
+	// single `ChannelMonitorUpdate`.
+	// This thread calls `get_and_clear_pending_msg_events()` and `handle_revoke_and_ack()`, both
+	// of which require `ChannelManager` locks, but we have to make sure this thread gets a chance
+	// to be blocked on the mutexes before we let the background thread wake `claim_funds` so that
+	// the mutex can switch to this main thread.
+	// This relies on our locks being fair, but also on our threads getting runtime during the test
+	// run, which can be pretty competitive. Thus we do a dumb dance to be as conservative as
+	// possible - we have a background thread which completes a `ChannelMonitorUpdate` (by sending
+	// into the `write_blocker` mpsc) but it doesn't run until a mpsc channel sends from this main
+	// thread to the background thread, and then we let it sleep a while before we send the
+	// `ChannelMonitorUpdate` unblocker.
+	// Further, we give ourselves two chances each time, needing 4 HTLCs just to unlock our two
+	// `ChannelManager` calls. We then need a few remaining HTLCs to actually trigger the bug, so
+	// we use 6 HTLCs.
+	// Finally, we do not run this test on Winblowz because it, somehow, in 2025, does not implement
+	// actual preemptive multitasking and thinks that cooperative multitasking somehow is
+	// acceptable in the 21st century, let alone a quarter of the way into it.
+	const MAX_THREAD_INIT_TIME: std::time::Duration = std::time::Duration::from_secs(1);
+
+	create_announced_chan_between_nodes_with_value(&nodes, 0, 1, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 0, 2, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 0, 3, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 0, 4, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 0, 5, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 0, 6, 100_000, 0);
+
+	create_announced_chan_between_nodes_with_value(&nodes, 1, 7, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 2, 7, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 3, 7, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 4, 7, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 5, 7, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 6, 7, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 7, 8, 1_000_000, 0);
+
+	let (mut route, payment_hash, payment_preimage, payment_secret) = get_route_and_payment_hash!(&nodes[0], nodes[8], 50_000_000);
+
+	send_along_route_with_secret(&nodes[0], route, &[&[&nodes[1], &nodes[7], &nodes[8]], &[&nodes[2], &nodes[7], &nodes[8]], &[&nodes[3], &nodes[7], &nodes[8]], &[&nodes[4], &nodes[7], &nodes[8]], &[&nodes[5], &nodes[7], &nodes[8]], &[&nodes[6], &nodes[7], &nodes[8]]], 50_000_000, payment_hash, payment_secret);
+
+	let (do_a_write, blocker) = std::sync::mpsc::sync_channel(0);
+	*nodes[8].chain_monitor.write_blocker.lock().unwrap() = Some(blocker);
+
+	// Until we have std::thread::scoped we have to unsafe { turn off the borrow checker }.
+	// We do this by casting a pointer to a `TestChannelManager` to a pointer to a
+	// `TestChannelManager` with different (in this case 'static) lifetime.
+	// This is even suggested in the second example at
+	// https://doc.rust-lang.org/std/mem/fn.transmute.html#examples
+	let claim_node: &'static TestChannelManager<'static, 'static> =
+		unsafe { std::mem::transmute(nodes[8].node as &TestChannelManager) };
+	let thrd = std::thread::spawn(move || {
+		// Initiate the claim in a background thread as it will immediately block waiting on the
+		// `write_blocker` we set above.
+		claim_node.claim_funds(payment_preimage);
+	});
+
+	// First unlock one monitor so that we have a pending
+	// `update_fulfill_htlc`/`commitment_signed` pair to pass to our counterparty.
+	do_a_write.send(()).unwrap();
+
+	// Then fetch the `update_fulfill_htlc`/`commitment_signed`. Note that the
+	// `get_and_clear_pending_msg_events` will immediately hang trying to take a peer lock which
+	// `claim_funds` is holding. Thus, we release a second write after a small sleep in the
+	// background to give `claim_funds` a chance to step forward, unblocking
+	// `get_and_clear_pending_msg_events`.
+	let do_a_write_background = do_a_write.clone();
+	let block_thrd2 = AtomicBool::new(true);
+	let block_thrd2_read: &'static AtomicBool = unsafe { std::mem::transmute(&block_thrd2) };
+	let thrd2 = std::thread::spawn(move || {
+		while block_thrd2_read.load(Ordering::Acquire) {
+			std::thread::yield_now();
+		}
+		std::thread::sleep(MAX_THREAD_INIT_TIME);
+		do_a_write_background.send(()).unwrap();
+		std::thread::sleep(MAX_THREAD_INIT_TIME);
+		do_a_write_background.send(()).unwrap();
+	});
+	block_thrd2.store(false, Ordering::Release);
+	let first_updates = get_htlc_update_msgs(&nodes[8], &nodes[7].node.get_our_node_id());
+	thrd2.join().unwrap();
+
+	// Disconnect node 7 from all its peers so it doesn't bother to fail the HTLCs back
+	nodes[7].node.peer_disconnected(nodes[1].node.get_our_node_id());
+	nodes[7].node.peer_disconnected(nodes[2].node.get_our_node_id());
+	nodes[7].node.peer_disconnected(nodes[3].node.get_our_node_id());
+	nodes[7].node.peer_disconnected(nodes[4].node.get_our_node_id());
+	nodes[7].node.peer_disconnected(nodes[5].node.get_our_node_id());
+	nodes[7].node.peer_disconnected(nodes[6].node.get_our_node_id());
+
+	nodes[7].node.handle_update_fulfill_htlc(node_8_id, &first_updates.update_fulfill_htlcs[0]);
+	check_added_monitors(&nodes[7], 1);
+	expect_payment_forwarded!(nodes[7], nodes[1], nodes[8], Some(1000), false, false);
+	nodes[7].node.handle_commitment_signed(node_8_id, &first_updates.commitment_signed);
+	check_added_monitors(&nodes[7], 1);
+	let (raa, cs) = get_revoke_commit_msgs(&nodes[7], &node_8_id);
+
+	// Now, handle the `revoke_and_ack` from node 7. Note that `claim_funds` is still blocked on
+	// our peer lock, so we have to release a write to let it process.
+	// After this call completes, the channel would previously have been locked up and unable to
+	// make further progress.
+	let do_a_write_background = do_a_write.clone();
+	let block_thrd3 = AtomicBool::new(true);
+	let block_thrd3_read: &'static AtomicBool = unsafe { std::mem::transmute(&block_thrd3) };
+	let thrd3 = std::thread::spawn(move || {
+		while block_thrd3_read.load(Ordering::Acquire) {
+			std::thread::yield_now();
+		}
+		std::thread::sleep(MAX_THREAD_INIT_TIME);
+		do_a_write_background.send(()).unwrap();
+		std::thread::sleep(MAX_THREAD_INIT_TIME);
+		do_a_write_background.send(()).unwrap();
+	});
+	block_thrd3.store(false, Ordering::Release);
+	nodes[8].node.handle_revoke_and_ack(node_7_id, &raa);
+	thrd3.join().unwrap();
+	assert!(!thrd.is_finished());
+
+	let thrd4 = std::thread::spawn(move || {
+		do_a_write.send(()).unwrap();
+		do_a_write.send(()).unwrap();
+	});
+
+	thrd4.join().unwrap();
+	thrd.join().unwrap();
+
+	expect_payment_claimed!(nodes[8], payment_hash, 50_000_000);
+
+	// At the end, we should have 7 ChannelMonitorUpdates - 6 for HTLC claims, and one for the
+	// above `revoke_and_ack`.
+	check_added_monitors(&nodes[8], 7);
+
+	// Now drive everything to the end, at least as far as node 7 is concerned...
+	*nodes[8].chain_monitor.write_blocker.lock().unwrap() = None;
+	nodes[8].node.handle_commitment_signed(node_7_id, &cs);
+	check_added_monitors(&nodes[8], 1);
+
+	let (updates, raa) = get_updates_and_revoke(&nodes[8], &nodes[7].node.get_our_node_id());
+
+	nodes[7].node.handle_update_fulfill_htlc(node_8_id, &updates.update_fulfill_htlcs[0]);
+	expect_payment_forwarded!(nodes[7], nodes[2], nodes[8], Some(1000), false, false);
+	nodes[7].node.handle_update_fulfill_htlc(node_8_id, &updates.update_fulfill_htlcs[1]);
+	expect_payment_forwarded!(nodes[7], nodes[3], nodes[8], Some(1000), false, false);
+	let mut next_source = 4;
+	if let Some(update) = updates.update_fulfill_htlcs.get(2) {
+		nodes[7].node.handle_update_fulfill_htlc(node_8_id, update);
+		expect_payment_forwarded!(nodes[7], nodes[4], nodes[8], Some(1000), false, false);
+		next_source += 1;
+	}
+
+	nodes[7].node.handle_commitment_signed(node_8_id, &updates.commitment_signed);
+	nodes[7].node.handle_revoke_and_ack(node_8_id, &raa);
+	if updates.update_fulfill_htlcs.get(2).is_some() {
+		check_added_monitors(&nodes[7], 5);
+	} else {
+		check_added_monitors(&nodes[7], 4);
+	}
+
+	let (raa, cs) = get_revoke_commit_msgs(&nodes[7], &node_8_id);
+
+	nodes[8].node.handle_revoke_and_ack(node_7_id, &raa);
+	nodes[8].node.handle_commitment_signed(node_7_id, &cs);
+	check_added_monitors(&nodes[8], 2);
+
+	let (updates, raa) = get_updates_and_revoke(&nodes[8], &node_7_id);
+
+	nodes[7].node.handle_update_fulfill_htlc(node_8_id, &updates.update_fulfill_htlcs[0]);
+	expect_payment_forwarded!(nodes[7], nodes[next_source], nodes[8], Some(1000), false, false);
+	next_source += 1;
+	nodes[7].node.handle_update_fulfill_htlc(node_8_id, &updates.update_fulfill_htlcs[1]);
+	expect_payment_forwarded!(nodes[7], nodes[next_source], nodes[8], Some(1000), false, false);
+	next_source += 1;
+	if let Some(update) = updates.update_fulfill_htlcs.get(2) {
+		nodes[7].node.handle_update_fulfill_htlc(node_8_id, update);
+		expect_payment_forwarded!(nodes[7], nodes[next_source], nodes[8], Some(1000), false, false);
+	}
+
+	nodes[7].node.handle_commitment_signed(node_8_id, &updates.commitment_signed);
+	nodes[7].node.handle_revoke_and_ack(node_8_id, &raa);
+	if updates.update_fulfill_htlcs.get(2).is_some() {
+		check_added_monitors(&nodes[7], 5);
+	} else {
+		check_added_monitors(&nodes[7], 4);
+	}
+
+	let (raa, cs) = get_revoke_commit_msgs(&nodes[7], &node_8_id);
+	nodes[8].node.handle_revoke_and_ack(node_7_id, &raa);
+	nodes[8].node.handle_commitment_signed(node_7_id, &cs);
+	check_added_monitors(&nodes[8], 2);
+
+	let raa = get_event_msg!(nodes[8], MessageSendEvent::SendRevokeAndACK, node_7_id);
+	nodes[7].node.handle_revoke_and_ack(node_8_id, &raa);
+	check_added_monitors(&nodes[7], 1);
+}
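The synchronization dance described in the test's comments above boils down to the following pattern, shown here as a self-contained sketch with placeholder work instead of LDK calls. Only the `write_blocker`-style rendezvous channel and the `AtomicBool` gate are carried over; the names, the durations, and the use of `Arc` (in place of the test's transmute-to-`'static` trick) are illustrative assumptions.

```rust
// Distilled gating pattern: a helper thread spins on an AtomicBool until the
// main thread is about to block, then sleeps briefly and releases the blocked
// operation via an mpsc send (no LDK calls, placeholder work only).

use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{mpsc, Arc};
use std::thread;
use std::time::Duration;

fn main() {
	const MAX_THREAD_INIT_TIME: Duration = Duration::from_millis(50);

	// Rendezvous channel standing in for the test's `write_blocker`: the
	// "claiming" side blocks on `recv()` until someone sends into it.
	let (do_a_write, blocker) = mpsc::sync_channel::<()>(0);

	// Stand-in for the background `claim_funds` thread, which blocks on its
	// first monitor write until it is released.
	let claimer = thread::spawn(move || {
		blocker.recv().unwrap();
		println!("first ChannelMonitorUpdate released");
	});

	// Gate for the helper thread, like `block_thrd2` in the test above.
	let gate = Arc::new(AtomicBool::new(true));
	let gate_read = Arc::clone(&gate);

	let helper = thread::spawn(move || {
		// Don't run until the main thread signals it is about to block.
		while gate_read.load(Ordering::Acquire) {
			thread::yield_now();
		}
		// Give the main thread time to actually park on its lock/recv first.
		thread::sleep(MAX_THREAD_INIT_TIME);
		do_a_write.send(()).unwrap();
	});

	// Main thread: open the gate, then perform the call that needs the nudge.
	gate.store(false, Ordering::Release);
	claimer.join().unwrap();
	helper.join().unwrap();
}
```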

lightning/src/ln/channelmanager.rs

Lines changed: 34 additions & 26 deletions
@@ -1136,7 +1136,7 @@ pub(crate) enum MonitorUpdateCompletionAction {
 		/// A pending MPP claim which hasn't yet completed.
 		///
 		/// Not written to disk.
-		pending_mpp_claim: Option<(PublicKey, ChannelId, u64, PendingMPPClaimPointer)>,
+		pending_mpp_claim: Option<(PublicKey, ChannelId, PendingMPPClaimPointer)>,
 	},
 	/// Indicates an [`events::Event`] should be surfaced to the user and possibly resume the
 	/// operation of another channel.
@@ -1238,10 +1238,16 @@ impl From<&MPPClaimHTLCSource> for HTLCClaimSource {
 	}
 }

+#[derive(Debug)]
+pub(crate) struct PendingMPPClaim {
+	channels_without_preimage: Vec<(PublicKey, OutPoint, ChannelId)>,
+	channels_with_preimage: Vec<(PublicKey, OutPoint, ChannelId)>,
+}
+
 #[derive(Clone, Debug, Hash, PartialEq, Eq)]
 /// The source of an HTLC which is being claimed as a part of an incoming payment. Each part is
-/// tracked in [`PendingMPPClaim`] as well as in [`ChannelMonitor`]s, so that it can be converted
-/// to an [`HTLCClaimSource`] for claim replays on startup.
+/// tracked in [`ChannelMonitor`]s, so that it can be converted to an [`HTLCClaimSource`] for claim
+/// replays on startup.
 struct MPPClaimHTLCSource {
 	counterparty_node_id: PublicKey,
 	funding_txo: OutPoint,
@@ -1256,12 +1262,6 @@ impl_writeable_tlv_based!(MPPClaimHTLCSource, {
 	(6, htlc_id, required),
 });

-#[derive(Debug)]
-pub(crate) struct PendingMPPClaim {
-	channels_without_preimage: Vec<MPPClaimHTLCSource>,
-	channels_with_preimage: Vec<MPPClaimHTLCSource>,
-}
-
 #[derive(Clone, Debug, PartialEq, Eq)]
 /// When we're claiming a(n MPP) payment, we want to store information about that payment in the
 /// [`ChannelMonitor`] so that we can replay the claim without any information from the
@@ -7184,8 +7184,15 @@ where
 			}
 		}).collect();
 		let pending_mpp_claim_ptr_opt = if sources.len() > 1 {
+			let mut channels_without_preimage = Vec::with_capacity(mpp_parts.len());
+			for part in mpp_parts.iter() {
+				let chan = (part.counterparty_node_id, part.funding_txo, part.channel_id);
+				if !channels_without_preimage.contains(&chan) {
+					channels_without_preimage.push(chan);
+				}
+			}
 			Some(Arc::new(Mutex::new(PendingMPPClaim {
-				channels_without_preimage: mpp_parts.clone(),
+				channels_without_preimage,
 				channels_with_preimage: Vec::new(),
 			})))
 		} else {
@@ -7196,7 +7203,7 @@
 			let this_mpp_claim = pending_mpp_claim_ptr_opt.as_ref().and_then(|pending_mpp_claim|
 				if let Some(cp_id) = htlc.prev_hop.counterparty_node_id {
 					let claim_ptr = PendingMPPClaimPointer(Arc::clone(pending_mpp_claim));
-					Some((cp_id, htlc.prev_hop.channel_id, htlc.prev_hop.htlc_id, claim_ptr))
+					Some((cp_id, htlc.prev_hop.channel_id, claim_ptr))
 				} else {
 					None
 				}
@@ -7529,7 +7536,7 @@ This indicates a bug inside LDK. Please report this error at https://github.com/
 		for action in actions.into_iter() {
 			match action {
 				MonitorUpdateCompletionAction::PaymentClaimed { payment_hash, pending_mpp_claim } => {
-					if let Some((counterparty_node_id, chan_id, htlc_id, claim_ptr)) = pending_mpp_claim {
+					if let Some((counterparty_node_id, chan_id, claim_ptr)) = pending_mpp_claim {
 						let per_peer_state = self.per_peer_state.read().unwrap();
 						per_peer_state.get(&counterparty_node_id).map(|peer_state_mutex| {
 							let mut peer_state = peer_state_mutex.lock().unwrap();
@@ -7540,24 +7547,17 @@ This indicates a bug inside LDK. Please report this error at https://github.com/
 								if *pending_claim == claim_ptr {
 									let mut pending_claim_state_lock = pending_claim.0.lock().unwrap();
 									let pending_claim_state = &mut *pending_claim_state_lock;
-									pending_claim_state.channels_without_preimage.retain(|htlc_info| {
+									pending_claim_state.channels_without_preimage.retain(|(cp, op, cid)| {
 										let this_claim =
-											htlc_info.counterparty_node_id == counterparty_node_id
-												&& htlc_info.channel_id == chan_id
-												&& htlc_info.htlc_id == htlc_id;
+											*cp == counterparty_node_id && *cid == chan_id;
 										if this_claim {
-											pending_claim_state.channels_with_preimage.push(htlc_info.clone());
+											pending_claim_state.channels_with_preimage.push((*cp, *op, *cid));
 											false
 										} else { true }
 									});
 									if pending_claim_state.channels_without_preimage.is_empty() {
-										for htlc_info in pending_claim_state.channels_with_preimage.iter() {
-											let freed_chan = (
-												htlc_info.counterparty_node_id,
-												htlc_info.funding_txo,
-												htlc_info.channel_id,
-												blocker.clone()
-											);
+										for (cp, op, cid) in pending_claim_state.channels_with_preimage.iter() {
+											let freed_chan = (*cp, *op, *cid, blocker.clone());
 											freed_channels.push(freed_chan);
 										}
 									}
@@ -14737,8 +14737,16 @@ where
 				if payment_claim.mpp_parts.is_empty() {
 					return Err(DecodeError::InvalidValue);
 				}
+				let mut channels_without_preimage = payment_claim.mpp_parts.iter()
+					.map(|htlc_info| (htlc_info.counterparty_node_id, htlc_info.funding_txo, htlc_info.channel_id))
+					.collect::<Vec<_>>();
+				// If we have multiple MPP parts which were received over the same channel,
+				// we only track it once as once we get a preimage durably in the
+				// `ChannelMonitor` it will be used for all HTLCs with a matching hash.
+				channels_without_preimage.sort_unstable();
+				channels_without_preimage.dedup();
 				let pending_claims = PendingMPPClaim {
-					channels_without_preimage: payment_claim.mpp_parts.clone(),
+					channels_without_preimage,
 					channels_with_preimage: Vec::new(),
 				};
 				let pending_claim_ptr_opt = Some(Arc::new(Mutex::new(pending_claims)));
@@ -14771,7 +14779,7 @@ where

 				for part in payment_claim.mpp_parts.iter() {
 					let pending_mpp_claim = pending_claim_ptr_opt.as_ref().map(|ptr| (
-						part.counterparty_node_id, part.channel_id, part.htlc_id,
+						part.counterparty_node_id, part.channel_id,
 						PendingMPPClaimPointer(Arc::clone(&ptr))
 					));
 					let pending_claim_ptr = pending_claim_ptr_opt.as_ref().map(|ptr|

lightning/src/ln/functional_test_utils.rs

Lines changed: 20 additions & 0 deletions
@@ -779,6 +779,26 @@ pub fn get_revoke_commit_msgs<CM: AChannelManager, H: NodeHolder<CM=CM>>(node: &
 	})
 }

+/// Gets an `UpdateHTLCs` and `revoke_and_ack` (i.e. after we get a responding `commitment_signed`
+/// while we have updates in the holding cell).
+pub fn get_updates_and_revoke<CM: AChannelManager, H: NodeHolder<CM=CM>>(node: &H, recipient: &PublicKey) -> (msgs::CommitmentUpdate, msgs::RevokeAndACK) {
+	let events = node.node().get_and_clear_pending_msg_events();
+	assert_eq!(events.len(), 2);
+	(match events[0] {
+		MessageSendEvent::UpdateHTLCs { ref node_id, ref updates } => {
+			assert_eq!(node_id, recipient);
+			(*updates).clone()
+		},
+		_ => panic!("Unexpected event"),
+	}, match events[1] {
+		MessageSendEvent::SendRevokeAndACK { ref node_id, ref msg } => {
+			assert_eq!(node_id, recipient);
+			(*msg).clone()
+		},
+		_ => panic!("Unexpected event"),
+	})
+}
+
 #[macro_export]
 /// Gets an RAA and CS which were sent in response to a commitment update
 ///
