Commit 313810e

Do not broadcast commitment txn on Permanent mon update failure
See doc updates for more info on the edge case this prevents, and there isn't really a strong reason why we would need to broadcast the latest state immediately. Specifically, in the case of HTLC claims (the most important reason to ensure we have state on chain if it cannot be persisted), we will still force-close if there are HTLCs which need claiming and are going to expire. Surprisingly, there were no tests which failed as a result of this change, but a new one has been added.
1 parent: 48d21ba

File tree: 4 files changed, +71 −53 lines

lightning/src/chain/channelmonitor.rs

Lines changed: 20 additions & 10 deletions
```diff
@@ -76,12 +76,14 @@ pub struct ChannelMonitorUpdate {
 	/// increasing and increase by one for each new update, with one exception specified below.
 	///
 	/// This sequence number is also used to track up to which points updates which returned
-	/// ChannelMonitorUpdateErr::TemporaryFailure have been applied to all copies of a given
+	/// [`ChannelMonitorUpdateErr::TemporaryFailure`] have been applied to all copies of a given
 	/// ChannelMonitor when ChannelManager::channel_monitor_updated is called.
 	///
 	/// The only instance where update_id values are not strictly increasing is the case where we
 	/// allow post-force-close updates with a special update ID of [`CLOSED_CHANNEL_UPDATE_ID`]. See
 	/// its docs for more details.
+	///
+	/// [`ChannelMonitorUpdateErr::TemporaryFailure`]: super::ChannelMonitorUpdateErr::TemporaryFailure
 	pub update_id: u64,
 }
```

```diff
@@ -1314,14 +1316,20 @@ impl<Signer: Sign> ChannelMonitor<Signer> {
 	}

 	/// Used by ChannelManager deserialization to broadcast the latest holder state if its copy of
-	/// the Channel was out-of-date. You may use it to get a broadcastable holder toxic tx in case of
-	/// fallen-behind, i.e when receiving a channel_reestablish with a proof that our counterparty side knows
-	/// a higher revocation secret than the holder commitment number we are aware of. Broadcasting these
-	/// transactions are UNSAFE, as they allow counterparty side to punish you. Nevertheless you may want to
-	/// broadcast them if counterparty don't close channel with his higher commitment transaction after a
-	/// substantial amount of time (a month or even a year) to get back funds. Best may be to contact
-	/// out-of-band the other node operator to coordinate with him if option is available to you.
-	/// In any-case, choice is up to the user.
+	/// the Channel was out-of-date.
+	///
+	/// You may also use this to broadcast the latest local commitment transaction, either because
+	/// a monitor update failed with [`ChannelMonitorUpdateErr::PermanentFailure`] or because we've
+	/// fallen behind (i.e we've received proof that our counterparty side knows a revocation
+	/// secret we gave them that they shouldn't know).
+	///
+	/// Broadcasting these transactions in the second case is UNSAFE, as they allow counterparty
+	/// side to punish you. Nevertheless you may want to broadcast them if counterparty doesn't
+	/// close channel with their commitment transaction after a substantial amount of time. Best
+	/// may be to contact the other node operator out-of-band to coordinate other options available
+	/// to you. In any-case, the choice is up to you.
+	///
+	/// [`ChannelMonitorUpdateErr::PermanentFailure`]: super::ChannelMonitorUpdateErr::PermanentFailure
 	pub fn get_latest_holder_commitment_txn<L: Deref>(&self, logger: &L) -> Vec<Transaction>
 	where L::Target: Logger {
 		self.inner.lock().unwrap().get_latest_holder_commitment_txn(logger)
```
```diff
@@ -2248,7 +2256,9 @@ impl<Signer: Sign> ChannelMonitorImpl<Signer> {
 			if *should_broadcast {
 				self.broadcast_latest_holder_commitment_txn(broadcaster, logger);
 			} else if !self.holder_tx_signed {
-				log_error!(logger, "You have a toxic holder commitment transaction avaible in channel monitor, read comment in ChannelMonitor::get_latest_holder_commitment_txn to be informed of manual action to take");
+				log_error!(logger, "WARNING: You have a potentially-unsafe holder commitment transaction available to broadcast");
+				log_error!(logger, " in channel monitor for channel {}!", log_bytes!(self.funding_info.0.to_channel_id()));
+				log_error!(logger, " Read the docs for ChannelMonitor::get_latest_holder_commitment_txn and take manual action!");
 			} else {
 				// If we generated a MonitorEvent::CommitmentTxConfirmed, the ChannelManager
 				// will still give us a ChannelForceClosed event with !should_broadcast, but we
```

lightning/src/chain/mod.rs

Lines changed: 45 additions & 39 deletions
```diff
@@ -194,61 +194,67 @@ pub enum ChannelMonitorUpdateErr {
 	/// our state failed, but is expected to succeed at some point in the future).
 	///
 	/// Such a failure will "freeze" a channel, preventing us from revoking old states or
-	/// submitting new commitment transactions to the counterparty. Once the update(s) that failed
-	/// have been successfully applied, a [`MonitorEvent::UpdateCompleted`] event should be returned
-	/// via [`Watch::release_pending_monitor_events`] which will then restore the channel to an
-	/// operational state.
+	/// submitting new commitment transactions to the counterparty. Once the update(s) which failed
+	/// have been successfully applied, a [`MonitorEvent::UpdateCompleted`] can be used to restore
+	/// the channel to an operational state.
 	///
-	/// Note that a given ChannelManager will *never* re-generate a given ChannelMonitorUpdate. If
-	/// you return a TemporaryFailure you must ensure that it is written to disk safely before
-	/// writing out the latest ChannelManager state.
+	/// Note that a given [`ChannelManager`] will *never* re-generate a [`ChannelMonitorUpdate`].
+	/// If you return this error you must ensure that it is written to disk safely before writing
+	/// the latest [`ChannelManager`] state, or you should return [`PermanentFailure`] instead.
 	///
-	/// Even when a channel has been "frozen" updates to the ChannelMonitor can continue to occur
-	/// (eg if an inbound HTLC which we forwarded was claimed upstream resulting in us attempting
-	/// to claim it on this channel) and those updates must be applied wherever they can be. At
-	/// least one such updated ChannelMonitor must be persisted otherwise PermanentFailure should
-	/// be returned to get things on-chain ASAP using only the in-memory copy. Obviously updates to
-	/// the channel which would invalidate previous ChannelMonitors are not made when a channel has
-	/// been "frozen".
+	/// Even when a channel has been "frozen", updates to the [`ChannelMonitor`] can continue to
+	/// occur (e.g. if an inbound HTLC which we forwarded was claimed upstream, resulting in us
+	/// attempting to claim it on this channel) and those updates must still be persisted.
 	///
-	/// Note that even if updates made after TemporaryFailure succeed you must still provide a
-	/// [`MonitorEvent::UpdateCompleted`] to ensure you have the latest monitor and re-enable
-	/// normal channel operation. Note that this is normally generated through a call to
-	/// [`ChainMonitor::channel_monitor_updated`].
-	///
-	/// Note that the update being processed here will not be replayed for you when you return a
-	/// [`MonitorEvent::UpdateCompleted`] event via [`Watch::release_pending_monitor_events`], so
-	/// you must store the update itself on your own local disk prior to returning a
-	/// TemporaryFailure. You may, of course, employ a journaling approach, storing only the
-	/// ChannelMonitorUpdate on disk without updating the monitor itself, replaying the journal at
-	/// reload-time.
+	/// No updates to the channel will be made which could invalidate other [`ChannelMonitor`]s
+	/// until a [`MonitorEvent::UpdateCompleted`] is provided, even if you return no error on a
+	/// later monitor update for the same channel.
 	///
 	/// For deployments where a copy of ChannelMonitors and other local state are backed up in a
 	/// remote location (with local copies persisted immediately), it is anticipated that all
 	/// updates will return TemporaryFailure until the remote copies could be updated.
 	///
-	/// [`ChainMonitor::channel_monitor_updated`]: chainmonitor::ChainMonitor::channel_monitor_updated
+	/// [`PermanentFailure`]: ChannelMonitorUpdateErr::PermanentFailure
+	/// [`ChannelManager`]: crate::ln::channelmanager::ChannelManager
 	TemporaryFailure,
-	/// Used to indicate no further channel monitor updates will be allowed (eg we've moved on to a
-	/// different watchtower and cannot update with all watchtowers that were previously informed
-	/// of this channel).
+	/// Used to indicate no further channel monitor updates will be allowed (likely a disk failure
+	/// or a remote copy of this [`ChannelMonitor`] is no longer reachable and thus not updatable).
+	///
+	/// When this is returned, [`ChannelManager`] will force-close the channel but *not* broadcast
+	/// our current commitment transaction. This avoids a dangerous case where a local disk failure
+	/// (e.g. the Linux-default remounting of the disk as read-only) causes [`PermanentFailure`]s
+	/// for all monitor updates. If we were to broadcast our latest commitment transaction and then
+	/// restart, we could end up reading a previous [`ChannelMonitor`] and [`ChannelManager`],
+	/// revoking our now-broadcasted state before seeing it confirm and losing all our funds.
 	///
-	/// At reception of this error, ChannelManager will force-close the channel and return at
-	/// least a final ChannelMonitorUpdate::ChannelForceClosed which must be delivered to at
-	/// least one ChannelMonitor copy. Revocation secret MUST NOT be released and offchain channel
-	/// update must be rejected.
+	/// Note that this is somewhat of a tradeoff - if the disk is really gone and we may have lost
+	/// the data permanently, we really should broadcast immediately. If the data can be recovered
+	/// with manual intervention, we'd rather close the channel, rejecting future updates to it,
+	/// and broadcast the latest state only if we have HTLCs to claim which are timing out (which
+	/// we do as long as blocks are connected).
 	///
-	/// This failure may also signal a failure to update the local persisted copy of one of
-	/// the channel monitor instance.
+	/// In order to broadcast the latest local commitment transaction, you'll need to call
+	/// [`ChannelMonitor::get_latest_holder_commitment_txn`] and broadcast the resulting
+	/// transactions once you've safely ensured no further channel updates can be generated by your
+	/// [`ChannelManager`].
 	///
-	/// Note that even when you fail a holder commitment transaction update, you must store the
-	/// update to ensure you can claim from it in case of a duplicate copy of this ChannelMonitor
-	/// broadcasts it (e.g distributed channel-monitor deployment)
+	/// Note that at least one final [`ChannelMonitorUpdate`] may still be provided, which must
+	/// still be processed by a running [`ChannelMonitor`]. This final update will mark the
+	/// [`ChannelMonitor`] as finalized, ensuring no further updates (e.g. revocation of the latest
+	/// commitment transaction) are allowed.
+	///
+	/// Note that even if you return a [`PermanentFailure`] due to unavailability of secondary
+	/// [`ChannelMonitor`] copies, you should still make an attempt to store the update where
+	/// possible to ensure you can claim HTLC outputs on the latest commitment transaction
+	/// broadcasted later.
 	///
 	/// In case of distributed watchtowers deployment, the new version must be written to disk, as
 	/// state may have been stored but rejected due to a block forcing a commitment broadcast. This
 	/// storage is used to claim outputs of rejected state confirmed onchain by another watchtower,
 	/// lagging behind on block processing.
+	///
+	/// [`PermanentFailure`]: ChannelMonitorUpdateErr::PermanentFailure
+	/// [`ChannelManager`]: crate::ln::channelmanager::ChannelManager
 	PermanentFailure,
 }
```

```diff
@@ -278,7 +284,7 @@ pub trait Watch<ChannelSigner: Sign> {
 	/// with any spends of outputs returned by [`get_outputs_to_watch`]. In practice, this means
 	/// calling [`block_connected`] and [`block_disconnected`] on the monitor.
 	///
-	/// Note: this interface MUST error with `ChannelMonitorUpdateErr::PermanentFailure` if
+	/// Note: this interface MUST error with [`ChannelMonitorUpdateErr::PermanentFailure`] if
 	/// the given `funding_txo` has previously been registered via `watch_channel`.
 	///
 	/// [`get_outputs_to_watch`]: channelmonitor::ChannelMonitor::get_outputs_to_watch
```

lightning/src/ln/chanmon_update_fail_tests.rs

Lines changed: 2 additions & 0 deletions
```diff
@@ -65,6 +65,8 @@ fn test_simple_monitor_permanent_update_fail() {
 		_ => panic!("Unexpected event"),
 	};

+	assert!(nodes[0].tx_broadcaster.txn_broadcasted.lock().unwrap().is_empty());
+
 	// TODO: Once we hit the chain with the failure transaction we should check that we get a
 	// PaymentPathFailed event
```

lightning/src/ln/channelmanager.rs

Lines changed: 4 additions & 4 deletions
```diff
@@ -1171,8 +1171,8 @@ pub enum PaymentSendFailure {
 	///
 	/// Any entries which contain Err(APIError::MonitorUpdateFailed) or Ok(()) MUST NOT be retried
 	/// as they will result in over-/re-payment. These HTLCs all either successfully sent (in the
-	/// case of Ok(())) or will send once channel_monitor_updated is called on the next-hop channel
-	/// with the latest update_id.
+	/// case of Ok(())) or will send once a [`MonitorEvent::UpdateCompleted`] is provided for the
+	/// next-hop channel with the latest update_id.
 	PartialFailure {
 		/// The errors themselves, in the same order as the route hops.
 		results: Vec<Result<(), APIError>>,
```
```diff
@@ -1345,7 +1345,7 @@ macro_rules! handle_monitor_err {
 				// given up the preimage yet, so might as well just wait until the payment is
 				// retried, avoiding the on-chain fees.
 				let res: Result<(), _> = Err(MsgHandleErrInternal::from_finish_shutdown("ChannelMonitor storage failure".to_owned(), *$chan_id, $chan.get_user_id(),
-					$chan.force_shutdown(true), $self.get_channel_update_for_broadcast(&$chan).ok() ));
+					$chan.force_shutdown(false), $self.get_channel_update_for_broadcast(&$chan).ok() ));
 				(res, true)
 			},
 			ChannelMonitorUpdateErr::TemporaryFailure => {
```
```diff
@@ -4492,7 +4492,7 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelMana
 				// We do not do a force-close here as that would generate a monitor update for
 				// a monitor that we didn't manage to store (and that we don't care about - we
 				// don't respond with the funding_signed so the channel can never go on chain).
-				let (_monitor_update, failed_htlcs) = chan.force_shutdown(true);
+				let (_monitor_update, failed_htlcs) = chan.force_shutdown(false);
 				assert!(failed_htlcs.is_empty());
 				return Err(MsgHandleErrInternal::send_err_msg_no_close("ChannelMonitor storage failure".to_owned(), funding_msg.channel_id));
 			},
```
