Conversation


@tnull commented Oct 13, 2025

Closes #441

We use LDK's MonitorUpdatingPersister to persist (and read) differential updates, instead of always repersisting the full monitors.

This is mostly a simple rebase of #456 (whose original author moved on) on top of current main.
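For context, the scheme can be sketched as follows. This is a self-contained model of differential monitor persistence, not LDK's actual `MonitorUpdatingPersister` API; the function names and the exact snapshot cadence (a full write every `max_pending_updates` updates) are illustrative assumptions.

```rust
/// Illustrative model of differential persistence: instead of rewriting the
/// full monitor on every channel update, write small differential updates and
/// only consolidate into a full snapshot every `max_pending_updates` writes.
#[derive(Debug, PartialEq)]
enum PersistAction {
    /// Serialize and write the complete monitor.
    FullMonitor,
    /// Write only the differential update.
    UpdateOnly,
}

/// Decide what to persist for the update with the given sequential id.
/// Assumption: ids start at 1 and a snapshot is taken every
/// `max_pending_updates` updates.
fn action_for_update(update_id: u64, max_pending_updates: u64) -> PersistAction {
    if update_id % max_pending_updates == 0 {
        PersistAction::FullMonitor
    } else {
        PersistAction::UpdateOnly
    }
}

/// On read, the differential updates persisted since the last snapshot must
/// be replayed on top of it; their number is bounded by
/// `max_pending_updates - 1`.
fn updates_to_replay(last_update_id: u64, max_pending_updates: u64) -> u64 {
    last_update_id % max_pending_updates
}

fn main() {
    let n = 7;
    for id in 1..=2 * n {
        println!("update {id}: {:?}", action_for_update(id, n));
    }
    // Worst-case replay on read stays below `max_pending_updates`.
    assert!((1..=100u64).all(|id| updates_to_replay(id, n) < n));
}
```

The read side then reconstructs each monitor by loading the latest snapshot and applying the pending updates in order.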

@tnull requested a review from joostjager, October 13, 2025 13:17

ldk-reviews-bot commented Oct 13, 2025

👋 Thanks for assigning @joostjager as a reviewer!
I'll wait for their review and will help manage the review process.
Once they submit their review, I'll check if a second reviewer would be helpful.

@tnull added this to the 0.7 milestone, Oct 13, 2025
pub(crate) fn do_test_store<K: KVStoreSync + Sync>(store_0: &K, store_1: &K) {
// This value is used later to limit how many iterations we perform.
let persister_0_max_pending_updates = 7;
// Intentionally set this to a smaller value to test a different alignment.
Contributor:

I've seen this in rust-lightning too, and there I also didn't understand what alignment is being tested, given that the nodes are independent?

Author:

Yeah, you'd have to ask the OP regarding the comment, but it's also of course not wrong to set a different value here.

Contributor:

It isn't wrong, but I thought it might be worth knowing a bit more about it now that you're taking over.
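One way to read the "alignment" in question: with a fixed number of monitor updates per payment, successive payments finish at varying offsets relative to the snapshot interval, so the test hits both the just-consolidated state and the many-pending-updates state. A sketch under assumed figures (2 updates per payment, a snapshot every 7 updates; both numbers are illustrative, not taken from the implementation):

```rust
/// Offset of the update counter within the snapshot interval after each of
/// the first `payments` payments, assuming `updates_per_payment` monitor
/// updates per payment and a full snapshot every `interval` updates.
fn offsets_after_payments(payments: u64, updates_per_payment: u64, interval: u64) -> Vec<u64> {
    (1..=payments).map(|i| (i * updates_per_payment) % interval).collect()
}

fn main() {
    // 2 * max_pending_updates payments, mirroring the test loop below.
    let mut offsets = offsets_after_payments(14, 2, 7);
    offsets.sort();
    offsets.dedup();
    // Because 2 and 7 are coprime, every offset 0..7 is eventually hit.
    println!("{offsets:?}");
}
```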

// Send a few more payments to try all the alignments of max pending updates with
// updates for a payment sent and received.
let mut sender = 0;
for i in 3..=persister_0_max_pending_updates * 2 {
Contributor:

This also seems a bit like overkill at the ldk-node level?

Author:

Hmm, a bit more test coverage doesn't hurt? FWIW, this could be turned into a proptest to make it less deterministic, but apart from that there isn't much harm?

Contributor:

It doesn't hurt in terms of coverage, but it is probably duplicative, and code carries cost.

assert_eq!(node_txn.len(), 1);
let txn = vec![node_txn[0].clone(), node_txn[0].clone()];
let dummy_block = create_dummy_block(nodes[0].best_block_hash(), 42, txn);
connect_block(&nodes[1], &dummy_block);
Contributor:

Some independent refactoring here?

Author:

Yeah, doesn't hurt either though.

Contributor:

I meant to say that it's better to isolate it in a separate commit.

));

// Read ChannelMonitor state from store
let channel_monitors = match persister.read_all_channel_monitors_with_updates() {
Contributor:

Why the code move?

Author:

You'd have to ask the OP for the original reason. I left it in because grouping the channel monitor IO with the setup of the persister and ChainMonitor made sense to me, though there are probably other ways to think about it. Let me know if you'd prefer to revert the change.

Contributor:

It's ok. The origin of the comment was diff optimization.

const VSS_HARDENED_CHILD_INDEX: u32 = 877;
const VSS_LNURL_AUTH_HARDENED_CHILD_INDEX: u32 = 138;
const LSPS_HARDENED_CHILD_INDEX: u32 = 577;
const PERSISTER_MAX_PENDING_UPDATES: u64 = 100;
Contributor:

Is there science to the number 100?

Author:

No, just handwaving currently. There has been some benchmarking over at lightningdevkit/rust-lightning#3834, but it never got anywhere conclusive. Let me know if you have a better guess.

Contributor:

Does not seem unreasonable. Going higher is probably not gaining much. Maybe it could be a bit lower. But also fine as is.
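A back-of-the-envelope way to sanity-check the constant: larger values cut the write amplification of full-monitor snapshots, but the savings flatten out while the number of updates to read and replay at startup keeps growing. The sizes below (100 KiB full monitor, 1 KiB update) are made-up illustrative assumptions, not measurements.

```rust
// Assumed sizes, purely illustrative.
const FULL_MONITOR_BYTES: u64 = 100 * 1024;
const UPDATE_BYTES: u64 = 1024;

/// Average bytes written per channel update when a full snapshot is written
/// every `n` updates and a differential update is written otherwise.
fn amortized_write_bytes(n: u64) -> u64 {
    (FULL_MONITOR_BYTES + (n - 1) * UPDATE_BYTES) / n
}

fn main() {
    for n in [1u64, 10, 100, 1000] {
        println!("n = {:4}: ~{} bytes per update", n, amortized_write_bytes(n));
    }
}
```

Under these assumptions the amortized cost drops from ~100 KiB per update at n = 1 to ~2 KiB at n = 100, but only to ~1.1 KiB at n = 1000, while startup may have to replay up to n - 1 updates; that is one way to argue a value around 100 sits near the knee of the curve.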

@joostjager left a comment:

Replied. Nothing blocking.

We use LDK's `MonitorUpdatingPersister` to persist (and read)
differential updates, instead of always repersisting the full monitors.

tnull commented Oct 16, 2025

Squashed the fixup without further changes. Going ahead and landing this:

> git diff-tree -U2 c9f7a65e fc6a7ff1

@tnull merged commit 8368fdd into lightningdevkit:main, Oct 16, 2025
8 of 15 checks passed
Successfully merging this pull request may close these issues.

Switch to MonitorUpdatingPersister
