
perf: don't derive node_id from signature unless required #278


Merged
1 commit merged into sigp:master on Feb 12, 2025

Conversation

@KolbyML (Contributor) commented on Feb 12, 2025

Description

Every time we send a talk_request, node_address() is called, which calls node_id():

    pub fn node_id(&self) -> NodeId {
        self.public_key.clone().into()
    }

Hence every time we send a TalkRequest we re-derive the node_id via self.public_key.clone().into(), which is computationally expensive, especially as we are sending metric tons of talk requests.

If the Enr is already present we don't need to re-derive the node_id, so in that case just reuse the node_id it already carries.
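
A minimal sketch of the idea, assuming the contact type holds an optional Enr alongside the public key (the Contact struct and its field names below are illustrative stand-ins, not the actual discv5 types):

    // Sketch only: assumes the enr crate with its k256 + ed25519 (combined key)
    // features enabled; Contact and its fields are hypothetical names.
    use enr::{CombinedKey, CombinedPublicKey, Enr, NodeId};

    struct Contact {
        public_key: CombinedPublicKey,
        enr: Option<Enr<CombinedKey>>,
    }

    impl Contact {
        pub fn node_id(&self) -> NodeId {
            match &self.enr {
                // The Enr already carries a derived node_id, so reuse it.
                Some(enr) => enr.node_id(),
                // No Enr available: derive from the public key as before.
                None => self.public_key.clone().into(),
            }
        }
    }

With this shape, the public-key-to-node_id conversion is only paid on the path where no Enr is available.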

Notes & open questions

@AgeManning @jxs @ackintosh ping for review

@AgeManning (Member) commented:

This makes sense. Have you tested or measured any difference here?

I don't think we are computing any signatures.

The computation you're worried about is here: https://github.com/sigp/enr/blob/master/src/node_id.rs#L54

Which just hashes the uncompressed key. I guess to find the uncompressed key, we need to find the y point on the secp256k1 curve, which is probably what you are optimizing out.

Still curious if this was consuming a bit of resources?
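
For reference, the linked derivation keccak256-hashes the 64-byte uncompressed point (x || y); recovering the y coordinate from a compressed key is the curve work a cached node_id avoids. A rough sketch of that derivation using the k256 and sha3 crates directly, not the enr crate's actual code:

    // Sketch only: mirrors the discv5 node-id rule (keccak256 of the 64-byte
    // uncompressed secp256k1 key), not the enr crate's exact implementation.
    use k256::ecdsa::VerifyingKey;
    use sha3::{Digest, Keccak256};

    fn node_id_from_key(key: &VerifyingKey) -> [u8; 32] {
        // Uncompressed SEC1 encoding is 65 bytes: 0x04 || x || y. Producing it
        // from a compressed key means recovering y on the curve.
        let point = key.to_encoded_point(false);
        // Hash x || y, skipping the 0x04 prefix byte.
        let digest = Keccak256::digest(&point.as_bytes()[1..]);
        let mut id = [0u8; 32];
        id.copy_from_slice(digest.as_slice());
        id
    }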

@KolbyML (Contributor, Author) commented on Feb 12, 2025

> This makes sense. Have you tested or measured any difference here?
>
> I don't think we are computing any signatures.
>
> The computation you're worried about is here: https://github.com/sigp/enr/blob/master/src/node_id.rs#L54
>
> Which just hashes the uncompressed key. I guess to find the uncompressed key, we need to find the y point on the secp256k1 curve, which is probably what you are optimizing out.
>
> Still curious if this was consuming a bit of resources?

It was taking 7% of all my samples when I was benchmarking; after this change it no longer took a noticeable share of samples. I will do more thorough before-and-after testing in a bit and post the results. From my initial runs it looks like it made a small improvement, but I will need to do more runs without profiling to get a more conclusive answer.

@AgeManning merged commit ac91ad4 into sigp:master on Feb 12, 2025
6 checks passed