Replies: 1 comment
-
Did a bit more research on the topic, as I was curious why no one ever proposed this before.
-
Currently, working with eth_getLogs is painful in multiple ways. It was already painful four years ago when I first entered the space, and not much has improved since then.
With faster chains, the block-range cap of e.g. Alchemy (2,000 blocks) makes it essentially impossible to use eth_getLogs to index a contract in a reasonable number of calls.
At the same time, on RPCs that limit responses by the number of events, bigger blocks could make indexing all events impossible.
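To make the pain concrete, here is a minimal sketch of today's workaround: walking a contract's history in fixed-size block windows because providers cap the per-call range. The `fetch_logs` callable is a hypothetical stand-in for a real JSON-RPC transport, not an actual library API:

```python
MAX_RANGE = 2_000  # example per-call block-range cap, as with Alchemy

def block_windows(from_block: int, to_block: int, max_range: int = MAX_RANGE):
    """Yield (start, end) inclusive windows covering [from_block, to_block]."""
    start = from_block
    while start <= to_block:
        end = min(start + max_range - 1, to_block)
        yield (start, end)
        start = end + 1

def index_contract(fetch_logs, address: str, from_block: int, to_block: int):
    """Collect all logs for `address` by issuing one eth_getLogs-style call
    per window. On a fast chain with small windows this means thousands of
    calls just to cover a single contract's history."""
    logs = []
    for start, end in block_windows(from_block, to_block):
        logs.extend(fetch_logs(address=address, fromBlock=start, toBlock=end))
    return logs
```

Note this loop still breaks on providers that also cap the number of returned events per call, which is exactly the second problem above.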
The big providers like Alchemy and QuickNode now offer custom products (I think both call them "Streams") to index events and handle reorgs, but I feel there should be an appropriate method at the node level that isn't tied to vendor lock-in.
Therefore, shouldn't it be possible to provide a paginated eth_getLogs that is not limited by an arbitrary block range?
I'm not suggesting an exact API here, as I have far too little knowledge on the topic, but I guess some solution based on block hashes should work:
- Instead of providing a `fromBlock` number, I'd provide the last (exclusive) block hash. If there was a reorg, it should be easily detectable on the node, and I could receive the reorged events with `removed: true`.
- If I don't provide a `toBlock`, it returns as many logs as possible, and as part of the response I get a block hash to continue with on the next call.

Sorry if this is the wrong place for starting this discussion, but as reth was the innovator behind pushing ethereum/execution-apis#639 forward, I guess it's a good idea to start the discussion here.