Replies: 7 comments 3 replies
-
Thanks! I can't get to my analysis until tomorrow, but for now I’ve tossed it into the arena 😁
-
I tried to run it here and we got the same result. In our debug log, we have something like:
Interesting that it logs the removal of the items.
-
TSJam as well (which is one of the few whose fuzzer target is not currently working). My post root and the debug output from my code:
-
Spent a bit more time on this. The two keys that were expected to be removed (0x007600000014004dbf99722c645a8d721815ae4fbe6b108d51b71ea717f666 and 0x007c004d009b0018685645d52c326005ea488c8456e725ff64e6a1759964d8) were actually removed in the first accumulation, but get added back in the second. The main question is whether the second accumulation should have occurred at all.
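For what it's worth, a minimal sketch of the effect being described, assuming the second accumulation simply replays writes for the same service (this is not any team's actual code, and the key names are shortened stand-ins for the two keys above):

```rust
use std::collections::BTreeMap;

fn main() {
    // Hypothetical storage of service 0 before accumulation.
    let mut storage: BTreeMap<&str, &str> = BTreeMap::new();
    storage.insert("0x0076..66", "a");
    storage.insert("0x007c..d8", "b");

    // First accumulation: the work item removes both keys.
    storage.remove("0x0076..66");
    storage.remove("0x007c..d8");

    // The open question: should a second accumulation of the same service run?
    // If it does, and it replays the same writes, the keys come back and the
    // post-state roots diverge between implementations.
    let run_second_accumulation = true;
    if run_second_accumulation {
        storage.insert("0x0076..66", "a");
        storage.insert("0x007c..d8", "b");
    }

    println!("post-state keys: {:?}", storage.keys().collect::<Vec<_>>());
}
```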
-
Wow, very interesting fuzz -- there are 3 work reports, all from service 0, the first 2 of which hit the G_T threshold exactly. We have to get parallel service accumulation right here -- we cannot accumulate service 0 twice! As far as I can tell, there is no aspect of 12.17 or 12.18 that models the above constraint, so everyone failed to incorporate it except for Turbojam -- great fuzz! Our new understanding, after seeing that we did not incorporate the above constraint, is:
We are able to match the state roots of Polkajam + Turbojam by accommodating the constraint. However, this constraint leaves the third work report (of service 0) hanging -- so sad! Could there be a way to put it back in the accumulation queue?
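To make that reading concrete, a rough sketch with made-up types and gas numbers (not the Gray Paper's actual 12.17/12.18 formulation): reports are taken in order up to the gas threshold, each service gets at most one accumulation invocation per block, and the report that does not fit is left over unless something requeues it.

```rust
use std::collections::BTreeMap;

#[derive(Debug, Clone)]
struct WorkReport {
    service: u32,
    gas: u64,
}

fn main() {
    // Three reports, all from service 0; the first two hit the threshold exactly.
    let queue = vec![
        WorkReport { service: 0, gas: 5_000 },
        WorkReport { service: 0, gas: 5_000 },
        WorkReport { service: 0, gas: 1_000 },
    ];
    let gas_threshold: u64 = 10_000; // stand-in for the G_T value mentioned above

    // Take reports in order until the gas threshold would be exceeded.
    let mut gas_used = 0u64;
    let mut selected = Vec::new();
    let mut leftover = Vec::new();
    for report in queue {
        if gas_used + report.gas <= gas_threshold {
            gas_used += report.gas;
            selected.push(report);
        } else {
            leftover.push(report);
        }
    }

    // One accumulation invocation per service per block: service 0 is
    // accumulated exactly once, covering both selected reports.
    let mut per_service: BTreeMap<u32, Vec<WorkReport>> = BTreeMap::new();
    for report in selected {
        per_service.entry(report.service).or_default().push(report);
    }

    println!("accumulated this block: {:?}", per_service);
    println!("left hanging (would need a second accumulation): {:?}", leftover);
}
```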
-
There was a bug on our side. Fixed. Thank you!
-
I'd like to draw your attention to the trace for version 0.7.0 published in this PR:
#36
It looks like almost all implementations (except those not yet working with the fuzzer) reach the same state root. However, I can only match my post-root with TurboJam.
There are several items in the state that do not match, but my attention goes first to one evident and strange diff.
All the aforementioned targets keep some keys in the service storage (0x007600000014004dbf99722c645a8d721815ae4fbe6b108d51b71ea717f666 and 0x007c004d009b0018685645d52c326005ea488c8456e725ff64e6a1759964d8) which, as far as I can tell, should be removed.
I haven’t gone too deep into the analysis yet, but perhaps someone has an easy explanation or can demonstrate that the Polkajam fuzzer is at fault.
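For anyone wanting to reproduce the comparison, the diff above boils down to a set difference over two post-state dumps. A small illustrative sketch (not part of the fuzzer or any target, with made-up values and shortened key names):

```rust
use std::collections::BTreeMap;

type State = BTreeMap<String, Vec<u8>>;

/// Keys present in exactly one of the two post-states.
fn diff_keys<'a>(ours: &'a State, theirs: &'a State) -> (Vec<&'a String>, Vec<&'a String>) {
    let only_ours = ours.keys().filter(|k| !theirs.contains_key(*k)).collect();
    let only_theirs = theirs.keys().filter(|k| !ours.contains_key(*k)).collect();
    (only_ours, only_theirs)
}

fn main() {
    let mut ours = State::new();
    let mut theirs = State::new();
    ours.insert("0x00aa".into(), vec![1]);
    theirs.insert("0x00aa".into(), vec![1]);
    // Keys the other targets keep but that (we believe) should have been removed;
    // shortened stand-ins for the two keys quoted above.
    theirs.insert("0x0076..66".into(), vec![2]);
    theirs.insert("0x007c..d8".into(), vec![3]);

    let (only_ours, only_theirs) = diff_keys(&ours, &theirs);
    println!("keys only in our post-state: {:?}", only_ours);
    println!("keys only in the other post-states: {:?}", only_theirs);
}
```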