perf: decode logs #4228
base: main
Conversation
🦋 Changeset detected. Latest commit: 68aaf3d. The changes in this PR will be included in the next version bump. This PR includes changesets to release 1 package.
I would be happy to merge this in if it makes things faster.
For us, at least in local development, it seems to speed things up quite a bit.
Wondering, though, whether there are other things one could do.
Noble has a new major release (https://github.com/paulmillr/noble-curves/releases); not sure if that helps much.
There is also a pending PR that might improve things: paulmillr/noble-hashes#126
We have an application that indexes a huge amount of logs, and when benchmarking where time is spent, I noticed that decoding alone takes roughly 10% of the time. So I started looking into how to improve performance here.
Running against this version locally seems to improve performance by roughly 25% for us.
There are probably still more things to optimize.
Opening this more for discussion than to necessarily merge.
Our use case essentially is: decoding a huge amount of logs with a limited set of abiEvents (perhaps 20). As I understand the bench results, it's around 3x as fast?
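For context, a minimal sketch of that use case and a common optimization for it: instead of trying every ABI event against every log, precompute a map from the event selector (`topics[0]`) to its event definition, so each log costs a single lookup. This is not viem's actual implementation or API; the types and placeholder selectors below are hypothetical, for illustration only.

```typescript
// Hypothetical, simplified shapes (not viem's real types).
type AbiEvent = { name: string; selector: string };
type Log = { topics: string[]; data: string };

// Build the selector -> event index once, up front.
function buildSelectorIndex(events: AbiEvent[]): Map<string, AbiEvent> {
  const index = new Map<string, AbiEvent>();
  for (const ev of events) index.set(ev.selector, ev);
  return index;
}

// Match each log against the index: O(1) per log instead of O(#events).
function matchLogs(logs: Log[], index: Map<string, AbiEvent>): string[] {
  const names: string[] = [];
  for (const log of logs) {
    const ev = index.get(log.topics[0]);
    if (ev) names.push(ev.name); // unknown events are skipped
  }
  return names;
}

// Placeholder selectors, not real keccak-256 event hashes.
const events: AbiEvent[] = [
  { name: "Transfer", selector: "0xaaaa" },
  { name: "Approval", selector: "0xbbbb" },
];
const logs: Log[] = [
  { topics: ["0xaaaa"], data: "0x" },
  { topics: ["0xcccc"], data: "0x" }, // no matching ABI event
  { topics: ["0xbbbb"], data: "0x" },
];
console.log(matchLogs(logs, buildSelectorIndex(events))); // → [ "Transfer", "Approval" ]
```

With a fixed set of ~20 events and millions of logs, moving the per-log work from a scan over the ABI to a map lookup is the kind of change that can plausibly account for gains like those reported here.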
In our real-world example (which does more things than just decoding), these are the results:
Before:
After: