perf: remove indirection overhead caused by iface usage #326
Conversation
@seqbenchbot up main bulk
Codecov Report
✅ All modified and coverable lines are covered by tests.

@@            Coverage Diff             @@
##              main      #326     +/-  ##
==========================================
  Coverage    71.61%    71.61%
==========================================
  Files          204       204
  Lines        14767     14760      -7
==========================================
- Hits         10575     10570      -5
+ Misses        3439      3437      -2
  Partials       753       753

☔ View full report in Codecov by Sentry.
🔴 Performance Degradation: some benchmarks have degraded compared to the previous run.
Force-pushed from 1e30b30 to 30c396f
@seqbenchbot down f5bca65c

Nice, @dkharms! Your request was successfully served. Have a great time!
@seqbenchbot up main bulk

🔴 Performance Degradation: some benchmarks have degraded compared to the previous run.
@seqbenchbot down 613dd2bc

Nice, @dkharms! Your request was successfully served. Have a great time!
@seqbenchbot up 0-generic-sorting bulk

@seqbenchbot down 24c3c0fa

Nice, @dkharms! Your request was successfully served. Have a great time!
@seqbenchbot up main bulk

@seqbenchbot down ce899c4b

Nice, @dkharms! Your request was successfully served. Have a great time!
@seqbenchbot up main mixed

@seqbenchbot down d9d26287

Nice, @dkharms! Your request was successfully served. Have a great time!
Force-pushed from 51f4e79 to 44c6b1b
Description
Return of the prodigal son (we lost this change after a tough battle resolving pull request conflicts).
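The diff itself is not visible in this conversation, so for context here is a minimal sketch of the kind of change the title describes: dropping an interface-typed value from a hot path so the call becomes static and inlinable instead of going through dynamic dispatch. The `Comparator`/`lidComparator` names below are hypothetical, not the actual identifiers from this PR.

```go
package lidsort

// Hypothetical illustration only; the real types changed in this PR differ.

// Before: the hot path held the comparator behind an interface, so every
// call went through the itab (dynamic dispatch) and could not be inlined.
type Comparator interface {
	Less(a, b uint64) bool
}

type lidComparator struct{}

func (lidComparator) Less(a, b uint64) bool { return a < b }

// countOutOfOrderIface pays for an indirect call on every element.
func countOutOfOrderIface(c Comparator, lids []uint64) int {
	n := 0
	for i := 1; i < len(lids); i++ {
		if c.Less(lids[i], lids[i-1]) {
			n++
		}
	}
	return n
}

// countOutOfOrderConcrete uses the concrete type directly: the call is
// static, inlinable, and the per-element indirection disappears.
func countOutOfOrderConcrete(c lidComparator, lids []uint64) int {
	n := 0
	for i := 1; i < len(lids); i++ {
		if c.Less(lids[i], lids[i-1]) {
			n++
		}
	}
	return n
}
```

A change like this is usually a small but broad win: every call site gets slightly cheaper, so the effect tends to show up as aggregate CPU savings rather than as a visible latency change on any single request.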
I got the following results:

- In the mixed scenario I observe a decrease in average CPU usage from 4.44 CPU to 4.30 CPU (about 3%) (grafana);
- In the bulk scenario I observe a decrease in average CPU usage from 4.48 CPU to 4.33 CPU (about 3%) (grafana).

Although this change does not have a visible impact on bulk latency (in fact, the bulk latency in the comparison branch is lower by about 1 ms on average), that's fine: LIDs are sorted in the background.

If you have used LLM/AI assistance, please provide the model name and full prompt:
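To see the effect in isolation (outside the end-to-end scenarios above), a hypothetical micro-benchmark against the sketch from the description could look like the following. Absolute numbers will vary, and in trivial cases the Go compiler may devirtualize the interface call on its own.

```go
package lidsort

import (
	"math/rand"
	"testing"
)

// Random input so the branch in Less is not perfectly predictable.
var lids = func() []uint64 {
	s := make([]uint64, 4096)
	for i := range s {
		s[i] = rand.Uint64()
	}
	return s
}()

// sink keeps the results observable so the calls are not optimized away.
var sink int

func BenchmarkCountOutOfOrderIface(b *testing.B) {
	for i := 0; i < b.N; i++ {
		sink = countOutOfOrderIface(lidComparator{}, lids)
	}
}

func BenchmarkCountOutOfOrderConcrete(b *testing.B) {
	for i := 0; i < b.N; i++ {
		sink = countOutOfOrderConcrete(lidComparator{}, lids)
	}
}
```

Run with `go test -bench=.`; the gap between the two benchmarks is roughly the per-call cost of the interface indirection.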