Description
I have benchmarked jsonpath.value(obj, "$.a.b.c") with 100K objects. It took > 40 seconds to compute, which looked way too slow for such a simple lookup. Looking at the code, it is obvious that the same values are being computed over and over again. For example, the parse, stringify, and normalize functions could all be cached (memoized) so their computed values are reused.
A quick prototype showed > 10 times performance gains. Instead of 40+ seconds for 100K objects, the benchmark took < 4 seconds.
The only dependency I introduced was the package underscore for _.memoize() function.
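To illustrate the idea, here is a minimal sketch of path-parse caching. The `memoize` helper stands in for underscore's `_.memoize()`, and `normalize`/`value` are simplified hypothetical versions of the package's internals, not the actual jsonpath implementation:

```javascript
// Simple single-argument memoizer, equivalent in spirit to _.memoize().
function memoize(fn) {
  const cache = Object.create(null);
  return function (key) {
    if (!(key in cache)) cache[key] = fn(key);
    return cache[key];
  };
}

// Hypothetical expensive normalization: "$.a.b.c" -> ["$", "a", "b", "c"].
function normalize(path) {
  return path.replace(/\$\.?/, "$;").split(/[.;]/).filter(Boolean);
}

const normalizeCached = memoize(normalize);

// Repeated lookups with the same path string now parse it only once.
function value(obj, path) {
  return normalizeCached(path)
    .slice(1) // drop the "$" root marker
    .reduce((node, key) => (node == null ? undefined : node[key]), obj);
}
```

Since benchmark loops typically reuse the same handful of path strings across all 100K objects, the parse cost drops from once per object to once per distinct path.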
I am going to submit a pull request with my changes and hope it will be approved quickly. One of my large customers is using this package and they could definitely use the speedup.