Fix memory leak in NodeCache instances (#2344)#2366
Fix memory leak in NodeCache instances (#2344)#2366kobie3717 wants to merge 1 commit intoWhiskeySockets:masterfrom
Conversation
- Add max size limits to all NodeCache instances in the codebase - msgRetryCache: limit to 10k entries (from unbounded) - callOfferCache: limit to 1k entries (from unbounded) - placeholderResendCache: limit to 5k entries (from unbounded) - identityAssertDebounce: limit to 1k entries (from unbounded) - userDevicesCache: limit to 5k entries (from unbounded) - peerSessionsCache: limit to 5k entries (from unbounded) - signal store cache: limit to 10k entries (from unbounded) These caches previously only had TTL cleanup but no size bounds, which could lead to unbounded memory growth in high-traffic scenarios. The size limits prevent memory leaks while maintaining performance. Fixes WhiskeySockets#2344
|
Thanks for opening this pull request and contributing to the project! The next step is for the maintainers to review your changes. If everything looks good, it will be approved and merged into the main branch. In the meantime, anyone in the community is encouraged to test this pull request and provide feedback. ✅ How to confirm it worksIf you’ve tested this PR, please comment below with: This helps us speed up the review and merge process. 📦 To test this PR locally:If you encounter any issues or have feedback, feel free to comment as well. |
|
That alone won't help identify the problem; I need a test before and after the pull request. |
|
Another situation in my code: I've already implemented this and it didn't solve the problem. 3-hour sessions increase heap size from 250MB to 400MB (forcing me to restart the application). It seems to be something related to receiving messages. |
|
@techwebsolucao Fair point — the NodeCache capping in this PR addresses one source of unbounded growth, but it's not the only one. If you're seeing 250MB → 400MB in 3 hours with capped caches, the leak is likely elsewhere. Based on my investigation, the most probable culprits in the message receiving path:
Quick diagnostic — can you take a heap snapshot at 250MB and another at 400MB, then compare? In Node: // Add to your code:
const v8 = require("v8");
const fs = require("fs");
setInterval(() => {
const usage = process.memoryUsage();
if (usage.heapUsed > 350 * 1024 * 1024) { // 350MB threshold
const snap = v8.writeHeapSnapshot();
console.log("Heap snapshot written to", snap);
}
}, 60000);That would tell us exactly what's accumulating. Happy to dig deeper if you share the snapshot analysis. |
Description
This PR fixes memory leaks in multiple NodeCache instances throughout the Baileys codebase by adding size limits to prevent unbounded growth.
Root Cause Analysis
The memory leak was caused by NodeCache instances that only had TTL (time-to-live) cleanup but no size bounds. In high-traffic scenarios or when TTL cleanup fails to run properly, these caches could grow indefinitely, leading to memory exhaustion.
Affected Caches
The following caches have been updated with appropriate size limits:
Solution
Added
maxproperty to NodeCache constructor options in:src/Socket/messages-recv.tssrc/Socket/messages-send.tssrc/Socket/chats.tssrc/Utils/auth-utils.tsImpact
Testing
The size limits are conservative enough to handle normal usage patterns while preventing memory leaks. The limits are based on typical usage patterns and provide sufficient headroom for high-traffic scenarios.
Closes #2344