
Conversation

@Baliedge
Contributor

The LRU cache is based on a map. Each new unique key added expands the map's capacity, and Go maps do not shrink even after those keys are deleted. For scenarios with high-frequency adds and high key cardinality, this manifests over time as a memory leak.
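A minimal standalone sketch of the behavior described above (not Groupcache's actual code): after deleting every key, the map's expanded bucket memory stays allocated even once the values have been collected.

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	m := make(map[int][]byte)

	// Insert a large number of unique keys, forcing the map to grow.
	for i := 0; i < 1_000_000; i++ {
		m[i] = make([]byte, 16)
	}
	printHeap("after inserts")

	// Delete every key. The values become garbage, but the map's
	// internal buckets keep their expanded capacity.
	for i := 0; i < 1_000_000; i++ {
		delete(m, i)
	}
	runtime.GC()
	printHeap("after deletes + GC")

	runtime.KeepAlive(m)
}

func printHeap(label string) {
	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)
	fmt.Printf("%s: HeapAlloc = %d KiB\n", label, ms.HeapAlloc/1024)
}
```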

@thrawn01
Contributor

What if we auto prune after X number of evictions?

@Baliedge
Contributor Author

What if we auto prune after X number of evictions?

That's worth a try.
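
A rough sketch of what the suggested auto-prune could look like, assuming a map-backed cache; the names below (Cache, entry, pruneThreshold) are hypothetical, not Groupcache's actual identifiers.

```go
package lru

const pruneThreshold = 10_000 // rebuild the map after this many evictions

type Cache struct {
	entries   map[string]*entry
	evictions int
}

type entry struct {
	value interface{}
}

// evict removes a key and, once enough evictions have accumulated,
// copies the surviving entries into a fresh map so the old, oversized
// bucket array can be garbage collected.
func (c *Cache) evict(key string) {
	delete(c.entries, key)
	c.evictions++

	if c.evictions >= pruneThreshold {
		fresh := make(map[string]*entry, len(c.entries))
		for k, v := range c.entries {
			fresh[k] = v
		}
		c.entries = fresh
		c.evictions = 0
	}
}
```

Copying the surviving entries into a fresh map lets the old bucket array be collected, at the cost of a periodic O(n) rebuild.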

@Baliedge
Contributor Author

Abandoning as not necessary. I was not able to isolate the root cause reliably in tests. Moreover, there have been reports of a memory leak in Go, fixed in releases after 1.22, that may impact Groupcache peer communication.

After updating a test project from Go 1.22 to 1.24 and removing an unnecessary call to debug.SetGCPercent() (unrelated to this PR's efforts), the memory leak appears to be mitigated.

@Baliedge closed this on Apr 28, 2025