Conversation

@JasonWildMe
Collaborator

Claude 4.5 suggested PostgreSQL and DataNucleus optimizations. Conservatism requested.

@JasonWildMe
Collaborator Author

Codex CLI review of changes.

Low‑risk, conservative adjustments

  1. Connection scaling
  • If you have a pool (recommended: pgbouncer in transaction mode; see the sketch after this list), set max_connections to 150–200. Otherwise keep 300 but lower work_mem (next item) to contain RAM usage.
  2. Work memory
  • Drop work_mem to 16MB. This caps worst-case memory while still being generous for common sorts and aggregations. If you reduce max_connections, 32MB is fine; with 300 connections, keep 16MB.
  3. Checkpoint/WAL breathing room
  • Raise max_wal_size to 12–16GB (leave min_wal_size at 2GB). This reduces checkpoint frequency and I/O spikes during busy write periods. checkpoint_timeout=15min and checkpoint_completion_target=0.9 are fine.
  4. JIT for OLTP
  • If you see CPU overhead on many short queries, set jit = off (it can be toggled later). If you run heavier analytics inside PG, keep it on.
  5. Background writer
  • Optionally tone down bgwriter_lru_maxpages to 512 and bgwriter_lru_multiplier to 2.0 (its default) if you observe write bursts; the current values are aggressive but usually safe.
  6. Autovacuum thresholds
  • Current settings are good for large tables. If you notice autovacuum contention, consider autovacuum_vacuum_cost_limit = 800 instead of 1000; otherwise keep as-is.
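
For item 1, a minimal pgbouncer sketch of what transaction pooling could look like; the database name, ports, and pool sizes below are illustrative placeholders, not values from this PR:

```ini
; Minimal pgbouncer sketch (transaction pooling); names, ports, and sizes are placeholders
[databases]
; "wildbook" here is an illustrative database name
wildbook = host=127.0.0.1 port=5432 dbname=wildbook

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
; transaction-mode pooling, as recommended in item 1
pool_mode = transaction
; many lightweight client connections in front...
max_client_conn = 500
; ...funnelled into a small number of real PostgreSQL backends
default_pool_size = 40
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
```

With pooling in place, PostgreSQL's own max_connections can safely drop to the 150–200 range suggested above.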

Suggested concrete edits (if you want to change anything; consolidated in the sketch below)

  • max_connections = 200 (with pooling) or keep 300 if you must.
  • work_mem = 16MB (safer with 300 connections).
  • max_wal_size = 16GB
  • Optional: jit = off
  • Optional: bgwriter_lru_maxpages = 512, bgwriter_lru_multiplier = 2.0
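
For reference, the optional edits above consolidated into a postgresql.conf fragment — a sketch, not the file as committed; keep whichever lines match your pooling decision:

```conf
# Sketch of the optional edits above; pick the values that match your pooling setup
max_connections = 200                 # with pgbouncer in front; keep 300 only for direct connections
work_mem = 16MB                       # caps worst-case per-operation memory with many connections
min_wal_size = 2GB                    # unchanged
max_wal_size = 16GB                   # fewer checkpoints, smoother I/O
checkpoint_timeout = 15min            # unchanged
checkpoint_completion_target = 0.9    # unchanged
jit = off                             # optional: avoid JIT overhead on short OLTP queries
bgwriter_lru_maxpages = 512           # optional: less aggressive background writer
bgwriter_lru_multiplier = 2.0         # optional: the PostgreSQL default
```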

If you change nothing

  • The current file will run and is mostly sane for a D8s v5. The biggest potential issues are memory over-commit from
    high max_connections combined with 32MB work_mem (rough arithmetic below), and slightly tight WAL sizing that leads
    to more frequent checkpoints. Addressing those gives the most benefit with minimal risk.
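
For a rough sense of the over-commit risk, assuming one work_mem allocation per backend (complex queries can use several):

```text
300 connections x 32MB work_mem ≈ 9.6GB of potential sort/hash memory
200 connections x 16MB work_mem ≈ 3.2GB
```

On a D8s v5 (32GB RAM), the first figure leaves little headroom once shared_buffers and the OS page cache are accounted for.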

@codecov-commenter

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 8.59%. Comparing base (94c7247) to head (ed3f69c).

Additional details and impacted files
@@           Coverage Diff           @@
##              main   #1308   +/-   ##
=======================================
  Coverage     8.59%   8.59%           
  Complexity     941     941           
=======================================
  Files          623     623           
  Lines        68089   68089           
  Branches     12530   12588   +58     
=======================================
  Hits          5853    5853           
+ Misses       61635   61630    -5     
- Partials       601     606    +5     
Flag       Coverage Δ
backend    8.59% <ø> (ø)
frontend   8.59% <ø> (ø)

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

@JasonWildMe
Collaborator Author

wildbook_indexes.sql does NOT match the table names and required format for our JDO implementation; it will need fixing during code review before it is useful.
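
A quick way to reconcile the file with the schema the JDO layer actually created is to compare it against the live catalog; the 'public' schema below is an assumption, and no Wildbook table names are shown:

```sql
-- Tables DataNucleus actually created, to check the names used in wildbook_indexes.sql
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'public'
ORDER BY table_name;

-- Indexes that already exist, to avoid creating duplicates
SELECT tablename, indexname, indexdef
FROM pg_indexes
WHERE schemaname = 'public'
ORDER BY tablename, indexname;
```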
