
Async::Job and DB pool management #11

@agalloch

Description

Hello @ioquatix and thank you for building this set of wonderful machinery! ✨

I've got a question about using Async::Job in a standard Rails app which does both IO waiting and state tracking in the DB. I've built a prototype async job processing server into an existing app, adding ActiveJob, async-job-adapter-active_job and async-job-processor-redis to the mix so ActiveJob takes care of enqueuing the jobs to Redis and a server script dequeues and executes them. I've run different kinds of simulated enqueue/process loads on that app.

What I've found is that under certain conditions the job executor's fibers can exhaust the DB connection pool. This is expected, since to my knowledge ActiveRecord is not built for async usage, so I started looking for a built-in way to apply backpressure. Other than Async::Idler's maximum load parameter, I haven't found one.

What is the recommended/built-in way to keep consumer fiber spawning under control? Of course, I could implement my own Async::Job::Processor::Redis::Server with some sort of semaphore to achieve that, but maintaining it seems expensive in the long run. Maybe I'm doing something wrong?

Cheers!


Test setup

Rails 7.2.2.2
Ruby 3.4.2
macOS Tahoe
Apple M3
