mod_auth_openidc redis implementation specifics around connection pooling, concurrency, tweaking #1340
Replies: 2 comments 4 replies
-
this is limited by mod_auth_openidc's dependency on hiredis, which does not support connection pooling and is not multi-thread safe; every httpd process creates its own (singleton!) Redis connection, so having more processes would allow for more concurrency, while having more threads does not; your suggestion is the way to go, apart from commercial discussions about introducing pooling, e.g. using a wrapper like https://github.com/aclisp/hiredispool/
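To illustrate the constraint (this is not the module's actual code, just a minimal sketch with an example host/port), the pattern is effectively one blocking, non-thread-safe hiredis connection per process:

```c
/* Minimal sketch (not mod_auth_openidc's actual code) of one synchronous
 * hiredis connection per httpd process. Because redisCommand() blocks and
 * redisContext is not thread-safe, all worker threads in a process end up
 * funneling through this single connection. */
#include <stdio.h>
#include <stdlib.h>
#include <hiredis/hiredis.h>

static redisContext *redis_singleton = NULL;

/* Lazily create the per-process connection (host/port are placeholders). */
static redisContext *get_connection(void) {
    if (redis_singleton == NULL) {
        redis_singleton = redisConnect("127.0.0.1", 6379);
        if (redis_singleton == NULL || redis_singleton->err) {
            fprintf(stderr, "redis connect failed\n");
            exit(1);
        }
    }
    return redis_singleton;
}

int main(void) {
    redisContext *c = get_connection();

    /* Every cache set/get goes over the same blocking connection; a second
     * thread would have to wait on it, which is why more processes add
     * Redis concurrency but more threads do not. */
    redisReply *reply = redisCommand(c, "SET %s %s", "session:example", "value");
    if (reply) freeReplyObject(reply);

    reply = redisCommand(c, "GET %s", "session:example");
    if (reply && reply->type == REDIS_REPLY_STRING)
        printf("GET -> %s\n", reply->str);
    if (reply) freeReplyObject(reply);

    redisFree(redis_singleton);
    return 0;
}
```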
-
@zandbelt - thanks for the update. We actually pushed our fixes to production this morning and aren't seeing the performance increases under extreme load that we expected, so that fits! Any idea when 2.4.18.1 is coming out? Likely in the next few weeks? If that doesn't work, we may pursue the commercial offering, as connection pooling should be the "best" fix for sure! ~Jason Lang
-
We recently moved mod_auth_openidc from the file-based cache to the Redis backend cache. This has been largely successful, except on one of our busier sites where we see some bottlenecks.
After going up and down the network layer and Redis, we are pretty certain that it's not a network or Redis issue. Redis continues to return all queries (read and write) in <1ms throughout, and the network latency between our Apache server(s) and Redis is also very low (~1ms). No other metrics on the Redis side indicate any issues: low CPU, free memory, the network isn't saturated, etc. I've got plenty of other Redis DBs doing far more overall IOPS than this without issue as well. We can also re-disable the Redis cache and go back to file caching, and the issue goes away.
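For completeness, these are roughly the kinds of checks we ran on the Redis side (hostname is a placeholder):

```sh
# Round-trip latency from the Apache host to Redis (Ctrl-C to stop)
redis-cli -h redis.example.internal --latency

# Any commands that exceeded the slowlog threshold
redis-cli -h redis.example.internal SLOWLOG GET 10

# Connected/blocked clients and ops/sec
redis-cli -h redis.example.internal INFO clients
redis-cli -h redis.example.internal INFO stats
```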
I'm trying to better understand whether there are any tunables for connection pool size, concurrency, performance, etc.
If I'm using the event MPM with a ServerLimit of 8, does each apache/httpd process that spins up create its own individual connection pool for Redis? How many connections are in that pool, and is it tweakable or adjustable per process? I know that with some of our other busier apps that use Redis, we need to increase connection pools quite a bit for more concurrency. In this case, our Apache setup is purely a reverse proxy, and we can see the server processing ~100-120 requests/sec. When traffic per host goes to 150+ requests/sec, Apache "locks up" with all threads stuck in "Writing". Currently we run a low ServerLimit with lots of threads (600 or so total), so we end up with 600 threads stuck in Writing. This recovers on its own, catches up with processing in 30-60 seconds, and is back to normal, but we do drop some connections in the meantime because all threads are full.
If there aren't any direct tweakables for connection pooling that I can see, I'm wondering if I could hypothetically lower my thread counts and increase my server counts (assuming I have enough memory to handle all the extra httpd processes), or whether the architecture and implementation here mean I wouldn't in fact get better Redis performance overall. Is there anything else I should investigate or take a look at with regard to this?
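For context, something like the following event MPM sketch is the direction I'm considering; the numbers are purely illustrative and would need to be sized to our memory and traffic:

```apache
# Illustrative event MPM sketch only - if each httpd process holds a single
# Redis connection, more processes means more concurrent Redis connections
# for the same total worker count.
<IfModule mpm_event_module>
    ServerLimit         16
    StartServers         8
    ThreadLimit         64
    ThreadsPerChild     40
    MaxRequestWorkers  640   # ServerLimit * ThreadsPerChild
</IfModule>
```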