Avoiding Apache Max Request Workers Errors
[Update 2025-10-09… turns out my first great solution doesn’t work after all, updated to reflect new options.]
Wow, I hate this error:
AH00484: server reached MaxRequestWorkers setting, consider raising the MaxRequestWorkers setting
Or, the variation:
AH03490: scoreboard is full, not at MaxRequestWorkers. Increase ServerLimit.
For starters, it means I have to relearn how MaxRequestWorkers functions in Apache:
For threaded and hybrid servers (e.g. event or worker), MaxRequestWorkers restricts the total number of threads that will be available to serve clients. For hybrid MPMs, the default value is 16 (ServerLimit) multiplied by the value of 25 (ThreadsPerChild). Therefore, to increase MaxRequestWorkers to a value that requires more than 16 processes, you must also raise ServerLimit.
Ok… remind me what ServerLimit refers to?
For the prefork MPM, this directive sets the maximum configured value for MaxRequestWorkers for the lifetime of the Apache httpd process. For the worker and event MPMs, this directive in combination with ThreadLimit sets the maximum configured value for MaxRequestWorkers for the lifetime of the Apache httpd process. For the event MPM, this directive also defines how many old server processes may keep running and finish processing open connections. Any attempts to change this directive during a restart will be ignored, but MaxRequestWorkers can be modified during a restart.

Special care must be taken when using this directive. If ServerLimit is set to a value much higher than necessary, extra, unused shared memory will be allocated. If both ServerLimit and MaxRequestWorkers are set to values higher than the system can handle, Apache httpd may not start or the system may become unstable.

With the prefork MPM, use this directive only if you need to set MaxRequestWorkers higher than 256 (default). Do not set the value of this directive any higher than what you might want to set MaxRequestWorkers to.

With worker, use this directive only if your MaxRequestWorkers and ThreadsPerChild settings require more than 16 server processes (default). Do not set the value of this directive any higher than the number of server processes required by what you may want for MaxRequestWorkers and ThreadsPerChild.

With event, increase this directive if the process number defined by your MaxRequestWorkers and ThreadsPerChild settings, plus the number of gracefully shutting down processes, is more than 16 server processes (default).
Got it? In other words, you can “consider” raising the MaxRequestWorkers
setting all you want, but you can’t just change that setting: you have to read
about several other complicated settings, do some math, and spend a lot of time
wondering whether you will remember what you just did, and how to undo it if
you blow up your server.
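For the record, the shape of the change the docs describe would look something like this (the numbers here are made-up examples, not a recommendation — the only rule is that ServerLimit × ThreadsPerChild must cover MaxRequestWorkers):

```apache
# Hypothetical mpm_event tuning, e.g. in mods-available/mpm_event.conf.
# 32 processes x 25 threads = 800 workers, so all three must move together.
ServerLimit           32
ThreadsPerChild       25
MaxRequestWorkers     800
```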
On the plus side, you typically shouldn’t increase this limit at all: if the server runs out of connections, it usually means something else is wrong.
In our case, on a shared web server running Apache2 and PHP-FPM, it’s usually because a single web site has gone out of control.
But wait! How can that happen? We are using PHP-FPM’s max_children setting precisely
to prevent a single PHP web site from taking down the server.
After years of struggling with this problem I have finally made some headway.
Our PHP pool configuration typically looks like this:
user = site342999writer
group = site342999writer
listen = /run/php/8.1-site342999.sock
listen.owner = www-data
listen.group = www-data
pm = ondemand
pm.max_children = 12
pm.max_requests = 500
php_admin_value[memory_limit] = 256M
And we invoke PHP-FPM via this apache snippet:
<FilesMatch \.php$>
SetHandler "proxy:unix:/var/run/php/8.1-site342999.sock|fcgi://localhost"
</FilesMatch>
With these settings in place, what happens when we use up all 12 max_children?
According to the docs:
By default, mod_proxy will allow and retain the maximum number of connections that could be used simultaneously by that web server child process. Use the max parameter to reduce the number from the default. The pool of connections is maintained per web server child process, and max and other settings are not coordinated among all child processes, except when only one child process is allowed by configuration or MPM design.
The max parameter seems to default to ThreadsPerChild, so the default here is to
allow any one web site to consume ThreadsPerChild (25) × ServerLimit (16) = 400
connections, which is also the overall connection limit for the server. Not
great.
To make matters worse, there is another setting, mysteriously called
acquire:
If set, this will be the maximum time to wait for a free connection in the connection pool, in milliseconds. If there are no free connections in the pool, the Apache httpd will return SERVER_BUSY status to the client.
By default this is not set, which seems to suggest Apache will just hang on to connections forever, until a free PHP process becomes available (or some other timeout happens).
So, let’s try something different:
<Proxy "fcgi://localhost">
ProxySet acquire=1 max=12
</Proxy>
This snippet seems to be the way to configure the proxy we set up in the
SetHandler statement above. It’s documented on the Apache
mod_proxy
page.
Unfortunately, this does not work. I tried all kinds of different combinations,
but my best guess is that max and acquire are reserved for TCP
connections, not Unix socket connections, so the only way to achieve this would
be to switch our PHP-FPM configuration to listen on 127.0.0.1 instead of Unix
sockets, which would bring its own problems.
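For completeness, here is a sketch of what that TCP variant might look like — the port number is invented, and this is exactly the change we decided against:

```apache
# Hypothetical TCP setup. The FPM pool would use
#   listen = 127.0.0.1:9342
# instead of a unix socket, and the vhost would proxy to it with limits:
<FilesMatch \.php$>
    SetHandler "proxy:fcgi://127.0.0.1:9342"
</FilesMatch>
<Proxy "fcgi://127.0.0.1:9342">
    ProxySet max=12 acquire=1
</Proxy>
```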
Now what?
I can see two options:
- We already pass all traffic via an nginx proxy before it even hits one of
  our apache back end servers. So, rather than configure just one nginx
  upstream, we can assign each site its very own upstream with its very own
  max_conns setting. It feels ugly and wasteful to have one upstream per site
  on a shared server, but it works.
- Install an unsupported apache module. I found mod_vhost_limit, whose very
  existence seems to confirm my failed struggle to get this working. It was
  written for Red Hat and hasn’t been touched in five years, but I managed to
  get it working on Debian Trixie without much effort. And when I tested it,
  it worked on the first try.
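A hedged sketch of the first option, the per-site nginx upstream — the upstream name, backend address, and hostname are all invented, and 12 mirrors the pool’s pm.max_children:

```nginx
# Hypothetical per-site upstream: cap this site at 12 concurrent
# connections to the apache backend, matching pm.max_children.
upstream site342999_backend {
    server 10.0.0.21:80 max_conns=12;
}

server {
    server_name site342999.example.org;

    location / {
        proxy_pass http://site342999_backend;
    }
}
```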