We are having a problem with individual Apache processes utilizing large amounts of memory, depending on the request, and never releasing it back to the main system. Since these requests can happen at any time, over time the web server is pushed into swap, rendering it unresponsive even to SSH. Worse, after the request has finished, Python fails to release the memory back into the wild, which results in a number of 500 MB - 1 GB Apache processes lying around.
We push very few requests per second, but each request has the potential to be very heavy.
What I would like to do is have a way to kill an individual Apache child process after it has finished serving a request if its resident memory exceeds a certain threshold. I have tried several ways of actually doing this inside mod_python, but it appears that any form of system exit results in the response not completing to the client.
Outside of gracefully restarting all the processes (which we really want to avoid) whenever this happens, is there any way to tell Apache to arbitrarily kill off a process after it has finished serving a request? All ideas are welcome.
As an additional caveat, due to the legacy nature of the system, we can’t upgrade to a later version of Python, so we can’t utilize the improved memory performance of 2.5. Similarly, we are stuck with our current OS.
Versions:
System: Red Hat Enterprise 4
Apache: 2.0.55
Python: 2.3.5
I'd say that even if it is possible, it would be a tremendous hack (and unstable). You should instead set up a process external to Apache that supervises the running processes and kills an individual Apache child when it goes beyond predefined memory/time limits.
Such a script can be kept running continuously with a main loop that performs its checks every few seconds, or it could even be put in crontab to run every minute.
I see no reason to try to do that from inside the serving processes themselves.
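For illustration, here is a minimal sketch of such a watchdog. It assumes the Apache children are named "httpd" (as on Red Hat) and uses a 500 MB resident-memory limit; the name, limit, check interval and signal are all placeholders to adjust. Note that killing a child that is mid-request will interrupt that request, so set the limit well above normal usage.

    import os
    import signal
    import time

    LIMIT_KB = 500 * 1024   # kill Apache children whose RSS exceeds ~500 MB
    APACHE_NAME = "httpd"   # process name on Red Hat; adjust for other distros

    def status_field(pid, field):
        """Read one field (e.g. 'Name' or 'VmRSS') from /proc/<pid>/status."""
        try:
            for line in open("/proc/%d/status" % pid):
                if line.startswith(field + ":"):
                    return line.split(":", 1)[1].strip()
        except IOError:
            pass
        return None

    def check_once():
        for entry in os.listdir("/proc"):
            if not entry.isdigit():
                continue
            pid = int(entry)
            if status_field(pid, "Name") != APACHE_NAME:
                continue
            if status_field(pid, "PPid") == "1":
                continue                        # skip the Apache parent process
            rss = status_field(pid, "VmRSS")    # e.g. "524288 kB"
            if rss and int(rss.split()[0]) > LIMIT_KB:
                os.kill(pid, signal.SIGTERM)    # Apache's parent spawns a replacement

    if __name__ == "__main__":
        while True:
            check_once()
            time.sleep(10)

Run it as root so it can signal the Apache children, either as the daemon loop shown here or trimmed down to a single pass invoked from cron.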
I've installed Nginx + uWSGI + Django on a VDS with 3 CPU cores. uWSGI is configured for 6 processes and 5 threads per process. Now I want to tell uWSGI to use processes for load balancing until all processes are busy, and then to use threads if needed. It seems uWSGI prefers threads, and I have not found any config option to change this behaviour. The first process takes over 100% CPU time, the second one takes about 20%, and the other processes are mostly unused.
Our site receives 40 r/s. Usually even 3 processes without threads are enough to handle all requests. But request processing hangs from time to time for various reasons, such as locked shared resources. In such cases we effectively have one process fewer. Users don't like to wait and click the link again and again; as a result, all processes hang and all users have to wait.
I'd add even more threads to make the server more robust, but the problem is probably the Python GIL: threads won't use all CPU cores, so multiple processes work much better for load balancing. Still, threads may help a lot in the case of locked shared resources and I/O wait delays, since a process can do plenty of work while one of its threads is blocked.
I don't want to decrease time limits unless there is no other solution. It should be possible to solve this problem with threads in theory, and I don't want to show error messages to users or make them wait on every request unless there is no other choice.
So, the solution is:
Upgrade uWSGI to recent stable version (as roberto suggested).
Use --thunder-lock option.
Now I'm running with 50 threads per process and all requests are distributed between processes equally.
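For reference, a minimal uWSGI configuration sketch along those lines; the option names are the standard ones in recent uWSGI releases (1.9/2.x), but the socket path and module below are placeholders:

    [uwsgi]
    # placeholder socket and Django entry point
    socket = /tmp/uwsgi.sock
    module = mysite.wsgi:application
    master = true
    # process/thread counts as in the setup described above
    processes = 6
    threads = 50
    # serialize accept() so requests are spread evenly across processes
    thunder-lock = true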
Every process is effectively a thread, as threads are execution contexts of the same process.
For this reason there is no such thing as "a process executes it instead of a thread". Even without threads your process has one execution context (a thread). What I would investigate is why you get (perceived) poor performance when using multiple threads per process. Are you sure you are using a stable uWSGI release with solid threading support? (1.4.x or 1.9.x)
Have you thought about dynamically spawning more processes when the server is overloaded? Check the uWSGI cheaper modes; there are various algorithms available, and maybe one will fit your situation.
The GIL is not a problem for you, as from what you describe the issue is a lack of threads for managing new requests (even if, from your numbers, it looks like you may have too much lock contention on something else).
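If you do want to experiment with the cheaper modes mentioned above, a hedged sketch of the relevant options follows; the worker counts and the choice of the "spare" algorithm are just examples:

    [uwsgi]
    # grow towards 8 workers under load, shrink back to 2 when idle
    processes = 8
    cheaper = 2
    cheaper-initial = 3
    cheaper-step = 1
    cheaper-algo = spare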
I'm designing a long-running process, triggered by a Django management command, that needs to run on a fairly frequent basis. It is supposed to run every 5 minutes via a cron job, but I want to prevent a second instance from starting in the rare case that the first takes longer than 5 minutes.
I've thought about using a touch file that gets created when the management process starts and is removed when the process ends. A second management command process would then check that the touch file doesn't exist before running. But that seems fragile if a process dies abruptly without removing the touch file. It seems like there's got to be a better way to do that check.
Does anyone know any good tools or patterns to help solve this type of issue?
For this reason I prefer to have a long-running process that gets its work off of a shared queue. By long-running I mean that its lifetime is longer than a single unit of work. The process is then controlled by some daemon service such as supervisord, which can take over responsibility for restarting the process when it crashes. This delegates the work appropriately to something that knows how to manage process lifecycles and frees you from having to worry about the nitty-gritty of POSIX processes in the scope of your script.
If you have a queue, you also have the luxury of being able to spin up multiple processes that can each take jobs off of the queue and process them, but that sounds like it's out of scope of your problem.
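As an illustration, a minimal sketch of such a long-running worker, assuming a Redis list is used as the shared queue and supervisord restarts the script if it ever crashes; the queue name and handler below are placeholders:

    import json
    import redis  # assumes the redis-py client is available

    QUEUE_KEY = "myapp:jobs"   # placeholder queue name
    conn = redis.Redis()       # placeholder connection settings

    def handle(job):
        # placeholder: perform one unit of work
        print("processing job: %r" % (job,))

    while True:
        # BLPOP blocks until a job arrives, so the worker sits idle between jobs
        _key, raw = conn.blpop(QUEUE_KEY)
        handle(json.loads(raw))

The management command (or whatever enqueues the work) then just does conn.rpush(QUEUE_KEY, json.dumps(payload)) and exits immediately, so overlapping cron runs stop being a concern.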
It seems logical to me that processes should die and memory be cleared after the Python scripts have run and the HTTP response has been sent.
However, it seems I have four processes running, one of which is using over 100 MB of memory.
It seems like way too much for what I am doing. Is there some garbage collector settings I need to configure or something?
mod_wsgi keeps the loaded Python process in memory to speed up any further requests. That is absolutely normal.
I'm running a Django application on Apache with mod_wsgi. Will there be any downtime during an upgrade?
mod_wsgi is running in daemon mode, so I can reload my code by touching the .wsgi script file, as described in the "ReloadingSourceCode" document: http://code.google.com/p/modwsgi/wiki/ReloadingSourceCode. Presumably, that reload requires some non-zero amount of time. What happens if a request comes in during the reload? Will Apache queue the request and then complete it once the WSGI daemon is ready?
The documentation includes the following statement:
So, if you are using Django in daemon mode and needed to change your 'settings.py' file, once you have made the required change, also touch the script file containing the WSGI application entry point. Having done that, on the next request the process will be restarted and your Django application reloaded.
To me, that suggests that Apache will gracefully handle every request, but I thought I would ask to be sure. My app isn't critical (a little downtime wouldn't be disastrous) so the question is mostly academic.
Thank you.
In daemon mode there is no concept of a graceful restart when the WSGI script file is touched to force a reload. That is, unlike Apache itself, which will start new Apache server child processes while waiting for old processes to finish up with current requests, for mod_wsgi daemon processes the existing process must exit before a new one starts up.
The consequence of this is that mod_wsgi can't wait indefinitely for current requests to complete. If it did, there would be a risk that, with all daemon processes tied up waiting for current requests to finish, clients would see a noticeable delay in being handled.
At the other end of the scale however, the daemon process can't be immediately killed as that would cause current requests to be interrupted.
A middle ground therefore exists. The daemon process will wait for requests to finish before exiting, but if they haven't completed within the shutdown period, then the daemon process will be forcibly quit and the active requests will be interrupted.
The period of this shutdown timeout defaults to 5 seconds. It can be overridden using the shutdown-timeout option to the WSGIDaemonProcess directive, but due consideration should be given to the effects of changing it.
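For example, something along these lines in the Apache configuration (the daemon process group name and numbers here are just placeholders):

    WSGIDaemonProcess myapp processes=2 threads=15 shutdown-timeout=10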
Thus, in respect of this specific issue, if you have long running requests still active when the first request comes in after you touched the WSGI script file, there is the risk that the active long requests will be interrupted.
The next notable thing you may see is that even if there are no long-running requests and processes shut down promptly, it is still necessary to load the WSGI application again within the new process. The time this takes will be seen as a delay in handling the request. How big that delay is will depend on the framework and your application. The worst offender as far as start-up time goes that I know of is TurboGears. Django is somewhat better, and the best for quick start-up are lightweight micro frameworks such as Flask.
Do note that any new requests which come in while these shutdown and startup delays occur should not be lost. This is because the HTTP listener socket has a certain backlog depth and connections queue up in it while waiting to be accepted. If the number of requests arriving is huge though and that queue fills up, then you will start to see connection refused errors in the browser.
No, there will be no downtime. Requests using the old code will complete, and new requests will use the new code.
There will be a bit more load on the server as the new code loads, but unless your application is colossal and your servers are already nearly overloaded, this will be unnoticeable.
This is like the apachectl graceful command for Apache as a whole, which tells it to start a new configuration without downtime.
I'm in the process of writing a Python script to act as "glue" between an application and some external devices. The script itself is quite straightforward and has three distinct steps:
Request data (from a socket connection, via UDP)
Receive response (from a socket connection, via UDP)
Process response and make data available to 3rd party application
However, this will be done repetitively, and for several (+/-200) different devices. So once it has reached device #200, it would start requesting data from device #001 again. My main concern here is not to bog down the processor whilst executing the script.
UPDATE:
I am using three threads, one for each of the steps above. The request/response is asynchronous, as each response contains everything I need to be able to process it (including the sender's details).
Is there any way to allow the script to run in the background and consume as few system resources as possible while doing its thing? This will be running on a Windows 2003 machine.
Any advice would be appreciated.
If you are using blocking I/O to your devices, then the script won't consume any processor time while waiting for the data. How much processor you use depends on what sorts of computation you are doing with the data.
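A minimal sketch of that blocking, round-robin style, assuming hypothetical device addresses, request payload, and a per-device timeout:

    import socket

    # hypothetical device addresses and request payload
    DEVICES = [("192.168.0.%d" % i, 5000) for i in range(1, 201)]
    REQUEST = b"REQUEST"

    def handle_response(data, sender):
        # placeholder: parse the response and hand it to the 3rd party application
        pass

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)   # don't block forever on a silent device

    while True:
        for addr in DEVICES:
            sock.sendto(REQUEST, addr)
            try:
                data, sender = sock.recvfrom(4096)   # blocks without burning CPU
            except socket.timeout:
                continue                             # device didn't answer in time
            handle_response(data, sender)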
Twisted -- the best async framework for Python -- would allow you to perform these tasks with minimal hogging of system resources, most especially (though not exclusively) if you want to process several devices "at once" rather than just round-robin among the several hundred. The latter might result in too long a cycle time, especially if there's a risk that some device will answer very late, or even fail to answer once in a while and result in a timeout; as a rule of thumb I'd suggest having at least half a dozen devices "in play" at any given time to avoid this excessive-delay risk.
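A hedged sketch of what the Twisted version might look like, again with hypothetical addresses, payload, and polling interval; DatagramProtocol, reactor.listenUDP and reactor.callLater are real Twisted APIs, everything else is a placeholder:

    from twisted.internet import protocol, reactor

    # hypothetical device addresses and request payload
    DEVICES = [("192.168.0.%d" % i, 5000) for i in range(1, 201)]
    REQUEST = b"REQUEST"

    def handle_response(data, addr):
        # placeholder: hand the data to the 3rd party application
        pass

    class DevicePoller(protocol.DatagramProtocol):
        def startProtocol(self):
            self.next = 0
            # keep about half a dozen requests "in play" at once
            for _ in range(6):
                self.poll()

        def poll(self):
            addr = DEVICES[self.next % len(DEVICES)]
            self.next += 1
            self.transport.write(REQUEST, addr)
            # move this polling slot on to the next device after a short interval
            reactor.callLater(0.5, self.poll)

        def datagramReceived(self, data, addr):
            # each response identifies its sender, so it can be processed out of order
            handle_response(data, addr)

    reactor.listenUDP(0, DevicePoller())
    reactor.run()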