Django: How to run a function when server exits? - python

I am writing a Django project where several processes are opened using Popen. Right now, when the server exits, these processes are orphaned. I have a function to terminate these processes, and I wish to organise it so that this function is called automatically when the server quits.
Any help would be greatly appreciated.

Since you haven't specified which HTTP server you are using (uWSGI, nginx, apache etc.), you can test this recipe out on a simple dev server.
What you can try is to register a cleanup function via the atexit module that will be called at process termination. You can do this easily by overriding Django's built-in runserver command.
Create a file named runserver.py and put it in the $PATH_TO_YOUR_APP/management/commands/ directory.
Assume PROCESSES_TO_KILL is a global list holding references to the spawned processes that should be killed upon server termination.
import atexit
import signal
import sys
from django.core.management.commands.runserver import BaseRunserverCommand
class Command(BaseRunserverCommand):
    def __init__(self, *args, **kwargs):
        atexit.register(self._exit)
        signal.signal(signal.SIGINT, self._handle_SIGINT)
        super(Command, self).__init__(*args, **kwargs)

    def _exit(self):
        # Terminate every process we spawned before the server goes away.
        for process in PROCESSES_TO_KILL:
            process.terminate()

    def _handle_SIGINT(self, signum, frame):
        self._exit()
        sys.exit(0)
Just be aware that this works great for normal termination of the script, but it won't get called in all cases (e.g. fatal internal errors).
Hope this helps.

First of all "When the server quits" is ambiguous. Does this stuff run when responding to a request? Does this stuff run during a management command?
Let's assume for the sake of argument, that you are running this somewhere in a view, so you want to have something that runs after each view returns in order to clean up junk that the view left hanging around.
Most likely, what you are looking to do is to write some Middleware. Even more specifically, some sort of process_response.
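For illustration, here is a minimal sketch of such a middleware, assuming PROCESSES_TO_KILL is a module-level list of Popen handles that your views append to (both names are placeholders, not Django API):

class ProcessCleanupMiddleware(object):
    def process_response(self, request, response):
        # Terminate whatever the view left running before the response goes out.
        while PROCESSES_TO_KILL:
            PROCESSES_TO_KILL.pop().terminate()
        return response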
However, based on the short description of what you have so far, it sounds far more likely that you should be using a task manager such as Celery to manage asynchronous tasks and processes.
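If you go the Celery route, the external work lives in a task instead of a Popen call; a rough sketch (the broker URL and task body are placeholders):

from celery import Celery

app = Celery('myapp', broker='redis://localhost:6379/0')

@app.task
def do_heavy_work(arg):
    # whatever the spawned process used to do
    pass

# From the view: queue it and return immediately.
do_heavy_work.delay('some-arg')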

Related

How can I use CoreBluetooth for Python without giving up the main thread

I am trying to implement a generic BLE interface that will run on OS/X and talk to a BLE peripheral device. The peripheral is very complex: It can be queried, sent hundreds of different commands, offers notifications, etc. I need to be able to connect to it, send it commands, read responses, get updates, etc.
I have all of the code I need but am being frustrated by one thing: From the limited information I can find online, it looks like the only way to make CoreBluetooth's delegate callbacks get called is by running:
from PyObjCTools import AppHelper
# [functional CoreBluetooth code that scans for peripherals]
AppHelper.runConsoleEventLoop()
The problem is that AppHelper.runConsoleEventLoop blocks the main thread from continuing, so I cannot execute code to interact with the peripheral.
I have tried running the event loop:
From a different thread ---> Delegate callbacks not called
From a subprocess ---> Delegate callbacks not called
From a forked child ---> Python crashes with error message: The process has forked and you cannot use this CoreFoundation functionality safely. You MUST exec().
From multiprocessing.Pool(1).apply_async(f) ---> Python crashes with error message: The process has forked and you cannot use this CoreFoundation functionality safely. You MUST exec().
all without success.
I do not understand the nature of AppHelper.runConsoleEventLoop. Why does it need to be running in order for the CoreBluetooth delegate callbacks to be called? Is there some other version that doesn't have to be run on the main thread? I read something on the web about it being GUI-related and therefore having to run on the main thread, but my Python application does not have any GUI elements. Is there a flag or API that is less concerned with the GUI that I could use?
Any help would be enormously appreciated. Thanks for your time!
Update:
I spoke with an iOS/CoreBluetooth expert at work and found out that Dispatch Queues are probably the solution. I dug further and found that the author of the PyObjC package recently released a v4.1 that adds support for dispatch queues that was heretofore missing.
I've been reading Apple developer documentation for hours now and I understand that it's possible to create Dispatch Source objects that monitor certain system events (such as BLE peripheral events that I am interested in) and that configuring them involves creating and assigning a Dispatch Queue, which is the class that calls my CBCentralManager delegate callback methods. The one piece of the puzzle that I am still missing is how to connect the Dispatch Source/Queue stuff to the AppHelper.runConsoleEventLoop, which calls Foundation.NSRunLoop.currentRunLoop(). If I put the call to AppHelper on a separate thread, how do I tell it which Dispatch Source/Queue to work with in order to get event info?
So I finally figured it out. If you want to run an event loop on a separate thread so that you don't lose control of the main thread, you must create a new dispatch queue and initialize your CBCentralManager with it.
import logging
import threading

import CoreBluetooth
import libdispatch
from PyObjCTools import AppHelper

class CentralManager(object):
    def __init__(self, delegate):
        # Create the CBCentralManager on a private dispatch queue so its
        # delegate callbacks don't depend on the main-thread run loop.
        dispatch_queue = libdispatch.dispatch_queue_create('<queue name>', None)
        self.central_manager = CoreBluetooth.CBCentralManager.alloc().initWithDelegate_queue_options_(
            delegate, dispatch_queue, None)

    def do_the_things(self, args):
        # scan, connect, send messages, w/e
        pass

class EventLoopThread(threading.Thread):
    def __init__(self):
        super(EventLoopThread, self).__init__()
        self.setDaemon(True)
        self.should_stop = False

    def run(self):
        logging.info('Starting event loop on background thread')
        AppHelper.runConsoleEventLoop(installInterrupt=True)

    def stop(self):
        logging.info('Stopping the event loop')
        AppHelper.stopEventLoop()

event_loop_thread = EventLoopThread()
event_loop_thread.start()

central_device = CentralManager(my_delegate)  # my_delegate: your CBCentralManagerDelegate implementation
central_device.do_the_things('woo hoo')

event_loop_thread.stop()

python web thread

So I have a simple Python CGI script. The web front end is used to add stuff to a database, and I have an update() function that does some cleanup.
I want to run the update() function every time something is added to site, but it needs to be in the background. That is, the webpage should finish loading without waiting for the update() function to finish.
Now I use:
-add stuff to db
Thread(target=update).start()
-redirect to index page
The problem seems to be that python does not want to finish the request (redirect) until the update() thread is done.
Any ideas?
That is, the webpage should finish loading without waiting for the update() function to finish
CGI has to wait for the process -- as a whole -- to finish. Threads aren't helpful.
You have three choices.
subprocess. Spawn a separate "no wait" subprocess to do the update. Provide all the information as command-line parameters (a sketch follows this list).
multiprocessing. Have your CGI place a work request on a Queue. You'd start a separate listener process which handles the update requests from that Queue.
celery. Download Celery and use it to manage the separate worker process that does the background processing.
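A rough sketch of the subprocess option, assuming the update logic can be invoked from a small standalone script (update_worker.py is a placeholder name):

import os
import subprocess
import sys

def queue_update(record_id):
    # Fire-and-forget: no wait()/communicate(), so the CGI response returns
    # immediately. Redirect the child's streams so it doesn't keep the
    # inherited HTTP connection (stdout) open.
    devnull = open(os.devnull, 'r+b')
    subprocess.Popen(
        [sys.executable, 'update_worker.py', str(record_id)],
        stdin=devnull, stdout=devnull, stderr=devnull,
        close_fds=True,
    )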
you could add a database trigger to update the db in response to an event, e.g., if a specific column has changed
start a subprocess, e.g., subprocess.Popen([sys.executable, '-c', "from m import update; update()"]). It might not work depending on your cgi environment
or just touch an update file that is picked up by an inotify script, which runs the necessary updates in a separate process
switch to a different execution environment, e.g., some multithreaded wsgi-server
as a heavyweight option you could use celery if it is easy to deploy in your environment

Python:Django: Signal handler and main thread

I am building a django application which depends on a python module where a SIGINT signal handler has been implemented.
Assuming I cannot change the module I depend on, how can I work around the "signal only works in main thread" error I get when integrating it with Django?
Can I run it on the Django main thread?
Is there a way to inhibit the handler to allow the module to run on non-main threads ?
Thanks!
Django's built-in development server has auto-reload feature enabled by default which spawns a new thread as a means of reloading code. To work around this you can simply do the following, although you'd obviously lose the convenience of auto-reloading:
python manage.py runserver --noreload
You'll also need to be mindful of this when choosing your production setup. At least some of the deployment options (such as threaded FastCGI) are certain to execute your code outside the main thread.
I use Python 3.5 and Django 1.8.5 in my project, and I ran into a similar problem recently. I can run my xxx.py code with its signal handler directly without any trouble, but it can't be executed under Django as a package, precisely because of the error "signal only works in main thread".
Firstly, runserver with --noreload --nothreading is usable, but it runs my multi-threaded code too slowly for me.
Secondly, I found that code in the __init__.py of my package does run in the main thread, but of course only the main thread can catch the signal, so my code inside the package can't catch it at all. That didn't solve my problem, although it may be a solution for you.
Finally, I found that Python has a built-in module named subprocess. With it you can run a real, separate process; that process has its own main thread, so code that registers a signal handler runs there without complaint. I don't know what the performance cost of using it is, but it works well for me. PS: you can find all the details about subprocess in the Python documentation.
Thank you~
There is a cleaner way, that doesn't break your ability to use threads and processes.
Put your registration calls in manage.py:
import os
import signal
import sys
import threading

def handleKill(signum, frame):
    print("Killing Thread.")
    # Or whatever code you want here
    ForceTerminate.FORCE_TERMINATE = True  # the answer's own shared shutdown flag
    print(threading.active_count())
    sys.exit(0)

if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
    from django.core.management import execute_from_command_line

    signal.signal(signal.SIGINT, handleKill)
    signal.signal(signal.SIGTERM, handleKill)

    execute_from_command_line(sys.argv)
Although the question does not describe exactly the situation you are in, here is some more generic advice:
The signal is only sent to the main thread. For this reason, the signal handler should be in the main thread.
From that point on, the action that the signal triggers, needs to be communicated to the other threads. I usually do this using Events. The signal handler sets the event, which the other threads will read, and then realize that action X has been triggered. Obviously this implies that the event attribute should be shared among the threads.
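A minimal sketch of that pattern (do_some_work and cleanup stand in for whatever your threads actually do):

import signal
import threading

stop_event = threading.Event()

def handle_sigint(signum, frame):
    # Runs in the main thread; just record that the signal arrived.
    stop_event.set()

signal.signal(signal.SIGINT, handle_sigint)  # registered from the main thread

def worker():
    while not stop_event.is_set():
        do_some_work()   # placeholder for the thread's real job
    cleanup()            # placeholder: react to the triggered action here

threading.Thread(target=worker).start()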

Python signals hosted on WSGI

I'm using Python's signal module to kill a function if it runs longer than a set period of time.
It works well in my tests but when hosted on the server I get the following error
"signal only works in main thread"
I have set the WSGI signals restriction to be off in my httpd.conf
WSGIRestrictSignal Off
as described
http://code.google.com/p/modwsgi/wiki/ApplicationIssues#Registration_Of_Signal_Handlers
I'm using the functions from the recipe described here
http://code.activestate.com/recipes/307871/
Not sure what I'm doing wrong. Is there a way to ensure that the signals are called in the main thread?
The only time any code under Apache/mod_wsgi runs in the main thread is when a WSGI script file is being imported via WSGIImportScript or an equivalent mechanism. Although you could use that moment to register the signal handler from the main thread, it would be of no use, because all subsequent requests are serviced by secondary threads and never by the main thread. As a consequence, I don't think your signal handler will ever run: recall that a Python signal handler can only fire while the main thread is actually doing something, and that will not be the case here, since the main thread in Apache/mod_wsgi is simply blocked waiting for the process to be shut down.
What is the operation doing that you are trying to kill? Doing it within the context of a web application, especially a multithreaded one, probably isn't a good idea.
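If the goal is just to bound how long an operation may run, one signal-free alternative is to push the work into a child process and terminate that process on a timeout; a rough sketch, assuming the slow call can be moved into a picklable function:

import multiprocessing

def run_with_timeout(func, args=(), timeout=10):
    # Run func in a child process and kill it if it takes too long.
    # No SIGALRM is involved, so this also works off the main thread.
    proc = multiprocessing.Process(target=func, args=args)
    proc.start()
    proc.join(timeout)
    if proc.is_alive():
        proc.terminate()
        proc.join()
        raise RuntimeError('operation timed out after %s seconds' % timeout)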

Twisted network client with multiprocessing workers?

So, I've got an application that uses Twisted + Stomper as a STOMP client which farms out work to a multiprocessing.Pool of workers.
This appears to work ok when I just use a python script to fire this up, which (simplified) looks something like this:
# stompclient.py
import logging.config
import multiprocessing

import twisted.python.log
from twisted.internet import reactor

logging.config.fileConfig(config_path)
logger = logging.getLogger(__name__)

# Add observer to make Twisted log via python
twisted.python.log.PythonLoggingObserver().start()

# initialize the process pool. (child processes get forked off immediately)
pool = multiprocessing.Pool(processes=processes)

StompClientFactory.username = username
StompClientFactory.password = password
StompClientFactory.destination = destination

reactor.connectTCP(host, port, StompClientFactory())
reactor.run()
As this gets packaged for deployment, I thought I would take advantage of the twistd script and run this from a tac file.
Here's my very-similar-looking tac file:
# stompclient.tac
import logging.config
import multiprocessing

import twisted.python.log
from twisted.application import internet, service

logging.config.fileConfig(config_path)
logger = logging.getLogger(__name__)

# Add observer to make Twisted log via python
twisted.python.log.PythonLoggingObserver().start()

# initialize the process pool. (child processes get forked off immediately)
pool = multiprocessing.Pool(processes=processes)

StompClientFactory.username = username
StompClientFactory.password = password
StompClientFactory.destination = destination

application = service.Application('myapp')
stomp_service = internet.TCPClient(host, port, StompClientFactory())
stomp_service.setServiceParent(application)
For the sake of illustration, I have collapsed or changed a few details; hopefully they were not the essence of the problem. For example, my app has a plugin system, the pool is initialized by a separate method, and then work is delegated to the pool using pool.apply_async() passing one of my plugin's process() methods.
So, if I run the script (stompclient.py), everything works as expected.
It also appears to work OK if I run twistd in non-daemon mode (-n):
twistd -noy stompclient.tac
however, it does not work when I run in daemon mode:
twistd -oy stompclient.tac
The application appears to start up OK, but when it attempts to fork off work, it just hangs. By "hangs", I mean that it appears that the child process is never asked to do anything and the parent (that called pool.apply_async()) just sits there waiting for the response to return.
I'm sure that I'm doing something stupid with Twisted + multiprocessing, but I'm really hoping that someone can explain to me the flaw in my approach.
Thanks in advance!
Since the difference between your working invocation and your non-working invocation is only the "-n" option, it seems most likely that the problem is caused by the daemonization process (which "-n" prevents from happening).
On POSIX, one of the steps involved in daemonization is forking and having the parent exit. Among other things, this has the consequence of having your code run in a different process than the one in which the .tac file was evaluated. This also re-arranges the child/parent relationship of processes which were started in the .tac file - as your pool of multiprocessing processes was.
The multiprocessing pool's processes start off with a parent of the twistd process you start. However, when that process exits as part of daemonization, their parent becomes the system init process. This may cause some problems, although probably not the hanging problem you described. There are probably other similarly low-level implementation details which normally allow the multiprocessing module to work but which are disrupted by the daemonization process.
Fortunately, avoiding this strange interaction should be straightforward. Twisted's service APIs allow you to run code after daemonization has completed. If you use these APIs, then you can delay the initialization of the multiprocessing module's process pool until after daemonization and hopefully avoid the problem. Here's an example of what that might look like:
from twisted.application.service import Service

class MultiprocessingService(Service):
    def startService(self):
        # Runs after twistd has daemonized, so the pool workers fork from
        # the daemon process rather than the one that exits.
        self.pool = multiprocessing.Pool(processes=processes)

MultiprocessingService().setServiceParent(application)
Now, separately, you may also run into problems relating to clean up of the multiprocessing module's child processes, or possibly issues with processes created with Twisted's process creation API, reactor.spawnProcess. This is because part of dealing with child processes correctly generally involves handling the SIGCHLD signal. Twisted and multiprocessing aren't going to be cooperating in this regard, though, so one of them is going to get notified of all children exiting and the other will never be notified. If you don't use Twisted's API for creating child processes at all, then this may be okay for you - but you might want to check to make sure any signal handler the multiprocessing module tries to install actually "wins" and doesn't get replaced by Twisted's own handler.
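One quick, purely diagnostic way to see which handler ended up installed once both the reactor and the pool are running:

import signal
# Prints SIG_DFL, SIG_IGN, or whichever Python-level handler currently owns SIGCHLD.
print(signal.getsignal(signal.SIGCHLD))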
A possible idea for you...
When running in daemon mode, twistd will close stdin, stdout and stderr. Does anything your code does read from or write to these?
