How do I manage a Python-based daemon on Linux?

I have a working Python-based program that I want to run as a daemon. Currently I'm doing it in a very hackish manner: starting it in a detached screen session with screen -d -m name and killing it with pkill -9 -f name.
Eventually I'm going to have to move this to the better system we use here (thus I'm not willing to modify the program), but in the interim I'm looking for a cleaner way to do this.
My current thinking is to kick it off as a background task from an init.d script, but how do I write the part that brings it back down?

On Linux there is a start-stop-daemon utility, part of the init.d tools.
It is very flexible and allows different ways of capturing the pid of your server.
There is also a file /etc/init.d/skeleton which can serve as a basis for your own init.d script.
If your target platform is Debian-based, it makes sense to create a Debian package to deploy it, as that also helps get the daemon properly integrated into the rest of the system. And it is not too complicated (if you have done it ten times before ;-)

See PEP 3143 -- Standard daemon process library

If you want to do it in Python code, this is a pretty standard C technique ported to Python that I use. It works flawlessly, and you can even choose a file for output.
import os

def daemonize(workingdir='.', umask=0, outfile='/dev/null'):
    # Put the process in the background with a double fork
    pid = os.fork()
    if pid == 0:
        # First child: start a new session, then fork again
        os.setsid()
        pid = os.fork()
        if pid == 0:
            # Second child: this is the daemon process
            os.chdir(workingdir)
            os.umask(umask)
        else:
            os._exit(0)
    else:
        os._exit(0)

    # Close the standard file descriptors
    try:
        os.close(0)
        os.close(1)
        os.close(2)
    except OSError:
        raise Exception("Unable to close standard output. Try running with 'nodaemon'")

    # Redirect output: the open() takes fd 0, and the dup2() calls
    # copy it onto fds 1 and 2 (stdout and stderr)
    os.open(outfile, os.O_RDWR | os.O_CREAT)
    os.dup2(0, 1)
    os.dup2(0, 2)
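A hypothetical usage sketch of the helper above (the log path and loop body are placeholders, not part of the original answer):

import time

# Assumed log path, for illustration only
daemonize(workingdir='/', outfile='/tmp/mydaemon.log')
while True:
    print('still alive', flush=True)  # ends up in /tmp/mydaemon.log
    time.sleep(60)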
Then you can use signals to catch when a kill signal is sent to the program and exit cleanly. Example from the Python docs:
import os
import signal

def handler(signum, frame):
    print('Signal handler called with signal', signum)
    raise IOError("Couldn't open device!")

# Set the signal handler and a 5-second alarm
signal.signal(signal.SIGALRM, handler)
signal.alarm(5)

# This open() may hang indefinitely
fd = os.open('/dev/ttyS0', os.O_RDWR)

signal.alarm(0)  # Disable the alarm

There are modules that can be used to daemonize a Python script.
python-daemon implements the well-behaved daemon specification (PEP 3143).
There is also a module that recently came up on GitHub which seems more Pythonic and easy to use.
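For illustration, a minimal sketch of python-daemon in use; the paths, umask, and the trivial run() loop are made-up example values:

import time
import daemon
from daemon import pidfile

def run():
    while True:
        time.sleep(10)  # the real work would go here

if __name__ == '__main__':
    # working_directory, umask, and the pidfile path are example values
    with daemon.DaemonContext(
            working_directory='/',
            umask=0o002,
            pidfile=pidfile.TimeoutPIDLockFile('/tmp/mydaemon.pid')):
        run()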

Starting it with an init.d-style script is a good way. You take it down with POSIX signals; see Signal handling in Python on Stack Overflow.

Try this question, or more precisely, its accepted solution.

Related

python: Create a detached process and communicate with it via command line

I want to create a command-line tool which is a countdown timer with some custom features I need.
My idea is to use a Python script to fire up a process which does the work in the background (e.g. plays a sound when the end is near). Once the timer process is running, I would like to communicate with it via the command line (send inquiries like 'remaining' or commands like 'start XXmin' and 'stop'). There should be only a single instance of the timer process, of course.
Usage might look like
>>> timer start 25min
>>> timer remaining
17:34 min remaining
>>> timer stop
timer stopped.
>>> timer start 90sec
What would the timer process need to look like to do its work while waiting for messages to arrive? What, in turn, would the interface script need to do to fire up the process and to communicate with it later? Is using a separate process the best idea to achieve my goal?
I have no clue how to go about it. My idea sounds very simple, yet almost all of what I found is concerned with the concurrency of child processes of a parent script, which is not what I want.
Thank you.
What you're looking for here is a basic client-server architecture. You'll need two programs - one which runs in the background and listens for messages (the server), and a second that sends messages to the server, and does something with the responses (the client).
There are a lot of ways to do this, and the area is legitimately complex, so don't expect it to be super easy. For starting out, I'd recommend a simple server built on the standard library's http.server module. For the client side, I'd recommend the requests library. HTTP is definitely not the best possible choice for a local client-server setup, but with the existing libraries it's by far the easiest to get up and running, and once you're comfortable with that, you can look into other approaches if you want to.
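As a rough sketch of that approach (the port, the URL path, and the fixed 25-minute deadline are invented for illustration), the server side might look like this:

import time
from http.server import BaseHTTPRequestHandler, HTTPServer

DEADLINE = time.time() + 25 * 60  # placeholder: a fixed 25-minute timer

class TimerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/remaining':
            remaining = max(0, int(DEADLINE - time.time()))
            body = ('%d:%02d min remaining\n' % divmod(remaining, 60)).encode()
            self.send_response(200)
            self.send_header('Content-Length', str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == '__main__':
    HTTPServer(('127.0.0.1', 8765), TimerHandler).serve_forever()

The client is then little more than requests.get('http://127.0.0.1:8765/remaining').text wrapped in whatever command-line parsing you like.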
The easy way would be to use & in the shell to execute your script in the background, and then communicate with the process using just the USR1 and USR2 signals.
In Python, I guess the easiest way would be to use the daemon module:
import daemon

def do_something():
    pass

if __name__ == "__main__":
    with daemon.DaemonContext():
        do_something()
Or you could fork() your own daemon process.
import os

def doSomething():
    pass

def createDaemon():
    try:
        # Fork and exit the parent so the child runs in the background
        pid = os.fork()
        if pid > 0:
            print('PID: %d' % pid)
            os._exit(0)
        # Detach from the controlling environment
        os.chdir("/")
        os.setsid()
        os.umask(0)
    except OSError as error:
        print('Unable to fork. Error: %d (%s)' % (error.errno, error.strerror))
        os._exit(1)
    doSomething()

createDaemon()
Then, for example, you could use os.pipe to communicate with that daemon process. Or in this simple case, on a *nix system, even just signals.
Another option is to use the multiprocessing module to create the daemon process and also to communicate with it.
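For example, a rough sketch using multiprocessing.connection (the port and authkey are made-up values for illustration):

from multiprocessing.connection import Listener

ADDRESS = ('localhost', 6000)  # assumed port
AUTHKEY = b'timer-secret'      # assumed shared secret

with Listener(ADDRESS, authkey=AUTHKEY) as listener:
    while True:
        with listener.accept() as conn:
            msg = conn.recv()
            if msg == 'stop':
                break  # shut the daemon down
            conn.send('unknown command: %r' % (msg,))

A client would connect with multiprocessing.connection.Client(ADDRESS, authkey=AUTHKEY) and call send()/recv() on the resulting connection.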

How to debug when using multiprocessing in PyCharm

I am debugging a multiprocess program with Anaconda 2 in PyCharm Community Edition.
It has several background worker processes. Each worker process polls the input queue for tasks, without sleeping, until a task is received. In fact, I'm only interested in the main process, but the PyCharm debugger always steps into the subprocess; it seems that the main process never runs, and the task is never sent out. How can I keep the debugger out of the subprocess?
The worker subprocess looks like this:
import logging
import multiprocessing as mp
import ConfigParser
import Queue

class ILSVRC_worker:
    ...
    def run(self):
        cfg_parser = ConfigParser.ConfigParser()
        cfg_parser.read(self.cfg_path)
        data_factory = ILSVRC_DataFactory(cfg_parser)
        logger = mp.log_to_stderr(logging.INFO)
        while True:
            try:
                # Poll the input queue with a 0.1-second timeout
                annotation_path = self.que_in.get(True, 0.1)
            except Queue.Empty:
                continue
            if annotation_path is None:
                # A None task is the signal to exit the subprocess
                logger.info('exit the worker process')
                break
            ...
I can think of two ways to achieve this, but unfortunately I don't think either is possible with the Community Edition.
If you have the PID of the process, you could try attaching to it via the Tools > Attach to Process... functionality (I don't know if that is available in the Community Edition). This is difficult if you use a Pool, because you don't know which process the job is assigned to.
Another way would be to use a remote debugger and connect to it from the dispatched Python process. This is only available in the Professional Edition.
I ended up testing my code without any multiprocessing.

Python: GUI for continuously running script

I am writing a script which will run continuously on a computer. As it has to run on a computer without a Python installation, I am planning to convert it to an executable. I also want a GUI to start and stop this application, but I don't want the GUI to be open all the time. I mean, if the GUI is closed, I don't want the executable to stop running; it should stop only if the user presses the stop button on the GUI. The GUI is just an interface for users to start and stop the executable.
How can I achieve this behavior?
The obvious solution is to have two separate programs: a backgrounder/daemon/agent/service that just chugs along in the background detached from user input and output, and a GUI program to control it. A nice advantage of this design is that you can also have a command-line program to control it, if you ever want to ssh in remotely, or control it from a script.
The traditional Unix way of handling this is to use a daemon designed like a system service (even if it's run like a normal user): it writes its pid to a file when it starts up, and the control program reads that file and sends a signal to the pid that it finds to kill it.
So, the control program has functions something like this:
import os
import signal
import subprocess

PID_PATH = '/tmp/mydaemon.pid'           # assumed pidfile location
DAEMON_PATH = '/usr/local/bin/mydaemon'  # assumed path to the daemon program

def is_running():
    try:
        with open(PID_PATH) as f:
            pid = int(f.read())
        os.kill(pid, 0)  # signal 0 only checks that the process exists
    except Exception:
        return False
    else:
        return True

def stop():
    with open(PID_PATH) as f:
        pid = int(f.read())
    os.kill(pid, signal.SIGUSR1)

def start():
    subprocess.check_call(DAEMON_PATH)
Of course, in real life you'll want better error handling. Also, which signal you use depends on whether you want the daemon to die hard and instantly, or to shut down gracefully. And so on.
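For completeness, a sketch of the daemon side, assuming the same PID_PATH as above and SIGUSR1 as the stop signal (the work loop is a placeholder):

import os
import signal
import sys
import time

PID_PATH = '/tmp/mydaemon.pid'  # must match the control program

def on_stop(signum, frame):
    os.remove(PID_PATH)
    sys.exit(0)

with open(PID_PATH, 'w') as f:
    f.write(str(os.getpid()))
signal.signal(signal.SIGUSR1, on_stop)

while True:
    time.sleep(1)  # the program's real work goes here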
An alternative is to have the background process listen on a socket (whether TCP with a known port, or a Unix socket with a known filename) and communicate with it that way. This allows you to do fancier things than just start and stop.
On Windows, the details aren't quite the same, but you can do something similar.
Finally, Windows, OS X, and various Linux distros all have platform-specific ways of wrapping this kind of thing up at a higher level, so you might want to build a Windows Service, LaunchAgent, etc.
Thanks @abarnert. I used your method and converted your code for Windows. Please see my solution below, which works; it starts and stops helloworld.exe. I have removed error handling to keep it simple.
import subprocess
import time
import win32api  # pywin32

def startprocess():
    # Start helloworld.exe
    process = subprocess.Popen(['helloworld.exe'])
    # Write the process handle to a file for later use
    f = open('progid.txt', 'w')
    f.write(str(int(process._handle)))
    f.close()

def endprocess():
    f = open('progid.txt', 'r')
    progid = int(f.read())
    f.close()
    # Kill the process using pywin32 (TerminateProcess takes the handle)
    win32api.TerminateProcess(progid, -1)

startprocess()
time.sleep(60)  # wait for 60 seconds before killing
endprocess()

How to correctly handle autorun start & stop on linux with python

I have two scripts: "autorun.py" and "main.py". I added "autorun.py" as a service to the autorun in my Linux system. Works perfectly!
Now my question is: when I launch "main.py" from my autorun script, and "main.py" runs forever, "autorun.py" never terminates either! So when I do
sudo service autorun-test start
the command also never finishes!
How can I run "main.py" and then exit? And, to finish it up, how can I then stop "main.py" when "autorun.py" is launched with the parameter "stop"? (This is how all other services work, I think.)
EDIT:
Solution:
import os
import sys
import daemon

if sys.argv[1] == "start":
    print "Starting..."
    with daemon.DaemonContext(working_directory="/home/pi/python"):
        execfile("main.py")
else:
    pid = int(open("/home/pi/python/main.pid").read())
    try:
        os.kill(pid, 9)
        print "Stopped!"
    except OSError:
        print "No process with PID " + str(pid)
First, if you're trying to create a system daemon, you almost certainly want to follow PEP 3143, and you almost certainly want to use the daemon module to do that for you.
When I want to launch "main.py" from my autorun script, and "main.py" will run forever, "autorun.py" never terminates as well!
You didn't say how you're running it. If you're doing anything that launches main.py as a child and waits (or, worse, tries to import/execfile/etc. in the same process), you can't do that. Either autorun.py has to launch and detach main.py (or do so indirectly via some external tool), or main.py has to daemonize when launched.
how can I then stop "main.py" when "autorun.py" is launched with the parameter "stop" ?
You need some form of inter-process communication (IPC), and some way for autorun to find the right IPC channel to use.
If you're building a network server, the right answer might be to connect to it as a client. But otherwise, the simplest thing to do is kill the process with a signal.
If you're using the daemon module, it can easily map signals to callbacks. Or, if you don't need any cleanup, just use SIGTERM, which by default terminates the process abruptly. If neither of those applies, you will have to set up a custom signal handler (and within that handler do something useful, e.g., set a flag that your main code checks periodically).
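A sketch of that signal-to-callback mapping with python-daemon; cleanup() is your own routine, not part of the library:

import signal
import daemon

def cleanup(signum, frame):
    # ... release resources, flush logs, etc. ...
    raise SystemExit(0)

context = daemon.DaemonContext(
    signal_map={
        signal.SIGTERM: cleanup,     # run our own shutdown code
        signal.SIGHUP: 'terminate',  # python-daemon's built-in handler
    })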
How do you know what process to send the signal to? The standard way to do this is to have main.py record its PID in a pidfile at startup. You read that pidfile, and signal whatever process is specified there. (If you get an error because there is no process with that PID, that just means the daemon already quit for some reason—possibly because of an unhandled exception, or even a segfault. You may want to log that, but treat the "stop" as successful otherwise.) Again, if you're using daemon, it does the pidfile stuff for you; if not, you have to do it yourself.
You may want to take a look at the service scripts for daemons that came with your computer. They're probably all written in bash rather than Python, but it's not that hard to figure out what they're doing. Or… just use one of them as a skeleton, in which case you don't really need any bash knowledge; it's just search-and-replace on the name.
If your distro has LSB-style init functions, you can use something like this example. That one does a whole lot more than you need, but it's a good example of all the details. Or do it all from scratch with something like this example. The latter does the pidfile management and the backgrounding from the service script (turning a non-daemon program into a daemon), which you don't need if you're using daemon properly, and it uses SIGHUP instead of SIGTERM. You can google for other examples of init.d service scripts.
But again, if you're just trying to do this for your own system, the best thing to do is look inside the /etc/init.d on your distro. There will be dozens of examples there, and 90% of them will be exactly the same except for the name of the daemon.

How to stop a multi-threaded Python script from outside

I have a Python program that uses multiple daemon threads. I want to stop the program from outside, preferably from another Python script.
I've tried kill <pid> from the shell, just as a test, but it doesn't work with multi-threaded scripts.
One way would be to make the program check some file every n seconds as a flag for termination. I'm sure there's a better way to do this.
Note that I'd like to stop the program cleanly, so some message from outside in the form of an exception would be ideal, I think.
EDIT:
Here's an example of how I did it at the moment:
import time

try:
    open('myprog.lck', 'w').close()
    while True:
        time.sleep(1)
        try:
            open('myprog.lck').close()
        except IOError:
            raise KeyboardInterrupt
except KeyboardInterrupt:
    print('MyProgram terminated.')
Deleting the file myprog.lck will cause the script to stop. Is the example above a bad way to do this?
Use the poison pill technique: upon receipt of a pill (a special message), your program must handle it and die. The way you're doing it is OK, but for something more elegant you should implement a kind of communication between your "killing script" and your main program. For a start, have a look in the standard library for Interprocess Communication and Networking.
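A minimal poison-pill sketch with the standard library (the queue contents and worker body are illustrative; across processes you would feed the pill over a socket or pipe instead of an in-process queue):

import queue
import threading

POISON_PILL = None  # the special "die" message

def worker(q):
    while True:
        task = q.get()
        if task is POISON_PILL:
            break  # handle the pill and die
        print('processing', task)

q = queue.Queue()
t = threading.Thread(target=worker, args=(q,))
t.start()
q.put('job-1')
q.put(POISON_PILL)  # tell the worker to shut down
t.join()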
I would install a signal handler as described in http://www.doughellmann.com/PyMOTW/signal/index.html#signals-and-threads
You can enter kill -l in your shell to get a list of available signals.
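A sketch of that approach, with the handler setting an event that the main thread checks (daemon threads then exit with the process):

import signal
import threading
import time

shutdown = threading.Event()

def handle_term(signum, frame):
    shutdown.set()  # just set a flag; do the real work in the main thread

signal.signal(signal.SIGTERM, handle_term)

while not shutdown.is_set():
    time.sleep(1)  # the main work loop

print('MyProgram terminated.')

With this in place, a plain kill <pid> from the shell triggers a clean shutdown rather than an abrupt one.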
You should be able to kill it from the shell with kill -9 <pid>.
