Python: Create a detached process and communicate with it via the command line

I want to create a command-line tool which is a countdown timer with some custom features I need.
My idea is to use a python script to fire up a process which does the work in the background (e.g. play sound when close to the end). Once the timer process is running I would like to communicate with it via the command line (send inquiries like 'remaining' or commands like 'start XXmin' and 'stop'). There should be only a single instance of the timer process, of course.
Usage might look like
>>> timer start 25min
>>> timer remaining
17:34 min remaining
>>> timer stop
timer stopped.
>>> timer start 90sec
What would the timer process need to look like to do its work while waiting for messages to arrive? What, in turn, would the interface script need to do to fire up the process and to communicate with it later? Is using a separate process the best idea to achieve my goal?
I have no clue how to go about it. My idea sounds very simple yet almost all of what I found is concerned with the concurrency of child processes of a parent script, which is not what I want.
Thank you.

What you're looking for here is a basic client-server architecture. You'll need two programs - one which runs in the background and listens for messages (the server), and a second that sends messages to the server, and does something with the responses (the client).
There are a lot of ways to do this, and the area is legitimately complex, so don't expect it to be super easy. For just starting out, I'd recommend the standard library's http.server module for the server side and the requests library for the client. HTTP is definitely not the best possible choice for a local client-server setup, but with the existing libraries it's by far the easiest to get up and running, and once you're comfortable with that, you can look into other approaches if you want to.
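As a rough sketch of what that could look like (the port 8123 and the /start, /remaining and /stop paths are arbitrary choices of mine, not anything http.server or requests prescribe), the server keeps a single deadline in memory:
# timer_server.py -- run once in the background, e.g. `python timer_server.py &`
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

deadline = None  # wall-clock time at which the timer expires, or None

class TimerHandler(BaseHTTPRequestHandler):
    def _reply(self, text):
        body = text.encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):
        global deadline
        if self.path.startswith("/start/"):        # e.g. GET /start/1500  (seconds)
            deadline = time.time() + int(self.path.rsplit("/", 1)[1])
            self._reply("timer started")
        elif self.path == "/remaining":
            if deadline is None:
                self._reply("no timer running")
            else:
                self._reply("%d seconds remaining" % max(0, deadline - time.time()))
        elif self.path == "/stop":
            deadline = None
            self._reply("timer stopped.")
        else:
            self._reply("unknown command")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8123), TimerHandler).serve_forever()
The command-line front end is then just a thin wrapper around requests:
# timer.py -- the client, e.g. `python timer.py start 1500` or `python timer.py remaining`
import sys
import requests

BASE = "http://127.0.0.1:8123"
if sys.argv[1] == "start":
    print(requests.get("%s/start/%s" % (BASE, sys.argv[2])).text)
else:
    print(requests.get("%s/%s" % (BASE, sys.argv[1])).text)
The "play a sound near the end" part would then live inside the server process, for example as a background thread that checks the deadline every few seconds.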

The easy way would be to use & in the shell to execute your script in the background, and then communicate with the process using just the USR1 and USR2 signals.
In Python, I guess the easiest way would be to use the python-daemon module.
import daemon

def do_something():
    pass

if __name__ == "__main__":
    with daemon.DaemonContext():
        do_something()
Or you could fork() your own daemon process.
import os

def doSomething():
    pass

def createDaemon():
    try:
        # Store the fork PID
        pid = os.fork()
        if pid > 0:
            # Parent: report the child's PID and exit
            print('PID: %d' % pid)
            os._exit(0)
        # Child: detach from the current directory and session
        os.chdir("/")
        os.setsid()
        os.umask(0)
    except OSError as error:
        print('Unable to fork. Error: %d (%s)' % (error.errno, error.strerror))
        os._exit(1)
    doSomething()
Then, for example, you could use os.pipe to communicate with that daemon process. Or, in this simple case on a *nix system, even just signals.
Another option is to use the multiprocessing module to create the daemon process and also to communicate with it.
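As a loose sketch of the multiprocessing variant (the command strings 'start', 'remaining' and 'stop' are made up for illustration, and the worker here is a plain child Process rather than a fully detached daemon), the worker checks a Pipe for commands once a second while it tracks the deadline:
import time
from multiprocessing import Pipe, Process

def timer_worker(conn):
    deadline = None
    while True:
        if conn.poll(1):                      # wait up to 1 second for a command
            cmd = conn.recv()
            if cmd == "stop":
                break
            elif cmd.startswith("start "):
                deadline = time.time() + int(cmd.split()[1])
            elif cmd == "remaining":
                left = max(0, deadline - time.time()) if deadline else 0
                conn.send("%d seconds remaining" % left)
        # this is also where you could play a sound when close to the end

if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    worker = Process(target=timer_worker, args=(child_conn,))
    worker.start()
    parent_conn.send("start 90")
    parent_conn.send("remaining")
    print(parent_conn.recv())
    parent_conn.send("stop")
    worker.join()
Because the Process is still a child of the script that started it, this only fits the original question if the launching script itself stays alive.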

Related

How to integrate killable processes/thread in Python GUI?

Hi all, I'm really new to Python and I'm facing a task which I can't completely grasp.
I've created an interface with Tkinter which should accomplish a couple of apparently easy feats.
By clicking a "Start" button two threads/processes will be started (each calling multiple subfunctions) which mainly read data from a serial port (one port per process, of course) and write them to file.
The I/O actions are looped within a while loop with a very high counter to allow them to go onward almost indefinitely.
The "Stop" button should stop the acquisition and essentially it should:
Kill the read/write Thread
Close the file
Close the serial port
Unfortunately I still do not understand how to accomplish point 1, i.e.: how to create killable threads without killing the whole GUI. Is there any way of doing this?
Thank you all!
First, you have to choose whether you are going to use threads or processes.
I will not go too much into the differences, google it ;) Anyway, here are some things to consider: it is much easier to establish communication between threads than between processes; in Python, all threads will run on the same CPU core (see the Python GIL), but subprocesses may use multiple cores.
Processes
If you are using subprocesses, there are two ways: subprocess.Popen and multiprocessing.Process. With Popen you can run anything, whereas Process gives a simpler thread-like interface to running python code which is part of your project in a subprocess.
Both can be killed using the terminate method.
See documentation for multiprocessing and subprocess
Of course, if you want a more graceful exit, you will want to send an "exit" message to the subprocess, rather than just terminate it, so that it gets a chance to do the clean-up. You could do that e.g. by writing to its stdin. The process should read from stdin and when it gets message "exit", it should do whatever you need before exiting.
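A minimal sketch of that stdin handshake, with both sides shown together (the file name worker.py is made up):
# worker.py -- the child process: do the work, exit cleanly when told to
import sys

for line in sys.stdin:
    if line.strip() == "exit":
        break                      # close files, serial ports, etc. here
    # real code would interleave this with the serial-port reads
The parent (e.g. the Stop button handler) then writes the message and waits:
import subprocess
import sys

proc = subprocess.Popen([sys.executable, "worker.py"],
                        stdin=subprocess.PIPE, text=True)
# ... later, when the user clicks Stop ...
proc.stdin.write("exit\n")
proc.stdin.flush()
proc.wait(timeout=5)   # raises subprocess.TimeoutExpired if it does not exit in time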
Threads
For threads, you have to implement your own mechanism for stopping, rather than using something as violent as process.terminate().
Usually, a thread runs in a loop and in that loop you check for a flag which says stop. Then you break from the loop.
I usually have something like this:
import threading
from threading import Thread

SLEEP_TIME = 0.5  # how often to check the stop flag (placeholder value)

class MyThread(Thread):
    def __init__(self):
        super(MyThread, self).__init__()
        self._stop_event = threading.Event()

    def run(self):
        while not self._stop_event.is_set():
            # do something
            self._stop_event.wait(SLEEP_TIME)
        # clean-up before exit

    def stop(self, timeout):
        self._stop_event.set()
        self.join(timeout)
Of course, you need some exception handling etc, but this is the basic idea.
EDIT: Answers to questions in comment
thread.start_new_thread(your_function) starts a new thread, that is correct. On the other hand, the threading module gives you a higher-level API which is much nicer.
With threading module, you can do the same with:
t = threading.Thread(target=your_function)
t.start()
or you can make your own class which inherits from Thread and put your functionality in the run method, as in the example above. Then, when user clicks the start button, you do:
t = MyThread()
t.start()
You should store the t variable somewhere. Exactly where depends on how you designed the rest of your application. I would probably have some object which holds all active threads in a list.
When user clicks stop, you should:
t.stop(some_reasonable_time_in_which_the_thread_should_stop)
After that, you can remove t from your list; it is not usable any more.
First you can use subprocess.Popen() to spawn child processes, then later you can use Popen.terminate() to terminate them.
Note that you could also do everything in a single Python thread, without subprocesses, if you want to. It's perfectly possible to "multiplex" reading from multiple ports in a single event loop.
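As a rough sketch of that single-threaded multiplexing with the standard selectors module, two os.pipe() file descriptors stand in for the serial ports here; on Linux a pyserial Serial object can usually be registered the same way because it exposes fileno():
import os
import selectors

# Two pipes stand in for the two serial ports.
r1, w1 = os.pipe()
r2, w2 = os.pipe()
os.write(w1, b"data from port 1\n")
os.write(w2, b"data from port 2\n")

sel = selectors.DefaultSelector()
sel.register(r1, selectors.EVENT_READ, data="port-1.log")
sel.register(r2, selectors.EVENT_READ, data="port-2.log")

for _ in range(2):                        # real code loops until a stop flag is set
    for key, _events in sel.select(timeout=1.0):
        chunk = os.read(key.fd, 4096)     # read from whichever port is ready
        print(key.data, chunk)            # real code would append to the log file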

Python: GUI for continuously running script

I am writing a script which will run continuously on a computer. As it has to run on a computer without a Python installation, I am planning to convert it to an executable. I also want to have a GUI to start and stop this application, but I don't want this GUI to be open all the time. I mean, if the GUI is closed, I don't want the executable to stop running. It should stop only if the user presses the stop button in the GUI. This GUI is just an interface for users to start and stop the executable.
How can I achieve this behavior?
The obvious solution is to have two separate programs: a backgrounder/daemon/agent/service that just chugs along in the background detached from user input and output, and a GUI program to control it. A nice advantage of this design is that you can also have a command-line program to control it, if you ever want to ssh in remotely, or control it from a script.
The traditional Unix way of handling this is to use a daemon designed like a system service (even if it's run like a normal user): it writes its pid to a file when it starts up, and the control program reads that file and sends a signal to the pid that it finds to kill it.
So, the control program has functions something like this:
import os
import signal
import subprocess

# PID_PATH and DAEMON_PATH are defined elsewhere
def is_running():
    try:
        with open(PID_PATH) as f:
            pid = int(f.read())
        os.kill(pid, 0)          # signal 0 only checks that the pid exists
    except Exception:
        return False
    else:
        return True

def stop():
    with open(PID_PATH) as f:
        pid = int(f.read())
    os.kill(pid, signal.SIGUSR1)

def start():
    subprocess.check_call(DAEMON_PATH)
Of course in real life, you'll want some better error handling. Also, which signal you use depends on whether you want the daemon to die hard and instantly, or to gracefully shut down. And so on.
An alternative is to have the background process listen on a socket—whether TCP with a known port, or a Unix socket with a known filename—and communicate with it that way. This allows you to do fancier things than just start and stop.
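For the socket flavour, here is a bare-bones sketch over a Unix socket (the socket path and the 'status'/'stop' commands are invented for illustration):
import os
import socket

SOCK_PATH = "/tmp/mydaemon.sock"          # arbitrary path for illustration

def serve():
    # Daemon side: accept one short command per connection.
    if os.path.exists(SOCK_PATH):
        os.unlink(SOCK_PATH)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCK_PATH)
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        cmd = conn.recv(1024).decode().strip()
        if cmd == "status":
            conn.sendall(b"running\n")
        elif cmd == "stop":
            conn.sendall(b"stopping\n")
            conn.close()
            break
        conn.close()
    srv.close()

def send(cmd):
    # Control-program side: connect, send one command, print the reply.
    cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    cli.connect(SOCK_PATH)
    cli.sendall(cmd.encode())
    print(cli.recv(1024).decode(), end="")
    cli.close()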
On Windows, the details aren't quite the same, but you can do something similar.
Finally, Windows, OS X, and various linux distros also all have platform-specific ways of wrapping this kind of thing up at a higher level, so you might want to build a Windows Service, LaunchAgent, etc.
Thanks @abarnert. I used your method and converted your code for Windows. Please see below my solution, which works: it starts and stops helloworld.exe. I have removed error handling to keep it simple.
import subprocess
import time

def startprocess():
    # start helloworld.exe
    process = subprocess.Popen(['helloworld.exe'])
    # Write the process handle to a file for later use
    f = open('progid.txt', 'w')
    f.writelines(str(int(process._handle)))
    f.close()

def endprocess():
    f = open('progid.txt', 'r')
    progid = int(f.read())
    f.close()
    # Kill the process using pywin32
    import win32api
    win32api.TerminateProcess(progid, -1)

startprocess()
time.sleep(60)  # wait for 60 seconds before kill
endprocess()

Do not exit python program, but keep running

I've seen a few of these questions, but haven't found a real answer yet.
I have an application that launches a gstreamer pipe, and then listens to the data it sends back.
In the example application I based mine on, it ends with this piece of code:
gtk.main()
There is no gtk window, but this piece of code does cause it to keep running. Without it, the program exits.
Now, I have read about constructs using while True:, but they include the sleep command, and if I'm not mistaken that will cause my application to freeze during the time of the sleep so ...
Is there a better way, without using gtk.main()?
gtk.main() runs an event loop. It doesn't exit, and it doesn't just freeze up doing nothing, because inside it has code kind of like this:
while True:
    timeout = timers.earliest() - datetime.now()
    try:
        message = wait_for_next_gui_message(timeout)
    except TimeoutError:
        handle_any_expired_timers()
    else:
        handle_message(message)
That wait_for_next_gui_message function is a wrapper around different platform-specific functions that wait for X11, WindowServer, the unnamed thing in Windows, etc. to deliver messages like "user clicked your button" or "user hit ctrl-Q".
If you call serve_forever() on an HTTPServer, run a Twisted reactor, etc., it's doing exactly the same thing, except with a wait_for_next_network_message(sources, timeout) function, which wraps something like select.select, where sources is a list of all of your sockets.
If you're listening on a gstreamer pipe, your sources can just be that pipe, and the wait_for_next function can just be select.select.
Or, of course, you could use a networking framework like twisted.
However, you don't need to design your app this way. If you don't need to wait for multiple sources, you can just block:
while True:
data = pipe.read()
handle_data(data)
Just make sure the pipe is not set to nonblocking. If you're not sure, you can use setblocking on a socket, fcntl on a Unix pipe, or something I can't remember off the top of my head on a Windows pipe to make sure.
In fact, even if you need to wait for multiple sources, you can do this, by putting a blocking loop for each source into a separate thread (or process). This won't work for thousands of sockets (although you can use greenlets instead of threads for that case), but it's fine for 3, or 30.
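For example, a sketch of that thread-per-source pattern, with a queue (my own addition here) carrying whatever each blocking loop reads back to a single consumer:
import queue
import threading

results = queue.Queue()

def reader(name, source):
    # One blocking read loop per source; each copy runs in its own thread.
    while True:
        data = source.read()
        if not data:                       # source closed / end of stream
            break
        results.put((name, data))

def start_readers(sources):
    # sources: dict mapping a name to any object with a blocking read() method
    for name, src in sources.items():
        threading.Thread(target=reader, args=(name, src), daemon=True).start()

# The main loop then consumes everything from one place:
#     name, data = results.get()
#     handle_data(name, data)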
I've become a fan of the Cmd class. It gives you a shell prompt for your programs and will stay in the loop while waiting for input. Here's the link to the docs. It might do what you want.
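To make that concrete, a tiny Cmd-based prompt (the command names here are just examples) keeps the program alive in its own input loop:
import cmd

class PipeShell(cmd.Cmd):
    prompt = "(pipe) "

    def do_status(self, arg):
        """Report something about the running pipeline."""
        print("pipeline is running")

    def do_quit(self, arg):
        """Exit the prompt."""
        return True                 # returning True ends cmdloop()

if __name__ == "__main__":
    PipeShell().cmdloop()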

How do I manage a Python based daemon on Linux?

I have a working Python based program that I want to run as a daemon. Currently I'm doing it in the very hackish manner of starting it with a screen -d -m name session and killing it with pkill -9 -f name.
Eventually I'm going to have to move this to the better system we use here (thus I'm not willing to modify the program), but in the interim I'm looking for a cleaner way to do this.
My current thinking is to kick it off as a background task from an init.d script, but how do I write the part to bring it back down?
On linux there is a start-stop-daemon utility as part of the init.d tools.
It is very flexible and allows different ways for capturing the pid of your server.
There is also a file /etc/init.d/skeleton which can serve as a basis for your own init.d script.
If your target platform is Debian based, it makes sense to create a Debian package to deploy it, as that also helps get the daemon properly integrated into the rest of the system. And it is not too complicated (if you have done it ten times before ;-)
See PEP 3143 -- Standard daemon process library
If you want to do it with code in Python, this is a pretty standard C method ported to Python that I use. It works flawlessly, and you can even choose the output file.
import os
import signal

def daemonize(workingdir='.', umask=0, outfile='/dev/null'):
    # Put in background
    pid = os.fork()
    if pid == 0:
        # First child
        os.setsid()
        pid = os.fork()  # fork again
        if pid == 0:
            os.chdir(workingdir)
            os.umask(umask)
        else:
            os._exit(0)
    else:
        os._exit(0)

    # Close all open resources
    try:
        os.close(0)
        os.close(1)
        os.close(2)
    except OSError:
        raise Exception("Unable to close standard output. Try running with 'nodaemon'")

    # Redirect output
    os.open(outfile, os.O_RDWR | os.O_CREAT)
    os.dup2(0, 1)
    os.dup2(0, 2)
Then, you can use signals to catch when a kill-signal was sent to the program and exit nicely. Example from Python Docs
import signal, os

def handler(signum, frame):
    print('Signal handler called with signal', signum)
    raise IOError("Couldn't open device!")

# Set the signal handler and a 5-second alarm
signal.signal(signal.SIGALRM, handler)
signal.alarm(5)

# This open() may hang indefinitely
fd = os.open('/dev/ttyS0', os.O_RDWR)

signal.alarm(0)  # Disable the alarm
There are modules that can be used to daemonize a Python script.
python-daemon implements the well-behaved daemon specification (PEP 3143).
Also, this module recently came up on GitHub, which seems more pythonic and easy to use.
Starting it with an init.d style script is a good way. You take it down with POSIX Signals ... See StackOverflow, Signal handling in Python.
Try this question, or more exactly its accepted solution.

Threads in twisted... how to use them properly?

I need to write a simple app that runs two threads:
- thread 1: runs at timed periods, let's say every 1 minute
- thread 2: just a 'normal' while True loop that does 'stuff'
If not for the requirement to run at a timed interval I would not have looked at Twisted at all, but a simple sleep(60) is not good enough, and a construction like:
l = task.LoopingCall(timed_thread)
l.start(60.0)
reactor.run()
Looked really simple to achieve what I wanted there.
Now, how do I 'properly' add another thread?
I see two options here:
Use the threading library and run two 'Python threads', one executing my while loop and another running reactor.run(). But Google seems to object to this approach and suggests using Twisted threading.
Use Twisted threading. That's what I've tried, but somehow this looks a bit clumsy to me.
Here's what I came up with:
import time
from twisted.internet import reactor, task

def timed_thread():
    print('i will be called every 1 minute')
    return

def normal_thread():
    print('this is a normal thread')
    time.sleep(30)
    return

l = task.LoopingCall(timed_thread)
l.start(60.0)
reactor.callInThread(normal_thread)
reactor.run()
That seems to work, but I can't stop the app. If I press ^C it doesn't do anything (without callInThread it just stops as you'd expect it to). ^Z bombs out to the shell, and if I then do kill %1 it seems to kill the process (the shell reports that), but the 'normal' thread keeps on running. kill PID won't get rid of it, and the only cure is kill -9. Really strange.
So, what am I doing wrong? Is this a correct approach to implementing two threads in Twisted? Should I not bother with Twisted? What other 'standard' alternatives are there to implement timed calls? (By 'standard' I mean I can easy_install or yum install them; I don't want to start downloading and using random scripts from random web pages.)
You didn't explain why you actually need threads here. If you had, I might have been able to explain why you don't need them. ;)
That aside, I can confirm that your basic understanding of things is correct. One possible misunderstanding I can clear up, though, is the notion that "python threads" and "Twisted threads" are at all different from each other. They're not. Python provides a threading library. All of Twisted's thread APIs are implemented in terms of Python's threading library. Only the API is different.
As far as shutdown goes, you have two options.
Start your run-forever thread using Python's threading APIs directly and make the thread a daemon. Your process can exit even while daemon threads are still running. A possible problem with this solution is that some versions of Python have issues with daemon threads that will lead to a crash at shutdown time.
Create your thread using either Twisted's APIs or the stdlib threading APIs but also add a Twisted shutdown hook using reactor.addSystemEventTrigger('before', 'shutdown', f). In that hook, communicate with the work thread and tell it to shut down. For example, you could share a threading.Event between the Twisted thread and your work thread and have the hook set it. The work thread can periodically check to see if it has been set and exit when it notices that it has been. Aside from not crashing, this gives another advantage over daemon threads - it will let you run some cleanup or finalization code in your work thread before the process exits.
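A hedged sketch of that second option (the function names work_loop and request_stop are mine; the Twisted calls are the ones mentioned above):
import threading
import time
from twisted.internet import reactor

stop_event = threading.Event()

def work_loop():
    # The run-forever worker: check the shared event instead of looping blindly.
    while not stop_event.is_set():
        # ... do stuff ...
        time.sleep(1)
    # clean-up / finalization code runs here before the process exits

def request_stop():
    # Runs during reactor shutdown; tell the worker to finish up.
    stop_event.set()

reactor.callInThread(work_loop)
reactor.addSystemEventTrigger('before', 'shutdown', request_stop)
reactor.run()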
Assuming that your main is relatively non-blocking:
import random
from twisted.internet import task

class MyProcess:
    def __init__(self):
        self.stats = []
        self.lp = None

    def myloopingCall(self):
        print("I have %s stats" % len(self.stats))

    def myMainFunction(self, reactor):
        self.stats.append(random.random())
        reactor.callLater(0, self.myMainFunction, reactor)

    def start(self, reactor):
        self.lp = task.LoopingCall(self.myloopingCall)
        self.lp.start(2)
        reactor.callLater(0, self.myMainFunction, reactor)

    def stop(self):
        if self.lp is not None:
            self.lp.stop()
        print("I'm done")

if __name__ == '__main__':
    myproc = MyProcess()
    from twisted.internet import reactor
    reactor.callWhenRunning(myproc.start, reactor)
    reactor.addSystemEventTrigger('during', 'shutdown', myproc.stop)
    reactor.callLater(10, reactor.stop)
    reactor.run()
$ python bleh.py
I have 0 stats
I have 33375 stats
I have 66786 stats
I have 100254 stats
I have 133625 stats
I'm done
