How to stop a multi-threaded Python script from outside

I have a Python program that uses multiple daemon threads. I want to stop the program from outside, preferably from another Python script.
I've tried kill <pid> from the shell, just as a test, but it doesn't work with multi-threaded scripts.
One way would be to make the program check some file every n seconds as a flag for termination. I'm sure there's some better way I can do this.
Note that I'd like to stop the program cleanly, so some message from outside in the form of an exception would be ideal, I think.
EDIT:
Here's an example of how I did it at the moment:
import time

try:
    open('myprog.lck', 'w').close()
    while True:
        time.sleep(1)
        try:
            open('myprog.lck').close()
        except IOError:
            raise KeyboardInterrupt
except KeyboardInterrupt:
    print('MyProgram terminated.')
Deleting the file myprog.lck causes the script to stop. Is the example above a bad way to do this?

Use the poison pill technique: upon receipt of a pill (a special message), your program must handle it and die. The way you're doing it is OK, but for something more elegant you should implement a kind of communication between your "killing script" and your main program. For a start, have a look at the Interprocess Communication and Networking section of the standard library.
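A minimal sketch of the idea over a local TCP socket, assuming the listener runs in its own daemon thread (the port number and the b"STOP" message are arbitrary choices, not from this answer):

import socket
import threading
import time

stop_event = threading.Event()

def pill_listener(host="127.0.0.1", port=5555):
    # Wait for a single connection and treat b"STOP" as the poison pill.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            if conn.recv(1024).strip() == b"STOP":
                stop_event.set()

threading.Thread(target=pill_listener, daemon=True).start()

# Main loop: work until the pill arrives, then shut down cleanly.
while not stop_event.is_set():
    time.sleep(1)  # ... do the real work here ...
print('MyProgram terminated.')

The "killing script" would then just be:

import socket

with socket.create_connection(("127.0.0.1", 5555)) as s:
    s.sendall(b"STOP")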

I would install a signal handler as described in http://www.doughellmann.com/PyMOTW/signal/index.html#signals-and-threads
You can enter kill -l in your shell to get a list of available signals.
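For example, a minimal sketch assuming a plain kill <pid> (SIGTERM) should trigger the clean shutdown; the handler only sets a flag that the main thread checks:

import signal
import threading

shutdown = threading.Event()

def handle_term(signum, frame):
    # Runs in the main thread when SIGTERM arrives.
    shutdown.set()

# Signal handlers must be installed in the main thread.
signal.signal(signal.SIGTERM, handle_term)

# Main loop: the daemon threads die automatically once this returns.
while not shutdown.is_set():
    shutdown.wait(1)
print('MyProgram terminated.')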

You should be able to kill it from the shell with kill -9 <pid>, though note that SIGKILL cannot be caught, so the program gets no chance to clean up.

Related

How to make the Python subprocess wait for some input when running through SLURM script?

I am running some Python code using a SLURM script on a remote server accessed through SSH. At some point, issues related to licenses on the SLURM platform may happen, generating errors in Python and ending the subprocess. I want to use try/except to let the Python subprocess wait until the issue is fixed, after which it can keep running from where it stopped.
What are some smart implementations for that?
My most obvious solution is just keeping Python inside a loop when the error occurs and letting it read a file every X seconds; when I finally fix the error and want it to keep running from where it stopped, I would write something to the file and break the loop. I wonder if there is a smarter way to provide input to the Python subprocess while it is running through the SLURM script.
One idea might be to add a signal handler for the USR1 signal to your Python script.
In the signal handler function, you can set a global variable, send a message, or set a threading.Event that the main process is waiting on.
Then you can signal the process with:
kill -USR1 <PID>
or with the Python os.kill() equivalent.
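A minimal sketch of that idea, assuming the work raises some exception when the license problem hits (do_licensed_work and LicenseError are placeholders, not names from your setup):

import signal
import threading

resume = threading.Event()

def on_usr1(signum, frame):
    # kill -USR1 <pid> lands here and releases the wait below.
    resume.set()

signal.signal(signal.SIGUSR1, on_usr1)

while True:
    try:
        do_licensed_work()        # placeholder for the real work
    except LicenseError:          # placeholder for the license-related error
        resume.clear()
        resume.wait()             # block until SIGUSR1 arrives, then retry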
Though I do have to agree there is something to be said for the simplicity of your process doing:
touch /tmp/blocked.$$
and your program waiting in a loop with a 1s sleep for that file to be removed. This way you can tell which process id is blocked.
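A minimal sketch of that file-based variant, creating the flag file from Python with the process's own PID (mirroring the $$ above) and waiting for it to be removed:

import os
import time

def wait_until_unblocked():
    flag = "/tmp/blocked.%d" % os.getpid()
    open(flag, "w").close()          # announce which PID is blocked
    while os.path.exists(flag):      # resume once someone removes the file
        time.sleep(1)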

python: Create a detached process and communicate with it via command line

I want to create a command-line tool which is a countdown timer with some custom features I need.
My idea is to use a Python script to fire up a process which does the work in the background (e.g. play a sound when close to the end). Once the timer process is running, I would like to communicate with it via the command line (send inquiries like 'remaining' or commands like 'start XXmin' and 'stop'). There should be only a single instance of the timer process, of course.
Usage might look like
>>> timer start 25min
>>> timer remaining
17:34 min remaining
>>> timer stop
timer stopped.
>>> timer start 90sec
What would the timer process need to look like to do its work while waiting for messages to arrive? What, in turn, would the interface script need to do to fire up the process and to communicate with it later? Is using a separate process the best idea to achieve my goal?
I have no clue how to go about it. My idea sounds very simple, yet almost all of what I found is concerned with the concurrency of child processes of a parent script, which is not what I want.
Thank you.
What you're looking for here is a basic client-server architecture. You'll need two programs - one which runs in the background and listens for messages (the server), and a second that sends messages to the server, and does something with the responses (the client).
There are a lot of ways to do this, and the area is legitimately complex, so don't expect it to be super easy. For just starting out, I'd recommend you try a simple server built on the standard library's http.server module. For the client side, I'd recommend the requests library. HTTP is definitely not the best possible choice for a local client-server setup, but with the existing libraries it's going to be by far the easiest to get up and running, and once you're comfortable with that, you can look into other approaches if you want to.
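A minimal sketch of the server side using only the standard library (the port, the /remaining and /start/<seconds> paths, and the JSON shape are all illustrative choices):

import json
import time
from http.server import HTTPServer, BaseHTTPRequestHandler

deadline = None  # epoch time when the timer runs out; set via /start/<seconds>

class TimerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        global deadline
        if self.path == "/remaining":
            left = max(0, deadline - time.time()) if deadline else 0
            body = json.dumps({"remaining_seconds": round(left)}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        elif self.path.startswith("/start/"):
            deadline = time.time() + int(self.path.rsplit("/", 1)[1])
            self.send_response(200)
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8765), TimerHandler).serve_forever()

The command-line client would then be a thin wrapper around calls like requests.get("http://127.0.0.1:8765/remaining").json().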
The easy way would be to use & in the shell to execute your script in the background, and then communicate with the process using just the USR1 and USR2 signals.
In Python, I guess the easiest way would be to use the daemon module (python-daemon):
import daemon

def do_something():
    pass

if __name__ == "__main__":
    with daemon.DaemonContext():
        do_something()
Or you could fork() your own daemon process.
import os

def doSomething():
    pass

def createDaemon():
    try:
        # Store the fork PID
        pid = os.fork()
        if pid > 0:
            print('PID: %d' % pid)
            os._exit(0)
        os.chdir("/")
        os.setsid()
        os.umask(0)
    except OSError as error:
        print('Unable to fork. Error: %d (%s)' % (error.errno, error.strerror))
        os._exit(1)
    doSomething()
Then, for example, you could use os.pipe to communicate with that daemon process. Or in this simple case, on a *nix system, even just signals.
Another option is to use the multiprocessing module to create the daemon process and also to communicate with it.
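A minimal sketch of the communication side using multiprocessing.connection (the address, authkey, and command names are illustrative, not from the answer):

import sys
from multiprocessing.connection import Listener, Client

ADDRESS = ("127.0.0.1", 6000)   # illustrative address
AUTHKEY = b"timer-secret"       # illustrative shared secret

def timer_server():
    # The background process: answer one command per connection until "stop".
    with Listener(ADDRESS, authkey=AUTHKEY) as listener:
        while True:
            with listener.accept() as conn:
                cmd = conn.recv()
                if cmd == "stop":
                    conn.send("timer stopped.")
                    return
                conn.send("unknown command: %r" % (cmd,))

def send(cmd):
    # The command-line side: connect, send one command, return the reply.
    with Client(ADDRESS, authkey=AUTHKEY) as conn:
        conn.send(cmd)
        return conn.recv()

if __name__ == "__main__":
    if sys.argv[1:] == ["serve"]:
        timer_server()
    else:
        print(send(" ".join(sys.argv[1:]) or "stop"))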

How to force terminate a Python 3 program in Windows?

I am running Python 3 code in a Windows command prompt.
The program has an infinite loop that I use (while(1)).
Sounds like bad design, but it's meant to be like this.
Is there a way to force-close the program without having to close the command prompt?
In the terminal, Ctrl + C often works for this.
You can use sys.exit(exit_code) or raise SystemExit(string_to_print_before_exiting).
https://docs.python.org/3/library/sys.html#sys.exit
https://docs.python.org/3/library/exceptions.html#SystemExit
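For example, a minimal sketch of breaking out of such a loop (should_stop is a placeholder for whatever condition you check):

import sys

while True:
    if should_stop():              # placeholder condition
        sys.exit("stopping.")      # raises SystemExit; the string is printed to stderr
    # ... rest of the loop body ...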
Ctrl-C generates a KeyboardInterrupt exception, so unless you blindly catch all exceptions and ignore them, it should do just that. (If you do catch everything, you can add a special case for KeyboardInterrupt.) Interestingly, this does not work for me because I am using Cygwin.
You can force-terminate the program using Task Manager. However, if you have more than one Python process running, this can be tricky. To help with that, have your process print its PID in the first lines of the log file (you do have a log file, right?):
import os
print("started process", os.getpid())
To see process: tasklist /FI "PID eq 1234"
To kill process: taskkill /PID 1234 /F
Advanced process interruption:
Have your program wait for a command on a socket.

Python: GUI for continuously running script

I am writing a script which will run continuously on a computer. As it has to run on a computer without a Python installation, I am planning to convert it to an executable. I also want to have a GUI to start and stop this application, but I don't want this GUI to be open all the time. I mean, if the GUI is closed, I don't want the executable to stop running. It should stop only if the user presses the stop button on the GUI. This GUI is just an interface for users to start and stop the executable.
How can I achieve this behavior?
The obvious solution is to have two separate programs: a backgrounder/daemon/agent/service that just chugs along in the background detached from user input and output, and a GUI program to control it. A nice advantage of this design is that you can also have a command-line program to control it, if you ever want to ssh in remotely, or control it from a script.
The traditional Unix way of handling this is to use a daemon designed like a system service (even if it's run like a normal user): it writes its pid to a file when it starts up, and the control program reads that file and sends a signal to the pid that it finds to kill it.
So, the control program has functions something like this:
def is_running():
try:
with open(PID_PATH) as f:
pid = int(f.read())
os.kill(pid, 0)
except Exception:
return False
else:
return True
def stop():
with open(PID_PATH) as f:
pid = int(f.read())
os.kill(pid, signal.SIGUSR1)
def start():
subprocess.check_call(DAEMON_PATH)
Of course in real life, you'll want some better error handling. Also, which signal you use depends on whether you want the daemon to die hard and instantly, or to gracefully shut down. And so on.
An alternative is to have the background process listen on a socket (whether TCP with a known port, or a Unix socket with a known filename) and communicate with it that way. This allows you to do fancier things than just start and stop.
On Windows, the details aren't quite the same, but you can do something similar.
Finally, Windows, OS X, and various linux distros also all have platform-specific ways of wrapping this kind of thing up at a higher level, so you might want to build a Windows Service, LaunchAgent, etc.
Thanks @abarnert. I used your method and converted your code for Windows. Please see my solution below, which works: it starts and stops helloworld.exe. I have removed error handling to keep it simple.
import subprocess
import time

def startprocess():
    # start helloworld.exe
    process = subprocess.Popen(['helloworld.exe'])
    # write the process id to a file for later use
    with open('progid.txt', 'w') as f:
        f.write(str(process.pid))

def endprocess():
    with open('progid.txt', 'r') as f:
        pid = int(f.read())
    # kill the process using pywin32
    import win32api
    import win32con
    handle = win32api.OpenProcess(win32con.PROCESS_TERMINATE, False, pid)
    win32api.TerminateProcess(handle, -1)
    win32api.CloseHandle(handle)

startprocess()
time.sleep(60)  # wait for 60 seconds before killing
endprocess()

How do I manage a Python based daemon on Linux?

I have a working Python based program that I want to run as a daemon. Currently I'm doing it in a very hackish manner: starting it in a screen -d -m name session and killing it with pkill -9 -f name.
Eventually I'm going to have to move this to the better system we use here (thus I'm not willing to modify the program), but in the interim I'm looking for a cleaner way to do this.
My current thinking is to kick it off as a background task from an init.d script, but how do I write the part that brings it back down?
On linux there is a start-stop-daemon utility as part of the init.d tools.
It is very flexible and allows different ways for capturing the pid of your server.
There is also a file /etc/init.d/skeleton which can serve as a basis for your own init.d script.
If your target platform is Debian-based, it makes sense to create a Debian package to deploy it, as that also helps with getting the daemon properly integrated into the rest of the system. And it is not too complicated (if you have done it ten times before ;-).
See PEP 3143 -- Standard daemon process library
If you want to do it with code in Python, here is a pretty standard C method that was ported to Python and that I use. It works flawlessly, and you can even choose an output file.
import os

def daemonize(workingdir='.', umask=0, outfile='/dev/null'):
    # Put in background
    pid = os.fork()
    if pid == 0:
        # First child
        os.setsid()
        pid = os.fork()  # fork again
        if pid == 0:
            os.chdir(workingdir)
            os.umask(umask)
        else:
            os._exit(0)
    else:
        os._exit(0)

    # Close all open resources
    try:
        os.close(0)
        os.close(1)
        os.close(2)
    except OSError:
        raise Exception("Unable to close standard output. Try running with 'nodaemon'")

    # Redirect standard output and error to outfile
    os.open(outfile, os.O_RDWR | os.O_CREAT)
    os.dup2(0, 1)
    os.dup2(0, 2)
Then you can use signals to catch when a kill signal is sent to the program and exit nicely. Example from the Python docs:
import signal, os

def handler(signum, frame):
    print('Signal handler called with signal', signum)
    raise IOError("Couldn't open device!")

# Set the signal handler and a 5-second alarm
signal.signal(signal.SIGALRM, handler)
signal.alarm(5)

# This open() may hang indefinitely
fd = os.open('/dev/ttyS0', os.O_RDWR)

signal.alarm(0)  # Disable the alarm
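The alarm example above doesn't itself show a clean shutdown; a minimal sketch of catching SIGTERM (what a plain kill sends by default) and exiting nicely could look like this:

import signal
import sys

def on_sigterm(signum, frame):
    # Do any cleanup here (flush files, remove the pid file, ...), then exit.
    sys.exit(0)

signal.signal(signal.SIGTERM, on_sigterm)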
There are modules that can be used to daemonize a Python script.
python-daemon implements the well-behaved daemon specification (PEP 3143).
Also, this module recently came up on GitHub, and it seems more Pythonic and easier to use.
Starting it with an init.d-style script is a good way. You take it down with POSIX signals; see Signal handling in Python on Stack Overflow.
Try this question, or more exactly its accepted solution.
