I am running Python 3 code in a Windows command prompt.
The program has an infinite loop that I use (while(1)).
Sounds like bad design, but it's meant to be like this.
Is there a way to force close the program without having to close the command prompt?
In the terminal, Ctrl + C often works for this.
You can use sys.exit(exit_code) or raise SystemExit(string_to_print_before_exiting).
https://docs.python.org/3/library/sys.html#sys.exit
https://docs.python.org/3/library/exceptions.html#SystemExit
Ctrl-C generates a KeyboardInterrupt exception, so unless you blindly catch all exceptions and ignore them, it should do just that. (If you do catch everything, you can add a separate handler for KeyboardInterrupt.) Interestingly, this does not work for me because I am using Cygwin.
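For the while(1) loop in the question, a minimal sketch of catching the interrupt (the do_work body is just a placeholder):

import sys
import time

def do_work():
    time.sleep(1)      # stand-in for whatever the loop really does

try:
    while True:        # the intentional infinite loop
        do_work()
except KeyboardInterrupt:
    # Ctrl+C in the command prompt lands here
    sys.exit(0)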
You can force-terminate the program using Task Manager. However, if you have more than one Python process running, this can be tricky. To make that easier, have your process print its PID in the first lines of the log file (you do have a log file, right?):
print("started process", os.getpid())
To see the process: tasklist /FI "PID eq 1234"
To kill the process: taskkill /PID 1234 /F
Advanced process interruption:
Have your program wait for a command on a socket. For example:
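Here is a minimal sketch of that idea, assuming a local TCP port (12345 here) is acceptable; the body of the main loop is a placeholder:

import socket
import threading
import time

stop_event = threading.Event()

def command_listener(port=12345):
    # Any client that connects to localhost:port and sends "stop" ends the main loop.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    while not stop_event.is_set():
        conn, _ = srv.accept()
        if conn.recv(64).decode().strip() == "stop":
            stop_event.set()
        conn.close()

threading.Thread(target=command_listener, daemon=True).start()

while not stop_event.is_set():     # the original infinite loop, now interruptible
    time.sleep(1)                  # placeholder for the real work

To stop it from another prompt: python -c "import socket; socket.create_connection(('127.0.0.1', 12345)).sendall(b'stop')"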
Related
When running Python from a Linux shell (same behavior observed in both bash and ksh), and generating a SIGINT with a Ctrl-C keypress, I have discovered behavior that I am unable to understand, and which has frustrated me considerably.
When I press Ctrl-C, the Python process appropriately terminates, but the shell continues to the next command on the line, as shown in the following console capture:
$ python -c "import time; time.sleep(100)"; echo END
^CTraceback (most recent call last):
File "<string>", line 1, in <module>
KeyboardInterrupt
END
In contrast, I had expected, and would like, that the shell processes the signal in such a way that execution does not continue to the next command on the line, as I see when I call the sleep function from a bash subshell instead of from Python.
For example, I would expect the above capture to appear more similar to the following:
$ bash -c "sleep 100"; echo END
^C
Python 2 and 3 are installed on my system, and while the above capture was generated running Python 2, both behave the same way.
My best explanation is that when I press Ctrl-C while the Python process is running, the signal somehow goes directly to the Python process, whereas normally it is handled by the calling shell and then propagated to the subprocess. However, I have no idea why or how Python is causing this difference.
The examples above are trivial tests, but the behavior is also observed in real-world uses. Installing custom signal handlers does not resolve the issue.
After considerable digging I found a few loosely related questions on Stack Overflow that eventually led me to an article describing the proper handling of SIGINT. (The most relevant section is How to be a proper program.)
From this information, I was able to solve the problem. Without it, I would never have come close.
The solution is best illustrated by beginning with a Bash script that cannot be terminated by a keyboard interrupt, but which does hide the ugly stack trace from Python's KeyboardInterrupt exception.
A basic example might appear as follows:
#!/usr/bin/env bash
echo "Press Ctrl-C to stop... No sorry it won't work."
while true
do
python -c '
import time, signal
signal.signal(signal.SIGINT, signal.SIG_IGN)
time.sleep(100)
'
done
For the outer script to process the interrupt, the following change is required:
echo "Press Ctrl-C to stop..."
while true
do
python -c '
import time, signal, os
signal.signal(signal.SIGINT, signal.SIG_DFL)
time.sleep(100)
'
done
However, the solution makes it impossible to use a custom handler (for example, to perform cleanup). If doing so is required, then a more sophisticated approach is needed.
The required change is illustrated as follows:
#!/usr/bin/env bash
echo "Press [CTRL+C] to stop ..."
while true
do
python -c '
import time, sys, signal, os
def handle_int(signum, frame):
    # Cleanup code here
    signal.signal(signum, signal.SIG_DFL)
    os.kill(os.getpid(), signum)

signal.signal(signal.SIGINT, handle_int)
time.sleep(100)
'
done
The reason appears to be that unless the inner process terminates through executing the default SIGINT handler provided by the system, the parent bash process does not realize that the child has terminated because of a keyboard interrupt, and does not itself terminate.
I have not fully understood all the ancillary issues quite yet, such as whether the parent process is not receiving the SIGINT from the system, or is receiving a signal, but ignoring it. I also have no idea what the default handler does or how the parent detects that it was called. If I am able to learn more, I will offer an update.
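For what it's worth, a parent written in Python can make the same check the shell presumably makes, by looking at how the child exited (a rough POSIX-only sketch; it assumes python is on PATH):

import signal
import subprocess
import time

child = subprocess.Popen(["python", "-c", "import time; time.sleep(100)"])
time.sleep(1)                                # let the interpreter finish starting up
child.send_signal(signal.SIGINT)             # simulate Ctrl-C reaching the child
child.wait()

if child.returncode == -signal.SIGINT:       # negative code: child was killed by that signal
    print("child died from SIGINT; a shell would stop its script here")
else:
    print("child exited normally with code", child.returncode)

With a stock Python child this prints the second branch, because Python traps SIGINT and exits normally (after printing the KeyboardInterrupt traceback); with a child that restores SIG_DFL, as in the scripts above, it prints the first, which is what allows the parent to stop.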
I must also raise the question of whether the current behavior of Python should be considered a design flaw. I have seen various manifestations of this issue over the years when calling Python from a shell script, but have not had the luxury of investigating until now. However, I have not found a single article on the topic through a web search, and if the issue does represent a flaw, I am surprised that more developers do not seem to be affected by it.
The behavior of any program that gets a CTRL+C is up to that program. Usually the behavior is to exit, but some programs might just abort some internal procedure instead of stopping the whole program. It's even possible (though it may be considered bad manners) for a program to ignore the keystroke completely.
The behavior of the program is defined by the signal handlers it has set up. The C library provides default signal handlers (which do things like exit on SIGTERM and SIGINT), but a program can provide its own handlers that will run instead. Not all signals allow arbitrary responses. For instance, SIGSEGV (a seg-fault) requires the program to exit, though it can configure its signal handlers to make a core dump or not. SIGKILL can't be handled at all (the OS kernel takes care of it).
To customize signal handlers in Python, you'll want to use the signal module from the standard library. You can call signal.signal to set your own signal handler function for any of the signals defined by your system's C library. Typing CTRL+C is going to send SIGINT on any UNIX-based system, so that's probably what you'll want to handle if you want your own behavior.
Try something like this:
import signal
import sys
import time
def interrupt_handler(sig, frame):
    sys.exit(1)

signal.signal(signal.SIGINT, interrupt_handler)
time.sleep(100)
If you run this script and interrupt it with CTRL+C, it should exit silently, just like your bash script does.
You could explicitly handle it on the bash side in a script file like this:
if python -c "import time; time.sleep(100)"; then
    echo END
fi
or, more aggressively,
python -c "import time; time.sleep(100)"
[[ $? -ne 0 ]] && exit
echo END
$? is the return status code of the previous command: a status of 0 means it exited fine, and anything else indicates an error. So we use the short-circuit nature of && to succinctly exit if the previous command fails.
(See https://unix.stackexchange.com/questions/186826/parent-script-continues-when-child-exits-with-non-zero-exit-code for more info on that)
Note: this will exit the bash script for any kind of Python failure, not just Ctrl+C, e.g. IndexError, AssertionError, etc.
Small nagging issue:
I have a python script that is working as expected, except when I select a menu option to Popen another python script:
myPath = r"c:\Python27\myScript.py"
cmd = r"c:\Python27\python.exe '{}'".format(myPath)
py_process = Popen(cmd, stdout=PIPE, stdin=PIPE, stderr=STDOUT)
When I run that snippet (in Windows), the child process is kicked off in the background as expected. But when I attempt to exit the primary script while leaving the child process running in the background:
raise SystemExit
...an empty "c:\python27\python.exe" window remains. I've tried other exit methods with a similar result. Note: when I exit the primary script without running that snippet, the Python window disappears as desired.
My goal is to leave no trace/window once the primary script is exited, in all cases, but the child process should remain running in the background.
Any suggestions to accomplish this goal?
Thanks!
If you want to first communicate to the started process and then leave it alone to run further, you have a few options:
Handle SIGPIPE in your long-running process, do not die on it. Live without stdin after the launcher process exits.
Pass whatever you wanted using arguments, environment, or a temporary file.
If you want bidirectional communication, consider using a named pipe (man mkfifo) or a socket, or writing a proper server.
Make the long-running process fork after the initial bidirectional communication phase is done. (Forking alone does not create "a completely independent process"; that is what the python-daemon package does.)
In other cases, you should redirect the child's stdin/stdout/stderr to os.devnull to avoid it waiting for input and/or writing spurious output to the terminal. For example:
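A sketch for the Windows / Python 2.7 setup in the question, assuming you don't need to talk to the child afterwards (DETACHED_PROCESS is a documented Win32 creation flag that Python 2.7's subprocess module doesn't expose by name):

import os
from subprocess import Popen

DETACHED_PROCESS = 0x00000008              # Win32 flag: the child gets no console of ours

myPath = r"c:\Python27\myScript.py"
devnull = open(os.devnull, "r+b")          # no pipes, so nothing ties the child to this console

py_process = Popen([r"c:\Python27\python.exe", myPath],
                   stdin=devnull, stdout=devnull, stderr=devnull,
                   creationflags=DETACHED_PROCESS)

With no PIPEs back to the parent, the child should keep running on its own, and no empty python.exe window should be left behind when the primary script raises SystemExit.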
I know there are a bunch of similar questions on SO like this one or this one and maybe a couple more, but none of them seem to apply in my particular situation. My lack of understanding of how subprocess.Popen() works doesn't help either.
What I want to achieve is: launch a subprocess (a command-line radio player) that also outputs data to the terminal and can also receive input -- wait for a while -- terminate the subprocess -- exit the shell. I am running Python 2.7 on OS X 10.9.
Case 1.
This launches the radio player (but audio only!), terminates the process, exits.
import subprocess
import time
p = subprocess.Popen(['/bin/bash', '-c', 'mplayer http://173.239.76.147:8090'],
stdin=subprocess.PIPE, stdout=subprocess.PIPE, shell=False,
stderr=subprocess.STDOUT)
time.sleep(5)
p.kill()
Case 2.
This launches the radio player, outputs information like the radio name, song, bitrate, etc. and also accepts input. It terminates the subprocess, but it never exits the shell, and the terminal becomes unusable even after pressing Ctrl-C.
p = subprocess.Popen(['/bin/bash', '-c', 'mplayer http://173.239.76.147:8090'],
shell=False)
time.sleep(5)
p.kill()
Any ideas on how to do it? I was even thinking of opening a slave shell for the subprocess if there is no other choice (of course, that is also something I don't have a clue about). Thanks!
It seems like mplayer uses the curses library, and when kill()ing or terminate()ing it, for some reason it doesn't clean up the library state correctly.
To restore the terminal state you can use the reset command.
Demo:
import subprocess, time
p = subprocess.Popen(['mplayer', 'http://173.239.76.147:8090'])
time.sleep(5)
p.terminate()
p.wait() # important!
subprocess.Popen(['reset']).wait()
print('Hello, World!')
In principle it should be possible to use stty sane too, but it doesn't work well for me.
As Sebastian points out, there was a missing wait() call in the above code (now added). With this wait() call and using terminate(), the terminal doesn't get messed up (and so there shouldn't be any need for reset).
Without the wait() I sometimes do have problems of mixed output between the python process and mplayer.
Also, a solution specific to mplayer, as pointed out by Sebastian, is to send a q to the stdin of mplayer to quit it.
I leave the code that uses reset because it works with any program that uses the curses library, whether it correctly tears down the library or not, and thus it might be useful in other situations where a clean exit isn't possible.
What I want to achieve is: launch a subprocess (a command-line radio player) that also outputs data to the terminal and can also receive input -- wait for a while -- terminate the subprocess -- exit the shell. I am running Python 2.7 on OS X 10.9.
On my system, mplayer accepts keyboard commands e.g., q to stop playing and quit:
#!/usr/bin/env python
import shlex
import time
from subprocess import Popen, PIPE
cmd = shlex.split("mplayer http://www.swissradio.ch/streams/6034.m3u")
p = Popen(cmd, stdin=PIPE)
time.sleep(5)
p.communicate(b'q')
It starts mplayer tuned to public domain classical; waits 5 seconds; asks mplayer to quit and waits for it to exit. The output is going to terminal (the same place where the python script's output goes).
I've also tried p.kill(), p.terminate(), and p.send_signal(signal.SIGINT) (Ctrl + C). p.kill() creates the impression that the process hangs. Possible explanation: p.kill() leaves some pipes open; e.g., if stdout=PIPE, then your Python script might hang at p.stdout.read(). That is, it kills the parent mplayer process, but there might be a child process that holds the pipes open. Nothing hangs with p.terminate() or p.send_signal(signal.SIGINT) -- mplayer exits in an orderly manner. None of the variants I've tried require reset.
How should I go about having both input from Python and the keyboard? Do I need two different subprocesses, and how do I redirect the keyboard input to PIPE?
It would be much simpler just to drop stdin=PIPE and call p.terminate(); p.wait() instead of p.communicate(b'q').
If you want to keep stdin=PIPE, then the general principle is: read from sys.stdin, write to p.stdin until the timeout happens. Given that mplayer expects one-letter commands, you need to be able to read one character at a time from sys.stdin. The write part is easy: p.stdin.write(c) (set bufsize=0 to avoid buffering on the Python side; mplayer doesn't buffer its stdin, so you don't need to worry about that).
You don't need two different subprocesses. To implement timeout, you could use threading.Timer(5, p.stdin.write, [b'q']).start() or select.select on sys.stdin with timeout.
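A sketch of the select.select variant, assuming a POSIX terminal; the tty/termios handling is added here so that single key presses are readable, and the 5-second deadline mirrors the question (Python 2.7, as in the question):

import select
import sys
import termios
import time
import tty
from subprocess import Popen, PIPE

p = Popen(["mplayer", "http://www.swissradio.ch/streams/6034.m3u"],
          stdin=PIPE, bufsize=0)                 # bufsize=0: keys reach mplayer immediately

fd = sys.stdin.fileno()
old_settings = termios.tcgetattr(fd)
tty.setcbreak(fd)                                # read the keyboard one character at a time
try:
    deadline = time.time() + 5
    while p.poll() is None and time.time() < deadline:
        # wait for a key press, but never past the deadline
        ready, _, _ = select.select([sys.stdin], [], [], max(0, deadline - time.time()))
        if ready:
            p.stdin.write(sys.stdin.read(1))     # forward the key to mplayer
    if p.poll() is None:
        p.stdin.write(b'q')                      # deadline reached: ask mplayer to quit
        p.wait()
finally:
    termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)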
I guess something using the good old raw_input has nothing to do with it, right?
raw_input() is not suitable for mplayer because it reads the full lines but mplayer expects one character at a time.
I have two scripts: "autorun.py" and "main.py". I added "autorun.py" as a service to the autorun in my Linux system. Works perfectly!
Now my question is: when I launch "main.py" from my autorun script, and "main.py" runs forever, "autorun.py" never terminates either! So when I do
sudo service autorun-test start
the command also never finishes!
How can I run "main.py" and then exit, and, to finish it up, how can I then stop "main.py" when "autorun.py" is launched with the parameter "stop"? (This is how all other services work, I think.)
EDIT:
Solution:
import os
import sys
import daemon  # python-daemon package

if sys.argv[1] == "start":
    print "Starting..."
    with daemon.DaemonContext(working_directory="/home/pi/python"):
        execfile("main.py")
else:
    pid = int(open("/home/pi/python/main.pid").read())
    try:
        os.kill(pid, 9)
        print "Stopped!"
    except:
        print "No process with PID " + str(pid)
First, if you're trying to create a system daemon, you almost certainly want to follow PEP 3143, and you almost certainly want to use the daemon module to do that for you.
When I want to launch "main.py" from my autorun script, and "main.py" will run forever, "autorun.py" never terminates as well!
You didn't say how you're running it. If you're doing anything that launches main.py as a child and waits (or, worse, tries to import/execfile/etc. in the same process), you can't do that. Either autorun.py has to launch and detach main.py (or do so indirectly via some external tool), or main.py has to daemonize when launched.
how can I then stop "main.py" when "autorun.py" is launched with the parameter "stop" ?
You need some form of inter-process communication (IPC), and some way for autorun to find the right IPC channel to use.
If you're building a network server, the right answer might be to connect to it as a client. But otherwise, the simplest thing to do is kill the process with a signal.
If you're using the daemon module, it can easily map signals to callbacks. Or, if you don't need any cleanup, just use SIGTERM, which by default will abruptly terminate. If neither of those applies, you will have to set up a custom signal handler (and within that handler do something useful—e.g., set a flag that your main code checks periodically).
How do you know what process to send the signal to? The standard way to do this is to have main.py record its PID in a pidfile at startup. You read that pidfile, and signal whatever process is specified there. (If you get an error because there is no process with that PID, that just means the daemon already quit for some reason—possibly because of an unhandled exception, or even a segfault. You may want to log that, but treat the "stop" as successful otherwise.) Again, if you're using daemon, it does the pidfile stuff for you; if not, you have to do it yourself.
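A minimal sketch of that pidfile handshake, without the locking and stale-file handling that daemon's pidfile helpers would give you (the /home/pi/python path mirrors the question):

# in main.py, once at startup: record our PID
import os
with open("/home/pi/python/main.pid", "w") as f:
    f.write(str(os.getpid()))

# in autorun.py, for the "stop" branch: read the pidfile and send SIGTERM
import os
import signal
with open("/home/pi/python/main.pid") as f:
    pid = int(f.read().strip())
try:
    os.kill(pid, signal.SIGTERM)
except OSError:
    # no such process: the daemon already exited, so treat "stop" as successful
    pass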
You may want to take a look at the service scripts for daemons that came with your computer. They're probably all written in bash rather than Python, but it's not that hard to figure out what they're doing. Or… just use one of them as a skeleton, in which case you don't really need any bash knowledge; it's just search-and-replace on the name.
If your distro has LSB-style init functions, you can use something like this example. That one does a whole lot more than you need to, but it's a good example of all of the details. Or do it all from scratch with something like this example. This one is doing the pidfile management and the backgrounding from the service script (turning a non-daemon program into a daemon), which you don't need if you're using daemon properly, and it's using SIGHUP instead of SIGTERM. You can google yourself for other examples of init.d service scripts.
But again, if you're just trying to do this for your own system, the best thing to do is look inside the /etc/init.d on your distro. There will be dozens of examples there, and 90% of them will be exactly the same except for the name of the daemon.
I have a Python program that uses multiple daemon threads. I want to stop the program from outside, preferably from another Python script.
I've tried kill <pid> from the shell, just as a test, but it doesn't work with multi-threaded scripts.
One way would be to make the program check some file every n-seconds as a flag for termination. I'm sure there's some better way I can do this.
Note that I'd like to stop the program cleanly, so some message from outside in a form of an exception would be ideal, I think.
EDIT:
Here's an example of how I did it at the moment:
import time

try:
    open('myprog.lck', 'w').close()
    while True:
        time.sleep(1)
        try:
            open('myprog.lck').close()
        except IOError:
            raise KeyboardInterrupt
except KeyboardInterrupt:
    print 'MyProgram terminated.'
Deleting the file myprog.lck will cause the script to stop. Is the example above a bad way to do this?
Use the poison pill technique: upon receipt of a pill (a special message), your program must handle it and die. The way you're doing it is OK, but for something more elegant you should implement a kind of communication between your "killing script" and your main program. For a start, have a look in the standard library under Interprocess Communication and Networking.
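One way to wire that up with the standard library's multiprocessing.connection (Python 2, matching the question; the address, authkey, and 'poison' message are made up for this sketch):

# main program side
import threading
import time
from multiprocessing.connection import Listener

stop = threading.Event()

def wait_for_pill():
    listener = Listener(('localhost', 6000), authkey='secret')
    conn = listener.accept()
    if conn.recv() == 'poison':        # the pill: tell the main loop to shut down
        stop.set()

t = threading.Thread(target=wait_for_pill)
t.daemon = True
t.start()

while not stop.is_set():
    time.sleep(1)                      # placeholder for the real work
print 'MyProgram terminated.'

# killing script side
from multiprocessing.connection import Client
Client(('localhost', 6000), authkey='secret').send('poison')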
I would install a signal handler as described in http://www.doughellmann.com/PyMOTW/signal/index.html#signals-and-threads
You can enter kill -l in your shell to get a list of available signals.
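A sketch along those lines: the handler runs in the main thread and flips an Event that both the daemon threads and the main loop watch (Python 2, to match the question; the worker body is a placeholder):

import signal
import threading
import time

shutdown = threading.Event()

def worker():
    while not shutdown.is_set():
        time.sleep(1)                  # placeholder for the real work

def handle_term(signum, frame):
    shutdown.set()                     # ask every thread to finish cleanly

signal.signal(signal.SIGTERM, handle_term)

for _ in range(3):
    t = threading.Thread(target=worker)
    t.daemon = True
    t.start()

while not shutdown.is_set():           # only the main thread receives the signal
    time.sleep(0.5)
print 'MyProgram terminated.'

Then kill <pid> (which sends SIGTERM) from the shell, or os.kill(pid, signal.SIGTERM) from another Python script, stops the program cleanly.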
You should be able to kill it from the shell with kill -9 <pid>.