I have Python code like the following:
import threading
import time
import subprocess, os, sys, psutil, signal
from signal import SIGKILL
def processing():
    global p_2
    global subp_2
    .
    .
    .
    if condition1: #loop again
        threading.Timer(10,processing).start()
    if condition2:
        signal.signal(signal.SIGINT, signal_handler)
        #signal.signal(signal.SIGTERM, signal_handler)
        subp_2.terminate()
        #os.kill(subp_2.pid, 0)
        #subp_2.kill()
        print " Status p_2: ", p_2.status

def signal_handler(signal, frame):
    print('Exiting')
    sys.exit(0)

def function():
    global p_1
    global subp_1
    .
    .
    .
    if condition1: #loop again
        threading.Timer(5,function).start()
    if condition2:
        signal.signal(signal.SIGINT, signal_handler)
        #signal.signal(signal.SIGTERM, signal_handler)
        subp_1.terminate()
        #os.kill(subp_1.pid, 0)
        #subp_1.kill()
        print " Status p_1: ", p_1.status
        threading.Timer(10,processing).start()
        subp_2 = subprocess.Popen('./myScript2.sh %s %s' % (arg0, arg1), shell=True)
        p_2 = psutil.Process(subp_2.pid)

if __name__ == '__main__':
    global p_1
    global subp_1
    .
    .
    .
    subp_1 = subprocess.Popen(["/.../myScript1.sh"], shell=True)
    p_1 = psutil.Process(subp_1.pid)
    threading.Timer(5,function).start()
I could not kill the processes subp_1 and subp_2. Whatever I tried (.terminate(), .kill(), or os.kill()), I still get a process status of running. Could anyone please tell me what I am missing? Any hint is appreciated.
When you use shell=True, first a subprocess is spawned which runs the shell. Then the shell spawns a subprocess which runs myScript2.sh. The subprocess running the shell can be terminated without terminating the myScript2.sh subprocess.
If you can avoid using shell=True, then that would be one way to avoid this problem. If using user input to form the command, shell=True should definitely be avoided, since it is a security risk.
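For example, the same command can be started without a shell by passing the program and its arguments as a list (a minimal sketch, reusing arg0 and arg1 from the question):

import subprocess

# No shell is involved, so subp_2.pid is the PID of myScript2.sh itself,
# and terminate()/kill() act on the script directly.
subp_2 = subprocess.Popen(['./myScript2.sh', arg0, arg1])

# later, when it should stop:
subp_2.terminate()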
On Unix, by default, the subprocess spawned by subprocess.Popen is not the leader of its own process group. When you send a signal to a process group (with os.killpg), it is delivered to every process in that group. So to have the SIGTERM reach myScript2.sh as well as the shell, make the shell the leader of a new session (and therefore of a new process group).
For Python versions < 3.2 on Unix, that can be done by having the shell process run os.setsid():
import os
import signal
import subprocess

subp_2 = subprocess.Popen('./myScript2.sh %s %s' % (arg0, arg1),
                          shell=True,
                          preexec_fn=os.setsid)

# To send SIGTERM to the process group:
os.killpg(subp_2.pid, signal.SIGTERM)
For Python versions >= 3.2 on Unix, pass start_new_session=True to Popen.
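For example, a minimal sketch of the Python >= 3.2 variant (command and arguments as in the question):

import os
import signal
import subprocess

subp_2 = subprocess.Popen('./myScript2.sh %s %s' % (arg0, arg1),
                          shell=True,
                          start_new_session=True)  # replaces preexec_fn=os.setsid

# Send SIGTERM to the whole process group (the shell and myScript2.sh):
os.killpg(os.getpgid(subp_2.pid), signal.SIGTERM)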
For Windows, see J.F. Sebastian's solution.
Related
I have some GPU test software I'm trying to automate using Python 3. The test would normally run for 3 minutes and then be cancelled by a user with Ctrl+C, generating the following output.
After exiting with Ctrl+C, the test can then be run again with no issue.
When I try to automate this with subprocess.Popen and send SIGINT or SIGTERM, I don't get the same behaviour as when the keyboard is used. The script exits abruptly, and on subsequent runs it can't find the GPUs (I assume it's not unloading the driver properly).
from subprocess import Popen, PIPE
from signal import SIGINT
from time import time
def check_subproc_alive(subproc):
    return subproc.poll() is None

def print_subproc(subproc, timer=True):
    start_time = time()
    while check_subproc_alive(subproc):
        line = subproc.stdout.readline().decode('utf-8')
        print(line, end="")
        if timer and (time() - start_time) > 10:
            break
subproc = Popen(['./gpu_test.sh', '-t', '1'], stdin=PIPE, stdout=PIPE, stderr=PIPE, shell=False)
print_subproc(subproc)
subproc.send_signal(SIGINT)
print_subproc(subproc, False)
How can I send ctrl+c to a subprocess as if a user typed it?
UPDATE:
import subprocess
def start(executable_file):
    return subprocess.Popen(
        executable_file,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE
    )

def read(process):
    return process.stdout.readline().decode("utf-8").strip()

def write(process):
    process.stdin.write('\x03'.encode())
    process.stdin.flush()

def terminate(process):
    process.stdin.close()
    process.terminate()
    process.wait(timeout=0.2)

process = start("./test.sh")
write(process)
for x in range(100):
    print(read(process))
terminate(process)
I tried the above code and can get characters to register with a dummy sh script; however, sending the \x03 character just sends an empty char and doesn't end the script.
I think you can probably use something like this:
import signal
try:
    p = subprocess...
except KeyboardInterrupt:
    p.send_signal(signal.SIGINT)
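A slightly fuller sketch of the same idea, assuming the gpu_test.sh command from the question; the parent waits for the child and forwards its own KeyboardInterrupt to it as SIGINT:

import signal
import subprocess

p = subprocess.Popen(['./gpu_test.sh', '-t', '1'])
try:
    p.wait()
except KeyboardInterrupt:
    # Forward Ctrl+C to the child and give it a chance to clean up.
    p.send_signal(signal.SIGINT)
    p.wait()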
The following solution is the only one I could find that works on Windows and most closely resembles sending a Ctrl+C event.
import os
import signal

os.kill(self.p.pid, signal.CTRL_C_EVENT)
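A minimal sketch of how that is commonly wired up (assumptions: the child command is hypothetical, it is started in its own process group so the console event does not also interrupt the parent, and CTRL_BREAK_EVENT is used because a child in a new process group usually ignores CTRL_C_EVENT):

import os
import signal
import subprocess

# Windows only: put the child in its own process group.
p = subprocess.Popen(['python', 'child.py'],
                     creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)

# Later, interrupt it much as Ctrl+C/Ctrl+Break would:
os.kill(p.pid, signal.CTRL_BREAK_EVENT)
p.wait()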
Is there any argument or option to set up a timeout for Python's subprocess.Popen method?
Something like this:
subprocess.Popen(['..'], ..., timeout=20) ?
I would advise taking a look at the Timer class in the threading module. I used it to implement a timeout for a Popen.
First, create a callback:
def timeout( p ):
    if p.poll() is None:
        print 'Error: process taking too long to complete--terminating'
        p.kill()
Then open the process:
proc = Popen( ... )
Then create a timer that will call the callback, passing the process to it.
t = threading.Timer( 10.0, timeout, [proc] )
t.start()
t.join()
Somewhere later in the program, you may want to add the line:
t.cancel()
Otherwise, the python program will keep running until the timer has finished running.
EDIT: I was advised that there is a race condition that the subprocess p may terminate between the p.poll() and p.kill() calls. I believe the following code can fix that:
import errno
def timeout( p ):
    if p.poll() is None:
        try:
            p.kill()
            print 'Error: process taking too long to complete--terminating'
        except OSError as e:
            if e.errno != errno.ESRCH:
                raise
Though you may want to clean the exception handling to specifically handle just the particular exception that occurs when the subprocess has already terminated normally.
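Putting the pieces together, a minimal end-to-end sketch (the sleep command is only illustrative):

import errno
import subprocess
import threading

def timeout(p):
    if p.poll() is None:
        try:
            p.kill()
            print 'Error: process taking too long to complete--terminating'
        except OSError as e:
            if e.errno != errno.ESRCH:
                raise

proc = subprocess.Popen(['sleep', '60'])
t = threading.Timer(10.0, timeout, [proc])
t.start()
proc.wait()   # returns as soon as the process exits or is killed
t.cancel()    # stop the timer if the process finished in time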
subprocess.Popen doesn't block so you can do something like this:
import time
p = subprocess.Popen(['...'])
time.sleep(20)
if p.poll() is None:
    p.kill()
    print 'timed out'
else:
    print p.communicate()
It has a drawback in that you must always wait at least 20 seconds for it to finish.
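One way to avoid always waiting the full 20 seconds is to poll in a short loop and stop as soon as the process exits (same idea, minimal sketch; the sleep command is only illustrative):

import time
import subprocess

p = subprocess.Popen(['sleep', '30'])
deadline = time.time() + 20
while p.poll() is None and time.time() < deadline:
    time.sleep(0.5)   # check twice a second

if p.poll() is None:
    p.kill()
    print 'timed out'
else:
    print p.communicate()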
import subprocess, threading
class Command(object):
    def __init__(self, cmd):
        self.cmd = cmd
        self.process = None

    def run(self, timeout):
        def target():
            print 'Thread started'
            self.process = subprocess.Popen(self.cmd, shell=True)
            self.process.communicate()
            print 'Thread finished'

        thread = threading.Thread(target=target)
        thread.start()

        thread.join(timeout)
        if thread.is_alive():
            print 'Terminating process'
            self.process.terminate()
            thread.join()
        print self.process.returncode

command = Command("echo 'Process started'; sleep 2; echo 'Process finished'")
command.run(timeout=3)
command.run(timeout=1)
The output of this should be:
Thread started
Process started
Process finished
Thread finished
0
Thread started
Process started
Terminating process
Thread finished
-15
where it can be seen that, in the first execution, the process finished correctly (return code 0), while in the second one the process was terminated (return code -15).
I haven't tested on Windows, but aside from updating the example command, I think it should work, since I haven't found anything in the documentation saying that thread.join or process.terminate is not supported.
You could do
from twisted.internet import reactor, protocol, error, defer
class DyingProcessProtocol(protocol.ProcessProtocol):
    def __init__(self, timeout):
        self.timeout = timeout

    def connectionMade(self):
        @defer.inlineCallbacks
        def killIfAlive():
            try:
                yield self.transport.signalProcess('KILL')
            except error.ProcessExitedAlready:
                pass

        d = reactor.callLater(self.timeout, killIfAlive)

reactor.spawnProcess(DyingProcessProtocol(20), ...)
using Twisted's asynchronous process API.
A python subprocess auto-timeout is not built in, so you're going to have to build your own.
This works for me on Ubuntu 12.10 running python 2.7.3
Put this in a file called test.py
#!/usr/bin/python
import subprocess
import threading
class RunMyCmd(threading.Thread):
    def __init__(self, cmd, timeout):
        threading.Thread.__init__(self)
        self.cmd = cmd
        self.timeout = timeout

    def run(self):
        self.p = subprocess.Popen(self.cmd)
        self.p.wait()

    def run_the_process(self):
        self.start()
        self.join(self.timeout)

        if self.is_alive():
            self.p.terminate()  # if your process needs a kill -9 to make
                                # it go away, use self.p.kill() here instead.
            self.join()

RunMyCmd(["sleep", "20"], 3).run_the_process()
Save it, and run it:
python test.py
The sleep 20 command takes 20 seconds to complete. If it doesn't terminate in 3 seconds (it won't) then the process is terminated.
el@apollo:~$ python test.py
el@apollo:~$
There are three seconds between when the process is run and when it is terminated.
As of Python 3.3, there is also a timeout argument to the blocking helper functions in the subprocess module.
https://docs.python.org/3/library/subprocess.html
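For example, a minimal sketch using that API (Python 3.3+; the sleep command is only illustrative):

import subprocess

p = subprocess.Popen(['sleep', '30'])
try:
    p.wait(timeout=20)
except subprocess.TimeoutExpired:
    p.kill()
    p.wait()

In Python 3.5+, subprocess.run(['sleep', '30'], timeout=20) does the bookkeeping for you: on timeout it kills the child, waits for it, and then raises subprocess.TimeoutExpired.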
Unfortunately, there isn't such a solution. I managed to do this using a threaded timer that would launch along with the process that would kill it after the timeout but I did run into some stale file descriptor issues because of zombie processes or some such.
No, there is no timeout. I guess what you are looking for is to kill the subprocess after some time. Since you are able to signal the subprocess, you should be able to kill it too.
A generic approach to sending a signal to a subprocess:
import os
import signal
import subprocess
import sys
import time

proc = subprocess.Popen([command])
time.sleep(1)
print 'signaling child'
sys.stdout.flush()
os.kill(proc.pid, signal.SIGUSR1)
You could use this mechanism to terminate after a time out period.
Yes, https://pypi.python.org/pypi/python-subprocess2 will extend the Popen module with two additional functions,
Popen.waitUpTo(timeout=seconds)
This will wait up to a certain number of seconds for the process to complete, otherwise return None.
also,
Popen.waitOrTerminate
This will wait up to a point, and then call .terminate(), then .kill(): one or the other, or some combination of both; see the docs for full details:
http://htmlpreview.github.io/?https://github.com/kata198/python-subprocess2/blob/master/doc/subprocess2.html
For Linux, you can use a signal. This is platform dependent so another solution is required for Windows. It may work with Mac though.
def launch_cmd(cmd, timeout=0):
    '''Launch an external command

    It launches the program redirecting the program's STDIO
    to a communication pipe, and appends those responses to
    a list. Waits for the program to exit, then returns the
    output lines.

    Args:
        cmd: command line of the external program to launch
        timeout: time to wait for the command to complete, 0 for indefinitely
    Returns:
        A list of the response lines from the program
    '''
    import subprocess
    import signal

    class Alarm(Exception):
        pass

    def alarm_handler(signum, frame):
        raise Alarm

    lines = []
    if not launch_cmd.init:
        launch_cmd.init = True
        signal.signal(signal.SIGALRM, alarm_handler)
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    signal.alarm(timeout)  # timeout sec
    try:
        for line in p.stdout:
            lines.append(line.rstrip())
        p.wait()
        signal.alarm(0)  # disable alarm
    except:
        print "launch_cmd taking too long!"
        p.kill()
    return lines
launch_cmd.init = False
I'm trying to write a small script which will use plink.exe (from the same folder) to create an SSH tunnel (on Windows).
I'm basically using os.system to launch the command:
import os
import time
import threading
from os.path import join, dirname, realpath

pc_tunnel_command = '-ssh -batch -pw xxxx -N -L 1234:host1:5678 user@host2'

if __name__ == '__main__':
    t = threading.Thread(target = os.system, \
        args = (join(dirname(realpath(__file__)), 'plink.exe ') + \
                pc_tunnel_command,))
    t.daemon = True
    t.start()
    #without this line it will die. I guess that plink doesn't have enough time to start.
    time.sleep(5)
    print 'Should die now'
However, it seems that the thread (and plink.exe) keeps running. Why is this happening? Is there any way to force the thread to close? Is there a better way to launch plink?
I want plink.exe to die when my program ends. Using a daemon thread was my plan for having the tunnel run in the background and then die when my main code exits.
BTW - same thing happens with subprocess.call.
You can use the atexit and signal modules to register callbacks that will explicitly kill the process when your program exits normally or receives SIGTERM, respectively:
import sys
import time
import atexit
import signal
import subprocess
from functools import partial
from os.path import join, dirname, realpath
pc_tunnel_command = '-ssh -batch -pw xxxx -N -L 1234:host1:5678 user@host2'

def handle_exit(p, *args):
    print("killing it")
    p.terminate()
    sys.exit(0)

if __name__ == '__main__':
    p = subprocess.Popen(join(dirname(realpath(__file__)), 'plink.exe ') + pc_tunnel_command, shell=True)
    func = partial(handle_exit, p)
    signal.signal(signal.SIGTERM, func)
    atexit.register(func)
    print 'Should die now'
The one thing that is odd about the behavior you described is that I would have expected your program to exit after your sleep call but leave plink running in the background, rather than having your program hang until the os.system call completes. That's the behavior I see on Linux, at least. In any case, explicitly terminating the child process should solve the issue for you.
os.system does not return until the child process exits. The same is true for subprocess.call. That's why your thread is sitting there, waiting for plink to finish. You can probably use subprocess.Popen to launch the process asynchronously and then exit. In any case, the additional thread you are creating is unnecessary.
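A minimal sketch of that approach, with no extra thread (the plink command line is the one from the question):

import subprocess
from os.path import join, dirname, realpath

pc_tunnel_command = '-ssh -batch -pw xxxx -N -L 1234:host1:5678 user@host2'

# Popen returns immediately; plink keeps running in the background.
p = subprocess.Popen(join(dirname(realpath(__file__)), 'plink.exe ') + pc_tunnel_command,
                     shell=True)

# ... do whatever needs the tunnel ...

p.terminate()   # shut the tunnel down when you are finished with it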
Here is my problem: a user needs to log in via remote desktop to a Windows server during given periods of the day. I have a working bit of code; however, I believe the threads are never closed correctly, since after a given time the program is stopped.
I would like to close this thread started by APScheduler; can someone tell me how I would do this properly? I have tried joining the thread and exiting, as well as ._Exit(), but neither works (nor really should work). I am lost.
import sys
import os
import subprocess
import signal
import time
from apscheduler.scheduler import Scheduler
from pykeyboard import PyKeyboard
from threading import Thread
def rdp_start():
    os.system('rdesktop -d domain -u username -p password -g 1600x1050 -a 16 123.123.123.123')

def rdp_check():
    p = subprocess.Popen(['ps', '-A'], stdout=subprocess.PIPE)
    out, err = p.communicate()
    for line in out.splitlines():
        if 'rdesktop' in str(line):
            print("Rdesktop is running!")
        else:
            print("Starting rdesktop!")
            rdp_job = Thread(target = rdp_start, args = ())
            rdp_job.start()
            time.sleep(5)
            k = PyKeyboard()
            k.tap_key(k.enter_key)
            #time.sleep(600)
            #Where I would like to kill rdp_job, and remove rdp_kill scheduling

def rdp_kill():
    p = subprocess.Popen(['ps', '-A'], stdout=subprocess.PIPE)
    out, err = p.communicate()
    for line in out.splitlines():
        if 'rdesktop' in str(line):
            pid = int(line.split(None, 1)[0])
            os.kill(pid, signal.SIGKILL)
            print("Killed RDP")

def idle():
    # Stop from sleeping
    k = PyKeyboard()
    k.tap_key(k.scroll_lock_key)
    k.tap_key(k.scroll_lock_key)

sched = Scheduler()
sched.daemonic = False
sched.start()

# Fix screen issues with PyUserInput
os.system('xhost + > /etc/null')

sched.add_cron_job(rdp_check, hour=15)
sched.add_cron_job(rdp_kill, hour=15, minute=8)
sched.add_cron_job(rdp_check, hour=23)
sched.add_cron_job(rdp_kill, hour=23, minute=8)
sched.add_cron_job(rdp_check, hour=7)
sched.add_cron_job(rdp_kill, hour=7, minute=8)
sched.add_cron_job(idle, second='*/60')
I know that killing threads is generally bad practice; however, I really need this program to run for any given amount of time. Can anyone point me in the right direction?
If you're on Linux, consider the following changes:
1) instead of using Thread, just run the rdesktop command in the background:
os.system('rdesktop ... &')
2) the killall command finds running programs and optionally sends them a signal.
To see if a rdesktop command is running, send it signal #0. It'll return status 0 if it found something, or status > 0 if no such process exists:
$ killall -0 sleep
$ echo $?
0
$ killall -0 beer
beer: no process found
3) to kill rdesktop:
os.system('killall rdesktop')
Note the above assumes that you have at most one rdesktop process running, and that you started it, so you can probe it with killall -0.
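Wrapped into Python, those checks might look roughly like this (a sketch under the same assumptions; killall has to be available on the system):

import os
import subprocess

def rdp_running():
    # killall -0 exits with status 0 if a matching process exists.
    return subprocess.call(['killall', '-0', 'rdesktop']) == 0

def rdp_start():
    # The trailing '&' runs rdesktop in the background instead of in a thread.
    os.system('rdesktop -d domain -u username -p password -g 1600x1050 -a 16 123.123.123.123 &')

def rdp_kill():
    subprocess.call(['killall', 'rdesktop'])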
I'm struggling with some processes I started with Popen and which start subprocesses. When I start these processes manually in a terminal every process terminates as expected if I send CTRL+C. But running inside a python program using subprocess.Popen any attempt to terminate the process only gets rid of the parent but not of its children.
I tried .terminate(), .kill(), as well as .send_signal() with signal.SIGBREAK and signal.SIGTERM, but in every case I just terminate the parent process.
With this parent process I can reproduce the misbehavior:
#!/usr/bin/python
import time
import sys
import os
import subprocess
import signal
if __name__ == "__main__":
print os.getpid(), "MAIN: start a process.."
p = subprocess.Popen([sys.executable, 'process_to_shutdown.py'])
print os.getpid(), "MAIN: started process", p.pid
time.sleep(2)
print os.getpid(), "MAIN: kill the process"
# these just terminate the parent:
#p.terminate()
#p.kill()
#os.kill(p.pid, signal.SIGINT)
#os.kill(p.pid, signal.SIGTERM)
os.kill(p.pid, signal.SIGABRT)
p.wait()
print os.getpid(), "MAIN: job done - ciao"
The real-life child process is manage.py from Django, which spawns a few subprocesses and waits for CTRL-C. But the following example seems to work, too:
#!/usr/bin/python
import time
import sys
import os
import subprocess
if __name__ == "__main__":
    timeout = int(sys.argv[1]) if len(sys.argv) >= 2 else 0
    if timeout == 0:
        p = subprocess.Popen([sys.executable, '-u', __file__, '13'])
        print os.getpid(), "just waiting..."
        p.wait()
    else:
        for i in range(timeout):
            time.sleep(1)
            print os.getpid(), i, "alive!"
            sys.stdout.flush()
        print os.getpid(), "ciao"
So my question, in short: how do I kill the process in the first example and get rid of the child processes as well? On Windows, os.kill(p.pid, signal.CTRL_C_EVENT) seems to work in some cases, but what's the right way to do it? And how does a terminal do it?
Like Henri Korhonen mentioned in a comment, grouping processes should help. Additionally, if you are on Windows and this is Cygwin Python that starts Windows applications, it appears Cygwin Python cannot kill the children. For those cases you would need to run TASKKILL. TASKKILL also takes a parameter for killing the whole process tree.
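On Unix, the grouping approach looks roughly like this (a sketch, reusing the example script name from the question): start the child as the leader of a new process group, then signal the whole group so the children receive the signal too.

import os
import signal
import subprocess
import sys

p = subprocess.Popen([sys.executable, 'process_to_shutdown.py'],
                     preexec_fn=os.setsid)       # child becomes a process-group leader

os.killpg(os.getpgid(p.pid), signal.SIGTERM)     # signal the whole group, children included
p.wait()

On Windows, the closest equivalent is to kill the whole tree, for example subprocess.call(['TASKKILL', '/F', '/T', '/PID', str(p.pid)]).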