I'm trying to write a small script which will use plink.exe (from the same folder) to create an SSH tunnel on Windows.
I'm basically using os.system to launch the command:
import os
import time
import threading
from os.path import join, dirname, realpath

pc_tunnel_command = '-ssh -batch -pw xxxx -N -L 1234:host1:5678 user@host2'

if __name__ == '__main__':
    t = threading.Thread(target=os.system,
                         args=(join(dirname(realpath(__file__)), 'plink.exe ') +
                               pc_tunnel_command,))
    t.daemon = True
    t.start()
    # Without this line it will die. I guess that plink doesn't have enough time to start.
    time.sleep(5)
    print 'Should die now'
However, it seems that the thread (and plink.exe) keeps running. Why is this happening? Is there any way to force the thread to close? Is there a better way to launch plink?
I want plink.exe to die when my program ends. Using a daemon thread was my plan for having the tunnel run in the background and then die when my main code exits.
BTW - same thing happens with subprocess.call.
You can use the atexit and signal modules to register callbacks that will explicitly kill the process when your program exits normally or receives SIGTERM, respectively:
import sys
import time
import atexit
import signal
import subprocess
from functools import partial
from os.path import join, dirname, realpath

pc_tunnel_command = '-ssh -batch -pw xxxx -N -L 1234:host1:5678 user@host2'

def handle_exit(p, *args):
    print("killing it")
    p.terminate()
    sys.exit(0)

if __name__ == '__main__':
    p = subprocess.Popen(join(dirname(realpath(__file__)), 'plink.exe ') + pc_tunnel_command,
                         shell=True)

    func = partial(handle_exit, p)
    signal.signal(signal.SIGTERM, func)
    atexit.register(func)

    print 'Should die now'
The one thing that is odd about the behavior you described is that I would have expected your program to exit after your sleep call, but leave plink running in the background, rather than having your program hang until the os.system call completes. That's the behavior I see on Linux, at least. In any case, explicitly terminating the child process should solve the issue for you.
os.system does not return until the child process exits. The same is true for subprocess.call. That's why your thread is sitting there, waiting for plink to finish. You can probably use subprocess.Popen to launch the process asynchronously and then exit. In any case, the additional thread you are creating is unnecessary.
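For illustration, here is a rough sketch of the non-blocking approach, reusing the pc_tunnel_command string from the question (treat this as a sketch rather than a drop-in replacement):

import subprocess
from os.path import join, dirname, realpath

# Popen returns as soon as the child is started, so no helper thread is needed;
# os.system, by contrast, blocks until plink exits.
p = subprocess.Popen(join(dirname(realpath(__file__)), 'plink.exe ') + pc_tunnel_command,
                     shell=True)

# ... main program work while the tunnel runs ...

p.terminate()  # explicitly stop plink before the program exits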
Related
I'm trying to make something like a supervisor for my Python daemon process, and I found out that the same code works in Python 2 but doesn't work in Python 3.
I've boiled it down to this minimal example.
daemon.py
#!/usr/bin/env python
import signal
import sys
import os

def stop(*args, **kwargs):
    print('daemon exited', os.getpid())
    sys.exit(0)

signal.signal(signal.SIGTERM, stop)

print('daemon started', os.getpid())
while True:
    pass
supervisor.py
import os
import signal
import subprocess
from time import sleep

parent_pid = os.getpid()
commands = [
    [
        './daemon.py'
    ]
]

popen_list = []
for command in commands:
    popen = subprocess.Popen(command, preexec_fn=os.setsid)
    popen_list.append(popen)

def stop_workers(*args, **kwargs):
    for popen in popen_list:
        print('send_signal', popen.pid)
        popen.send_signal(signal.SIGTERM)
        while True:
            popen_return_code = popen.poll()
            if popen_return_code is not None:
                break
            sleep(5)

signal.signal(signal.SIGTERM, stop_workers)

for popen in popen_list:
    print('wait_main', popen.wait())
If you run supervisor.py and then call kill -15 on its pid, it hangs in an infinite loop, because popen_return_code never becomes non-None. I discovered that this is essentially due to the threading.Lock added around the waitpid operation (source), but how can I rewrite the code so that it handles the child's exit correctly?
This is an interesting case.
I've spent a few hours trying to figure out why this happens, and the only thing I've come up with so far is that the implementations of wait() and poll() changed between Python 2.7 and Python 3.
Looking into the source code of Python 3's subprocess.py, we can see that a lock is acquired when you call the wait() method of a Popen object, see
https://github.com/python/cpython/blob/master/Lib/subprocess.py#L1402.
This lock prevents further poll() calls from working as expected until the lock acquired by wait() is released, see
https://github.com/python/cpython/blob/master/Lib/subprocess.py#L1355
and the comment there:
Something else is busy calling waitpid. Don't allow two
at once. We know nothing yet.
There is no such lock in Python 2.7's subprocess.py, so this looks like the reason why it works in Python 2.7 and doesn't work in Python 3.
However, I don't see a reason to poll() inside the signal handler. Try rewriting your supervisor.py as follows; this should work as expected on both Python 3 and Python 2.7.
supervisor.py
import os
import signal
import subprocess
from time import sleep

parent_pid = os.getpid()
commands = [
    [
        './daemon.py'
    ]
]

popen_list = []
for command in commands:
    popen = subprocess.Popen(command, preexec_fn=os.setsid)
    popen_list.append(popen)

def stop_workers(*args, **kwargs):
    for popen in popen_list:
        print('send_signal', popen.pid)
        popen.send_signal(signal.SIGTERM)

signal.signal(signal.SIGTERM, stop_workers)

for popen in popen_list:
    print('wait_main', popen.wait())
Hope this helps
Generally, I agree with the answer from @risboo6909, but I also have a couple of thoughts on how to fix this situation.
You can change subprocess.Popen to psutil.Popen.
In the main loop, instead of popen.wait(), you can just run an idle loop, because the parent process will exit in the signal handler anyway (see the sketch below).
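A rough sketch of that second suggestion (it assumes the popen_list built in supervisor.py above and works the same whether you use subprocess.Popen or psutil.Popen):

import sys

def stop_workers(*args, **kwargs):
    # Ask every child to terminate, then exit the supervisor itself.
    for popen in popen_list:
        print('send_signal', popen.pid)
        popen.send_signal(signal.SIGTERM)
    sys.exit(0)

signal.signal(signal.SIGTERM, stop_workers)

# Idle loop instead of popen.wait(); the SIGTERM handler ends the process.
while True:
    sleep(1)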
When I try to kill a Python process, the child process started via os.system is not terminated along with it.
I have read Killing child process when parent crashes in python and
Python Process won't call atexit
(atexit does not seem to work with signals).
Does that mean I need to handle this situation by myself? If so, what is the preferred way to do so?
> python main.py
> ps
4792 ttys002 0:00.03 python run.py
4793 ttys002 0:00.03 python loop.py
> kill -15 4792
> ps
4793 ttys002 0:00.03 python loop.py
Sample Code:
main.py
import os
os.system('python loop.py')
loop.py
import time

while True:
    time.sleep(1000)
UPDATE1
I did some experiments and found a workable version, but I'm still confused about the logic.
import os
import sys
import signal
import subprocess

def sigterm_handler(_signo, _stack_frame):
    # it raises SystemExit(0):
    print 'go die'
    sys.exit(0)

signal.signal(signal.SIGTERM, sigterm_handler)

try:
    # os.system('python loop.py')
    # using os.system won't work; it even ignores the SIGTERM entirely for some reason
    subprocess.call(['python', 'loop.py'])
except:
    os.killpg(0, signal.SIGKILL)
kill -15 4792 sends SIGTERM to run.py in your example -- it sends nothing to loop.py (or its parent shell). SIGTERM is not propagated to other processes in the process tree by default.
os.system('python loop.py') starts at least two processes: the shell and the python process. You don't need the shell; use subprocess.check_call() to run a single child process without it. By the way, if your subprocess is a Python script, consider importing it and calling the corresponding functions instead.
os.killpg(0, SIGKILL) sends the SIGKILL signal to the current process group. A shell creates a new process group (a job) for each pipeline, and therefore the os.killpg() in the parent has no effect on the child (but see the update below). See How to terminate a python subprocess launched with shell=True.
#!/usr/bin/env python
import subprocess
import sys

executable = sys.executable  # run the child with the same Python interpreter

try:
    p = subprocess.Popen([executable, 'loop.py'])
except EnvironmentError as e:
    sys.exit('failed to start %r, reason: %s' % (executable, e))
else:
    try:  # wait for the child process to finish
        p.wait()
    except KeyboardInterrupt:  # on Ctrl+C (SIGINT)
        # NOTE: the shell sends SIGINT (on Ctrl+C) to the executable itself if
        # the child process is in the same foreground process group as its parent
        sys.exit("interrupted")
Update
It seems os.system(cmd) doesn't create a new process group for cmd:
>>> import os
>>> os.getpgrp()
16180
>>> import sys
>>> cmd = sys.executable + ' -c "import os; print(os.getpgrp())"'
>>> os.system(cmd) #!!! same process group
16180
0
>>> import subprocess
>>> import shlex
>>> subprocess.check_call(shlex.split(cmd))
16180
0
>>> subprocess.check_call(cmd, shell=True)
16180
0
>>> subprocess.check_call(cmd, shell=True, preexec_fn=os.setpgrp) #!!! new
18644
0
and therefore os.system(cmd) in your example should be killed by the os.killpg() call.
Though if I run it from bash, it does create a new process group for each pipeline:
$ python -c "import os; print(os.getpgrp())"
25225
$ python -c "import os; print(os.getpgrp())"
25248
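For completeness, here is a rough sketch of the process-group approach from the linked question (assuming a POSIX system): start the child in its own group, then signal the whole group.

import os
import signal
import subprocess

# os.setsid puts the child (and anything it spawns) into a new session and
# process group, so the whole tree can be signalled at once.
p = subprocess.Popen(['python', 'loop.py'], preexec_fn=os.setsid)

# ... later ...
os.killpg(os.getpgid(p.pid), signal.SIGTERM)  # signals loop.py and any of its children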
I am using Python 2.7, and a Python thread doesn't kill its process after the main program exits (I'm checking this with the ps -ax command on an Ubuntu machine).
I have the thread class below,
import os
import threading

class captureLogs(threading.Thread):
    '''
    initialize the constructor
    '''
    def __init__(self, deviceIp, fileTag):
        threading.Thread.__init__(self)
        super(captureLogs, self).__init__()
        self._stop = threading.Event()
        self.deviceIp = deviceIp
        self.fileTag = fileTag

    def stop(self):
        self._stop.set()

    def stopped(self):
        return self._stop.isSet()

    '''
    define the run method
    '''
    def run(self):
        '''
        Make the thread capture logs
        '''
        cmdTorun = "adb logcat > " + self.deviceIp + '_' + self.fileTag + '.log'
        os.system(cmdTorun)
And I am creating a thread in another file sample.py,
import logCapture
import os
import time
c = logCapture.captureLogs('100.21.143.168','somefile')
c.setDaemon(True)
c.start()
print "Started the log capture. now sleeping. is this a dameon?", c.isDaemon()
time.sleep(5)
print "Sleep tiime is over"
c.stop()
print "Calling stop was successful:", c.stopped()
print "Thread is now completed and main program exiting"
I get the below output from the command line:
Started the log capture. now sleeping. is this a dameon? True
Sleep tiime is over
Calling stop was successful: True
Thread is now completed and main program exiting
And the sample.py exits.
But when I run the below command in a terminal,
ps -ax | grep "adb"
I still see the process running. (I am killing them manually now using the kill -9 17681 17682)
Not sure what I am missing here.
My questions are:
1) Why is the process still alive when I have already killed it in my program?
2) Will it create any problem if I don't bother about it?
3) Is there a better way to capture logs using a thread and monitor them?
EDIT: As suggested by @bug Killer, I added the method below to my thread class,
def getProcessID(self):
    return os.getpid()
and used os.kill(c.getProcessID(), SIGTERM) in my sample.py. The program doesn't exit at all.
It is likely because you are using os.system in your thread. The spawned process from os.system will stay alive even after the thread is killed. Actually, it will stay alive forever unless you explicitly terminate it in your code or by hand (which it sounds like you are doing ultimately) or the spawned process exits on its own. You can do this instead:
import atexit
import subprocess
deviceIp = '100.21.143.168'
fileTag = 'somefile'
# this is spawned in the background, so no threading code is needed
cmdTorun = "adb logcat > " + deviceIp +'_'+fileTag+'.log'
proc = subprocess.Popen(cmdTorun, shell=True)
# or register proc.kill if you feel like living on the edge
atexit.register(proc.terminate)
# Here is where all the other awesome code goes
Since all you are doing is spawning a process, creating a thread to do it is overkill and only complicates your program logic. Just spawn the process in the background as shown above and then let atexit terminate it when your program exits. And/or call proc.terminate explicitly; it should be fine to call repeatedly (much like close on a file object) so having atexit call it again later shouldn't hurt anything.
I'm struggling with some processes I started with Popen and which start subprocesses. When I start these processes manually in a terminal every process terminates as expected if I send CTRL+C. But running inside a python program using subprocess.Popen any attempt to terminate the process only gets rid of the parent but not of its children.
I tried .terminate() and .kill(), as well as .send_signal() with signal.SIGBREAK and signal.SIGTERM, but in every case I only terminate the parent process.
With this parent process I can reproduce the misbehavior:
#!/usr/bin/python
import time
import sys
import os
import subprocess
import signal

if __name__ == "__main__":
    print os.getpid(), "MAIN: start a process.."
    p = subprocess.Popen([sys.executable, 'process_to_shutdown.py'])
    print os.getpid(), "MAIN: started process", p.pid
    time.sleep(2)

    print os.getpid(), "MAIN: kill the process"

    # these just terminate the parent:
    # p.terminate()
    # p.kill()
    # os.kill(p.pid, signal.SIGINT)
    # os.kill(p.pid, signal.SIGTERM)
    os.kill(p.pid, signal.SIGABRT)

    p.wait()
    print os.getpid(), "MAIN: job done - ciao"
The real-life child process is manage.py from Django, which spawns a few subprocesses and waits for CTRL-C. But the following example seems to work, too:
#!/usr/bin/python
import time
import sys
import os
import subprocess

if __name__ == "__main__":
    timeout = int(sys.argv[1]) if len(sys.argv) >= 2 else 0
    if timeout == 0:
        p = subprocess.Popen([sys.executable, '-u', __file__, '13'])
        print os.getpid(), "just waiting..."
        p.wait()
    else:
        for i in range(timeout):
            time.sleep(1)
            print os.getpid(), i, "alive!"
            sys.stdout.flush()
    print os.getpid(), "ciao"
So my question in short: how do I kill the process in the first example and get rid of its child processes as well? On Windows, os.kill(p.pid, signal.CTRL_C_EVENT) seems to work in some cases, but what's the right way to do it? And how does a terminal do it?
Like Henri Korhonen mentioned in a comment, grouping processes should help. Additionally, if you are on Windows and this is Cygwin Python starting Windows applications, it appears Cygwin Python cannot kill the children. For those cases you would need to run TASKKILL. TASKKILL also takes a group parameter.
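As an illustration only (not from the answer above): taskkill's /T switch terminates a process together with its children, so invoking it from Python could look like this:

import subprocess
import sys

# Start the problematic child as before (process_to_shutdown.py is the
# script from the question).
p = subprocess.Popen([sys.executable, 'process_to_shutdown.py'])

# ... later, kill the whole tree rooted at p.pid (Windows only).
# /T = also terminate child processes, /F = force.
subprocess.call(['taskkill', '/F', '/T', '/PID', str(p.pid)])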
Can I use Popen from Python's subprocess module to close a started process? For example, with Popen I run some application, and at some point in my code I have to close that running app.
For example, from a console on Linux I do:
./some_bin
... It works and logs stdout here ...
Ctrl + C and it breaks
I need something like Ctrl + C but in my program code.
from subprocess import Popen

process = Popen(['slow', 'running', 'program'])
# poll() returns None while the process is still running
while process.poll() is None:
    if raw_input() == 'Kill':
        if process.poll() is None:
            process.kill()
kill() will kill a process. See more here: Python subprocess module
Use the subprocess module.
import subprocess
# all arguments must be passed one at a time inside a list
# they must all be string elements
arguments = ["sleep", "3600"] # first argument is the program's name
process = subprocess.Popen(arguments)
# do whatever you want
process.terminate()
Some time ago I needed a 'gentle' shutdown for a process by sending CTRL+C in Windows console.
Here's what I have:
import win32api
import win32con
import subprocess
import time
import shlex
cmdline = 'cmd.exe /k "timeout 60"'
args = shlex.split(cmdline)
myprocess = subprocess.Popen(args)
pid = myprocess.pid
print(myprocess, pid)
time.sleep(5)
win32api.GenerateConsoleCtrlEvent(win32con.CTRL_C_EVENT, pid)
# ^^^^^^^^^^^^^^^^^^^^ instead of myprocess.terminate()
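As a related sketch (an alternative, not part of the answer above): the standard library alone can deliver a similar "gentle" console event if the child is started in a new process group, using subprocess.CREATE_NEW_PROCESS_GROUP and signal.CTRL_BREAK_EVENT on Windows:

import signal
import subprocess
import time

# Starting the child in its own process group lets a console control event be
# targeted at it without hitting the parent (Windows only).
proc = subprocess.Popen('cmd.exe /k "timeout 60"',
                        creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
time.sleep(5)
proc.send_signal(signal.CTRL_BREAK_EVENT)  # "gentle" stop; proc.kill() would be forceful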