Kill the sub-processes by signal, but does the handler's position affect it? - python

I use a signal handler to kill all sub-processes in a multi-process program. The code is shown below, saved as a file named mul_process.py:
import time
import os
import signal
from multiprocessing import Process

processes = []

def fun(x):
    print 'current sub-process pid is %s' % os.getpid()
    while True:
        print 'args is %s' % x
        time.sleep(100)

def term(sig_num, frame):
    print 'terminate process %d' % os.getpid()
    for p in processes:
        print p.pid
    try:
        for p in processes:
            print 'process %d terminate' % p.pid
            p.terminate()
            p.join()
    except Exception as e:
        print str(e)

if __name__ == '__main__':
    print 'current main-process pid is %s' % os.getpid()

    for i in range(3):
        t = Process(target=fun, args=(str(i),))
        t.start()
        processes.append(t)

    signal.signal(signal.SIGTERM, term)

    try:
        for p in processes:
            p.join()
    except Exception as e:
        print str(e)
I launch the program with 'python mul_process.py' on Ubuntu 10.04.4 with Python 2.6. Once it is running, I use kill -15 from another tab to send SIGTERM to the main process pid: the main process receives the signal and exits after terminating all sub-processes. But when I use kill -15 on a sub-process pid, it does not work. The program stays alive and keeps running as before, and never prints the message defined in the function term, so it seems the sub-process does not receive the SIGTERM. As far as I know, a sub-process inherits the signal handler, but it doesn't work here. That is the first question.
Then I move the line signal.signal(signal.SIGTERM, term) so that it comes right after if __name__ == '__main__':, like this:
if __name__ == '__main__':
    signal.signal(signal.SIGTERM, term)
    print 'current main-process pid is %s' % os.getpid()

    for i in range(3):
        t = Process(target=fun, args=(str(i),))
        t.start()
        processes.append(t)

    try:
        for p in processes:
            p.join()
    except Exception as e:
        print str(e)
Launching the program and using kill -15 with the main process pid to send SIGTERM, the program receives the signal and calls the function term, but it neither kills any sub-process nor exits itself. That is the second question.

A few problems in your program: agreed that the sub-processes will inherit the signal handler in your 2nd code snippet, but the global "processes" list won't be shared. The full list of processes is only available to the main process; each sub-process only has the copy of the list that existed when it was forked (empty for the first child).
You can use a queue or pipe kind of mechanism to pass the list of processes to the sub-processes, but that brings another problem:
you terminate process 1, and the handler of process 1 tries to terminate processes 2 to 4.
Now process 2 also has the same handler,
so process 2's handler again tries to terminate all the other processes,
which will push your program into an infinite loop.
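One way to sidestep both issues, sketched below (Python 3 syntax; this is an illustrative sketch, not part of the original answer): keep the fan-out handler in the parent only and have each child put SIGTERM back to its default action, so kill -15 on a child pid simply kills that child, while kill -15 on the main pid tears the whole set down without any cascade of handlers.

import os
import signal
import sys
import time
from multiprocessing import Process

processes = []

def worker(x):
    # child process: undo the inherited handler so a plain SIGTERM kills it
    signal.signal(signal.SIGTERM, signal.SIG_DFL)
    while True:
        print('child %s (pid %d) alive' % (x, os.getpid()))
        time.sleep(5)

def term(sig_num, frame):
    # runs in the parent only: terminate every child, then exit
    for p in processes:
        if p.is_alive():
            p.terminate()
    for p in processes:
        p.join()
    sys.exit(0)

if __name__ == '__main__':
    signal.signal(signal.SIGTERM, term)   # children forked below inherit this, then reset it
    for i in range(3):
        p = Process(target=worker, args=(str(i),))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()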

Related

How to kill all the multiprocessing?

I have a Python function which calls another function inside a multiprocessing Process. How can I kill all the multiprocessing processes using Python?
This is the outer function, which is itself started as a separate process:
p = Process(target=api.queue_processor,
            args=(process_queue_in, process_queue_out, process_lock,
                  command_queue_in, command_queue_out, api.OBJECTIVE_0_5NA,
                  quit_event, log_file))
p.start()
This is the function that gets called:
def queue_processor(process_queue_in, process_queue_out, process_lock,
                    command_queue_in, command_queue_out, objective_type_NA,
                    quit_event, log_file=None):
    slide, roi_coordinates = process_obj[0]
    logger.info("Received for imaging {} with ROI {} and type {}".format(slide.tile_dir,
                                                                         roi_coordinates,
                                                                         slide.slide_type))
    p = Process(target=iad.image_slide,
                args=(slide.tile_dir, slide.slide_type, roi_coordinates, exception_queue,
                      objective_type_NA, logger, clog_path))
    p.start()  # process to kill
I want to kill the second process (the one started inside queue_processor, marked with the comment).
We can kill or terminate a process immediately by using the terminate() method. Here we use this method to terminate a child process created from a function before it has finished executing.
import multiprocessing
import time

def Child_process():
    print ('Starting function')
    time.sleep(5)
    print ('Finished function')

P = multiprocessing.Process(target = Child_process)
P.start()
print("My Process has terminated, terminating main thread")
print("Terminating Child Process")
P.terminate()
print("Child Process successfully terminated")

Python parent process is not catching SIGTERM/SIGINT signals when launching subprocess using os.system()

I have two python scripts as below -
parent.py
import os
import signal

shutdown = False

def sigterm_handler(signum, frame):
    global shutdown
    shutdown = True

if __name__ == '__main__':
    signal.signal(signal.SIGTERM, sigterm_handler)
    signal.signal(signal.SIGINT, sigterm_handler)
    os.chdir(os.path.dirname(os.path.abspath(__file__)))
    cmd = 'python child.py'
    while True:
        if shutdown == True:
            break
        print 'executing %s' % cmd
        exit_code = os.system(cmd)
        print 'Exit Code from %s > %s' % (cmd, exit_code)
    print 'Exiting Parent'
child.py
import signal
import time

shutdown = False

def sigterm_handler(signum, frame):
    global shutdown
    shutdown = True

if __name__ == '__main__':
    signal.signal(signal.SIGTERM, sigterm_handler)
    signal.signal(signal.SIGINT, sigterm_handler)
    while True:
        if shutdown == True:
            break
        print 'Child Process Running !!'
        time.sleep(1)
If I run parent.py and press Ctrl+C on the terminal, the child process exits and gets restarted by the parent, because the SIGINT is not being processed by the parent. I want to terminate both the parent and the child if Ctrl+C is pressed on the terminal. But for cases where the child exits because of some error rather than a Ctrl+C event, I want the parent to continue executing. I could have handled SIGCHLD in the parent, but that doesn't indicate whether the child exited because of a Ctrl+C event or something else. How would I achieve this behavior?
Below is the output I get if I run the parent:
executing python child.py
Child Process Running !!
Child Process Running !!
Child Process Running !!
Child Process Running !!
^CExit Code from python child.py > 2
executing python child.py
Child Process Running !!
Child Process Running !!
Child Process Running !!
Child Process Running !!
Child Process Running !!
^CExit Code from python child.py > 2
executing python child.py
Child Process Running !!
Child Process Running !!
Child Process Running !!
Child Process Running !!
^CExit Code from python child.py > 2
............................
............................
I think you'll have better luck with subprocess than os.system. In particular, I think you'll want to use subprocess with shell=False so that your child command is executed without a subshell (which might interfere with your ability to handle these kinds of signal-handling scenarios).
The code below does what you want, if I understand you correctly: CTRL-C causes both child and parent to stop; but if child dies for some other reason, parent will run the child again.
Here's a parent program similar to yours:
import signal
import subprocess

shutdown = False

def sigterm_handler(signum, frame):
    print 'parent got shutdown'
    global shutdown
    shutdown = True

if __name__ == '__main__':
    signal.signal(signal.SIGTERM, sigterm_handler)
    signal.signal(signal.SIGINT, sigterm_handler)
    cmd_args = ['python', 'child.py']
    while not shutdown:
        print 'executing', cmd_args
        try:
            subprocess.check_call(cmd_args)
        except subprocess.CalledProcessError:
            print 'child died'
    print 'Exiting Parent'
And here is a child program that runs for a while and then dies with a ZeroDivisionError.
import signal
import sys
import time

def sigterm_handler(signum, frame):
    print 'child got shutdown'
    sys.exit(0)

if __name__ == '__main__':
    signal.signal(signal.SIGTERM, sigterm_handler)
    signal.signal(signal.SIGINT, sigterm_handler)
    for i in range(3, -1, -1):
        print 'Child Process Running', i, i/i
        time.sleep(3)

Python in Linux: kill processes and sub-processes using the shell

Q: Given an ever-running Python program that runs another Python program as its child, how can one kill both processes from the shell [i.e. by fetching the processes' pids and then executing kill -9 <pid>]?
In more details:
I have a script as follows:
from subprocess import *

while True:
    try:
        Popen("python ...").wait()  # some script
    except:
        exit(1)
    try:
        Popen("python ...").wait()  # some script
    except:
        exit(1)
Now when I want to kill this process and its children, I:
Run "ps -ef | grep python" to fetch the pids.
Run kill -9 <pid> to kill the processes.
The result: the processes keep on running and show up again with new pids.
Is there a graceful way to enable the processes to gracefully exit when killed?
Is there a graceful way to enable the processes to gracefully exit when killed?
There isn't when you kill -9. Kill with SIGINT (-2) or SIGTERM (-15), and catch that using the signal module by registering a cleanup function that handles the graceful exit.
import sys
import signal

def cleanup_function(signum, frame):
    # clean up all resources
    sys.exit(0)

signal.signal(signal.SIGINT, cleanup_function)
signal.signal(signal.SIGTERM, cleanup_function)   # also catch kill -15
Here the parent waits for the child's exit status; only after it gets that status does it proceed to the next iteration.
Also, you can't catch SIGKILL (SIGKILL and SIGSTOP are signals that cannot be caught or ignored),
and -9 means SIGKILL.
You can install a signal handler for any of the other signals.
import os
import time

def my_job():
    print 'I am {0}, son/daughter of {1}'.format(os.getpid(), os.getppid())
    time.sleep(50)

if __name__ == '__main__':
    while True:
        pid = os.fork()
        if pid > 0:
            # if the child is killed (or exits), os.wait() returns a tuple
            # containing its pid and an exit status indication
            expired_child = os.wait()
            if expired_child:
                continue
        else:
            my_job()
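Putting the pieces together for the original Popen loop, here is a sketch (the script name 'some_script.py' is a placeholder): the parent keeps a reference to the currently running child, and the cleanup handler terminates that child before the parent exits, so kill -15 (or kill -2) on the parent pid takes both down gracefully, while a child that fails on its own just ends the loop.

import signal
import sys
from subprocess import Popen

current_child = None

def cleanup(signum, frame):
    # forward the shutdown to whichever child is running, then exit the parent
    if current_child is not None and current_child.poll() is None:
        current_child.terminate()
        current_child.wait()
    sys.exit(0)

signal.signal(signal.SIGTERM, cleanup)
signal.signal(signal.SIGINT, cleanup)

while True:
    current_child = Popen([sys.executable, 'some_script.py'])  # placeholder script
    if current_child.wait() != 0:
        sys.exit(1)   # child failed on its own: stop looping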

Start and Stop external process from python

Is there a way to start and stop a process from Python? I'm talking about a continuous process that I would normally stop with ctrl+z. I want to start the process, wait for some time and then kill it. I'm using Linux.
This question is not like mine because there the user only needs to run the process; I need to run it and also stop it.
I want to start the process, wait for some time and then kill it.
#!/usr/bin/env python3
import subprocess

try:
    subprocess.check_call(['command', 'arg 1', 'arg 2'],
                          timeout=some_time_in_seconds)
except subprocess.TimeoutExpired:
    print('subprocess has been killed on timeout')
else:
    print('subprocess has exited before timeout')
See Using module 'subprocess' with timeout
You can use the os.kill function to send -SIGSTOP (-19) and -SIGCONT (-18)
Example (unverified):
import os
import signal
from subprocess import check_output

def get_pid(name):
    # pidof prints the pid(s) as text, e.g. "1234\n"; take the first one as an int
    return int(check_output(["pidof", name]).split()[0])

def stop_process(name):
    pid = get_pid(name)
    os.kill(pid, signal.SIGSTOP)   # pause the process

def restart_process(name):
    pid = get_pid(name)
    os.kill(pid, signal.SIGCONT)   # resume the process
Maybe you can use the multiprocessing Process class:
from multiprocessing import Process
import os
import time

def sleeper(name, seconds):
    print "Sub Process %s ID# %s" % (name, os.getpid())
    print "Parent Process ID# %s" % (os.getppid())
    print "%s will sleep for %s seconds" % (name, seconds)
    time.sleep(seconds)

if __name__ == "__main__":
    child_proc = Process(target=sleeper, args=('bob', 5))
    child_proc.start()
    time.sleep(2)
    child_proc.terminate()
    #child_proc.join()
    #time.sleep(2)
    #print "in parent process after child process join"
    #print "the parent's parent process: %s" % (os.getppid())

Terminate a process and its subprocesses started with subprocess.Popen the right way (Windows and Linux)

I'm struggling with some processes I started with Popen and which start subprocesses. When I start these processes manually in a terminal every process terminates as expected if I send CTRL+C. But running inside a python program using subprocess.Popen any attempt to terminate the process only gets rid of the parent but not of its children.
I tried .terminate() and .kill() as well as .send_signal() with signal.SIGBREAK and signal.SIGTERM, but in every case I just terminate the parent process.
With this parent process I can reproduce the misbehavior:
#!/usr/bin/python
import time
import sys
import os
import subprocess
import signal

if __name__ == "__main__":
    print os.getpid(), "MAIN: start a process.."
    p = subprocess.Popen([sys.executable, 'process_to_shutdown.py'])
    print os.getpid(), "MAIN: started process", p.pid
    time.sleep(2)

    print os.getpid(), "MAIN: kill the process"
    # these just terminate the parent:
    #p.terminate()
    #p.kill()
    #os.kill(p.pid, signal.SIGINT)
    #os.kill(p.pid, signal.SIGTERM)
    os.kill(p.pid, signal.SIGABRT)
    p.wait()
    print os.getpid(), "MAIN: job done - ciao"
The real-life child process is manage.py from Django, which spawns a few subprocesses and waits for CTRL-C. But the following example seems to reproduce the problem, too:
#!/usr/bin/python
import time
import sys
import os
import subprocess

if __name__ == "__main__":
    timeout = int(sys.argv[1]) if len(sys.argv) >= 2 else 0
    if timeout == 0:
        p = subprocess.Popen([sys.executable, '-u', __file__, '13'])
        print os.getpid(), "just waiting..."
        p.wait()
    else:
        for i in range(timeout):
            time.sleep(1)
            print os.getpid(), i, "alive!"
            sys.stdout.flush()
    print os.getpid(), "ciao"
So my question in short: how do I kill the process in the first example and get rid of the child processes as well? On Windows os.kill(p.pid, signal.CTRL_C_EVENT) seems to work in some cases, but what's the right way to do it? And how does a terminal do it?
Like Henri Korhonen mentioned in a comment, grouping processes should help. Additionally, if you are on Windows and this is Cygwin Python that starts Windows applications, it appears Cygwin Python cannot kill the children. For those cases you would need to run TASKKILL. TASKKILL also takes a group parameter.
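Here is a sketch of the "group the processes" idea (not tested against the Django case above; 'process_to_shutdown.py' is the script from the question): on POSIX, start the child in its own session so it and all of its descendants share one process group that can be signalled at once; on Windows, fall back to TASKKILL /T, which walks the process tree.

import os
import signal
import subprocess
import sys
import time

def start(cmd):
    if os.name == 'posix':
        # run the child in a new session: it becomes the leader of a fresh
        # process group that its own children will belong to as well
        return subprocess.Popen(cmd, preexec_fn=os.setsid)
    return subprocess.Popen(cmd)

def stop(proc):
    if os.name == 'posix':
        os.killpg(os.getpgid(proc.pid), signal.SIGTERM)   # signal the whole group
    else:
        # /T terminates the process and every child process started by it
        subprocess.call(['taskkill', '/F', '/T', '/PID', str(proc.pid)])
    proc.wait()

if __name__ == '__main__':
    p = start([sys.executable, 'process_to_shutdown.py'])
    time.sleep(2)
    stop(p)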
