b.py
import subprocess
import time

f = subprocess.Popen(['python', 'a.py'])
time.sleep(3000)
a.py
import time
time.sleep(1000)
Run python b.py and press Ctrl+C: both processes terminate. However, if you instead send SIGINT to just the parent process b.py with kill -2 xxxx, the child process a.py keeps running.
Ctrl-C at your terminal typically sends SIGINT to all processes in the foreground process group. Both your parent and your child process are in this process group.
For a more detailed explanation, see for example The TTY demystified, or the more technical treatment by Kirk McKusick in Process Groups and Sessions.
If you kill just the parent process, the child becomes parentless and is reparented to PID 1 (init). You can see this in the output of ps, too. Since your subprocess never receives a signal, it simply continues running.
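To see the same thing from Python, here is a minimal sketch (POSIX-only; Popen leaves the child in the parent's process group by default): signalling the whole group reaches the child as well, much as Ctrl-C at the terminal does, whereas kill -2 on the parent's PID alone would not.

import os
import signal
import subprocess

child = subprocess.Popen(['sleep', '1000'])   # child inherits our process group

signal.signal(signal.SIGINT, signal.SIG_IGN)  # so we survive our own signal below
os.killpg(os.getpgid(0), signal.SIGINT)       # signal every process in the group

child.wait()
print(child.returncode)  # -2 on Linux: the child was killed by SIGINT too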
Related
Normally, when a process forks a child process, it receives a SIGCHLD signal when that child terminates. But if the fork happens in a thread other than the application's main thread, the parent won't receive anything.
I tested this in Python on several GNU/Linux machines, all of them x86_64.
My question is: is this Python behaviour, or behaviour defined by the POSIX standard? And in either case, why is it so?
Here is sample code to reproduce this behaviour.
import signal
import multiprocessing
import threading
import time

def signal_handler(*args):
    print "Signal received"

def child_process():
    time.sleep(100000)

def child_thread():
    p = multiprocessing.Process(target=child_process)
    p.start()
    time.sleep(100000)

signal.signal(signal.SIGCHLD, signal_handler)

p = multiprocessing.Process(target=child_process)
p.start()

t = threading.Thread(target=child_thread)
t.start()

time.sleep(100000)
print "Waked"
time.sleep(100000)
Then send a SIGKILL to each child. When the first child (the one forked in the main thread) terminates, signal_handler is called. But when the second child terminates, nothing happens.
I also tested the same scenario with os.fork instead of multiprocessing.Process, with the same result.
It is documented Python behavior:
only the main thread can set a new signal handler, and the main thread will be the only one to receive signals (this is enforced by the Python signal module, even if the underlying thread implementation supports sending signals to individual threads). This means that signals can’t be used as a means of inter-thread communication. Use locks instead.
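The first restriction is easy to observe directly. A minimal sketch (CPython-specific): attempting to install a handler from a non-main thread raises ValueError.

import signal
import threading

def try_install():
    try:
        signal.signal(signal.SIGCHLD, lambda *args: None)
    except ValueError as e:
        print(e)  # "signal only works in main thread ..."

t = threading.Thread(target=try_install)
t.start()
t.join()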
Using python3/linux/bash:
gnr@localhost: cat my_script
#!/usr/bin/python3
import time, pexpect
p = pexpect.spawn('sleep 123')
p.sendintr()
time.sleep(1000)
This works fine when run as is (i.e. my_script starts a sleep 123 child process and then sends it a SIGINT, which kills sleep 123). However, when I background my_script as a grandchild process, it is no longer able to kill the sleep 123 command:
gnr@localhost: (my_script &> /dev/null &)
Anyone know what's going on here, or how to change my_script or pexpect so it can still send SIGINT to its child process?
I'm thinking this has something to do with the backgrounding causing there to be no controlling terminal, and that maybe I need to create a new pty?
Update: I never figured out how to create a pty (though ssh'ing into localhost with the -t option worked) - I ended up doing an os.fork() to background a child process rather than using (my_script &> /dev/null &), which works because (I'm guessing) the controlling terminal is not immediately closed.
Are you sure the process isn't being killed? I would expect it to show <defunct> in the process list, as the process that spawned it is now sitting in a sleep and proper cleanup can't complete until that sleep finishes. <defunct> processes have been killed; their parents just haven't done the cleanup yet.
If you can somehow modify your code so that the parent actually goes through its normal processing and shuts down the child (spawn), then it should work. Although clumsy, this might work:
import time, pexpect, os

newpid = os.fork()
if newpid == 0:
    # Child
    p = pexpect.spawn('sleep 123')
    p.sendintr()
else:
    # Parent
    time.sleep(1000)
In this case we fork our own child, which handles the spawn and does the kill. Since our child isn't blocking on its own sleep, it exits gracefully, which includes properly cleaning up the process it killed. In the meantime the main (parent) process is waiting on a sleep.
After your comment it occurred to me that although I was placing my script in the background at the bash prompt, I wasn't doing it the same way as you.
I was using
(expecttest.py > /dev/null 2>&1 &)
This redirects stdout and stderr to /dev/null and puts the process in the background.
If I take your original code and do a terminate rather than a sendintr, using your invocation from the command shell, it works. It seems that sleep 123 doesn't respond to what pexpect's sendintr is doing in that case.
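For reference, a sketch of that variant (the same script as above with terminate() swapped in): pexpect's terminate() signals the child via os.kill rather than writing the interrupt character to the pty, so it doesn't depend on terminal semantics.

#!/usr/bin/python3
import time, pexpect

p = pexpect.spawn('sleep 123')
p.terminate()      # deliver termination signals directly instead of ^C
time.sleep(1000)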
How can I run a command from a Python script and delegate signals like Ctrl+C to it?
I mean, when I run e.g.:
from subprocess import call
call(["child_proc"])
I want child_proc to handle Ctrl+C
I'm guessing that your problem is that you want the subprocess to receive Ctrl-C and not have the parent Python process terminate? If your child process initialises its own signal handler for Ctrl-C (SIGINT) then this might do the trick:
import signal, subprocess
old_action = signal.signal(signal.SIGINT, signal.SIG_IGN)
subprocess.call(['less', '/etc/passwd'])
signal.signal(signal.SIGINT, old_action) # restore original signal handler
Now you can hit Ctrl-C (which generates SIGINT), Python will ignore it but less will still see it.
However, this only works if the child sets up its signal handlers properly (otherwise they are inherited from the parent).
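If the child doesn't install its own SIGINT handler, one possible workaround (a minimal sketch, POSIX-only since preexec_fn is not supported on Windows) is to reset SIGINT to its default action in the child between fork and exec, since the SIG_IGN set in the parent would otherwise be inherited:

import signal, subprocess

old_action = signal.signal(signal.SIGINT, signal.SIG_IGN)  # parent ignores Ctrl-C
subprocess.call(
    ['less', '/etc/passwd'],
    # runs in the child after fork(), before exec(): undo the inherited SIG_IGN
    preexec_fn=lambda: signal.signal(signal.SIGINT, signal.SIG_DFL),
)
signal.signal(signal.SIGINT, old_action)  # restore original signal handler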
I'm working on a project to produce a shell in Python, and one important feature is the ability to pause and background a running subprocess. However the only methods I've found of pausing the subprocess appear to kill it instantly, so I can't resume it later.
Our group has tried catching KeyboardInterrupt:
try:
    process = subprocess.Popen(processName)
    process.communicate()
except KeyboardInterrupt:
    print "control character pressed"
and also using signals:
def signal_handler(signal, frame):
    print 'control character pressed'

signal.signal(signal.SIGINT, signal_handler)
process.communicate()
Another issue is that both of these only work when Ctrl-C is pressed; nothing else has any effect (I imagine this is why the subprocesses are being killed).
The reason your process is dying is that you are allowing the Ctrl+C to reach the subprocess. If you use the parameter preexec_fn=os.setpgrp as part of the Popen call, the child is placed in a different process group from the parent.
Ctrl+C sends a SIGINT to the complete process group, but since the child is in a different process group, it doesn't receive the SIGINT and thus doesn't die.
After that, the send_signal() function can be used to send a SIGSTOP to the child process whenever you want to pause it, and a SIGCONT to resume it.
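A sketch of the combination (POSIX-only; sleep 1000 stands in for whatever command the shell is running):

import os
import signal
import subprocess

# The child gets its own process group, so Ctrl+C at the terminal
# (which signals our group) no longer reaches it.
process = subprocess.Popen(['sleep', '1000'], preexec_fn=os.setpgrp)

process.send_signal(signal.SIGSTOP)  # pause the child, like Ctrl+Z
# ... later ...
process.send_signal(signal.SIGCONT)  # resume it in the background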
I have a Python 2.7 multiprocessing Process which will not exit when its parent process exits. I've set the daemon flag, which should force it to exit on parent death. The docs state that:
"When a process exits, it attempts to terminate all of its daemonic child processes."
p = Process(target=_serverLaunchHelper, args=args)
p.daemon = True
print p.daemon # prints True
p.start()
When I terminate the parent process via a kill command, the daemon is left alive and running (which blocks the port on the next run). The child process starts a SimpleHTTPServer and calls serve_forever without doing anything else. My guess is that the "attempts" part of the docs means that the blocking server call prevents the normal shutdown and the child gets orphaned as a result. I could have the child push the serving to another Thread and have the main thread check for parent process id changes, but this seems like a lot of code just to replicate the daemon functionality.
Does someone have insight into why the daemon flag isn't working as described? This is repeatable on Windows 8 64-bit and an Ubuntu 12 32-bit VM.
A boiled down version of the process function is below:
import SocketServer
from SimpleHTTPServer import SimpleHTTPRequestHandler as Handler

def _serverLaunchHelper(port):
    httpd = SocketServer.TCPServer(("", port), Handler)
    httpd.serve_forever()
When a process exits, it attempts to terminate all of its daemonic child processes.
The key word here is "attempts". Also, "exits".
Depending on your platform and implementation, it may be that the only way to get daemonic child processes terminated is to do so explicitly. If the parent process exits normally, it gets a chance to do so explicitly, so everything is fine. But if the parent process is terminated abruptly, it doesn't.
For CPython in particular, if you look at the source, terminating daemonic processes is handled the same way as joining non-daemonic processes: by walking active_children() in an atexit function. So, your daemons will be killed if and only if your atexit handlers get to run. And, as that module's docs say:
Note: the functions registered via this module are not called when the program is killed by a signal not handled by Python, when a Python fatal internal error is detected, or when os._exit() is called.
Depending on how you're killing the parent, you might be able to work around this by adding a signal handler to intercept abrupt termination, as sketched below. But you might not; e.g., on POSIX, SIGKILL is not interceptable, so if you kill -9 $PARENTPID, this isn't an option.
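A minimal sketch of that workaround, assuming the parent is killed with a catchable signal such as SIGTERM: converting the signal into a normal exit lets the atexit machinery run, which is what terminates the daemonic children.

import signal
import sys
import time
from multiprocessing import Process

def child():
    time.sleep(100000)

def handle_term(signum, frame):
    sys.exit(0)  # raise SystemExit so atexit handlers (and daemon cleanup) run

signal.signal(signal.SIGTERM, handle_term)

p = Process(target=child)
p.daemon = True
p.start()
time.sleep(100000)  # kill <parent pid> now takes the daemon down too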
Another option is to kill the process group instead of just the parent process. For example, if your parent has PID 12345, kill -- -12345 on Linux will kill it and all of its children (assuming you haven't done anything fancy).