I've seen the signal module; it seems fine for installing signal handlers and setting up alarms. But is sending a signal to another process done via something like
os.system('kill -s SIGUSR2 8269')
And then is there a simple way to do this if the process is on a different host machine?
os.kill() for local processes, paramiko and the kill command for remote systems.
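For the remote case, a minimal sketch using paramiko (the host, user, and PID below are placeholders):

import os
import signal
import paramiko

# Local process: signal it directly.
os.kill(8269, signal.SIGUSR2)

# Remote process: run the kill command over SSH with paramiko.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('remote-host', username='user')  # placeholder host/user
stdin, stdout, stderr = client.exec_command('kill -s SIGUSR2 8269')
print(stdout.channel.recv_exit_status())  # 0 if the kill succeeded
client.close()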
For my setup, I have a host machine and a remote machine, such that I have direct ssh access from the host machine to the remote one.
I'm using the host machine to start up (and possibly stop) a server, which is a long running process. For that I use subprocess.Popen, where the command looks something like this:
ssh remote_machine "cd dir ; python3 long_running_process.py"
via
p = subprocess.Popen(['ssh', 'remote_machine', 'cd dir ; python3 long_running_process.py'])
From what I gather, even though the Popen call has shell=False, ssh still runs the cd and python commands under a shell (such as bash) on the remote side.
The problem arises when I want to stop this process, or, more crucially, when an exception is raised in the long running process: I then need to clean up and stop all processes on the host and, most importantly, on the remote machine.
Terminating the Popen process on the host machine does not suffice (actually, I send a SIGINT so that I can catch it on the remote side, but that doesn't work), as the long running process keeps running on the remote machine.
So if an exception is actually raised by THE CHILD PROCESSES of the long running process, the long running process itself is not stopped.
Should I have to ssh again to stop the processes? (though I don't know the children's PIDs upfront)
There are two machines, where one has a script wait_for_signal.sh, and the second one has a script named controller.py. The code of each script is shown below.
The purpose of controller.py is to spawn a subprocess that calls the wait_for_signal.sh script through ssh. When the controller needs to exit, it must send an interrupt to the remote process running wait_for_signal.sh.
wait_for_signal.sh
#!/bin/bash
# Leave the loop below when SIGINT or SIGHUP arrives.
trap 'break' SIGINT
trap 'break' SIGHUP

echo "Start loop"
while true; do
    sleep 1
done
echo "Script done"
controller.py
import os
import signal
import subprocess

remote_machine = 'user@ip'       # placeholder: SSH user and host
remote_path = 'path/to/script/'  # placeholder: directory containing the script

remote_proc = subprocess.Popen(['ssh', '-T', remote_machine,
                                './' + remote_path + 'wait_for_signal.sh'],
                               shell=False, stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE)
# do other stuff
os.kill(remote_proc.pid, signal.SIGINT)
Currently, the signal is sent to the process that started the ssh connection on the local machine, not to the remote machine. This stops the local process, but the remote process continues to execute.
How does ssh work and what type of signals does it send to the remote machine when it is stopped? How can I send the appropriate signal to the remote process, which was started by the ssh connection?
You're invoking ssh with the -T option, meaning that it won't allocate a PTY (pseudo-TTY) for the remote session. In this case, there's no way to signal the remote process through that ssh session.
The SSH protocol has a message to send a signal to the remote process. However, you're probably using OpenSSH for either the client or the server or both, and as far as I can tell, OpenSSH doesn't implement the signal message. So the OpenSSH client can't send the message, and the OpenSSH server won't act on it.
There is an SSH extension to send a "break" message which is supported by OpenSSH. In an interactive session, the OpenSSH client has an escape sequence that you can type to send a break to the server. The OpenSSH server handles break messages by sending a break to the PTY for the remote session, and unix PTYs will normally treat a break as a SIGINT. However, breaks are fundamentally a TTY concept, and none of this will work for remote sessions which don't have a PTY.
I can think of two ways to do what you want:
Invoke ssh with the -tt option instead of -T. This causes ssh to request a TTY for the remote session, so the remote process acts as if it were running interactively. Killing the local ssh process should then cause the remote process to receive a SIGHUP, and writing a Ctrl-C to the local ssh process's standard input should cause the remote process to receive a SIGINT (see the sketch after this list).
Open another ssh session to the remote host and use killall or some other command to signal the process that you want to signal.
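Here is a minimal sketch of the first approach, reusing the controller.py setup from above (the user@ip and script path are placeholders). Writing the Ctrl-C byte (0x03) to ssh's standard input lets the remote PTY translate it into a SIGINT for the remote process:

import subprocess

# -tt forces PTY allocation even though our stdin is not a terminal.
remote_proc = subprocess.Popen(
    ['ssh', '-tt', 'user@ip', './path/to/script/wait_for_signal.sh'],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

# ... do other stuff ...

# Send Ctrl-C; the remote PTY delivers it as SIGINT to wait_for_signal.sh.
remote_proc.stdin.write(b'\x03')
remote_proc.stdin.flush()
remote_proc.wait()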
I'm developing a python script that runs as a daemon in a linux environment. If and when I need to issue a shutdown/restart operation to the device, I want to do some cleanup and log data to a file to persist it through the shutdown.
I've looked around regarding Linux shutdown and I can't find anything detailing which signal, if any, is sent to applications at shutdown/restart time. I assumed SIGTERM, but my tests (which are not very good tests) seem to disagree with this.
When Linux is shutting down (and this is slightly dependent on what kind of init scripts you are using), it first sends SIGTERM to all processes to shut them down, and then, I believe, sends SIGKILL to force them to close if they're not responding to SIGTERM.
Please note, however, that your script may not receive the SIGTERM: init may send the signal to the shell your script is running in instead, and the shell could kill python without actually passing the signal on to your script.
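A minimal sketch of catching SIGTERM to do cleanup and logging before exiting (the log path and cleanup body are placeholders):

import signal
import sys

def on_sigterm(signum, frame):
    # Placeholder cleanup: persist whatever you need before the shutdown.
    with open('/var/log/mydaemon-shutdown.log', 'a') as f:
        f.write('received SIGTERM, shutting down\n')
    sys.exit(0)

signal.signal(signal.SIGTERM, on_sigterm)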
Hope this helps!
What is the recommended way to start long-running (bash) scripts on several remote servers via fabric, so that you can later re-attach to the process to check its status, possibly send it a SIGTERM, and get the exit code?
EDIT (10-Nov-2012):
In the meantime I found a question going in the same direction: HOW TO use fabric use with dtach,screen,is there some example
It seems that the preferred way would be to use screen or tmux.
http://www.fabfile.org/faq.html#why-can-t-i-run-programs-in-the-background-with-it-makes-fabric-hang
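A minimal sketch of the screen approach with Fabric 1.x (the session name, script name, and exit-code file are placeholders; writing $? to a file is one way to recover the exit code, since screen does not report it):

from fabric.api import run

# Start the script detached in a named screen session and record its exit code.
run("screen -dmS mysession bash -c './long_script.sh; echo $? > /tmp/mysession.exit'")

# Later: check whether the session is still alive...
run("screen -ls | grep mysession || true")

# ...signal the script if needed...
run("pkill -TERM -f long_script.sh")

# ...and read the exit code once it has finished.
run("cat /tmp/mysession.exit")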
I've got a Python script managing a gdb process on Windows, and I need to be able to send a SIGINT to the spawned process in order to halt the target process (managed by gdb).
It appears that there is only SIGTERM available in Win32, but clearly if I run gdb from the console and Ctrl+C, it thinks it's receiving a SIGINT. Is there a way I can fake this such that the functionality is available on all platforms?
(I am using the subprocess module, and python 2.5/2.6)
Windows doesn't have the unix signals IPC mechanism.
I would look at sending a CTRL-C to the gdb process.
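A minimal, Windows-only sketch of that idea using ctypes (the gdb command line is a placeholder). GenerateConsoleCtrlEvent with process group 0 delivers a Ctrl-C event to every process attached to this console, including our own script, so we temporarily ignore SIGINT ourselves:

import ctypes
import signal
import subprocess
import time

CTRL_C_EVENT = 0  # Win32 constant

# Spawn gdb sharing our console so the Ctrl-C event can reach it.
proc = subprocess.Popen(['gdb', 'target.exe'])  # placeholder command line

# Ignore the event in this script while sending it to the whole console.
old_handler = signal.signal(signal.SIGINT, signal.SIG_IGN)
ctypes.windll.kernel32.GenerateConsoleCtrlEvent(CTRL_C_EVENT, 0)
time.sleep(1)  # give the event time to be delivered before restoring
signal.signal(signal.SIGINT, old_handler)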