Send signals to a remote process with Python

There are two machines: one has a script named wait_for_signal.sh, and the other has a script named controller.py. The code of each script is shown below.
The purpose of controller.py is to spawn a subprocess that calls the wait_for_signal.sh script through ssh. When the controller needs to exit, it must send an interrupt to the remote process that runs wait_for_signal.sh.
wait_for_signal.sh
#!/bin/bash
trap 'break' SIGINT
trap 'break' SIGHUP
echo "Start loop"
while true; do
sleep 1
done
echo "Script done"
controller.py
import os
import signal
import subprocess

remote_machine = 'user@ip'
remote_path = 'path/to/script/'

remote_proc = subprocess.Popen(['ssh', '-T', remote_machine,
                                './' + remote_path + 'wait_for_signal.sh'],
                               shell=False, stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE)
# do other stuff
os.kill(remote_proc.pid, signal.SIGINT)
Currently, the signal is sent to the process that started the ssh connection on the local machine, not to the remote machine. This causes the local process to stop while the remote process continues to execute.
How does ssh work, and what kind of signals does it send to the remote machine when it is stopped? How can I send the appropriate signal to the remote process that was started by the ssh connection?

You're invoking ssh with the -T option, meaning that it won't allocate a PTY (pseudo-TTY) for the remote session. In this case, there's no way to signal the remote process through that ssh session.
The SSH protocol has a message to send a signal to the remote process. However, you're probably using OpenSSH for either the client or the server or both, and as far as I can tell, OpenSSH doesn't implement the signal message. So the OpenSSH client can't send the message, and the OpenSSH server won't act on it.
There is an SSH extension to send a "break" message which is supported by OpenSSH. In an interactive session, the OpenSSH client has an escape sequence that you can type to send a break to the server. The OpenSSH server handles break messages by sending a break to the PTY for the remote session, and unix PTYs will normally treat a break as a SIGINT. However, breaks are fundamentally a TTY concept, and none of this will work for remote sessions which don't have a PTY.
I can think of two ways to do what you want:
Invoke ssh with the -tt option instead of -T. This will cause ssh to request a TTY for the remote session. Running the remote process through a TTY will make it act like it's running interactively. Killing the local ssh process should cause the remote process to receive a SIGHUP. Writing a Ctrl-C to the local ssh process's standard input should cause the remote process to receive a SIGINT (see the sketch after this list).
Open another ssh session to the remote host and use killall or some other command to signal the process that you want to signal.
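A minimal sketch of the first approach, reusing the host and path placeholders from the question:
import subprocess

remote_machine = 'user@ip'          # placeholder from the question
remote_path = 'path/to/script/'     # placeholder from the question

# -tt forces ssh to allocate a remote PTY even though stdin is not a local TTY.
remote_proc = subprocess.Popen(
    ['ssh', '-tt', remote_machine, './' + remote_path + 'wait_for_signal.sh'],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)

# ... do other stuff ...

# Writing Ctrl-C (0x03) to ssh's stdin makes the remote PTY deliver
# SIGINT to wait_for_signal.sh.
remote_proc.stdin.write(b'\x03')
remote_proc.stdin.flush()
remote_proc.wait()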

Related

How to run two applications (deamon and client) at the same time in a single SSH session using Python Paramiko

I want to run a client and a daemon application that responds to the client at the same time.
The connection is established to SSH using Paramiko, but I could not run both the daemon and the client at the same time.
How to do this with Paramiko?
The expectation here is that the client provides input such as 1, 2, 3 and the daemon responds to each input.
Both run in the same SSH session.
Could anyone help me with this?
I assume you can use simple shell syntax to achieve what you need. You do not need any fancy code in Python/Paramiko.
Assuming a *nix server, see How do I run multiple background commands in bash in a single line?
To run (any) command in Paramiko, see Execute command and wait for it to finish with Python Paramiko
So probably like this:
# "&" starts deamon in the background, then client runs in the same shell.
stdin, stdout, stderr = ssh_client.exec_command("deamon & client")
# Merge stderr into stdout so a single read captures both.
stdout.channel.set_combine_stderr(True)
output = stdout.readlines()
If you need to run the two commands (deamon and client) independently for better control, you can start here:
Run multiple commands in different SSH servers in parallel using Python Paramiko
Except that you do not need to open multiple connections (SSHClient). You will just call SSHClient.exec_command twice on the same SSHClient instance.
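For example, a rough sketch (hostname and credentials are placeholders):
import paramiko

ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect('hostname', username='user', password='password')

# Each exec_command call opens its own channel on the same SSH connection.
daemon_stdin, daemon_stdout, daemon_stderr = ssh_client.exec_command('deamon')
client_stdin, client_stdout, client_stderr = ssh_client.exec_command('client')

# Feed input to the client channel; the daemon keeps running on its own channel.
client_stdin.write('1\n')
client_stdin.flush()
print(client_stdout.readline())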

How to interrupt child process running on a remote machine started by Popen on a host

For my setup, I have a host machine and a remote machine, such that I have direct ssh access from the host machine to the remote one.
I'm using the host machine to start up (and possibly stop) a server, which is a long running process. For that I use subprocess.Popen in which the command looks something like this:
ssh remote_machine "cd dir ; python3 long_running_process.py"
via
p = subprocess.Popen(['ssh', 'remote_machine', 'cd dir ; python3 long_running_process.py'])
From what I gathered, although the Popen call uses shell=False, ssh still runs the cd and python commands under a shell (such as bash) on the remote side.
The problem arises when I want to stop this process, or, more crucially, when an exception is raised in the long-running process: I need to clean up and stop all processes on the host and, most importantly, on the remote machine.
Terminating the Popen process on the host machine does not suffice (actually, I send a SIGINT so that I can catch it on the remote side, but that doesn't work), as the long-running process keeps running on the remote machine.
And if an exception is raised by the child processes of the long-running process, the long-running process itself is not stopped.
Do I have to ssh again to stop the processes? (I don't know the children's PIDs upfront.)
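One way to signal the remote process without knowing its PID upfront, echoing the killall suggestion in the first answer above (pkill -f and the pattern here are illustrative, not part of the original setup):
import subprocess

# Open a second ssh connection and send SIGINT to the remote process
# by matching its command line; adjust the pattern as needed.
subprocess.run(
    ['ssh', 'remote_machine', 'pkill', '-INT', '-f', 'long_running_process.py'],
    check=False,
)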

Why does the PID change when "ssh -f -N hostname" is called using subprocess in Python, and how can I terminate it reliably when my program ends?

I need to connect to a target device over a proxy to execute some commands on the target. To do this, I need to open an SSH tunnel to the proxy and then use a Python library to interact with the target over SSH. The library is not capable of accommodating proxied connections. This concept works when I use my shell directly to bring up the tunnel and then use the Python library to interact with the target. I now need to move the shell command into my Python program.
I tried opening an SSH tunnel using subprocess with the following code:
import shlex
import subprocess

config_file = "path/to/config"
cmd = shlex.split(f"ssh -f -N jumphost-tunnel -F {config_file}")
process = subprocess.Popen(
    cmd, stdin=None, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
)
This creates two problems.
Problem 1
When I call process.pid, the PID is different from the one I see when I execute ps aux | grep ssh on the OS. It is off by one (e.g., the PID from subprocess.pid is 44196, while the PID from ps aux is 44197).
I would like to understand why the PID is off by one. Is it because the SSH process puts itself in the background when called with ssh -f?
Problem 2
It leaves a zombie SSH tunnel behind, as I cannot terminate the tunnel with subprocess.kill() because I do not know the PID of the tunnel process.
How can I safely and reliably terminate the SSH tunnel when the program completes?
For some background, I need to tunnel to a proxy server and execute a command on a target device over SSH. The target device is a Juniper SRX. I'm using the PyEZ-junos library to interact with it. The library uses Paramiko under the hood to interact with the Junos device, but the library implementation does not make use of the ProxyCommand or ProxyJump directives made available by OpenSSH, hence the call to subprocess to initiate the tunnel to the proxy server. I don't want to change the internals of the PyEZ library to fix the tunneling issue.
I haven't checked, but it would surprise me if the off-by-one PID were not caused by ssh "backgrounding" itself: with -f, ssh forks a new process and lets the original process exit.
I don't think you really need the -f flag; subprocess.Popen starts a new process anyway.
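A sketch of that suggestion, reusing the config path and host alias from the question: drop -f so the PID you hold is the tunnel itself, and register cleanup for program exit.
import atexit
import shlex
import subprocess

config_file = "path/to/config"

# Without -f, ssh does not fork into the background, so process.pid
# is the tunnel's real PID and terminate() reaches it directly.
cmd = shlex.split(f"ssh -N jumphost-tunnel -F {config_file}")
process = subprocess.Popen(cmd)

# Tear the tunnel down when the program exits.
atexit.register(process.terminate)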

python subprocess run a remote process in background and immediately close the connection

The task is to use python to run a remote process in background and immediately close the ssh session.
I have a remote script named 'start' under server:PATH/. The start script does nothing but launch a long-lived background program; it has one line:
nohup PATH/Xprogram &
When I use the python subprocess module to call my remote 'start' script, it does start OK. But the issue is that the SSH connection seems to persist: I keep getting stdout from the remote Xprogram (since it is a long-lived program that writes to stdout). Does this indicate the ssh connection is still there?
All I need is to call the start script without blocking and then forget about it (leave the long-lived program running, close ssh, release resources).
my python function call looks like this:
ret = subprocess.Popen(["ssh", "xxx@servername", "PATH/start"])
If I use ret.terminate() after the command, it kills the long-lived program too.
I have also tried the spur module; basically the same thing.
Update: @Dunes' answer solves the problem. Based on his answer, I did more digging and found this link very helpful.
My understanding is that if any file descriptor is still held by your process (e.g. stdout held by my Xprogram), then the SSH session won't exit. Redirecting stdout/stderr to /dev/null effectively closes those file descriptors and lets the SSH session exit normally.
Solution:
ret = subprocess.Popen(["ssh", "xxx@servername", "PATH/start >/dev/null 2>&1"])
After playing about a bit, I found that nohup doesn't seem to properly disconnect the child process from the parent ssh session (as it should). This means you have to manually close stdout or point it at a file, e.g.:
Using bash:
ssh user@host "nohup PATH/XProgram >&- &"
Shell agnostic (as far as I know):
ssh user@host "nohup PATH/XProgram >/dev/null 2>&1 &"
In python:
from shlex import split
from subprocess import Popen

# shlex.split keeps the quoted remote command as a single ssh argument.
p = Popen(split('ssh user@host "nohup PATH/XProgram >&- &"'))
p.communicate()  # returns (None, None)
Try
subprocess.Popen(["ssh", "xxx#servername", "nohup PATH/start & disown"])
For me,
subprocess.Popen(["ssh", "xxx#servername", "nohup sleep 1000 & disown"])
lets my script exit immediately while leaving sleep running on the server for a while.
When your script dies, an ssh process is left behind on your local system; killing it doesn't kill the remote process.

How do you send a signal to a remote process in python?

I've seen the signal module; it seems alright for installing signal handlers and setting up alarms. But is sending a signal to another process done via, for example,
os.system('kill -s SIGUSR2 8269')
And then, is there a simple way to do this if the process is on a different host machine?
os.kill() for local processes; Paramiko and the kill command for remote systems.
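For example (the host, credentials, and PIDs are placeholders):
import os
import signal

import paramiko

# Local process: signal it directly.
os.kill(8269, signal.SIGUSR2)

# Remote process: run the kill command over SSH with Paramiko.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('remote_host', username='user')
client.exec_command('kill -s SIGUSR2 8269')
client.close()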
