I am using Paramiko for SSH to my remote devices, and I communicate with them through the Paramiko shell (the invoke_shell() method) for interactivity.
The main problem with the Paramiko shell (as I've read in many answers here) is that there is no guarantee that the methods which indicate the server has finished writing, such as recv_exit_status() and exit_status_ready(), will behave as expected.
So I created a mechanism that reads the last line of the output from the shell and checks whether that line contains only the shell's prompt; in that case the command execution has finished.
This works as long as the prompt does not change.
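A rough sketch of that kind of prompt-detection loop, where shell_chan is the channel returned by invoke_shell() and PROMPT is a placeholder for the device's prompt:

import time

PROMPT = 'device# '   # hypothetical prompt string of the remote device

def wait_for_prompt(shell_chan, timeout=30):
    """Read from the shell until the last line of output is just the prompt."""
    output = ''
    deadline = time.time() + timeout
    while time.time() < deadline:
        if shell_chan.recv_ready():
            output += shell_chan.recv(4096).decode('utf-8', errors='replace')
            lines = output.splitlines()
            if lines and lines[-1].strip() == PROMPT.strip():
                return output
        else:
            time.sleep(0.1)
    raise TimeoutError('Prompt not seen before timeout')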
Now I've read here (Wait until task is completed on Remote Machine through Python) that the exec_command() method returns a tuple (stdin, stdout, stderr) and that calling stdout.channel.recv_exit_status() will block the channel until it is done.
My question is: if I create such a tuple and call stdout.channel.recv_exit_status() on the stdout I get for each interactive command (meaning I do shell_chan.send('something') and then stdout.channel.recv_exit_status()), will it do the job? Will it return only when the interactive command has finished?
Channel.recv_exit_status signals that the channel has closed.
The "shell" channel closes only when the shell closes. A "shell" is a black box with input and output. Nothing else. There's no way SSH, let alone Paramiko, will be able to tell anyhow, when individual commands in the shell have finished. The shell_chan.send('something') is not a signal to SSH to execute a command. It simply sends an arbitrary input to the "shell". Nothing else. It's totally at the shell disposition how the input is interpreted. All that the SSH/Paramiko gets back is an arbitrary unstructured output. Paramiko cannot interpret it anyhow for you.
If you need to be able to tell when a command has finished, you need to use the "exec" channel (the SSHClient.exec_command method in Paramiko). You should not use the "shell" channel, unless you are implementing an interactive SSH terminal client like PuTTY (or in rare cases when you talk to a limited SSH server that does not implement the "exec" channel, as is often the case with dedicated devices such as routers, switches, etc.).
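A minimal sketch of the "exec" channel approach (host name, credentials and the command are placeholders):

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('host', username='user', password='password')

# Each exec_command call opens its own "exec" channel running exactly one command.
stdin, stdout, stderr = ssh.exec_command('ls -l /tmp')
output = stdout.read().decode()             # read until the command closes its output
status = stdout.channel.recv_exit_status()  # blocks until the command has finished
print(status, output)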
For related questions, see:
Execute multiple dependent commands individually with Paramiko and find out when each command finishes
How to get each dependent command execution output using Paramiko exec_command
Use the same SSH object to issue "exec_command()" multiple times in Paramiko
Execute (sub)commands in secondary shell/command on SSH server in Paramiko
Invoke multiple commands inside a process in interactive ssh mode
Executing command using Paramiko exec_command on device is not working
What is the difference between exec_command and send with invoke_shell() on Paramiko?
I want to run a client and a daemon application that responds to the client at the same time.
The connection is established to SSH using Paramiko, but I could not run both the daemon and the client at the same time.
How to do this with Paramiko?
Here the expectation is that the client provides input such as 1, 2, 3 and the daemon responds to each input.
Both run in the same SSH session.
Could anyone help me with this?
I assume you can use simple shell syntax to achieve what you need. You do not need any fancy code in Python/Paramiko.
Assuming *nix server, see How do I run multiple background commands in bash in a single line?
To run (any) command in Paramiko, see Execute command and wait for it to finish with Python Paramiko
So probably like this:
stdin, stdout, stderr = ssh_client.exec_command("deamon & client")
stdout.channel.set_combine_stderr(True)
output = stdout.readlines()
If you need to somehow run the two commands (daemon and client) independently for better control, you can start here:
Run multiple commands in different SSH servers in parallel using Python Paramiko
Except that you do not need to open multiple connections (SSHClient). You will just call SSHClient.exec_command twice on the same SSHClient instance.
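A rough sketch of that, reusing a single ssh_client instance; the ./daemon and ./client names are placeholders for the two programs:

# Each call opens a separate channel on the same SSH connection.
daemon_in, daemon_out, daemon_err = ssh_client.exec_command('./daemon')
client_in, client_out, client_err = ssh_client.exec_command('./client')

# Feed input to the client and read its response line by line.
client_in.write('1\n')
client_in.flush()
print(client_out.readline())

Both channels share the same underlying Transport, so the daemon keeps running while you interact with the client.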
I am working with Paramiko 2.7.1, using a simple client implementation for running commands on remote SSH servers.
On most of my hosts, it works great. Input commands go out, output (if any) comes back.
One specific type of host (an IBM VIOS partition to be precise) is giving me headaches in that the commands execute, but the output is always empty.
I have used PuTTY in an interactive session to log all SSH packets and check for any differences and, at least during an interactive session, no differences are present between a working and a non-working host.
I have enabled Paramiko logging with:
import logging
from paramiko.util import log_to_file

logging.basicConfig(level=logging.DEBUG)
logging.getLogger("paramiko").setLevel(logging.DEBUG)
log_to_file('ssh.log')
But the log doesn't dump each packet. I have searched for any parameters or methods that would dump those packets, but I've come up empty.
Wireshark is not an option since we are talking about an encrypted connection.
I would prefer to keep using exec_command instead of having to refactor everything and adapt to using an SSH shell.
So, in the end: is there any way to dump the entire SSH session with Paramiko? I can handle either SSH packets or raw data.
Edit 1: I remembered that PuTTY's plink.exe can run SSH exec commands, so I used it to compare both SSH servers' output and stumbled onto the solution to my base problem: https://www.ibm.com/support/pages/unable-execute-commands-remotely-vio-server-padmin-user-ssh
Still, I'd rather have captured the session with Paramiko, since I will not always be able to simulate with other tools...
In addition to enabling logging, call Transport.set_hexdump():
client.get_transport().set_hexdump(True)
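A self-contained sketch combining the logging from the question with the hex dump; the host, credentials and command are placeholders:

import logging
import paramiko
from paramiko.util import log_to_file

logging.basicConfig(level=logging.DEBUG)
log_to_file('ssh.log')                      # Paramiko's own logger goes to this file

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('host', username='user', password='password')
client.get_transport().set_hexdump(True)    # hex-dump raw packet payloads to the log

stdin, stdout, stderr = client.exec_command('uname -a')
print(stdout.read().decode())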
Regarding your original problem, see also:
Command executed with Paramiko does not produce any output
I connect over SSH via the terminal (on Mac) and also run a Paramiko Python script, and for some reason the two sessions seem to behave differently. The PATH environment variable is different in the two cases.
This is the code I run:
import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('host', username='myuser', password='mypass')
stdin, stdout, stderr = ssh.exec_command('echo $PATH')
print(stdout.readlines())
Any idea why the environment variables are different?
And how can I fix it?
The SSHClient.exec_command by default does not allocate a pseudo terminal for the session. As a consequence, a different set of startup scripts is (or might be) sourced (in particular, for non-interactive sessions .bash_profile is not sourced), and/or different branches in the scripts are taken, based on the absence or presence of the TERM environment variable.
To emulate the default Paramiko behavior with ssh, use the -T switch:
ssh -T myuser@host
See the ssh man page:
-T Disable pseudo-tty allocation.
Conversely, to emulate the default ssh behavior with Paramiko, set the get_pty parameter of exec_command to True:
def exec_command(self, command, bufsize=-1, timeout=None, get_pty=False):
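For example, reusing the ssh client from the question:

stdin, stdout, stderr = ssh.exec_command('echo $PATH', get_pty=True)
print(stdout.readlines())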
Though rather than working around the issue by allocating the pseudo terminal in Paramiko, you should instead fix your startup scripts to set the same PATH for all sessions.
For that see Some Unix commands fail with "<command> not found", when executed using Python Paramiko exec_command.
Working with the Channel object instead of the SSHClient object solved my problem.
chan = ssh.invoke_shell()
chan.send('echo $PATH\n')
print(chan.recv(1024))
For more details, see the documentation
I've been using Paramiko today to work with a Python SSH connection, and it is useful.
However, one thing I'd really like to be able to do over SSH is to utilise some Pythonic sugar. As far as I can tell, I can only use the inbuilt Paramiko functions, and if I want to do anything using Python on the remote side, I would need to use a script which I have placed there and call it.
Is there a way I can send Python commands over the SSH connection rather than having to make do only with the limitations of the Paramiko SSH connection? Since I am running the SSH connection through Paramiko within a Python script, it would only seem right that I could, but I can't see a way to do so.
RPyC could be what you're looking for. It gives you access to another machine's Python environment directly from the local script.
>>> import rpyc
>>> conn = rpyc.classic.connect("someremotehost.com")
>>> conn.modules.sys.path
['D:\\projects\\rpyc\\servers', 'd:\\projects', .....]
To establish a connection over SSL or SSH, see:
http://rpyc.sourceforge.net/docs/secure-connection.html#ssl
Well, that is what SSH was created for: to be a secure shell, where commands are executed on the remote machine (you can think of it as if you were sitting at the remote computer itself, and even then that doesn't mean you can execute Python commands in a shell, although you are physically interacting with the machine).
You can't send Python commands as such, simply because Python does not have commands; it executes Python scripts.
So what you can do is write a wrapper that performs the following steps (a rough sketch follows below):
Wrap a piece of Python code into file.
scp it to the remote machine.
Execute it there.
Remove the script (or cache it for further execution).
Basically, shell commands are themselves programs on the remote machine, so you can think of those scripts as shell extensions (e.g. Python programs with command-line parameters).
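A rough sketch of that wrap / copy / execute / clean-up cycle, using Paramiko's SFTP client instead of scp; the host, credentials, remote path and python3 interpreter name are all placeholders:

import paramiko

code = "print('hello from the remote side')\n"

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('host', username='user', password='password')

# 1. Wrap the Python code into a file and copy it to the remote machine.
sftp = ssh.open_sftp()
with sftp.open('/tmp/snippet.py', 'w') as remote_file:
    remote_file.write(code)

# 2. Execute it there.
stdin, stdout, stderr = ssh.exec_command('python3 /tmp/snippet.py')
print(stdout.read().decode())

# 3. Remove the script (or keep it cached for further execution).
sftp.remove('/tmp/snippet.py')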