How to use paramiko.ServerInterface to respond to an exec request - python

An OpenSSH server on Ubuntu, for example, can respond to ssh [username@]hostname [command]: it runs that single [command] and sends back the response.
For example,
me@ubuntu:~$ ssh 127.0.0.1 ls
Desktop Documents Downloads Music Pictures Public Videos
me@ubuntu:~$
The questions are:
How can I achieve this kind of server with paramiko.ServerInterface?
How can I connect the channel's input and output to a local shell's stdin and stdout?
And what if the background command handler is not a standard shell, but an implementation based on something like sshim?

paramiko.ServerInterface actually gives you a hook to implement: with check_channel_exec_request you can run the command locally and use the channel to send, recv, or close.
check_channel_exec_request(channel, command)
Determine if a shell command will be executed for the client.
If this method returns True, the channel should be connected
to the stdin, stdout, and stderr of the shell command.
The default implementation always returns `False`.
Parameters:
channel (Channel) – the Channel the request arrived on.
command (str) – the command to execute.
Returns:
True if this channel is now hooked up to the stdin, stdout, and stderr of the executing command
False if the command will not be executed.
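For illustration, here is a minimal, untested sketch of such a server. The listening port, the generated host key, and the accept-everything password check are assumptions made purely for the example; the essential part is that check_channel_exec_request returns True quickly and hands the real work (running the command and writing its output back over the channel) to a separate thread.
import socket
import subprocess
import threading

import paramiko


class ExecOnlyServer(paramiko.ServerInterface):
    def get_allowed_auths(self, username):
        return 'password'

    def check_auth_password(self, username, password):
        # Accept any credentials -- purely for demonstration.
        return paramiko.AUTH_SUCCESSFUL

    def check_channel_request(self, kind, chanid):
        return paramiko.OPEN_SUCCEEDED

    def check_channel_exec_request(self, channel, command):
        if isinstance(command, bytes):          # paramiko may pass bytes
            command = command.decode('utf-8', 'replace')

        def run():
            # Run the requested command locally, feed its output back
            # through the channel, then report the exit status and close.
            proc = subprocess.Popen(command, shell=True,
                                    stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE)
            out, err = proc.communicate()
            channel.sendall(out)
            channel.sendall_stderr(err)
            channel.send_exit_status(proc.returncode)
            channel.close()

        threading.Thread(target=run).start()
        return True


sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(('', 2200))                           # assumed port
sock.listen(1)
client, addr = sock.accept()

transport = paramiko.Transport(client)
transport.add_server_key(paramiko.RSAKey.generate(2048))
transport.start_server(server=ExecOnlyServer())
transport.join()                                # serve this one connection
With this running, something like ssh -p 2200 anyuser@127.0.0.1 ls should behave like the OpenSSH example above; for a non-shell backend such as sshim, the run() body would call into that implementation instead of subprocess.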

Related

Python telnet: how to use when username and password are not required

I have a requirement to telnet from one Windows PC to another. I would like to log in and issue commands (and see replies) using Python.
It is very easy to achieve this in my local cmd window:
Call up cmd and type 'telnet REMOTECOMPUTERNAME'.
Reply in window is:
'Welcome to the ChyronHego telnet server on REMOTECOMPUTERNAME'
I can issue commands (e.g. 'V\6\1\\') by typing directly into prompt.
Remote system responds by carrying out task or issuing error message in prompt.
(I have tried using telnetlib and system.process and os without any result so far)
Does anyone know how I can achieve this programmatically using Python?
Many thanks in advance.
Ian
You can use the subprocess module to run the telnet command on Windows. Additional parameters can be added to the list as separate elements, e.g. ["telnet", "HOST", "V"].
import subprocess
# Attach pipes so the Python process can drive the telnet client and read its replies.
p = subprocess.Popen(["telnet", "HOST"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
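From there, commands are sent through p.stdin and the replies collected from p.stdout, for instance with communicate(). This is only a hedged sketch: it assumes the telnet client actually reads commands from the piped stdin, which the Windows client does not always do when no real console is attached.
# Send a command followed by "quit" and collect everything the client printed
# (Python 2 style, matching the rest of this page).
out, err = p.communicate("V\\6\\1\\\\\r\nquit\r\n")
print out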

Python paramiko SSH nothing returned when blocked by dialog

I'm using paramiko to ssh into a server and then from this server ssh into another one just to get the fingerprint, so I'd like to see the message that asks whether I want to accept that fingerprint or not.
I'm using this code:
stdin,stdout,stderr = self.ssh.exec_command(command)
outlines = stdout.readlines()
resp = ''.join(outlines)
return resp
but it returns nothing. I think it's because at some point the ssh command waits for me to input yes/no to accept the fingerprint. I checked man ssh and didn't find a way to automatically answer no to this question.
I've found this: Reading output of Top command using Paramiko which teaches how to invoke a shell. But what if I want to receive the text just up to the stopping point?
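One untested way to get just the text up to that stopping point is to run the nested ssh in an interactive shell (as in the linked answer) and keep reading from the channel until the fingerprint prompt shows up. The prompt substring '(yes/no' and the 30-second deadline are assumptions.
import time

chan = self.ssh.invoke_shell()
chan.send(command + '\n')

buf = ''
deadline = time.time() + 30                 # assumed timeout
while '(yes/no' not in buf and time.time() < deadline:
    if chan.recv_ready():
        buf += chan.recv(4096)              # bytes on Python 3, needs decoding there
    else:
        time.sleep(0.1)
return buf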

Use the same SSH object to issue "exec_command()" multiple times in Paramiko

I want to use the same SSH object to issue exec_command() multiple times in Paramiko module in Python.
The objective is to get output from the same session.
Is there a way to do it? exec_command() closes the channel once it finishes executing a command, and thereafter a new SSH object is needed to execute the following command; but then the sessions will differ, which I do not want.
Code
import os, sys
import connectlibs as ssh
s = ssh.connect("xxx.xx.xx.xxx", "Admin", "Admin")
channel = s.invoke_shell()
channel.send("net use F: \\\\xyz.xy.xc.xa\\dir\n")
>>>32
channel.send("net use")
>>>7
channel.recv(500)
'Last login: Tue Jun 2 23:52:29 2015 from xxx.xx.xx.xx\r\r\n\x1b]0;~\x07\r\r\n\x1b[32mAdmin#WIN \x1b[33m~\x1b[0m\r\r\n$ net use F: \\\\xyz.xy.xc.xa\\dir\r\nSystem error 67 has occurred.\r\r\n\r\r\nThe network name cannot be found.\r\r\n\r\r\n\x1b]0;~\x07\r\r\n\x1b[32mAdmin#WIN \x1b[33m~\x1b[0m\r\r\n$ net use'
>>>
An SSH session can indeed have multiple channels (but Paramiko possibly does not support it).
By a session, though, you seem to mean a "shell session", and that's not what an SSH session is. It is actually a channel that corresponds to a "shell session".
In other words, even if you could open multiple "exec" channels with Paramiko over the same SSH connection (session) and call exec_command on them, the commands would get executed in different shell sessions. So it won't help you.
You can test this with the PuTTY SSH client. Recent versions support connection sharing, which basically means that you can have several PuTTY windows (each using its own channel) over a single SSH connection/session. If you execute a command in one PuTTY window and the command changes the environment (like an environment variable or the current working directory), the change won't be reflected in the other PuTTY windows, even though they share the same SSH connection.
So you need to execute the commands in one channel. Depending on your needs (which are still not clear), you need to use the "exec" or the "shell" channel.
In either case you will have trouble determining where the output of one command ends and the output of another command starts, as they share the same "stream".
You can solve that by inserting a unique separator (string) in between and searching for it in the channel output stream.
channel = ssh.invoke_shell()
channel.send('ls\n')
channel.send('echo unique-string-separating-output-of-the-commands\n')
channel.send('pwd\n')
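A rough, untested sketch of reading that back and cutting at the separator could look like the following; note that the echoed "echo" command line also contains the marker, so real code would have to skip that echo before splitting.
import time

marker = 'unique-string-separating-output-of-the-commands'
output = ''
deadline = time.time() + 30                 # assumed timeout
while marker not in output and time.time() < deadline:
    if channel.recv_ready():
        output += channel.recv(4096)        # bytes on Python 3
    else:
        time.sleep(0.1)

ls_output, pwd_output = output.split(marker, 1)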

Paramiko simulate ssh -t option

I instantiate a paramiko channel, then I execute a command and get its output:
channel = transport.open_session()
channel.exec_command('service myservice restart')
stdout = channel.makefile('rb')
for line in stdout:
    print line,
However, after the command executes (and finishes), iterating over the output blocks.
I tested with ssh:
ssh myhost service myservice restart # terminal gets blocked
ssh -t myhost service myservice restart # OK
So I want to simulate the "-t" option in paramiko. So far I tried:
channel = transport.open_session()
channel.get_pty()
channel.invoke_shell()
stdin, stdout = channel.makefile('wb'), channel.makefile('rb')
stdin.write('service myservice restart\n')
for line in stdout:
    print line,
But now stdout doesn't get closed, and the for loop never ends.
Any ideas?
It appears that invoke_shell() returns a Channel, and it looks like Channels require that you close them explicitly. I would try to close some of the channels you're opening, in particular the one returned by invoke_shell().
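A hedged sketch of that suggestion applied to the code above: tell the remote shell to exit so the channel reaches EOF, which lets the loop over stdout finish, and then close the channel explicitly.
stdin.write('service myservice restart\n')
stdin.write('exit\n')                       # let the remote shell terminate
stdin.flush()
for line in stdout:
    print line,                             # Python 2 syntax, as in the question
channel.close()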
Have a look at the script that you're trying to run and see if there are any lines like this:
> /dev/null 2>&1
I'm having the same issue as you, in my case trying to remotely run a Bitnami control script. Something in your post jogged my memory and reminded me of the output redirections that are in the control script (these caused me some major headaches before).
Generally they're used to either ignore errors or log them somewhere specific. I haven't had a chance to try yet, but maybe either piping them back out at the end of the script or, if you don't care about the response, manually redirecting some created data out with >&2 would work.

Why does supplying stdin to subprocess.Popen cause what is written to stdout to change?

I'm using Python's subprocess.Popen to perform some FTP using the binary client of the host operating system. I can't use ftplib or any other library for various reasons.
The behavior of the binary seems to change if I attach a stdin handler to the Popen instance. For example, using XP's ftp client, which accepts a text file of commands to issue:
>>>from subprocess import Popen, PIPE
>>>p = Popen(['ftp','-A','-s:commands.txt','example.com'], stdout=PIPE)
>>>p.communicate()[0]
'Connected to example.com.
220 ProFTPD 1.3.1 Server (Debian) ...
331 Anonymous login ok, send your complete email address as your password
<snip>
ftp> binary
200 Type set to I
ftp> get /testfiles/100.KiB
200 PORT command successful
150 Opening BINARY mode data connection for /testfiles/100.KiB (102400 bytes)
226 Transfer complete
ftp: 102400 bytes received in 0.28Seconds 365.71Kbytes/sec.
ftp> quit
>>>
commands.txt:
binary
get /testfiles/100.KiB
quit
When also supplying stdin, all you get in stdout is:
>>>from subprocess import Popen, PIPE
>>>p = Popen(['ftp','-A','-s:commands.txt','example.com'], stdin=PIPE, stdout=PIPE)
>>>p.communicate()[0]
'binary
get /testfiles/100.KiB
quit'
>>>
Initially I thought this was a quirk of the XP ftp client, perhaps knowing it wasn't in interactive mode and therefore limiting its output. However, the same behaviour happens with OS X's ftp - all the server responses are missing from stdout if stdin is supplied - which leads me to think that this is normal behaviour.
In Windows I can use the -s switch to effectively script ftp without using stdin, but on other platforms one relies on the shell for that kind of interaction.
Python version is 2.6.x on both platforms. Why would supplying a handle for stdin change stdout, and where have the server responses gone to?
The program may be using isatty(3) to detect presence of a tty on stdin.
I think I read somewhere (but can't remember where) that the Windows ftp client came from one of the original BSD implementations. In that case it would certainly share some relationship with Mac OS X's ftp implementation.
For me, this is not related to Popen but to the ftp client implementation, which checks the context in which it is launched (to see whether it's interacting with a human or a shell script) using isatty(3), as mentioned by Ignacio in his answer. This is common practice for programs that can be used in both contexts. A well-known example is the GNU grep implementation of the --color=auto option: it will colorize output only if stdout is a tty, and not if the output of grep is piped into another command.
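For illustration, the isatty() check the answers describe is easy to reproduce from Python itself (Python 2 syntax, matching the versions mentioned above):
import sys

# A program sees a tty on stdin when run interactively, and a pipe when another
# process (such as Popen with stdin=PIPE) feeds it.
if sys.stdin.isatty():
    print "stdin is a terminal - produce interactive, verbose output"
else:
    print "stdin is a pipe - produce terse, script-friendly output"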
