send remote shell command and retrieve its output - python

Because I am unable to find a library that does the following, I set out to write my own. I'm unable to find a solution for some of the problems involved, though, and hope that someone here has a suggestion.
What I want is this: I send a normal shell command like ls -al <some path> and I want its output (stdout and stderr) in the same form and order as it would appear in my terminal emulator when typing that command. The catch is that I don't want to run this shell command on the computer I'm currently using, but remotely via ssh or a serial connection. To put it another way: when calling ls -al <some path> >/tmp/out 2>&1, I want to receive the contents of /tmp/out from the remote machine without interruption or changes.
The problem is that you get terminal control sequences like \x1b[K mixed into your output: always over the serial connection, and over ssh depending on your choice of library. I am currently unable to find out what exactly is producing these sequences, why nothing is already consuming them, and I also don't know how I would go about consuming all of them myself (there are a lot of them, and naive approaches won't work).
Why is it a problem to get special characters in your output? Well, often you want to check in your Python code whether the output matches a string, e.g. if expected_output == output, or at least use regular expressions. But there is nothing regular about when and why these characters appear. Sometimes a very simple command like ls -al <something> can even make the whole ssh connection break down (supposedly because of these characters, but at the very least because I don't know enough about all of this to even understand what the problem is).
How would you go about solving the ultimate goal of sending commands remotely and receiving their output? How would you solve one of the mentioned subproblems (an ssh connection that talks to me as if I were a terminal, consuming the randomly appearing special characters without interpreting them, etc.)?
PS: There are many things I've already tried, but while writing this I found them too numerous to list here. None of them led to the desired result, though. This really is quite a complex problem, especially because there seem to be things involved that are hard to trace (like how many (pseudo-)terminals are actually involved) and others that were never documented (some of the terminal handling seems to date from a time when an actual teletypewriter was connected to the computer).

You already have this type of library: try Pexpect.
You just need to spawn a pexpect child that makes an ssh connection to the computer you want to talk to. Then you can send commands and read their output via the .before attribute.
Example:
import pexpect

child = pexpect.spawn('ssh admin@192.168.33.40', encoding='utf-8')  # host is a placeholder
child.expect('Password:')
child.sendline(mypassword)
child.expect('#')  # or expect '$', depending on the remote prompt
child.sendline('<your command>')
child.expect('#')
print(child.before)  # everything the command printed before the next prompt
Ref: http://pexpect.sourceforge.net/pexpect.html
P.S. There's also Paramiko for the same purpose, though I haven't used it.
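If all you need is the output of a single remote command, note also that ssh allocates no pseudo terminal when you pass it a command directly, so the output arrives without the escape sequences the question complains about. A minimal sketch using only the standard library (the host and path are placeholders):
import subprocess

# Run one command remotely over ssh; no pseudo terminal is allocated,
# so no terminal escape sequences end up in the output.
result = subprocess.run(
    ['ssh', 'admin@192.168.33.40', 'ls', '-al', '/some/path'],
    capture_output=True, text=True
)
print(result.stdout)  # remote stdout, unmodified
print(result.stderr)  # remote stderr, unmodified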

Related

Starting process in Google Colab with Prefix "!" vs. "subprocess.Popen(..)"

I've been using Google Colab for a few weeks now and I've been wondering what the difference is between the two following commands (for example):
!ffmpeg ...
subprocess.Popen(['ffmpeg', ...
I was wondering because I ran into some issues when I started either of the commands above and then tried to stop execution midway. Both of them cancel on KeyboardInterrupt, but I noticed that afterwards the runtime needs a factory reset because it somehow got stuck. Checking ps aux in the Linux console listed a process [ffmpeg] <defunct> which somehow was still running, or at least seemed to be blocking some resources.
I then did some research and came across some similar posts asking how to terminate a subprocess correctly (1, 2, 3). Based on those posts I generally came to the conclusion that the subprocess.Popen(..) variant obviously provides more flexibility when it comes to handling the subprocess: defining different stdout procedures, reacting to different return codes, etc. But I'm still unsure what exactly the first command, using the ! prefix, does under the hood.
Using the first command is much easier and requires way less code to start this process. And assuming I don't need a lot of logic handling the process flow it would be a nice way to execute something like ffmpeg - if I were able to terminate it as expected. Even following the answers from the other posts using the 2nd command never got me to a point where I could terminate the process fully once started (even when using shell=False, process.kill() or process.wait() etc.). This got me frustrated, because restarting and re-initializing the Colab instance itself can take several minutes every time.
So, finally, I'd like to understand in more general terms what the difference is and was hoping that someone could enlighten me. Thanks!
! commands are executed by the notebook (or more specifically by the ipython interpreter), and are not valid Python commands. If the code you are writing needs to work outside of the notebook environment, you cannot use ! commands.
As you correctly note, you are unable to interact with the subprocess you launch via !; so it's also less flexible than an explicit subprocess call, though similar in this regard to subprocess.call.
Like the documentation mentions, you should generally avoid the bare subprocess.Popen unless you specifically need the detailed flexibility it offers, at the price of having to duplicate the higher-level functionality which subprocess.run et al. already implement. The code to run a command and wait for it to finish is simply
subprocess.check_call(['ffmpeg', ... ])
with variations for capturing its output (check_output) and the more modern run which can easily replace all three of the legacy high-level calls, albeit with some added verbosity.
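As for the <defunct> ffmpeg from the question: a zombie entry is what remains when the child has exited (or been killed) but nobody has wait()ed on it yet, so explicitly terminating and then waiting for the process usually avoids it. A rough sketch of that pattern, with a placeholder ffmpeg command line:
import subprocess

proc = subprocess.Popen(['ffmpeg', '-i', 'in.mp4', 'out.avi'])
try:
    proc.wait()                # block until ffmpeg finishes
except KeyboardInterrupt:
    proc.terminate()           # ask ffmpeg to stop (SIGTERM)
    try:
        proc.wait(timeout=10)  # reap it so it does not linger as <defunct>
    except subprocess.TimeoutExpired:
        proc.kill()            # force kill if it ignores SIGTERM
        proc.wait()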

paramiko recv() returning multiple echos of command [duplicate]

I am using Python's Paramiko library to SSH into a remote machine and fetch some output from the command line. I see a lot of junk printed along with the actual output. How do I get rid of it?
chan1.send("ls\n")
output = chan1.recv(1024).decode("utf-8")
print(output)
[u'Last login: Wed Oct 21 18:08:53 2015 from 172.16.200.77\r', u'\x1b[2J\x1b[1;1H[local]cli#BENU>enable', u'[local]cli#BENU#Configure',
I want to eliminate \x1b[2J\x1b[1;1H and the u from the output. They are junk.
It's not junk. These are ANSI escape codes that are normally interpreted by a terminal client to pretty-print the output.
If the server is correctly configured, you get these only when you use an interactive terminal, in other words, only if you requested a pseudo terminal for the session (which you should not do if you are automating the session).
Paramiko automatically requests the pseudo terminal if you use SSHClient.invoke_shell, as that is intended for implementing an interactive terminal. See also How do I start a shell without terminal emulation in Python Paramiko?
If you are automating the execution of remote commands, you should rather use SSHClient.exec_command, which does not allocate a pseudo terminal by default (unless you override that with the get_pty=True argument).
stdin, stdout, stderr = client.exec_command('ls')
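For completeness, a minimal self-contained sketch of that call (host and credentials are placeholders):
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # acceptable for a quick test
client.connect('192.168.33.40', username='admin', password='secret')

stdin, stdout, stderr = client.exec_command('ls')
print(stdout.read().decode())  # plain output: no escape sequences, no prompts, no echo
client.close()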
See also What is the difference between exec_command and send with invoke_shell() on Paramiko?
Or as a workaround, see How can I remove the ANSI escape sequences from a string in python.
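One common sketch of that workaround is a regular expression that strips CSI sequences; the pattern below is an assumption about the sequences you will encounter, not something guaranteed to catch everything:
import re

# Matches CSI sequences such as \x1b[2J or \x1b[1;1H (ESC, '[', numeric parameters, final letter).
ansi_csi = re.compile(r'\x1b\[[0-9;]*[A-Za-z]')

clean = ansi_csi.sub('', output)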
Though that's rather a hack and might not be sufficient. You might have other problems with the interactive terminal, not only the escape sequences.
You particularly are probably not interested in the "Last login" message and command-prompt (cli#BENU>) either. You do not get these with the exec_command.
If you need to use the "shell" channel due to some specific requirements or limitations of the server, note that it is technically possible to use the "shell" channel without the pseudo terminal. But Paramiko SSHClient.invoke_shell does not allow that. Instead, you can create the "shell" channel manually. See Can I call Channel.invoke_shell() without calling Channel.get_pty() beforehand, when NOT using Channel.exec_command().
And finally, the u is not part of the actual string value (note that it's outside the quotes). It indicates that the value is a Unicode string. You want that!
This is actually not junk. The u before the string indicates that this is a unicode string. The \x1b[2J\x1b[1;1H is an escape sequence. I don't know exactly what it is supposed to do, but it appears to clear the screen when I print it out.
To see what I mean, try this code:
for string in output:
    print(string)

Not able to find file using ssh on another server using python pexpect on linux

I created a simple Python script using pexpect and created one spawn process using
CurrentCommand = "ssh " + serverRootUserName + "@" + serverHostName
child = pexpect.spawn(CurrentCommand)
Now I am running some command like ls -a or "find /opt/license/ -name '*.xml'"
using the code
child.run(mycommand)
It works fine when run from PyCharm, but when run from a terminal it does not work: it cannot find any file. I think it is looking at my local system.
Can anyone suggest something? Thanks.
As a suggestion, have a look at the paramiko library (or fabric, which uses it but has a specific purpose), as this is a Python interface to ssh. It might make your code a bit better and more resilient against bugs or attacks.
However, I think the issue comes from your use of run.
This function runs the given command; waits for it to finish; then returns all output as a string. STDERR is included in output. If the full path to the command is not given then the path is searched.
What you should look at is expect. That is, you spawn with spawn, then use expect to wait for the session to reach an appropriate point (such as connected, terminal ready after the motd has been pushed, etc.), because you might have to enter a username and password along the way.
Then you want to run sendline to send a line to the program. See the example:
http://pexpect.readthedocs.io/en/latest/overview.html
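Applied to your case, a minimal sketch of that spawn/expect/sendline flow might look like this (the password variable and the prompt patterns are assumptions you would adapt to your server):
import pexpect

child = pexpect.spawn("ssh " + serverRootUserName + "@" + serverHostName,
                      encoding='utf-8')
child.expect('[Pp]assword:')        # wait for the password prompt
child.sendline(serverRootPassword)  # hypothetical variable holding the password
child.expect('[#$] ')               # wait for a shell prompt

child.sendline("find /opt/license/ -name '*.xml'")
child.expect('[#$] ')               # wait for the prompt again
print(child.before)                 # everything the find command printed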
Hope that helps, and seriously, have a look at paramiko ;)

Su - root with python fabric

How can I "su -" and pass the root password with fabric? My current job doesn't give us sudoers, but instead uses su - to root(stupid in my opinion). On googling I haven't found a simple(or any working) answer to this.
My normal code for fabric is like:
from fabric.api import *
env.host_string="10.10.10.10"
env.user="mahuser"
env.password="mahpassword"
run('whoami')
Need to be able to
run('su -')
and have it pass my password.
I hear you saying that your policy does not permit use of the "sudo" command. Understood.
But what HAPPENS when you try using Fabric sudo()? Please try it and report back.
I don't think sudo() "requires" a sudo prompt at the other end. sudo() is essentially a run() command which anticipates a password prompt and attempts to respond to it. That's all.
So in your case, sudo('su -'). If it fails, try sudo('su - -c whoami') to see if you have any temporary success at all.
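Put together with your setup, that suggestion would look roughly like this (Fabric 1.x API; host and credentials are the placeholders from your question, and whether it works depends entirely on how su/sudo are configured on the server):
from fabric.api import env, run, sudo

env.host_string = "10.10.10.10"
env.user = "mahuser"
env.password = "mahpassword"  # Fabric also uses this to answer password prompts

run('whoami')                 # ordinary command, no prompt expected
sudo('su - -c whoami')        # sudo() anticipates the password prompt and answers it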
The point I want to make is that sudo() and run() are nearly identical EXCEPT that sudo() will anticipate a server prompt and then answer it. That's the difference.
Conversely, I had a different problem recently, where I was trying to suppress the prompt for sudo() using SSH keys. I couldn't get my head around why Fabric was prompting for the password when bash+ssh wasn't. The docs weren't clear, but eventually I realized that the prompt was MY doing because I thought that sudo level commands required sudo(). Untrue. If your command requires no prompt, use run() and if your command requires password input, use sudo().
Worst case, if sudo() doesn't work for you, it will still have created an attribute object for the SSH connection. You may or may not then be able to push some "input" into the stdin attribute of that object (I'm not sure this is correct; it's untested. But that's what you'd do with Paramiko: blindly send text down the connection's STDIN and it gets picked up by the prompt).
Absolute worst case, call sudo()/run() on an "expect" command, which WILL work but may not be the simplest or cleanest solution.

need help sending command to server by ssh python

I am trying to connect to the server via ssh and dump the "df -h" output into a text file.
import pexpect

ssh_newkey = 'Are you sure you want to continue connecting'  # host-key prompt on first connect

p = pexpect.spawn('ssh some.some.com')
i = p.expect([ssh_newkey, 'password:', pexpect.EOF])
if i == 0:
    print("I say yes")
    p.sendline('yes')
    i = p.expect([ssh_newkey, 'password:', pexpect.EOF])
if i == 1:
    p.sendline("somesome")
p.expect(pexpect.EOF)
i = p.sendline('df -h > /home/test/output.txt')
print(i)
response = p.before
print(response)
print(p.before)
I am trying to connect to the server and dump the server data into a text file.
My problem is that i = p.sendline('df -h > /home/test/output.txt') is not doing anything;
basically my output file is empty.
Please help me out.
Thanks.
You probably want to use paramiko to manage operations over an SSH connection.
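A minimal sketch of what that could look like for this exact task (host, credentials and paths are placeholders). Here the output is captured over the channel and written to a file on the machine running the script; if you want the file to end up on the remote side instead, keep the > redirection inside the command you pass to exec_command:
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # fine for a quick test
client.connect('some.some.com', username='test', password='somesome')

stdin, stdout, stderr = client.exec_command('df -h')
with open('/home/test/output.txt', 'w') as f:  # written locally
    f.write(stdout.read().decode())
client.close()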
My problem is i = p.sendline('df -h > /home/test/output.txt') is not doing anything
Isn't it setting i to 29? Is that what you mean?
Basically my output file is empty.
How do you know that? Nothing in this code is checking whether that file exists on the remote machine.
Please help me out.
Does your user on the remote machine have permission to write to the /home/test directory there; indeed, does that directory even exist? You're really giving us too few hints about exactly what you're doing, in exactly what context, and what exactly happens as a result, to be of any help yet, except for peppering you with such questions, hoping you'll eventually tell us the many crucial pieces of data you're simply omitting. Help us help you out!-)
If you find yourself doing a lot of work over ssh to the same machines then you may want to look into something like func.
It looks like you're using Python as a shell here. Why don't you just save the relevant commands in a bash file and run that with a single command instead? I think that'd work out a lot better. I also recommend that you enable SSH public-key authentication; it works better than passwords. Use the subprocess module to spawn processes from inside Python.
I guess this advice isn't helpful if you actually need to do things this way for some reason.
