SSHClient.exec_command() reported "command not found" [duplicate] - python

This question already has an answer here:
Some Unix commands fail with "<command> not found", when executed using Python Paramiko exec_command
(1 answer)
Closed 3 years ago.
I've written a script to help others run simple day-to-day commands on our storage system here at work. The script works fine with commands that return a short and simple output, for example ls. However, when the script runs a command that produces a large output, nothing comes back. It's almost as if it times out, but there's no feedback at all, not even a partial output. I've done some research around this and discovered other people with the same problem. The answer they got was to use:
stdin, stdout, stderr = client.exec_command(command)
Which I was already using in my code.
I'm wondering if it's something to do with the buffer size, but annoyingly I don't know how to change that in my code. I've tried adding a time delay using:
time.sleep(10)
But no joy from that. I have also tried using:
print stdout.channel.recv_exit_status()
However, I got a return of 127 so I think I'm way off the mark there!
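For reference, an exit status of 127 conventionally means the remote shell could not find the command, which matches the update further down. A minimal sketch of checking that status while also draining the output, using the same ip, user, passwd and command variables as the code below:
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(ip, username=user, password=passwd)

stdin, stdout, stderr = client.exec_command(command)
output = stdout.read()        # read() drains the channel, so even a large result comes back
errors = stderr.read()
status = stdout.channel.recv_exit_status()   # 127 means the remote shell could not find the command

print(output)
print(errors)
print('exit status: %d' % status)
client.close()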
My code is:
def ssh_command(ip, user, passwd, command):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(ip, username=user, password=passwd)
    stdin, stdout, stderr = client.exec_command(command)
    print stdout.read()
    print stderr.read()
    return

if __name__ == '__main__':
    ssh_command(ip, user, passwd, command)
I've omitted the first few blocks of code, which are where a few variables are defined by raw input from the user. It's rather long, so I thought it best to omit it, but naturally I can post it if need be.
For those interested in the command I'm trying to run, it's an IBM command unique to their GPFS (Spectrum Scale) storage system. The command is:
mmdf mmfs1 --block-size auto
The command returns the storage space on all the disk pools on the storage system.
UPDATE:
The stderr.read() states that the command isn't recognised (bash: mmdf: command not found), despite the command working when I SSH into the storage controller manually.

Based on your latest comments, you should use the absolute path to mmdf when running the command. exec_command starts a non-interactive shell, so the PATH set up by your login profile is typically not applied and commands in non-standard locations are not found:
client.exec_command("/the/path/to/mmdf mmfs1 --block-size auto")
To find out where mmdf is, manually login to the server and run:
which mmdf
# or
type -P mmdf
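For example, a minimal sketch of the call with an absolute path (the /usr/lpp/mmfs/bin location is only an assumption about where GPFS installs its tools; confirm it with one of the commands above first):
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(ip, username=user, password=passwd)

# Assumed GPFS install location; replace with the output of "which mmdf".
stdin, stdout, stderr = client.exec_command(
    "/usr/lpp/mmfs/bin/mmdf mmfs1 --block-size auto")
print(stdout.read())
print(stderr.read())
client.close()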

Related

Paramiko: Show the output from channel session [duplicate]

I use a friend's server that allows only one user to be logged in over SSH, so normally I just log in as that user and then do su -l myuser to change accounts. I wanted to automate some boring stuff using Python, but I ran into problems with that. Apparently the Paramiko module, which I tried first, invokes a separate shell for every command, so that was out of the question. Later I tried using invoke_shell() to overcome that, but it still failed (I assume because changing the user changes the shell as well).
After that I found out about the Fabric module, but the best I could do was open an SSH shell with the proper user logged in, with no option to run any commands from code.
Is there any way to accomplish that? Final goal would probably look something like this:
ssh.login(temp_user, pass)
ssh.command("su -l myuser")
expect("Password: ", ssh.send("mypass\n")
ssh.command("somescript.sh > datadump.txt")
Using sudo is impossible, as well as adding passwordless login.
As suggested, here is the code that I tried with Paramiko:
import paramiko

host = "hostip"
user = "user"
user_to_log = "myuser"
password = "pass"
password_to_log = "mypass"
login_command = "su -l " + user_to_log

ssh = paramiko.SSHClient()
ssh.load_system_host_keys()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, username=user, password=password)

transport = ssh.get_transport()
session = transport.open_session()
session.set_combine_stderr(True)
session.get_pty()

session.exec_command("su -l " + user_to_log)
stdin = session.makefile('wb', -1)
stdin.write(password_to_log + '\n')
stdin.flush()

session.exec_command("whoami")
stdout = session.makefile('rb', -1)
for line in stdout.read().splitlines():
    print('host: %s: %s' % (host, line))
su -c command won't work either, since the server's su doesn't support that option.
General disclaimers first (to others who stumble upon this question):
Using su is not the right solution. su is a tool intended for interactive use, not for automation. The correct solution is to log in with the right account directly.
Or at least use a password-less sudo.
Or you can create a root-owned script with the setuid bit set.
See also Allowing automatic command execution as root on Linux using SSH.
If you are stuck with su, on most systems you can use its -c switch to specify a command:
su -c "whoami" user
See also How to run sudo with Paramiko? (Python)
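A rough sketch of how that -c approach could look with Paramiko (assuming the remote su accepts -c and prompts for the password on the allocated pty; variable names follow the code above):
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, username=user, password=password)

# get_pty=True lets su show its password prompt, which is then answered on stdin
stdin, stdout, stderr = ssh.exec_command('su -c "whoami" ' + user_to_log, get_pty=True)
stdin.write(password_to_log + '\n')
stdin.flush()
print(stdout.read())
ssh.close()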
If none of the above is feasible (and you really tried hard to get the admin to enable one of the options above):
As a last resort, you can write the command to the standard input of su, the same way you already write the password (another thing not to do):
stdin, stdout, stderr = session.exec_command("su -l " + user_to_log)
stdin.write(password_to_log + '\n')
stdin.flush()
command = 'whoami'
stdin.write(command + '\n')
stdin.flush()
(also note that it's redundant to call makefile, as exec_command already returns that)
See Execute (sub)commands in secondary shell/command on SSH server in Python Paramiko.
Note that your question is not about which SSH client library to use. It does not matter if you use Paramiko or other. This all is actually a generic SSH/Linux/shell question.

readline hangs on paramiko.Channel when reading "watch" command output

I am testing this code to read the output of the watch command. I suspect it has to do with how watch works, but I can't figure out what's wrong or how to work around it:
import paramiko

host = "micro"
# timeout = 2  # Succeeds
timeout = 3  # Hangs!
command = 'ls / && watch -n2 \'touch "f$(date).txt"\''

ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(host, password='', look_for_keys=False)
transport = ssh_client.get_transport()
channel = transport.open_session()
channel.get_pty()
channel.settimeout(timeout)
channel.set_combine_stderr(True)
stdout = channel.makefile()
channel.exec_command(command)

for line in stdout:  # Hangs here
    print(line.strip())
There are several similar issues, some of them quite old (1, 2, and probably others).
This does not happen with other commands that don't use watch.
Does someone know what's special about this particular command and / or how to reliably set a timeout for the read operation?
(Tested on Python 3.4.2 and paramiko 1.15.1)
Edit 1: I incorporated channel.set_combine_stderr(True) as suggested in this answer to a related question, but it still didn't do the trick. However, watch does produce a lot of output, so perhaps the problem is exactly that. In fact, using this command removed the hanging:
command = 'ls / && watch -n2 \'touch "f$(date).txt"\' > /dev/null'
So this question is probably almost a duplicate of Paramiko ssh die/hang with big output, but it makes me wonder whether there's really no way to use .readline() (called through __next__ in this case), and whether one has to resort to read with a fixed buffer size and assemble the lines manually.
This probably hangs because watch does not produce newlines. If one replaces
for line in stdout:
    print(line.strip())
with a busy loop calling
stdout.readline(some_fixed_size)
it can be seen that the received bytes never contain a newline character. Therefore, this is a very special case and is not related to the hangs reported in other issues and SO questions.
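To illustrate, a rough sketch of reading with a fixed buffer and assembling the lines manually, reusing channel and timeout from the question and treating carriage returns as line breaks, since watch redraws the screen instead of printing newline-terminated lines (the buffer size is arbitrary):
import re
import socket

buffer = b''
try:
    while True:
        chunk = channel.recv(1024)          # raw read with a fixed buffer size
        if not chunk:                       # empty read means the command exited
            break
        buffer += chunk
        # split on \r as well as \n; keep the unterminated tail for the next round
        *lines, buffer = re.split(b'[\r\n]+', buffer)
        for line in lines:
            print(line.decode(errors='replace').strip())
except socket.timeout:
    print('no output for %s seconds' % timeout)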

Paramiko exec command failure based on time

I have been searching and fooling around with this problem for 2 days now. Firstly, some context in the form of (summarised) code.
def setService(self, ...
    ssh_client = self.conn.get_ssh_client(hostname, username=username, password=password)
    setCommand = str('service ' + service_name + ' ' + status)
    stdin, stdout, stderr = ssh_client.exec_command(setCommand)
    # time.sleep(2)
    return ...
Secondly, the whole codebase uses the same setup, and everything works except for the "service foobar stop" and "service foobar start" commands. They cause a Read error (in ssh/auth.log) and the command does not actually take effect. All other commands using this setup work fine (we run about 10 different commands). It happens on all target machines, from both dev machines, so I am ruling out SSH configs.
But if I add any time-delaying code after the exec_command (in the comment position), it works. A sleep(2), or a loop doing some debug printing, makes it work fine: the Read errors disappear from auth.log and the service starts/stops as it should. Removing the sleep, or whatever it may be, breaks it again.
We "hack"-fixed it by leaving a sleep in there, but I do not fully understand why it happens, or why stalling in the function fixes it.
Are we returning too quickly, before the exec has finished on the remote side? I don't think so; the call seems to be blocking (it returns stdin, stderr, stdout).
Any advice on this would be highly appreciated.
Note: exec_command(command) is non-blocking.
I usually try to read the output from the buffer (which consumes some time before returning), or I use a time.sleep, as you have done in this case.
If you use (as you should) stdout.read() or stdout.readlines(), it forces your script to wait for the output in the stdout buffer, and in turn to wait for exec_command to finish.
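A minimal sketch of that idea applied to the function above, blocking on the output and exit status instead of relying on time.sleep (variable names follow the question):
stdin, stdout, stderr = ssh_client.exec_command(setCommand)
output = stdout.read()                             # blocks until the command closes its output
errors = stderr.read()
exit_status = stdout.channel.recv_exit_status()    # waits for the remote command to finish
# only return from setService() after these calls, instead of sleeping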

Python subprocess + scp - can't read all output

I'm trying to SCP a file between machines and I need to fail when the user hasn't set up a private/public key pair for passwordless logins. Unfortunately, using subprocess.Popen, I can't figure out how to capture the following output:
The authenticity of host '***' can't be established.
RSA key fingerprint is ***.
Are you sure you want to continue connecting (yes/no)
It always shows up on the console, and I can't capture it in my program to detect it.
Here's some example code:
proc = subprocess.Popen(['scp', 'user@server:/location/file.txt', '/someplace/file.txt'],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
proc.wait()
print 'result: %s' % repr(proc.stderr.readline())
I've tried many other permutations. This one still prompts me, not Python, to enter yes/no. At least when I type no, though, I get:
result: 'Host key verification failed.\r\n'
'The authenticity of host '***' can't be established' means the machine you're connecting from hasn't been told to save the other end's (the server's) identity to the known_hosts file, and it is asking whether you trust the machine. You can tell the SSH client to just add it automatically without prompting you.
try this:
proc = subprocess.Popen(['scp', '-o BatchMode=yes',
                         'user@server:/location/file.txt',
                         '/someplace/file.txt'],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
proc.wait()
print 'result: %s' % repr(proc.stderr.readline())
With the above code I get:
me@myMachine:~$ python tmp.py
result: 'Host key verification failed.\r\n'
me@myMachine:~$
If I disable StrictHostKeyChecking I get:
me@myMachine:~$ python tmp.py
result: 'Permission denied (publickey,password,keyboard-interactive).\r\n'
me@myMachine:~$
So it looks like it is printing the first line from stderr with BatchMode turned on :)
I've run into something similar before, though in my case it was actually helpful. I believe ssh and friends don't simply read stdin and print on stdout or stderr; they do funky things to hook up directly with the terminal you're running in.
I believe the reasoning is that they're supposed to be able to talk to the user directly, even when run through wrapper shell scripts, because the user knows the password, not the calling script (and they probably deliberately don't want calling scripts to have the opportunity to intercept a password).
[Edit to add]: According to the man page on my system, scp does have a flag that might do what you want:
-B Selects batch mode (prevents asking for passwords or passphrases).
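For example, a small sketch using that flag so the copy fails with a non-zero exit code instead of prompting when no key is set up (host and paths are placeholders from the question):
import subprocess

proc = subprocess.Popen(
    ['scp', '-B', 'user@server:/location/file.txt', '/someplace/file.txt'],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE)
out, err = proc.communicate()
if proc.returncode != 0:
    # -B (batch mode) makes scp fail instead of prompting for a password or passphrase
    print('scp failed: %r' % err)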
