I use a friend's server that allows only one user to be logged in over SSH, so normally I just log in as that user and then do su -l myuser to change accounts. I wanted to automate some boring stuff using Python, but I ran into problems with that. Apparently the Paramiko module that I tried first invokes a separate shell for every command, so that was out of the question. Later I tried using invoke_shell() to overcome that, but it still failed (I assume because changing the user changes the shell as well).
After that I found out about the Fabric module, but the best I could do was open an SSH shell with the proper user logged in, with no option to run any commands from code.
Is there any way to accomplish that? The final goal would probably look something like this:
ssh.login(temp_user, pass)
ssh.command("su -l myuser")
expect("Password: ", ssh.send("mypass\n")
ssh.command("somescript.sh > datadump.txt")
Using sudo is impossible, and so is adding a passwordless login.
As suggested, here is the code that I tried with Paramiko:
import paramiko
host = "hostip"
user = "user"
user_to_log = "myuser"
password = "pass"
password_to_log = "mypass"
login_command = "su -l " + user_to_log
ssh = paramiko.SSHClient()
ssh.load_system_host_keys()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, username=user, password=password)
transport = ssh.get_transport()
session = transport.open_session()
session.set_combine_stderr(True)
session.get_pty()
session.exec_command("su -l " + user_to_log)
stdin = session.makefile('wb', -1)
stdin.write(password_to_log +'\n')
stdin.flush()
session.exec_command("whoami")
stdout = session.makefile('rb', -1)
for line in stdout.read().splitlines():
print('host: %s: %s' % (host, line))
su -c command won't work either, since the server's system doesn't support that option.
General disclaimers first (to others who stumble upon this question):
Using su is not the right solution. su is a tool intended for interactive use, not for automation. The correct solution is to log in with the correct account directly.
Or at least use a password-less sudo.
Or you can create a root-owned script with the setuid bit set.
See also Allowing automatic command execution as root on Linux using SSH.
If you are stuck with su, on most systems you can use -c switch to su to specify a command:
su -c "whoami" user
See also How to run sudo with Paramiko? (Python)
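For illustration, a minimal Paramiko sketch of the su -c approach (the host name, account names and passwords below are placeholders, not values from the question):

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("hostip", username="user", password="pass")

# Request a pseudo-terminal so su has somewhere to prompt for the password
stdin, stdout, stderr = ssh.exec_command('su -c "whoami" myuser', get_pty=True)
stdin.write("mypass\n")
stdin.flush()
print(stdout.read().decode())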
If none of the above is feasible (and you really tried hard to make the admin enable some of the options above):
As a last resort, you can write the command to the standard input of su, the same way you already write the password (another thing you should not do):
stdin, stdout, stderr = ssh.exec_command("su -l " + user_to_log, get_pty=True)
stdin.write(password_to_log + '\n')
stdin.flush()
command = 'whoami'
stdin.write(command + '\n')
stdin.flush()
stdin.write('exit\n')   # leave the su shell so the channel closes and stdout can be read
stdin.flush()
print(stdout.read().decode())
(Also note that it's redundant to call makefile, as SSHClient.exec_command already returns the standard stream objects.)
See Execute (sub)commands in secondary shell/command on SSH server in Python Paramiko.
Note that your question is not about which SSH client library to use. It does not matter whether you use Paramiko or something else. This is really a generic SSH/Linux/shell question.
This question already has an answer here:
Some Unix commands fail with "<command> not found", when executed using Python Paramiko exec_command
I've written a script to help others run simple day-to-day commands on our storage system here at work. The script works fine with commands that return short, simple output, for example ls; however, when the script runs a command that produces a large output, nothing is returned. It's almost as if it times out, but there's no feedback at all; I expected to see at least part of the command's output. I've done some research around this and discovered other people with the same problem. The answer they got was to use:
stdin, stdout, stderr = client.exec_command(command)
which I was already using in my code.
I'm wondering if it's something to do with the buffer size, which, annoyingly, I don't know how to adjust in my code. I've tried adding a time delay using:
time.sleep(10)
But no joy from that. I have also tried using:
print stdout.channel.recv_exit_status()
However, I got a return code of 127, so I think I'm way off the mark there!
My code is:
def ssh_command(ip, user, passwd, command):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(ip, username=user, password=passwd)
    stdin, stdout, stderr = client.exec_command(command)
    print stdout.read()
    print stderr.read()
    return

if __name__ == '__main__':
    ssh_command(ip, user, passwd, command)
I've omitted the first few blocks of code, where a few variables are defined from the user's raw input. It's rather long, so I thought it best to omit it, but naturally I can post it if need be.
For those interested in the command I'm trying to run, it's an IBM command unique to their GPFS (Spectrum Scale) storage system. The command is:
mmdf mmfs1 --block-size auto
The command returns the storage space on all the disk pools on the storage system.
UPDATE:
The stderr.read() states the command isn't recognised (bash: mmdf: command not found), despite the command working when SSH'd into the storage controller manually.
Based on your latest comments, you should use the absolute path to mmdf when running the command (exec_command runs a plain non-login shell, so the PATH set up by the login scripts on the storage controller is not loaded):
client.exec_command("/the/path/to/mmdf mmfs1 --block-size auto")
To find out where mmdf is, manually login to the server and run:
which mmdf
# or
type -P mmdf
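If you prefer not to hard-code the path, a hedged sketch (host and credentials are placeholders): ask a login shell where mmdf lives, then call it by its absolute path.

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("hostip", username="user", password="passwd")

# 'bash -lc' starts a login shell, so PATH is set up as in an interactive session
stdin, stdout, stderr = client.exec_command("bash -lc 'type -P mmdf'")
mmdf_path = stdout.read().decode().strip()

stdin, stdout, stderr = client.exec_command(mmdf_path + " mmfs1 --block-size auto")
print(stdout.read().decode())
print(stderr.read().decode())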
I'm trying to SCP a file between machines, and I need to fail when the user hasn't set up a private/public key pair for passwordless logins. Unfortunately, using subprocess.Popen I can't figure out how to capture the following output:
The authenticity of host '***' can't be established.
RSA key fingerprint is ***.
Are you sure you want to continue connecting (yes/no)
It always shows up on the console and I can't get it in my program to detect it.
Here's some example code:
proc = subprocess.Popen(['scp', 'user@server:/location/file.txt', '/someplace/file.txt'],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
proc.wait()
print 'result: %s' % repr(proc.stderr.readline())
I've tried many other permutations. This one still prompts me on the console, rather than letting Python handle the yes/no answer. At least when I type no, though, I get:
result: 'Host key verification failed.\r\n'
'The authenticity of host '***' can't be established' means the machine you're connecting from hasn't saved the other end's (the server's) identity to the known_hosts file, and it is asking whether you trust the machine. You can tell the ssh client to just add it automatically without prompting you.
try this:
proc = subprocess.Popen(['scp', '-o', 'BatchMode=yes',
                         'user@server:/location/file.txt',
                         '/someplace/file.txt'],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
proc.wait()
print 'result: %s' % repr(proc.stderr.readline())
With the above code I get:
me@myMachine:~$ python tmp.py
result: 'Host key verification failed.\r\n'
me@myMachine:~$
If I disable StrictHostKeyChecking I get:
me@myMachine:~$ python tmp.py
result: 'Permission denied (publickey,password,keyboard-interactive).\r\n'
me@myMachine:~$
So it looks like it is printing the first line from stderr with BatchMode turned on :)
I've run into something similar before, though in my case it was actually helpful. I believe ssh and friends don't simply read stdin and print on stdout or stderr; they do funky things to hook up directly with the terminal you're running in.
I believe the reasoning is that they're supposed to be able to talk to the user directly, even when run through wrapper shell scripts, because the user knows the password, not the calling script (and they probably deliberately don't want calling scripts to have the opportunity to intercept a password).
[Edit to add]: According to the man page on my system, scp does have a flag that might do what you want:
-B Selects batch mode (prevents asking for passwords or passphrases).
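Putting the two suggestions together, a hedged subprocess sketch (paths and host are placeholders): -B makes scp fail instead of prompting for a password, and StrictHostKeyChecking=no accepts an unknown host key without asking, so failures end up on stderr where the script can see them.

import subprocess

proc = subprocess.Popen(
    ['scp', '-B', '-o', 'StrictHostKeyChecking=no',
     'user@server:/location/file.txt', '/someplace/file.txt'],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
print('exit code:', proc.returncode)
print('stderr:', err.decode())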
Please recommend a library for working with SSH. The main requirement is that it works properly with the sudo utility.
Here is what I have already tried and where I am struggling:
paramiko - can't sudo at all; I tried writing the password to stdin after invoking the command, but sudo just replied: "no tty present"
pxssh - very, very slow, and awkward
fabric - can sudo only in an ideal world; I need to work with different users, and where do I send the password?
Are there any decent libraries that work with sudo, or not?
Rather than force sudo to work without a tty, why not get Paramiko to allocate you a TTY?
Paramiko and Pseudo-tty Allocation
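A minimal sketch of that idea (host, credentials and command are placeholders): request a pseudo-tty on the exec channel so sudo has a terminal, and feed the password through the channel's stdin.

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("hostip", username="user", password="password")

# -S tells sudo to read the password from standard input
stdin, stdout, stderr = client.exec_command("sudo -S whoami", get_pty=True)
stdin.write("password\n")
stdin.flush()
print(stdout.read().decode())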
I think you are looking for fabric.
You can control whether sudo requires a real terminal with the 'requiretty' setting. From the sudoers manual:
If set, sudo will only run when the user is logged in to a real tty. This will disallow things like "rsh somehost sudo ls" since rsh(1) does not allocate a tty. Because it is not possible to turn off echo when there is no tty present, some sites may wish to set this flag to prevent a user from entering a visible password. This flag is off by default.
This works for me with paramiko. Depending on what you are doing, you can also look at something like pexpect.
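For reference, a hedged pexpect sketch (host, password and the exact prompt strings are assumptions that will differ per system): drive an interactive ssh session and answer the sudo password prompt yourself.

import pexpect

child = pexpect.spawn('ssh user@hostip')
child.expect('password:')
child.sendline('pass')
child.expect(r'\$ ')                 # assumed shell prompt
child.sendline('sudo -k whoami')     # -k forces sudo to ask for the password again
child.expect('password')             # assumed sudo prompt contains "password"
child.sendline('pass')
child.expect(r'\$ ')
print(child.before.decode())
child.close()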
I had the same problem with pxssh at first: it was extremely slow!
Here is a way I found to make it run quicker:
#!/usr/bin/python
import pxssh
import getpass

try:
    s = pxssh.pxssh()
    s.PROMPT = "#"
    hostname = raw_input('hostname: ')
    username = raw_input('username: ')
    password = getpass.getpass('password: ')
    s.login(hostname, username, password, auto_prompt_reset=False)
    s.sendline('ls')            # run a command
    s.prompt()                  # match the prompt
    print(s.before)             # print everything before the prompt
    s.sendline('ls -l /tmp')    # run a command
    s.prompt()                  # match the prompt
    print(s.before)             # print everything before the prompt
    s.logout()
except pxssh.ExceptionPxssh as e:
    print("pxssh failed on login.")
    print(e)
The key part is s.PROMPT = "#" and auto_prompt_reset=False in s.login().
This method requires that you know the pattern for the prompt (in my case it is "#"; I believe the PROMPT attribute can also be set to a regular expression).
I also had some problems with login speed on pxssh. I tried using the code referenced above but was still seeing 10+ seconds just to log in. Using the original_prompt argument fixed the issue for me. You need to make sure to set original_prompt to what you see when you first ssh into the machine, which in my case ended in '>'.
#!/usr/bin/env python
from pexpect import pxssh
host = 'hostname.domain'
user = 'username'
password = 'password'
terminal = pxssh.pxssh()
terminal.login(host, user, original_prompt='[>$]')
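Once logged in this way, running commands works the same as in the answer above; a short usage sketch continuing from the code just shown:

terminal.sendline('uname -a')   # run a command
terminal.prompt()               # wait for the next prompt
print(terminal.before)          # everything printed before the prompt
terminal.logout()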
Greetings.
I have written a little Python script that calls MySQL in a subprocess. [Yes, I know that the right approach is to use MySQLdb, but compiling it under OS X Leopard is a pain, and it would likely be more painful if I wanted to use the script on computers of different architectures.] The subprocess technique works, provided that I supply the password in the command that starts the process; however, that means that other users on the machine could see the password.
The original code I wrote can be seen here.
This variant below is very similar, although I will omit the test routine to keep it shorter:
#!/usr/bin/env python
from subprocess import Popen, PIPE

# Set the command you need to connect to your database
mysql_cmd_line = "/Applications/MAMP/Library/bin/mysql -u root -p"
mysql_password = "root"

def RunSqlCommand(sql_statement, database=None):
    """Pass in the SQL statement that you would like executed.
    Optionally, specify a database to operate on. Returns the result."""
    command_list = mysql_cmd_line.split()
    if database:
        command_list.append(database)

    # Run mysql in a subprocess
    process = Popen(command_list, stdin=PIPE, stdout=PIPE,
                    stderr=PIPE, close_fds=True)

    #print "Asking for output"
    #needs_pw = process.stdout.readline()
    #print "Got: " + needs_pw

    # pass it in the password
    process.stdin.write(mysql_password + "\n")

    # pass it our commands, and get the results
    #(stdout, stderr) = process.communicate(mysql_password + "\n" + sql_statement)
    (stdout, stderr) = process.communicate(sql_statement)
    return stdout
I am suspicious that the MySQL password prompt is not actually on stdout (or stderr), although I don't know how that could be or if it means I could trap it.
I did try reading the output first, before supplying a password, but it didn't work. I also tried passing the password to communicate() together with the SQL statement (the commented-out line above), with no luck.
Again, if I supply the password on the command line (and thus have no code between the "Popen" and "communicate" functions) my wrapped function works.
Two new thoughts, months later:
Using pexpect would let me supply a password. It simulates a tty and gets all output, even that which bypasses stdout and stderr.
There is a project called MySQL Connector/Python, in early alpha, that will provide a pure Python library for accessing MySQL, without requiring you to compile any C code.
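For what it's worth, a hedged sketch of what that could look like with MySQL Connector/Python once it is installed (connection parameters are placeholders):

import mysql.connector

conn = mysql.connector.connect(user="root", password="root",
                               host="localhost", database="test")
cursor = conn.cursor()
cursor.execute("SHOW TABLES")
for row in cursor:
    print(row)
conn.close()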
You could simply build a my.cnf file and point to it on the mysql command line. Obviously you'll want to protect that file with permissions/ACLs, but it shouldn't really be any more or less secure than having the password in your Python script, or in the config for your Python script.
So you would do something like
mysql_cmd_line = "/Applications/MAMP/Library/bin/mysql --defaults-file=credentials.cnf"
and your config would look something like this:
[client]
host = localhost
user = root
password = password
socket = /var/run/mysqld/mysqld.sock
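A hedged sketch of how that fits into the original function (the cnf path is a made-up example; the point is that no password ever has to be written to stdin):

import os
from subprocess import Popen, PIPE

cnf_path = os.path.expanduser("~/.mysql_credentials.cnf")   # hypothetical location
os.chmod(cnf_path, 0o600)   # owner-only read/write, so other users can't peek

command_list = ["/Applications/MAMP/Library/bin/mysql",
                "--defaults-file=" + cnf_path]
process = Popen(command_list, stdin=PIPE, stdout=PIPE, stderr=PIPE,
                universal_newlines=True)
(stdout, stderr) = process.communicate("SHOW DATABASES;")
print(stdout)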
The only secure method is to use a MySQL cnf file, as one of the other posters mentions. You can also pass a MYSQL_PWD environment variable, but that is insecure as well: http://dev.mysql.com/doc/refman/5.0/en/password-security.html
Alternatively, you can communicate with the database using a Unix socket file and with a little bit of tweaking you can control permissions at the user id level.
Even better, you can use the free BitNami DjangoStack, which has Python and MySQLdb precompiled for OS X (and Windows and Linux): http://bitnami.org/stacks
This may be a Windows / SQL Server feature, but could you use a Trusted Connection (i.e. use your OS login/password to access the DB)? There may be an OS X equivalent for MySQL.
Or you may just need to set up your DB to use the OS login and password so that you don't need to keep it in your code.
Anyway, just an idea.
Try this:
process.stdin.write(mysql_password + "\n")
process.stdin.flush()   # make sure the password actually gets sent
(stdout, stderr) = process.communicate(sql_statement)
return stdout
Flush stdin to force what you just wrote to the buffer to actually be sent; communicate() then delivers the SQL statement and closes stdin for you when it is done.