Python: interact with a shell command using Popen

I need to execute several shell commands using Python, but I couldn't resolve one of the problems. When I scp to another machine, it usually prompts and asks whether to add this machine to the known hosts file. I want the program to input "yes" automatically, but I couldn't get it to work. My program so far looks like this:
from subprocess import Popen, PIPE, STDOUT

def auto():
    user = "abc"
    inst_dns = "example.com"
    private_key = "sample.sem"
    capFile = "/home/ubuntu/*.cap"
    temp = "%s@%s:~" % (user, inst_dns)
    scp_cmd = ["scp", "-i", private_key, capFile, temp]
    print("The scp command is: %s" % " ".join(scp_cmd))
    scpExec = Popen(scp_cmd, shell=False, stdin=PIPE, stdout=PIPE)
    # this is the place I tried to write "yes"
    # but it doesn't work
    scpExec.stdin.write("yes\n")
    scpExec.stdin.flush()
    while True:
        output = scpExec.stdout.readline()
        print("output: %s" % output)
        if output == "":
            break
If I run this program, it still prompts and asks for input. How can I respond to the prompt automatically? Thanks.

You're being prompted to add the host key to your known_hosts file because of ssh's StrictHostKeyChecking setting. From the man page:
StrictHostKeyChecking
If this flag is set to “yes”, ssh(1) will never automatically add host keys to the ~/.ssh/known_hosts
file, and refuses to connect to hosts whose host key has changed. This provides maximum protection
against trojan horse attacks, though it can be annoying when the /etc/ssh/ssh_known_hosts file is
poorly maintained or when connections to new hosts are frequently made. This option forces the user
to manually add all new hosts. If this flag is set to “no”, ssh will automatically add new host keys
to the user known hosts files. If this flag is set to “ask”, new host keys will be added to the user
known host files only after the user has confirmed that is what they really want to do, and ssh will refuse to connect to hosts whose host key has changed. The default is “ask”.
You can set StrictHostKeyChecking to "no" if you want ssh/scp to automatically accept new keys without prompting. On the command line:
scp -o StrictHostKeyChecking=no ...
You can also enable batch mode:
BatchMode
If set to “yes”, passphrase/password querying will be disabled. This option is useful in scripts and
other batch jobs where no user is present to supply the password. The argument must be “yes” or
“no”. The default is “no”.
With BatchMode=yes, ssh/scp will fail instead of prompting (which is often an improvement for scripts).
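
Applied to the code in the question, a minimal sketch might look like this (the key, file, and host are the question's placeholders; note that with shell=False a glob like *.cap is not expanded, so a concrete path or glob.glob is needed):

from subprocess import Popen, PIPE

scp_cmd = ["scp", "-i", "sample.sem",
           "-o", "StrictHostKeyChecking=no",  # auto-accept new host keys
           "-o", "BatchMode=yes",             # fail instead of prompting
           "/home/ubuntu/sample.cap", "abc@example.com:~"]

scpExec = Popen(scp_cmd, stdout=PIPE, stderr=PIPE)
out, err = scpExec.communicate()
if scpExec.returncode != 0:
    print("scp failed: %s" % err.decode())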

The best way I know to avoid being asked about fingerprint matches is to pre-populate the relevant keys in ~/.ssh/known_hosts. In most cases you really should already know what the remote machines' public keys are, and it is straightforward to put them in a known_hosts file that ssh can find.
In the few cases where you don't, and can't, know the remote public key, then the most correct solution depends on why you don't know. If, say, you're writing software that needs to be run on arbitrary user boxes and may need to ssh on the user's behalf to other arbitrary boxes, it may be best for your software to run ssh-keyscan on its own to acquire the ostensible remote public key, let the user approve or reject it explicitly if at all possible, and if approved, append the key to known_hosts and then invoke ssh.
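A rough sketch of that ssh-keyscan approach (the function names and the known_hosts path here are illustrative, not prescriptive):

import os
import subprocess

def fetch_host_keys(host):
    # Ask ssh-keyscan for the host's ostensible public key(s).
    return subprocess.check_output(["ssh-keyscan", host]).decode()

def approve_and_store(host, known_hosts=os.path.expanduser("~/.ssh/known_hosts")):
    keys = fetch_host_keys(host)
    print("Key(s) offered by %s:\n%s" % (host, keys))
    if input("Accept and append to known_hosts? [y/N] ").lower() == "y":
        with open(known_hosts, "a") as f:
            f.write(keys)
        return True
    return False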

Related

Pass a secret to a program started over SSH

I'm starting a Python program over SSH and I would like to pass a secret to it.
ssh remote python program.py
I control the code of the program so I can implement any method that I would like to. I've considered the following options:
Use a command-line argument
ssh remote python program.py --secret=abc
This won't work since any user on the local and remote machine can see that SSH and the program were invoked with this parameter.
Use TCP
ssh -L 1234:localhost:1234 remote python program.py
The program would listen on port 1234 and wait for me to send the secret over a connection. This also doesn't work since any program could connect to port 1234 and pass garbage secrets to program.py.
Use stdin
cat secret.txt | ssh remote python program.py
This would work, but unfortunately for my use case stdin is already used to pass other data to the program.
Do I have any other options? Is stdin the only way?
Is stdin the only way?
Provided you cannot use a socket (TCP or UDP) to transmit the secret to the remote end, stdin seems to be the only way, given the constraints you mention and the way you describe your problem.
A socket would give you a file descriptor interface over which you could write the secret to the remote end. Since you cannot use one, stdin remains the practical channel for a message-based application protocol. See below.
Do I have any other options?
Yes, you have many other options. For example, you can create a message-based protocol by creating message objects and reading/writing them from inside program.py:
class Message:
    SECRET, INFO = range(2)  # see also: from enum import auto, Enum
                             # https://docs.python.org/3.7/library/enum.html

    def __init__(self, type_, content_):
        self.type = type_
        self.content = content_

msec = Message(Message.SECRET, "secret here")
minf = Message(Message.INFO, "clear info here")
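
To multiplex the secret with the other data already on stdin, the messages need framing. A minimal sketch, assuming a line-based JSON encoding (my assumption, not part of the original answer):

import json
import sys

def send(stream, msg):
    # One JSON object per line keeps message boundaries unambiguous.
    stream.write(json.dumps({"type": msg.type, "content": msg.content}) + "\n")
    stream.flush()

def receive(stream):
    for line in stream:
        obj = json.loads(line)
        yield Message(obj["type"], obj["content"])

# Inside program.py on the remote side (handle_secret is a hypothetical handler):
# for msg in receive(sys.stdin):
#     if msg.type == Message.SECRET:
#         handle_secret(msg.content)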

Python SSH iteration through different paths

I've got a JSON object which looks like this:
{UID_1: {
    jumpboxes: [jump_ip1, jump_ip2, ...],
    hosts: [host_ip1, host_ip2, ...],
    ...},
 UID_2: {...
The authentication to the jumpboxes is via Kerberos (passwordless); the authentication to the hosts is with a password, and the hosts are only visible via the jump hosts. I don't know, out of the list of IPs, which ones work, which are stuck, or which are non-responding, etc., so I need to find the first path that would let me open an SSH session.
What I can do is check for the exit codes when ssh-ing to the jump hosts with something like this:
jumpip = ''
for i in json[uid]['jumpboxes']:
    if os.system('ssh {}@{}'.format(username, i)) > 0:
        continue
    else:
        jumpip = i
        break
This gives me the first working jumpbox IP without issues; however, since the connection to the second host requires a password, checking its exit code isn't as easy.
There are multiple ways to open the tunnel. One is os.system() plus sshpass with a session proxy, something like:
if os.system('sshpass -p {} ssh -o ProxyCommand="ssh {}@{} nc {} 22" {}@{} -t {}'.format(password, user, jumpip, hosts[j], user, hosts[j], remote_cmd)) > 0: ...
(for context, the expanded sshpass command would look something like: sshpass -p Password123! ssh -o ProxyCommand="ssh user@jumpbox nc hostip 22" user@hostip -t ll). Another is doing a ping in a subshell with something like os.system('ssh user@jumpbox -t ping {} -c 5'.format(hosts[j])), but although ping returns an exit code, ICMP echo replies don't mean I'd be able to open a tunnel (e.g. the ssh daemon can be stuck or could have crashed, etc.). Or I can do a try-except-else block that tries to open an ssh session to the remote host via the jumpbox with pexpect, or with subprocess.Popen and piping the stdio, thus allowing me to push the password and raise a custom exception if that fails; but I can't figure out how to get the exit code from the ssh client, so I can't check the status...
Neither of these is robust enough for me, so I'd rather iterate through the IPs properly, and I'm open to suggestions.
A little bit of background - the tunnel would be used to start a nohup-ed command and then will be closed. The script uses multiprocessing and a pool to go through a whole bunch of these, so I'll start them and then have a loop to check their status and retrieve the result of the remote script executed on the hosts. I know os.system is discouraged in favour of subprocess, but this isn't essential for the use case so I don't really care about it. I'm looking for a smart way to iterate through the possible paths: given a list of jumpboxes of length n, a list of hosts of length m, and a timeout of x seconds, the naive approach takes up to n*m*x seconds, and I'd like to shorten that time.
I'm also using pexpect (which uses paramiko itself) for the interactions with the remote hosts, once I've found the correct IPs I need to open the tunnel with.
Thanks in advance!
Paramiko's exit_status_ready function will tell you whether the remote command has finished and an exit status is available (recv_exit_status fetches it):
Return true if the remote process has exited and returned an exit
status. You may use this to poll the process status if you don’t want
to block in recv_exit_status. Note that the server may not return an
exit status in some cases (like bad servers).
Looking at the source code for pexpect, I don't see where it uses Paramiko, so you may need to replace all of your pexpect code with Paramiko code. Paramiko gives you a lot of control over all of the low level aspects of establishing an SSH connection, so it can be a little rough to figure out, but it does give you a lot of control over the entire process.
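A minimal sketch of probing one host with Paramiko (host, credentials, and the function name are placeholders of mine):

import paramiko

def can_reach(host, user, password, timeout=10):
    # Run a trivial command and report whether it exited cleanly.
    client = paramiko.SSHClient()
    # Auto-accept unknown host keys; see the first answer's caveats.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        client.connect(host, username=user, password=password, timeout=timeout)
        stdin, stdout, stderr = client.exec_command('true')
        # recv_exit_status blocks until the command finishes;
        # poll exit_status_ready() instead if you don't want to block.
        return stdout.channel.recv_exit_status() == 0
    except (paramiko.SSHException, OSError):
        return False
    finally:
        client.close()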
I figured it out - pexpect's expect() returns the index of the matched pattern once a prompt appears, i.e. I did something along the lines of:
host = ''
for i in hosts:
    cmd = 'ssh {}@{} -t ssh {}'.format(user, jumpbox, i)
    try:
        p = pexpect.spawn(cmd)
        if p.expect('.*') == 0:
            host = i
            break
    except:
        someException()

if host != '':
    ...
Thanks for all the input.

Use the same SSH object to issue "exec_command()" multiple times in Paramiko

I want to use the same SSH object to issue exec_command() multiple times in Paramiko module in Python.
The objective is to get output from the same session.
Is there a way to do it? exec_command() closes the channel once it finishes executing a command, and thereafter a new SSH object is needed to execute the following command.. but then the sessions differ, which I do not want.
Code
>>> import os, sys
>>> import connectlibs as ssh
>>> s = ssh.connect("xxx.xx.xx.xxx", "Admin", "Admin")
>>> channel = s.invoke_shell()
>>> channel.send("net use F: \\\\xyz.xy.xc.xa\\dir\n")
32
>>> channel.send("net use")
7
>>> channel.recv(500)
'Last login: Tue Jun 2 23:52:29 2015 from xxx.xx.xx.xx\r\r\n\x1b]0;~\x07\r\r\n\x1b[32mAdmin@WIN \x1b[33m~\x1b[0m\r\r\n$ net use F: \\\\xyz.xy.xc.xa\\dir\r\nSystem error 67 has occurred.\r\r\n\r\r\nThe network name cannot be found.\r\r\n\r\r\n\x1b]0;~\x07\r\r\n\x1b[32mAdmin@WIN \x1b[33m~\x1b[0m\r\r\n$ net use'
Indeed, an SSH session can have multiple channels (though Paramiko possibly does not support that).
But by "session" you seem to mean a shell session. That's not what an SSH session is. It is actually the channel that corresponds to a "shell session".
In other words, even if you could open multiple "exec" channels with Paramiko over the same SSH connection (session) and call exec_command on these, the commands would get executed in different shell sessions. So it won't help you.
You can test this with the PuTTY SSH client. Recent versions support connection sharing, which basically means that you can have multiple PuTTY windows (each using its own channel) over a single SSH connection/session. If you execute a command in one PuTTY window and the command changes the environment (an environment variable or the current working directory, say), the change won't be reflected in the other PuTTY window, even though they share the same SSH connection.
So you need to execute the commands in one channel. Depending on your needs (which are still not clear), you need to use either the "exec" or the "shell" channel.
In either case you will have trouble determining where the output of one command ends and the output of the next begins, as they share the same "stream".
You can solve that by inserting a unique separator (string) in between and searching for it in the channel output stream:
channel = ssh.invoke_shell()
channel.send('ls\n')
channel.send('echo unique-string-separating-output-of-the-commands\n')
channel.send('pwd\n')
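
A rough sketch of collecting output up to the separator (simplified; note that the shell also echoes the echo command line itself, which contains the separator and which this sketch does not handle):

SEPARATOR = 'unique-string-separating-output-of-the-commands'

def read_until_separator(channel, separator=SEPARATOR):
    # Collect output until the separator shows up in the stream.
    buf = ''
    while separator not in buf:
        buf += channel.recv(1024).decode('utf-8', errors='replace')
    output, _, rest = buf.partition(separator)
    return output, rest  # 'rest' already belongs to the next command

ls_output, leftover = read_until_separator(channel)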

Running consecutive commands with python fabric module

I am writing an application that interacts with numerous systems, specifically with switches.
I am trying to implement a function that will enable me to retrieve logs from a specific switch using Fabric (Python).
In a real session on the switch I would need to first run "enable" (and press the Enter key) and then run the "debug generate dump" command.
Using fabric.operations.run() I can only issue one command at a time;
using fabric.operations.open_shell() is not an option since I need to parse the output and also close the connection once it finishes.
Can someone assist with this?
THANKS!!
Here is an example of the code:
def getSwitchLog(self, host, port, username, password):
    env.host_string = "%s:%s" % (host, port)
    env.user = username
    env.password = password
    command = 'enable \r debug generate dump'
    run(command, shell=cli, pty=True, combine_stderr=True, timeout=120)
shell=cli - because the switch does not run bash and 'cli' is the appropriate value in this case
\r should have sent the Enter key, essentially sending: 1. enable, 2. Enter, 3. debug generate dump.
This method works if I replace run with open_shell,
but it seems run ignores \r.
I was able to achieve what I need using:
command = 'sshpass -p admin ssh admin@switchIP cli \"enable\" \"show version\"'
fabric.api.local(command, capture=True, shell=None)
However, this method is not as robust as fabric.api.run() and also requires the running node to have sshpass installed.
This is an example of the output from the switch CLI as the commands entered interactively (keyboard) without fabric
[standalone: master] > enable
[standalone: master] # debug generate dump
[standalone: master] # debug generate dump Generated dump sysdump-SX6036-1-20130630-104051.tgz
[standalone: master] #
thanks.
So working with session state isn't something Fabric does: every call is a new session. There are some other projects that try to get around this, one being fexpect, but since you're attempting to query a switch I don't believe that will work, because fexpect (last I knew) uploads an expect script to the remote machine, which it then runs.
What you might have better luck with, though, is pxssh from the pexpect module. It allows simple ssh+expect-like work. It's outside Fabric, but more likely to work for you right out of the gate, I think.
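A rough pxssh sketch (the prompt pattern and credentials are assumptions about the switch, not tested against one):

from pexpect import pxssh

s = pxssh.pxssh()
# The switch prompt ends in '>' or '#' per the transcript above,
# so override pxssh's default prompt detection.
s.login('switchIP', 'admin', password='admin',
        original_prompt=r'[>#]', auto_prompt_reset=False)
s.PROMPT = r'[>#] ?'
s.sendline('enable')
s.prompt()
s.sendline('debug generate dump')
s.prompt(timeout=120)
print(s.before.decode())  # output to parse for the generated dump filename
s.logout()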
Alternatively, work with Robot Framework's SSHLibrary, which is based on paramiko. It has a simpler API (write/read_until/read_all) for interacting with your switch shell:
http://robotframework.org/SSHLibrary/latest/SSHLibrary.html
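If SSHLibrary's documented keywords map to Python methods in the usual Robot Framework way (an assumption on my part; check the linked docs), usage from plain Python might look roughly like:

from SSHLibrary import SSHLibrary

ssh = SSHLibrary()
ssh.open_connection('switchIP')
ssh.login('admin', 'admin')
ssh.write('enable')
ssh.write('debug generate dump')
output = ssh.read_until('#')  # read up to the CLI prompt
ssh.close_connection()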

How to pass SSH options with Fabric?

We are trying to improve automation of some server processes; we use Fabric. I anticipate having to manage multiple hosts, and that means that SSH connections must be made to servers that haven't been SSH'd into before. If that happens, SSH always asks for verification of connection, which will break automation.
I have worked around this issue, in the same process, using the -o stricthostkeychecking=no option on an SSH command that I use to synchronize code with rsync, but I will also need to use it on calls with Fabric.
Is there a way to pass ssh-specific options to Fabric, in particular the one I mentioned above?
The short answer is:
For new hosts, nothing is needed. env.reject_unknown_hosts defaults to False
For known hosts with changed keys, setting env.disable_known_hosts = True makes Fabric proceed with connecting to hosts whose keys have changed.
Read ye olde docs: http://docs.fabfile.org/en/1.5/usage/ssh.html#unknown-hosts
The paramiko library is capable of loading up your known_hosts file,
and will then compare any host it connects to, with that mapping.
Settings are available to determine what happens when an unknown host
(a host whose username or IP is not found in known_hosts) is seen:
Reject: the host key is rejected and the connection is not made. This results in a Python exception, which will terminate your Fabric session with a message that the host is unknown.
Add: the new host key is added to the in-memory list of known hosts, the connection is made, and things continue normally. Note that this does not modify your on-disk known_hosts file!
Ask: not yet implemented at the Fabric level, this is a paramiko library option which would result in the user being prompted about the unknown key and whether to accept it.
Whether to reject or add hosts, as above, is controlled in Fabric via
the env.reject_unknown_hosts option, which is False by default for
convenience’s sake. We feel this is a valid tradeoff between
convenience and security; anyone who feels otherwise can easily modify
their fabfiles at module level to set env.reject_unknown_hosts = True.
http://docs.fabfile.org/en/1.5/usage/ssh.html#known-hosts-with-changed-keys
Known hosts with changed keys
The point of SSH’s key/fingerprint tracking is so that
man-in-the-middle attacks can be detected: if an attacker redirects
your SSH traffic to a computer under his control, and pretends to be
your original destination server, the host keys will not match. Thus,
the default behavior of SSH (and its Python implementation) is to
immediately abort the connection when a host previously recorded in
known_hosts suddenly starts sending us a different host key.
In some edge cases such as some EC2 deployments, you may want to
ignore this potential problem. Our SSH layer, at the time of writing,
doesn’t give us control over this exact behavior, but we can sidestep
it by simply skipping the loading of known_hosts – if the host list
being compared to is empty, then there’s no problem. Set
env.disable_known_hosts to True when you want this behavior; it is
False by default, in order to preserve default SSH behavior.
Warning Enabling env.disable_known_hosts will leave you wide open to
man-in-the-middle attacks! Please use with caution.
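
Put together in a fabfile, the two settings from the short answer look like this (a sketch; enable disable_known_hosts only if you accept the MITM risk described above):

from fabric.api import env, run

# Unknown hosts: False (the default) adds new host keys to the
# in-memory list, so fresh servers never prompt.
env.reject_unknown_hosts = False

# Known hosts with changed keys: skip loading known_hosts entirely.
# Warning: this disables man-in-the-middle detection.
env.disable_known_hosts = True

def uptime():
    run("uptime")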
