Python SSH iteration through different paths

I've got a JSON object which looks like this:
{UID_1: {
    jumpboxes: [jump_ip1, jump_ip2, ...],
    hosts: [host_ip1, host_ip2, ...],
    ...},
 UID_2: {...
The authentication to the jumpboxes is via Kerberos (passwordless), the authentication to the hosts is with a password, and the hosts are only reachable via the jump hosts. Out of the lists of IPs I don't know which ones work, which are stuck or not responding, etc., so I need to find the first path that lets me open an SSH session.
What I can do is check for the exit codes when ssh-ing to the jump hosts with something like this:
jumpip = ''
for i in json[uid]['jumpboxes']:
    if os.system('ssh {}@{}'.format(username, i)) > 0:
        continue
    else:
        jumpip = i
        break
This gives me the first working jumpbox IP without issues; however, since the hosts need a password, checking the exit code of an SSH connection to the second hop isn't as easy.
There are multiple ways to open the tunnel:

with os.system() and sshpass plus a ProxyCommand through the jumpbox, something like:
if os.system('sshpass -p {} ssh -o ProxyCommand="ssh {}@{} nc {} 22" {}@{} -t {}'.format(password, user, jumpip, hosts[j], user, hosts[j], remote_cmd)) > 0: ...
(for context, the resulting command would look something like: sshpass -p Password123! ssh -o ProxyCommand="ssh user@jumpbox nc hostip 22" user@hostip -t ll);

or pinging the host in a subshell with something like os.system('ssh user@jumpbox -t ping {} -c 5'.format(hosts[j])), although ping returning an exit code and ICMP echo replies coming back don't mean I'd be able to open a tunnel (e.g. the SSH daemon could be stuck or could have crashed, etc.);

or a try-except-else block that tries to open an SSH session to the remote host via the jumpbox with pexpect or with subprocess.Popen, piping stdio so I can push the password, and raises a custom exception if that fails - but I can't figure out how to get the exit code from the ssh client, so I can check the status...
None of these is robust enough for me, so I'd rather iterate through the IPs properly, and I'm open to suggestions on how to do that.
A little bit of background: the tunnel is used to start a nohup-ed command and is then closed. The script uses multiprocessing and a pool to go through a whole bunch of these, so I'll start them and then have a loop that checks their status and retrieves the result of the remote script executed on the hosts. I know os.system is essentially superseded by subprocess and I should use that, but it isn't essential for the use case, so I don't really care about it. What I'm looking for is a smart way to iterate through the possible paths: given a jumpbox list of length n, a host list of length m, and a timeout of x seconds, the naive approach takes up to n*m*x seconds to find a path, and I'd like to shorten that.
I'm also using pexpect (which uses paramiko itself) for the interactions with the remote hosts once I've found the correct IPs to open the tunnel with.
Thanks in advance!
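
Not from the answers below, but since the question is about cutting the n*m*x worst case: here is one sketch of probing all jumpboxes concurrently with a hard ConnectTimeout, so the wall time is roughly one timeout rather than one per IP. It assumes Python 3, and probe_jumpbox / first_working_jumpbox are made-up helper names; the same idea extends to probing host IPs through a chosen jumpbox.

import subprocess
from multiprocessing.pool import ThreadPool

def probe_jumpbox(args):
    """Return the IP if a passwordless SSH to the jumpbox succeeds, else None."""
    username, ip, timeout = args
    # BatchMode=yes makes ssh fail instead of prompting; ConnectTimeout caps
    # the time spent on dead or stuck hosts.
    rc = subprocess.call(
        ['ssh', '-o', 'BatchMode=yes',
         '-o', 'ConnectTimeout={}'.format(timeout),
         '{}@{}'.format(username, ip), 'true'],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return ip if rc == 0 else None

def first_working_jumpbox(username, ips, timeout=5):
    """Probe all jumpboxes concurrently; wall time ~ timeout, not len(ips)*timeout."""
    pool = ThreadPool(len(ips))
    results = pool.map(probe_jumpbox, [(username, ip, timeout) for ip in ips])
    pool.close()
    pool.join()
    for ip in results:
        if ip is not None:
            return ip
    return None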

Paramiko's exit_status_ready function will tell you the exit status.
Return true if the remote process has exited and returned an exit status. You may use this to poll the process status if you don't want to block in recv_exit_status. Note that the server may not return an exit status in some cases (like bad servers).
Looking at the source code for pexpect, I don't see where it uses Paramiko, so you may need to replace all of your pexpect code with Paramiko code. Paramiko exposes all of the low-level aspects of establishing an SSH connection, so it can be a little rough to figure out, but it does give you a lot of control over the entire process.
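For reference, a minimal sketch of polling for and reading an exit status with Paramiko (host, user and command are placeholders, not taken from the question):

import time
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('jumpbox.example.com', username='user')  # key/GSSAPI auth assumed

channel = client.get_transport().open_session()
channel.exec_command('true')

# exit_status_ready() lets you poll without blocking; recv_exit_status()
# blocks until the remote process has finished and then returns its code.
while not channel.exit_status_ready():
    time.sleep(0.1)
print(channel.recv_exit_status())
client.close()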

I figured it out - pexpect offers an exit code if there's a prompt, i.e. I did something along the lines of
host = ''
for i in hosts:
    cmd = 'ssh {}@{} -t ssh {}'.format(user, jumpbox, i)
    try:
        p = pexpect.spawn(cmd)
        if p.expect('.*') == 0:
            host = i
            break
    except:
        someException()
if host != '':
    ...
Thanks for all the input.
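For completeness, pexpect also exposes the child's exit code directly once the spawned ssh finishes; a minimal sketch with a placeholder command (not the exact code used above):

import pexpect

p = pexpect.spawn('ssh user@jumpbox true')  # placeholder command
p.expect(pexpect.EOF)                       # wait for the child to finish
p.close()                                   # populates exitstatus/signalstatus
print(p.exitstatus, p.signalstatus)         # 0 on success, or the killing signal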

Related

How to avoid multiple ssh to contact remote?

ssh researcher@192.168.1.1 'ls somefile' > folders.txt
echo "Trying to connect and show files from Remote"
scp researcher@192.168.1.1:somefile somefile
From the code you can see that I am first using an SSH session to look for the files and then establishing another SSH session to download them. Ignore any content or syntax errors (I have replaced some confidential info).
Every time I try to connect to the remote it asks me for a password, and this alone takes 3-4 seconds; my script has 4 SSH calls, so this takes a lot of time. Instead of connecting 4 times, is there a way I can connect only once, keep the session open, and do the remaining calls over it?
Any suggestions on how to do this are welcome.
Sometimes, there is a legitimate need to open multiple connections, either sequentially or in parallel. For those cases, you can run one ssh in master mode, which establishes the connection, and run the others in control mode, which "piggyback" on the first connection, avoiding the need to authenticate again.
# * -M puts the connection in master mode
#
# * ControlPersist keeps the connection alive after the current client
# exits, for use by other clients
#
# * ControlPath specifies the socket to use. %C expands to a combination
# of the local and remote host names, the user id, and the port.
# It should be in a directory writeable only by you, but for this
# example we just put it in the current directory. The same socket
# is used by each client wishing to piggyback on the open connection.
ssh -M -o ControlPersist=yes -o ControlPath=%C researcher@192.168.1.1 ls somefile > folders.txt
scp -o ControlPath=%C researcher@192.168.1.1:somefile somefile
# -O simply sends a command to the master connection, in this case
# closing it.
ssh -o ControlPath=%C -O exit researcher@192.168.1.1 # end the session
You can automate a lot of this by adding the appropriate options to your .ssh/config file instead of repeating them on the command line. See the various Control* options in man ssh_config for more detail.
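If you are driving this from Python rather than a shell script, the same options can be passed through subprocess; a rough sketch (the socket location under ~/.ssh is my own choice, and researcher@192.168.1.1 comes from the question):

import os
import subprocess

sock = os.path.expanduser('~/.ssh/ctl-%C')      # %C is expanded by ssh itself
ctl = ['-o', 'ControlPath={}'.format(sock)]

# Master connection: authenticate once, then background it (-f) with no command (-N).
subprocess.check_call(['ssh', '-M', '-o', 'ControlPersist=yes', '-N', '-f'] + ctl +
                      ['researcher@192.168.1.1'])

# These piggyback on the master socket, so no further password prompts.
with open('folders.txt', 'w') as out:
    subprocess.check_call(['ssh'] + ctl + ['researcher@192.168.1.1', 'ls somefile'],
                          stdout=out)
subprocess.check_call(['scp'] + ctl + ['researcher@192.168.1.1:somefile', 'somefile'])

# Ask the master connection to exit when done.
subprocess.check_call(['ssh', '-O', 'exit'] + ctl + ['researcher@192.168.1.1'])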

How to properly exit from an interactive SSH session with forwarded ports launched via Python?

I'm trying to write a wrapper Python script that automatically sets up port forwards to a remote host based on some parameters, and then gives me that shell. Everything works great, up until I want to exit the shell -- at which point, the session hangs and never returns me back to Python. Here's a toy example that does the same thing:
>>> import os
>>> os.system('ssh -L8080:localhost:80 fooserver.net')
user@fooserver.net password:
[fooserver.net]$ hostname
fooserver.net
[fooserver.net]$ exit
(hangs)
I believe this has something to do with the forwarded TCP port being in "TIME_WAIT" and keeping the SSH session alive until it closes, because this doesn't happen if I never request that forwarded port locally. What's the right way to handle this? Can I capture the "exit" from inside Python and then kill the os.system() pipe or something?
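No answer was recorded here, but one workaround worth trying (an assumption, not a confirmed fix) is to keep the port forward in its own ssh process and give the interactive shell a separate connection; the interactive ssh then exits normally and Python can tear down the forwarder afterwards:

import subprocess

# Forward-only connection (-N): no shell, just the tunnel.
forwarder = subprocess.Popen(['ssh', '-N', '-L8080:localhost:80', 'fooserver.net'])
try:
    # Separate interactive session; returns as soon as the remote shell exits.
    subprocess.call(['ssh', 'fooserver.net'])
finally:
    forwarder.terminate()   # close the forward once the interactive session is done
    forwarder.wait()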

Python raw_input malfunctions and returns -bash: line 1: <INPUT>: command not found

I have written a script that establishes an SSH tunnel and connects to a database over that tunnel.
Extremely simplified nutshell (obvious parameters and extra logic omitted):
sshTunnelCmd = "ssh -N -p %s -L %s:127.0.0.1:%s -i %s %s@%s" % (
    sshport, localport, remoteport, identityfile, user, server
)
args = shlex.split(sshTunnelCmd)
tunnel = subprocess.Popen(args)
time.sleep(2)
con = MySQLdb.connect(host="127.0.0.1", port=localport, user=user, passwd=pw, db=db)
## DO THE STUFF ##
con.close()
tunnel.kill()
The shell-equivalent commands are below, and I have tested both the commands and the script to work in "clean client" conditions, i.e. after a reboot.
ssh -N -p 22 -L 5000:127.0.0.1:3306 user@server
mysql --port 5000 -h 127.0.0.1 -u dbuser -p
SSH login is with keys and in ~/.ssh/config the server is configured as
Host server
    Hostname F.Q.D.N
    Port 22
    User user
    ControlMaster auto
    ControlPath ~/.ssh/master-%r@%h:%p
    ControlPersist 600
    IdentityFile ~/.ssh/id_rsa
In the ## DO THE STUFF ## section there is code that tries to connect to the database as regular users. If an exception is raised, it asks for manual input of root credentials, creates the regular users and continues to do the stuff (ordinary queries, all tested manually and working in the Python code under clean client conditions).
ruz = raw_input('root user? ')
print (ruz)
rup = raw_input('root password? ')
print (rup)
print ("Root connecting to database.")
try:
    cxroot = MySQLdb.connect(host=host, port=port, user=ruz, passwd=rup)
    cur = cxroot.cursor()
except MySQLdb.Error, e:
    print ("Root failed, sorry.")
    print "Error %d: %s" % (e.args[0], e.args[1])
    print ("GAME OVER.")
    return -1
Under clean client, the first and some subsequent executions work well, including when I try to test the script robustness and remove the user server-side. However, at some point, it hangs in a weird way after the second raw_input in the code block above. Output:
root user? root
root
root password? s3cReTsTr1n9
-bash: line 1: s3cReTsTr1n9: command not found
The only thing I can do at this point is kill the process or hit CTRL+C, which is followed by the following traceback:
^CTraceback (most recent call last):
  File "./initdb.py", line 571, in <module>
    main()
  File "./initdb.py", line 526, in main
    connection = connectDB ('127.0.0.1', localport, dbuser, dbpw, db)
  File "./initdb.py", line 128, in connectDB
    rup = raw_input('root password? ')
KeyboardInterrupt
Another unexpected symptom I noticed is that keyboard input to the terminal window (I am running this in a bash terminal within Xubuntu 14.04LTS) becomes spuriously unresponsive, so I have to close the terminal tab and start a new tab. This clears keyboard input, but not script behaviour.
I have tried to search for a solution but the usual search engines are not helpful in my case, probably because I do not completely understand what is going on. I suspect that keyboard input is somehow redirected to a process, possibly the tunnel subprocess, but I cannot explain why the first raw_input works as expected and the second one does not.
I am also uncomfortable with the way I create the tunnel, so any advice for a more robust tunnel creation is welcome. Specifically, I would like to have more fine grained control over the tunnel creation, rather than waiting an arbitrary two seconds for the tunnel to be established because I have no feedback from that subprocess.
Thanks for sharing your time and expertise.
There are two sections to my answer: how I'd go about diagnosing this, and how I would go about doing this.
To begin with, I'd suggest using the prompt that's failing as an opportunity to do some exploration.
There are two approaches you could take here:
Just enter hostname (or whatever) to find out where it's running.
Enter bash, or if the remote end has an X server, add -X to your ssh command and then start a terminal program (xterm, gnome-terminal, etc.). In your new shell you can poke around to see what's going on.
If you determine it's running on the client side you could diagnose it with strace:
strace -f -o blah.log yourscript.py
... where you'd enter an easy to search string for the password then search for that in blah.log. Because of the -f flag it will print the PID of the process that attempted to execute it; backtracking from there you'll probably find that PID started with a fork from another PID. That PID is what tried to execute it, so you should be able to investigate from there.
As for how I'd do this: I'm still fairly new to python so I would've been inclined to use perl or expect. Down the perl path you might look at:
Net::SSH::Tunnel; this is probably the first one I'd look at using.
Use open or open3 then do something hacky like:
wait for stdout on the process to have text available; you'd have to get rid of -N for that, and you'd be at the mercy of remote auto-logout.
One of the various responses to ssh-check-if-a-tunnel-is-alive
Net::SSH::Expect (eg this post, though I didn't look at his implementation so you'd have to make your own choice on that). This or the "real" expect are probably overkill but you could find a way I'm sure.
Although Ruby has a gem like Perl's Net::SSH::Tunnel, I don't see an equivalent pip package for Python. This question and this one both discuss it, and they seem to indicate you're limited to either starting ssh as a sub-process or using paramiko.
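If you do stay with the sub-process approach, one way to avoid the question's arbitrary time.sleep(2) is to poll the forwarded local port until it accepts a TCP connection; a sketch reusing the question's tunnel and localport names:

import socket
import time

def wait_for_tunnel(localport, tunnel, timeout=15):
    """Block until the forwarded local port accepts connections, or raise."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if tunnel.poll() is not None:      # the ssh process already died
            raise RuntimeError("ssh exited with code %s" % tunnel.returncode)
        try:
            socket.create_connection(('127.0.0.1', localport), timeout=1).close()
            return
        except socket.error:
            time.sleep(0.2)                # listener not up yet, retry
    raise RuntimeError("tunnel not ready after %s seconds" % timeout)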
Are you free to configure the server as you like?
Then try a VPN connection instead of SSH port forwarding. It will reconnect more easily without affecting your application, so the tunnel may be more stable.
As for the raw_input problem, I cannot see why it happens, but maybe the ssh command running in a shell interferes with your terminal? If you really want to integrate the SSH tunnel you may want to look at some Python modules for handling SSH.
-bash: line 1: s3cReTsTr1n9: command not found
I got the same -bash: command not found error even though I was just accepting raw_input() / input(). I tried this with both Python 2.7 and 3.7.
I was trying to run a client-server program on the same Mac machine. I had two files, server.py and client.py. Every time, I first ran server.py in the background in one terminal and then ran client.py in another.
Terminal 1: python server.py &
Terminal 2: python client.py
Each time I got the error "-bash: xxxx: command not found". xxxx here is whatever input I gave.
Finally, after spending 5 hours on this, I stopped running server.py in the background.
Terminal 1: python server.py
Terminal 2: python client.py
And voilà, it worked. raw_input and input did not give me this error again.
I am not sure if this helps, but this is the only post I found on the internet with exactly the same issue as mine, so I thought it might help someone.
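Coming back to the original question's tunnel: a related thing worth checking (an assumption on my part, not something verified in this thread) is whether the background ssh shares the terminal's stdin with the script. Detaching its stdin when spawning it guarantees it cannot swallow what you type into raw_input():

import os
import shlex
import subprocess

# Placeholder command; in the question this would be shlex.split(sshTunnelCmd).
args = shlex.split("ssh -N -L 5000:127.0.0.1:3306 user@server")
with open(os.devnull, "r") as devnull:
    tunnel = subprocess.Popen(args, stdin=devnull)   # tunnel cannot read the terminal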

Paramiko simulate ssh -t option

I instantiate a paramiko channel, then I execute a command and get its output:
channel = transport.open_session()
channel.exec_command('service myservice restart')
stdout = channel.makefile('rb')
for line in stdout:
    print line,
However, after the command executes (and it does finish), iterating over the output blocks.
I tested with ssh:
ssh myhost service myservice restart # terminal gets blocked
ssh -t myhost service myservice restart # OK
So I want to simulate the "-t" option in paramiko. So far I tried:
channel = transport.open_session()
channel.get_pty()
channel.invoke_shell()
stdin, stdout = channel.makefile('wb'), channel.makefile('rb')
stdin.write('service myservice restart\n')
for line in stdout:
    print line,
But now stdout never gets closed, and the for loop never ends.
Any ideas?
It appears that invoke_shell() returns a Channel, and it looks like Channels require that you close them explicitly. I would try closing some of the channels you're opening, in particular the one returned by invoke_shell().
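Something else worth trying (not confirmed by this thread, but it is how I would approximate -t in Paramiko) is requesting a PTY on the same channel before exec_command instead of using invoke_shell; host, user and command below are placeholders:

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('myhost', username='user')            # placeholders

channel = client.get_transport().open_session()
channel.get_pty()                                     # the rough equivalent of -t
channel.exec_command('service myservice restart')

for line in channel.makefile('rb'):                   # ends when the command exits
    print(line.decode(errors='replace').rstrip())
print(channel.recv_exit_status())                     # the command's exit code
client.close()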
Have a look at the script that you're trying to run and see if there are any lines like this:
/dev/null 2>&1
I'm having the same issue as you; in my case I'm trying to remotely run a Bitnami control script. Something in your post jogged my memory and reminded me of the output redirections in the control script (these caused me some major headaches before).
Generally they're used to either ignore errors or maybe log them somewhere specific. I haven't had a chance to try yet, but maybe piping them back out at the end of the script, or, if you don't care about the response, even manually redirecting some created data out with >&2 would work.

python: interact with shell command using Popen

I need to execute several shell commands using Python, but I couldn't resolve one of the problems. When I scp to another machine, it usually prompts and asks whether to add this machine to the known hosts. I want the program to answer "yes" automatically, but I couldn't get it to work. My program so far looks like this:
from subprocess import Popen, PIPE, STDOUT

def auto():
    user = "abc"
    inst_dns = "example.com"
    private_key = "sample.sem"
    capFile = "/home/ubuntu/*.cap"

    temp = "%s@%s:~" % (user, inst_dns)
    scp_cmd = ["scp", "-i", private_key, capFile, temp]
    print ("The scp command is: %s" % " ".join(scp_cmd))
    scpExec = Popen(scp_cmd, shell=False, stdin=PIPE, stdout=PIPE)

    # this is the place I tried to write "yes"
    # but it doesn't work
    scpExec.stdin.write("yes\n")
    scpExec.stdin.flush()

    while True:
        output = scpExec.stdout.readline()
        print ("output: %s" % output)
        if output == "":
            break
If I run this program, it still prompts and asks for input. How can I respond to the prompt automatically? Thanks.
You're being prompted to add the host key to your know hosts file because ssh is configured for StrictHostKeyChecking. From the man page:
StrictHostKeyChecking
If this flag is set to “yes”, ssh(1) will never automatically add host keys to the ~/.ssh/known_hosts file, and refuses to connect to hosts whose host key has changed. This provides maximum protection against trojan horse attacks, though it can be annoying when the /etc/ssh/ssh_known_hosts file is poorly maintained or when connections to new hosts are frequently made. This option forces the user to manually add all new hosts. If this flag is set to “no”, ssh will automatically add new host keys to the user known hosts files. If this flag is set to “ask”, new host keys will be added to the user known host files only after the user has confirmed that is what they really want to do, and ssh will refuse to connect to hosts whose host key has changed.
You can set StrictHostKeyChecking to "no" if you want ssh/scp to automatically accept new keys without prompting. On the command line:
scp -o StrictHostKeyChecking=no ...
You can also enable batch mode:
BatchMode
If set to “yes”, passphrase/password querying will be disabled. This option is useful in scripts and other batch jobs where no user is present to supply the password. The argument must be “yes” or “no”. The default is “no”.
With BatchMode=yes, ssh/scp will fail instead of prompting (which is often an improvement for scripts).
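Applied to the question's scp_cmd list, that could look something like the sketch below (StrictHostKeyChecking=no trades away host-key verification, so only use it where that is acceptable):

from subprocess import Popen, PIPE

scp_cmd = ["scp",
           "-o", "StrictHostKeyChecking=no",   # accept unknown host keys
           "-o", "BatchMode=yes",              # fail instead of prompting
           "-i", "sample.sem",
           "/home/ubuntu/*.cap",
           "abc@example.com:~"]
scpExec = Popen(scp_cmd, shell=False, stdin=PIPE, stdout=PIPE)
out, _ = scpExec.communicate()
print("scp exited with %d" % scpExec.returncode)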
The best way I know to avoid being asked about fingerprint matches is to pre-populate the relevant keys in .ssh/known_hosts. In most cases, you really should already know what the remote machines' public keys are, and it is straightforward to put them in a known_hosts file that ssh can find.
In the few cases where you don't, and can't, know the remote public key, then the most correct solution depends on why you don't know. If, say, you're writing software that needs to be run on arbitrary user boxes and may need to ssh on the user's behalf to other arbitrary boxes, it may be best for your software to run ssh-keyscan on its own to acquire the ostensible remote public key, let the user approve or reject it explicitly if at all possible, and if approved, append the key to known_hosts and then invoke ssh.
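A sketch of what that pre-population could look like from Python, using ssh-keyscan (the host name is a placeholder, and ideally you verify the fetched key out of band before trusting it):

import os
import subprocess

host = "example.com"                                    # placeholder
keys = subprocess.check_output(["ssh-keyscan", host])   # public host keys, one per line
with open(os.path.expanduser("~/.ssh/known_hosts"), "ab") as fh:
    fh.write(keys)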
