I'm using Python to automate copying binaries off a network sensor with scp. I want to add some error checking, but I can't figure out how to reliably detect when SSH throws an error, such as a hostname-resolution failure. I'm currently using .communicate() to collect stdout and stderr, then matching on "ssh" at the start of the error message. The reason I check whether error starts with "ssh" is that when no error is thrown, the error variable holds the login banner of the sensor, so I don't have a reliable way to tell whether error actually contains an error (if that makes sense). I'm also checking return codes in case a file is not found or some other error is raised. Is there a better method?
This is the currently working code:
sp = Popen(['scp', '@'.join([self.user, self.sensor]) + ':{0}{1}'.format(self.binPath, self.binName),
            self.storePath], stdout=PIPE, stderr=PIPE)
data, error = sp.communicate()
if error.startswith("ssh"):
    print("ERROR: {}".format(error))
else:
    if sp.returncode == 1:
        print("ERROR: {} - No such file or directory".format(self.binPath + self.binName))
    elif sp.returncode == 0:
        self.hashCMP(self.storePath, self.binName, md5Sum)
    else:
        pass
Would one way around this be to test the hostname first? For example, using something like:
from socket import getaddrinfo
result = getaddrinfo("www.google.com", None)
print(result[0][4])
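Taking that a step further, the failure case can be handled explicitly - a minimal sketch, assuming Python 3 (getaddrinfo raises socket.gaierror when resolution fails):

import socket

def resolves(hostname):
    # getaddrinfo raises socket.gaierror on a hostname-resolution error,
    # which is exactly the failure mode the question wants to detect early.
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False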
I notice you are using Popen - if your OS has nc (netcat), you could also run the command:
nc -v <host> <port>  # I believe this uses getaddrinfo under the hood as well ;-)
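If you'd rather stay in Python than shell out to nc, the same kind of reachability test could look like this - a sketch, with placeholder host/port arguments:

import socket

def can_reach(host, port=22):
    # Resolves the name (getaddrinfo under the hood) and attempts a TCP
    # connection, much like `nc -v host port`.
    try:
        socket.create_connection((host, port), timeout=5).close()
        return True
    except OSError:  # includes socket.gaierror and socket.timeout
        return False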
Thanks,
//P
I'm a software tester trying to verify that the log on a remote QNX (a Unix-like RTOS) machine contains the correct entries after specific actions are taken. I am able to list the contents of the directory in which the log resides, and I use that information to build the command that reads the file (I really want to use tail -n XX <file>). So far, I always get "(No such file or directory)" when trying to read the file.
We are using Froglogic Squish for automated testing, because the Windows UI (that interacts with the server piece on QNX) is built using Qt extensions for standard Windows elements. Squish uses Python 2.7, so I am using Python 2.7.
I am using paramiko for the SSH connection to the QNX server. This has worked great for sending commands to the simulator piece that also runs on the QNX server.
So, here's the code. Some descriptive names have been changed to avoid upsetting my employer.
import sys
import time
import select
sys.path.append(r"C:\Python27\Lib\site-packages")
sys.path.append(r"C:\Python27\Lib\site-packages\pip\_vendor")
import paramiko

# SSH configuration variables
ssh_host = 'vvv.xxx.yyy.zzz'
thelog_dir = "/logs/the/"
ssh_user = 'un'
ssh_pw = 'pw'

def execute_Command(fullCmd):
    outptLines = []
    #
    # Try to connect to the host.
    # Retry a few times if it fails.
    #
    i = 1
    while True:
        try:
            ssh = paramiko.SSHClient()
            ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            ssh.connect(ssh_host, 22, ssh_user, ssh_pw)
            break
        except paramiko.AuthenticationException:
            log("Authentication failed when connecting to %s" % ssh_host)
            return 1
        except:
            log("Could not SSH to %s, waiting for it to start" % ssh_host)
            i += 1
            time.sleep(2)
        # If we could not connect within time limit
        if i == 30:
            log("Could not connect to %s. Giving up" % ssh_host)
            return 1
    # Send the command (non-blocking?)
    stdin, stdout, stderr = ssh.exec_command(fullCmd, get_pty=True)
    for line in iter(stdout.readline, ""):
        outptLines.append(line)
    #
    # Disconnect from the host
    #
    ssh.close()
    return outptLines

def get_Latest_Log():
    fullCmd = "ls -1 %s | grep the_2" % thelog_dir
    files = execute_Command(fullCmd)
    theFile = files[-1]
    return theFile

def main():
    numLines = 20
    theLog = get_Latest_Log()
    print("\n\nThe latest log is %s\n\n" % theLog)
    fullCmd = "cd /logs/the; tail -n 20 /logs/the/%s" % theLog
    #fullCmd = "tail -n 20 /logs/the/%s" % theLog
    print fullCmd
    logLines = execute_Command(fullCmd)
    for line in logLines:
        print line

if __name__ == "__main__":
    # execute only if run as a script
    main()
I have tried to read the file using both tail and cat. I have also tried to get and open the file using Paramiko's SFTP client.
In all cases, the attempt to read the file fails, despite the fact that listing the contents of the directory works fine (?!). And BTW, the log file is supposed to be readable by 'world'; permissions are -rw-rw-r--.
The output I get is:
C:\Users\xsat086\Documents\paramikoTest>python SSH_THE_MsgChk.py
The latest log is the_20210628_115455_205.log
cd /logs/the; tail -n 20 /logs/the/the_20210628_115455_205.log
(No such file or directory)the/the_20210628_115455_205.log
The file name is correct. If I copy and paste the tail command into an interactive SSH session with the QNX server, it works fine.
Is it something to do with the 'non-interactive' nature of this method of sending commands? I read that some implementations of SSH are built upon a command that offers a very limited environment. I don't see how that would impact this tail command.
Or am I doing something stupid in this code?
I cannot fully explain why you get the results you get.
But in general, corrupted output like this is the result of enabling terminal emulation and not handling it. You enable terminal emulation with get_pty=True. Remove it: you should not use terminal emulation when automating command execution.
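A minimal change, assuming the rest of execute_Command stays as posted:

# Without a pseudo-terminal, the server sends plain output, so carriage
# returns no longer overwrite and mangle the tail error message.
stdin, stdout, stderr = ssh.exec_command(fullCmd)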
Related question:
Is there a simple way to get rid of junk values that come when you SSH using Python's Paramiko library and fetch output from CLI of a remote machine?
I have a follow-up question that builds on the question I asked here: Run multiple commands in different SSH servers in parallel using Python Paramiko, which was already answered.
Thanks to the answer on the link above, my python script is as follows:
# SSH.py
import paramiko
import argparse
import os

path = "path"
python_script = "worker.py"

# definitions for ssh connection and cluster
ip_list = ['XXX.XXX.XXX.XXX', 'XXX.XXX.XXX.XXX', 'XXX.XXX.XXX.XXX']
port_list = [':XXXX', ':XXXX', ':XXXX']
user_list = ['user', 'user', 'user']
password_list = ['pass', 'pass', 'pass']

node_list = list(map(lambda x: f'-node{x + 1} ', list(range(len(ip_list)))))
cluster = ' '.join([node + ip + port for node, ip, port in zip(node_list, ip_list, port_list)])

# run script on command line of local machine
os.system(f"cd {path} && python {python_script} {cluster} -type worker -index 0 -batch 64 > {path}/logs/'command output'/{ip_list[0]}.log 2>&1")

# loop for IP and password
stdouts = []
clients = []
for i, (ip, user, password) in enumerate(zip(ip_list[1:], user_list[1:], password_list[1:]), 1):
    try:
        print("Open session in: " + ip + "...")
        client = paramiko.SSHClient()
        client.connect(ip, username=user, password=password)
    except paramiko.SSHException:
        print("Connection Failed")
        quit()

    try:
        path = f"C:/Users/{user}/Desktop/temp-ines"
        stdin, stdout, stderr = client.exec_command(
            f"cd {path} && python {python_script} {cluster} -type worker -index {i} -batch 64 > "
            f"C:/Users/{user}/Desktop/{ip}.log 2>&1 &"
        )
        clients.append(client)
        stdouts.append(stdout)
    except paramiko.SSHException:
        print("Cannot run file. Continue with other IPs in list...")
        client.close()
        continue

# Wait for commands to complete
for i in range(len(stdouts)):
    print("hello")
    stdouts[i].read()
    print("hello1")
    clients[i].close()
    print("hello2")

print("\n\n***********************End execution***********************\n\n")
This script, which is run locally, is able to SSH into the servers and run the command (i.e., run a python script called worker.py and log the command output to a log file); that is, it gets through the first for loop with no issues.
My issue is with the second for loop; please see the print statements I added there to be clear. When I run SSH.py locally, I SSH into each of the servers and then sit reading the command output of the first server I connected to. The worker.py script can take 30 minutes or so to complete, and the command output is the same on each server - so it takes 30 minutes to read the command output of the first server, then its SSH connection closes, then a couple of seconds to read the output of the second server (which is identical and already fully printed), then its connection closes, and so on. Please see below some of the command-line output, if this helps.
Now, my question is: what if I don't want to wait the entire 30 minutes for worker.py to finish? I cannot (or do not know how to) raise a KeyboardInterrupt exception. What I have tried is quitting the local SSH.py script. However, as the print statements show, this does not close the SSH connections, although the training, and thus the log files, stops logging. In addition, after I quit the local SSH.py script, if I try to delete any of the log files, I sometimes get an error saying "cannot delete file because it is being used in cmd.exe" - I believe this is because the SSH connections are not closed?
First run in the Python console: it hangs. The local python process and log file run and save, but there are no print statements and nothing runs or saves on the servers.
When I run it again so a second process starts, the first process stops hanging (python runs and log files are saved on the server), and I can close this second run/process. It is as if the second run/process clears the hang of the first one.
If I run python SSH.py in the terminal, it just hangs.
This was not happening before.
If you know that SSHClient.close cleanly closes the connection and aborts the remote command, call it in response to KeyboardInterrupt.
For this you cannot use the simple solution with stdout.read, as it blocks and prevents handling of Ctrl+C on Windows.
Use the waiting code from my answer to Run multiple commands in different SSH servers in parallel using Python Paramiko (the while any(x is not None for x in stdouts): snippet), and wrap it in try: ... except KeyboardInterrupt:.
import time  # needed for time.sleep below

try:
    while any(x is not None for x in stdouts):
        for i in range(len(stdouts)):
            stdout = stdouts[i]
            if stdout is not None:
                channel = stdout.channel
                # To prevent losing output at the end, first test for exit,
                # then for output
                exited = channel.exit_status_ready()
                while channel.recv_ready():
                    s = channel.recv(1024).decode('utf8')
                    print(f"#{i} stdout: {s}")
                while channel.recv_stderr_ready():
                    s = channel.recv_stderr(1024).decode('utf8')
                    print(f"#{i} stderr: {s}")
                if exited:
                    print(f"#{i} done")
                    clients[i].close()
                    stdouts[i] = None
        time.sleep(0.1)
except KeyboardInterrupt:
    print("Aborting")
    for i in range(len(clients)):
        print(f"#{i} closing")
        clients[i].close()
If you do not need to separate the stdout and stderr, you can greatly simplify the code by using Channel.set_combine_stderr. See Paramiko ssh die/hang with big output.
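A sketch of that simplification, assuming client is a connected SSHClient and cmd is the command to run; set_combine_stderr has to be called before the command produces output, so the channel is opened manually instead of through exec_command:

import time

channel = client.get_transport().open_session()
channel.set_combine_stderr(True)  # merge stderr into the stdout stream
channel.exec_command(cmd)

# Drain the combined output until the command exits and the buffer is empty.
while True:
    while channel.recv_ready():
        print(channel.recv(1024).decode('utf8'), end='')
    if channel.exit_status_ready() and not channel.recv_ready():
        break
    time.sleep(0.1)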
I created a script that runs on boot, which checks whether my Raspberry Pi has an internet connection and at the same time updates the time (care of NTP) via os.system().
import datetime, os, socket, subprocess, time
from time import sleep

dir_path = os.path.dirname(os.path.abspath(__file__))

def internet(host="8.8.8.8"):
    result = subprocess.call("ping -c 1 " + host, stdout=open(os.devnull, 'w'), shell=True)
    if result == 0:
        return True
    else:
        return False

timestr = time.strftime("%Y-%m-%d--%H:%M:%S")
netstatus = internet()
while netstatus == False:
    sleep(30)
    netstatus = internet()
if netstatus == True:
    print "successfully connected! updating time . . . "
    os.system("sudo bash " + dir_path + "/updatetime.sh")
    print "time updated! time check %s" % datetime.datetime.now()
where updatetime.sh contains the following:
service ntp stop
ntpd -q -g
service ntp start
This script runs at boot/reboot, and I'm running it in our workplace, 24/7. Output from scripts like these is saved in a log file. It's working fine, but is there a way to NOT output connect: Network is unreachable when there's no internet connection? Thanks.
edit
I run this script via a shell script I named launch.sh, which runs check_net.py (this script's name) and other preliminary scripts, and I placed launch.sh in my crontab to run on boot/reboot:
@reboot sh /home/pi/launch.sh > /home/pi/logs/cronlog 2>&1
From what I've read in this thread: What does "> /dev/null 2>&1" mean?, 2 handles stderr whereas 1 handles stdout.
I am new to this. I wish to see my stdout - but not the stderr (in this case, only the connect: Network is unreachable messages).
/ogs
As per @shellter's link suggestion in the comments, I restructured my cron entry to:
@reboot sh /home/pi/launch.sh 2>&1 > /home/pi/logs/cronlog | grep -v "connect: Network is unreachable"
Alternatively, I came up with a different way of checking the internet connection, using urllib2.urlopen():

import urllib2

def internet_init():
    try:
        urllib2.urlopen('https://www.google.com', timeout=1)
        return True
    except urllib2.URLError:
        return False
Either of the two methods above omitted the connect: Network is unreachable error output from my logs.
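For completeness, a third option might be to silence ping's stderr at the source, the same way its stdout is already discarded - a sketch of the internet() check above with that change:

import os, subprocess

def internet(host="8.8.8.8"):
    # Send both stdout and stderr to /dev/null, so "connect: Network is
    # unreachable" never reaches the log; the return code alone signals
    # success or failure.
    with open(os.devnull, 'w') as devnull:
        result = subprocess.call("ping -c 1 " + host,
                                 stdout=devnull, stderr=devnull, shell=True)
    return result == 0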
thanks!
/ogs
I have a class that creates the connection. I can connect and execute one command before the channel is closed. On another system I can execute multiple commands and the channel does not close. Obviously it's a config issue with the systems I am trying to connect to.
import sys
import paramiko

class connect:
    newconnection = ''
    def __init__(self, username, password):
        ssh = paramiko.SSHClient()
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        try:
            ssh.connect('somehost', username=username, password=password, port=2222, timeout=5)
        except:
            print "Could not connect"
            sys.exit()
        self.newconnection = ssh
    def con(self):
        return self.newconnection
Then I use the 'ls' command just to print some output
sshconnection = connect('someuser','somepassword').con()
stdin, stdout, stderr = sshconnection.exec_command("ls -lsa")
print stdout.readlines()
print stdout
stdin, stdout, stderr = sshconnection.exec_command("ls -lsa")
print stdout.readlines()
print stdout
sshconnection.close()
sys.exit()
After the first exec_command runs, it prints the expected output of the dir listing. When I print stdout after the first exec_command, it looks like the channel is closed:
<paramiko.ChannelFile from <paramiko.Channel 1 (closed) -> <paramiko.Transport at 0x2400f10L (cipher aes128-ctr, 128 bits) (active; 0 open channel(s))>>>
Like I said, on another system I am able to keep running commands and the connection doesn't close. Is there a way I can keep this one open? Or a better way to see the reason why it closes?
edit: So it looks like you can only run one command per SSHClient.exec_command... so I decided to use get_transport().open_session() and then run a command. The first one always works. The second one always fails and the script just hangs.
With just paramiko, after exec_command executes, the channel is closed and ssh returns an auth prompt.
It seems this is not possible with just paramiko; try fabric or another tool.
** fabric did not work out either.
Please see the following reference, as it provides a way to do this in Paramiko:
How do you execute multiple commands in a single session in Paramiko? (Python)
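The gist of that approach is to keep one interactive shell channel open and send each command to it, instead of calling exec_command once per command. A rough sketch, assuming the sshconnection object from above (the fixed sleep is a crude stand-in for reading until the shell prompt):

import time

channel = sshconnection.invoke_shell()  # one shell session for all commands

def run(cmd):
    channel.send(cmd + '\n')
    time.sleep(1)  # crude; a real script would read until the prompt reappears
    output = ''
    while channel.recv_ready():
        output += channel.recv(4096)
    return output

print run('ls -lsa')
print run('pwd')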
It's possible with netmiko (tested on Windows).
This example is written for connecting to Cisco devices, but the principle is adaptable to others as well.
import netmiko
from netmiko import ConnectHandler
import json

def connect_enable_silent(ip_address, ios_command):
    with open("credentials.txt") as line:
        line_1 = json.load(line)
    for k, v in line_1.items():
        router = (k, v)
        try:
            ssh = ConnectHandler(**router[1], device_type="cisco_ios", ip=ip_address)
            ssh.enable()
        except netmiko.ssh_exception.NetMikoAuthenticationException:
            # incorrect credentials
            continue
        except netmiko.ssh_exception.NetMikoTimeoutException:
            # oddly enough, if it can log in but cannot authenticate to enable
            # mode, ssh.enable() raises a time-out error rather than an
            # authentication error
            try:
                ssh = ConnectHandler(username=router[1]['username'],
                                     password=router[1]['password'],
                                     device_type="cisco_ios", ip=ip_address)
            except netmiko.ssh_exception.NetMikoTimeoutException:
                # connection timed out (ssh not enabled on device, try telnet)
                continue
            except Exception:
                continue
            else:
                output = ssh.send_command(ios_command)
                ssh.disconnect()
                if "at '^' marker." in output:
                    # trying to run a command that requires enable mode while
                    # not authenticated to enable mode
                    continue
                return output
        except Exception:
            continue
        else:
            output = ssh.send_command(ios_command)
            ssh.disconnect()
            return output

output = connect_enable_silent(ip_address, ios_command)
for line in output.split('\n'):
    print(line)
credentials.txt is meant to store different sets of credentials, in case you plan to call this function to access multiple devices that do not all use the same credentials. It is in the format:
{"credentials_1":{"username":"username_1","password":"password_1","secret":"secret_1"},
"credentials_2":{"username":"username_2","password":"password_2","secret":"secret_2"},
"credentials_3": {"username": "username_3", "password": "password_3"}
}
The exceptions can be changed to do different things; in my case I just needed the function not to return an error and to continue trying the next set, which is why most exceptions are silenced.
I'm using a simple pexpect script to ssh to a remote machine and grab a value returned by a command.
Is there any way, with pexpect or SSH itself, to ignore the Unix greeting?
That is, from
child = pexpect.spawn('/usr/bin/ssh %s@%s' % (rem_user, host))
child.expect('[pP]assword: ', timeout=5)
child.sendline(spass)
child.expect([pexpect.TIMEOUT, prompt])
child.before = '0'
child.sendline('%s' % cmd2exec)
child.expect([pexpect.EOF, prompt])
# Collected data processing
result = child.before
# logging on to the machine returns a lot of garbage; the output of the
# executed command is at the 57th position
print result.split('\r\n')[57]
result = result.split('\r\n')[57]
How can I simply get the returned value, ignoring the "Last successful login" and "(c) Copyright" stuff, and without having to worry about the value's exact position?
Thanks!
If you have access to the server to which you are logging in, you can try creating a file named .hushlogin in the home directory. The presence of this file silences the standard MOTD greeting and similar stuff.
Alternatively, try ssh -T, which will disable pseudo-terminal allocation entirely; you won't get a shell prompt, but you can still issue commands and read the responses.
There is also a similar thread on ServerFault which may be of some use to you.
If the command isn't interactive, you can just run ssh HOST COMMAND to run the command without all the login excitement happening at all. If the command is interactive, you can frequently use the ssh -t option (ssh -t HOST COMMAND) to force pseudo-tty allocation and trick the remote process to think that it's running attached to a TTY.
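A sketch of that first suggestion using pexpect's own helper, reusing the question's rem_user, host, spass, and cmd2exec variables; the events mapping answers the password prompt, and since no remote login shell is started, no greeting appears in the output:

import pexpect

# Run the command directly over ssh; only the command's own output (plus
# the echoed password prompt line) is captured.
output = pexpect.run(
    '/usr/bin/ssh %s@%s %s' % (rem_user, host, cmd2exec),
    events={'(?i)password: ': spass + '\n'},
    timeout=30,
)
print output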
I have used paramiko to automate SSH connections, and I have found it useful. It can deal with greetings and execute commands silently.
http://www.lag.net/paramiko/
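A minimal sketch of that route, assuming password auth and the question's variable names; exec_command runs without a remote shell, so the MOTD/banner never shows up in stdout:

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(host, username=rem_user, password=spass)
stdin, stdout, stderr = client.exec_command(cmd2exec)
result = stdout.read().strip()  # just the command's output, no greeting
client.close()
print result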
Hey there, you can kill all that noise by using the sys module and a small class:
import logging
import sys

class StreamToLogger(object):
    """
    Fake file-like stream object that redirects writes to a logger instance.
    """
    def __init__(self, logger, log_level=logging.INFO):
        self.logger = logger
        self.log_level = log_level
        self.linebuf = ''

    def write(self, buf):
        for line in buf.rstrip().splitlines():
            self.logger.log(self.log_level, line.rstrip())

# Redirect stdout and stderr to dedicated loggers
stdout_logger = logging.getLogger('STDOUT')
sl = StreamToLogger(stdout_logger, logging.INFO)
sys.stdout = sl

stderr_logger = logging.getLogger('STDERR')
sl = StreamToLogger(stderr_logger, logging.ERROR)
sys.stderr = sl
Can't remember where I found that snippet, but it works for me :)