paramiko errno 110, Python script fails to connect from a specific host

I have written a paramiko script to batch-transfer files with sftp. The script works fine on my development machine -- Linux Mint 13, using Python 2.7.
When I moved the script to the production system, I found I had to build Python from scratch on it since the system Python was too old. So I built Python 2.7 on it -- CentOS -- and then attempted to run my script. It failed with:
paramiko.SSHException - Errno 110, connection timeout
I've googled for that exception but didn't find anything that seemed to fit. The script seems to 'hang' and then time out on the paramiko.Transport((host, port)) part.
I thought this was strange, so I attempted an sftp using OpenSSH from that system, just to confirm the remote host was responsive. It was -- and it worked.
So now I've gone back to my script and simplified everything so it makes a bare-bones connection. Still, I get a connection timeout. I don't know how to turn up debugging with paramiko. Any suggestions?
Here's the basic script:
import os.path
import sys
import traceback
import paramiko
host = 'sftp.host.com'
user = 'user'
pw = 'password'
storepath = '/home/ftpservice/download'
is_dir = lambda x: oct(x)[1:3] == '40'
is_file = lambda x: oct(x)[1:3] == '10'
tp = paramiko.Transport((host, 22))
print 'tp is made, connecting '
tp.connect(username=user, password=pw, hostkey=None)
sftp = tp.open_sftp_client()
print 'sftp client made, now listing files'
filelist = sftp.listdir('.')
print filelist
for i in filelist:
    fs = sftp.stat(i)
    print "file is %s " % i
    print "stmode %s" % sftp.stat(i).st_mode
    if is_dir(sftp.stat(i).st_mode):
        print "%s is a directory " % i
    elif is_file(sftp.stat(i).st_mode):
        print "%s is a file " % i
    else:
        print "no clue what %s is " % i

Related

process local files on a remote server

I have a bunch of files on my workstation. I wrote a multiprocessing Python 3 script which runs fine. My workstation is a tiny PC, but I have a big 40-thread server which I would like to utilize with this software. I know I can manually rsync the files to the server, execute the script there, and rsync the results back, but programming and then doing things by hand isn't fun ;-)
Thus, I would like the local files on my tiny PC to be transferred to the server, processed there with my script, and the output (.csv files and plots) transferred back to my workstation.
How can I do that?
I think paramiko would be the way to go. This is what I have:
import paramiko
import sysrsync
from datetime import date
import time

today = date.today()
today_dmy = today.strftime("%d.%m.%Y")

input_local = '/mnt/c/Users/user/Documents/input/' + today_dmy
output_local = '/mnt/c/Users/user/Documents/output'
input_external = '/home/user/input/' + today_dmy
output_external = '/home/user/output/' + today_dmy
ip = '192.168.10.6'
key = '/home/user/.ssh/id_rsa'

sysrsync.run(source=input_local,
             destination=input_external,
             destination_ssh=ip,
             options=['-avz'],
             private_key=key)

ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
session = ssh_client.connect(hostname=ip, username='user', password='pwd')

stdin, stdout, stderr = ssh_client.exec_command('python3 /home/user/Programm.py')
print(stderr.readlines())
print(stdout.readlines())
time.sleep(10)

sysrsync.run(source=output_external,
             destination=output_local,
             source_ssh=ip,
             options=['-avz'],
             private_key=key)

ssh_client.close()
The rsync part is working, but I get an error that the modules imported in "Programm.py" aren't available. However, on the server these modules ARE installed and working (if I log in to the server and execute "python3 Programm.py" by hand, it works).
This is the error from "stderr":
['Traceback (most recent call last):\n', ' File "/home/user/Programm.py", line 8, in <module>\n', ' import pandas as pd\n', "ModuleNotFoundError: No module named 'pandas'\n"]
To me it seems that the code tries to execute the local python3 and not the one on the server. Or am I wrong?
What do I have to do / what's wrong?
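One possible cause worth ruling out (an assumption, not something confirmed in the question): exec_command does run on the server, but in a non-interactive shell, so PATH changes, virtualenv activation, or conda initialisation from the login profile are not applied, and python3 can resolve to a different interpreter than the one an interactive login uses. A minimal sketch of two workarounds (the interpreter path below is hypothetical):

# force a login shell so the server-side profile (and its PATH / env setup) is sourced
stdin, stdout, stderr = ssh_client.exec_command('bash -lc "python3 /home/user/Programm.py"')

# or pin the exact interpreter that has pandas installed
# (find it on the server with: python3 -c "import sys; print(sys.executable)")
stdin, stdout, stderr = ssh_client.exec_command('/usr/bin/python3 /home/user/Programm.py')
print(stderr.readlines())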

Command output is corrupted when executed using Python Paramiko exec_command

I'm a software tester, trying to verify that the log on a remote QNX (a BSD variant) machine will contain the correct entries after specific actions are taken. I am able to list the contents of the directory in which the log resides, and use that information in the command to read the file (I really want to use tail -n XX <file>). So far, I always get a "(No such file or directory)" error when trying to read the file.
We are using Froglogic Squish for automated testing, because the Windows UI (that interacts with the server piece on QNX) is built using Qt extensions for standard Windows elements. Squish uses Python 2.7, so I am using Python 2.7.
I am using paramiko for the SSH connection to the QNX server. This has worked great for sending commands to the simulator piece that also runs on the QNX server.
So, here's the code. Some descriptive names have been changed to avoid upsetting my employer.
import sys
import time
import select
sys.path.append(r"C:\Python27\Lib\site-packages")
sys.path.append(r"C:\Python27\Lib\site-packages\pip\_vendor")
import paramiko

# Import SSH configuration variables
ssh_host = 'vvv.xxx.yyy.zzz'
thelog_dir = "/logs/the/"
ssh_user = 'un'
ssh_pw = 'pw'

def execute_Command(fullCmd):
    outptLines = []
    #
    # Try to connect to the host.
    # Retry a few times if it fails.
    #
    i = 1
    while True:
        try:
            ssh = paramiko.SSHClient()
            ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            ssh.connect(ssh_host, 22, ssh_user, ssh_pw)
            break
        except paramiko.AuthenticationException:
            log("Authentication failed when connecting to %s" % ssh_host)
            return 1
        except:
            log("Could not SSH to %s, waiting for it to start" % ssh_host)
            i += 1
            time.sleep(2)
        # If we could not connect within time limit
        if i == 30:
            log("Could not connect to %s. Giving up" % ssh_host)
            return 1
    # Send the command (non-blocking?)
    stdin, stdout, stderr = ssh.exec_command(fullCmd, get_pty=True)
    for line in iter(stdout.readline, ""):
        outptLines.append(line)
    #
    # Disconnect from the host
    #
    ssh.close()
    return outptLines

def get_Latest_Log():
    fullCmd = "ls -1 %s | grep the_2" % thelog_dir
    files = execute_Command(fullCmd)
    theFile = files[-1]
    return theFile

def main():
    numLines = 20
    theLog = get_Latest_Log()
    print("\n\nThe latest log is %s\n\n" % theLog)
    fullCmd = "cd /logs/the; tail -n 20 /logs/the/%s" % theLog
    #fullCmd = "tail -n 20 /logs/the/%s" % theLog
    print fullCmd
    logLines = execute_Command(fullCmd)
    for line in logLines:
        print line

if __name__ == "__main__":
    # execute only if run as a script
    main()
I have tried to read the file using both tail and cat. I have also tried to get and open the file using Paramiko's SFTP client.
In all cases, the attempt to read the file fails -- despite the fact that listing the contents of the directory works fine. (?!) And BTW, the log file is supposed to be readable by 'world'; permissions are -rw-rw-r--.
The output I get is:
"C:\Users\xsat086\Documents\paramikoTest>python SSH_THE_MsgChk.py
The latest log is the_20210628_115455_205.log
cd /logs/the; tail -n 20 /logs/the/the_20210628_115455_205.log
(No such file or directory)the/the_20210628_115455_205.log"
The file name is correct. If I copy and paste the tail command into an interactive SSH session with the QNX server, it works fine.
Is it something to do with the 'non-interactive' nature of this method of sending commands? I read that some implementations of SSH are built upon a command that offers a very limited environment. I don't see how that would impact this tail command.
Or am I doing something stupid in this code?
I cannot completely explain why you get the results you get.
But in general, corrupted output is a result of enabling, but not handling, terminal emulation. You enable terminal emulation using get_pty=True. Remove it. You should not use terminal emulation when automating command execution.
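A minimal sketch of the adjusted call, reusing the names from the question's execute_Command (everything else in the function stays the same):

# without get_pty=True the server does not inject terminal control sequences
# or carriage returns into the output, so the file name stays intact
stdin, stdout, stderr = ssh.exec_command(fullCmd)
for line in iter(stdout.readline, ""):
    outptLines.append(line)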
Related question:
Is there a simple way to get rid of junk values that come when you SSH using Python's Paramiko library and fetch output from CLI of a remote machine?

How to access a different/remote Windows machine using Python?

I am currently using machine A and I am trying to access machine B via Python to copy files from machine B to machine A.
I have already tried the methods explained here: How to connect to a remote Windows machine to execute commands using python?, but with no luck, as I cannot manage to even get access to the remote machine.
I am open to other solutions, even better if using Python 3+.
Here is an example of the code in use.
ip = r'\\IP.IP.IP.IP'
username = r'AccountUserName'
password = r'AccountPassword'

# -------------------------------- with win32net
import win32net
import win32file

data = {
    'remote': r'\\IP.IP.IP.IP\C$',
    'local': 'C:',
    'username': username,
    'password': password
}
win32net.NetUseAdd(None, 2, data)

# -------------------------------- with wmi
import wmi
from socket import *

try:
    print ("Establishing connection to %s" % ip)
    connection = wmi.WMI(ip, user=username, password=password)
    print ("Connection established")
except wmi.x_wmi:
    print ("Your Username and Password of " + getfqdn(ip) + " are wrong.")
Using the win32net method
According to the documentation here https://learn.microsoft.com/en-us/windows/win32/api/lmuse/nf-lmuse-netuseadd
If the function is to be run from the same computer the script is running on (A), then the first parameter of NetUseAdd can be left as None, but with that I get the error
pywintypes.error: (87, 'NetUseAdd', 'The parameter is incorrect.')
Whilst if I change it to "127.0.0.1" I get the error
pywintypes.error: (50, 'NetUseAdd', 'The request is not supported.')
And lastly, if I change it to the same IP that I am trying to access, I get the error
pywintypes.error: (1326, 'NetUseAdd', 'Logon failure: unknown user name or bad password.')
Using the wmi method
It gives the error
Your Username and Password of \\IP.IP.IP.IP are wrong.
There can be multiple ways to achieve this. One of them is given below; it makes use of built-in Windows utilities.
import os

machine_b = {"ip": "10.197.145.244", "user": "administrator", "pwd": "abc1234"}
src = r"C:\Temp"  # folder to copy from remote machine
dest = r"C:\Python27\build\temp"  # destination folder on host machine
network_drive_letter = "Z:"
source_driver_letter = os.path.splitdrive(src)[0][0]
# map the remote machine's administrative share to a local drive letter
cmd = r"net use %s \\%s\%s$ %s /u:%s" % (network_drive_letter, machine_b["ip"], source_driver_letter, machine_b["pwd"], machine_b["user"])
os.system(cmd)
# mirror the (now locally mapped) remote folder into the destination
cmd = "robocopy %s %s /mir" % (src.replace(source_driver_letter + ":", network_drive_letter), dest)
os.system(cmd)
You can improve this code by handling exceptions and replacing os.system with subprocess.Popen calls.
Note: Be careful with the /MIR switch, as it can delete as well as copy files on the host machine: it makes the destination an exact mirror of the source folder.
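Following up on the subprocess suggestion above, a minimal sketch of the same two commands with argument lists instead of shell strings (an illustration, not a tested drop-in; the variable names are the ones from the snippet above):

import subprocess

# map the administrative share; check_call raises if "net use" exits non-zero
subprocess.check_call(["net", "use", network_drive_letter,
                       r"\\%s\%s$" % (machine_b["ip"], source_driver_letter),
                       machine_b["pwd"], "/u:%s" % machine_b["user"]])

# robocopy uses exit codes 0-7 for success and 8+ for failure,
# so check the return code by hand instead of using check_call
rc = subprocess.call(["robocopy",
                      src.replace(source_driver_letter + ":", network_drive_letter),
                      dest, "/mir"])
if rc >= 8:
    raise RuntimeError("robocopy failed with exit code %d" % rc)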

Creating and logging into a linux virtual machine in automation with python

I currently have a working Python script that SSHs into a remote Linux machine and executes commands on that machine. I'm using paramiko to handle the SSH connectivity. Here is the code in action, executing a hostname -s command:
blade = '192.168.1.15'
username = 'root'
password = ''
# now, connect
try:
    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.set_missing_host_key_policy(paramiko.WarningPolicy())
    print '*** Connecting...'
    client.connect(blade, 22, username, password)
    # print hostname for verification
    stdin, stdout, stderr = client.exec_command('hostname --short')
    print stdout.readlines()
except Exception, e:
    print '*** Caught exception: %s: %s' % (e.__class__, e)
    traceback.print_exc()
    try:
        client.close()
    except:
        pass
    sys.exit(1)
This works fine, but what I'm actually trying to do is more complicated. What I would actually like to do is SSH into that same Linux machine, as I did above, but then create a temporary virtual machine on it, and execute a command on that virtual machine. Here is my (nonworking) attempt:
blade = '192.168.1.15'
username = 'root'
password = ''
# now, connect
try:
    # client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.set_missing_host_key_policy(paramiko.WarningPolicy())
    print '*** Connecting...'
    client.connect(blade, 22, username, password)
    # create VM, log in, and print hostname for verification
    stdin, stdout, stderr = client.exec_command('sudo kvm -m 1024 -drive file=/var/lib/libvirt/images/oa4-vm$
    time.sleep(60) #delay to allow VM to initialize
    stdin.write(username + '\n') #log into VM
    stdin.write(password + '\n') #log into VM
    stdin, stdout, stderr = client.exec_command('hostname --short')
    print stdout.readlines()
except Exception, e:
    print '*** Caught exception: %s: %s' % (e.__class__, e)
    traceback.print_exc()
    try:
        client.close()
    except:
        pass
    sys.exit(1)
When I run this, I get the following:
joe#computer:~$ python automata.py
*** Connecting...
/home/joe/.local/lib/python2.7/site-packages/paramiko/client.py:95: UserWarning: Unknown ssh-rsa host key for 192.168.1.15: 25f6a84613a635f6bcb5cceae2c2b435
(key.get_name(), hostname, hexlify(key.get_fingerprint())))
*** Caught exception: <class 'socket.error'>: Socket is closed
Traceback (most recent call last):
File "automata.py", line 32, in function1
stdin.write(username + '\n') #log into VM
File "/home/joe/.local/lib/python2.7/site-packages/paramiko/file.py", line 314, in write
self._write_all(data)
File "/home/joe/.local/lib/python2.7/site-packages/paramiko/file.py", line 439, in _write_all
count = self._write(data)
File "/home/joe/.local/lib/python2.7/site-packages/paramiko/channel.py", line 1263, in _write
self.channel.sendall(data)
File "/home/joe/.local/lib/python2.7/site-packages/paramiko/channel.py", line 796, in sendall
raise socket.error('Socket is closed')
error: Socket is closed
I'm not sure how to interpret this error -- "socket is closed" makes me think the SSH connection is terminating once I try to create the VM. Does anyone have any pointers?
update
I'm attempting to use the pexpect wrapper and having trouble getting it to interact with the un/pw prompt. I'm testing the process by ssh'ing into a remote machine and running a test.py script which prompts me for a username, then saves the username in a text file. Here is my fab file:
env.hosts = ['hostname']
env.user = 'userame'
env.password = 'password'

def vm_create():
    run("python test.py")
And the contents of test.py on the remote machine are:
#! /usr/bin/env python
uname = raw_input("Enter Username: ")
f = open('output.txt', 'w')
f.write(uname + "\n")
f.close()
So, I can execute "fab vm_create" on the local machine and it successfully establishes the SSH connection and prompts me for the username, as defined by test.py. However, if I execute a third python file on my local machine with the pexpect wrapper, like this:
import pexpect
child = pexpect.spawn('fab vm_create')
child.expect ('Enter Username: ')
child.sendline ('password')
Nothing seems to happen. I get no errors, and no output.txt is created on the remote machine. Am I using pexpect incorrectly?
As much as I love paramiko, this may be better suited to using Fabric.
Here's a sample fabfile.py:
from fabric.api import run
from fabric.api import sudo
from fabric.api import env

env.user = 'root'
env.password = ''
env.hosts = ['192.168.1.15']

def vm_up():
    sudo("kvm -m 1024 -drive file=/var/lib/libvirt/images/oa4-vm$...")
    run("hostname --short")
To then run this, use
$ fab vm_up
If you don't set the host and password in the fabfile itself (rightly so), then you can set these at the command line:
$ fab -H 192.168.1.15 -p PASSWORD vm_up
However, your kvm line is still expecting input. To send input (and wait for the expected prompts), write another script that uses pexpect to call fab:
import pexpect

child = pexpect.spawn('fab vm_up')
child.expect('username:')  # Put this in the format you're expecting
child.send('root')
use fabric http://docs.fabfile.org/en/1.8/
Fabric is a Python (2.5 or higher) library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks
from fabric.api import run

def host_name():
    run('hostname -s')

Issues trying to SSH into a fresh EC2 instance with Paramiko

I'm working on a script that spins up a fresh EC2 instance with boto and uses the Paramiko SSH client to execute remote commands on the instance. For whatever reason, the Paramiko client is unable to connect; I get the error:
Traceback (most recent call last):
File "scripts/sconfigure.py", line 29, in <module>
ssh.connect(instance.ip_address, username='ubuntu', key_filename=os.path.expanduser('~/.ssh/test'))
File "build/bdist.macosx-10.3-fat/egg/paramiko/client.py", line 291, in connect
File "<string>", line 1, in connect
socket.error: [Errno 61] Connection refused
I can ssh in fine manually using the same key file and user. Has anyone run into issues using Paramiko? My full code is below. Thanks.
import boto.ec2, time, paramiko, os

# Connect to the us-west-1 region
ec2 = boto.ec2.regions()[3].connect()

image_id = 'ami-ad7e2ee8'
image_name = 'Ubuntu 10.10 (Maverick Meerkat) 32-bit EBS'

new_reservation = ec2.run_instances(
    image_id=image_id,
    key_name='test',
    security_groups=['web'])
instance = new_reservation.instances[0]

print "Spinning up instance for '%s' - %s. Waiting for it to boot up." % (image_id, image_name)

while instance.state != 'running':
    print "."
    time.sleep(1)
    instance.update()

print "Instance is running, ip: %s" % instance.ip_address
print "Connecting to %s as user %s" % (instance.ip_address, 'ubuntu')

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(instance.ip_address, username='ubuntu', key_filename=os.path.expanduser('~/.ssh/test'))
stdin, stdout, stderr = ssh.exec_command('echo "TEST"')
print stdout.readlines()
ssh.close()
I seem to have figured this out by trial and error. Even though the instance status is "running" according to boto, there is a delay before it will actually allow an SSH connection. Adding a "time.sleep(30)" before the "ssh.connect(...)" seems to do the trick for me, though this may vary.
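A fixed sleep works but is fragile; a common alternative (a sketch, not part of the original answer, reusing instance and the key path from the question) is to retry the connection until sshd answers:

import os
import time
import socket
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# sshd usually comes up a little after the instance state flips to 'running',
# so keep retrying for up to ~2 minutes before giving up
for attempt in range(24):
    try:
        ssh.connect(instance.ip_address, username='ubuntu',
                    key_filename=os.path.expanduser('~/.ssh/test'), timeout=5)
        break
    except (socket.error, paramiko.SSHException):
        time.sleep(5)
else:
    raise RuntimeError("Could not SSH to %s" % instance.ip_address)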
The way to check whether SSH is available is to make sure both of its status checks pass; on the web UI this is the "2/2 checks passed" status.
And using boto3 (the original question used boto but it was 5 years ago), we can do:
session = boto3.Session(...)
client = session.client('ec2')

res = client.run_instances(...)  # launch instance
instance_id = res['Instances'][0]['InstanceId']

while True:
    statuses = client.describe_instance_status(InstanceIds=[instance_id])
    status = statuses['InstanceStatuses'][0]
    if status['InstanceStatus']['Status'] == 'ok' \
            and status['SystemStatus']['Status'] == 'ok':
        break
    print '.'
    time.sleep(5)

print "Instance is running, you are ready to ssh to it"
Why not use boto.manage.cmdshell instead?
cmd = boto.manage.cmdshell.sshclient_from_instance(instance,
                                                   key_path,
                                                   user_name='ec2_user')
(code taken from line 152 in ec2_launch_instance.py)
For available cmdshell commands have a look at the SSHClient class from cmdshell.py.
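For illustration, a typical use of the client returned above -- boto's legacy cmdshell SSHClient exposes a run() method that returns the exit status plus the captured stdout and stderr (treat this as a sketch):

# 'cmd' is the SSHClient returned by sshclient_from_instance above
status, stdout, stderr = cmd.run('uname -a')
print status
print stdout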
I recently ran into this issue. The "correct" way would be to initiate a close() first and then reopen the connection. However on older versions, close() was broken.
With this version or later, it should be fixed:
https://github.com/boto/boto/pull/412
"Proper" method:
newinstance = image.run(min_count=instancenum, max_count=instancenum, key_name=keypair, security_groups=security_group, user_data=instancename, instance_type=instancetype, placement=zone)
time.sleep(2)
newinstance.instances[0].add_tag('Name', instancename)

print "Waiting for public_dns_name..."
counter = 0
while counter < 70:
    time.sleep(1)
    conn.close()
    conn = boto.ec2.connection.EC2Connection(ec2auth.access_key, ec2auth.private_key)
    startedinstance = conn.get_all_instances(instance_ids=str(newinstance.instances[0].id))[0]
    counter = counter + 1
    if str(startedinstance.instances[0].state) == "running":
        break
    if counter == 69:
        print "Timed out waiting for instance to start."
print "Added: " + startedinstance.instances[0].tags['Name'] + " " + startedinstance.instances[0].public_dns_name
I recently viewed this code and I have a suggestion:
Instead of running a while loop to check whether the instance is running or not, you can try wait_until_running().
Following is the sample code...
client = boto3.resource(
    'ec2',
    region_name="us-east-1"
)

Instance_ID = "<your Instance_ID>"
instance = client.Instance(Instance_ID)
instance.start()
instance.wait_until_running()
After that, proceed with the code for the SSH connection.
