Remote directory doesn't exist error when uploading - python

I get exceptions.IOError when I run python mysftpclient.py. I am new to Python; can you tell me what is causing this error?
Error
*** Caught exception: <type 'exceptions.IOError'>: [Errno 2] Directory does not exist.
============================================================
Total files copied: 0
All operations complete!
============================================================
Code
def put_dir(self, source, target):
    ''' Uploads the contents of the source directory to the target path. The
        target directory needs to exist. All subdirectories in source are
        created under target.
    '''
    for item in os.listdir(source):
        if os.path.isfile(os.path.join(source, item)):
            self.put(os.path.join(source, item), '%s/%s' % (target, item))
        else:
            my_mkdir(self, '%s/%s' % (target, item), 551, ignore_existing=True)
            put_dir(self, os.path.join(source, item), '%s/%s' % (target, item))
def my_mkdir(self, sftp, path, mode=511, ignore_existing=False):
    ''' Augments mkdir by adding an option to not fail if the folder exists '''
    try:
        sftp.mkdir(path, mode)
    except IOError:
        if ignore_existing:
            pass
        else:
            raise
Update
I think the line below is causing the issue.
# now, connect and use paramiko Transport to negotiate SSH2 across the connection
try:
    print 'Establishing SSH connection to:', hostname, port, '...'
    t = paramiko.Transport((hostname, port))
    t.connect()
    agent_auth(t, username)
    if not t.is_authenticated():
        print 'RSA key auth failed! Trying password login...'
        t.connect(username=username, password=password, hostkey=hostkey)
    else:
        sftp = t.open_session()
    sftp = paramiko.SFTPClient.from_transport(t)
    parent = os.path.split(dir_local)[1]
    try:
        sftp.mkdir(parent)
        sftp.chdir(parent)
    except IOError, e:
        print '(assuming ', dir_remote, 'exists)', e
    put_dir(sftp, dir_local, dir_remote)
except Exception, e:
    print '*** Caught exception: %s: %s' % (e.__class__, e)
    try:
        t.close()
    except:
        pass
print '=' * 60
print 'Total files copied:', files_copied
print 'All operations complete!'
print '=' * 60
I don't know why, but somehow I am getting:
(assuming /NewFolder/201410181636099007 exists) Unable to create the file/directory

I think "sftp.mkdir(parent)" is throwing this exception, and control then jumps straight to the except block, which prints the error you mention:
(assuming /NewFolder/201410181636099007 exists) Unable to create the file/directory
Your script then calls put_dir, which probably expects the working path to have been set in the paramiko SFTP client by sftp.chdir(parent); but that never happened, because the exception was raised first.
I think you can move the "sftp.chdir(parent)" line to just before put_dir, since you want the path set in any case.
For example,
try:
    sftp.mkdir(parent)
except IOError, e:
    print '(assuming ', dir_remote, 'exists)', e
sftp.chdir(parent)
put_dir(sftp, dir_local, dir_remote)
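Independently of the chdir ordering, note that the recursive helpers in the question mix up their arguments (my_mkdir is called with self where its signature expects an sftp client). A sketch of a consistent version, assuming `sftp` is a connected paramiko SFTPClient and the target directory already exists:

```python
import os
import posixpath


def put_dir(sftp, source, target):
    # Recursively upload the contents of local `source` into remote `target`.
    # Remote paths are joined with posixpath because SFTP always uses '/'.
    for item in os.listdir(source):
        src = os.path.join(source, item)
        dst = posixpath.join(target, item)
        if os.path.isfile(src):
            sftp.put(src, dst)
        else:
            try:
                sftp.mkdir(dst)
            except IOError:
                pass  # directory probably exists already
            put_dir(sftp, src, dst)
```

Passing the client explicitly keeps the helpers usable as plain functions, without pretending to be methods.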

Related

Using Paramiko in Python check if directory can be deleted

I found out a way to delete a remote directory using Paramiko based on:
How to delete all files in directory on remote SFTP server in Python?
However, if a directory does not have the required permissions, it fails. I have moved the deletion into an exception block. Still, is there a way to check whether a directory has the permissions needed to delete it?
Currently, what I notice is that in a recursive delete, if one of the sub-directories lacks write permission, the whole thing fails to delete; I want the deletion to continue and skip the one missing the necessary permission. But if the root directory itself doesn't have valid permissions, I want to throw an exception and bail out with an appropriate exit code.
How can I do this?
def rmtree(sftp, remotepath, level=0):
    try:
        for f in sftp.listdir_attr(remotepath):
            rpath = posixpath.join(remotepath, f.filename)
            if stat.S_ISDIR(f.st_mode):
                rmtree(sftp, rpath, level=(level + 1))
            else:
                try:
                    sftp.remove(rpath)
                except Exception as e:
                    print("Error: Failed to remove: {0}".format(rpath))
                    print(e)
    except IOError as io:
        print("Error: Access denied. Do not have permissions to remove: {0} -- {1}".format(remotepath, level))
        print(io)
        if level == 0:
            sys.exit(1)
    except Exception as e:
        print("Error: Failed to delete: {0} -- {1}".format(remotepath, level))
        print(e)
        if level == 0:
            sys.exit(1)
    if level <= 2:
        print('removing %s%s' % (' ' * level, remotepath))
    try:
        sftp.rmdir(remotepath)
    except IOError as io:
        print("Error: Access denied for deleting. Invalid permission")
        print(io)
    except Exception as e:
        print("Error: failed while deleting: {0}".format(remotepath))
        print(e)
    return
It's not possible to "check" actual permissions for a specific operation with the SFTP protocol. The SFTP API neither provides such functionality nor enough information for you to decide on your own.
See also How to check for read permission using JSch with SFTP protocol?
You would have to use another API – e.g. execute a test in a shell using SSHClient.exec_command. For example, you can use the test command:
test -w /parent/directory
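A sketch of that approach (Python 3; the helper names `writable_test_command` and `remote_is_writable` are my own, and `ssh` is assumed to be an already-connected `paramiko.SSHClient`):

```python
import shlex


def writable_test_command(path):
    # Build a shell command whose exit status is 0 iff `path` is writable
    # for the logged-in user; quoting guards against spaces in the path.
    return "test -w {}".format(shlex.quote(path))


def remote_is_writable(ssh, path):
    # Run the check over the existing SSH session and read the exit status.
    stdin, stdout, stderr = ssh.exec_command(writable_test_command(path))
    return stdout.channel.recv_exit_status() == 0
```

You could call remote_is_writable(ssh, remotepath) on the root directory before starting rmtree, and bail out early if it returns False.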

How to print long listing for files in a remote server using pysftp in Python2.7

I have a script which connects to a remote server using pysftp and performs all basic functions such as put, get, and listing files on the remote server (basic operations between the remote and local machine). However, the script doesn't show the long listing of files in a folder on the remote machine; instead it prints the long listing of files in the current path on the local machine. I found this strange and have been trying all possible solutions such as cwd, cd, chdir etc. Please find the relevant portion of the code below and help me resolve the issue.
if (command == 'LIST'):
    print "Script will start listing files"
    try:
        s = pysftp.Connection('ip', username='user', password='pwd')
    except Exception, e:
        print e
        logfile.write("Unable to connect to FTP Server: Error is-->" + "\n")
        sys.exit()
    try:
        s.cwd('remote_path')
        print(s.pwd)
    except Exception, e:
        print e
        logfile.write("Unable to perform cwd:" + "\n")
        sys.exit()
    try:
        print(s.pwd)
        print(subprocess.check_output(["ls"]))
    except Exception, e:
        print "Unable to perform listing of files in Remote Directory"
        s.close()
        sys.exit()
Thanks and regards,
Shreeram
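The subprocess.check_output(["ls"]) call runs ls on the local machine, which is why the local listing appears. One way to get a long listing of the remote directory instead (a sketch; it relies on the SFTPAttributes objects returned by listdir_attr() stringifying to an `ls -l` style line, which paramiko's do):

```python
def remote_long_listing(sftp, path='.'):
    # `sftp` is assumed to be a connected pysftp.Connection (or a paramiko
    # SFTPClient); listdir_attr() yields SFTPAttributes objects whose str()
    # is already formatted like a line of `ls -l`.
    return [str(attr) for attr in sftp.listdir_attr(path)]
```

After s.cwd('remote_path'), printing remote_long_listing(s) would list the remote directory rather than the local one.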

check if directory exists on remote machine before sftp

This is my function that copies a file from the local machine to a remote machine with paramiko, but it doesn't check whether the destination directory exists; it just continues copying and doesn't throw an error if the remote path doesn't exist.
def copyToServer(hostname, username, password, destPath, localPath):
    transport = paramiko.Transport((hostname, 22))
    transport.connect(username=username, password=password)
    sftp = paramiko.SFTPClient.from_transport(transport)
    sftp.put(localPath, destPath)
    sftp.close()
    transport.close()
I want to check whether the path on the remote machine exists, and throw an error if it doesn't.
Thanks in advance
In my opinion it's better to avoid exceptions, so unless you have lots of folders, this is a good option for you:
if folder_name not in self.sftp.listdir(path):
    sftp.mkdir(os.path.join(path, folder_name))
This will do
def copyToServer(hostname, username, password, destPath, localPath):
    transport = paramiko.Transport((hostname, 22))
    transport.connect(username=username, password=password)
    sftp = paramiko.SFTPClient.from_transport(transport)
    try:
        sftp.put(localPath, destPath)
        sftp.close()
        transport.close()
        print(" %s SUCCESS " % hostname)
        return True
    except Exception as e:
        try:
            filestat = sftp.stat(destPath)
            destPathExists = True
        except Exception as e:
            destPathExists = False
        if destPathExists == False:
            print(" %s FAILED - copying failed because directory on remote machine doesn't exist" % hostname)
            log.write("%s FAILED - copying failed because directory on remote machine doesn't exist\r\n" % hostname)
        else:
            print(" %s FAILED - copying failed" % hostname)
            log.write("%s FAILED - copying failed\r\n" % hostname)
        return False
You can use the chdir() method of SFTPClient. It checks if the remote path exists, raises error if not.
try:
    sftp.chdir(destPath)
except IOError as e:
    raise e
I would use the listdir method within SFTPClient. You will probably have to use this recursively to ensure the entire path is valid.
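Either way, the existence check can be wrapped in a small helper (a sketch; `sftp` is assumed to be a connected `paramiko.SFTPClient`, and stat() raising IOError is taken to mean the path is missing):

```python
def remote_path_exists(sftp, path):
    # stat() succeeds for both files and directories; an IOError
    # (FileNotFoundError in Python 3) signals that the path is absent.
    try:
        sftp.stat(path)
        return True
    except IOError:
        return False
```

copyToServer could call this on the destination's parent directory before sftp.put and raise its own error when it returns False.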

Launch instance script from Python and AWS cookbook does not create any instances

I am currently working my way through the Python and AWS cookbook from O'Reilly. I'm having a look at the launch-instance script, which I had working (which - funny story - I didn't think was working, till I clicked on another region in AWS and saw I had created around 50 instances while trying to get the script working... Lesson for people new to AWS: always set and know your default region).
Now, for some reason, the script seems to run without any errors, but in the interactive Python shell nothing prints out and no instances are created. I don't know what I have changed to stop this working. In some testing I have done (basically importing boto in interactive mode and connecting to EC2 from there), it seems to connect successfully, so I have no idea what is causing this.
The script is below, so if anyone can help or tell me any tests that I can do, that would be great:
import os
import time
import boto
import boto.manage.cmdshell

def launch_instance(ami="i-7ef58184",
                    instance_type="t1.micro",
                    key_name="paws",
                    key_extension=".pem",
                    key_dir="~/.ssh",
                    group_name="paws",
                    ssh_port="22",
                    cidr="0.0.0.0/0",
                    tag="paws",
                    user_data=None,
                    cmd_shell=True,
                    login_user="ec2-user",
                    ssh_passwd=None):
    cmd = None
    ec2 = boto.connect_ec2()  # Credentials are stored in /etc/boto.cfg
    ec2 = boto.connect_ec2(debug=2)
    try:
        key = ec2.get_all_key_pairs(keynames=[key_name])[0]
    except ec2.ResponseError, e:
        if e.code == 'InvalidKeyPair.NotFound':
            print 'Creating keypair %s' % key_name
            key = ec2.create_key_pair(key_name)
            key.save(key_dir)
        else:
            raise
        try:
            group = ec2.get_all_security_groups(groupnames=[group_name])[0]
        except ec2.ResponseError, e:
            if e.code == 'InvalidGroup.NotFound':
                print 'Creating security group %s' % group_name
                group = ec2.create_security_group(group_name,
                                                  'A group that allows SSH access')
            else:
                raise
            try:
                group.authorize('tcp', ssh_port, ssh_port, cidr)
            except ec2.ResponseError, e:
                if e.code == 'InvalidPermission.Duplicate':
                    print 'Security group %s already authorized' % group_name
                else:
                    raise
                reservation = ec2.run_instances(ami,
                                                key_name=key_name,
                                                security_groups=[group_name],
                                                instance_type=instance_type,
                                                user_data=user_data)
                instance = reservation.instances[0]
                print 'waiting for instance...'
                while instance.state != 'running':
                    time.sleep(30)
                    instance.update()
                print 'Instance is now running'
                print 'Instance IP is %s' % instance.ip_address
                instance.add_tag(tag)
                if cmd_shell:
                    key_path = os.path.join(os.path.expanduser(key_dir),
                                            key_name + key_extension)
                    cmd = boto.manage.sshclient_from_instance(instance,
                                                              key_path,
                                                              username=login_user)
                return (instance, cmd)
Can anyone help?
The error is that you have indented all the code that should run after keypair creation inside the except block of the keypair lookup. Since you have already created the keys and they are available, the exception is not thrown and the entire except block is skipped. You need to dedent the following code by 4 spaces.
try:
    key = ec2.get_all_key_pairs(keynames=[key_name])[0]
except ec2.ResponseError, e:
    if e.code == 'InvalidKeyPair.NotFound':
        print 'Creating keypair %s' % key_name
        key = ec2.create_key_pair(key_name)
        key.save(key_dir)
    else:
        raise
### this should be at this level
try:
    ...
The same applies to the security group handling - you want to continue whether or not an exception was raised, thus dedent the code as follows:
try:
    group.authorize('tcp', ssh_port, ssh_port, cidr)
except ec2.ResponseError, e:
    if e.code == 'InvalidPermission.Duplicate':
        print 'Security group %s already authorized' % group_name
    else:
        raise
### from here on at this level
reservation = ec2.run_instances(ami, ....

Paramiko SFTP - Avoid having to specify full local filename?

I have some Python code that uses Paramiko to grab build files from a remote server:
def setup_sftp_session(self, host='server.com', port=22, username='puppy'):
    self.transport = paramiko.Transport((host, port))
    privatekeyfile = os.path.expanduser('~/.ssh/id_dsa')
    try:
        ssh_key = paramiko.DSSKey.from_private_key_file(privatekeyfile)
    except IOError, e:
        self.logger.error('Unable to find SSH keyfile: %s' % str(e))
        sys.exit(1)
    try:
        self.transport.connect(username=username, pkey=ssh_key)
    except paramiko.AuthenticationException, e:
        self.logger.error("Unable to logon - are you sure you've added the pubkey to the server?: %s" % str(e))
        sys.exit(1)
    self.sftp = paramiko.SFTPClient.from_transport(self.transport)
    self.sftp.chdir('/some/location/buildfiles')

def get_file(self, remote_filename):
    try:
        self.sftp.get(remote_filename, 'I just want to save it in the local cwd')
    except IOError, e:
        self.logger.error('Unable to copy remote file %s' % str(e))

def close_sftp_session(self):
    self.sftp.close()
    self.transport.close()
I'd like to retrieve each file, and deposit it in the current local working directory.
However, Paramiko doesn't seem to have an option for this - you need to specify the full local destination. You can't even specify a directory (e.g. "./", or even "/home/victorhooi/files") - you need the full path including filename.
Is there any way around this? It'll be annoying if we have to specify the local filename as well, instead of just copying the remote one.
Also - the way I'm handling exceptions in setup_sftp_session, with exit(1) - is that a good practice, or is there a better way?
Cheers,
Victor
You have to build the local destination explicitly and pass it as the second argument, e.g.:
os.path.join(os.getcwd(), remote_filename)
Calling exit() in a function is not a good idea. You may want to reuse the code and take some action in case of an exception; if you keep the exit() call, you are lost.
I suggest modifying the function so that it returns True on success and False otherwise. Then the caller can decide what to do.
Another approach is not to catch the exceptions at all. Then the caller has to handle them and gets the full information (including the stack trace) about the circumstances of the failure.
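To keep the remote base name while saving into the current working directory, the local target can be derived before calling get() (a sketch; `local_target` is my own helper name):

```python
import os
import posixpath


def local_target(remote_filename, local_dir=None):
    # SFTP paths use '/' regardless of the local OS, so the base name is
    # taken with posixpath; the result lands in local_dir (default: cwd).
    local_dir = local_dir if local_dir is not None else os.getcwd()
    return os.path.join(local_dir, posixpath.basename(remote_filename))
```

get_file would then call self.sftp.get(remote_filename, local_target(remote_filename)) instead of spelling out the full local path by hand.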
