I am attempting to use Python to connect to a server and upload some files from my local directory to /var/www/html, but every time I try to do this I get this error:
Error: ftplib.error_perm: 553 Could not create file.
I have already done a chown and a chmod -R 777 on the path. I am using vsftpd and have already set write_enable. Does anyone have any ideas?
Code:
from ftplib import FTP
import os

ftp = FTP('ipaddress')
ftp.login(user='user', passwd='user')
ftp.cwd('/var/www/html')

for root, dirs, files in os.walk(path):
    for fname in files:
        full_fname = os.path.join(root, fname)
        ftp.storbinary('STOR' + fname, open(full_fname, 'rb'))
I had a similar problem also getting the error 553: Could not create file. What (update: partially) solved it for me was changing this line from:
ftp.storbinary('STOR' + fname, open(full_fname, 'rb'))
to:
ftp.storbinary('STOR ' + '/' + fname, open(full_fname, 'rb'))
Notice that there is a space just after the 'STOR ', and I added a forward slash ('/') just before the filename to indicate that I'd like the file stored in the FTP root directory.
UPDATE: [2016-06-03]
Actually this only solved part of the problem. I realized later that it was a permissions problem. The FTP root directory allowed writing by the FTP user, but I had manually created folders within this directory using another user, so the new directories did not allow the FTP user to write to them.
Possible solutions:
1. Change the permissions on the directories such that the FTP user is the owner of these directories, or is at least able to read and write to them.
2. Create the directories using the ftp.mkd(dir_name) function, then change directory using the ftp.cwd(dir_name) function, and then use the appropriate STOR function (storlines or storbinary) to write the file to the current directory (see the sketch below).
As far as my understanding goes, the STOR command seems to take only a filename as a parameter (not a file path); that's why you need to make sure you are in the correct working directory before using the STOR function. (Remember the space after the STOR command.)
ftp.storbinary('STOR ' + fname, open(full_fname, 'rb'))
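A minimal sketch of that mkd/cwd approach, reusing the connection details from the question (the directory and file names below are just placeholders):
from ftplib import FTP

ftp = FTP('ipaddress')
ftp.login(user='user', passwd='user')

remote_dir = 'uploads'      # hypothetical remote directory
local_file = 'report.txt'   # hypothetical local file in the current directory

# Create the remote directory once, then make it the working directory
if remote_dir not in ftp.nlst():
    ftp.mkd(remote_dir)
ftp.cwd(remote_dir)

# STOR gets just the filename; note the space after 'STOR'
with open(local_file, 'rb') as f:
    ftp.storbinary('STOR ' + local_file, f)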
Does path == '/var/www/html'? That's a local path. You need an FTP path.
The local path /var/www/html is not generally accessible over FTP. When you connect to the FTP server, the file system presented to you often begins at your user's home directory, /home/user.
Since it sounds like you're running the ftp server (vsftpd) on the remote machine, the simplest solution might be something like:
user@server:~$ ln -s /var/www/html /home/user/html
Then you could call ftp.cwd('html'), and ftp.nlst() to get the remote directory listing, and navigate it from there.
Also, don't forget to put a space character in the 'STOR' string (should be 'STOR ').
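Putting the symlink and the space fix together, an untested sketch (reusing fname and full_fname from the question's loop):
ftp = FTP('ipaddress')
ftp.login(user='user', passwd='user')

ftp.cwd('html')     # follows the /home/user/html -> /var/www/html symlink
print(ftp.nlst())   # list what is already in the web root

with open(full_fname, 'rb') as f:
    ftp.storbinary('STOR ' + fname, f)   # note the space after STOR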
Best of luck!
I'm sure at this point you have found a solution, but I just stumbled across this thread while I was looking for a solution. I ended up using the following:
from os import path

# Handles FTP transfer to server
def upload(ftp, dir, file):
    # Test if directory exists. If not, create it
    if dir.split('/')[-1] not in ftp.nlst('/'.join(dir.split('/')[:-1])):
        print("Creating directory: " + dir)
        ftp.mkd(dir)
    # Check if file extension is text format
    ext = path.splitext(file)[1]
    if ext.lower() in (".txt", ".htm", ".html"):
        ftp.storlines("STOR " + dir + '/' + file, open(dir + '/' + file, "rb"))
    else:
        ftp.storbinary("STOR " + dir + '/' + file, open(dir + '/' + file, "rb"), 1024)
Related
I'm trying to upload a whole folder to Dropbox, but only the files get uploaded. Do I have to create the folders programmatically, or is there a simple way to upload the whole folder? Thanks
import os
import dropbox

access_token = '***********************'
dbx = dropbox.Dropbox(access_token)

dropbox_destination = '/live'
local_directory = 'C:/Users/xoxo/Desktop/man'

for root, dirs, files in os.walk(local_directory):
    for filename in files:
        local_path = root + '/' + filename
        print("local_path", local_path)
        relative_path = os.path.relpath(local_path, local_directory)
        dropbox_path = dropbox_destination + '/' + relative_path

        # upload the file
        with open(local_path, 'rb') as f:
            dbx.files_upload(f.read(), dropbox_path)
error:
dropbox.exceptions.ApiError: ApiError('xxf84e5axxf86', UploadError('path', UploadWriteFailed(reason=WriteError('disallowed_name', None), upload_session_id='xxxxxxxxxxx')))
[Cross-linking for reference: https://www.dropboxforum.com/t5/API-support/UploadWriteFailed-reason-WriteError-disallowed-name-None/td-p/245765 ]
There are a few things to note here:
In your sample, you're only iterating over files, so you won't get dirs uploaded/created.
The /2/files/upload endpoint only accepts file uploads, not folders. If you want to create folders, use /2/files/create_folder_v2. You don't need to explicitly create folders for any parent folders in the path for files you upload via /2/files/upload though. Those will be automatically created with the upload.
Per the /2/files/upload documentation, disallowed_name means:
Dropbox will not save the file or folder because of its name.
So, it's likely you're getting this error because you're trying to upload an ignored file, e.g., ".DS_Store". You can find more information on those in this help article under "Ignored files".
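For example, one way to skip such entries before uploading might look like this (the few ignored names listed here are illustrative, not exhaustive; see that help article for the full list):
# Names Dropbox refuses to sync; illustrative subset only
IGNORED_NAMES = {'.ds_store', 'desktop.ini', 'thumbs.db', '.dropbox'}

for root, dirs, files in os.walk(local_directory):
    for filename in files:
        if filename.lower() in IGNORED_NAMES:
            print("skipping ignored file:", filename)
            continue
        local_path = os.path.join(root, filename)
        relative_path = os.path.relpath(local_path, local_directory)
        # Dropbox paths always use forward slashes
        dropbox_path = dropbox_destination + '/' + relative_path.replace(os.sep, '/')
        with open(local_path, 'rb') as f:
            dbx.files_upload(f.read(), dropbox_path)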
I'm trying to fetch from SFTP with the following structure:
main_dir/
dir1/
file1
dir2/
file2
I tried to achieve this with commands below:
sftp.get_r(main_path + dirpath, local_path)
or
sftp.get_d(main_path + dirpath, local_path)
The local path is like d:/grabbed_files/target_dir, and the remote is like /data/some_dir/target_dir.
With get_r I get a FileNotFound exception. With get_d I get an empty directory (when the target dir contains files rather than dirs, it works fine).
I'm totally sure the directory exists at this path. What am I doing wrong?
This one works for me, but when you download a directory it creates the full path locally.
pysftp.Connection.get_r()
I also created simple download and upload methods:
import pathlib

def download_r(sftp, outbox):
    tmp_dir = helpers.create_tmpdir()   # helpers and logger come from my own project
    assert sftp.isdir(str(outbox))
    assert pathlib.Path(tmp_dir).is_dir()
    sftp.get_r(str(outbox), str(tmp_dir))
    tmp_dir = tmp_dir / outbox
    return tmp_dir

def upload_r(sftp, inbox, files):
    assert sftp.isdir(str(inbox))
    if pathlib.Path(files).is_dir():
        logger.debug(list(files.iterdir()))
        sftp.put_r(str(files), str(inbox))
    else:
        logger.debug('No files here.')
I didn't understand why it doesn't work, so I ended up with my own recursive solution:
def grab_dir_rec(sftp, dirpath):
    local_path = target_path + dirpath
    full_path = main_path + dirpath
    if not sftp.exists(full_path):
        return
    if not os.path.exists(local_path):
        os.makedirs(local_path)
    dirlist = sftp.listdir(remotepath=full_path)
    for i in dirlist:
        if sftp.isdir(full_path + '/' + i):
            grab_dir_rec(sftp, dirpath + '/' + i)
        else:
            grab_file(sftp, dirpath + '/' + i)
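grab_file is not shown in that answer; presumably it is just a thin wrapper around sftp.get, along these lines (using the same target_path and main_path globals as above):
def grab_file(sftp, filepath):
    # Mirror a single remote file under the local target directory
    local_path = target_path + filepath
    full_path = main_path + filepath
    sftp.get(full_path, localpath=local_path)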
If you want a context-manager wrapper around pysftp that does this for you, here is a solution that is even less code (after you copy/paste the GitHub gist) and ends up looking like the following when used:
path = "sftp://user:password#test.com/path/to/file.txt"
# Read a file
with open_sftp(path) as f:
s = f.read()
print s
# Write to a file
with open_sftp(path, mode='w') as f:
f.write("Some content.")
The (fuller) example: http://www.prschmid.com/2016/09/simple-opensftp-context-manager-for.html
This context manager happens to have auto-retry logic baked in, in case you can't connect the first time around (which happens more often than you'd expect in a production environment...).
Oh, and yes, this assumes you are only getting one file per connection as it will auto-close the ftp connection.
The context manager gist for open_sftp: https://gist.github.com/prschmid/80a19c22012e42d4d6e791c1e4eb8515
I have to copy a zipped folder using ftplib as follows:
ftp = FTP('ip')
ftp.login(user='user', passwd = 'pass')
filename= "D:/sample.zip"
ftp.storlines("STOR " + os.path.basename(filename), open(filename,"r"))
On the remote side the sample archive does get copied, but it is just 1 KB in size, while its actual size is 2963 KB. So could you help me out: how should I copy the complete zipped folder to the remote machine?
Firstly, use storbinary() and not storlines(). The latter is for ASCII files.
And since zip files are binary, the file should be opened in binary mode:
ftp.storbinary("STOR " + os.path.basename(filename), open(filename, "rb"))
I am using Python to connect to an SFTP server; I want to retrieve an XML file from there and place it on my local system. Below is the code:
import paramiko
sftpURL = 'sftp.somewebsite.com'
sftpUser = 'user_name'
sftpPass = 'password'
ssh = paramiko.SSHClient()
# automatically add keys without requiring human intervention
ssh.set_missing_host_key_policy( paramiko.AutoAddPolicy() )
ssh.connect(sftpURL, username=sftpUser, password=sftpPass)
ftp = ssh.open_sftp()
files = ftp.listdir()
print files
The connection is successful. Now I want to see all the folders and all the files, and I need to enter the required folder to retrieve the XML file from there.
Ultimately, my intention is to view all the folders and files after connecting to the SFTP server.
In the above code I used ftp.listdir(), which gave output something like below:
['.bash_logout', '.bash_profile', '.bashrc', '.mozilla', 'testfile_248.xml']
I want to know whether these are the only files present.
Is the command I used above right for viewing the folders too?
What is the command to view all the folders and files?
The SFTPClient.listdir returns everything, files and folders.
If there were any folders, to tell them apart from the files, use SFTPClient.listdir_attr instead. It returns a collection of SFTPAttributes objects.
from stat import S_ISDIR, S_ISREG

sftp = ssh.open_sftp()
for entry in sftp.listdir_attr(remotedir):
    mode = entry.st_mode
    if S_ISDIR(mode):
        print(entry.filename + " is folder")
    elif S_ISREG(mode):
        print(entry.filename + " is file")
The accepted answer by @Oz123 is inefficient. SFTPClient.listdir internally calls SFTPClient.listdir_attr and throws most information away, returning file and folder names only. The answer then uselessly and laboriously re-retrieves all that data by calling SFTPClient.lstat for each file.
See also How to fetch sizes of all SFTP files in a directory through Paramiko.
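For example, a single listdir_attr call already returns sizes and timestamps along with the names, so nothing has to be re-fetched per file (a rough sketch, reusing sftp and remotedir from above):
for entry in sftp.listdir_attr(remotedir):
    # st_size and st_mtime arrive with the directory listing itself
    print(entry.filename, entry.st_size, entry.st_mtime)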
Obligatory warning: Do not use AutoAddPolicy – you are losing protection against MITM attacks by doing so. For a correct solution, see Paramiko "Unknown Server".
One quick solution is to examine the output of lstat of each object in ftp.listdir().
Here is how you can list all the directories.
>>> for i in ftp.listdir():
...     lstatout = str(ftp.lstat(i)).split()[0]
...     if 'd' in lstatout: print i, 'is a directory'
...
Files are the opposite search:
>>> for i in ftp.listdir():
...     lstatout = str(ftp.lstat(i)).split()[0]
...     if 'd' not in lstatout: print i, 'is a file'
...
Here is a solution I have come up with, based on https://stackoverflow.com/a/59109706 . My solution gives pretty output.
Update: I have modified it slightly to incorporate Martin's suggestions. Now my code is considerably faster than my initial version, which used isdir and listdir.
from pathlib import Path
from stat import S_ISDIR

# prefix components:
space = '    '
branch = '│   '
# pointers:
tee = '├── '
last = '└── '

def stringpath(path):
    # just a helper to get string of PosixPath
    return str(path)

def tree_sftp(sftp, path='.', parent='/', prefix=''):
    """
    Loop through files to print it out
    for file in tree_sftp(sftp):
        print(file)
    """
    fullpath = Path(parent, path)
    strpath = stringpath(fullpath)
    dirs = sftp.listdir_attr(strpath)
    pointers = [tee] * (len(dirs) - 1) + [last]
    pdirs = [Path(fullpath, d.filename) for d in dirs]
    sdirs = [stringpath(path) for path in pdirs]
    for pointer, sd, d in zip(pointers, sdirs, dirs):
        yield prefix + pointer + d.filename
        if S_ISDIR(d.st_mode):
            extension = branch if pointer == tee else space
            yield from tree_sftp(sftp, sd, prefix=prefix + extension)
You can try it out with pysftp like this:
import pysftp

with pysftp.Connection(HOSTNAME, username=USERNAME, password=PASSWORD) as sftp:
    for file in tree_sftp(sftp):
        print(file)
Let me know if it works for you.
I'm having trouble creating a directory and then opening/creating/writing a file in the specified directory. The reason seems unclear to me. I'm using os.mkdir() and the following code:
path = chap_name
print "Path : " + chap_path  # For debugging purposes
if not os.path.exists(path):
    os.mkdir(path)
temp_file = open(path + '/' + img_alt + '.jpg', 'w')
temp_file.write(buff)
temp_file.close()
print " ... Done"
I get the error
OSError: [Errno 2] No such file or directory: 'Some Path Name'
Path is of the form 'Folder Name with un-escaped spaces'
What am I doing wrong here?
Update: I tried running the code without creating the directory
path=chap_name
print "Path : "+chap_path #For debugging purposes
temp_file=open(img_alt+'.jpg','w')
temp_file.write(buff)
temp_file.close()
print " ... Done"
Still get an error. Confused further.
Update 2: The problem seems to be img_alt; it contains a '/' in some cases, which is causing the trouble.
So I need to handle the '/'.
Is there any way to escape the '/', or is deletion the only option?
import os

path = chap_name
if not os.path.exists(path):
    os.makedirs(path)

filename = img_alt + '.jpg'
with open(os.path.join(path, filename), 'wb') as temp_file:
    temp_file.write(buff)
The key point is to use os.makedirs in place of os.mkdir. It is recursive, i.e. it generates all intermediate directories. See http://docs.python.org/library/os.html
Open the file in binary mode as you are storing binary (jpeg) data.
In response to Update 2, if img_alt sometimes has '/' in it:
img_alt = os.path.basename(img_alt)
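For example, os.path.basename simply drops everything up to and including the last path separator (with a made-up path):
>>> import os.path
>>> os.path.basename('Chapter 1/cover.jpg')
'cover.jpg'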
import os

os.mkdir('directory name')    # creates a directory
os.mknod('file name')         # creates an empty file
os.system('touch filename')   # another way to create a file, by running a Unix command from the os module