I am trying to access files stored on my Dropbox using the official Dropbox SDK for Python. I tried a few ways of specifying the directory whose contents I want listed, based on a script from this link: https://practicaldatascience.co.uk/data-science/how-to-use-the-dropbox-api-with-python. Following the instructions on that site, I created an app, generated a Dropbox access token (which produced 'long-gibberish'), and gave myself read permissions for Files and Folders.
When I log in to Dropbox through the website, the folder structure for the folder I want to access looks like this: Folder/SubFolder/SubSubFolder.
import dropbox
from dropbox.exceptions import AuthError
import pandas as pd

DROPBOX_ACCESS_TOKEN = 'long-gibberish'

def dropbox_connect():
    """Create a connection to Dropbox."""
    try:
        dbx = dropbox.Dropbox(DROPBOX_ACCESS_TOKEN)
    except AuthError as e:
        print('Error connecting to Dropbox with access token: ' + str(e))
    return dbx
def dropbox_list_files(path):
    """Return a pandas DataFrame of files in a given Dropbox folder path in the Apps directory."""
    dbx = dropbox_connect()
    try:
        files = dbx.files_list_folder(path).entries
        files_list = []
        for file in files:
            if isinstance(file, dropbox.files.FileMetadata):
                metadata = {
                    'name': file.name,
                    'path_display': file.path_display,
                    'client_modified': file.client_modified,
                    'server_modified': file.server_modified
                }
                files_list.append(metadata)
        df = pd.DataFrame.from_records(files_list)
        return df.sort_values(by='server_modified', ascending=False)
    except Exception as e:
        print('Error getting list of files from Dropbox: ' + str(e))
I get the following error when calling the function:
dropbox_list_files('Folder/SubFolder/SubSubFolder')
Error getting list of files from Dropbox: ApiError('short-gibberish', ListFolderError('path', LookupError('not_found', None)))
I would like to get some help on how to set the right path.
After locating a Dropbox for Python tutorial, I figured out that the directory structure can be retrieved in the following manner:
for entry in dbx.files_list_folder('').entries:
    print(entry.name)
But before that, an instance of the Dropbox object must be created:
dbx = dropbox.Dropbox('YOUR_ACCESS_TOKEN')
which, with the script above, is achieved by calling dropbox_connect() and assigning the result to dbx:
dbx = dropbox_connect()
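Once dbx is connected, a small helper can walk the folder tree and print every path the API actually sees, which makes it easy to spot the exact string to pass to files_list_folder. Dropbox API paths start with a leading slash (and, for an app with app-folder access, are relative to the app folder), so the call would look like dropbox_list_files('/Folder/SubFolder/SubSubFolder'). A minimal sketch, assuming the dropbox_connect function from above (pagination via files_list_folder_continue is omitted for brevity):

def print_folder_tree(dbx, path=''):
    """Recursively print every file and folder path visible to the app."""
    result = dbx.files_list_folder(path)
    for entry in result.entries:
        print(entry.path_display)
        if isinstance(entry, dropbox.files.FolderMetadata):
            print_folder_tree(dbx, entry.path_lower)

dbx = dropbox_connect()
print_folder_tree(dbx)  # start at the root of the app's accessible scope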
I'm trying to pull a list of dictionaries from the Dropbox API for some images, to be represented as follows:
[{name: 'XYZ', url: 'THIS IS A URL'}]
I've used the following, which generates a list of file names and paths just fine.
path = ''
files_list = []

def dropbox_list_files(path):
    dbx = dropbox_connect()
    try:
        files = dbx.files_list_folder(path).entries
        for file in files:
            if isinstance(file, dropbox.files.FileMetadata):
                metadata = {
                    'name': file.name,
                    'path_lower': file.path_lower,
                }
                files_list.append(metadata)
    except Exception as e:
        print('Error getting list of files from Dropbox: ' + str(e))
But I'm stuck on creating the shareable links, as well as appending these to each of the relevant files in the list.
In the Dropbox API documentation there is the 'sharing_create_shared_link_with_settings' function, which looks like it can do this, but it requires a file path. I am not sure whether this is just a single path, so: A) how could I pass in an iterable of paths for all the files in sequence, and B) how would I append the links to the file entries for the purpose explained above?
Any help is much appreciated.
Once you have files_list populated, loop through it and pass each file path to the sharing_create_shared_link method. To do that, you would add something like this to your script to print a list of links:
for file in files_list:
    try:
        link = dbx.sharing_create_shared_link(file['path_lower'])
        print(link.url)
    except Exception as e:
        print(e)
Assuming you are using the Dropbox SDK, you will want to ensure you have the appropriate scopes set (e.g. sharing.write), otherwise you will encounter a permission error.
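To end up with the [{name: ..., url: ...}] structure from the question, the link can be attached to each metadata dict as you go. A minimal sketch, assuming the dropbox_connect helper from above (the dropbox_files_with_links name is just illustrative):

def dropbox_files_with_links(path=''):
    """Return [{'name': ..., 'url': ...}] for every file under path."""
    dbx = dropbox_connect()
    results = []
    for entry in dbx.files_list_folder(path).entries:
        if isinstance(entry, dropbox.files.FileMetadata):
            try:
                link = dbx.sharing_create_shared_link(entry.path_lower)
                results.append({'name': entry.name, 'url': link.url})
            except Exception as e:
                print('Could not create link for {}: {}'.format(entry.name, e))
    return results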
service = self.auth()
items = self.listFilesInFolder(downLoadFolderKey)
for item in items:
    file_id = item.get('id')
    file_name = item.get('name')
    request = service.files().get_media(fileId=file_id)
    fh = io.BytesIO()
    downloader = MediaIoBaseDownload(fh, request)
    done = False
    while done is False:
        status, done = downloader.next_chunk()
        print("Download %d%%." % int(status.progress() * 100) + file_name)
    filepath = fileDownPath + file_name
    with io.open(filepath, 'wb') as f:
        fh.seek(0)
        f.write(fh.read())
I am using Google Drive API v3.
I am trying to download a full directory, but the problem is that the directory itself contains folders, and when I try to run this bit of code, this error happens:
<HttpError 403 when requesting https://www.googleapis.com/drive/v3/files/1ssF0XD8pi6oh6DXB1prIJPWKMz9dggm2?alt=media returned "Only files with binary content can be downloaded. Use Export with Google Docs files.">
I figure the error is due to it trying to download the folders within the directory. But how do I download the full directory?
P.S. The directory changes, so I cannot hard-code file IDs and then download the files.
I believe your situation and goal are as follows.
By items = self.listFilesInFolder(downLoadFolderKey), you have already been able to retrieve the list of all files and folders, including the subfolders, under the specific folder.
items includes the mimeType of each file and folder.
In your issue, the error occurs when a folder is used in the loop.
You want to remove this error.
For this, how about this answer?
Modification point:
When the mimeType is included in the items of items = self.listFilesInFolder(downLoadFolderKey), folders can be detected by checking the mimeType. The mimeType of a folder is application/vnd.google-apps.folder.
From your script, I think that when a Google Docs file (Spreadsheet, Document, Slides and so on) is downloaded with the "Files: get" method, the same error occurs.
In order to download Google Docs files, it is required to use the "Files: export" method.
When the above points are reflected in your script, how about the following modification?
Modified script:
From:
request = service.files().get_media(fileId=file_id)
To:
file_mimeType = item.get('mimeType')
if file_mimeType == 'application/vnd.google-apps.folder':
    continue
request = service.files().export_media(fileId=file_id, mimeType='application/pdf') if 'application/vnd.google-apps' in file_mimeType else service.files().get_media(fileId=file_id)
In this modification, please first confirm again that the mimeType is included in the items of items = self.listFilesInFolder(downLoadFolderKey). With this, folders can be skipped, and both Google Docs files and non-Google Docs files can be downloaded based on the value of mimeType.
In this modification, as a sample, Google Docs files are downloaded as PDF files. If you want to change the output mimeType, please modify mimeType='application/pdf'.
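Putting the modification into the original loop, a minimal sketch could look like the following (service, items and fileDownPath are the names from the question's script, and exporting to PDF is just a sample choice):

import io
from googleapiclient.http import MediaIoBaseDownload

for item in items:
    file_id = item.get('id')
    file_name = item.get('name')
    file_mimeType = item.get('mimeType')
    # Skip folders entirely; they have no binary content to download.
    if file_mimeType == 'application/vnd.google-apps.folder':
        continue
    # Google Docs editor files must be exported; everything else can be fetched directly.
    if 'application/vnd.google-apps' in file_mimeType:
        request = service.files().export_media(fileId=file_id, mimeType='application/pdf')
        file_name += '.pdf'
    else:
        request = service.files().get_media(fileId=file_id)
    fh = io.BytesIO()
    downloader = MediaIoBaseDownload(fh, request)
    done = False
    while not done:
        status, done = downloader.next_chunk()
    with io.open(fileDownPath + file_name, 'wb') as f:
        f.write(fh.getvalue())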
References:
G Suite and Drive MIME Types
Files: get
Files: export
I am new to Python and I want to know if we can upload files from our local system to box.com.
Or, alternatively, can we use an intermediary like Jenkins to upload these files?
You can use the below boxsdk library code.
def upload_file_to_box(client, folder_id, filename):
    folder = client.folder(folder_id=folder_id)
    items = folder.get_items()
    for item in items:
        if item.name == filename:
            updated_file = client.file(item.id).update_contents(item.name)
            print('File "{0}" has been updated'.format(updated_file.name))
            return
    uploaded_file = folder.upload(filename)
    print('File "{0}" has been uploaded'.format(uploaded_file.name))
This checks a specific file name against all file names in the folder and uploads a new version if the file exists; otherwise it uploads a new file.
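For completeness, a minimal sketch of how the client could be created and the helper called, assuming you authenticate with a developer token (the CLIENT_ID, CLIENT_SECRET, DEVELOPER_TOKEN, folder id and file name below are placeholders):

from boxsdk import OAuth2, Client

# Placeholders: fill in your own app credentials and target folder.
oauth = OAuth2(
    client_id='CLIENT_ID',
    client_secret='CLIENT_SECRET',
    access_token='DEVELOPER_TOKEN',
)
client = Client(oauth)

upload_file_to_box(client, folder_id='0', filename='report.csv')  # '0' is the root folder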
You can also search for the filename inside a folder using the search API with the code below, but note that the search API has a time lag of 10 minutes or more.
items = client.search().query(query='"{}"'.format(filename), limit=100, ancestor_folders=[folder])
I don't know if I understood your question correctly, but there is a package for Python to connect to the Box platform through an API: http://opensource.box.com/box-python-sdk/tutorials/intro.html
I am working on creating a script to watch a folder, grab any new .zip files, and then upload them via FTP to a predetermined area. Right now FTP testing is being performed locally, since the environment isn't yet created.
The strategy I am taking is to first unzip into a local folder, then perform ftplib.storbinary on the file from the local folder to the FTP destination. The unzipping process appears to be working, but I am getting a "file does not exist" error, although I can see the file in the folder itself.
Also, is there any way to unzip directly into an FTP location? I haven't been able to find one, hence the approach I am taking.
Thanks. Local FTP info has been removed from the code. All relevant paths in this code will be changed, most likely to something dynamic, but for now this is a local environment.
extractZip2.py
import zipfile
import ftplib
import os
import logging
import time
from socket import error as socket_error

# Logging Setup
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger('__name__')

FTPaddress = ''
FTPusername = ''
FTPpassword = ''
ftp_destination_location = ''

path_to_watch = "C:/Users/206420055/Desktop/test2/"
before = dict([(f, None) for f in os.listdir(path_to_watch)])
temp_destination_location = "C:/Users/206420055/Desktop/temp/"

def unzip(fullPath, temporaryPath):
    with zipfile.ZipFile(fullPath, "r") as z:
        logger.info("Unzipping {0}".format(fullPath))
        z.extractall(temporaryPath)
        logger.info("Unzipped into local directory {0}".format(temp_destination_location))

def check_or_create_ftp(session, folder):
    """
    Checks to see if the necessary folder for the current tab is available.
    Creates the folder if not found, and enters it.
    """
    if folder not in session.nlst():
        logger.info('Directory for {0} does not exist, creating directory\n'.format(folder))
        session.mkd(folder)
    session.cwd(folder)

def check_or_create(temp_destination):
    """
    Checks to see if the local savepath exists. Creates the savepath if it does not exist.
    """
    if not os.path.exists(temp_destination):
        logger.info('Directory for %s does not exist, creating directory\n' % temp_destination)
        os.makedirs(str(temp_destination))

def transfer(address, username, password, filename, destination):
    logger.info("Creating Session")
    try:
        session = session_init(address, username, password, destination)
    except (socket_error, ftplib.error_perm) as e:
        logger.error(str(e))
        logger.error("Error in Session Init")
    else:
        try:
            logger.info("Sending File {0}".format(filename))
            send_file(session, filename)
        except (IOError, OSError, ftplib.error_perm) as e:
            logger.error(e)

def session_init(address, username, password, path):
    session = ftplib.FTP(address, username, password)
    check_or_create_ftp(session, path)
    logger.info("Session Established")
    return session

def send_file(session, filename):
    file = open(filename, 'rb')
    logger.info('Sending File : STOR ' + filename)
    session.storbinary('STOR ' + filename, file)
    file.close()

def delete_local_files(savepath, file):
    logger.info("Cleaning Up Folder {0}".format(savepath))
    os.remove(file)

while 1:
    time.sleep(5)
    after = dict([(f, None) for f in os.listdir(path_to_watch)])
    added = [f for f in after if not f in before]
    removed = [f for f in before if not f in after]
    if added: print "Added: ", ", ".join(added)
    before = after
    check_or_create(temp_destination_location)
    if added:
        for file in added:
            print file
            if file.endswith('.zip'):
                unzip(path_to_watch + file, temp_destination_location)
                temp_files = os.listdir(temp_destination_location)
                print("Temp Files {0}".format(temp_files))
                for tf in temp_files:
                    print("TF {0}".format(tf))
                    transfer(FTPaddress, FTPusername, FTPpassword, tf, ftp_destination_location)
                    #delete_local_files(temp_destination_location, tf)
            else:
                pass
Edit: adding an error screenshot.
In the screenshot, the file can be seen in the temp folder, but the console clearly shows the error.
Just change it to:
from glob import glob

zips_in_path = dict([(f, None) for f in glob("{base_path}/*.zip".format(base_path=path_to_watch))])
os.listdir does not include the path_to_watch part of the path (it returns just the filenames), whereas glob does.
so you could also do
after = dict ([(os.path.join(path_to_watch,f),None) for f in os.listdir(path_to_watch)])
Using either of these methods, you should be able to get the full path to the files in the watched directory.
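As for unzipping directly into an FTP location: you cannot extract on the server side over plain FTP, but you can skip the local temp folder by streaming each zip member straight into storbinary, since ftplib accepts any file-like object. A rough sketch, reusing the session_init helper from the question's script (addresses and paths are the question's placeholders):

import zipfile

def upload_zip_contents(zip_path, address, username, password, destination):
    """Stream each member of a zip archive to the FTP server without extracting to disk."""
    session = session_init(address, username, password, destination)
    with zipfile.ZipFile(zip_path, 'r') as z:
        for member in z.namelist():
            if member.endswith('/'):
                continue  # skip directory entries inside the archive
            member_file = z.open(member)
            # Note: member names containing subfolders would need those folders
            # created on the server first.
            session.storbinary('STOR ' + member, member_file)
            member_file.close()
    session.quit()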
Using Python gdata in my Google App Engine application, I am trying to copy a Google Docs resource and put the copy in the same folder as the source file. The CopyResource method that I use puts the copy at the root level.
Would you know how to query Google Docs to get the list of folders/collections that a file is in?
import gdata.docs.service
import gdata.docs.client
...

doc_service = gdata.docs.client.DocsClient()
doc_service.ClientLogin('username', 'abc123', 'my app name')

try:
    doc = doc_service.GetResourceById(resourceId)
    newdoc = doc_service.CopyResource(doc, filename)
    # newdoc is copied at the root level of Google Drive
    # need to move it where the source file is located.
    newdocid = newdoc.id.text.decode('utf-8').split('%3A')[1]
    # would like here to have my newly copied document
    # moved to the same directory as the source one.
    # assuming the source is only in one folder.
except gdata.client.RequestError, inst:
    logging.info('Copy Error: %s', inst.body)
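One possible approach, assuming your gdata version exposes Resource.InCollections() to list the collections a resource belongs to and DocsClient.MoveResource() to move an entry into a collection (both names should be verified against your installed library version; this is only a sketch):

try:
    doc = doc_service.GetResourceById(resourceId)
    newdoc = doc_service.CopyResource(doc, filename)
    # InCollections() is assumed to return links to the parent collections
    # (folders) of the source document.
    parents = doc.InCollections()
    if parents:
        # Assuming the source lives in a single folder, fetch that collection
        # and move the freshly made copy into it.
        collection = doc_service.GetResourceBySelfLink(parents[0].href)
        doc_service.MoveResource(newdoc, collection)
except gdata.client.RequestError, inst:
    logging.info('Copy/Move Error: %s', inst.body)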