I have a Google Drive repository where I used to upload lots of files. This time I would like to download something from this same repository.
The following code works to download a file with file_id:
DRIVE = discovery.build('drive', 'v3', http=creds.authorize(Http()))
file_id = '23242342345YdJqjvKLVbenO22FeKcL'
request = team_drive.DRIVE.files().get_media(fileId=file_id)
fh = io.BytesIO()
downloader = MediaIoBaseDownload(fh, request)
done = False
while done is False:
    status, done = downloader.next_chunk()
    print("Download %d%%." % int(status.progress() * 100))
fh.seek(0)
with open('test.csv', 'wb') as f:
    shutil.copyfileobj(fh, f, length=131072)
I would like to do the same, but this time download a file from a folder. I tried the following code to list the files in a given folder with folder_id, but it does not work.
folder_id = '13223232323237jWuf3__hKAG18jVo'
results = team_drive.DRIVE.files().list(
    q="mimeType='application/vnd.google-apps.spreadsheet' and parents in '" + folder_id + "'",
    fields="nextPageToken, files(id, name)",
    pageSize=400).execute()
Should the code work? I got an empty list. Any contribution would be appreciated.
I believe your goal and situation are as follows.
You want to download the Google Spreadsheet with the latest modified time from a specific folder in your shared drive, in XLSX format.
You want to achieve this using googleapis for Python.
You have already been able to download files using the Drive API.
For this, I would like to propose the following sample script. The flow of this script is as follows.
Retrieve the latest Google Spreadsheet from the specific folder in the shared Drive.
For this, I use results = DRIVE.files().list(pageSize=1, fields="files(modifiedTime,name,id)", orderBy="modifiedTime desc", q="'" + folder_id + "' in parents and mimeType = 'application/vnd.google-apps.spreadsheet'", supportsAllDrives=True, includeItemsFromAllDrives=True).execute()
By this, the Google Spreadsheet with the latest modified time can be retrieved.
Retrieve the file ID of the latest Google Spreadsheet.
In this case, results.get('files', [])[0]['id'] is the file ID.
Download the Google Spreadsheet as the XLSX format.
In this case, DRIVE.files().export_media(fileId=file_id, mimeType='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet') is used.
When the above flow is used, the sample script is as follows.
Sample script:
folder_id = "###" # Please set the folder ID.
DRIVE = discovery.build('drive', 'v3', http=creds.authorize(Http()))
results = DRIVE.files().list(pageSize=1, fields="files(modifiedTime,name,id)", orderBy="modifiedTime desc", q="'" + folder_id + "' in parents and mimeType = 'application/vnd.google-apps.spreadsheet'", supportsAllDrives=True, includeItemsFromAllDrives=True).execute()
items = results.get('files', [])
if items:
file_id = items[0]['id']
file_name = items[0]['name']
request = DRIVE.files().export_media(fileId=file_id, mimeType='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet')
fh = io.FileIO(file_name + '.xlsx', mode='wb')
downloader = MediaIoBaseDownload(fh, request)
done = False
while done is False:
status, done = downloader.next_chunk()
print('Download %d%%.' % int(status.progress() * 100))
Note:
From your script, I couldn't tell how DRIVE and team_drive.DRIVE are related. In this case, I used DRIVE from DRIVE = discovery.build('drive', 'v3', http=creds.authorize(Http())). If this cannot be used, please modify it.
Reference:
Files: list in Drive API v3
I use this function to get the URLs of files in a Drive folder:
from google.colab import auth
from oauth2client.client import GoogleCredentials
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
# imports for the error types caught below
from pydrive.files import ApiRequestError
from googleapiclient.errors import HttpError

auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
myDrive = GoogleDrive(gauth)
def getGDriveFileLinks(drive, folder_id, mime_type=None):
    """
    Returns a list of dicts of pairs of file names and shareable links
    drive: a GoogleDrive object with credentials
        https://pythonhosted.org/PyDrive/pydrive.html?highlight=googledrive#pydrive.drive.GoogleDrive
    folder_id: the ID of the folder containing the files (grab it from the folder's URL)
    mime_type (optional): the identifier of the file type
        https://developers.google.com/drive/api/v3/mime-types,
        https://www.iana.org/assignments/media-types/media-types.xhtml
    """
    file_list = []
    mime_type_query = "mimeType='{}' and ".format(mime_type) if mime_type is not None else ''
    files = drive.ListFile({'q': mime_type_query + "'{}' in parents".format(folder_id)}).GetList()
    for file in files:
        keys = file.keys()
        if 'alternateLink' in keys:
            link = file['alternateLink']
        elif 'webContentLink' in keys:
            link = file['webContentLink']
        elif 'webViewLink' in keys:
            link = file['webViewLink']
        else:
            try:
                # Make the file readable by anyone with the link, then grab the link.
                file.InsertPermission({
                    'type': 'anyone',
                    'value': 'anyone',
                    'role': 'reader'})
                link = file['alternateLink']
            except (HttpError, ApiRequestError):
                link = 'Insufficient permissions for this file'
        if 'title' in keys:
            name = file['title']
        else:
            name = file['id']
        file_list.append({'name': name, 'link': link})
    return file_list

print(getGDriveFileLinks(myDrive, 'folder_id'))
Then, the URL can be used to retrieve the file using pydrive.
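Alternatively, you can skip the URL and download the content directly by file ID with PyDrive's GetContentFile. A minimal sketch, assuming myDrive from above (the file ID and local name are hypothetical):
# Build a local reference to the existing Drive file by its ID
# (take a real ID from the list returned by getGDriveFileLinks).
remote_file = myDrive.CreateFile({'id': 'YOUR_FILE_ID'})
# Download the file's contents to a local file.
remote_file.GetContentFile('downloaded.csv')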
If anyone uses Ruby and needs help, this method returns an IO object:
drive.export_file(sheet_id, "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet")
ref: https://googleapis.dev/ruby/google-api-client/latest/Google/Apis/DriveV3/DriveService.html#export_file-instance_method
So I'm trying to download whatever files are in a folder in my Google Drive into a folder on my computer using Python and the Google Drive API. But for some reason, the downloaded files appear to be corrupted and their sizes are just zero bytes. I tried two different solutions and neither of them works. Can someone tell me what I'm doing wrong?
from __future__ import print_function
import pickle
from pathlib import Path
import requests
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
from googleapiclient.http import MediaIoBaseDownload
import io, os
def dl_file(g_drive, folderID):
    folder_path = Path("<folder path>")
    files_obj = g_drive.files()
    files_list = files_obj.list(
        pageSize=100,
        fields="nextPageToken, files(name, id, webContentLink, parents)").execute()
    items = files_list['files']
    if not items:
        print('No files found.')
    else:
        for item in items:
            if item['parents'] == [folderID]:
                file_name = item['name']

                # Method 1
                fh = io.BytesIO()
                downloader = MediaIoBaseDownload(fh, files_obj.get_media(fileId=item['id']))
                done = False
                while done is False:
                    status, done = downloader.next_chunk()
                    print("Download %d%%." % int(status.progress() * 100))
                with io.open(f"{folder_path.as_posix()}/{file_name}", 'wb') as f:
                    f.seek(0)
                    f.write(fh.read())

                # Method 2
                url = item['webContentLink']
                r = requests.get(url, stream=True)
                if r.ok:
                    print(f"Saving to '{folder_path.name}/'")
                    with open(f"{folder_path.as_posix()}/{file_name}", 'wb') as f:
                        for chunk in r.iter_content(chunk_size=1024 * 8):
                            if chunk:
                                f.write(chunk)
                                f.flush()
                                os.fsync(f.fileno())
                else:  # HTTP status code 4XX/5XX
                    print("Download failed: status code {}\n{}".format(r.status_code, r.text))

g_drive = build('drive', 'v3', credentials=creds)
dl_file(g_drive, "<folder_ID>")
I believe your goal is as follows.
You want to download the files which are not Google Docs in the specific folder on Google Drive.
You want to achieve this using python.
You have already been able to download the files using Drive API.
You tried 2 methods: #Method 1 and #Method 2.
#Method 1 uses googleapis.
#Method 2 uses requests.
For this, how about this answer?
Modification points:
In this modification, I modify #Method 1, which uses googleapis.
When downloading files from the specific folder, I propose directly retrieving the list of files, excluding Google Docs, from that folder using the "Files: list" method. In this case, a search query is used.
When these points are reflected in your script, it becomes as follows.
Modified script:
Please modify dl_file as follows.
def dl_file(g_drive, folderID):
    folder_path = Path("<folder path>")
    files_obj = g_drive.files()
    files_list = files_obj.list(
        pageSize=100,
        q="'" + folderID + "' in parents and not mimeType contains 'application/vnd.google-apps'",
        fields="nextPageToken, files(name, id, webContentLink, parents)").execute()
    items = files_list['files']
    if not items:
        print('No files found.')
    else:
        for item in items:
            file_name = item['name']
            request = g_drive.files().get_media(fileId=item['id'])
            fh = io.FileIO(folder_path.as_posix() + '/' + file_name, mode='wb')
            downloader = MediaIoBaseDownload(fh, request)
            print('Download of ' + file_name)
            done = False
            while done is False:
                status, done = downloader.next_chunk()
                print('Download %d%%.' % int(status.progress() * 100))
When the above script is run, the downloaded files are created in the <folder path> directory.
References:
Files: list
Search for files and folders
When I retrieve CSV files from Google Drive via the API, I get files with no contents.
The code below consists of 3 parts (1: authenticate, 2: search for files, 3: download files).
I suspect there is something wrong in step 3 (download files), specifically around while done is False, because I have no problem accessing Google Drive and downloading files; it's just that they are all empty.
It would be great if someone could show me how to fix it.
The code below is mostly borrowed from the Google website. Thank you for your time in advance!
Step 1: Authentication
from apiclient import discovery
from httplib2 import Http
import oauth2client
from oauth2client import file, client, tools
obj = lambda: None  # this trick creates an empty object to hold attributes
auth = {"auth_host_name": 'localhost', 'noauth_local_webserver': 'store_true',
        'auth_host_port': [8080, 8090], 'logging_level': 'ERROR'}
for k, v in auth.items():
    setattr(obj, k, v)

scopes = 'https://www.googleapis.com/auth/drive'
store = file.Storage('token_google_drive2.json')
creds = store.get()
# The following takes the user to an authentication link if no token file is found.
if not creds or creds.invalid:
    flow = client.flow_from_clientsecrets('client_id.json', scopes)
    creds = tools.run_flow(flow, store, obj)
Step 2: Search for files and create a dictionary of files to download
from googleapiclient.discovery import build

page_token = None
drive_service = build('drive', 'v3', credentials=creds)
while True:
    name_list = []
    id_list = []
    response = drive_service.files().list(
        q="mimeType='text/csv' and name contains 'RR' and name contains '20191001'",
        spaces='drive',
        fields='nextPageToken, files(id, name)',
        pageToken=page_token).execute()
    for file in response.get('files', []):
        name = file.get('name')
        id_ = file.get('id')
        # name and id are strings, so create lists first before creating a dictionary
        name_list.append(name)
        id_list.append(id_)
    # you also need to remove ":" from name_list, or the downloaded files are
    # nowhere to be found in the folder!
    name_list = [word.replace(':', '') for word in name_list]
    page_token = response.get('nextPageToken', None)
    if page_token is None:
        break

# Create a dictionary using name_list and id_list
zipobj = zip(name_list, id_list)
temp_dic = dict(zipobj)
Step 3: Download Files (the troublesome part)
import io
from googleapiclient.http import MediaIoBaseDownload

for i in range(len(temp_dic.values())):
    file_id = list(temp_dic.values())[i]
    v = list(temp_dic.keys())[i]
    request = drive_service.files().get_media(fileId=file_id)
    fh = io.FileIO(v, mode='w')
    downloader = MediaIoBaseDownload(fh, request)
    done = False
    while done is False:
        status, done = downloader.next_chunk()
        status_complete = int(status.progress() * 100)
        print(f'Download of {len(temp_dic.values())} files, {int(status.progress()*100)}%')
Actually, I figured it out myself. Below is an edit.
All I needed to do was delete done = False and while done is False:, and add fh.close() to close the downloader.
The complete revised Step 3 is as follows:
from googleapiclient.http import MediaIoBaseDownload

for i in range(len(temp_dic.values())):
    file_id = list(temp_dic.values())[i]
    v = list(temp_dic.keys())[i]
    request = drive_service.files().get_media(fileId=file_id)
    # replace the filename and extension in the first field below
    fh = io.FileIO(v, mode='wb')  # binary writing is specified with 'wb'; this matters especially on Windows
    downloader = MediaIoBaseDownload(fh, request)
    status, done = downloader.next_chunk()
    status_complete = int(status.progress() * 100)
    print(f'{list(temp_dic.keys())[i]} is {int(status.progress()*100)}% downloaded')
    fh.close()
print(f'{len(list(temp_dic.keys()))} files')
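Note that dropping the while loop only works when each file fits in a single chunk; for files larger than the downloader's chunk size, next_chunk() must still be called repeatedly. A sketch that keeps the loop and still closes the handle, assuming temp_dic from Step 2:
for file_name, file_id in temp_dic.items():
    request = drive_service.files().get_media(fileId=file_id)
    # The with block closes the file handle even if an error occurs.
    with io.FileIO(file_name, mode='wb') as fh:
        downloader = MediaIoBaseDownload(fh, request)
        done = False
        while not done:
            status, done = downloader.next_chunk()
            print(f'{file_name} is {int(status.progress() * 100)}% downloaded')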
So I am trying to download a lot of different files from Google Drive, then combine them into fewer, smaller files. However, for some reason, my code is downloading duplicate files, or potentially just reading the BytesIO object incorrectly. I have pasted the code below, and here is a quick explanation of the file structure.
So I have ~135 folders, containing 52 files each. My goal is to loop through each folder, download the 52 files, then convert those 52 files into one file that is more compressed (get rid of unnecessary/duplicate data).
Code
def main(temporary_workspace, workspace):
    store = file.Storage('tokenRead.json')
    big_list_of_file_ids = []
    creds = store.get()
    if not creds or creds.invalid:
        flow = client.flow_from_clientsecrets('credentials.json', SCOPES)
        creds = tools.run_flow(flow, store)
    service = build('drive', 'v3', http=creds.authorize(Http()))
    # Call the Drive v3 API
    results = service.files().list(
        q="'MAIN_FOLDER_WITH_SUBFOLDERS_ID' in parents",
        pageSize=1000, fields="nextPageToken, files(id, name)").execute()
    items = results.get('files', [])
    list_of_folders_and_ids = []
    if not items:
        raise RuntimeError('No files found.')
    else:
        for item in items:
            list_of_folders_and_ids.append((item['name'], item['id']))
    list_of_folders_and_ids.sort(key=lambda x: x[0])
    for folder_id in list_of_folders_and_ids:
        start_date = folder_id[0][:-3]
        id = folder_id[1]
        print('Folder: ', start_date, ', ID: ', id)
        query_string = "'{}' in parents".format(id)
        results = service.files().list(
            q=query_string, fields="nextPageToken, files(id, name)"
        ).execute()
        items = results.get('files', [])
        list_of_files_and_ids = []
        if not items:
            raise RuntimeError('No files found.')
        else:
            for item in items:
                list_of_files_and_ids.append((item['name'], item['id']))
        for file_id in list_of_files_and_ids:
            # Downloading the files
            if file_id[1] not in big_list_of_file_ids:
                big_list_of_file_ids.append(file_id[1])
            else:
                print('Duplicate file ID!')
                exit()
            print('\tFile: ', file_id[0], ', ID: ', file_id[1])
            request = service.files().get_media(fileId=file_id[1])
            fh = io.BytesIO()
            downloader = MediaIoBaseDownload(fh, request)
            done = False
            while done is False:
                status, done = downloader.next_chunk()
                print("Download: {}".format(int(status.progress() * 100)))
            fh.seek(0)
            temporary_location = os.path.join(tmp_workspace, file_id[0])
            with open(temporary_location, 'wb') as out:
                out.write(fh.read())
            fh.close()
        convert_all_netcdf(temporary_workspace, start_date, workspace,
                           r'Qout_south_america_continental', num_of_rivids=62317)
        os.system('rm -rf %s/*' % tmp_workspace)
So as you can see, I first get the IDs of all of the folders, then I loop through each folder and get the 52 files in that folder, then I save all 52 files to a temporary folder, convert them into one file, save that in another directory, and then delete the 52 files and move on to the next folder in Google Drive. The problem is, when I compare the files that I compressed with the convert_all_netcdf method, they are all the same. I feel as though I am doing something wrong with the BytesIO object; do I need to do something more to clear it? It may also be that I am accidentally reading from the same folder every time in the Google Drive API calls. Any help is appreciated.
I realize that this was probably not a great question, and I mainly asked it because I thought I was doing something wrong with the BytesIO object, but I found the answer. I was reading all of the files that I downloaded with a library called Xarray, and I was forgetting to close the connection. This was causing me to only read the first connection on subsequent loops, giving me duplicates. Thanks to anyone who tried!
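For reference, a minimal sketch of that fix, assuming the files are read with xarray during the conversion step (downloaded_paths and process are hypothetical placeholders):
import xarray as xr

for path in downloaded_paths:
    # The with block closes the dataset's file handle on exit, so the
    # next iteration cannot re-read a stale open connection.
    with xr.open_dataset(path) as ds:
        process(ds)  # placeholder for the NetCDF conversion logic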
I am trying to retrieve file metadata from the Google Drive API v3 in Python. I did it in API v2, but failed in v3.
I tried to get metadata by this line:
data = DRIVE.files().get(fileId=file['id']).execute()
but all I got was a dict of 'id', 'kind', 'name', and 'mimeType'. How can I get 'md5Checksum', 'fileSize', and so on?
I read the documentation.
I am supposed to get all the metadata via the get() method, but all I got was a small part of it.
Here is my code:
from __future__ import print_function
import os
from apiclient.discovery import build
from httplib2 import Http
from oauth2client import file, client, tools
try:
    import argparse
    flags = argparse.ArgumentParser(parents=[tools.argparser]).parse_args()
except ImportError:
    flags = None
SCOPES = ('https://www.googleapis.com/auth/drive.metadata '
          'https://www.googleapis.com/auth/drive')
store = file.Storage('storage.json')
creds = store.get()
if not creds or creds.invalid:
    flow = client.flow_from_clientsecrets('storage.json', scope=SCOPES)
    creds = tools.run_flow(flow, store)
DRIVE = build('drive','v3', http=creds.authorize(Http()))
files = DRIVE.files().list().execute().get('files',[])
for file in files:
    print('\n', file['name'], file['id'])
    data = DRIVE.files().get(fileId=file['id']).execute()
    print('\n', data)
print('Done')
I tried this answer, Google Drive API v3 Migration (List):
Files returned by service.files().list() do not contain information now, i.e. every field is null. If you want list on v3 to behave like it did in v2, call it like this:
service.files().list().setFields("nextPageToken, files");
but that snippet appears to be for the Java client; in Python I get a Traceback:
DRIVE.files().list().setFields("nextPageToken, files")
AttributeError: 'HttpRequest' object has no attribute 'setFields'
Suppose you want to get the md5 hash of a file given its fileId, you can do it like this:
DRIVE = build('drive','v3', http=creds.authorize(Http()))
file_service = DRIVE.files()
remote_file_hash = file_service.get(fileId=fileId, fields="md5Checksum").execute()['md5Checksum']
To list some files on the Drive:
results = file_service.list(pageSize=10, fields="files(id, name)").execute()
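Since the question also asks about 'fileSize': in v3 that field was renamed to size. A sketch, assuming the same file_service, that requests several metadata fields at once:
results = file_service.list(
    pageSize=10,
    fields="files(id, name, md5Checksum, size)").execute()
for f in results.get('files', []):
    # Google Docs formats have no md5Checksum or size, hence .get()
    print(f['name'], f.get('md5Checksum'), f.get('size'))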
I have built a small application gDrive-auto-sync containing more examples of API usage.
It's well-documented, so you can have a look at it if you want.
Here is the main file containing all the code. It might look like a lot but more than half of lines are just comments.
If you want to retrieve all the fields for a file resource, simply set fields='*'
In your above example, you would run
data = DRIVE.files().get(fileId=file['id'], fields='*').execute()
This should return all the available fields for the file resource, as listed in:
https://developers.google.com/drive/v3/reference/files
There is a library, PyDrive, that provides easy interaction with Google Drive:
https://googledrive.github.io/PyDrive/docs/build/html/filelist.html
Their example:
from pydrive.drive import GoogleDrive

drive = GoogleDrive(gauth)  # Create GoogleDrive instance with authenticated GoogleAuth instance

# Auto-iterate through all files in the root folder.
file_list = drive.ListFile({'q': "'root' in parents and trashed=false"}).GetList()
for file1 in file_list:
    print('title: %s, id: %s' % (file1['title'], file1['id']))
All you need is file1['your key']
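For example, since PyDrive wraps the v2 API (where the relevant keys are md5Checksum and fileSize), a sketch of reading them from the listing above:
for file1 in file_list:
    # Google Docs formats have no checksum or size, hence .get()
    print(file1['title'], file1.get('md5Checksum'), file1.get('fileSize'))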
I want to upload a file from my Python script to my Dropbox account automatically. I can't find any way to do this with just a user/pass. Everything I see in the Dropbox SDK is related to an app having user interaction. I just want to do something like this:
https://api-content.dropbox.com/1/files_put//?user=me&pass=blah
The answer of @Christina is based on Dropbox API v1, which is deprecated and will be turned off on 6/28/2017. (Refer to here for more information.)
API v2 was launched in November 2015 and is simpler, more consistent, and more comprehensive.
Here is the source code using API v2.
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import dropbox

class TransferData:
    def __init__(self, access_token):
        self.access_token = access_token

    def upload_file(self, file_from, file_to):
        """upload a file to Dropbox using API v2
        """
        dbx = dropbox.Dropbox(self.access_token)

        with open(file_from, 'rb') as f:
            dbx.files_upload(f.read(), file_to)

def main():
    access_token = '******'
    transferData = TransferData(access_token)

    file_from = 'test.txt'
    file_to = '/test_dropbox/test.txt'  # The full path to upload the file to, including the file name

    # API v2
    transferData.upload_file(file_from, file_to)

if __name__ == '__main__':
    main()
The source code is hosted on GitHub, here.
Important note: this answer is deprecated since Dropbox uses the v2 API now.
See the answer of @SparkAndShine for the current API version solution.
Thanks to @smarx for the answer above! I just wanted to clarify for anyone else trying to do this.
Make sure you install the dropbox module first, of course: pip install dropbox.
1. Create an app under your own dropbox account in the "App Console" (https://www.dropbox.com/developers/apps). Just for the record, I created my app with the following:
a. App type as "Dropbox API APP".
b. Type of data access as "Files & Datastores".
c. Folder access as "My app needs access to files already on Dropbox" (i.e. permission type "Full Dropbox").
2. Then click the "generate access token" button and cut/paste into the python example below in place of <auth_token>:
import dropbox
client = dropbox.client.DropboxClient(<auth_token>)
print 'linked account: ', client.account_info()
f = open('working-draft.txt', 'rb')
response = client.put_file('/magnum-opus.txt', f)
print 'uploaded: ', response
folder_metadata = client.metadata('/')
print 'metadata: ', folder_metadata
f, metadata = client.get_file_and_metadata('/magnum-opus.txt')
out = open('magnum-opus.txt', 'wb')
out.write(f.read())
out.close()
print metadata
Here's my approach using API v2 (and Python 3). I wanted to upload a file and create a share link for it, which I could email to users. It's based on sparkandshine's example. Note I think the current API documentation has a small error which sparkandshine has corrected.
import pathlib
import dropbox
import re
# the source file
folder = pathlib.Path(".") # located in this folder
filename = "test.txt" # file name
filepath = folder / filename # path object, defining the file
# target location in Dropbox
target = "/Temp/" # the target folder
targetfile = target + filename # the target path and file name
# Create a dropbox object using an API v2 key
d = dropbox.Dropbox(your_api_access_token)
# open the file and upload it
with filepath.open("rb") as f:
    # upload gives you metadata about the file
    # we want to overwrite any previous version of the file
    meta = d.files_upload(f.read(), targetfile, mode=dropbox.files.WriteMode("overwrite"))
# create a shared link
link = d.sharing_create_shared_link(targetfile)
# url which can be shared
url = link.url
# link which directly downloads by replacing ?dl=0 with ?dl=1
dl_url = re.sub(r"\?dl\=0", "?dl=1", url)
print (dl_url)
import dropbox

access_token = '************************'
file_from = 'index.jpeg'  # local file path
file_to = '/Siva/index.jpeg'  # dropbox path

def upload_file(file_from, file_to):
    dbx = dropbox.Dropbox(access_token)
    f = open(file_from, 'rb')
    dbx.files_upload(f.read(), file_to)

upload_file(file_from, file_to)
The only way to authenticate calls to the Dropbox API is to use OAuth, which involves the user giving permission to your app. We don't allow third-party apps to handle user credentials (username and password).
If this is just for your account, note that you can easily get an OAuth token for your own account and just use that. See https://www.dropbox.com/developers/blog/94/generate-an-access-token-for-your-own-account.
If this is for other users, they'll need to authorize your app once via the browser for you to get an OAuth token. Once you have the token, you can keep using it, though, so each user should only have to do this once.
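For example, once you have generated a token for your own account, a minimal sketch of using it with the current v2 SDK (the token string is a placeholder):
import dropbox

# Placeholder: paste the access token generated for your own account.
dbx = dropbox.Dropbox('YOUR_ACCESS_TOKEN')
# Verify the token works by fetching the linked account's info.
print(dbx.users_get_current_account())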
Sorry if I'm missing something, but can't you just download the Dropbox application for your OS and then save the file (in Windows) in:
C:\Users\<UserName>\Dropbox\<FileName>
I just created a Python program to save a text file, checked my Dropbox, and it saves them fine.
For the Dropbox Business API, the Python code below helps upload files to Dropbox.
import dropbox

def dropbox_file_upload(access_token, dropbox_file_path, local_file_name):
    '''
    The function uploads a file to Dropbox.

    Parameters:
        access_token(str): access token to authenticate with Dropbox
        dropbox_file_path(str): Dropbox file path along with the file name
            Eg: '/ab/Input/f_name.xlsx'
        local_file_name(str): local file name, with path if not in the working directory
            Eg: 'f_name.xlsx'

    Returns:
        Boolean:
            True on successful upload
            False on unsuccessful upload
    '''
    try:
        dbx = dropbox.DropboxTeam(access_token)
        # get the team member id for the common user
        # (logged_in_user is assumed to be defined elsewhere)
        members = dbx.team_members_list()
        for i in range(0, len(members.members)):
            if members.members[i].profile.name.display_name == logged_in_user:
                member_id = members.members[i].profile.team_member_id
                break
        # connect to dropbox with the member id
        dbx = dropbox.DropboxTeam(access_token).as_user(member_id)
        # upload the local file to dropbox
        f = open(local_file_name, 'rb')
        dbx.files_upload(f.read(), dropbox_file_path)
        return True
    except Exception as e:
        print(e)
        return False
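A hypothetical call, assuming logged_in_user is defined elsewhere (the function above references it) and the token is a placeholder:
logged_in_user = 'Jane Doe'  # hypothetical team member display name
ok = dropbox_file_upload('YOUR_TEAM_ACCESS_TOKEN', '/ab/Input/f_name.xlsx', 'f_name.xlsx')
print('Upload succeeded' if ok else 'Upload failed')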
If you need to upload a BIG file, you need to break the file into chunks and upload them one by one, as follows. Inspired by this great Medium article:
import os
import dropbox

def upload_a_big_file(local_file_path: str, remote_file_path: str):
    # grab your authenticated client
    dbx = get_dropbox_client()
    file_size = os.path.getsize(local_file_path)

    # Upload 8 MB chunks at a time
    CHUNK_SIZE = 8 * 1024 * 1024

    with open(local_file_path, 'rb') as local_file:
        uploaded_size = 0
        upload_session_start_result = dbx.files_upload_session_start(local_file.read(CHUNK_SIZE))
        cursor = dropbox.files.UploadSessionCursor(
            session_id=upload_session_start_result.session_id,
            offset=local_file.tell()
        )
        commit = dropbox.files.CommitInfo(
            path=remote_file_path,
            mode=dropbox.files.WriteMode.overwrite
        )
        print("Starting Upload.")
        while local_file.tell() <= file_size:
            if (file_size - local_file.tell()) <= CHUNK_SIZE:
                # Last chunk remaining, so commit
                dbx.files_upload_session_finish(
                    local_file.read(CHUNK_SIZE),
                    cursor,
                    commit
                )
                print("Done uploading!")
                break
            else:
                dbx.files_upload_session_append_v2(
                    local_file.read(CHUNK_SIZE),
                    cursor
                )
                cursor.offset = local_file.tell()
                uploaded_size += CHUNK_SIZE
                uploaded_percent = 100 * uploaded_size / file_size
                print('Uploaded {:.2f}%'.format(uploaded_percent))
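A hypothetical usage, where get_dropbox_client() stands in for your own helper that returns an authenticated dropbox.Dropbox instance:
# For a quick test, get_dropbox_client() could simply
# return dropbox.Dropbox('YOUR_ACCESS_TOKEN')
upload_a_big_file('backup.tar.gz', '/backups/backup.tar.gz')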
Here is the code for uploading live video to Dropbox using Python on Windows.
Hope this helps.
import numpy as np
import cv2
import dropbox
import os
from glob import iglob

access_token = 'paste your access token here'  # paste your access token in-between ''
client = dropbox.client.DropboxClient(access_token)
print 'linked account: ', client.account_info()
PATH = ''

cap = cv2.VideoCapture(0)

# Define the codec and create VideoWriter object
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('C:\python27\output1.avi', fourcc, 20.0, (640, 480))
# here output1.avi is the filename in which the video captured from the webcam
# is stored, and it resides in C:\python27 as per the path given.

while cap.isOpened():
    ret, frame = cap.read()
    if ret == True:
        # frame = cv2.flip(frame, 0)  # if you want to flip your video
        # write the (unflipped or flipped) frame
        out.write(frame)
        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

# Release everything if the job is finished
cap.release()
out.release()
cv2.destroyAllWindows()

for filename in iglob(os.path.join(PATH, 'C:/Python27/output1.avi')):
    print filename
    try:
        f = open(filename, 'rb')
        response = client.put_file('/livevideo1.avi', f)
        print "uploaded:", response
        f.close()
        # os.remove(filename)
    except Exception, e:
        print 'Error %s' % e
Here is the code for uploading an existing video to your Dropbox account using Python on Windows.
Hope this helps.
# Include the Dropbox SDK
import dropbox
# Get your app key and secret from the Dropbox developer website
app_key = 'paste your app-key here'
app_secret = 'paste your app-secret here'
flow = dropbox.client.DropboxOAuth2FlowNoRedirect(app_key, app_secret)
# Have the user sign in and authorize this token
authorize_url = flow.start()
print '1. Go to: ' + authorize_url
print '2. Click "Allow" (you might have to log in first)'
print '3. Copy the authorization code.'
code = raw_input("Enter the authorization code here: ").strip()
# This will fail if the user enters an invalid authorization code
access_token, user_id = flow.finish(code)
client = dropbox.client.DropboxClient(access_token)
print 'linked account: ', client.account_info()
f = open('give the full path of the video which you want to upload to your dropbox account (ex: C:\python27\examples\video.avi)', 'rb')
response = client.put_file('/video1.avi', f)  # video1.avi is the name under which your video is shown in your dropbox account. You can give any name here.
print 'uploaded: ', response
folder_metadata = client.metadata('/')
print 'metadata: ', folder_metadata
f, metadata = client.get_file_and_metadata('/video1.avi')
out = open('video1.avi', 'wb')
out.write(f.read())
out.close()
print metadata
The same code can be used for uploading images.
Just write the name of the image file you want to upload (ex: image.jpg) in place of the video name, and change video1.avi to the name under which the uploaded image should be shown in your Dropbox (ex: image1.jpg).