I am currently setting up a website where I get a file uploaded from the user, do some processing on it, and provide a link for the user to download the processed file. For now I want to serve the file from a path on my local system. I am new to web2py and am having trouble doing this.
Could someone please help me do this?
Regards
See this link for a hint: webpy: how to stream files, and maybe add some code like this:
import os
import web

BUF_SIZE = 262144

class download:
    def GET(self):
        file_name = ...  # get from the URL
        file_path = os.path.join('/path to your file', file_name)
        f = None
        try:
            f = open(file_path, "rb")
            web.header('Content-Type', 'application/octet-stream')
            web.header('Content-disposition', 'attachment; filename=%s' % file_name)
            while True:
                c = f.read(BUF_SIZE)
                if c:
                    yield c
                else:
                    break
        except Exception as e:
            # raise a 403 or 500 here, or just swallow the error
            pass
        finally:
            if f:
                f.close()
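The yield-based loop above is just chunked reading; here is a minimal standalone sketch of the same idea, with the web framework stripped out (the `stream_file` name and the in-memory file are illustrative, not part of web.py):

```python
import io

BUF_SIZE = 262144

def stream_file(fileobj, buf_size=BUF_SIZE):
    # Yield the file's contents in buf_size pieces, like the GET handler above.
    while True:
        c = fileobj.read(buf_size)
        if not c:
            break
        yield c

# usage: a 300000-byte in-memory "file" yields one full buffer plus a remainder
chunks = list(stream_file(io.BytesIO(b"a" * 300000)))
print([len(c) for c in chunks])  # [262144, 37856]
```

Because the handler yields chunks instead of returning one big string, the server never holds the whole file in memory at once.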
As the title says, I have access to a shared folder where some files are uploaded. I just want to download a specific file, called "db.dta". So, I have this script:
import sys
import requests

def download_file(url, file_name):
    with open(file_name, "wb") as f:
        print("Downloading %s" % file_name)
        response = requests.get(url, stream=True)
        total_length = response.headers.get('content-length')
        if total_length is None:  # no content-length header
            f.write(response.content)
        else:
            dl = 0
            total_length = int(total_length)
            for data in response.iter_content(chunk_size=4096):
                dl += len(data)
                f.write(data)
                done = int(50 * dl / total_length)
                sys.stdout.write("\r[%s%s]" % ('=' * done, ' ' * (50 - done)))
                sys.stdout.flush()
            print(" ")
    print('Download successful.')
It can actually download files from share links if I change dl=0 to dl=1, like this:
https://www.dropbox.com/s/ajklhfalsdfl/db_test.dta?dl=1
The thing is, I don't have the share link for this particular file in the shared folder, so if I use the URL of the file preview, I get an access denied error (even if I change dl=0 to dl=1).
https://www.dropbox.com/sh/a630ksuyrtw33yo/LKExc-MKDKIIWJMLKFJ?dl=1&preview=db.dta
Error given:
dropbox.exceptions.ApiError: ApiError('22eaf5ee05614d2d9726b948f59a9ec7', GetSharedLinkFileError('shared_link_access_denied', None))
Is there a way to download this file?
If you have the shared link to the parent folder and not the specific file you want, you can use the /2/sharing/get_shared_link_file endpoint to download just the specific file.
In the Dropbox API v2 Python SDK, that's the sharing_get_shared_link_file method (or sharing_get_shared_link_file_to_file). Based on the error output you shared, it looks like you are already using that (though not in the particular code snippet you posted).
Using that would look like this:
import dropbox
dbx = dropbox.Dropbox(ACCESS_TOKEN)
folder_shared_link = "https://www.dropbox.com/sh/a630ksuyrtw33yo/LKExc-MKDKIIWJMLKFJ"
file_relative_path = "/db.dat"
res = dbx.sharing_get_shared_link_file(url=folder_shared_link, path=file_relative_path)
print("Metadata: %s" % res[0])
print("File data: %s bytes" % len(res[1].content))
(You mentioned both "db.dat" and "db.dta" in your question. Make sure you use whichever is actually correct.)
Additionally, note that if you're using a Dropbox API app registered with the "app folder" access type, there's currently a bug that can cause this shared_link_access_denied error when using this method with an access token for an app folder app.
I am trying to upload a zip folder to Dropbox. The zip is a backup, with a custom name based on the current date and time. The folder is correctly zipped, named, and stored. Sadly, there is always an issue uploading to Dropbox. I have tested a readme.txt using the same code, which works. I don't understand where this is going wrong. Thanks for any help.
import dropbox
import os
import datetime

dt = '{:%Y%m%d_%H%M}'.format(datetime.datetime.now())
name = dt + "_pa.zip"
os.system("zip -r " + name + " /home/obliss")

class TransferData:
    def __init__(self, access_token):
        self.access_token = access_token

    def upload_file(self, file_from, file_to):
        dbx = dropbox.Dropbox(self.access_token)
        with open(file_from, 'rb') as f:
            dbx.files_upload(f.read(), file_to, mode=dropbox.files.WriteMode.overwrite)

access_token = "[hidden]"
file_from = "/home/olbliss/" + name
file_to = "/Work/Python Anywhere Backups/" + name

transferData = TransferData(access_token)
try:
    transferData.upload_file(file_from, file_to)
except:
    os.remove(name)
    print('upload failed, ' + name + ' removed from /home/olbliss/')

try:
    os.remove(name)
except:
    pass
Failure message:
The 413 status code indicates that the payload was too large. The files_upload method only officially supports files up to 150 MB in size. You'll need to use upload_sessions for larger files.
Here's a basic example that uses the Dropbox Python SDK to upload a file to the Dropbox API from the local file as specified by file_path to the remote path as specified by dest_path. It also chooses whether or not to use an upload session based on the size of the file:
f = open(file_path, 'rb')
file_size = os.path.getsize(file_path)

CHUNK_SIZE = 8 * 1024 * 1024

if file_size <= CHUNK_SIZE:
    print(dbx.files_upload(f.read(), dest_path))
else:
    upload_session_start_result = dbx.files_upload_session_start(f.read(CHUNK_SIZE))
    cursor = dropbox.files.UploadSessionCursor(session_id=upload_session_start_result.session_id,
                                               offset=f.tell())
    commit = dropbox.files.CommitInfo(path=dest_path)

    while f.tell() <= file_size:
        if (file_size - f.tell()) <= CHUNK_SIZE:
            print(dbx.files_upload_session_finish(f.read(CHUNK_SIZE),
                                                  cursor,
                                                  commit))
            break
        else:
            dbx.files_upload_session_append_v2(f.read(CHUNK_SIZE),
                                               cursor)
            cursor.offset = f.tell()

f.close()
Note: this should only serve as an example. It hasn't been extensively tested and doesn't implement error handling.
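To see how the session logic splits a file, here is a small self-contained sketch that walks a byte buffer with the same start/append/finish pattern. It makes no Dropbox calls at all; `plan_upload` and the call names in the returned list are purely illustrative:

```python
import io

def plan_upload(data, chunk_size=4):
    # Return the sequence of API calls the session logic above would make,
    # as (call_name, bytes_sent) pairs. chunk_size is tiny for illustration.
    f = io.BytesIO(data)
    size = len(data)
    if size <= chunk_size:
        return [("files_upload", size)]
    calls = [("session_start", len(f.read(chunk_size)))]
    while f.tell() <= size:
        if size - f.tell() <= chunk_size:
            calls.append(("session_finish", len(f.read(chunk_size))))
            break
        calls.append(("session_append_v2", len(f.read(chunk_size))))
    return calls

print(plan_upload(b"abc"))         # [('files_upload', 3)]
print(plan_upload(b"abcdefghij"))  # start(4), append(4), finish(2)
```

A small file goes through the single-call path; a larger one is opened as a session, appended in full chunks, and finished with whatever remains.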
This is my code for uploading an image, defined in a class-based view:
def _handle_uploaded_file(self, request):
    folder = settings.MEDIA_ROOT
    uploaded_filename = request.FILES['img_fl'].name
    BASE_PATH = '/home/admin1/Desktop/virtualenv-12.1.1/mapfied'
    # create the folder if it doesn't exist.
    try:
        os.mkdir(os.path.join(BASE_PATH, folder))
    except Exception as e:
        pass
    # save the uploaded file inside that folder.
    full_filename = os.path.join(BASE_PATH, folder, uploaded_filename)
    fd = open(full_filename, 'wb')
    file_content = ContentFile(request.FILES['img_fl'].read())
    try:
        for chunk in file_content.chunks():
            fout.write(chunk)
        fout.close()
        html = "<html><body>SAVED</body></html>"
        print(html)
    except Exception as e:
        print(e)
The image file is getting saved to the correct location with the correct name, but it is corrupted. I am unable to find the exact reason for this. Am I doing something wrong here?
This is what I had from a previous project for writing upload files to disk:
def view_handling_function(request):
    for key, value in request.FILES.iteritems():
        full_path = ...
        save_uploadfile_to_disk(full_path, value)

def save_uploadfile_to_disk(full_path, file):
    with open(full_path, 'w+') as destination:
        for chunk in file.chunks():
            destination.write(chunk)
I think since you're writing a binary upload you need to open the file in writable binary mode, which is actually 'wb+'.
You could also tidy up a bit by using the 'with' keyword; see the Django example here.
Sidenote: if you're persisting the file as a FileField (or a derived class) you could just provide the 'upload_to' function that returns the full path and file name where you'd like to store the file. That lets the framework take care of the file I/O for you.
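A minimal sketch of the saver with binary mode; here a plain list of byte strings stands in for Django's UploadedFile.chunks(), and the file name is illustrative:

```python
import os
import tempfile

def save_upload_to_disk(full_path, chunks):
    # 'wb+' opens the file for binary read/write and truncates it first;
    # binary mode matters so image/zip bytes are not mangled by newline
    # translation on some platforms.
    with open(full_path, 'wb+') as destination:
        for chunk in chunks:
            destination.write(chunk)

# usage: write two "chunks" of a fake PNG header and read them back
path = os.path.join(tempfile.gettempdir(), 'upload_demo.bin')
save_upload_to_disk(path, [b'\x89PNG', b'\r\n\x1a\n'])
print(open(path, 'rb').read())  # b'\x89PNG\r\n\x1a\n'
```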
I have multiple URLs that return zip files. Most of the files I'm able to download using the urllib2 library as follows:
request = urllib2.urlopen(url)
zip_file = request.read()
The problem I'm having is that one of the files is 35 MB in size (zipped) and I'm never able to finish downloading it using this library. I'm able to download it using wget and the browser normally.
I have tried downloading the file in chunks like this:
request = urllib2.urlopen(url)
buffers = []
while True:
    buffer = request.read(8192)
    if buffer:
        buffers.append(buffer)
    else:
        break
final_file = ''.join(buffers)
But this also does not finish the download. No error is raised, so it's hard to debug what is happening. Unfortunately, I can't post an example of the url / file here.
Any suggestions / advices?
This is copy/paste from my application, which downloads its own update installer. It reads the file in blocks and immediately saves the blocks to an output file on disk.
def DownloadThreadFunc(self):
    try:
        url = self.lines[1]
        data = None
        req = urllib2.Request(url, data, {})
        handle = urllib2.urlopen(req)

        self.size = int(handle.info()["Content-Length"])
        self.actualSize = 0
        name = path.join(DIR_UPDATES, url.split("/")[-1])
        blocksize = 64 * 1024
        fo = open(name, "wb")
        while not self.terminate:
            block = handle.read(blocksize)
            self.actualSize += len(block)
            if len(block) == 0:
                break
            fo.write(block)
        fo.close()
    except (urllib2.URLError, socket.timeout) as e:
        try:
            fo.close()
        except:
            pass
        error("Download failed.", unicode(e))
I use self.size and self.actualSize to show the download progress in GUI thread and self.terminate to cancel the download from the GUI button if needed.
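The same cancellable block loop can be sketched without the GUI and urllib2 parts. Here `source` is any file-like object and `should_stop` stands in for the terminate flag a GUI button would set; the function name and signature are illustrative:

```python
import io

def copy_in_blocks(source, dest, blocksize=64 * 1024, should_stop=lambda: False):
    # Read source in blocks and write each block to dest immediately,
    # so the whole payload is never held in memory at once.
    # Returns the number of bytes copied; stops early if should_stop() is True.
    copied = 0
    while not should_stop():
        block = source.read(blocksize)
        if not block:
            break
        dest.write(block)
        copied += len(block)
    return copied

# usage: copy a 200000-byte in-memory stream
src = io.BytesIO(b"x" * 200000)
dst = io.BytesIO()
n = copy_in_blocks(src, dst)
print(n)  # 200000
```

Tracking `copied` against the Content-Length header gives the progress figure, exactly as self.actualSize and self.size do above.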
I have a dilemma. I'm uploading files both to the Scribd store and to the blobstore, using tipfy as the framework.
I have a web form whose action is not created by blobstore.create_upload_url (I'm just using url_for('myhandler')). I did this because if I use the blobstore handler, the POST response is parsed and I cannot use the normal python-scribd API to upload the file to the Scribd store.
Now I have a working Scribd saver:
class UploadScribdHandler(RequestHandler, BlobstoreUploadMixin):
    def post(self):
        uploaded_file = self.request.files.get('upload_file')
        fname = uploaded_file.filename.strip()
        try:
            self.post_to_scribd(uploaded_file, fname)
        except Exception as e:
            # ... get the exception message and do something with it
            msg = e.message
            # ...
        # reset the stream to zero (beginning) so the file can be read again
        uploaded_file.seek(0)
        # removed try-except to see debug info in browser window
        # Create the file
        file_name = files.blobstore.create(_blobinfo_uploaded_filename=fname)
        # Open the file and write to it
        with files.open(file_name, 'a') as f:
            f.write(uploaded_file.read())
        # Finalize the file. Do this before attempting to read it.
        files.finalize(file_name)
        # Get the file's blob key
        blob_key = files.blobstore.get_blob_key(file_name)
        return Response('done')

    def post_to_scribd(self, uploaded_file, fname):
        errmsg = ''
        uploaded_file = self.request.files.get('upload_file')
        fname = uploaded_file.filename.strip()
        fext = fname[fname.rfind('.') + 1:].lower()
        if fext not in ALLOWED_EXTENSION:
            raise Exception('This file type is not allowed to be uploaded\n')
        if SCRIBD_ENABLED:
            doc_title = self.request.form.get('title')
            doc_description = self.request.form.get('description')
            doc_tags = self.request.form.get('tags')
            try:
                document = scribd.api_user.upload(uploaded_file, fname, access='private')
                #while document.get_conversion_status() != 'DONE':
                #    time.sleep(2)
                if not doc_title:
                    document.title = fname[:fname.rfind('.')]
                else:
                    document.title = doc_title
                if not doc_description:
                    document.description = 'This document was uploaded at ' + str(datetime.datetime.now()) + '\n'
                else:
                    document.description = doc_description
                document.tags = doc_tags
                document.save()
            except scribd.ResponseError as err:
                raise Exception('Scribd failed: error code:%d, error message: %s\n' % (err.errno, err.strerror))
            except scribd.NotReadyError as err:
                raise Exception('Scribd failed: error code:%d, error message: %s\n' % (err.errno, err.strerror))
            except:
                raise Exception('something wrong exception')
As you can see, it also saves the file into the blobstore. But if I upload a big file (e.g. 5 MB), I receive:
RequestTooLargeError: The request to API call file.Append() was too large.
Request: docs.upload(access='private', doc_type='pdf', file=('PK\x03\x04\n\x00\x00\x00\x00\x00"\x01\x10=\x00\x00(...)', 'test.pdf'))
How can I fix it?
Thanks!
You need to make multiple, smaller calls to the file API, for instance like this:
with files.open(file_name, 'a') as f:
    data = uploaded_file.read(65536)
    while data:
        f.write(data)
        data = uploaded_file.read(65536)
Note that the payload size limit on regular requests to App Engine apps is 10MB; if you want to upload larger files, you'll need to use the regular blobstore upload mechanism.
Finally I found a solution.
Nick Johnson's answer raised an AttributeError because uploaded_file is treated as a string, and a string doesn't have a read() method.
Since a string has no read(), I split the file string into pieces and wrote them just like he did:
class UploadRankingHandler(webapp.RequestHandler):
    def post(self):
        fish_image_file = self.request.get('file')
        file_name = files.blobstore.create(mime_type='image/png', _blobinfo_uploaded_filename="testfilename.png")
        file_str_list = splitCount(fish_image_file, 65520)
        with files.open(file_name, 'a') as f:
            for line in file_str_list:
                f.write(line)
You can read about splitCount() here:
http://www.bdhwan.com/entry/gaewritebigfile
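Judging from how it is used above, splitCount presumably just splits the data into consecutive fixed-size pieces. A minimal sketch under that assumption (the implementation below is guessed from usage, not taken from the linked page):

```python
def splitCount(s, count):
    # Assumed behavior: split s into consecutive pieces of at most
    # `count` characters/bytes each, preserving order.
    return [s[i:i + count] for i in range(0, len(s), count)]

print(splitCount("abcdefgh", 3))  # ['abc', 'def', 'gh']
```

Writing each piece keeps every files.open() append call under the API's per-request size limit while reproducing the original data in order.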