How to send zip files in the Python Flask framework?

I have a flask server that grabs binary data for several different files from a database and puts them into a python 'zipfile' object. I want to send the generated zip file with my code using flask's "send_file" method.
I was originally able to send non-zip files successfully by passing BytesIO(bin) as the first argument to send_file, but for some reason I can't do the same with my generated zip file. It gives the error:
'ZipFile' does not have the buffer interface.
How do I send this zip file object to the user with Flask?
This is my code:
@app.route("/getcaps", methods=['GET', 'POST'])
def downloadFiles():
    if request.method == 'POST':
        mongo = MongoDAO('localhost', 27017)
        identifier = request.form['CapsuleName']
        password = request.form['CapsulePassword']
        result = mongo.getCapsuleByIdentifier(identifier, password)
        zf = zipfile.ZipFile('capsule.zip', 'w')
        files = result['files']
        for individualFile in files:
            data = zipfile.ZipInfo(individualFile['fileName'])
            data.date_time = time.localtime(time.time())[:6]
            data.compress_type = zipfile.ZIP_DEFLATED
            zf.writestr(data, individualFile['fileData'])
        return send_file(BytesIO(zf), attachment_filename='capsule.zip', as_attachment=True)
    return render_template('download.html')

BytesIO() needs to be passed bytes, but a ZipFile() object is not bytes; you actually created a file on your hard disk instead.
You can create a ZipFile() in memory by using BytesIO() as the base:
memory_file = BytesIO()
with zipfile.ZipFile(memory_file, 'w') as zf:
    files = result['files']
    for individualFile in files:
        data = zipfile.ZipInfo(individualFile['fileName'])
        data.date_time = time.localtime(time.time())[:6]
        data.compress_type = zipfile.ZIP_DEFLATED
        zf.writestr(data, individualFile['fileData'])
memory_file.seek(0)
return send_file(memory_file, attachment_filename='capsule.zip', as_attachment=True)
The with statement ensures that the ZipFile() object is properly closed when you are done adding entries, causing it to write the required trailer to the in-memory file object. The memory_file.seek(0) call is needed to 'rewind' the read-write position of the file object back to the start.
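Note that newer Flask releases (2.0 and later) renamed the attachment_filename parameter of send_file; if you are on such a version, the return line would presumably look like this instead:
# Flask 2.x: attachment_filename was renamed to download_name
return send_file(memory_file, download_name='capsule.zip', as_attachment=True)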

Related

Django: download created zip from function. Zipfile returns empty/not a zip-archive file

I am trying to give the user a "Save as" option when the user clicks the download button in my Django app. When the user clicks the button it kicks off the following function. The function gets some CSVs from a blob container in Azure and adds them to a zip. That zip should then be offered for download so the user can store it in a location of their choice.
def create_downloadable_zip():
    container_client = az.container_client(container_name=blob_generator.container_name)
    blobs = container_client.list_blobs()
    zip_file = zipfile.ZipFile(f'{models.AppRun.client_name}.zip', 'w')
    for blob in blobs:
        if blob.name.endswith(".csv"):
            downloaded_blob = container_client.download_blob(blob)
            blob_data = downloaded_blob.readall()
            zip_file.writestr(blob.name, blob_data)
    zip_file.close()
    return zip_file
My views.py looks as follows:
def download_file(request):
    if request.method == 'POST':
        zip = create_downloadable_zip()
        response = HttpResponse(zip, content_type='application/zip')
        response['Content-Disposition'] = 'attachement;' f'filename={zip}.zip'
        return response
    # else:
    #     # return a 404 response if this is not a POST request
    #     return HttpResponse(status=404)
    return render(request, "download_file.html")
The functionality works, but when the "Save as" window pops up it returns an empty, non-zip file. However, the actual zip file containing the files is saved in the root folder of the Django project.
I really don't get why it doesn't return the zip file from memory, but instead stores the zip file directly in root and returns an empty, non-zip file through the download functionality.
Does someone know what I am doing wrong?
zipfile is used to open (or create) an archive, but the ZipFile object is not the actual file itself, simply a zipfile object, as @b-remmelzwaal mentioned. You will need to create a file-like object and return that instead. This can be done using io.BytesIO.
from io import BytesIO
from zipfile import ZipFile

def create_zip():
    container_client = az.container_client(container_name=blob_generator.container_name)
    blobs = container_client.list_blobs()
    buffer = BytesIO()
    with ZipFile(buffer, 'w') as zip_file:
        for blob in blobs:
            if blob.name.endswith(".csv"):
                downloaded_blob = container_client.download_blob(blob)
                blob_data = downloaded_blob.readall()
                zip_file.writestr(blob.name, blob_data)
    return buffer.getvalue()
Note that we are returning the buffer's contents (via buffer.getvalue()), not the ZipFile object. This is because buffer holds the actual archive you've created.
You don't have to use a context manager, but I find them very useful.
Also, check your spelling on this line:
# attachment instead of attachement
response['Content-Disposition'] = 'attachment;' f'filename={zip}.zip'
BytesIO Documentation
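For completeness, a minimal sketch of the corrected view using the in-memory archive (the literal filename here is illustrative; the original formatted the ZipFile object into the name, which is likely not what you want):
def download_file(request):
    if request.method == 'POST':
        zip_bytes = create_zip()  # the bytes returned by buffer.getvalue()
        response = HttpResponse(zip_bytes, content_type='application/zip')
        response['Content-Disposition'] = 'attachment; filename="export.zip"'  # illustrative filename
        return response
    return render(request, "download_file.html")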

FileResponse with file opened using smart-open

I'm trying to let a user download a file from my webpage. The file is in an S3 bucket I'm accessing using smart-open. The issue I'm having is how to combine that with FileResponse. Presently I'm getting a TypeError: "expected str, bytes or os.PathLike object, not Reader". The file types will be .txt, .pdf and .doc/.docx.
My view:
@api_view(['GET'])
def download_attachment(request, pk):
    session = boto3.Session(aws_access_key_id=AWS_ACCESS_KEY_ID,
                            aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
    obj = Attachment.objects.get(id=pk)
    with smart_opener(f's3://______media/public/{str(obj)}', "rb",
                      transport_params={'client': session.client('s3')}) as attachment:
        response = FileResponse(open(attachment, 'rb'))
        response['Content-Disposition'] = f'attachment; filename={obj.file.name}'
        return response
The open() function expects a filesystem path as its first positional argument, so that it can open a file located on your computer's filesystem; that is why you are getting the TypeError. I don't know exactly what smart_opener() does, but if it returns a file-like object, that object must be passed to FileResponse directly.
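For example, if smart_open returns a binary file-like object, passing it straight to FileResponse should work; a sketch reusing the names from the question (the bucket path and the smart_opener alias are carried over as-is):
@api_view(['GET'])
def download_attachment(request, pk):
    session = boto3.Session(aws_access_key_id=AWS_ACCESS_KEY_ID,
                            aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
    obj = Attachment.objects.get(id=pk)
    # smart_open returns a file-like object; hand it to FileResponse directly,
    # which will close it once the response has been sent
    attachment = smart_opener(f's3://______media/public/{str(obj)}', 'rb',
                              transport_params={'client': session.client('s3')})
    return FileResponse(attachment, as_attachment=True, filename=obj.file.name)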

How to get file path from UploadFile in FastAPI?

Basically, I'm trying to create an endpoint to upload files to Amazon S3.
async def upload_files(filepath: str, upload_file_list: List[UploadFile] = File(...)):
    for upload_file in upload_file_list:
        abs_file_path = "/manual/path/works" + upload_file.path
        # Replace above line to get absolute file path from UploadFile
        response = s3_client.upload_file(abs_file_path, bucket_name,
                                         os.path.join(dest_path, upload_file.filename))
Above is my code to upload multiple files to the S3 bucket.
s3_client.upload_file() accepts an absolute file path of the file to upload.
It is working when I manually put the full path.
This, however, didn't work:
response = s3_client.upload_file(upload_file.filename, bucket_name,
                                 os.path.join(dest_path, upload_file.filename))
Is there a way to get this absolute path in FastAPI? Or, any alternative with temp_path without copying or writing the file?
If not, then any alternative with boto3 to upload files to S3 using FastAPI?
UploadFile uses Python's SpooledTemporaryFile, which is a "file stored in memory", and "is destroyed as soon as it is closed". You can either read the file contents (i.e., using contents = file.file.read() or for async read/write have a look at this answer), and then upload these bytes to your server (if it permits), or copy the contents of the uploaded file into a NamedTemporaryFile, as explained here. Unlike SpooledTemporaryFile, a NamedTemporaryFile "is guaranteed to have a visible name in the file system" that "can be used to open the file". That name can be retrieved from the name attribute (i.e., temp.name). Example:
import os
from tempfile import NamedTemporaryFile
from fastapi import FastAPI, File, HTTPException, UploadFile

app = FastAPI()

@app.post("/upload")
def upload(file: UploadFile = File(...)):
    temp = NamedTemporaryFile(delete=False)
    try:
        try:
            contents = file.file.read()
            with temp as f:
                f.write(contents)
        except Exception:
            raise HTTPException(status_code=500, detail='Error on uploading the file')
        finally:
            file.file.close()
        # Here, upload the file to your S3 service using `temp.name`
        s3_client.upload_file(temp.name, 'local', 'myfile.txt')
    except Exception:
        raise HTTPException(status_code=500, detail='Something went wrong')
    finally:
        # temp.close()  # the `with` statement above takes care of closing the file
        os.remove(temp.name)  # delete the temp file
Update
Additionally, one can access the actual Python file using the .file attribute. As per the documentation:
file: A SpooledTemporaryFile (a file-like object). This is the actual
Python file that you can pass directly to other functions or libraries
that expect a "file-like" object.
Thus, you could also try using the upload_fileobj function and passing upload_file.file:
response = s3_client.upload_fileobj(upload_file.file, bucket_name, os.path.join(dest_path, upload_file.filename))
or, passing a file-like object using the ._file attribute of the SpooledTemporaryFile, which returns either an io.BytesIO or io.TextIOWrapper object (depending on whether binary or text mode was specified).
response = s3_client.upload_fileobj(upload_file.file._file, bucket_name, os.path.join(dest_path, upload_file.filename))
Update 2
You could even keep the bytes in an in-memory buffer (i.e., BytesIO), use it to upload the contents to the S3 bucket, and finally close it ("The buffer is discarded when the close() method is called."). Remember to call seek(0) method to reset the cursor back to the beginning of the file after you finish writing to the BytesIO stream.
contents = file.file.read()
temp_file = io.BytesIO()
temp_file.write(contents)
temp_file.seek(0)
s3_client.upload_fileobj(temp_file, bucket_name, os.path.join(dest_path, upload_file.filename))
temp_file.close()
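Tying this together, a minimal sketch of a full endpoint using the in-memory buffer (s3_client, bucket_name and dest_path are placeholders carried over from the question):
import os
import io
from fastapi import FastAPI, File, UploadFile

app = FastAPI()

@app.post("/upload")
def upload(file: UploadFile = File(...)):
    contents = file.file.read()
    temp_file = io.BytesIO()
    temp_file.write(contents)
    temp_file.seek(0)  # rewind before handing the buffer to boto3
    s3_client.upload_fileobj(temp_file, bucket_name,
                             os.path.join(dest_path, file.filename))
    temp_file.close()
    file.file.close()
    return {"filename": file.filename}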

How to read contents of zip file in memory on a file upload in python?

I have a zip file that I receive when the user uploads a file. The zip essentially contains a JSON file which I want to read and process without having to save the zip to disk first, then unzip it, and then read the content of the inner file.
Currently I only know the longer process, which is something like below:
import json
import zipfile

@csrf_exempt
def get_zip(request):
    if request.method == "POST":
        try:
            client_file = request.FILES['file']
            file_path = "/some/path/"
            # first dump the zip file to a directory
            with open(file_path + '%s' % client_file.name, 'wb+') as dest:
                for chunk in client_file.chunks():
                    dest.write(chunk)
            # unzip the zip file to the same directory
            with zipfile.ZipFile(file_path + client_file.name, 'r') as zip_ref:
                zip_ref.extractall(file_path)
            # at this point we get a json file from the zip, say `test.json`
            # read the json file content
            with open(file_path + "test.json", "r") as fo:
                json_content = json.load(fo)
            doSomething(json_content)
            return HttpResponse(0)
        except Exception as e:
            return HttpResponse(1)
As you can see, this involves 3 steps to finally get the content from the zip file into memory. What I want is to get the content of the zip file and load it directly into memory.
I did find some similar questions on Stack Overflow, like this one: https://stackoverflow.com/a/2463819. But I am not sure at what point I should invoke the operation mentioned in that post.
How can I achieve this?
Note: I am using Django on the backend.
There will always be one json file in the zip.
From what I understand, what @jason is trying to say here is to first open a ZipFile, just like you have done with zipfile.ZipFile(file_path + client_file.name, 'r') as zip_ref:.
class zipfile.ZipFile(file[, mode[, compression[, allowZip64]]])
Open a ZIP file, where file can be either a path to a file (a string) or a file-like object.
And then use BytesIO to read in the bytes of a file-like object. But above you are reading in r mode and not rb mode, so change it as follows:
with open(filename, 'rb') as file_data:
    bytes_content = file_data.read()
file_like_object = io.BytesIO(bytes_content)
zipfile_ob = zipfile.ZipFile(file_like_object)
Now zipfile_ob can be accessed from memory.
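From there you can load the JSON member straight from memory; a short sketch assuming the archive contains a single JSON file (its name is taken from namelist()):
import json

json_name = zipfile_ob.namelist()[0]   # the single JSON file inside the archive
with zipfile_ob.open(json_name) as member:
    json_content = json.load(member)
doSomething(json_content)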
The first argument to zipfile.ZipFile() can be a file object rather than a pathname. I think the Django UploadedFile object supports this use, so you can read directly from that rather than having to copy into a file.
You can also open the inner file directly from the zip archive rather than extracting it to disk.
import json
import zipfile

@csrf_exempt
def get_zip(request):
    if request.method == "POST":
        try:
            client_file = request.FILES['file']
            # open the uploaded file as a zip archive directly
            with zipfile.ZipFile(client_file, 'r') as zip_ref:
                first = zip_ref.infolist()[0]
                with zip_ref.open(first, "r") as fo:
                    json_content = json.load(fo)
            doSomething(json_content)
            return HttpResponse(0)
        except Exception as e:
            return HttpResponse(1)

In Flask, is it possible to make a zip file on the fly and send it to the user? [duplicate]

