I'm trying to let a user download a file from my webpage. The file is in an S3 bucket I'm accessing using smart-open. The issue I'm having is how to combine that with FileResponse. At present I'm getting a TypeError: "expected str, bytes or os.PathLike object, not Reader". The file types will be .txt, .pdf and .doc/.docx.
My view:
@api_view(['GET'])
def download_attachment(request, pk):
    session = boto3.Session(aws_access_key_id=AWS_ACCESS_KEY_ID,
                            aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
    obj = Attachment.objects.get(id=pk)
    with smart_opener(f's3://______media/public/{str(obj)}', "rb",
                      transport_params={'client': session.client('s3')}) as attachment:
        response = FileResponse(open(attachment, 'rb'))
        response['Content-Disposition'] = f'attachment; filename={obj.file.name}'
        return response
The built-in open() function expects a filesystem path as its first positional argument, because it opens a file on your local filesystem; that is why you are getting the TypeError. I don't know exactly what smart_opener() does, but if it returns a file-like object, that object must be passed to FileResponse directly.
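For example, a minimal sketch of the view with that change, keeping the smart_opener helper and the placeholder bucket path from the question as-is:

@api_view(['GET'])
def download_attachment(request, pk):
    session = boto3.Session(aws_access_key_id=AWS_ACCESS_KEY_ID,
                            aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
    obj = Attachment.objects.get(id=pk)
    # smart_open returns a file-like Reader; hand it to FileResponse as-is.
    # No `with` block here on purpose: FileResponse streams the file after the
    # view returns and closes it itself, so closing it early would break the download.
    attachment = smart_opener(f's3://______media/public/{str(obj)}', "rb",
                              transport_params={'client': session.client('s3')})
    response = FileResponse(attachment)
    response['Content-Disposition'] = f'attachment; filename={obj.file.name}'
    return response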
I am trying to give the user a "Save as" option when they click the download button in my Django app. When the user clicks the button it kicks off the following function. The function gets some CSVs from a blob container in Azure and adds them to a zip. That zip should then be offered for download so the user can store it in a location of their choice.
def create_downloadable_zip():
    container_client = az.container_client(container_name=blob_generator.container_name)
    blobs = container_client.list_blobs()
    zip_file = zipfile.ZipFile(f'{models.AppRun.client_name}.zip', 'w')
    for blob in blobs:
        if blob.name.endswith(".csv"):
            downloaded_blob = container_client.download_blob(blob)
            blob_data = downloaded_blob.readall()
            zip_file.writestr(blob.name, blob_data)
    zip_file.close()
    return zip_file
My views.py looks as follows:
def download_file(request):
    if request.method == 'POST':
        zip = create_downloadable_zip()
        response = HttpResponse(zip, content_type='application/zip')
        response['Content-Disposition'] = 'attachement;' f'filename={zip}.zip'
        return response
    #
    # else:
    #     # return a 404 response if this is a POST request
    #     return HttpResponse(status=404)
    return render(request, "download_file.html")
The functionality works, but it returns an empty non-zip file when the "Save as" window pops up. Meanwhile, the actual zip file containing the CSVs is being saved in the root folder of the Django project.
I really don't get why it doesn't return the zip file from memory, but instead stores that zip file in the project root and serves an empty non-zip file through the download.
Does anyone know what I am doing wrong?
ZipFile opens (or creates) an archive, but the object it returns is not the file's contents; it is simply a ZipFile object, as @b-remmelzwaal mentioned. You will need to build the archive in a file-like object and return its contents instead. This can be done using io.BytesIO.
from io import BytesIO
from zipfile import ZipFile
def create_zip():
    container_client = az.container_client(container_name=blob_generator.container_name)
    blobs = container_client.list_blobs()
    buffer = BytesIO()
    with ZipFile(buffer, 'w') as zip_file:
        for blob in blobs:
            if blob.name.endswith(".csv"):
                downloaded_blob = container_client.download_blob(blob)
                blob_data = downloaded_blob.readall()
                zip_file.writestr(blob.name, blob_data)
    # the with block has already closed (finalised) the archive at this point
    return buffer.getvalue()
Note we are returning the buffer's contents (via getvalue()), not the ZipFile object. This is because buffer holds the actual archive you've created.
You don't have to use a context manager, but I find them very useful; here it also guarantees the archive is finalised before the bytes are read out of the buffer.
Also, check the spelling in this line:
# 'attachment' instead of 'attachement'
response['Content-Disposition'] = 'attachment;' f'filename={zip}.zip'
BytesIO Documentation
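Putting it together, the view can return the in-memory zip directly. A minimal sketch, reusing models.AppRun.client_name from the question for the filename:

def download_file(request):
    if request.method == 'POST':
        zip_bytes = create_zip()  # bytes of the finished archive
        response = HttpResponse(zip_bytes, content_type='application/zip')
        # filename reuses models.AppRun.client_name from the question; adjust as needed
        response['Content-Disposition'] = f'attachment; filename={models.AppRun.client_name}.zip'
        return response
    return render(request, "download_file.html")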
How to serialize a file object while using json.dumps?
I'm using pytest for testing file upload in Django and I have this function:
def test_file_upload(self):
    # file_content is a bytes object
    request = client.patch(
        "/fake-url/",
        json.dumps({"file": file_content}),
        content_type="application/json",
    )
I've tried to set file_content as a bytes object, but I'm getting this error: TypeError: Object of type bytes is not JSON serializable. I need to send the whole file to my endpoint JSON-serialized.
You can use the mock library for testing file upload:
from mock import MagicMock
from django.core.files import File

mock_image = MagicMock(file=File)
mock_image.name = "sample.png"
# ... other test operations ...
def test_file_upload(self):
    # mock_image (defined above) stands in for the uploaded file
    request = client.patch(
        "/fake-url/",
        {"file": mock_image},
        format="multipart",
    )
A more detailed answer: how to unit test file upload in django
Your API endpoint expects a multipart form containing the file. Below is the function I use to send a multipart form from a local file for testing. If you already have the file bytes, skip the open line and just pass file_content to ContentFile, as shown in the sketch after the function.
import os

from django.core.files.base import ContentFile
from django.test.client import BOUNDARY, MULTIPART_CONTENT, encode_multipart

def send_multipart_form(self, filename):
    with open(filename, "rb") as f:
        file_data = ContentFile(f.read(), os.path.basename(filename))
    res = self.client.put(
        self.url,
        data=encode_multipart(boundary=BOUNDARY, data={"file": file_data}),
        content_type=MULTIPART_CONTENT,
    )
    return res
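If you already have the bytes (file_content from the question), the same call without opening a file might look like this sketch; the method name and the "test.txt" filename are illustrative placeholders:

def send_multipart_bytes(self, file_content, filename="test.txt"):
    # wrap the raw bytes in a ContentFile so Django treats it as an uploaded file
    file_data = ContentFile(file_content, filename)
    res = self.client.patch(
        self.url,
        data=encode_multipart(boundary=BOUNDARY, data={"file": file_data}),
        content_type=MULTIPART_CONTENT,
    )
    return res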
I need help uploading a file directly from an HTML form to an API. I've seen this being done for remote URLs, but I don't know how to do it for local files. I tried writing this, but it's not working:
uploadmedia = request.files['fileupload']
client = Client('thisismykey')
with open(uploadmedia, 'rb') as file:
    new_upload = client.uploads('<space-id>').create(file)
The line client.uploads is what is specified in the API docs here. I just need to be able to get the file path.
The comments suggest the following:
# you can use either a file-like object or a path.
# If you use a path, the SDK will open it, create the upload and
# close the file afterwards.
I am assuming that request.files['fileupload'] is a file-like object, so I just passed that along.
The above code gives me the following error:
File "D:\Gatsby\submission\flask-tailwindcss-starter\app\__init__.py", line 28, in index
with open(uploadmedia, 'rb') as file:
TypeError: expected str, bytes or os.PathLike object, not FileStorage
I know that in this example, uploadmedia.filename would get me the file's name, but what is the attribute for the path? How do I do that?
request.files['fileupload'] is an instance of Werkzeug's FileStorage class (see its API docs), not a path, so you cannot use with open(uploadmedia, 'rb') as file:.
Try using the stream attribute instead:
uploadmedia = request.files['fileupload']
client = Client('thisismykey')
new_upload = client.uploads('<space-id>').create(uploadmedia.stream)
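If the SDK insists on a real path rather than a file-like object, a hedged alternative is to save the FileStorage to a temporary file first and pass the SDK that path. Client and client.uploads('<space-id>').create(...) are taken from the question's SDK; the rest is standard library/Werkzeug:

import os
import tempfile

uploadmedia = request.files['fileupload']
client = Client('thisismykey')

# FileStorage.save() writes the uploaded data to a destination on disk
fd, tmp_path = tempfile.mkstemp(suffix=os.path.splitext(uploadmedia.filename)[1])
os.close(fd)
uploadmedia.save(tmp_path)
try:
    new_upload = client.uploads('<space-id>').create(tmp_path)
finally:
    os.remove(tmp_path)  # clean up the temporary file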
Basically, I'm trying to create an endpoint to upload files to Amazon S3.
async def upload_files(filepath: str, upload_file_list: List[UploadFile] = File(...)):
    for upload_file in upload_file_list:
        abs_file_path = "/manual/path/works" + upload_file.path
        # Replace above line to get absolute file path from UploadFile
        response = s3_client.upload_file(abs_file_path, bucket_name,
                                         os.path.join(dest_path, upload_file.filename))
Above is my code to upload multiple files to the S3 bucket.
s3_client.upload_file() accepts an absolute file path of the file to upload.
It works when I manually put in the full path.
This, however, didn't work:
response = s3_client.upload_file(upload_file.filename, bucket_name,
                                 os.path.join(dest_path, upload_file.filename))
Is there a way to get this absolute path in FastAPI? Or, any alternative with temp_path without copying or writing the file?
If not, then any alternative with boto3 to upload files to S3 using FastAPI?
UploadFile uses Python's SpooledTemporaryFile, which is a "file stored in memory", and "is destroyed as soon as it is closed". You can either read the file contents (i.e., using contents = file.file.read() or for async read/write have a look at this answer), and then upload these bytes to your server (if it permits), or copy the contents of the uploaded file into a NamedTemporaryFile, as explained here. Unlike SpooledTemporaryFile, a NamedTemporaryFile "is guaranteed to have a visible name in the file system" that "can be used to open the file". That name can be retrieved from the name attribute (i.e., temp.name). Example:
import os
from tempfile import NamedTemporaryFile

from fastapi import File, HTTPException, UploadFile

@app.post("/upload")
def upload(file: UploadFile = File(...)):
    temp = NamedTemporaryFile(delete=False)
    try:
        try:
            contents = file.file.read()
            with temp as f:
                f.write(contents)
        except Exception:
            raise HTTPException(status_code=500, detail='Error on uploading the file')
        finally:
            file.file.close()

        # Here, upload the file to your S3 service using `temp.name`
        s3_client.upload_file(temp.name, 'local', 'myfile.txt')
    except Exception:
        raise HTTPException(status_code=500, detail='Something went wrong')
    finally:
        # temp.close()  # the `with` statement above takes care of closing the file
        os.remove(temp.name)  # Delete temp file
Update
Additionally, one can access the actual Python file using the .file attribute. As per the documentation:
file: A SpooledTemporaryFile (a file-like object). This is the actual
Python file that you can pass directly to other functions or libraries
that expect a "file-like" object.
Thus, you could also try using the upload_fileobj function and passing upload_file.file:
response = s3_client.upload_fileobj(upload_file.file, bucket_name, os.path.join(dest_path, upload_file.filename))
or, passing a file-like object using the ._file attribute of the SpooledTemporaryFile, which returns either an io.BytesIO or io.TextIOWrapper object (depending on whether binary or text mode was specified).
response = s3_client.upload_fileobj(upload_file.file._file, bucket_name, os.path.join(dest_path, upload_file.filename))
Update 2
You could even keep the bytes in an in-memory buffer (i.e., BytesIO), use it to upload the contents to the S3 bucket, and finally close it ("The buffer is discarded when the close() method is called."). Remember to call seek(0) method to reset the cursor back to the beginning of the file after you finish writing to the BytesIO stream.
contents = file.file.read()
temp_file = io.BytesIO()
temp_file.write(contents)
temp_file.seek(0)
s3_client.upload_fileobj(temp_file, bucket_name, os.path.join(dest_path, upload_file.filename))
temp_file.close()
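For completeness, here is a minimal sketch of the same in-memory approach inside an async endpoint; the route name is hypothetical, and bucket_name, dest_path, app and s3_client are assumed to be defined elsewhere, as in the question:

import io
import os

from fastapi import File, UploadFile

@app.post("/upload-bytes")  # hypothetical route name
async def upload_bytes(file: UploadFile = File(...)):
    contents = await file.read()   # async read of the uploaded file's bytes
    buffer = io.BytesIO(contents)  # in-memory buffer, positioned at the start
    s3_client.upload_fileobj(buffer, bucket_name, os.path.join(dest_path, file.filename))
    buffer.close()
    return {"filename": file.filename}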
I'm trying to upload a photo from my hard drive to the telegraph. In the documentation it says to use the upload_file():
telegraph.upload.upload_file(f)
Upload file to Telegra.ph’s servers. Returns a list of links.
Allowed only .jpg, .jpeg, .png, .gif and .mp4 files.
Parameters: f (file, str or list) – filename or file-like object.
But I don't understand what "f (file, str or list) – filename or file-like object" means. That is, what do I need to do with the photo so that it can be passed to this function?
My attempts to solve the problem:
upload_file(open('1.png', 'rb'))
error:
telegraph.exceptions.TelegraphException: File type invalid
myf = io.StringIO()
myf.write(open(f'photo/{i}.png', 'rb'))
print(upload_file(myf))
myf.close()
error:
TypeError: string argument expected, got '_io.BufferedReader'
I am loading images using this method
def upload_image_telegraph(path_image):
    with open(path_image, 'rb') as f:
        result_requests = requests.post(
            'http://telegra.ph/upload',
            files={'file': ('file', f, 'image/jpg')}  # image/gif, image/jpeg, image/jpg, image/png, video/mp4
        ).json()
    return result_requests
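Usage might look like the sketch below; the response shape (a list of objects with a src path) is what telegra.ph typically returns on success, but treat that as an assumption and inspect the actual JSON:

result = upload_image_telegraph('photo/1.png')
# Assumed success shape: [{'src': '/file/abcdef123456.jpg'}]; errors come back as a dict
if isinstance(result, list) and result and 'src' in result[0]:
    image_url = 'https://telegra.ph' + result[0]['src']
    print(image_url)
else:
    print('Upload failed:', result)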
You can also upload using the Python library html_telegraph_poster.
With it you can upload both local images and image URLs.
Example:
from html_telegraph_poster import upload_image
res = upload_image('local path image or link image')