FastAPI file upload to S3 - Python

I am writing a FastAPI endpoint to upload a file to an S3 object store. Below is the code snippet. However, I get the exception "expected string or bytes-like object".
I don't want to save the file temporarily and then upload it.
I read in the FastAPI documentation that UploadFile has a file attribute, which is an "actual Python file (SpooledTemporaryFile) that you can pass directly to other functions or libraries that expect a "file-like" object". Since I thought boto3's upload_fileobj function expects a string or bytes-like argument, I converted the file-like object to a BytesIO object using the _file attribute of the SpooledTemporaryFile. But I still get the error "expected string or bytes-like object".
Any pointers will be appreciated.
@app.post("/uploadFile")
async def upload_file(
    fileobject: UploadFile = File(...),
    filename: str = Body(default=None),
    key: str = Body(default=None),
):
    extension = fileobject.filename.rsplit(".", 1)[1].lower()
    if key is None:
        raise HTTPException(status_code=400, detail="Key is missing")
    if filename is None:
        filename = fileobject.filename
    if fileobject.filename.endswith(tuple(ALLOWED_EXTENSIONS)):
        data = (
            fileobject.file._file
        )  # using the _file attribute to get the SpooledTemporaryFile's underlying io.BytesIO
        try:
            # Upload the file to Spaces
            upload_result = await client.upload_fileobj(
                data,
                BUCKET,
                f"{key}/{filename}",
                ExtraArgs={
                    "ACL": "public-read",
                    "ContentType": FILE_CONTENT_TYPES[extension],
                },
            )
            if upload_result:
                object_url = f"{OBJ_URL}/{key}/{filename}"
                doc = [{"file_url": object_url}]
            else:
                raise HTTPException(status_code=400, detail="Failed to upload to S3")
        except Exception as e:
            logging.error(f"Exception while uploading file - {e}")
            raise HTTPException(
                status_code=400, detail=f"Exception {e} while uploading the file"
            )
    else:
        raise HTTPException(
            status_code=400, detail=f"File of type {extension} is not allowed"
        )

I had a similar issue, but in my case the problem was the bucket name: mine was None/empty. The error message is rather misleading; the bucket must be a non-empty string.
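For what it's worth, upload_fileobj accepts any binary file-like object, so the SpooledTemporaryFile behind UploadFile.file can be passed as-is. A minimal synchronous sketch with plain boto3 (ensure_bucket is a hypothetical guard, added because an empty bucket name produces exactly this kind of confusing error):

```python
def ensure_bucket(bucket):
    # Fail fast with a clear message - boto3's own error for an
    # empty/None bucket name is cryptic.
    if not isinstance(bucket, str) or not bucket:
        raise ValueError("S3 bucket name must be a non-empty string")
    return bucket


def upload_to_s3(fileobj, bucket, key):
    # boto3 is imported lazily so ensure_bucket stays importable on its own
    import boto3

    client = boto3.client("s3")
    # upload_fileobj takes a binary file-like object directly; there is no
    # need to reach into the private _file attribute of SpooledTemporaryFile
    client.upload_fileobj(fileobj, ensure_bucket(bucket), key)
```

If you have already read from the upload (e.g. via await fileobject.read()), remember to rewind with fileobject.file.seek(0) before handing it to upload_fileobj.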

Related

How to pass a video uploaded via FastAPI to OpenCV VideoCapture?

I am trying to upload an mp4 video file using UploadFile in FastAPI.
However, the uploaded file is not readable by OpenCV (cv2).
This is my endpoint:
from fastapi import FastAPI, File, UploadFile
from fastapi.responses import PlainTextResponse

app = FastAPI()

@app.post("/video/test", response_class=PlainTextResponse)
async def detect_faces_in_video(video_file: UploadFile):
    contents = await video_file.read()
    print(type(video_file))  # <class 'starlette.datastructures.UploadFile'>
    print(type(contents))    # <class 'bytes'>
    return ""
and the two file formats (i.e., bytes and UploadFile) are not readable by OpenCV.
You are trying to pass either the file contents (bytes) or the UploadFile object; however, VideoCapture() accepts either a video filename, a capturing device, or an IP video stream.
UploadFile is basically a SpooledTemporaryFile (a file-like object) that operates similarly to a TemporaryFile. However, it does not have a visible name in the file system. Since you mentioned that you wouldn't be keeping the files on the server after processing them, you could copy the file contents to a NamedTemporaryFile, which "has a visible name in the file system, which can be used to open the file" (using the name attribute), as described here and here. As per the documentation:
Whether the name can be used to open the file a second time, while the
named temporary file is still open, varies across platforms (it can be
so used on Unix; it cannot on Windows). If delete is true (the
default), the file is deleted as soon as it is closed.
Hence, on Windows you need to set the delete argument to False when instantiating a NamedTemporaryFile, and once you are done with it, you can manually delete it, using the os.remove() or os.unlink() method.
Below are given two options on how to do that. Option 1 implements a solution using a def endpoint, while Option 2 uses an async def endpoint (utilising the aiofiles library). For the difference between def and async def, please have a look at this answer. If you are expecting users to upload rather large files in size that wouldn't fit into memory, have a look at this and this answer on how to read the uploaded video file in chunks instead.
Option 1 - Using def endpoint
from fastapi import FastAPI, File, UploadFile
from tempfile import NamedTemporaryFile
import os

app = FastAPI()

@app.post("/video/detect-faces")
def detect_faces(file: UploadFile = File(...)):
    temp = NamedTemporaryFile(delete=False)
    try:
        try:
            contents = file.file.read()
            with temp as f:
                f.write(contents)
        except Exception:
            return {"message": "There was an error uploading the file"}
        finally:
            file.file.close()
        res = process_video(temp.name)  # pass temp.name to VideoCapture()
    except Exception:
        return {"message": "There was an error processing the file"}
    finally:
        # temp.close() is unnecessary - the `with` statement above closes the file
        os.remove(temp.name)
    return res
Option 2 - Using async def endpoint
from fastapi import FastAPI, File, UploadFile
from fastapi.concurrency import run_in_threadpool
import aiofiles
import os

app = FastAPI()

@app.post("/video/detect-faces")
async def detect_faces(file: UploadFile = File(...)):
    try:
        async with aiofiles.tempfile.NamedTemporaryFile("wb", delete=False) as temp:
            try:
                contents = await file.read()
                await temp.write(contents)
            except Exception:
                return {"message": "There was an error uploading the file"}
            finally:
                await file.close()
        res = await run_in_threadpool(process_video, temp.name)  # pass temp.name to VideoCapture()
    except Exception:
        return {"message": "There was an error processing the file"}
    finally:
        os.remove(temp.name)
    return res

Getting AccessDenied when trying to retrieve a file from S3

I am trying to retrieve a file from an S3 bucket (using boto3), but I keep getting "An error occurred (AccessDenied) when calling the GetObject operation: Access Denied". Below is how I created the S3 bucket:
# S3
bucket = s3.Bucket(
    self,
    "testS3Bucket",
    bucket_name="test_s3_bucket",
    versioned=True,
)
bucket.grant_read_write(service_lambda)
and the method I used to get the file from S3:
def download_file(self, file_name, s3_bucket):
    try:
        file = self.s3.Bucket(s3_bucket).Object(file_name)
        file_content = file.get()["Body"].read()
    except ClientError as e:
        if e.response["Error"]["Code"] == "404":
            log.error("File does not exist for partner")
            return {}
        else:
            raise e
    except Exception as e:
        raise e
    return file_content.decode()
I even went as far as adding the "s3:*" action to the IAM policy statement, but I still got the same error. I was able to use the command below to successfully retrieve the file's metadata, so I don't think it is a permissions error.
aws s3api head-object --bucket test_s3_bucket --key configuration.txt

Result: Failure Exception: TypeError: argument should be a bytes-like object or ASCII string, not 'dict'

I'm having a problem with this homework: I'm sending a POST request with a base64-encoded image as a JSON object in Postman. I'm supposed to decode the JSON body and save it as an image in Azure Blob Storage. I was successfully creating the blobs as .txt files, but I haven't had much luck this time. Any help will be much appreciated.
Result: Failure
Exception: TypeError: argument should be a bytes-like object or ASCII string, not 'dict'
If you want to upload an image file to Azure Blob Storage with an Azure Function, you can try using a form to send your image to the Azure Function. For example:
Add the Storage connection string in local.settings.json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "",
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "ConnectionString": "",
    "ContainerName": ""
  }
}
Code
import logging
import os
import azure.functions as func
from azure.storage.blob import BlobServiceClient, BlobClient

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    try:
        file = req.files.get('the key value in your form')
        logging.info(file.filename)
        connect_str = os.environ["ConnectionString"]
        container = os.environ["ContainerName"]
        blob_service_client = BlobServiceClient.from_connection_string(connect_str)
        blob_client = blob_service_client.get_blob_client(container=container, blob=file.filename)
        blob_client.upload_blob(file)
    except Exception as ex:
        logging.info(ex.args)
    return func.HttpResponse("ok")
Test in Postman
Update
According to my test, if we use base64.b64decode() to decode the string, we get a bytes object. So we need to use create_blob_from_bytes (from the older azure-storage SDK's BlockBlobService) to upload. For example:
My code
import base64
import logging
import azure.functions as func
from azure.storage.blob import BlockBlobService  # legacy azure-storage SDK

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    # get the base64 string from the JSON body
    file = req.get_json()
    image = file['image'].replace(' ', '+')
    # decode the base64 string into bytes
    data = base64.b64decode(image)
    logging.info(type(data))
    # upload
    block_blob_service = BlockBlobService(account_name='blobstorage0516', account_key='')
    container_name = 'test'
    blob_name = 'test.jpeg'
    block_blob_service.create_blob_from_bytes(container_name, blob_name, data)
    return func.HttpResponse("OK!")
This is how I send the POST request, and it needs to be handled in my function app. I'm able to create the blobs successfully, but I only get a broken image at the blob's URL.

Upload multiple pdf files through API Gateway Lambda AWS- Python

I am having an issue passing the file name while uploading to S3 via API Gateway and Lambda. How should I pass file names dynamically, so that users can upload multiple files and the original file names are preserved upon landing in S3? Here is my code:
import base64
import json
import logging
from io import BytesIO

import boto3

logger = logging.getLogger()

def lambda_handler(event, context):
    try:
        s3 = boto3.client("s3")
        buffer = BytesIO()
        get_file_content = event["body"]
        content = base64.b64decode(get_file_content)
        s3_upload = s3.put_object(Bucket=BUCKET_NAME, Key="test.pdf", Body=content)
        return {
            "statuscode": 200,
            "body": json.dumps("complete")
        }
    except Exception as e:
        logger.error("Error uploading files: {}".format(e))
        raise e
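One common pattern for making the key dynamic (an assumption, not part of the original code: the client sends the desired name as a filename query string parameter) is to pull both the name and the decoded body out of the API Gateway proxy event:

```python
import base64


def extract_upload(event):
    """Return (filename, decoded bytes) from an API Gateway proxy event.

    The `filename` query parameter and the `upload.pdf` fallback are
    illustrative choices - adapt them to your own API contract.
    """
    params = event.get("queryStringParameters") or {}
    filename = params.get("filename", "upload.pdf")
    content = base64.b64decode(event["body"])
    return filename, content
```

The handler would then call s3.put_object(Bucket=BUCKET_NAME, Key=filename, Body=content) instead of hard-coding "test.pdf".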

JIRA Python add_attachment() 405 Method Not Allowed

I am trying to upload a file to JIRA via its REST API using the Python library found here: jira python documentation
It seems pretty straightforward. I wrote a method that takes an issue and attaches a file to it, and one that lets me retrieve an issue from JIRA.
from jira.client import JIRA

class JIRAReport(object):
    def attach(self, issue):
        print('Attaching... ')
        attachment = self.jira.add_attachment(issue, attachment=self.reportpath, filename='Report.xlsx')
        print('Success!')

    def getissue(self):
        if self.issue is not None:
            return self.jira.issue(self.issue)
        return None
Then in my main script I get the issue and attach the file to it:
report = JIRAReport()
report.issue = 'ProjectKey-1'
report.reportpath = '../report_upload/tmp/' + filename
issue = report.getissue()
if issue is not None:
    report.attach(issue)
else:
    print("No Issue with Key Found")
I am able to get the issue/create issues if needed but when using the self.jira.add_attachment() method I am getting 405 Method Not Allowed.
The file exists and is able to be opened.
Here is the add_attachment() method from the source code:
def add_attachment(self, issue, attachment, filename=None):
    """
    Attach an attachment to an issue and returns a Resource for it.

    The client will *not* attempt to open or validate the attachment; it expects a file-like object to be ready
    for its use. The user is still responsible for tidying up (e.g., closing the file, killing the socket, etc.)

    :param issue: the issue to attach the attachment to
    :param attachment: file-like object to attach to the issue, also works if it is a string with the filename.
    :param filename: optional name for the attached file. If omitted, the file object's ``name`` attribute
        is used. If you aquired the file-like object by any other method than ``open()``, make sure
        that a name is specified in one way or the other.
    :rtype: an Attachment Resource
    """
    if isinstance(attachment, string_types):
        attachment = open(attachment, "rb")
    # TODO: Support attaching multiple files at once?
    url = self._get_url('issue/' + str(issue) + '/attachments')
    fname = filename
    if not fname:
        fname = os.path.basename(attachment.name)
    content_type = mimetypes.guess_type(fname)[0]
    if not content_type:
        content_type = 'application/octet-stream'
    files = {
        'file': (fname, attachment, content_type)
    }
    r = self._session.post(url, files=files, headers=self._options['headers'])
    raise_on_error(r)
    attachment = Attachment(self._options, self._session, json.loads(r.text)[0])
    return attachment
The documentation mentions that add_attachment expects a file-like object as its argument.
Try something like:
file_obj = open('test.txt','rb')
jira.add_attachment(issue,file_obj,'test.txt')
file_obj.close()
Check that the URL you are specifying for JIRA (if using the on-demand service) is https://instance.atlassian.net.
I just hit this as well: the library sends a POST request to http://instance.atlassian.net and gets redirected to https://instance.atlassian.net, but the client then sends a GET request to the redirected address (see https://softwareengineering.stackexchange.com/questions/99894/why-doesnt-http-have-post-redirect for more information).
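A defensive sketch of that fix: normalise the server URL to https before constructing the client, so the attachment POST is never redirected (force_https is a hypothetical helper, not part of the jira library):

```python
def force_https(server):
    """Rewrite an http:// JIRA base URL to https:// so POSTs aren't redirected."""
    if server.startswith("http://"):
        return "https://" + server[len("http://"):]
    return server

# usage sketch (credentials omitted; requires the jira package):
# from jira.client import JIRA
# jira = JIRA(options={"server": force_https("http://instance.atlassian.net")},
#             basic_auth=("user", "api-token"))
```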
