Can you temporarily copy a Google Cloud image file in Python?

For a project, I'm trying to fetch an uploaded image file stored in a bucket. I want Python to save a copy temporarily, just to perform a few tasks on the file (read it, decode it, and return the decoded data as JSON). After that's done, the temp file needs to be deleted.
I'm using Python 3.8, if that helps at all.
If you want some snippets of what I tried, I'm happy to provide :)
Edit:
So far, I've tried just downloading the file from the bucket, which works. But I can't figure out how to save it temporarily just to decode it (I have an API that will decode the image and extract data from it). This is the code for downloading:
import os
from google.cloud import storage

storage_client = storage.Client()

def download_file_from_bucket(blob_name, file_path, bucket_name):
    try:
        bucket = storage_client.get_bucket(bucket_name)
        blob = bucket.blob(blob_name)
        with open(file_path, 'wb') as f:
            storage_client.download_blob_to_file(blob, f)
    except Exception as e:
        print(e)
        return False

bucket_name = 'white-cards-with-qr'
download_file_from_bucket('My first Blob Image', os.path.join(os.getcwd(), 'file2.jpg'), bucket_name)

For objects stored in a cloud environment, you can sign the object to grant access to users who don't have an account with permission on it; see the Google Cloud documentation on signed URLs.
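A minimal sketch of what that looks like with google-cloud-storage, reusing the bucket and blob names from the question (note that signing requires credentials capable of signing, e.g. a service account key):

from datetime import timedelta
from google.cloud import storage

client = storage.Client()
blob = client.bucket('white-cards-with-qr').blob('My first Blob Image')

# Anyone holding this URL can GET the object for the next 15 minutes,
# without needing a Google account or IAM permissions on the bucket.
url = blob.generate_signed_url(expiration=timedelta(minutes=15), method='GET')
print(url)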

You can use the tempfile library. This is a really basic snippet. You can also name the file or read it after writing it.
import tempfile

temp = tempfile.TemporaryFile()
try:
    # write the blob's contents (bytes) into the temporary file
    temp.write(blob.download_as_bytes())
finally:
    temp.close()
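Tying that back to the original question: if the decoding API needs a real path on disk, a NamedTemporaryFile gives you one. A minimal sketch, assuming the bucket and blob names from the question and a hypothetical decode_image() helper standing in for your decoding API:

import os
import tempfile
from google.cloud import storage

client = storage.Client()
blob = client.bucket('white-cards-with-qr').blob('My first Blob Image')

# delete=False keeps the file on disk after the handle closes,
# so we can hand its path to the decoder and remove it ourselves
with tempfile.NamedTemporaryFile(suffix='.jpg', delete=False) as temp:
    client.download_blob_to_file(blob, temp)
    temp_path = temp.name

try:
    result = decode_image(temp_path)  # hypothetical stand-in for your decoding API
finally:
    os.remove(temp_path)  # the temp copy is always cleaned up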

Related

Open an image in Python and upload it to S3

I was trying to open a file/image in Python/Django and upload it to S3, but I get different errors depending on what I try. I can get it to work when I send the image using the front-end HTML form, but not when opening the file on the back end. I get errors such as "'bytes' object has no attribute 'file'". Any ideas how to open an image and upload it to S3? I wasn't sure if I was using the correct upload function, but it worked when I received the file from an HTML form instead of opening it directly.
image = open(fileURL, encoding="utf-8")

S3_BUCKET = settings.AWS_BUCKET
session = boto3.Session(
    aws_access_key_id=settings.AWS_ACCESS_KEY_ID,
    aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY,
)
s3 = session.resource('s3')
s3.Bucket(S3_BUCKET).put_object(Key='folder/%s' % fileName, Body=image)
Thanks.
The open call returns a file object, and here it opens the image in text mode with UTF-8 encoding, so the binary image data is not read back intact; Body=image therefore does not upload the actual contents of the object.
Since you want to upload an existing file, you could use:
Key = 'folder/' + fileName
s3.Object(S3_BUCKET, Key).upload_file(fileURL)
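Alternatively, if you want to keep put_object, opening the file in binary mode should also work; a small sketch based on the question's variables:

# Open the image in binary mode so the raw bytes are uploaded unchanged
with open(fileURL, 'rb') as image:
    s3.Bucket(S3_BUCKET).put_object(Key='folder/%s' % fileName, Body=image)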

How to get file path from UploadFile in FastAPI?

Basically, I'm trying to create an endpoint to Upload files to Amazon S3.
async def upload_files(filepath: str, upload_file_list: List[UploadFile] = File(...)):
    for upload_file in upload_file_list:
        abs_file_path = "/manual/path/works" + upload_file.path
        # Replace above line to get absolute file path from UploadFile
        response = s3_client.upload_file(abs_file_path, bucket_name,
                                         os.path.join(dest_path, upload_file.filename))
Above is my code to upload multiple files to the S3 bucket.
s3_client.upload_file() accepts an absolute file path of the file to upload.
It is working when I manually put the full path.
This, however, didn't work:
response = s3_client.upload_file(upload_file.filename, bucket_name,
                                 os.path.join(dest_path, upload_file.filename))
Is there a way to get this absolute path in FastAPI? Or, any alternative with temp_path without copying or writing the file?
If not, then any alternative with boto3 to upload files to S3 using FastAPI?
UploadFile uses Python's SpooledTemporaryFile, which is a "file stored in memory", and "is destroyed as soon as it is closed". You can either read the file contents (i.e., using contents = file.file.read() or for async read/write have a look at this answer), and then upload these bytes to your server (if it permits), or copy the contents of the uploaded file into a NamedTemporaryFile, as explained here. Unlike SpooledTemporaryFile, a NamedTemporaryFile "is guaranteed to have a visible name in the file system" that "can be used to open the file". That name can be retrieved from the name attribute (i.e., temp.name). Example:
import os
from tempfile import NamedTemporaryFile
from fastapi import FastAPI, File, HTTPException, UploadFile

app = FastAPI()
# s3_client is assumed to be defined elsewhere, e.g. boto3.client('s3')

@app.post("/upload")
def upload(file: UploadFile = File(...)):
    temp = NamedTemporaryFile(delete=False)
    try:
        try:
            contents = file.file.read()
            with temp as f:
                f.write(contents)
        except Exception:
            raise HTTPException(status_code=500, detail='Error on uploading the file')
        finally:
            file.file.close()
        # Here, upload the file to your S3 service using `temp.name`
        s3_client.upload_file(temp.name, 'local', 'myfile.txt')
    except Exception:
        raise HTTPException(status_code=500, detail='Something went wrong')
    finally:
        # temp.close()  # the `with` statement above takes care of closing the file
        os.remove(temp.name)  # delete the temp file
Update
Additionally, one can access the actual Python file using the .file attribute. As per the documentation:
file: A SpooledTemporaryFile (a file-like object). This is the actual
Python file that you can pass directly to other functions or libraries
that expect a "file-like" object.
Thus, you could also try using upload_fileobj function and passing upload_file.file:
response = s3_client.upload_fileobj(upload_file.file, bucket_name, os.path.join(dest_path, upload_file.filename))
or, passing a file-like object using the ._file attribute of the SpooledTemporaryFile, which returns either an io.BytesIO or io.TextIOWrapper object (depending on whether binary or text mode was specified).
response = s3_client.upload_fileobj(upload_file.file._file, bucket_name, os.path.join(dest_path, upload_file.filename))
Update 2
You could even keep the bytes in an in-memory buffer (i.e., BytesIO), use it to upload the contents to the S3 bucket, and finally close it ("The buffer is discarded when the close() method is called."). Remember to call the seek(0) method to reset the cursor back to the beginning of the buffer after you finish writing to the BytesIO stream.
import io

contents = file.file.read()
temp_file = io.BytesIO()
temp_file.write(contents)
temp_file.seek(0)  # rewind to the start before uploading
s3_client.upload_fileobj(temp_file, bucket_name, os.path.join(dest_path, upload_file.filename))
temp_file.close()

Image error, not loading for S3 image retrieval

I have written code on my backend (hosted on Elastic Beanstalk) to retrieve a file from an S3 bucket and save it back to the bucket under a different name. I am using boto3 and have created an s3 client called 's3'.
bucketname is the name of the bucket and keyname is the name of the key. I am also using the tempfile module.
tmp = tempfile.NamedTemporaryFile()
with open(tmp.name, 'wb') as f:
    s3.download_fileobj(bucketname, keyname, f)
s3.upload_file(tmp, bucketname, 'fake.jpg')
I was wondering if my understanding was off (still debugging why there is an error) - I created a tempfile and opened and saved within it the contents of the object with the keyname and bucketname. Then I uploaded that temp file to the bucket under a different name. Is my reasoning correct?
The upload_file() command is expecting a filename (as a string) in the first parameter, not a file object.
Instead, you should use upload_fileobj().
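For example, either of these would work with the question's code (a sketch reusing tmp, bucketname, and the s3 client):

# Fix 1: keep upload_file(), but pass the file's path as a string
s3.upload_file(tmp.name, bucketname, 'fake.jpg')

# Fix 2: or pass an open file object to upload_fileobj()
with open(tmp.name, 'rb') as f:
    s3.upload_fileobj(f, bucketname, 'fake.jpg')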
However, I would recommend something different...
If you simply wish to make a copy of an object, you can use copy_object:
response = client.copy_object(
    Bucket='destinationbucket',
    CopySource='/sourcebucket/HappyFace.jpg',
    Key='HappyFaceCopy.jpg',
)

Open and save an Excel file in S3 using Python

I have a problem with an Excel (xlsx) file. I just want to perform an open-and-save operation on it using Python code. I've tried the following with Python but couldn't get it to work:
cursor = context.cursor()
s3 = boto3.resource('s3')
bucket = s3.Bucket('bucket')
objects = bucket.objects.all()
for obj in objects:
    if obj.key.startswith('path/filename'):
        filename = obj.key
        openok = open(obj)
        readok = openok.readlines()
        readok.close()
        print('file opened and closed successfully')
You can't read/interact with files directly on s3 as far as I know.
I'd recommend downloading it locally, and then opening it. You can use the builtin tempfile module if you want to save it to a temporary path.
import os
import tempfile

with tempfile.TemporaryDirectory() as tmpdir:
    local_file_path = os.path.join(tmpdir, "tmpfile")
    bucket.download_file(obj.key, local_file_path)
    with open(local_file_path) as openok:
        readok = openok.readlines()
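And if the goal is literally to open and re-save the workbook, here's a sketch of what could go inside that with block while the temp file still exists (assuming the openpyxl package, which the question doesn't name, is installed):

    import openpyxl

    # open the downloaded workbook, save it back out, and re-upload it
    wb = openpyxl.load_workbook(local_file_path)
    wb.save(local_file_path)
    bucket.upload_file(local_file_path, obj.key)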

Save uploaded image to S3 with Django

I'm attempting to save an image to S3 using boto. It does save a file, but it doesn't appear to save it correctly. If I try to open the file in S3, it just shows a broken image icon. Here's the code I'm using:
# Get and verify the file
file = request.FILES['file']
try:
    img = Image.open(file)
except:
    return api.error(400)

# Determine a filename
filename = file.name

# Upload to AWS and register
s3 = boto.connect_s3(aws_access_key_id=settings.AWS_KEY_ID,
                     aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY)
bucket = s3.get_bucket(settings.AWS_BUCKET)
f = bucket.new_key(filename)
f.set_contents_from_file(file)
I've also tried replacing the last line with:
f.set_contents_from_string(file.read())
But that didn't work either. Is there something obvious that I'm missing here? I'm aware django-storages has a boto backend, but because of complexity with this model, I do not want to use forms with django-storages.
In case you don't want to go with django-storages and just want to upload a few files to S3 rather than all files, here is the code:
import boto3

file = request.FILES['upload']
filename = file.name
s3 = boto3.resource('s3', aws_access_key_id=settings.AWS_ACCESS_KEY,
                    aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY)
bucket = s3.Bucket('bucket-name')
bucket.put_object(Key=filename, Body=file)
You should use django-storages which uses boto internally.
You can either swap the default FileSystemStorage, or create a new storage instance and manually save files. Based on your code example I guess you really want to go with the first option.
Please consider using Django's Form instead of directly accessing the request.
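For reference, a minimal django-storages setup looks roughly like this in settings.py (setting names per the django-storages docs; the bucket name and key values are placeholders):

# settings.py -- minimal django-storages S3 configuration (sketch)
INSTALLED_APPS = [
    # ...
    'storages',
]

DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
AWS_ACCESS_KEY_ID = '...'        # or rely on the environment / an IAM role
AWS_SECRET_ACCESS_KEY = '...'
AWS_STORAGE_BUCKET_NAME = 'bucket-name'

With the default storage swapped out like this, any FileField save (or default_storage.save()) writes to the bucket instead of the local filesystem.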
