In my Django project I use django-storages to save media files to Amazon S3.
I followed this tutorial (I also use Django REST framework). This works well for me: I can upload images and I can see them in my S3 storage.
But if I delete an instance of my model (which contains an ImageField), the corresponding file in S3 is not removed. Is this correct? I need to remove the resource in S3 as well.
Deleting a record will not automatically delete the file in the S3 bucket. In order to delete the S3 resource, you need to call the following method on your file field:
model.filefield.delete(save=False) # delete file in S3 storage
You can perform this either in
The delete method of your model
A pre_delete signal
Here is an example of how you can achieve this in the delete model method:
def delete(self, *args, **kwargs):
    self.filefield.delete(save=False)  # delete the file in S3 first
    super().delete(*args, **kwargs)
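For the pre_delete signal option, here is a minimal sketch. MyModel and the field name filefield are placeholders for your own names, and the Django wiring is shown in comments:

```python
# Receiver for the pre_delete option: remove the S3 file just before the row goes.
def delete_file_on_instance_delete(sender, instance, **kwargs):
    if instance.filefield:
        # save=False: the row is being deleted anyway, no need to write it back
        instance.filefield.delete(save=False)

# Wiring it up in a Django app would look like:
# from django.db.models.signals import pre_delete
# pre_delete.connect(delete_file_on_instance_delete, sender=MyModel)
```

The signal route has the advantage of also firing on bulk queryset deletes that bypass the model's delete() method in older Django versions.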
You can delete an S3 file by its id (the filename in the S3 storage) using the following code:
import boto
from boto.s3.key import Key
from django.conf import settings

def s3_delete(id):
    s3conn = boto.connect_s3(settings.AWS_ACCESS_KEY,
                             settings.AWS_SECRET_ACCESS_KEY)
    bucket = s3conn.get_bucket(settings.S3_BUCKET)
    k = Key(bucket)
    k.key = str(id)
    k.delete()
Make sure that you set up the S3 variables correctly in settings.py, including AWS_ACCESS_KEY, AWS_SECRET_ACCESS_KEY, and S3_BUCKET.
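For reference, the corresponding settings.py entries might look like this (all values are placeholders; the names just need to match the s3_delete() helper above):

```python
# settings.py -- names match the s3_delete() helper above
AWS_ACCESS_KEY = "your-access-key-id"       # placeholder
AWS_SECRET_ACCESS_KEY = "your-secret-key"   # placeholder
S3_BUCKET = "your-bucket-name"              # placeholder
```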
This works for me with AWS S3; hope it helps.
import os

from django.db import models
from django.dispatch import receiver


@receiver(models.signals.post_delete, sender=YourModelName)
def auto_delete_file_on_delete(sender, instance, **kwargs):
    if instance.image:
        instance.image.delete(save=False)  # use for AWS S3
        # if os.path.isfile(instance.image.path):  # use this in development
        #     os.remove(instance.image.path)
I ended up adding an action to the Django admin panel, since in my case I don't remove files frequently.
If you want to delete files through the API, you can write your own destroy() in your serializer.
import os

import boto3
from django.contrib import admin

BUCKET_NAME = os.environ.get("AWS_STORAGE_BUCKET_NAME")
s3 = boto3.client('s3')


class UserFileAdmin(admin.ModelAdmin):
    list_display = ('file',)
    actions = ['delete_completely']

    def delete_completely(self, request, queryset):
        for filemodel in queryset:
            s3.delete_object(Bucket=BUCKET_NAME, Key=str(filemodel.file))
            filemodel.delete()
    delete_completely.short_description = 'Delete pointer and real file together'
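For the serializer/API route mentioned above, the core of a destroy() override could follow the same two steps as the admin action. This is a sketch; the instance shape, the bucket name, and the client wiring are assumptions, not code from the thread:

```python
# Delete the S3 object first, then the database row.
def destroy_instance(instance, s3_client, bucket_name):
    # str(instance.file) yields the stored key, as in the admin action above
    s3_client.delete_object(Bucket=bucket_name, Key=str(instance.file))
    instance.delete()

# In a DRF ModelViewSet this logic would typically live in
# perform_destroy(self, instance), with s3_client = boto3.client("s3").
```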
I am trying to load a file into an S3 bucket through a Streamlit web UI. The next step is to read or delete files present in the S3 bucket through the Streamlit app in an interactive way. I have written code to upload a file in the Streamlit app, but it fails to load it into the S3 bucket.
We have a pretty detailed doc on this. You can use s3fs to make the connection, e.g.:
# streamlit_app.py
import streamlit as st
import s3fs

# Create connection object.
# `anon=False` means not anonymous, i.e. it uses access keys to pull data.
fs = s3fs.S3FileSystem(anon=False)

# Retrieve file contents.
# Uses st.experimental_memo to only rerun when the query changes or after 10 min.
@st.experimental_memo(ttl=600)
def read_file(filename):
    with fs.open(filename) as f:
        return f.read().decode("utf-8")

content = read_file("testbucket-jrieke/myfile.csv")

# Print results.
for line in content.strip().split("\n"):
    name, pet = line.split(",")
    st.write(f"{name} has a :{pet}:")
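The snippet above only covers reading; the upload half that the question asks about mirrors it through the same s3fs open() interface. A minimal sketch (the function name and the bucket path are placeholders I introduce here, not code from the docs):

```python
def upload_to_bucket(fs, bucket_path, data):
    # fs is an s3fs.S3FileSystem (or anything exposing the same open());
    # mode "wb" writes the bytes straight to the S3 object
    with fs.open(bucket_path, "wb") as f:
        f.write(data)

# In the Streamlit app this would pair with the file uploader widget:
# uploaded = st.file_uploader("Choose a file")
# if uploaded is not None:
#     upload_to_bucket(fs, "testbucket-jrieke/" + uploaded.name, uploaded.getvalue())
```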
In my Flask application, I am using a function to upload files to Amazon S3 with boto.
It works fine in most cases, but sometimes it uploads files as zero-byte files with no extension.
Why does it fail sometimes?
I am validating the user's image file in the form:
FileField('Your photo', validators=[FileAllowed(['jpg', 'png'], 'Images only!')])
My image upload function:
def upload_image_to_s3(image_from_form):
    # upload pic to amazon
    source_file_name_photo = secure_filename(image_from_form.filename)
    source_extension = os.path.splitext(source_file_name_photo)[1]
    destination_file_name_photo = uuid4().hex + source_extension
    s3_file_name = destination_file_name_photo

    # Connect to S3 and upload file.
    conn = boto.connect_s3('ASJHjgjkhSDJJHKJKLSDH', 'GKLJHASDJGFAKSJDGJHASDKJKJHbbvhjcKJHSD')
    b = conn.get_bucket('mybucket')
    sml = b.new_key("/".join(["myfolder", destination_file_name_photo]))
    sml.set_contents_from_string(image_from_form.read())
    acl = 'public-read'
    sml.set_acl(acl)
    return s3_file_name
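The naming step in that function (random hex base name, original extension kept) can be exercised on its own. This sketch isolates the uuid4().hex + source_extension logic above; the helper name is mine:

```python
import os
from uuid import uuid4

def randomized_name(original_filename):
    # keep the extension, replace the base name with a 32-char hex id
    extension = os.path.splitext(original_filename)[1]
    return uuid4().hex + extension

name = randomized_name("portrait.png")
print(name)  # e.g. a 32-hex-character name ending in .png
```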
How large are your assets? If the upload is too large, you may have to multipart/chunk it, otherwise it will time out.
bucketObject.initiate_multipart_upload('/local/object/as/file.ext')
This means you will not be using set_contents_from_string but rather storing and uploading in parts. You may have to use something to chunk the file, like FileChunkIO.
An example is here if this applies to you : http://www.bogotobogo.com/DevOps/AWS/aws_S3_uploading_large_file.php
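The chunking that FileChunkIO performs for a multipart upload amounts to reading fixed-size pieces until the file is exhausted. A standard-library sketch of that loop (the 5 MiB figure is S3's minimum part size for every part except the last):

```python
import io

PART_SIZE = 5 * 1024 * 1024  # S3 minimum part size (except the final part)

def iter_parts(fileobj, part_size=PART_SIZE):
    # Yield successive fixed-size chunks until the file is exhausted.
    while True:
        chunk = fileobj.read(part_size)
        if not chunk:
            break
        yield chunk

# Tiny demonstration with an in-memory file and a small part size:
data = io.BytesIO(b"x" * 2500)
parts = list(iter_parts(data, part_size=1000))
print([len(p) for p in parts])  # [1000, 1000, 500]
```

Each yielded chunk would then be passed to upload_part_from_file on the multipart upload object before calling complete_upload.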
Also, you may want to edit your post above and change your AWS keys.
I'm attempting to save an image to S3 using boto. It does save a file, but it doesn't appear to save it correctly. If I try to open the file in S3, it just shows a broken image icon. Here's the code I'm using:
# Get and verify the file
file = request.FILES['file']
try:
    img = Image.open(file)
except:
    return api.error(400)

# Determine a filename
filename = file.name

# Upload to AWS and register
s3 = boto.connect_s3(aws_access_key_id=settings.AWS_KEY_ID,
                     aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY)
bucket = s3.get_bucket(settings.AWS_BUCKET)
f = bucket.new_key(filename)
f.set_contents_from_file(file)
I've also tried replacing the last line with:
f.set_contents_from_string(file.read())
But that didn't work either. Is there something obvious that I'm missing here? I'm aware django-storages has a boto backend, but because of complexity with this model, I do not want to use forms with django-storages.
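One likely cause (an inference, not something confirmed in the thread): Image.open(file) reads the stream to parse the image, leaving the file position at the end, so the later upload sends zero bytes. The effect is easy to reproduce with an in-memory file:

```python
import io

f = io.BytesIO(b"fake image bytes")
f.read()            # something (like PIL parsing the image) consumes the stream
stale = f.read()    # position is now at EOF, so this returns nothing
f.seek(0)           # rewind to the start
fresh = f.read()    # full contents again
print(stale, fresh)  # b'' b'fake image bytes'
```

If this is the issue, calling file.seek(0) after Image.open(file) and before set_contents_from_file(file) (or before file.read()) should restore the full upload.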
In case you don't want to go with django-storages and just want to upload a few files to S3 rather than all files, below is the code:
import boto3

file = request.FILES['upload']
filename = file.name
s3 = boto3.resource('s3',
                    aws_access_key_id=settings.AWS_ACCESS_KEY,
                    aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY)
bucket = s3.Bucket('bucket-name')
bucket.put_object(Key=filename, Body=file)
You should use django-storages, which uses boto internally.
You can either swap the default FileSystemStorage, or create a new storage instance and manually save files. Based on your code example, I guess you really want to go with the first option.
Please consider using Django's Form instead of directly accessing the request.
I want to store files and images that I get from an API in the blobstore (or rather, so that they are accessible from the blobstore API). Since the Files API is deprecated, how do I do this?
One way is to store images in Cloud Storage (gcs) and access them via the blobstore API. Basically you call gcs.open() and write the file. Then when you need to use the blobstore API you call blobkey = blobstore.create_gs_key(). With that you can do things such as use the images API with calls like images.get_serving_url(blobkey, secure_url=False).
How you do that depends on what your particular goals are. I am using it to serve images in a gallery that I upload. To do that, I have a file upload in an HTML form on the front end, which sends the file. On the back end I am doing this (these are just the broad strokes):
# inside the webapp2.RequestHandler get method:
import mimetypes

file_data = self.request.get("photoUpload", default_value=None)
filename = self.request.POST["photoUpload"].filename
folder = "someFolderName"
content_type = mimetypes.guess_type(filename)[0]
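The mimetypes lookup in that snippet is plain standard library and can be checked on its own (the filename here is a placeholder):

```python
import mimetypes

# guess_type() maps a filename to a (type, encoding) pair based on its extension
content_type = mimetypes.guess_type("someFolderName/photo.jpg")[0]
print(content_type)  # image/jpeg
```

Note that it returns None for unknown extensions, which is why the GCS call below falls back to binary/octet-stream.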
Then save the file data to GCS:
from google.appengine.api import app_identity
import cloudstorage as gcs

# gcs_filename must be unique so I'm using bucket/folder/file
# it would be smart to check uniqueness before proceeding
gcs_filename = '/%s/%s/%s' % (bucket or app_identity.get_default_gcs_bucket_name(), folder, filename)
with gcs.open(gcs_filename, 'w',
              content_type=content_type or b'binary/octet-stream',
              options={b'x-goog-acl': b'public-read'}) as f:
    f.write(file_data)
Now I can access using the GCS api with calls like:
gcs.delete(gcs_filename)
Or use the Blobstore API by getting the previously mentioned blobkey:
blobkey = blobstore.create_gs_key('/gs' + gcs_filename)
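The key passed to create_gs_key is the GCS object path with a /gs prefix. The string handling alone looks like this (bucket, folder, and file names are placeholders):

```python
# gcs_filename as constructed earlier has the form /bucket/folder/file
gcs_filename = "/some-bucket/someFolderName/photo.jpg"  # placeholder values
blobstore_filename = "/gs" + gcs_filename
# blobkey = blobstore.create_gs_key(blobstore_filename)  # App Engine API call
print(blobstore_filename)  # /gs/some-bucket/someFolderName/photo.jpg
```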
I am following this help guide, which explains how to save a file to an S3 bucket after creating it. I cannot find, however, an explanation of how to save to an existing bucket. More specifically, I am unsure how to reference a preexisting bucket. I believe replacing create_bucket with get_bucket does the trick. This allows me to save the file, but because the documentation says that get_bucket "retrieves a bucket by name", I wanted to check here and make sure that it only retrieves the bucket's metadata and does not download all of the contents of the bucket to my computer. Am I doing this right, or is there a more Pythonic way?
import boto
from boto.s3.key import Key

s3 = boto.connect_s3()
bucket = s3.get_bucket('mybucket')
k = Key(bucket)
k.key = 'foobar'
k.set_contents_from_string('This is a test of S3')
Your code looks reasonable. The get_bucket method will either return a Bucket object or, if the specified bucket name does not exist, raise an S3ResponseError. It only makes a small request to verify that the bucket exists; it does not download any of the bucket's contents.
You could simplify your code a little:
import boto
s3 = boto.connect_s3()
bucket = s3.get_bucket('mybucket')
k = bucket.new_key('foobar')
k.set_contents_from_string('This is a test of S3')
But it accomplishes the same thing.