In a Django project of mine, users upload video files. Initially, I was uploading them directly to Azure Blob Storage (the equivalent of storing them on Amazon S3). That is, in models.py I had:
class Video(models.Model):
    video_file = models.FileField(upload_to=upload_path, storage=OverwriteStorage())
where OverwriteStorage overrides Storage from django.core.files.storage and essentially uploads the file to Azure.
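For reference, a simplified sketch of such an overwrite storage (here assuming the django-storages Azure backend; the actual class may differ):

from storages.backends.azure_storage import AzureStorage  # assumes django-storages is installed

class OverwriteStorage(AzureStorage):
    # Reuse the existing name so a blob with the same name gets overwritten
    def get_available_name(self, name, max_length=None):
        return name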
Now I need to upload this file to a separate Linux server (not the one that serves my Django web application). On this separate server I'll perform some operations on the video file (compression, format change), and then I'll upload it to Azure Storage as before.
My question is: given my goal, how do I change the way I'm uploading the file in models.py? An illustrative example would be nice. I'm thinking I'll need to change FileField.upload_to, but all the examples I've seen suggest it's only used to define a local filesystem path. Moreover, I don't want to let the user upload the content normally and then run a separate process to push the file to the other server; I'd prefer to upload it there directly. Any ideas?
I've solved a similar issue with Amazon's S3, but the concept should be the same.
First, I use django-storages and, by default, upload my media files to S3 (django-storages also supports Azure). My team then set up an NFS share on our Django web servers, mounted from the destination server we occasionally need to write user uploads to. We then simply override django-storages by pointing upload_to (and the field's storage) at the local path that is mounted from the other server.
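A minimal sketch of what that looks like in models.py (the mount point and upload path here are placeholders):

from django.core.files.storage import FileSystemStorage
from django.db import models

# /mnt/processing is assumed to be the NFS mount exported by the other server
nfs_storage = FileSystemStorage(location="/mnt/processing")

class Video(models.Model):
    # Files written here land on the mounted share instead of the default cloud backend
    video_file = models.FileField(upload_to="uploads/", storage=nfs_storage)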
This answer has a quick example of how to set up an NFS share from one server on another: https://superuser.com/questions/300662/how-to-mount-a-folder-from-a-linux-machine-on-another-linux-machine
There are a few ways to skin the cat, but this one seemed easiest to our team. Good luck!
Related
We have deployed a Django server (nginx/gunicorn/Django), but to scale it there are multiple instances of the same Django application running.
Here is the diagram (architecture):
Each blue rectangle is a Virtual Machine.
HAProxy sends all requests for example.com/admin to Server 3; other requests are load-balanced between Server 1 and Server 2.
Old Problem:
Each machine has a media folder, and when the admin uploads something, the uploaded media ends up only on Server 3 (normal users can't upload anything).
We solved this by routing all requests for example.com/media/* to Server 3, where nginx serves all static files and media.
Problem right now
We are also using sorl-thumbnail.
When a request comes in for example.com/, sorl-thumbnail tries to access the media file, but it doesn't exist on that machine because it's on Server 3.
So all requests handled by that machine (Server 1 or 2) get a 404 for that media file.
One solution that comes to mind is to create a shared partition between all 3 machines and use it as the media folder.
Another solution is to sync all media folders after each upload, but the problem with that is we handle almost 2000 requests per second; sometimes the sync might not be fast enough, sorl-thumbnail creates a database record for an empty file, and the 404 happens anyway.
Thanks in advance and sorry for long question.
You should use an object store to save and serve your user uploaded files. django-storages makes the implementation really simple.
If you don't want to use cloud-based AWS S3 or an equivalent, you can host your own on-prem S3-compatible object store with MinIO.
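For example, a settings.py sketch pointing django-storages' S3 backend at a self-hosted MinIO endpoint (endpoint, bucket, and credentials are placeholders):

# settings.py
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
AWS_ACCESS_KEY_ID = "minio-access-key"          # placeholder
AWS_SECRET_ACCESS_KEY = "minio-secret-key"      # placeholder
AWS_STORAGE_BUCKET_NAME = "media"
AWS_S3_ENDPOINT_URL = "https://minio.example.com:9000"  # your MinIO endpoint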
With your current setup I don't see any easy fix, especially if the number of VMs is dynamic depending on load.
If you have deployment automation, then maybe try out rsync so that each VM takes care of syncing files with the other VMs.
Question: What was the problem?
We got 404s on the other machines because normal requests (requests rendering a template) would get a 404 Not Found for the thumbnail media.
The real problem was with the sorl-thumbnail template tags.
Here is what we ended up doing:
In the models that needed a thumbnail, we added methods to create each specific thumbnail.
A post-save signal on the admin machine calls all those methods, so all the thumbnails are created after save and sorl-thumbnail's key-value table is filled.
Now, in templates, instead of calling sorl-thumbnail template tags, we call a method on the model.
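Roughly, the signal side looks like this (the model, field name, and thumbnail geometry are illustrative):

from django.db.models.signals import post_save
from django.dispatch import receiver
from sorl.thumbnail import get_thumbnail

from myapp.models import Photo  # hypothetical model with an ImageField named "image"

@receiver(post_save, sender=Photo)
def build_thumbnails(sender, instance, **kwargs):
    if instance.image:
        # Pre-generates the thumbnail file and fills sorl-thumbnail's key-value store
        get_thumbnail(instance.image, "300x200", crop="center", quality=90)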
I have been researching how to let external users upload data to my Dash app, and it seems the only way is the dcc.Upload component (a drag-and-drop component on the UI side: https://dash.plotly.com/dash-core-components/upload) … To clarify, it is this uploaded file that will be read into pandas and fed into the callbacks for analysis and visualisation.
I also read about the Heroku simple-file-upload config (https://devcenter.heroku.com/articles/simple-file-upload) and an AWS S3 bucket (https://devcenter.heroku.com/articles/s3) as the necessary way to store static data uploaded to the app. The Dash dcc.Upload docs say nothing about where the uploaded file is stored, i.e. the web-server part and the UI are not linked together in any documentation I could find.
Can anyone explain to a total web-dev newbie: once the app is deployed to Heroku, does dcc.Upload require setting up the Heroku simple-file-upload config or an S3 storage bucket? If not, how does it handle storage of the file? Is there any other way for a user to upload data to be used in the web app?
PS: I am not even sure whether the data file the user uploads counts as a static file or a dynamic one, as it will obviously be processed within the code for the analysis to happen (i.e. group, sort, filter, etc.).
The Upload component from dash-core-components holds the data in the browser itself (as a base64-encoded string), i.e. you don't need any storage on Heroku.
Another option would be dash-uploader, which can handle larger files. However, this component holds the data on the server.
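Going back to dcc.Upload: a minimal sketch of reading an uploaded CSV into pandas inside a callback (the component ids and layout here are illustrative):

import base64
import io

import pandas as pd
from dash import Dash, Input, Output, State, dcc, html

app = Dash(__name__)
app.layout = html.Div([
    dcc.Upload(id="upload-data", children=html.Div("Drag and drop or click to select a CSV")),
    html.Div(id="output"),
])

@app.callback(Output("output", "children"),
              Input("upload-data", "contents"),
              State("upload-data", "filename"))
def parse_upload(contents, filename):
    if contents is None:
        return "No file uploaded yet."
    # contents arrives as "data:text/csv;base64,<encoded bytes>" held in the browser
    _, b64data = contents.split(",", 1)
    df = pd.read_csv(io.StringIO(base64.b64decode(b64data).decode("utf-8")))
    return f"Loaded {filename} with {len(df)} rows."

if __name__ == "__main__":
    app.run(debug=True)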
So I am working on a Flask application which is pretty much a property manager that involves allowing users to upload images of their properties. I am new to Flask and have never had to deal with images before. From a lot of Googling I understand that there are various ways to manage static files like images.
One way is to allow users to upload images directly to the file system and then display them by referencing the file location in the static folder with something like:
<img src="static/images/filename.jpg">
However, is this really efficient, since it means generating and storing each image's location (URL) in the database? Especially when it comes to deploying the application? Another way I discovered was base64-encoding the image and storing it directly in the database, which doesn't sound very efficient either.
Another way, which I think might be the best way to go about this, is to use an AWS S3 bucket. The user would then be able to upload an image directly to that bucket and be assigned a URL for that image. This URL is stored in the database and can then be used to display the image, similarly to the file-system method. Is my understanding of this correct? Is there a better way to go about it? And is there something similar to django-storages that can be used to connect Flask to S3?
Any input or pointing me in the right direction would be much appreciated. Thank you!
If you want to store the images on the web server, then the best approach for you is to use nginx as a proxy in front of Flask and let nginx serve the static folder for all the images.
Nginx is pretty much enough for a small website. Don't try to serve the files with Flask; it is too slow.
If you want to store the images in S3, then you just need to store each image's key in the bucket in the database. You can tell Flask to use the S3 bucket as the static folder. You can use the boto3 library in Python to access S3:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html
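For example, a rough boto3 sketch for uploading an image and building its URL (the bucket and key names are placeholders):

import boto3

s3 = boto3.client("s3")
bucket = "my-image-bucket"   # placeholder bucket name
key = "images/filename.jpg"  # placeholder object key

# Upload the local file, then store only the key (or the full URL) in your database
s3.upload_file("static/images/filename.jpg", bucket, key)
url = f"https://{bucket}.s3.amazonaws.com/{key}"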
If you are concerned about exposing the S3 bucket to users, you can put a CloudFront distribution in front of it. It is cheaper to serve from and also hides your bucket.
The app I am currently hosting on Heroku allows users to submit photos. Initially, I was thinking about storing those photos on the filesystem, as storing them in the database is apparently bad practice.
However, it seems there is no permanent filesystem on Heroku, only an ephemeral one. Is this true and, if so, what are my options with regards to storing photos and other files?
It is true. Heroku allows you to create cloud apps, but those cloud apps are not "permanent" - they are instances (or "slugs") that can be replicated multiple times on Amazon's EC2 (that's why scaling is so easy with Heroku). If you push a new version of your app, the slug is recompiled, and any files you had saved to the filesystem in the previous instance are lost.
Your best bet (whether on Heroku or otherwise) is to save user submitted photos to a CDN. Since you are on Heroku, and Heroku uses AWS, I'd recommend Amazon S3, with optionally enabling CloudFront.
This is beneficial not only because it gets around Heroku's ephemeral "limitation", but also because a CDN is much faster, and will provide a better service for your webapp and experience for your users.
Depending on the technology you're using, your best bet is likely to stream the uploads to S3 (Amazon's storage service). You can interact with S3 through a client library to make it simple to post and retrieve the files. Boto is an example client library for Python; they exist for all popular languages.
Another thing to keep in mind is that Heroku filesystems are not shared either. This means you'll have to put the file to S3 from the same application that handles the upload (instead of, say, a worker process). If you can, load the upload into memory, never write it to disk, and post it directly to S3. This will increase the speed of your uploads.
Because Heroku is hosted on AWS, the streams to S3 happen at a very high speed. Keep that in mind when you're developing locally.
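To illustrate the in-memory approach, a hedged sketch using boto3's upload_fileobj (shown as a Flask view purely for illustration; the bucket name is a placeholder):

import boto3
from flask import Flask, request

app = Flask(__name__)
s3 = boto3.client("s3")

@app.route("/upload", methods=["POST"])
def upload():
    f = request.files["photo"]  # a file-like object; never written to the dyno's disk
    s3.upload_fileobj(f, "my-photo-bucket", f.filename)
    return "uploaded", 201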
I would like a user, without having to have an Amazon account, to be able to upload multi-gigabyte files to an S3 bucket of mine.
How can I go about this? I want to enable a user to do this by giving them a key, or perhaps through an upload form, rather than making the bucket world-writeable, obviously.
I'd prefer to use Python on my server side, but the idea is that a user would need nothing more than their web browser, or perhaps their terminal and built-in executables.
Any thoughts?
You are attempting to proxy the file through your Python backend to S3, and large files at that. Instead, you can configure S3 to accept files from the user directly (without proxying through your backend code).
This is explained here: Browser Uploads to S3 using HTML POST Forms. This way your server doesn't have to handle any of the upload load at all.
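The server-side piece is just generating a presigned POST policy whose url and fields get embedded in the HTML form the browser submits straight to S3. A rough boto3 sketch (bucket name, key, and size cap are placeholders):

import boto3

s3 = boto3.client("s3")

# The browser POSTs the file directly to S3 using these url/fields; your server only signs the policy
post = s3.generate_presigned_post(
    Bucket="my-upload-bucket",
    Key="user-uploads/bigfile.dat",  # placeholder object key
    Conditions=[["content-length-range", 0, 5 * 1024 ** 3]],  # allow up to ~5 GB
    ExpiresIn=3600,  # policy valid for one hour
)
print(post["url"], post["fields"])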
If you also want your users to use an identity from elsewhere (Google/FB etc.) for this workflow, that too is possible. They will be able to upload these files to a sub-folder (path) in your bucket without exposing other parts of your bucket. This is detailed here: Web Identity Federation with Mobile Applications. Though it says mobile, you can apply the same to web apps.
Having said all that, as #Ratan points out, large file uploads could break partway through when attempted from a browser, and the browser can't retry only the failed parts. This is where the need for a dedicated app comes in. Another option is to ask your users to keep the files in their Dropbox/Box.com account and have your server read from there; these services already take care of large file uploads, with all the retries, through their apps.
This answer is relevant to .NET as the language.
We had such a requirement, for which we created an executable. The executable internally called a web method, which validated whether the app was authenticated to upload files to AWS S3 or not.
You can do this from a web browser too, but I would not suggest it if you are targeting big files.