I have a Flask app that lets users download MP3 files. How can I make it so the URL for the download is only valid for a certain time period?
For example, instead of letting anyone simply go to example.com/static/sound.mp3 and access the file, I want to validate each request to prevent unnecessary bandwidth usage.
I am using an Apache server, although I may consider switching to another if that makes this easier to implement. Also, I don't want Flask to serve the file itself, because that would add performance overhead; Apache should serve the file instead.
You could use S3 to host the file behind a temporary URL. Use Flask to upload the file to S3 (using boto3), but under a dynamically generated, unguessable key.
Example URL: http://yourbucket.s3.amazonaws.com/static/c258d53d-bfa4-453a-8af1-f069d278732c/sound.mp3
Then, when you tell the user where to download the file, give them that URL. You can then delete the S3 file at the end of the time period using a cron job.
This way, Amazon S3 is serving the file directly, resulting in a complete bypass of your Flask server.
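A minimal sketch of that flow with boto3; the bucket name and the public-read ACL are assumptions, and the cleanup cron job is left out:

    import uuid
    import boto3

    s3 = boto3.client("s3")

    def publish_temporary_copy(local_path, bucket="yourbucket"):
        """Upload the file under a random, unguessable key and return its URL.

        Assumes the bucket allows public-read ACLs; a separate cron job is
        expected to delete keys whose LastModified is past the time window.
        """
        key = f"static/{uuid.uuid4()}/sound.mp3"
        s3.upload_file(local_path, bucket, key, ExtraArgs={"ACL": "public-read"})
        return f"https://{bucket}.s3.amazonaws.com/{key}"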
I have a Python app (specifically a Dash/Plotly dashboard) that I have deployed on Heroku. I have static files (CSVs, maps in the form of HTML files, etc.) that are input files for my app. However, I am unable to get my Python script to read these files when the Heroku app starts.
I have already done the initial authentication piece of allowing Heroku to access my AWS bucket and have set permissions.
OK, the steps are like this. This has to be a serverless application. Upon clicking the submit button on your website, an API should be called (GET or POST, depending on your need).
(1) The API invokes a Lambda function that takes the CSV file and stores it in S3: create a REST API with API Gateway, connect it to Lambda, and have the Lambda write to S3. You can use the boto3 library if you pick Python for the Lambda; a sketch follows.
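A minimal sketch of such a Lambda handler in Python, assuming an API Gateway proxy integration and a hypothetical bucket name:

    import base64
    import uuid
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "my-upload-bucket"  # assumption: replace with your bucket

    def lambda_handler(event, context):
        """Receive a CSV posted through API Gateway and store it in S3."""
        body = event.get("body") or ""
        # API Gateway base64-encodes binary request bodies.
        if event.get("isBase64Encoded"):
            data = base64.b64decode(body)
        else:
            data = body.encode("utf-8")

        key = f"uploads/{uuid.uuid4()}.csv"
        s3.put_object(Bucket=BUCKET, Key=key, Body=data, ContentType="text/csv")
        return {"statusCode": 200, "body": key}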
(2) Another way, if you don't need to manipulate the data on the backend: you can create an API that takes a file (less than 6 MB, the Lambda payload limit) and stores it directly in the S3 bucket.
If you are familiar with Terraform, this might help.
Best wishes.
I am working on a script that takes files from a GCP bucket and uploads them to another server.
Currently my script downloads all of the files from the GCP bucket into local storage using blob.download_to_filename and then sends a POST request (using the requests library) to upload those files to my server.
I know that it is possible to download a file as a string and then reconstruct the file. But what about files that, for example, are pictures? This bucket could fill up with any file format, and I need to make sure every file arrives on my server exactly as it looks in GCP.
Is there some way to temporarily store a file so that I can send it from GCP to my server without having to download it to my computer?
In other words, is there a way to refer to the file in the POST request that lets me upload it to my server without it ever being in my local storage?
Thank you so much in advance!
You will need to write some code that:
Authenticates your request(s) to GCS
Downloads the objects (perhaps to memory; perhaps in chunks)
Optionally: authenticates your request to your destination
Uploads the objects (perhaps in chunks)
You tagged Python and you can do this using Google's Python library for GCS. See Streaming downloads and the recommendation to use ChunkedDownloads.
With ChunkedDownloads, you'd:
Authenticate using the library
Iterate over the GCS bucket's content
Download the objects (files) in chunks (you decide the chunk size) to memory
Preferably upload/stream the chunks to your destination (a sketch follows below)
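A minimal sketch, assuming the google-cloud-storage and requests libraries and a hypothetical upload endpoint on your server. blob.open("rb") gives a chunked, file-like reader, so nothing is written to local disk (each object is held in memory while the multipart body is built):

    import requests
    from google.cloud import storage

    UPLOAD_URL = "https://your-server.example.com/upload"  # assumption

    client = storage.Client()  # uses application default credentials
    bucket = client.bucket("your-gcp-bucket")  # assumption

    for blob in bucket.list_blobs():
        # Stream the object from GCS and hand the file-like reader to requests.
        with blob.open("rb") as stream:
            response = requests.post(UPLOAD_URL, files={"file": (blob.name, stream)})
            response.raise_for_status()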
It's very likely that there are utilities that support migrating from GCS to your preferred destination. I'm not familiar with any of them, though, and I encourage you to vet any option before proceeding, to make sure it doesn't compromise your credentials.
I am trying to serve files securely (images, in this case) to my users. I would like to do this using Flask, preferably with Amazon S3, though I would be open to another cloud storage solution if required.
I have managed to get my Flask static files (CSS and such) onto S3, but this is all non-secure: anyone who has the link can open the static files. That is obviously not what I want for secure content. I can't seem to figure out how to make a file available only to the authenticated user that 'owns' it.
For example: when I log into my Dropbox account and copy a random file's download link, then go over to another computer and use that link, it denies me access, even though I am still logged in and the download link was available to the user on the latter PC.
Make the request to your Flask application, which will authenticate the user and then issue a redirect to the S3 object. The trick is that the redirect should point to a signed temporary URL that expires in a minute or so, so it can't be saved and reused later or by others.
You can use the generate_url method of boto.s3.key.Key (or generate_presigned_url in boto3) in your Flask app to create the temporary URL.
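A minimal sketch with boto3; the bucket name and the ownership check are assumptions:

    import boto3
    from flask import Flask, abort, redirect

    app = Flask(__name__)
    s3 = boto3.client("s3")
    BUCKET = "my-secure-bucket"  # assumption

    def user_owns_file(key):
        """Placeholder: wire this to your real session/ownership check."""
        return False

    @app.route("/files/<path:key>")
    def download(key):
        if not user_owns_file(key):
            abort(403)
        url = s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": BUCKET, "Key": key},
            ExpiresIn=60,  # the link dies after a minute
        )
        return redirect(url)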
I would like a user, without having to have an Amazon account, to be able to upload multi-gigabyte files to an S3 bucket of mine.
How can I go about this? I want to enable this by giving the user a key, or perhaps through an upload form, rather than making the bucket world-writeable, obviously.
I'd prefer to use Python on my server side, but the idea is that a user would need nothing more than their web browser, or perhaps their terminal and built-in executables.
Any thoughts?
You are attempting to proxy the file through your Python backend to S3, and large files at that. Instead, you can configure S3 to accept files from the user directly, without proxying through your backend code.
It is explained here: Browser Uploads to S3 using HTML POST Forms. This way your server need not handle any upload load at all; a sketch follows.
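As a rough sketch, boto3 can generate the form policy for you via generate_presigned_post; the bucket name, key prefix, and size cap here are assumptions:

    import uuid
    import boto3

    s3 = boto3.client("s3")

    def make_upload_form(bucket="my-upload-bucket"):  # assumption: your bucket
        """Create the URL and fields for an HTML form that posts straight to S3."""
        key = f"incoming/{uuid.uuid4()}/${{filename}}"
        return s3.generate_presigned_post(
            Bucket=bucket,
            Key=key,
            Conditions=[["content-length-range", 0, 5 * 1024**3]],  # up to 5 GB
            ExpiresIn=3600,  # form valid for one hour
        )

The returned dict contains the form "url" and "fields" to embed in the HTML form; the browser then uploads directly to S3. Note that a single POST is capped at 5 GB by S3; beyond that, multipart upload is required.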
If you also want your users to use an external identity (Google/Facebook, etc.) for this workflow, that too is possible. They will be able to upload the files to a sub-folder (path) in your bucket without exposing other parts of the bucket. This is detailed here: Web Identity Federation with Mobile Applications. Though it says mobile, the same applies to web apps.
Having said all that, as @Ratan points out, large file uploads can break midway when attempted from a browser, and the browser can't retry "only the failed parts". This is where the need for a dedicated app comes in. Another option is to ask your users to keep the files in their Dropbox/Box.com account and have your server read from there; these services already take care of large file uploads, with retries, using their apps.
This answer is relevant to .NET as the language.
We had such a requirement, for which we created an executable. The executable internally called a web method that validated whether the app was authorized to upload files to AWS S3 or not.
You can do this from a web browser too, but I would not suggest it if you are targeting big files.
I use Django to run my website and nginx as the front web server, but when I upload a very large file to my site it takes a very long time.
Something goes wrong when nginx handles a large file upload: nginx forwards the file to Django only after it has received the entire POST body, so the upload takes extra time.
I want to find some other web server to replace nginx. Any suggestions?
Your problem is not nginx itself; your problem is your nginx settings.
If you want to handle the files with Django, you should change some parameters (typically client_max_body_size and the timeout directives); see: Timeout when uploading a large file?
Otherwise, nginx can handle the files itself via its upload module:
http://www.grid.net.ru/nginx/upload.en.html
Nginx is probably the best HTTP server; there is no need to replace it. I would advise you to upload very large files via FTP or an NFS share instead.
If you don't want to pass the request body to your Django application, you should use:
fastcgi_pass_request_body off;
You may also want to use the upload module: http://www.grid.net.ru/nginx/upload.en.html
Take a look at Tornado (http://www.tornadoweb.org/). You can run it alongside Django to handle the file uploads.
On my project I successfully use Django with Tornado, which handles the API calls and long-running AJAX requests.