I have a Django app that will invoke an API call (a Python function) to create a folder within a private S3 bucket. To invoke the function, a user will have to be logged in first.
The S3 bucket will look like this:
s3://somebucket/user1
s3://somebucket/user2
and so on
API call: https://somedomain.com/createfolder/user1
I don't want to allow user2 to call the API by substituting the user1 string with his username.
Assuming I can secure this API call, my Python function will then just go ahead and create a folder with the username given to it.
user1 will also have the ability to download only his/her files.
users can log in to my app via OAuth2 or username/password.
The questions:
a. How do I secure my API so that user2 cannot perform actions on user1's resources?
b. Similarly, how do I create pre-signed URLs pointing to user1's files (a specific file, which could be a zipped form of a folder)?
My research: I have heard of JWT and pre-signed URLs, but I'm having difficulties in understanding how to practically implement them in Django.
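For reference, the pattern being asked about, deriving the folder name from the authenticated session rather than the URL and minting a pre-signed URL with boto3, looks roughly like this in Django. This is a sketch only; the view names, URL wiring, and the somebucket name are illustrative:

    import boto3
    from django.contrib.auth.decorators import login_required
    from django.http import JsonResponse

    BUCKET = "somebucket"

    @login_required
    def create_folder(request):
        # Take the username from the session, never from the URL,
        # so user2 cannot act on user1's prefix.
        username = request.user.username
        s3 = boto3.client("s3")
        # S3 has no real folders; a zero-byte object whose key ends in "/"
        # makes the prefix show up as a folder in the console.
        s3.put_object(Bucket=BUCKET, Key=f"{username}/")
        return JsonResponse({"folder": username})

    @login_required
    def download_link(request, filename):
        # Pre-signed URL scoped to the caller's own prefix, valid 10 minutes.
        s3 = boto3.client("s3")
        url = s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": BUCKET, "Key": f"{request.user.username}/{filename}"},
            ExpiresIn=600,
        )
        return JsonResponse({"url": url})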
I have a Python app (specifically a Dash/Plotly dashboard) that I have deployed on Heroku. I have static files (CSVs, maps in the form of HTML, etc.) that are input files for my app. However, I am unable to get my Python script to read these files when the Heroku app starts.
I have already done the initial authentication piece of allowing Heroku to access my AWS bucket, and I have set permissions.
OK, the steps are like this. This has to be a serverless application. Upon clicking the submit button on your website, an API should be called (GET or POST, depending on your need).
(1) The API will invoke a Lambda function that takes the CSV file and stores it in S3. Create a REST API using API Gateway, connect it to Lambda, and store the file in S3 from there. You can use the boto3 library if you pick Python for the Lambda (a sketch of such a handler follows below).
(2) Another way, if you don't need to manipulate the data on the backend: you can create an API that takes a file (less than 6 MB) and stores it directly in the S3 bucket.
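A minimal sketch of the Lambda handler from option (1), assuming an API Gateway proxy integration; the bucket name and key are placeholders:

    import base64
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "my-upload-bucket"  # placeholder

    def handler(event, context):
        # With a proxy integration, API Gateway passes the request body as a
        # string; binary payloads arrive base64-encoded.
        body = event["body"]
        data = base64.b64decode(body) if event.get("isBase64Encoded") else body.encode("utf-8")
        s3.put_object(Bucket=BUCKET, Key="uploads/input.csv", Body=data)
        return {"statusCode": 200, "body": "stored"}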
If you are familiar with Terraform, this might help.
Best wishes.
What is the recommended way to let users access files that are stored on AWS S3?
I've currently found two ways:
using the access key ID and secret access key
making the whole bucket public and then accessing the files through a public link.
Neither option is really satisfying. Either I reveal the access/secret keys, or all files are publicly available. Is there a third, more secure way?
The software that needs access to S3 will be running on Raspberry Pis. I was thinking of encrypting the credentials file, so a user wouldn't be able to read it easily.
The first thing is to create a new, individual AWS user with an access key ID and secret for your project, and to grant only the specific S3 permissions to that user. Don't use admin credentials for a project; every project should have its own AWS user with its own credentials and permissions. The second thing is to rotate keys: you create a new key with the old one and then delete the old key. For more about rotation, refer to the Managing Access Keys for IAM Users documentation. You can indeed also encrypt the credentials, and AWS has a service for that: AWS KMS. It is worth researching AWS KMS; it is a great tool for encryption, and you can even use it to encrypt your application's sensitive secret keys or passwords.
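The rotation step can itself be scripted with boto3; a sketch, assuming the IAM user name and the retiring key ID are known (both are placeholders here):

    import boto3

    iam = boto3.client("iam")
    USER = "project-s3-user"   # placeholder IAM user
    OLD_KEY_ID = "AKIA..."     # placeholder: the key being retired

    # Create the replacement key while the old one still works...
    new_key = iam.create_access_key(UserName=USER)["AccessKey"]
    print("New key:", new_key["AccessKeyId"])

    # ...deploy the new credentials, verify them, then delete the old key.
    iam.delete_access_key(UserName=USER, AccessKeyId=OLD_KEY_ID)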
I am trying to serve files securely (images in this case) to my users. I would like to do this using Flask and preferably Amazon S3; however, I would be open to another cloud storage solution if required.
I have managed to get my Flask static files, like CSS and such, onto S3, but that is all non-secure: everyone who has the link can open the static files. This is obviously not what I want for secure content. I can't seem to figure out how to make a file available only to the authenticated user that 'owns' it.
For example: when I log into my Dropbox account and copy a random file's download link, then go over to another computer and use that link, it denies me access, even though I am still logged in and the download link is visible to the user on the latter PC.
Make the request to your Flask application, which will authenticate the user and then issue a redirect to the S3 object. The trick is that the redirect should be to a signed temporary URL that expires in a minute or so, so it can't be saved and used later or by others.
You can use the boto.s3.key.generate_url function in your Flask app to create the temporary URL (in the newer boto3 this is generate_presigned_url).
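Put together, that pattern looks roughly like this, sketched with boto3's generate_presigned_url; the bucket name, session handling, and ownership check are placeholders:

    import boto3
    from flask import Flask, abort, redirect, session

    app = Flask(__name__)
    app.secret_key = "change-me"  # placeholder
    s3 = boto3.client("s3")
    BUCKET = "my-private-images"  # placeholder

    def user_owns(user_id, key):
        # Placeholder ownership check; replace with your own lookup.
        return key.startswith(f"{user_id}/")

    @app.route("/images/<path:key>")
    def serve_image(key):
        user = session.get("user_id")
        if user is None or not user_owns(user, key):
            abort(403)
        # Expires in 60 seconds, so the link can't usefully be shared.
        url = s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": BUCKET, "Key": key},
            ExpiresIn=60,
        )
        return redirect(url)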
I would like for a user, without having to have an Amazon account, to be able to upload multi-gigabyte files to an S3 bucket of mine.
How can I go about this? I want to enable this by giving the user a key, or perhaps through an upload form, rather than making a bucket world-writeable, obviously.
I'd prefer to use Python on my server side, but the idea is that a user would need nothing more than their web browser, or perhaps their terminal and built-in executables.
Any thoughts?
You are attempting to proxy the file through your Python backend to S3, and large files at that. Instead, you can configure S3 to accept files from the user directly (without proxying through your backend code).
It is explained here: Browser Uploads to S3 using HTML POST Forms. This way your server doesn't need to handle any of the upload load at all.
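In boto3 the form policy comes from generate_presigned_post; the browser then POSTs the file straight to S3 using the returned URL and fields (the bucket and prefix are placeholders):

    import boto3

    s3 = boto3.client("s3")
    post = s3.generate_presigned_post(
        Bucket="my-upload-bucket",     # placeholder
        Key="uploads/${filename}",     # the browser substitutes the file name
        Conditions=[["content-length-range", 0, 5 * 1024 ** 3]],  # cap at 5 GB
        ExpiresIn=3600,
    )
    # post["url"] becomes the form's action; post["fields"] become hidden
    # inputs alongside the <input type="file"> field.
    print(post["url"], post["fields"])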
If you also want your users to use an identity from elsewhere (Google/Facebook, etc.) in this workflow, that too is possible. They will be able to upload files to a sub-folder (path) in your bucket without exposing other parts of the bucket. This is detailed here: Web Identity Federation with Mobile Applications. Though it says mobile, the same applies to web apps.
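The federation flow amounts to exchanging the provider's token for temporary AWS credentials via STS; a sketch (the role ARN and token are placeholders):

    import boto3

    sts = boto3.client("sts")
    ID_TOKEN = "eyJ..."  # placeholder: the OIDC token from the Google/FB login

    resp = sts.assume_role_with_web_identity(
        RoleArn="arn:aws:iam::123456789012:role/user-upload-role",  # placeholder
        RoleSessionName="user1",
        WebIdentityToken=ID_TOKEN,
    )
    creds = resp["Credentials"]
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    # The role's policy can limit this client to one sub-folder of the bucket.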
Having said all that, as @Ratan points out, large file uploads can break partway through when you try them from a browser, and the browser can't retry only the failed parts. This is where the need for a dedicated app comes in. Another option is to ask your users to keep the files in their Dropbox/Box.com account and have your server read from there; these services already take care of large file uploads, with all the retries, through their apps.
This answer is relevant to .NET as the language.
We had such a requirement, for which we created an executable. The executable internally called a web method, which validated whether the app was authenticated to upload files to AWS S3 or not.
You can do this using a web browser too, but I would not suggest it if you are targeting big files.
I have a Django application which needs to refresh some data. This data should be downloaded from my Dropbox account (the file name and path are the same each time). How can I implement this?
I started with the Dropbox API, created an application, etc., but this method has one big defect: it needs the user to go to the generated link and authorize access to the Dropbox account. I need this to work automatically; the script should be executed by cron each day without user interaction.
I thought about using Selenium to open the link, enter the login and password, and confirm the application, but that seems like the hard way; there should be another way :-)
Or maybe I can simply generate a link to the file once and then use it every time I want to download the file?
You could use the API and connect with a pre-authorized access token which you authorized manually once (as opposed to having the user authorize their own account). You could then download the file from your account; just be sure not to revoke the access token, e.g. via https://www.dropbox.com/account/applications .
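With the official dropbox Python SDK that amounts to a couple of lines, and it suits a cron job since no user interaction is needed (the token and paths are placeholders):

    import dropbox

    ACCESS_TOKEN = "sl.ABC..."  # placeholder: token generated once in the app console

    dbx = dropbox.Dropbox(ACCESS_TOKEN)
    # Download /reports/data.csv from the linked account into a local file.
    dbx.files_download_to_file("data.csv", "/reports/data.csv")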
If you do just need to download files though, using a shared link may be easier:
https://www.dropbox.com/help/167/en
https://www.dropbox.com/help/201/en
They don't expire, but they can be revoked via https://www.dropbox.com/links .
Or if you prefer to use the Public folder, same idea:
https://www.dropbox.com/help/16/en
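If you go the shared-link route, appending dl=1 to the link asks Dropbox to serve the raw file content, which makes the cron-driven download trivial (the link below is a placeholder):

    import requests

    SHARED_LINK = "https://www.dropbox.com/s/abc123/data.csv?dl=1"  # placeholder

    resp = requests.get(SHARED_LINK, timeout=30)
    resp.raise_for_status()
    with open("data.csv", "wb") as f:
        f.write(resp.content)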