When using sessions, Flask requires a secret key. In every example I've seen, the secret key is generated somewhere and then stored either in the source code or in a configuration file.
What is the reason to store it permanently? Why not simply generate it when the application starts?
app.secret_key = os.urandom(50)
The secret key is used to sign the session cookie. If you had to restart your application and regenerated the key, all the existing sessions would be invalidated. That's probably not what you want (or at least, not the right way to go about invalidating sessions). A similar case could be made for anything else that relies on the secret key, such as tokens generated by itsdangerous to provide password reset URLs, for example.
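To make that concrete, here's a small sketch (not from the original answer; itsdangerous' URLSafeTimedSerializer is just one example of such a token) showing how a token signed with one key stops verifying once the key is regenerated:

import os
from itsdangerous import URLSafeTimedSerializer, BadSignature

old_serializer = URLSafeTimedSerializer(os.urandom(50))   # key from the "first" start
token = old_serializer.dumps({"user_id": 42})             # e.g. emailed in a reset link

new_serializer = URLSafeTimedSerializer(os.urandom(50))   # key regenerated after a restart
try:
    new_serializer.loads(token)
except BadSignature:
    print("tokens signed before the restart can no longer be verified")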
The application might need to be restarted because of a crash, or because the server rebooted, or because you are pushing a bug fix or new feature, or because the server you're using spawns new processes, etc. So you can't rely on the server being up forever.
The standard practice is to have some throwaway key committed to the repo (so that there's something there for dev machines) and then to set the real key in the local config when deploying. This way, the key isn't leaked and doesn't need to be regenerated.
There's also the case of running secondary systems that depend on the app context, such as Celery for running background tasks, or multiple load balanced instances of the application. If each running instance of the application has different settings, they may not work together correctly in some cases.
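As a concrete illustration of that practice, here's a minimal sketch (the environment variable name is just an example): a throwaway default for development, overridden from the environment when deploying.

import os
from flask import Flask

app = Flask(__name__)
# Throwaway default so dev machines work out of the box; real deployments
# override it via the environment (or an instance config kept out of git).
app.secret_key = os.environ.get("SECRET_KEY", "dev-only-not-a-secret")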
A Django settings file includes sensitive information such as the secret key, the password for database access, etc., which is unsafe to keep hard-coded in the settings file. I have come across various suggestions as to how this information can be stored more securely, including putting it into environment variables or separate configuration files. The bottom line seems to be that this protects the keys from version control (in addition to the added convenience of using different values in different environments), but that on a compromised system this information can still be accessed by a hacker.
Is there any extra benefit from a security perspective if sensitive settings are kept in a data vault / password manager and then retrieved at run-time when settings are loaded?
For example, to include in the settings.py file (when using pass):
import subprocess
SECRET_KEY = subprocess.check_output("pass SECRET_KEY", shell=True).strip().decode("utf-8")
This spawns a new shell process and returns output to Django. Is this more secure than setting through environment variables?
I think a data vault/password manager solution is a matter of transferring responsibility, but the risk is still there. When deploying Django in production, the server should be treated as carefully as a data vault: a firewall, fail2ban, an up-to-date OS and so on must all be in place. Then, in my opinion, there is nothing wrong or less secure about having a settings.py file with a config parser reading a config.ini file (declared in your .gitignore!) where all your sensitive information is present.
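For illustration, a settings.py following that approach might look like the sketch below (the ini path, section and key names are hypothetical):

# settings.py
import configparser

config = configparser.ConfigParser()
config.read("/etc/myproject/config.ini")   # kept outside the repo (or in .gitignore)

SECRET_KEY = config["django"]["secret_key"]
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": config["database"]["name"],
        "USER": config["database"]["user"],
        "PASSWORD": config["database"]["password"],
        "HOST": config["database"]["host"],
    }
}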
Assume a Flask application that lets the user build an object server-side through a number of wizard-like steps on the client side.
I would like to create an initial object server-side and build it up step by step from the client-side input, keeping the object 'alive' throughout the whole build process. A unique id will be associated with the creation of each new object / wizard.
Serving the Flask application with WSGI on Apache, requests can go through multiple instances of the Flask application / multiple threads.
How do I keep this object alive server-side, or in other words how to keep some kind of global state?
I'd like to keep the object in memory, not serialize/deserialize it to/from disk. No cookies either.
Edit:
I'm aware of the Flask.g object but since this is on per request basis this is not a valid solution.
Perhaps it is possible to use some kind of cache layer, e.g.:
from werkzeug.contrib.cache import SimpleCache
cache = SimpleCache()
Is this a valid solution? Does this layer live across multiple app instances?
You're looking for sessions.
You said you don't want to use cookies, but did you mean you don't want to store the data in a cookie, or are you avoiding cookies entirely? For the former case, take a look at server-side sessions, e.g. Flask-KVSession:
Instead of storing data on the client, only a securely generated ID is stored on the client, while the actual session data resides on the server.
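For illustration, a minimal Flask-KVSession setup with a Redis store might look like the sketch below (based on the extension's quickstart; the route and the Redis location are assumptions). Because the data lives in Redis, every Apache/WSGI worker sees the same state for a given session ID.

import redis
from flask import Flask, session
from simplekv.memory.redisstore import RedisStore
from flask_kvsession import KVSessionExtension

app = Flask(__name__)
app.secret_key = "change-me"      # still needed to sign the session ID cookie

# Replace Flask's cookie-based sessions with server-side storage in Redis
store = RedisStore(redis.StrictRedis())
KVSessionExtension(store, app)

@app.route("/wizard/<int:step>")
def wizard(step):
    # The object under construction lives in Redis, not in the worker's memory,
    # so any app instance can continue the build where the last one left off.
    state = session.get("wizard_state", {})
    state[str(step)] = "done"
    session["wizard_state"] = state
    return "step %d recorded" % step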
I have a flask running at domain.com
I also have another flask instance on another server running at username.domain.com
Normally user logs in through domain.com
However, paying users are supposed to log in at username.domain.com
Using flask, how can I make sure that sessions are shared between domain.com and username.domain.com while ensuring that they will only have access to the specifically matching username.domain.com ?
I am concerned about security here.
EDIT:
Later, after reading your full question, I noticed the original answer is not what you're looking for.
I've left the original at the bottom of this answer for Googlers, but the revised version is below.
Cookies are automatically sent to subdomains on a domain (in most modern browsers the domain name must contain a period (indicating a TLD) for this behavior to occur). The authentication will need to happen as a pre-processor, and your session will need to be managed from a centralised source. Let's walk through it.
To confirm, I'll proceed assuming (from what you've told me) your setup is as follows:
SERVER 1:
Flask app for domain.com
SERVER 2:
Flask app for user profiles at username.domain.com
A problem that first must be overcome is storing the sessions in a location that is accessible to both servers. Since by default sessions are stored on disk (and both servers obviously don't share the same hard drive), we'll need to do some modifications to both the existing setup and the new Flask app for user profiles.
Step one is to choose where to store your sessions. A database powered by a DBMS such as MySQL or Postgres is a common choice, but people also often put them somewhere more ephemeral such as Memcached or Redis.
The short version for choosing between these two starkly different systems breaks down to the following:
Database
Databases are readily available
It's likely you already have a database implemented
Developers usually have a pre-existing knowledge of their chosen database
Memory (Redis/Memcached/etc.)
Considerably faster
Systems often offer basic self-management of data
Doesn't incur extra load on existing database
You can find some examples of database-backed sessions in Flask here and here.
While Redis may be more difficult to set up depending on each user's level of experience, it is the option I'd recommend. You can see an example of doing this here.
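For example, using the Flask-Session extension with a Redis backend (Flask-Session isn't named in the original answer, so treat this as one possible implementation; the host name is hypothetical):

from flask import Flask
from redis import Redis
from flask_session import Session

app = Flask(__name__)
app.secret_key = "change-me"                      # identical on both servers
app.config["SESSION_TYPE"] = "redis"
app.config["SESSION_REDIS"] = Redis(host="sessions.internal", port=6379)
Session(app)

# Point both the domain.com app and the username.domain.com app at the same
# Redis instance, and a session created by one becomes visible to the other.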
The rest I think is covered in the original answer, part of which demonstrates the matching of username to database record (the larger code block).
Old solution for a single Flask app
Firstly, you'll have to set up Flask to handle subdomains; this is as easy as specifying a new variable in your config file. For example, if your domain was example.com you would append the following to your Flask configuration.
SERVER_NAME = "example.com"
You can read more about this option here.
Something quick to note here is that this will be extremely difficult (if not impossible) to test if you're just working off of localhost. As mentioned above, browsers often won't bother to send cookies to subdomains of a domain without dots in the name (a TLD). Localhost also isn't set up to allow subdomains by default on many operating systems. There are ways around this that you can look into, like defining your own DNS entries (/etc/hosts on *UNIX, %SystemRoot%\System32\drivers\etc\hosts on Windows).
Once you've got your config ready, you'll need to define a Blueprint for a subdomain wildcard.
This is done pretty easily:
from flask import Blueprint
from flask_login import current_user

# Create our Blueprint, matching any subdomain and capturing it as "username"
deep_blue = Blueprint("subdomain_routes", __name__, subdomain="<username>")

# Define our route
@deep_blue.route('/')
def user_index(username):
    if not current_user.is_authenticated:
        # The user needs to log in
        return "Please log in"
    elif username != current_user.username:
        # This is not the correct user.
        return "Unauthorized"
    # It's the right user!
    return "Welcome back!"
The trick here is to make sure your user object has a username attribute to compare against the captured subdomain (and including it in the __repr__, as below, helps with debugging). For example...
class User(db.Model):
    username = db.Column(db.String)

    def __repr__(self):
        return "<User {self.id}, username={self.username}>".format(self=self)
Something to note though is the problem that arises when a username contains special characters (a space, #, ?, etc.) that don't work in a URL. For this you'll need to either enforce restrictions on the username, or properly escape the name first and unescape it when validating it.
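If you go the restriction route, a simple sketch (the exact policy is up to you) is to only accept usernames that are already valid DNS labels:

import re

# Hypothetical policy: letters, digits and hyphens only, no leading/trailing
# hyphen, at most 63 characters -- i.e. a valid DNS label.
SUBDOMAIN_RE = re.compile(r"^[a-z0-9](?:[a-z0-9-]{0,61}[a-z0-9])?$")

def is_subdomain_safe(username):
    return bool(SUBDOMAIN_RE.match(username.lower()))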
If you've got any questions or requests, please ask. Did this during my coffee break so it was a bit rushed.
You can do this with the built-in Flask sessions, which are cookie-based client-side sessions. To allow users to log in to multiple subdomains of '.domain.com', you only need to specify
app.config['SESSION_COOKIE_DOMAIN'] = '.domain.com'
and the client's browser will have a session cookie that allows them to log in to every Flask instance under 'domain.com'.
This only works if every instance of Flask has the same app.secret_key.
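In other words, both deployments load the identical key, for instance from a shared environment variable (a sketch; the variable name is illustrative):

import os
from flask import Flask

app = Flask(__name__)
app.secret_key = os.environ["SECRET_KEY"]             # same value on both servers
app.config["SESSION_COOKIE_DOMAIN"] = ".domain.com"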
For more information, also see
Same Flask login session across two applications
I have been debugging this for half a day now... anybody have ideas?
I wrote a Python script to monitor active sessions and found this:
from datetime import datetime
from django.contrib.sessions.models import Session
from django.contrib.auth.models import User

sessions = Session.objects.filter(expire_date__gte=datetime.now())
for session in sessions:
    data = session.get_decoded()               # session payload as a dict
    user_id = data.get('_auth_user_id', None)
    session_key = session.session_key
    if user_id:
        user = User.objects.get(id=user_id)
This gives a nice list... OK. But if a user logs out or in, the code above does not reflect the change. It just keeps repeating the original, outdated list.
Is there a caching issue? I think not -- I disabled memcached, and there was no change.
I tried file- and memcache-based session storage -- strange result: the code above still seems to read the db-based session storage.
So I suspect the initialization is not correct for 1.4.3 -- there seem to have been various ways to initialize the environment. I believe 1.4 does not require anything but the environment variable DJANGO_SETTINGS_MODULE to be set to the app's settings module.
Next, if this doesn't get resolved, I must use file-based session storage and poll the directory... that seems to be alive and kicking in real time :)
Your problem is caused by transaction isolation. By default, each connection to the db runs inside a transaction. Usually that equates to one transaction per view, and the transaction middleware takes care of opening and closing it. In a standalone script, you'll need to manage that yourself.
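A hedged sketch of how the monitoring script might be adapted (the settings module name and poll interval are illustrative): end the connection between polls so each query runs in a fresh transaction and sees the latest committed session data.

import os
import time
from datetime import datetime

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")  # hypothetical

from django.db import connection
from django.contrib.sessions.models import Session
from django.contrib.auth.models import User

while True:
    for s in Session.objects.filter(expire_date__gte=datetime.now()):
        user_id = s.get_decoded().get('_auth_user_id')
        if user_id:
            print("%s %s" % (s.session_key, User.objects.get(id=user_id)))
    # Close the connection: the next query opens a new one, starting a new
    # transaction that can see logins/logouts committed since the last poll.
    connection.close()
    time.sleep(10)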