In my Pylons application I want to add some data (a user) on setup.
To secure passwords in the database I've hashed them with a salt; this salt is stored in the configuration file.
If I want to get the salt key from the configuration, I do this (shortened example):
from pylons import config
saltkey = config.get("saltkey")
If this code is placed in, for example, a model, it returns the salt key. In the User model this code is used to create a hash with the salt.
However, if I want to create an instance of this model in websetup.py, the config has different contents and the salt key cannot be retrieved (resulting in an error):
def setup_app(command, conf, vars):
    load_environment(conf.global_conf, conf.local_conf)
    Base.metadata.create_all(bind=Session.bind)
    user = User('admin', 'password123', 'test@test.com')
    Session.add(user)
    Session.commit()
My question is: why does the config have different contents? And how do I fix this problem without an ugly hack?
You can access your config file in this step. The from pylons import config approach is best suited to the context of a WSGI request; you're not dealing with a WSGI request here, so it's unavailable. Fortunately, there's a very easy way to access the config during websetup.py's operation: the setup_app() function already receives the config file, which Paster has parsed and turned into a dictionary for you.
You can access your config file as the conf.local_conf dictionary, and that will make the data you want available.
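To make that concrete, here's a minimal, self-contained sketch; the Conf class below only stands in for the object Paster passes to setup_app(), and the "saltkey" option name is taken from the question:

```python
# The Conf class stands in for the object Paste hands to setup_app();
# in a real app you would not define it yourself.
class Conf:
    def __init__(self, global_conf, local_conf):
        self.global_conf = global_conf
        self.local_conf = local_conf

def setup_app(command, conf, vars):
    # conf.local_conf is a plain dict parsed from the [app:main]
    # section of your .ini file, so no pylons.config import is needed
    return conf.local_conf.get("saltkey")

conf = Conf({"debug": "false"}, {"saltkey": "s3cret-salt"})
print(setup_app(None, conf, {}))  # -> s3cret-salt
```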
With all of that said: you should not store the salt in your config file. A single application-wide salt defeats much of the point of salting; generate a random salt per user and store it alongside the hash, rather than reinventing the wheel like that.
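For illustration, a per-user salt scheme might look like the following sketch (the function names are mine; only the standard library is used):

```python
import hashlib
import hmac
import os

def hash_password(password):
    # A fresh random salt is generated per user and stored next to the
    # hash, so no secret salt needs to live in the config file at all.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt + digest  # store both together in the user row

def verify_password(password, stored):
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # constant-time comparison to avoid timing leaks
    return hmac.compare_digest(candidate, digest)

stored = hash_password("password123")
print(verify_password("password123", stored))  # -> True
```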
A Django settings file includes sensitive information such as the SECRET_KEY, the password for database access, etc., which is unsafe to keep hard-coded in the settings file. I have come across various suggestions for storing this information more securely, including putting it into environment variables or separate configuration files. The bottom line seems to be that this protects the keys from version control (in addition to the added convenience of using different values in different environments), but that on a compromised system this information can still be accessed by an attacker.
Is there any extra benefit from a security perspective if sensitive settings are kept in a data vault / password manager and then retrieved at run-time when settings are loaded?
For example, to include in the settings.py file (when using pass):
import subprocess
SECRET_KEY = subprocess.check_output("pass SECRET_KEY", shell=True).strip().decode("utf-8")
This spawns a new shell process and returns output to Django. Is this more secure than setting through environment variables?
I think a data vault/password manager solution just transfers the responsibility; the risk is still there. When deploying Django in production, the server should be treated as carefully as a data vault: firewall in place, fail2ban running, OS kept up to date, and so on. Then, in my opinion, there is nothing wrong or less secure about having a settings.py file with a config parser reading a config.ini file (declared in your .gitignore!) where all your sensitive information is kept.
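As a sketch of that settings.py approach (the section and key names are made up, and the config.ini is written inline only so the example is self-contained; in practice it lives on the server and in your .gitignore):

```python
import configparser

# Written here only so the sketch runs standalone; the real file is
# created once on the server and never committed.
with open("config.ini", "w") as f:
    f.write("[django]\nsecret_key = not-a-real-key\n")

# In settings.py: read the secret from the ini file at import time.
parser = configparser.ConfigParser()
parser.read("config.ini")
SECRET_KEY = parser["django"]["secret_key"]
print(SECRET_KEY)  # -> not-a-real-key
```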
At present I have application-related variables like an external server IP, default patterns, etc. These variables are specific to my application, and they include my external server username and password.
How can I have a single common place to externalise these from my application?
The options I have thought of are below:
Have one conf.ini file, and use configparser to read it during the start of the Django app.
But I am not sure where the method to read this should live so that it runs at startup.
The other option is to save the values in a .py file itself and import that file in my modules.
Please suggest a good, standard approach for the above issue.
Save the important secret details in an ini file and place it at /etc/project_name/my_settings.ini. Read these settings from settings.py; this way it will be secure, and they can be read directly in the settings file itself.
Or, better, set them in .bashrc and read the environment variables from there.
Check this: Setting envs
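A minimal sketch of the environment-variable option (the variable name is hypothetical, and it is set inline here only so the snippet runs standalone; normally you would export it in ~/.bashrc or a service unit):

```python
import os

# Stand-in for `export MY_APP_SERVER_PASSWORD=...` in ~/.bashrc;
# setdefault means a real exported value would win over this fallback.
os.environ.setdefault("MY_APP_SERVER_PASSWORD", "example-value")

# In settings.py: read the value from the environment at startup.
SERVER_PASSWORD = os.environ["MY_APP_SERVER_PASSWORD"]
print(SERVER_PASSWORD)
```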
I've read that for security reasons it's best practice in Flask to store things like API keys in the instance/settings.py file. Why is this so, and what is the mechanism that makes it so? I haven't been able to find much documentation about this online.
As @davidism suggested, config files are never meant to be tracked, since they contain your secret keys, and anybody with access to your code would have access to your keys if they were tracked.
That said, there is no hard and fast rule in Flask that your settings file must live in a specific location. You can keep it anywhere and name it anything; just make sure the correct file path is given when adding the config to the app.
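For instance, a common pattern is loading instance/settings.py via instance_relative_config. This sketch writes a throwaway settings file so it can run standalone (the API_KEY name and value are made up):

```python
import os
from flask import Flask

# instance_relative_config makes from_pyfile resolve relative paths
# against the instance/ folder, which is kept out of version control.
app = Flask(__name__, instance_relative_config=True)

# Written here only so the sketch is runnable; on a real server the
# file already exists and is never committed.
os.makedirs(app.instance_path, exist_ok=True)
with open(os.path.join(app.instance_path, "settings.py"), "w") as f:
    f.write('API_KEY = "placeholder-key"\n')

app.config.from_pyfile("settings.py")
print(app.config["API_KEY"])  # -> placeholder-key
```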
I have a flask running at domain.com
I also have another flask instance on another server running at username.domain.com
Normally a user logs in through domain.com.
However, paying users are supposed to log in at username.domain.com.
Using Flask, how can I make sure that sessions are shared between domain.com and username.domain.com, while ensuring that each user only has access to their own matching username.domain.com?
I am concerned about security here.
EDIT:
Later, after reading your full question I noticed the original answer is not what you're looking for.
I've left the original at the bottom of this answer for Googlers, but the revised version is below.
Cookies are automatically sent to subdomains on a domain (in most modern browsers the domain name must contain a period (indicating a TLD) for this behavior to occur). The authentication will need to happen as a pre-processor, and your session will need to be managed from a centralised source. Let's walk through it.
To confirm, I'll proceed assuming (from what you've told me) your setup is as follows:
SERVER 1:
Flask app for domain.com
SERVER 2:
Flask app for user profiles at username.domain.com
A problem that first must be overcome is storing the sessions in a location that is accessible to both servers. Since by default sessions are stored on disk (and both servers obviously don't share the same hard drive), we'll need to do some modifications to both the existing setup and the new Flask app for user profiles.
Step one is to choose where to store your sessions. A database powered by a DBMS such as MySQL or Postgres is a common choice, but people also often put them somewhere more ephemeral, such as Memcached or Redis.
The short version for choosing between these two starkly different systems breaks down to the following:
Database
Databases are readily available
It's likely you already have a database implemented
Developers usually have a pre-existing knowledge of their chosen database
Memory (Redis/Memcached/etc.)
Considerably faster
Systems often offer basic self-management of data
Doesn't incur extra load on existing database
You can find some examples of database-backed sessions in Flask here and here.
While Redis would be more difficult to set up, depending on each user's level of experience, it is the option I recommend. You can see an example of doing this here.
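As a rough sketch of what shared, Redis-backed session settings might look like, assuming the Flask-Session extension (the host name and key prefix are placeholders):

```python
# Config fragment shared by both Flask apps (Flask-Session assumed).
SESSION_TYPE = "redis"                 # store session data server-side in Redis
SESSION_KEY_PREFIX = "session:"        # namespace the keys in the shared store
SESSION_COOKIE_DOMAIN = ".domain.com"  # send the session cookie to subdomains
# SESSION_REDIS = redis.from_url("redis://shared-redis-host:6379")
```

Both servers must point SESSION_REDIS at the same Redis instance so that a session created on domain.com is visible on username.domain.com.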
The rest I think is covered in the original answer, part of which demonstrates the matching of username to database record (the larger code block).
Old solution for a single Flask app
Firstly, you'll have to set up Flask to handle subdomains. This is as easy as specifying a new variable in your config file. For example, if your domain were example.com, you would append the following to your Flask configuration:
SERVER_NAME = "example.com"
You can read more about this option here.
Something quick to note here: this will be extremely difficult (if not impossible) to test if you're just working off of localhost. As mentioned above, browsers often won't bother to send cookies to subdomains of a domain without dots in the name (a TLD), and localhost isn't set up to allow subdomains by default in many operating systems. There are ways around this, such as defining your own DNS entries (/etc/hosts on *NIX, %SystemRoot%\System32\drivers\etc\hosts on Windows).
Once you've got your config ready, you'll need to define a Blueprint for a subdomain wildcard.
This is done pretty easily:
from flask import Blueprint
from flask.ext.login import current_user

# Create our Blueprint with a wildcard subdomain
deep_blue = Blueprint("subdomain_routes", __name__, subdomain="<username>")

# Define our route
@deep_blue.route('/')
def user_index(username):
    if not current_user.is_authenticated():
        # The user needs to log in
        return "Please log in"
    elif username != current_user.username:
        # This is not the correct user.
        return "Unauthorized"
    # It's the right user!
    return "Welcome back!"
The trick here is to make sure the __repr__ for your user object includes a username key. For example:
class User(db.Model):
    username = db.Column(db.String)

    def __repr__(self):
        return "<User {self.id}, username={self.username}>".format(self=self)
Something to note, though, is the problem that arises when a username contains special characters (a space, @, ?, etc.) that don't work in a URL. For this you'll need to either enforce restrictions on usernames, or properly escape the name first and unescape it when validating it.
If you've got any questions or requests, please ask. Did this during my coffee break so it was a bit rushed.
You can do this with the built-in Flask sessions, which are cookie-based client-side sessions. To allow users to log in to multiple subdomains of '.domain.com', you need only specify:
app.config['SESSION_COOKIE_DOMAIN'] = '.domain.com'
and the client's browser will have a session cookie that allows them to log in to every Flask instance at 'domain.com'.
This only works if every instance of Flask has the same app.secret_key.
For more information, also see
Same Flask login session across two applications
I'm trying to figure out the most secure and flexible solution for storing credentials for database connections and other private info in a config file.
This is for a Python module that logs the history of users' activity in the system to different handlers (MongoDB, MySQL, files, etc.).
The logging module is attached to a handler, and that is where I need to load the config file for each handler, i.e. database, user, password, table, etc.
After some research on the web and Stack Overflow, I mostly found comparisons of the security risks of JSON versus cPickle, concerning the eval method and type restrictions, rather than the config-file storage issue itself.
I was wondering if storing credentials in JSON is a good idea, given the security risks of having a .json config file on the server (from which the logging handler will read the data). I know that this .json file could be retrieved by an HTTP request. If the parameters are stored in a Python object inside a .py file, I guess there is more security, since any request for that file will first be interpreted by the server, but I lose the flexibility of modularization and easy modification of this data.
What would you suggest for this kind of security issue when storing such config files on a server to be read by a Python class?
I'd think about encrypting the credentials file. The process that uses it will need a key/password to decrypt it, which you can store somewhere else, or even enter interactively at server start-up. That way you don't have a single point of failure (though of course a determined intruder can eventually put the pieces together).
(Naturally, you should also secure the server so that your credentials can't simply be fetched by an HTTP request.)
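One way to sketch that, assuming the third-party cryptography package and its Fernet recipe (all names and values are placeholders):

```python
import json
from cryptography.fernet import Fernet

# The key would be kept apart from the encrypted file: entered at
# server start-up, or stored outside the web root with tight permissions.
key = Fernet.generate_key()
box = Fernet(key)

creds = {"user": "dbuser", "password": "dbpass"}  # placeholder credentials
token = box.encrypt(json.dumps(creds).encode())   # this ciphertext goes to disk

# At start-up, with the key supplied separately, decrypt and parse:
restored = json.loads(box.decrypt(token))
print(restored["user"])  # -> dbuser
```

Decryption with a wrong key raises an exception, so a stolen credentials file alone is useless without the separately stored key.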