Currently my manage.py file is hardcoded to import my local.py - development settings file. Is this the 'industry standard' way to set this up? When I deploy to the server do I just change manage.py to point to my production settings file? Or should I set this up another way?
#!/usr/bin/env python
import os
import sys

if __name__ == "__main__":
    # Hard-coded default pointing at the local settings file
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings.local")
    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)
Structure:

project/
    manage.py
    settings/
        local.py
        shared.py
        production.py
No. manage.py has nothing whatsoever to do with running Django in production, so changing it won't help at all.
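In production the server loads your WSGI entry point, not manage.py. A minimal sketch of such a file, assuming the asker's settings layout:

# wsgi.py
import os

from django.core.wsgi import get_wsgi_application

# setdefault means an externally exported DJANGO_SETTINGS_MODULE still wins,
# so the server environment can override this without code changes.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings.production")
application = get_wsgi_application()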
I think you want to avoid editing manage.py if possible.
Another way to handle this is to use the default settings.py file, but to extend it using a second, local_settings.py file.
You can do this by putting the following at the end of your settings.py file:
import os  # settings.py typically imports this already

locset = os.path.join(os.path.dirname(__file__), 'local_settings.py')
if os.path.exists(locset):
    with open(locset) as f:
        code = compile(f.read(), "local_settings.py", 'exec')
        exec(code)
I typically keep the DEBUG and database settings in this local_settings.py file.
When doing this, you should be sure to add local_settings.py to your .gitignore.
I also include an example version of this file alongside the settings.py file as local_settings.py.sample minus any sensitive password / username info.
This file is included in the repo so new folks can create their DB / user and just fill in the missing parts. They just need to rename it minus the .sample extension and they're good to go.
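A sketch of what such a sample file might contain (the values here are illustrative, not from the original post):

# local_settings.py.sample - rename to local_settings.py and fill in the blanks
DEBUG = True

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'myproject_dev',  # your local database name
        'USER': '',               # your DB user
        'PASSWORD': '',           # your DB password
        'HOST': 'localhost',
        'PORT': '5432',
    }
}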
This is a simple and effective way to have variant settings for different environments, whether local versus production or between team members' local setups.
My applications have one settings file, but the values are read from a config file instead of being hardcoded. For example, the DATABASES section looks like this:
import ConfigParser  # note: renamed to 'configparser' in Python 3

config = ConfigParser.ConfigParser()
config.read('app.conf')

DATABASES = {
    'default': {
        'ENGINE': config.get('database', 'engine'),
        'NAME': config.get('database', 'name'),
        'USER': config.get('database', 'user'),
        'PASSWORD': config.get('database', 'password'),
        'HOST': config.get('database', 'host'),
        'PORT': config.get('database', 'port'),
    }
}
And the development and production servers each get their own app.conf file (which are excluded from version control as a nice side benefit).
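For reference, a matching app.conf might look like this (a sketch; the section and option names mirror the config.get() calls above, and the values are placeholders):

[database]
engine = django.db.backends.postgresql
name = myapp
user = myapp_user
password = changeme
host = localhost
port = 5432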
[NOTE: This is using Django 4.0.2, Python 3.8.2, and Postgres 14.2.]
I have successfully set up Django and Postgres, and I can get them to work together when I put all of the Postgres parameters in the Django settings.py file. However, as shown in the Django documentation, I want to use a Postgres connection service file instead. I've created a service file (C:\Program Files\PostgreSQL\14\etc\pg_service.conf) that looks like this:
[test_svc_1]
host=localhost
user=django_admin
dbname=MyDbName
port=5432
Launching psql from the command line with this service file seems to work fine, as it prompts me for the password:
> psql service=test_svc_1
Password for user django_admin:
However, when I try to make migrations with Django, I get the following error:
Traceback (most recent call last):
  File "C:\...\django\db\backends\base\base.py", line 219, in ensure_connection
    self.connect()
  File "C:\...\django\utils\asyncio.py", line 26, in inner
    return func(*args, **kwargs)
  File "C:\...\django\db\backends\base\base.py", line 200, in connect
    self.connection = self.get_new_connection(conn_params)
  File "C:\...\django\utils\asyncio.py", line 26, in inner
    return func(*args, **kwargs)
  File "C:\...\django\db\backends\postgresql\base.py", line 187, in get_new_connection
    connection = Database.connect(**conn_params)
  File "C:\Users\...\psycopg2\__init__.py", line 122, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: definition of service "test_svc_1" not found
There were other exceptions related to this one, such as:
django.db.utils.OperationalError: definition of service "test_svc_1" not found
but they all pointed back to not finding the service "test_svc_1".
Here is an excerpt from my Django settings.py file. Adding the NAME parameter got me a little further along, but I shouldn't need to include it once Django(?) finds the connection service file.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'HOST': 'localhost',
        'NAME': 'MyDbName',
        'OPTIONS': {
            'service': 'test_svc_1',
            'passfile': '.my_pgpass',
        },
    },
}
Any thoughts as to what I'm missing? Worst case, I guess that I can revert to using environment variables and have the settings.py file refer to them. But I'd like to understand what I'm doing wrong rather than giving up.
Thanks for any guidance.
In case anyone runs across this question, I finally got it to work with service and password files thanks to some guidance from Ken Whitesell on the Django forum. Here is what I needed to do:
I had to create Windows environment variables for PGPASSFILE and PGSERVICEFILE that point to the two files. I placed them both in C:\Program Files\PostgreSQL\14\etc, although I'm guessing that they can go into other directories as long as the environment variables point to them.
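A quick sanity check that the variables are actually visible to Python (a sketch; note that on Windows a shell or IDE only sees variables set before it was started):

import os

# Both should print the full paths to the files, not None.
print(os.environ.get("PGPASSFILE"))
print(os.environ.get("PGSERVICEFILE"))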
Next, I had to modify my database record in Django's settings.py as follows (which is different from what the Django docs specify):
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'HOST': 'localhost',
        'NAME': 'MyDbName',
        'OPTIONS': {
            'service': 'test_svc_1',
        },
    },
}
The service name in OPTIONS needs to match the service name in the PGSERVICEFILE.
Two differences from the Django docs:
I needed to include the database name.
I couldn’t get it to work with a ‘passfile’ entry in the OPTIONS dictionary, but it does work when I delete that. Perhaps I was doing something wrong, but I tried several different options and folders.
EDIT: Let me add some explanation to avoid misunderstanding.
The Django documentation is correct, but I would add one note about writing paths on Windows: use forward slashes (/), as noted elsewhere in the docs wherever a file path must be written.
Your approach above (without the passfile key) works because you added the PGPASSFILE environment variable and psycopg2 reads the path from it.
But you can also specify the path to pgpass.conf directly (or retrieve it from the environment variable; see the commented line below).
Also, the DB host and DB name should be placed in the .pg_service.conf file. See all the parameter keywords in the PostgreSQL docs.
Considering this, the following example files should work (using your settings values):
# C:\Program Files\PostgreSQL\14\etc\.pg_service.conf
[test_svc_1]
host=localhost
user=django_admin
dbname=MyDbName
port=5432
# Django `settings.py`
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'OPTIONS': {
            'service': 'test_svc_1',
            # 'passfile': os.getenv("PGPASSFILE"),
            'passfile': "C:/Program Files/PostgreSQL/14/etc/pgpass.conf",
        },
    }
}
And of course, line with correct credentials in pgpass.conf must be present:
#hostname:port:database:username:password
localhost:5432:MyDbName:django_admin:your_secret
It turns out that the service name in the ~/.pg_service.conf file must be the database name, e.g. [djangodb]. After that, it was not necessary to pass DBNAME. Also, the environment variable should contain the service name only, not the file path. I was also having problems in Django 4 reading my env vars in settings.py with os.environ.get('VAR'); after changing to str(os.getenv('VAR')) things went smoothly. Bye.
Using Process Monitor I found that the location of the file is %APPDATA%\postgresql\.pg_service.conf (i.e. C:\Users\<user>\AppData\Roaming\postgresql\.pg_service.conf).
Putting the file in that location didn't have any problems and I didn't have to set any system variables.
In the file I put the host, port, user and password and in django settings I only specified the database and the service.
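A sketch of that layout, reusing the service name from earlier in the thread (password is a valid libpq parameter keyword, so it can live in the service file):

# %APPDATA%\postgresql\.pg_service.conf
[test_svc_1]
host=localhost
port=5432
user=django_admin
password=your_secret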
The postgresql documentation is apparently not correct.
I'm hosting a Django app on Heroku and I do not want my database credentials in the settings.py file to be visible in GitHub. Rather, I want them to be accessed in Heroku's config vars. Heroku's docs says it's possible to access the config vars in Python code like this:
import os
from boto.s3.connection import S3Connection

s3 = S3Connection(os.environ['S3_KEY'], os.environ['S3_SECRET'])
So I set the relevant config vars like this:
heroku config:set PASSWORD=madeuppassword
And I tried to access the config vars in the source code as it says, like this:
DATABASES = {
    'default': {
        'ENGINE': os.environ['ENGINE'],
        'NAME': os.environ['NAME'],
        'USER': os.environ['USER'],
        'PORT': os.environ['PORT'],
        'PASSWORD': os.environ['PASSWORD'],
        'HOST': os.environ['HOST']
    }
}
However when running the server I get this error in my command prompt:
raise KeyError(key) from None
KeyError: 'ENGINE'
To the best of my knowledge I set and accessed Heroku's config vars as it's demonstrated in their docs but I keep getting this error. What am I possibly doing wrong that it keeps bringing up this error?
Any help is appreciated!
A buddy and I are developing a Django app and are using git.
As we work, we make fake accounts on our site, log in, upload content to the database, etc., for testing purposes. Every time we merge branches, we get merge conflicts in our database file. The database file is in the repository, and, since we're testing separately, the local copies of the file diverge.
How do I prevent the database file from being tracked, so we can each keep our local copies?
With the following, we've been able to avoid using a local path:
## settings.py
from os.path import dirname, join

PROJECT_DIR = dirname(__file__)

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': join(PROJECT_DIR, 'foo.db'),
        'USER': '',
        'PASSWORD': '',
        'HOST': '',
        'PORT': '',
    }
}
What would be ideal is something like:

## settings.py
from os.path import dirname, join

PROJECT_DIR = dirname(__file__)

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': join('../../../', PROJECT_DIR, 'foo.db'),  # this path is outside the repository (ie, 'Users/sgarza62/foo.db')
        'USER': '',
        'PASSWORD': '',
        'HOST': '',
        'PORT': '',
    }
}
How can we keep our database files from being committed?
Add your database file to .gitignore. Then you can keep it in its current location, but it will not be under version control.
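For example, with the foo.db file from the question, the .gitignore entry is just:

# .gitignore
foo.db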
First off, you'll want to remove the database file from your git repository while keeping your local copy:
git rm --cached <database_file>
To prevent the file from being added to your repository, create a file named ".gitignore" inside your checkout of the repository, add the database file to .gitignore, and add .gitignore to your repository. (Documentation)
To prevent conflicts with settings.py, I also add settings.py to .gitignore. I then create a file called "settings.production.py", which contains all of the settings for the production server, and add it to the repository. On my local checkout, I simply copy this file into settings.py and modify variables as needed. On my production server, I make a symlink to settings.production.py.
ln -s settings.production.py settings.py
WARNING:
If your repository is public, it should never store secret keys, passwords, certificates, etc. You don't want others to have access to these files.
You should also verify that your web server does not serve ".git" folders. A hacker could gain access to your source code if http://example.com/.git is accessible.
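With nginx, for example, one way to block it is a location rule like this (a sketch):

location ~ /\.git {
    deny all;
}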
When you work on a project with other people sharing the repo, you should create a local_settings.py and keep all local settings there :) Then in settings.py just add from local_settings import *.
And add local_settings.py and the database file to your .gitignore file.
For example, if your file is named database.db, then in the directory with this file create a file named .gitignore and write database.db in it (or *.db to ignore all db files).
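Concretely, the tail of settings.py might look like this (a sketch; the try/except keeps things working when no local_settings.py exists):

try:
    from local_settings import *
except ImportError:
    pass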
This is a common problem. I would recommend not checking in the database; instead, load and save data fixtures as needed. (https://docs.djangoproject.com/en/dev/howto/initial-data/)
Create a test_data directory and run the following commands to export your database to a database agnostic json file:
./manage.py dumpdata > test_data/test_file_1.json
Check that file in to source control. If you later want to restore the database to that point, simply run:
./manage.py loaddata test_data/test_file_1.json
This also has the advantage of being used for unit tests (read Loading fixtures in django unit tests)
from django.test import TestCase

class MyTestCase(TestCase):
    fixtures = ['/myapp/fixtures/dump.json',]
I'm working with some friends to build a PostgreSQL/SQLAlchemy Python app and have the following line:
engine = create_engine('postgresql+pg8000://oldmba@localhost/helloworld')
Newbie question: Instead of having to edit in "oldmba" (my username) all the time whenever I git pull someone else's code, what's the simple way to make that line equally applicable to all users so we don't have to constantly edit it? Thanks in advance!
Have a config file with your settings.
It can store the data in a Python dictionary or in plain variables.
The config file can import from a local_config.py file. This file can be ignored in your .gitignore. It can contain your individual settings: username, password, database URLs, pretty much anything that you need to configure and that may differ depending on your environment (production vs. development).
This is how settings in Django projects are usually handled. It allows multiple people to develop on the same project with different settings. You might also want a 'database_url' field or similar, so that in production you can point at a different database server while in development you use 'localhost'.
# config.py
database = {
    'username': 'production_username',
    'password': 'production_password'
}

try:
    from local_config import *
except ImportError:
    pass

# local_config.py
database = {
    'username': 'your_username',
    'password': 'your_password'
}
from config import *

engine = create_engine('postgresql+pg8000://{0}@localhost/helloworld'.format(database['username']))
A question on app callables, WSGI servers and Flask circular imports
I am (possibly) confused. I want to safely create Flask / WSGI apps
from app-factories and still be able to use them in WSGI servers easily.
tl;dr
1. Can I safely avoid creating an app on import of __init__ (as recommended) and instead create it later (i.e. with a factory method)?
2. How do I make that app work neatly with a WSGI server? Especially when I am passing in the config and other settings rather than pulling them from ENV.
For example::
def make_app(configdict, appname):
    app = Flask(appname)
    app.config.update(configdict)
    init_db(configdict)
    set_app_in_global_namespace(app)
    # importing now will allow `from pkg import app`
    from mypackage import views
    return app
I would like to use the above "factory", because I want to easily control config for testing etc.
I then presumably want to create a wsgi.py module that provides the app to a WSGI server.
So eventually things look a bit like this
__init__.py::

app = None

def make_app(configdict, appname):
    flaskapp = Flask(appname)
    flaskapp.config.update(configdict)
    init_db(configdict)
    global app
    app = flaskapp
    # importing now will allow `from pkg import app`
    from mypackage import views
    return flaskapp
wsgi.py::

from mypackage import make_app

app = make_app(configfromsomewhere, "myname")
uWSGI::
uwsgi --module=mypackage.wsgi:app
But still wsgi.py is NOT something I can call like wsgi.py --settings=x --host=10.0.0.1
So I don't really know how to pass the config in.
I am asking because while this seems ... OK ... it also is a bit messy.
Life was easier when everything was in the ENV.
And not only but also:
So what is unsafe about using an app-factory?
The advice given here (http://flask.pocoo.org/docs/patterns/packages) is::
1. the Flask application object creation has to be in the __init__.py file. That way each module can import it safely and the __name__ variable will resolve to the correct package.
2. all the view functions (the ones with a route() decorator on top) have to be imported in the __init__.py file. Not the object itself, but the module it is in. Import the view module after the application object is created.
re: 2., clearly the route decorator expects certain abilities from an instantiated app and cannot function without them. That's fine.
re: 1., OK, we need the name correct. But what is unsafe? And why? Is it unsafe to import and use the app if it is uninitialised? Well, it will break, but that's not unsafe.
Is it the much-vaunted thread-local? Possibly. But if I am plucking app instances willy-nilly from random modules I should expect trouble.
Implications: we do not reference the app object from anything other than the views; essentially we keep our modularisation nice and tight, and pass around dicts, error objects, or even WebObs.
http://flask.pocoo.org/docs/patterns/appdispatch
http://flask.pocoo.org/docs/deploying/#deployment
http://flask.pocoo.org/docs/patterns/packages/#larger-applications
http://flask.pocoo.org/docs/becomingbig
According to the Flask Documentation, an application factory is good because:
Testing. You can have instances of the application with different settings to test every case.
Multiple instances. Imagine you want to run different versions of the same application. Of course you could have multiple instances with different configs set up in your webserver, but if you use factories, you can have multiple instances of the same application running in the same application process which can be handy.
But, as is stated in the Other Testing Tricks section of the documentation, if you're using application factories the before_request() and after_request() functions will not be called automatically.
In the next paragraphs I will show how I've been using the application factory pattern with the uWSGI application server and nginx (I've only used those, but I can try to help you configure it with another server).
The Application Factory
So, let's say you have your application inside the folder yourapplication and inside it there's the __init__.py file:
import os
from flask import Flask

def create_app(cfg=None):
    app = Flask(__name__)
    load_config(app, cfg)
    # import all route modules
    # and register blueprints
    return app

def load_config(app, cfg):
    # Load a default configuration file
    app.config.from_pyfile('config/default.cfg')
    # If cfg is empty try to load config file from environment variable
    if cfg is None and 'YOURAPPLICATION_CFG' in os.environ:
        cfg = os.environ['YOURAPPLICATION_CFG']
    if cfg is not None:
        app.config.from_pyfile(cfg)
Now you need a file to create an instance of the app:
from yourapplication import create_app

app = create_app()

if __name__ == "__main__":
    app.run()
In the code above I'm assuming there's an environment variable set with the path to the config file, but you could give the config path to the factory, like this:
app = create_app('config/prod.cfg')
Alternatively, you could have something like a dictionary with environments and corresponding config files:
CONFIG_FILES = {'development': 'config/development.cfg',
                'test':        'config/test.cfg',
                'production':  'config/production.cfg'}
In this case, the load_config function would look like this:
def load_config(app, env):
    app.config.from_pyfile('config/default.cfg')
    var = "YOURAPPLICATION_ENV"
    if env is None and var in os.environ:
        env = os.environ[var]
    if env in CONFIG_FILES:
        app.config.from_pyfile(CONFIG_FILES[env])
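With this variant, selecting an environment might look like this (a sketch):

app = create_app('production')  # explicit
# or: set YOURAPPLICATION_ENV=production in the environment, then
app = create_app()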
Nginx and uWSGI
Here's an example of a configuration file for nginx:
server {
    listen 80;
    server_name yourapplication.com;

    access_log /var/www/yourapplication/logs/access.log;
    error_log /var/www/yourapplication/logs/error.log;

    location / {
        try_files $uri @flask;
    }

    location @flask {
        include uwsgi_params;
        uwsgi_pass unix:/tmp/yourapplication.sock;

        # /env is the virtualenv directory
        uwsgi_param UWSGI_PYHOME /var/www/yourapplication/env;
        # the path where the module run is located
        uwsgi_param UWSGI_CHDIR /var/www/yourapplication;
        # the name of the module to be called
        uwsgi_param UWSGI_MODULE run;
        # the variable declared in the run module, an instance of Flask
        uwsgi_param UWSGI_CALLABLE app;
    }
}
And the uWSGI configuration file looks like this:
[uwsgi]
plugins=python
vhost=true
socket=/tmp/yourapplication.sock
env = YOURAPPLICATION_ENV=production
logto = /var/www/yourapplication/logs/uwsgi.log
How to use before_request() and after_request()
The problem with those functions is that if you are calling them in other modules, those modules cannot be imported before the application has been instantiated. Again, the documentation has something to say about that:
The downside is that you cannot use the application object in the blueprints at import time. You can however use it from within a request. How do you get access to the application with the config? Use current_app:
from flask import current_app, Blueprint, render_template

admin = Blueprint('admin', __name__, url_prefix='/admin')

@admin.route('/')
def index():
    return render_template(current_app.config['INDEX_TEMPLATE'])
Or you could consider creating an extension; then you could import the class without any existing Flask instance, as the extension class would only use the Flask instance after it has been created.
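A minimal sketch of that extension pattern (the class name and config key are illustrative, not from the Flask docs):

class Database(object):
    # Importable at module level; binds to a Flask app only later.
    def __init__(self, app=None):
        if app is not None:
            self.init_app(app)

    def init_app(self, app):
        # Reads config only once a real app exists, e.g. inside create_app().
        self.uri = app.config.get('DATABASE_URI')

db = Database()  # safe to import anywhere, no Flask instance needed yet

# ...and inside create_app():
#     db.init_app(app)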