"real" global variables in Python for programwide settings - python

I have this in my main execution file:
databaseUrl = appPath + '/db/timey.db'
which is pointing to my SQLite DB.
I have done some encapsulation for accessing my model (Data/DB). So until databaseUrl finally gets used, I would need to pass it from main -> view.py -> model.py -> db.py.
This would be stupid, because e.g. my view class doesn't need to know about my database or its path. So what would be a proper approach to make this path "globally" accessible without passing it around all the time?
I tried to make databaseUrl global, but I don't like the idea and haven't really gotten it to work like that anyway.
Storing the information in an external file would be overkill, as it is the only (constant!) globally used variable anyway.
Thank you!

I don't think storing it in an external file would be overkill at all. I think it is helpful to have a settings or config file:
# settings.py
import os

PROJECT_ROOT = os.path.abspath(os.path.dirname(__file__))

DATABASE_URL = ''
DATABASE_USER = ''
DATABASE_PASSWORD = ''
# etc.
Then, anywhere you need it:
from myproject import settings
settings.DATABASE_URL
This is what Django and Scrapy do to store a project's database config settings, and all other project settings.
Also, if you have a database class it might make sense to store these settings inside of it? I don't know.
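If you went that route, a minimal sketch could look like this (assuming the settings module above; Database and connect are hypothetical names, and sqlite3 is used since the question points at a SQLite DB):
# db.py - sketch: the data layer reads the path from settings itself,
# so main -> view.py -> model.py never have to pass it along
import sqlite3

from myproject import settings

class Database(object):
    def __init__(self, url=None):
        # fall back to the project-wide constant when no explicit url is given
        self.url = url or settings.DATABASE_URL

    def connect(self):
        return sqlite3.connect(self.url)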
The settings.py file lets you easily set up development / local database URLs. To do this, you can have a local_settings.py file that is never put under version control or packaged with your project.
In your settings.py file:
try:
    from local_settings import *
except ImportError:
    pass
Then in local_settings.py you can override DATABASE_URL to point at your dev database! This is a Django project convention.
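For example (hypothetical path, never committed):
# local_settings.py
DATABASE_URL = '/home/me/dev/timey-dev.db'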

Related

How to reorganize the Django settings file?

I have this standard Django-generated settings layout:
project/project
| __init__.py
| settings.py
| urls.py
| wsgi.py
And it's working. However, I want to reorganize it into this:
project/project
| settings/
|   __init__.py
|   base.py
| __init__.py
| urls.py
| wsgi.py
When I do it, of course, it stops working:
django.core.exceptions.ImproperlyConfigured: The SECRET_KEY setting must not be empty.
So where and what do I need to change in order for this to work?
Well, according to your error, you have no SECRET_KEY defined. You need to add one to your settings.py.
Django will refuse to start if SECRET_KEY is not set. You can read more about it in the docs.
The SECRET_KEY can be anything... but if you want to use Django to generate one, you can do the following from the Python shell:
>>> from django.utils.crypto import get_random_string
>>> chars = 'abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)'
>>> SECRET_KEY = get_random_string(50, chars)
>>> print(SECRET_KEY)
Copy the SECRET_KEY into your settings file.
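As an aside, newer Django versions (1.10+, if I remember correctly) ship a helper that does the same in one call:
>>> from django.core.management.utils import get_random_secret_key
>>> print(get_random_secret_key())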
Hope this will help you! :)
It's very simple, just edit manage.py:
if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings")
to
if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings.base")
and also wsgi.py:
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings")
to
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings.base")
If you moved your old settings.py into settings/base.py, I assume you want to split your Django settings into multiple files.
So the only thing you have to do is ensure all your settings are available at module level. In settings/__init__.py, write something like:
from .base import *
from .another_setting_file import *
# etc.
Then all your settings variables (settings.SECRET_KEY, settings.DEBUG, etc.) will be available to the rest of the Django app, without needing to modify the default DJANGO_SETTINGS_MODULE value.
Bonus: using this method, you can override a setting defined in base.py in another file, as long as you import base first in __init__.py.
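For example (hypothetical file names and values):
# settings/__init__.py
from .base import *    # base.py defines, say, DEBUG = False
from .local import *   # local.py re-defines DEBUG = True, which wins: it is imported last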
Django looks at the DJANGO_SETTINGS_MODULE environment variable by default. It's set to the
'{your_project_name}.settings' module or package (again, by default). So you can change that to '{your_project_name}.settings.base' to read settings from the base.py module.
But you can also accomplish that without modifying environment variables. With your folder structure, Django looks into the settings package you've created, so you just need to import all the settings defined in base.py there. Just place this code into the settings/__init__.py module:
from .base import *
This also allows you to import settings from different modules and perform more complex logic, like checking environment variables and importing settings based on your current environment. For example, importing settings by checking which environment the app is running in:
import os

# Default to the local environment
environment = os.getenv('MACHINE_ENV', 'local')

if environment.lower() == 'local':
    from .local import *
elif environment.lower() == 'staging':
    from .staging import *
elif environment.lower() == 'prod':
    from .prod import *

Using globals() to augment Django settings

It seems to be relatively normal in the Django world to import a local environment settings override file at the end of settings.py, something like:
from settings_local import *
This has a couple of issues, most notably for me that you cannot inspect or modify base settings, only override them - which leads to some painful duplication and headaches over time.
Instead, is there anything wrong with importing an init function from a local file and injecting the output of globals(), to allow that local file to read and modify the settings a bit more intelligently?
settings.py:
from settings_local import init
init(globals())
settings_local.py:
def init(config):
    config['INSTALLED_APPS'].append('some.app')
This seems to address a number of the concerns related to using import *, but I'd like to know if this is a bad idea for a reason I am not yet aware of.
The more common and readable way to handle this is to have multiple settings files, where you put all the common settings in one module and import that at the beginning of the environment-specific settings files:
settings/
    __init__.py
    base.py
    development.py
    staging.py
    production.py
This gives you the mutations you are referring to:
# base.py
INSTALLED_APPS = [
    'foo',
    'bar',
]

# development.py
from .base import *

INSTALLED_APPS += [
    'debug_toolbar',
]
Then you run your app locally with DJANGO_SETTINGS_MODULE=settings.development.

How can I set a variable in my Django project settings based on modules in my apps?

I really like django-pipeline, but I wish to set my assets inside each app. It's cleaner and doesn't mess with settings.py. So in the __init__.py of a "core" app I have the code below.
# __init__.py file
from django.conf import settings
from django.utils.importlib import import_module

GLOBAL_PP_JS = {}
GLOBAL_PP_CSS = {}

for app in settings.INSTALLED_APPS:
    try:
        mod = import_module('%s.compressed' % app)
    except:
        continue
    try:
        GLOBAL_PP_JS.update(mod.PIPELINE_JS)
    except:
        pass
    try:
        GLOBAL_PP_CSS.update(mod.PIPELINE_CSS)
    except:
        pass

PIPELINE_JS = setattr(settings, 'PIPELINE_JS', GLOBAL_PP_JS)
PIPELINE_CSS = setattr(settings, 'PIPELINE_CSS', GLOBAL_PP_CSS)
It searches for compressed.py modules in each app.
# compressed.py file
PIPELINE_JS = {
    'js_group': {
        'source_filenames': (
            'js/base.js',
        ),
        'output_filename': 'js/group.js',
    }
}
Well, it isn't working, since settings has security features to prevent overriding its variables.
Can someone point me to some Django pattern or workaround to make this code work?
I'm using Django 1.7 and django-pipeline 1.4.3.
Django settings are only write-protected outside the settings file. As long as you don't create circular imports, it is fine to dynamically set PIPELINE_JS and PIPELINE_CSS based on the compressed modules in your installed apps. Obviously, the order matters, and INSTALLED_APPS needs to be complete (or at least include all apps that have a compressed module) before you load the js and css files.
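A minimal sketch of that approach, assuming each app's compressed.py contains only plain dicts (so importing it from the settings file cannot create a cycle):
# settings.py - build the pipeline dicts here, once INSTALLED_APPS is complete
from importlib import import_module

INSTALLED_APPS = (
    'django.contrib.staticfiles',
    'core',  # hypothetical app that ships a compressed.py
)

PIPELINE_JS = {}
PIPELINE_CSS = {}
for app in INSTALLED_APPS:
    try:
        mod = import_module('%s.compressed' % app)
    except ImportError:
        continue
    PIPELINE_JS.update(getattr(mod, 'PIPELINE_JS', {}))
    PIPELINE_CSS.update(getattr(mod, 'PIPELINE_CSS', {}))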
Another way is to use a descriptor object (with __get__ and __set__ methods) as your PIPELINE_JS and PIPELINE_CSS settings, one that loads the app-based settings lazily. This will ensure that all INSTALLED_APPS are available.

Module imported twice causes recreation of an object inside it

I have a Django project. Part of this Django project was a reporting module that searches for a reports directory inside all INSTALLED_APPS, very similar to the autodiscover mechanism of the admin interface.
This module had a small registry class that registers the classes it finds. In a very simplified way it looks something like this:
def autodiscover():
    """
    Searches for reports module in all INSTALLED_APPS
    """
    global REPORTSLOADING
    if REPORTSLOADING:
        return
    REPORTSLOADING = True
    import imp
    from importlib import import_module
    from django.conf import settings
    for app in settings.INSTALLED_APPS:
        try:
            app_path = import_module(app).__path__
        except AttributeError:
            continue
        try:
            imp.find_module('reports', app_path)
        except ImportError:
            continue
        import_module("%s.reports" % app)
    REPORTSLOADING = False

class ReportsRegistery(object):
    .....

registery = ReportsRegistery()
If any of the INSTALLED_APPS needs to register a Report class, we need these lines inside its reports/__init__.py:
import reports
reports.registery.register(SomeReportClass)
And inside the main urls.py I would do:
import reports
reports.autodiscover()
urlpatterns = patterns('',
    ....
    (r'', include(reports.registery.urls)),
)
Now I decided to create a pluggable Django application for it, and placed the same code in the __init__.py of the package. The problem I am facing is that the reports module with the new structure gets imported twice, thus causing the recreation of the 'registery' object, so no URLs are actually registered. It's loaded one time from the import inside urls.py (as expected) and another time initiated by autodiscover. I have verified this by:
print(hex(id(registery)))
and found out it returned 2 different values.
I thought that the reports package would be imported only once, just like when it was a plain module.
How can I prevent it from being loaded twice? Or how can I ensure that we will have only one ReportsRegistery instance to work with?
It's not uncommon for Django to import modules twice. There are two reasons for this:
The classic Django project layout encouraged you to have your working directory on the path at two different locations. This meant you could import something as project.module, or as app.project.module, which would confuse the import machinery.
The settings.py file is actually imported twice.
Fixes:
Double-check that all of your imports use the same style of path.
Don't import your module from settings.py.
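To see the first failure mode concretely, here is a small sketch (hypothetical layout; run from the directory that contains myproject/):
# demo.py - the same file imported under two names gives two module objects
import os
import sys

# hypothetical: put myproject/ itself on the path, next to its parent
sys.path.insert(0, os.path.abspath('myproject'))

import reports            # resolved as myproject/reports.py, top-level name
import myproject.reports  # the same file, under its dotted name

# two distinct entries in sys.modules, so module-level objects like
# `registery` get created twice
print(reports is myproject.reports)  # False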

Access models in another project in a Django view causes "table doesn't exist" error

Base project structure:
baseproject
| baseapp
|   models.py   # contains: class BaseModel(models.Model): ...
Other project structure:
project
| app
|   views.py
| urls.py
project/app/views.py:
import os
os.environ['DJANGO_SETTINGS_MODULE'] = 'project.settings'
from django.conf import settings
from baseproject.baseapp.models import BaseModel

print(BaseModel.objects.count())
it raised "Table 'project.baseapp_baemodel' doesn't exist" error when run from command line: "python views.py".
import os
os.environ['DJANGO_SETTINGS_MODULE'] = 'baseproject.settings'
from django.conf import settings
from baseproject.baseapp.models import BaseModel

print(BaseModel.objects.count())
After changing project.settings to baseproject.settings, it works on the command line.
import os
os.environ['DJANGO_SETTINGS_MODULE'] = 'baseproject.settings'
from django.conf import settings
from django.shortcuts import render_to_response
from baseproject.baseapp.models import BaseModel

def someview(request):
    count = BaseModel.objects.count()
    return render_to_response(...)
But it still raised the "Table 'project.baseapp_basemodel' doesn't exist" error when accessing the view by opening the corresponding URL in a browser.
What's wrong with the above code?
You are fighting against the framework here, and you'll be better off if you rethink your architecture. Django is built around the assumption that a project = a given set of INSTALLED_APPS, and the project settings name a database to which those apps are synced. It's not clear here what problem you have with just doing things that way, but whatever you're trying to achieve, it can be achieved without trying to import models from an app that is not in your current project's INSTALLED_APPS. That is never going to work reliably.
If there's an app you want in both projects, you should put it on your PYTHONPATH (or in virtualenvs) so both projects can access it, and put it in the INSTALLED_APPS of both projects. If you also need its data shared between the projects, you might be able to point both projects at the same database (though you'd need to be careful of other conflicting app names that you might not want to share data). Or you could use the multi-database support that's now in Django trunk to have the one project use the other project's database only for that one app.
My guess is if you back up a step and explain what you're trying to do, there are even better solutions available than those.
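For the multi-database option, a rough sketch using the API as it eventually landed (hypothetical names):
# settings.py of `project` - two connections; sqlite is just for illustration
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'project.db',
    },
    'baseproject_db': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'baseproject.db',  # the other project's database
    },
}
DATABASE_ROUTERS = ['project.routers.BaseAppRouter']

# project/routers.py
class BaseAppRouter(object):
    """Send all queries for baseapp models to baseproject's database."""
    def db_for_read(self, model, **hints):
        if model._meta.app_label == 'baseapp':
            return 'baseproject_db'
        return None

    def db_for_write(self, model, **hints):
        if model._meta.app_label == 'baseapp':
            return 'baseproject_db'
        return None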
