I have a Django project. Part of this project is a reporting module that searches for a reports directory inside all INSTALLED_APPS, very similar to the autodiscover mechanism of the admin interface.
This module has a small registry class that registers the Report classes it finds. In a very simplified way, it looks something like this:
REPORTSLOADING = False

def autodiscover():
    """
    Searches for a reports module in all INSTALLED_APPS.
    """
    global REPORTSLOADING
    if REPORTSLOADING:
        return
    REPORTSLOADING = True
    import imp
    from importlib import import_module
    from django.conf import settings
    for app in settings.INSTALLED_APPS:
        try:
            app_path = import_module(app).__path__
        except AttributeError:
            continue
        try:
            imp.find_module('reports', app_path)
        except ImportError:
            continue
        import_module("%s.reports" % app)
    REPORTSLOADING = False
class ReportsRegistery(object):
    .....

registery = ReportsRegistery()
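For reference, a minimal registry along these lines might look like the sketch below (hypothetical: the real class is elided above, and the get_urls() hook on report classes is an assumption):

class ReportsRegistery(object):
    def __init__(self):
        self._registry = []

    def register(self, report_class):
        # Idempotent: registering twice keeps a single entry
        if report_class not in self._registry:
            self._registry.append(report_class)

    @property
    def urls(self):
        # Collect the URL patterns contributed by each registered report
        # (how a report exposes them is up to the report class)
        urlpatterns = []
        for report in self._registry:
            urlpatterns += report.get_urls()
        return urlpatterns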
If any of the INSTALLED_APPS needs to register a Report class, we need these lines inside its reports/__init__.py:
import reports
reports.registery.register(SomeReportClass)
And inside the main urls.py I would do:
import reports

reports.autodiscover()

urlpatterns = patterns('',
    ....
    (r'', include(reports.registery.urls)),
)
Now I have decided to turn this into a pluggable Django application, and I placed the same code in the package's __init__.py. The problem I am facing is that with the new structure the reports module gets imported twice, which recreates the registery object, so no URLs are actually registered. It is loaded once by the import inside urls.py (as expected) and a second time by autodiscover. I verified this by:
print hex(id(registery))
and found that it printed two different values.
I thought the reports package would be imported only once, just as when it was a plain module.
How can I prevent it from being loaded twice? Or how can I ensure that there is only one ReportsRegistery instance to work with?
It's not uncommon for Django to import modules twice. There are two reasons for this:
The classic Django project layout encouraged you to have your working directory on the path at two different locations, which meant you could import the same file as app.module or as project.app.module. The import machinery treats those as two different modules and creates two module objects.
The settings.py file is actually imported twice, so anything imported from it is imported twice as well.
Fixes:
Double-check that all of your imports use the same style of path.
Don't import your module from settings.py
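To see whether this is what's happening, a quick diagnostic (a sketch; drop it in anywhere after the imports have run) is to list every entry in sys.modules that looks like your reports module. If the same file shows up under two dotted names, you have found the double import:

import sys

# Print every loaded module named 'reports' (top-level or nested);
# two entries backed by the same file means the package was imported
# under two different names.
for name, mod in sys.modules.items():
    if name.split('.')[-1] == 'reports' and mod is not None:
        print(name, getattr(mod, '__file__', '?'))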
I have a Django project containing some files which are, obviously, not automatically discovered by Django. My workaround is to import them in urls.py so that Django can see them. This is what my urls.py looks like:
from django.contrib import admin
from django.urls import path
from custom_file_1 import * # "unused" import
from custom_file_2 import * # "unused" import
urlpatterns = [
    ...
]
My IDE considers the marked imports unused, since they are never referenced, but they are vital so that Django can process those files.
And the question: is there a nicer way to let Django see those files? And how to do that?
It is usually not a good idea to import things with wildcards. Imagine that one of the custom files defines an object named path: it will then shadow the path function you imported from django.urls.
Usually one imports such files (ones that contain signals, for example) in the AppConfig.
In the directory of the app, there is an __init__.py file. You can write:
# app/__init__.py
default_app_config = 'app.config.CustomAppConfig'
In your app directory, you then define the config of your app:
# app/config.py
from django.apps import AppConfig

class CustomAppConfig(AppConfig):
    name = 'app'

    def ready(self):
        import custom_file_1  # noqa
        import custom_file_2  # noqa
Here # noqa is used to ignore the flake8 warnings.
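Note that since Django 3.2 the default_app_config line is deprecated and no longer needed: if the app defines a single AppConfig subclass in its apps.py, Django picks it up automatically, so the ready() hook alone is enough.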
I have an endpoint /docs in Django that I only want to be visible when DEBUG = True in settings; otherwise, it should throw a 404. My setup looks like this:
urls.py
urlpatterns = ...

if settings.DEBUG:
    urlpatterns += [
        url(r'^docs/$', SwaggerSchemaView.as_view(), name='api_docs'),
    ]
When testing, though, Django doesn't automatically reload urls.py, which means simply overriding DEBUG to True or False doesn't work.
My tests look something like this:
@override_settings(DEBUG=True)
@override_settings(ROOT_URLCONF='config.urls')
class APIDocsTestWithDebug(APITestCase):
    # check for 200s
    ...

@override_settings(DEBUG=False)
@override_settings(ROOT_URLCONF='config.urls')
class APIDocsTestWithoutDebug(APITestCase):
    # check for 404s
    ...
Now here's the weird part: When I run the tests individually using pytest path/to/test.py::APIDocsTestWithDebug and pytest path/to/test.py::APIDocsTestWithoutDebug, both tests pass. However, if I run the test file as a whole (pytest path/to/test.py), APIDocsTestWithDebug always fails. The fact that they work individually but not together tells me that the url override is working, but when the tests are in tandem, there is some bug that messes things up. I was wondering if anybody had come across a similar issue and either has an entirely different solution or can give me some tips as to what I'm doing wrong.
I struggled with the same issue. The thing is that Django loads your urlpatterns once, while initializing, and overriding the settings with the decorator doesn't change what was initially loaded.
Here's what worked for me - try reloading your urls module (based on this) and clearing url caches with clear_url_caches() before the failing test cases:
import sys
from importlib import reload, import_module

from django.conf import settings
from django.core.urlresolvers import clear_url_caches  # or: from django.urls import clear_url_caches

def reload_urlconf(urlconf=None):
    clear_url_caches()
    if urlconf is None:
        urlconf = settings.ROOT_URLCONF
    if urlconf in sys.modules:
        reload(sys.modules[urlconf])
    else:
        import_module(urlconf)
PS: You might also want to restore the urlpatterns afterwards - just run reload_urlconf again once the other settings are back in effect.
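Wired into the tests above, that could look like this (a sketch; it assumes the reload_urlconf helper from the previous block is importable and that the docs URL is /docs/):

from django.test import override_settings
from rest_framework.test import APITestCase

@override_settings(DEBUG=True, ROOT_URLCONF='config.urls')
class APIDocsTestWithDebug(APITestCase):
    def setUp(self):
        reload_urlconf()  # re-evaluate urls.py now that DEBUG is True

    def test_docs_visible(self):
        self.assertEqual(self.client.get('/docs/').status_code, 200)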
You can use @pytest.mark.urls: https://pytest-django.readthedocs.io/en/latest/helpers.html#pytest.mark.urls

@pytest.mark.urls('myapp.test_urls')
def test_something(client):
    assert b'Success!' in client.get('/some_url_defined_in_test_urls/').content
You could even define the URLs within the same file:
import pytest
from django.http import HttpResponse
from django.urls import path

def some_view(request):
    return HttpResponse(b"Success!")

urlpatterns = [
    path("some-url/", some_view),
]

@pytest.mark.urls(__name__)
def test_something(client):
    assert b'Success!' in client.get('/some-url/').content
It seems to be relatively normal in the Django world to import a local environment settings override file at the end of settings.py, something like:
from settings_local import *
This has a couple of issues, the most notable for me being that you cannot inspect or modify base settings, only override them - which leads to painful duplication and headaches over time.
Instead, is there anything wrong with importing an init function from a local file and injecting the output of globals() to allow that local file to read and modify the settings a bit more intelligently?
settings.py:
from settings_local import init

init(globals())
settings_local.py:
def init(config):
    config['INSTALLED_APPS'].append('some.app')
This seems to address a number of the concerns related to using import *, but I'd like to know if this is a bad idea for a reason I am not yet aware of.
The more common and readable way to handle this is to have multiple settings files, where you put all the common settings in one module and import that at the beginning of the environment-specific settings files:
settings/
    __init__.py
    base.py
    development.py
    staging.py
    production.py
This gives you the mutations you are referring to:
# base.py
INSTALLED_APPS = [
    'foo',
    'bar',
]

# development.py
from .base import *

INSTALLED_APPS += [
    'debug_toolbar',
]
Then you run your app locally with DJANGO_SETTINGS_MODULE=settings.development.
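If you don't want to export that variable by hand during development, one option (a sketch of a standard manage.py, with settings.development assumed as the local default) is to make it the fallback:

#!/usr/bin/env python
# manage.py (sketch): fall back to the development settings locally
import os
import sys

if __name__ == '__main__':
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'settings.development')
    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)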
I have this in my main execution file:
databaseUrl = appPath + '/db/timey.db'
which is pointing to my SQLite DB.
I have done some encapsulation for accessing my model (Data/DB). So until databaseUrl finally gets used, I would need to pass it from main -> view.py -> model.py -> db.py.
This would be stupid, because e.g. my view class doesn't need to know about my database or its path. So what would be a proper approach to make this path "globally" accessible without passing it around all the time?
I tried to make databaseUrl global, but I don't like the idea and haven't really gotten it to work that way anyway.
Storing the information in an external file seems like overkill, as it is the only (constant!) globally used variable anyway.
Thank you!
I don't think storing it in an external file would be overkill at all. I think it is helpful to have a settings or config file:
# settings.py
import os

PROJECT_ROOT = os.path.abspath(os.path.dirname(__file__))

DATABASE_URL = ''
DATABASE_USER = ''
DATABASE_PASSWORD = ''
# etc.
from myproject import settings
settings.DATABASE_URL
This is what Django and Scrapy do to store a project's database config settings, and all other project settings.
Also, if you have a database class it might make sense to store these settings inside of it? I don't know.
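To make that concrete, here is a minimal sketch (module and setting names are assumed, following the settings.py above) where only db.py reads the settings, so main, view.py and model.py never handle the path:

# db.py (sketch): the only module that knows where the database lives
import sqlite3

from myproject import settings

def connect():
    # The path is read here, so callers never pass it around
    return sqlite3.connect(settings.DATABASE_URL)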
The settings.py file lets you easily set up development/local database URLs.
To do this you can have a local_settings.py file that is never put under version control or packaged with your project.
In your settings.py file:
try:
    from local_settings import *
except ImportError:
    pass
Then in local_settings.py you can override DATABASE_URL to point at your dev database. This is a Django project convention.
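For example (a sketch; the path is made up):

# local_settings.py -- never committed to version control
DATABASE_URL = '/home/me/dev/timey.db'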
Base project structure:
baseproject/
    baseapp/
        models.py:
            class BaseModel(models.Model):
                ...
Other project structure:
project/
    app/
        views.py
        urls.py
project/app/views.py:
import os
os.environ['DJANGO_SETTINGS_MODULE'] = 'project.settings'
from django.conf import settings
from baseproject.baseapp.models import BaseModel
print BaseModel.objects.count()
it raised "Table 'project.baseapp_baemodel' doesn't exist" error when run from command line: "python views.py".
import os
os.environ['DJANGO_SETTINGS_MODULE'] = 'baseproject.settings'
from django.conf import settings
from baseproject.baseapp.models import BaseModel
print BaseModel.objects.count()
After changing project.settings to baseproject.settings, it works on the command line. But with the same change in a view:
import os
os.environ['DJANGO_SETTINGS_MODULE'] = 'baseproject.settings'
from django.conf import settings
from baseproject.baseapp.models import BaseModel
def someview(request):
    count = BaseModel.objects.count()
    return render_to_response(...)
But it still raised the "Table 'project.baseapp_baemodel' doesn't exist" error when accessing the view through the corresponding URL in a browser.
What's wrong with the above code?
You are fighting against the framework here, and you'll be better off if you rethink your architecture. Django is built around the assumption that a project = a given set of INSTALLED_APPS, and the project settings name a database to which those apps are synced. It's not clear here what problem you have with just doing things that way, but whatever you're trying to achieve, it can be achieved without trying to import models from an app that is not in your current project's INSTALLED_APPS. That is never going to work reliably.
If there's an app you want in both projects, you should put it on your PYTHONPATH (or in a virtualenv) so both projects can access it, and put it in the INSTALLED_APPS of both projects. If you also need its data shared between the projects, you might be able to point both projects at the same database (though you'd need to be careful about other apps with conflicting names whose data you don't want to share). Or you could use the multi-database support that's now in Django trunk to have one project use the other project's database for just that one app, as sketched below.
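A minimal sketch of that last option, using today's database-router API (the 'baseproject_db' alias and the module path are assumptions):

# project/routers.py (sketch): send every model in 'baseapp' to
# baseproject's database. Assumes settings.DATABASES defines an
# extra alias named 'baseproject_db'.
class BaseAppRouter(object):
    app_label = 'baseapp'
    db_alias = 'baseproject_db'

    def db_for_read(self, model, **hints):
        if model._meta.app_label == self.app_label:
            return self.db_alias
        return None

    def db_for_write(self, model, **hints):
        return self.db_for_read(model, **hints)

    def allow_migrate(self, db, app_label, **hints):
        # Keep baseapp's tables out of this project's own database
        if app_label == self.app_label:
            return db == self.db_alias
        return None

# settings.py:
# DATABASE_ROUTERS = ['project.routers.BaseAppRouter']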
My guess is if you back up a step and explain what you're trying to do, there are even better solutions available than those.