Resolving circular imports in Celery and Django - python

I have a Django app that uses Celery to offload some tasks. Mainly, it defers the computation of some fields in a database table.
So, I have a tasks.py:
from models import MyModel
from celery import shared_task

@shared_task
def my_task(id):
    qs = MyModel.objects.filter(some_field=id)
    for record in qs:
        my_value = ...  # do some computations
        record.my_field = my_value
        record.save()
And in models.py
from django.db import models
from tasks import my_task

class MyModel(models.Model):
    field1 = models.IntegerField()
    # more fields
    my_field = models.FloatField(null=True)

    @staticmethod
    def load_from_file(file):
        # parse file, set fields from file
        my_task.delay(id)
Now obviously, this won't work because of a circular import (models imports tasks and tasks imports models).
I've resolved this for the moment by calling my_task.delay() from views.py, but it seems to make sense to keep the model logic within the model class. Is there a better way of doing this?

The solution posted by joshua is very good, but when I first tried it, I found that my @receiver decorators had no effect. That was because the tasks module wasn't imported anywhere, which was expected since I used task auto-discovery.
There is, however, another way to decouple tasks.py from models.py. Namely, tasks can be sent by name, so they don't have to be evaluated (imported) in the process that sends them:
from django.db import models
# from tasks import my_task
import celery

class MyModel(models.Model):
    field1 = models.IntegerField()
    # more fields
    my_field = models.FloatField(null=True)

    @staticmethod
    def load_from_file(file):
        # parse file, set fields from file
        # my_task.delay(id)
        celery.current_app.send_task('myapp.tasks.my_task', (id,))
send_task() is a method on Celery app objects.
In this solution it is important to take care of correct, predictable names for your tasks.
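One way to keep those names predictable (a sketch, not part of the original answer; myapp and my_task are just the example names used above) is to set the task name explicitly in the decorator, so it no longer depends on how the module happens to be imported:

# myapp/tasks.py
from celery import shared_task

@shared_task(name='myapp.tasks.my_task')  # explicit name matches the string passed to send_task()
def my_task(id):
    ...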

In your models, instead of importing my_task at the top of the file, you can import it just before you use it. That solves the circular import problem.
from django.db import models

class MyModel(models.Model):
    field1 = models.IntegerField()
    # more fields
    my_field = models.FloatField(null=True)

    @staticmethod
    def load_from_file(file):
        # parse file, set fields from file
        from tasks import my_task  # import here instead of at the top
        my_task.delay(id)
Alternatively, you can do the same thing in your tasks.py: import your models just before you use them instead of at the top of the file.
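A minimal sketch of what that looks like (assuming the same MyModel and my_task names used earlier; this variant is not spelled out in the original answer):

from celery import shared_task

@shared_task
def my_task(id):
    from models import MyModel  # deferred import breaks the circular dependency
    qs = MyModel.objects.filter(some_field=id)
    for record in qs:
        record.my_field = ...  # do some computations
        record.save()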
Alternative:
You can use the send_task method to call your task:
from celery import current_app
from django.db import models

class MyModel(models.Model):
    field1 = models.IntegerField()
    # more fields
    my_field = models.FloatField(null=True)

    @staticmethod
    def load_from_file(file):
        # parse file, set fields from file
        current_app.send_task('myapp.tasks.my_task', (id,))

Just to toss one more not-great solution into this list, what I've ended up doing is relying on Django's now-built-in app registry.
So in tasks.py, rather than importing from models, you use apps.get_model() to gain access to the model.
I do this with a helper method with a healthy bit of documentation just to express why this is painful:
from django.apps import apps

def _model(model_name):
    """Generically retrieve a model object.

    This is a hack around Django/Celery's inherent circular import
    issues with tasks.py/models.py. In order to keep clean abstractions, we use
    this to avoid importing from models, introducing a circular import.

    No solutions for this are good so far (unnecessary signals, inline imports,
    serializing the whole object, tasks forced to be in model, this), so we
    use this because at least the annoyance is constrained to tasks.
    """
    return apps.get_model('my_app', model_name)
And then:
@shared_task
def some_task(post_id):
    post = _model('Post').objects.get(pk=post_id)
You could certainly just use apps.get_model() directly though.
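For instance, a task could look roughly like this without the helper (a sketch; 'my_app' and 'Post' are the example names already used above):

from celery import shared_task
from django.apps import apps

@shared_task
def some_task(post_id):
    Post = apps.get_model('my_app', 'Post')  # resolved at call time, no import from models.py needed
    post = Post.objects.get(pk=post_id)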

Use signals.
tasks.py
from models import MyModel, my_signal
from celery import shared_task
from django.dispatch import receiver

@shared_task
def my_task(id):
    qs = MyModel.objects.filter(some_field=id)
    for record in qs:
        my_value = ...  # do some computations
        record.my_field = my_value
        record.save()

@receiver(my_signal)
def my_receiver(sender, **kwargs):
    my_task.delay(kwargs['id'])
models.py
from django.db import models
from django.dispatch import Signal

my_signal = Signal(providing_args=['id'])

class MyModel(models.Model):
    field1 = models.IntegerField()
    # more fields
    my_field = models.FloatField(null=True)

    @staticmethod
    def load_from_file(file):
        # parse file, set fields from file
        my_signal.send(sender=?, id=?)

Not sure if this is anyone else's problem, but I took a few hours, and I found a solution...mainly, the key from the documentation:
Using the @shared_task decorator
The tasks you write will probably live in reusable apps, and reusable apps cannot depend on the project itself, so you also cannot import your app instance directly.
Basically what I was doing was this...
####
# project/coolapp/tasks.py -- DON'T DO THIS
import os
from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings")

app = Celery("coolapp")
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()

@app.task(bind=True)
def some_task(self, some_id):
    from coolapp.models import CoolPerson

####
# project/coolapp/__init__.py -- DON'T DO THIS
from __future__ import absolute_import, unicode_literals
from .tasks import app as celery_app

__all__ = ("celery_app",)
Therefore, I was getting weird errors about missing app labels (a clear indication of a circular import).
The solution...
Refactor project/coolapp/tasks.py -> project/project/tasks.py and project/coolapp/__init__.py -> project/project/__init__.py.
IMPORTANT: This should not (and does not need to) be added to INSTALLED_APPS. Otherwise, you'll get the circular import.
So then to start the worker:
celery -A project.project worker -l INFO
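To make the refactor concrete, here is a rough sketch of what the moved files might contain (the module paths mirror the answer above; the settings module name and model import are carried over from the earlier snippet and are assumptions, not verbatim from the answer):

####
# project/project/tasks.py -- the Celery app now lives in the project package
import os
from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings")  # assumed settings path

app = Celery("project")
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()

####
# project/project/__init__.py
from .tasks import app as celery_app
__all__ = ("celery_app",)

####
# project/coolapp/tasks.py -- app tasks only use shared_task, never the app instance
from celery import shared_task

@shared_task(bind=True)
def some_task(self, some_id):
    from coolapp.models import CoolPerson  # deferred import, only needed at run time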
Also, a little debugging tip...
When you want to find out if your tasks are properly discovered, put this in project/project/app.py:
app.autodiscover_tasks()
assert "project.app.tasks.some_task" in app.tasks
Otherwise, you'll have to start up the worker only to realize your tasks aren't included in the app, then you'll have to wait for shutdown.

Related

Django Signals - creating folder on model create

I have a model for which I would like to create a folder when an instance is created:
The model:
class Drive(models.Model):
    user = models.ManyToManyField(User)
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
    path = models.CharField(max_length=150, editable=False,
        default='C:/Users/User/Desktop/Python/RavNet/media/storage/drives/{}'.format(str(id)))

    def save(self):
        super().save()
I am trying to use signals, but I must admit this is my first attempt at writing a signal without following a tutorial step by step, and even after reading the documentation I am having a tough time.
from django.db.models.signals import post_save
from django.dispatch import receiver
from .models import Drive
import os

@receiver(post_save, sender=Drive)
def create_drive(sender, instance, created, **kwargs):
    if created:
        os.mkdir(Drive.path)
Nothing happens when I create a new Drive model via the Django admin. I have tested my code in the shell, using a placeholder path (C:/Users/User/Desktop/Python/RavNet/media/storage/drives/test) that I know works, in an attempt to debug, and have gotten as far as realising I am having two issues.
The first: when calling Drive.path in the shell, I am getting the path:
'C:/Users/User/Desktop/Python/RavNet/media/storage/drives/<django.db.models.fields.UUIDField>'
instead of a path with the actual id, as I hoped for. How do I solve this?
Secondly, my signal isn't working. It's like it isn't getting called. What am I doing wrong?
You must import your signals in apps.py. Please check your apps.py; it must look like this:
from django.apps import AppConfig

class YourAppConfig(AppConfig):
    name = '...'

    def ready(self):
        import your_project.your_app.signals
Also your app/__init__.py must include this code:
default_app_config = 'your_project.your_app.apps.YourAppConfig'
In case you have many signals and don't want to define an apps.py in your app, you can do the following:
Put the signals in a folder app/signals/ containing __init__.py, signal1.py, signal2.py.
Import all the files in __init__.py:
from .signal1 import *
from .signal2 import *
Then put from app.signals import * inside app/views/__init__.py.

Django signals duplicate with unique dispatch id

I have a problem with duplicate signals. I have looked up the relevant part of the Django docs and a similar question on Stack Overflow, but I still can't get it working right - i.e. the action that I have planned (creation of an ActivityLog entry) is actually happening 4 times :/
I have added the dispatch_uid and the problem still persists, so I guess I'm doing something wrong. Can you please hint me to the error?
Here's my code:
signals.py
from django.db.models.signals import pre_save
from django.dispatch import receiver

from patient.models import Patient
from .models import ActivityLog

@receiver(pre_save, sender=Patient)
def create_new_patient(sender, instance, **kwargs):
    ActivityLog.objects.create(
        user=instance.created_by_user,
        event='created a new patient',
        patient_id=instance.patient_id
    )
and this is its usage in the patient.apps module:
from django.apps import AppConfig
from django.db.models.signals import pre_save

app_name = "patient"

class PatientConfig(AppConfig):
    name = 'patient'
    verbose_name = "Patients"

    def ready(self):
        from activity_logger.signals import create_new_patient
        print('Patient APP is ready!')
        pre_save.connect(create_new_patient, sender='patient.Patient', dispatch_uid='patient')
The print Patient APP is ready! does appear twice, and the object gets created 4 times, despite setting the dispatch_uid. What have I misunderstood?
The @receiver(Signal, ...) decorator is a shortcut for Signal.connect(...), so you are indeed registering your create_new_patient handler twice (once through @receiver when importing your signals module, and a second time with pre_save.connect()).
Solution: in your App.ready() method, you should just import your app's signals.py module. This will trigger the registration of the handlers decorated with @receiver.
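Put concretely, the ready() method would then look roughly like this (a sketch based on the code above; the activity_logger.signals path comes from the question):

from django.apps import AppConfig

class PatientConfig(AppConfig):
    name = 'patient'
    verbose_name = "Patients"

    def ready(self):
        # Importing the module is enough: the @receiver decorator
        # registers create_new_patient exactly once.
        import activity_logger.signals  # noqa: F401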

Cannot import models from another app in Django

so I have 2 apps running in the same project.
My files are structured as follows:
/project_codebase
    /project
        __init__.py
        settings.py
        urls.py
        wsgi.py
        ...
    /app1
        ...
    /app2
        ...
    manage.py
So, I for some weird reason have a different name for my base directory (that is, it ends with codebase). Hopefully, that is not an issue.
In my settings.py, I have this:
INSTALLED_APPS = [
    ...
    'app1',
    'app2',
]
Ok, so in my models.py (from app2), I can easily import models from app1 with from app1.models import *, however, when I use from app2.models import * in my models.py (from app1), I get an ImportError.
Any solutions to this?
This might be due to circular import issues. To avoid this you should load the model dynamically:
For recent versions of django (1.7+) use the application registry:
from django.apps import apps
MyModel1 = apps.get_model('app1', 'MyModel1')
For earlier django versions (<1.7):
from django.db.models.loading import get_model
MyModel1 = get_model('app1', 'MyModel1')
Note 1: If you want to define a ForeignKey relationship, there is no need for a separate import statement. Django has you covered on this:
If app1 is an installed app, you should define the ForeignKey relationship as follows:
# in app2.py
class MyModel2(models.Model):
    mymodel1 = models.ForeignKey('app1.MyModel1')
Note 2: The get_model only works if app1 is an installed app and MyModel1 is the model you want to import from app1.
Note 3: Try to avoid wildcard import (from ... import *), as this is bad practice.
It's definitely a circular import.
But I think what you need is to use the models as arguments to relation fields (ForeignKey, ManyToManyField or OneToOneField). In that case you can skip the import and use a string reference instead, like so:
# app1/models.py
class Model1(models.Model):
    relation_field = models.ForeignKey('app2.Model2')
From docs:
If you need to create a relationship on a model that has not yet been defined, you can use the name of the model, rather than the model object itself
To refer to models defined in another application, you can explicitly specify a model with the full application label
Just pass a string of the form '<app_name>.<ModelName>' as the first argument to the relation field.
Note: it's better to avoid importing everything from a module (from <module_name> import *).
If you only want some specific models, do not use import *. Importing everything takes longer to load and can also affect the speed of your app. If you only need a few models from your second app, import them by name instead of the whole module, something like this:
from app2.models import Module1, Module2
Or it may be a circular import issue, as others have explained.
Thanks.
I always use this code and it works :)
from position_app.models import Member
You need to specify the model names you want to import, for example from app1.models import ModelName1, ModelName2.
Make sure there is no name clash between one of your apps and one of the modules installed in your Python environment. If you use pip, you can run pip freeze to see a list of installed modules.
I had the same error when one of my apps was named 'packaging', and the packaging python module was installed.
I also faced this problem when trying to import my model from another app (in Django 2.2), but in the end I imported it successfully and it works.
Here are my two apps:
INSTALLED_APPS = [
    ...
    'categories',
    'videos',
]
And this is how I imported it into the videos/models.py file as a ForeignKey connection:
from django.db import models

class Videos(models.Model):
    categories = models.ForeignKey('categories.Categories', related_name='categories', on_delete=models.CASCADE)
If you want to see my Categories model from the categories/models.py file, check the code below; otherwise ignore it:
from django.db import models

class Categories(models.Model):
    category_name = models.CharField(max_length=50)
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)
It is a circular import.
In my case I needed the imported class not for a relation field, but to use it inside a method instead.
If that's the case, I suggest importing inside the method; otherwise an AppRegistryNotReady("Models aren't loaded yet.") error is raised.
class Student(CustomUser):
    """Student user."""

    class Meta:
        verbose_name = "Alumno"
        verbose_name_plural = "Alumnos"

    def get_current_school_class(self):
        """Gets the student's current class."""
        from school.models import SchoolClass, StudentClass
        # proceed with the method...
It's not necessary to import models from other apps: just put '<app_name>.<model_name>' in the ForeignKey field and it works ;)
app 1:
class model1(models.Model):
    field = ...  # any field type
app 2:
class model2(models.Model):
    field = models.ForeignKey('app1.model1', on_delete=models.CASCADE)
This way you avoid having to restructure the code :D

How to avoid dependency injection in Django?

I'm trying to figure out how to avoid dependency injection in my project. There is a file notifications.py in the app directory.
The file notifications.py contains methods for sending emails to the admin and users. To get the admin's email, I need to check an object of the SystemData model. But in models, I use notifications.
models
class SystemData(models.Model):
    admin_alerts_email = models.EmailField(verbose_name=u'Emailová adresa admina')
    contact_us_email = models.EmailField(verbose_name=u'Adresa kontaktujte nás')
    waiting_threshold = models.PositiveSmallIntegerField(verbose_name=u'Maximálny počet minút čakania')

class SomeModel(models.Model):
    ....
    def save(...):
        notifications.send_message_to_admin('message')
notifications.py
from django.core.mail import EmailMessage
from models import SystemData

def send_message_to_admin(message):
    mail = EmailMessage(subject, message, to=[SystemData.objects.all().first().admin_email])
    mail.send()
Django returns that it can't import SystemData.
Do you know what to do?
You can solve circular dependencies in functions by using inline imports:
class SomeModel(models.Model):
    ....
    def save(...):
        from .notifications import send_message_to_admin
        send_message_to_admin('message')
This will delay the import statement until the function is actually executed, so the models module has already been loaded. The notifications module can then safely import the models module.
Apart from using inline imports, you can do it like this:
from django.core.mail import EmailMessage
from django.db.models.signals import post_save
from django.dispatch import receiver

from .models import SystemData, SomeModel

@receiver(post_save, sender=SomeModel)
def send_message_to_admin(sender, instance, created, **kwargs):
    message = 'message'
    mail = EmailMessage(
        subject,
        message,
        to=[SystemData.objects.all().first().admin_email]
    )
    mail.send()
and at the end of models.py put
from .notifications import *
or use the newer approach with AppConfig to register signals (that's what your notifications actually are);
see: https://chriskief.com/2014/02/28/django-1-7-signals-appconfig/
That way they will be loaded when the app registry is ready and you'll avoid circular imports, so this line:
from .notifications import *
can be dropped from models.py.
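A rough sketch of the AppConfig approach, under the assumption that the receiver above lives in notifications.py of an app called myapp (the names are illustrative, not from the question):

# myapp/apps.py
from django.apps import AppConfig

class MyAppConfig(AppConfig):
    name = 'myapp'

    def ready(self):
        # Importing the module registers any @receiver handlers defined in it.
        from . import notifications  # noqa: F401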
AppConfig can be used in a more generic way as well, allowing you to import models like this:
from django.apps import apps
Model = apps.get_model('app_name', 'Model')

Django: How to create a model dynamically just for testing

I have a Django app that requires a settings attribute in the form of:
RELATED_MODELS = ('appname1.modelname1.attribute1',
                  'appname1.modelname2.attribute2',
                  'appname2.modelname3.attribute3', ...)
It then hooks their post_save signal to update some other fixed model, depending on the attributeN defined.
I would like to test this behaviour and tests should work even if this app is the only one in the project (except for its own dependencies, no other wrapper app need to be installed). How can I create and attach/register/activate mock models just for the test database? (or is it possible at all?)
Solutions that allow me to use test fixtures would be great.
You can put your tests in a tests/ subdirectory of the app (rather than a tests.py file), and include a tests/models.py with the test-only models.
Then provide a test-running script (example) that includes your tests/ "app" in INSTALLED_APPS. (This doesn't work when running app tests from a real project, which won't have the tests app in INSTALLED_APPS, but I rarely find it useful to run reusable app tests from a project, and Django 1.6+ doesn't by default.)
(NOTE: The alternative dynamic method described below only works in Django 1.1+ if your test case subclasses TransactionTestCase - which slows down your tests significantly - and no longer works at all in Django 1.7+. It's left here only for historical interest; don't use it.)
At the beginning of your tests (i.e. in a setUp method, or at the beginning of a set of doctests), you can dynamically add "myapp.tests" to the INSTALLED_APPS setting, and then do this:
from django.core.management import call_command
from django.db.models import loading
loading.cache.loaded = False
call_command('syncdb', verbosity=0)
Then at the end of your tests, you should clean up by restoring the old version of INSTALLED_APPS and clearing the app cache again.
This class encapsulates the pattern so it doesn't clutter up your test code quite as much.
@paluh's answer requires adding unwanted code to a non-test file and, in my experience, @carl's solution does not work with django.test.TestCase, which is needed to use fixtures. If you want to use django.test.TestCase, you need to make sure you call syncdb before the fixtures get loaded. This requires overriding the _pre_setup method (putting the code in the setUp method is not sufficient). I use my own version of TestCase that lets me add apps with test models. It is defined as follows:
from django.conf import settings
from django.core.management import call_command
from django.db.models import loading
from django import test

class TestCase(test.TestCase):
    apps = ()

    def _pre_setup(self):
        # Add the models to the db.
        self._original_installed_apps = list(settings.INSTALLED_APPS)
        for app in self.apps:
            settings.INSTALLED_APPS.append(app)
        loading.cache.loaded = False
        call_command('syncdb', interactive=False, verbosity=0)
        # Call the original method that does the fixtures etc.
        super(TestCase, self)._pre_setup()

    def _post_teardown(self):
        # Call the original method.
        super(TestCase, self)._post_teardown()
        # Restore the settings.
        settings.INSTALLED_APPS = self._original_installed_apps
        loading.cache.loaded = False
I shared my solution that I use in my projects. Maybe it helps someone.
pip install django-fake-model
Two simple steps to create a fake model:
1) Define the model in any file (I usually define the model in the test file near the test case):
from django.db import models
from django_fake_model import models as f

class MyFakeModel(f.FakeModel):
    name = models.CharField(max_length=100)
2) Add the decorator @MyFakeModel.fake_me to your TestCase or to a test function.
class MyTest(TestCase):

    @MyFakeModel.fake_me
    def test_create_model(self):
        MyFakeModel.objects.create(name='123')
        model = MyFakeModel.objects.get(name='123')
        self.assertEqual(model.name, '123')
This decorator creates the table in your database before each test and removes it after the test.
You can also create/delete the table manually: MyFakeModel.create_table() / MyFakeModel.delete_table()
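For example, the manual variant could look roughly like this (a sketch using the create_table()/delete_table() methods mentioned above):

from django.test import TestCase

class MyManualTest(TestCase):

    def setUp(self):
        MyFakeModel.create_table()   # create the table before the test

    def tearDown(self):
        MyFakeModel.delete_table()   # drop it again afterwards

    def test_create_model(self):
        MyFakeModel.objects.create(name='123')
        self.assertEqual(MyFakeModel.objects.count(), 1)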
I've figured out a way for test-only models for django 1.7+.
The basic idea is, make your tests an app, and add your tests to INSTALLED_APPS.
Here's an example:
$ ls common
__init__.py admin.py apps.py fixtures models.py pagination.py tests validators.py views.py
$ ls common/tests
__init__.py apps.py models.py serializers.py test_filter.py test_pagination.py test_validators.py views.py
And I have different settings for different purposes (ref: splitting up the settings file), namely:
settings/default.py: base settings file
settings/production.py: for production
settings/development.py: for development
settings/testing.py: for testing.
And in settings/testing.py, you can modify INSTALLED_APPS:
settings/testing.py:
from default import *
DEBUG = True
INSTALLED_APPS += ['common', 'common.tests']
And make sure that you have set a proper label for your tests app, namely,
common/tests/apps.py
from django.apps import AppConfig

class CommonTestsConfig(AppConfig):
    name = 'common.tests'
    label = 'common_tests'
In common/tests/__init__.py, set up the proper AppConfig (ref: Django Applications).
default_app_config = 'common.tests.apps.CommonTestsConfig'
Then, generate db migration by
python manage.py makemigrations --settings=<your_project_name>.settings.testing tests
Finally, you can run your test with param --settings=<your_project_name>.settings.testing.
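For example, running the tests would then look like this (assuming the same settings layout as above):

python manage.py test --settings=<your_project_name>.settings.testing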
If you use py.test, you can even drop a pytest.ini file along with django's manage.py.
pytest.ini:
[pytest]
DJANGO_SETTINGS_MODULE=kungfu.settings.testing
Quoting from a related answer:
If you want models defined for testing only, then you should check out Django ticket #7835, in particular comment #24, part of which is given below:
Apparently you can simply define models directly in your tests.py.
Syncdb never imports tests.py, so those models won't get synced to the
normal db, but they will get synced to the test database, and can be
used in tests.
This solution works only for earlier versions of django (before 1.7). You can check your version easily:
import django
django.VERSION < (1, 7)
Original response:
It's quite strange, but for me a very simple pattern works:
add tests.py to app which you are going to test,
in this file just define testing models,
below put your testing code (doctest or TestCase definition),
Below I've put some code which defines an Article model which is needed only for tests (it lives in someapp/tests.py and I can test it just with ./manage.py test someapp):
class Article(models.Model):
    title = models.CharField(max_length=128)
    description = models.TextField()
    document = DocumentTextField(template=lambda i: i.description)

    def __unicode__(self):
        return self.title

__test__ = {"doctest": """
# smuggling model for tests
>>> from .tests import Article

# testing data
>>> by_two = Article.objects.create(title="divisible by two", description="two four six eight")
>>> by_three = Article.objects.create(title="divisible by three", description="three six nine")
>>> by_four = Article.objects.create(title="divisible by four", description="four four eight")

>>> Article.objects.all().search(document='four')
[<Article: divisible by two>, <Article: divisible by four>]
>>> Article.objects.all().search(document='three')
[<Article: divisible by three>]
"""}
Unit tests also work with such a model definition.
I chose a slightly different, albeit more coupled, approach to dynamically creating models just for testing.
I keep all my tests in a tests subdirectory that lives in my files app. The models.py file in the tests subdirectory contains my test-only models. The coupled part comes in here, where I need to add the following to my settings.py file:
import sys

# check if we are testing right now
TESTING = 'test' in sys.argv

if TESTING:
    # add test packages that have models
    INSTALLED_APPS += ['files.tests',]
I also set db_table in my test model, because otherwise Django would have created the table with the name tests_<model_name>, which may have caused a conflict with other test models in another app. Here's my test model:
class Recipe(models.Model):
    '''Test-only model to test out thumbnail registration.'''
    dish_image = models.ImageField(upload_to='recipes/')

    class Meta:
        db_table = 'files_tests_recipe'
Here's the pattern that I'm using to do this.
I've written this method that I use on a subclassed version of TestCase. It goes as follows:
@classmethod
def create_models_from_app(cls, app_name):
    """
    Manually create Models (used only for testing) from the specified string app name.
    Models are loaded from the module "<app_name>.models"
    """
    from django.db import connection, DatabaseError
    from django.db.models.loading import load_app

    app = load_app(app_name)
    from django.core.management import sql
    from django.core.management.color import no_style
    sql = sql.sql_create(app, no_style(), connection)
    cursor = connection.cursor()
    for statement in sql:
        try:
            cursor.execute(statement)
        except DatabaseError as excn:
            logger.debug(excn.message)
            pass
Then, I create a special test-specific models.py file in something like myapp/tests/models.py that's not included in INSTALLED_APPS.
In my setUp method, I call create_models_from_app('myapp.tests') and it creates the proper tables.
The only "gotcha" with this approach is that you don't really want to create the models every time setUp runs, which is why I catch DatabaseError. I guess the call to this method could go at the top of the test file and that would work a little better.
Combining your answers, especially @slacy's, I did this:
class TestCase(test.TestCase):
    initiated = False

    @classmethod
    def setUpClass(cls, *args, **kwargs):
        if not TestCase.initiated:
            TestCase.create_models_from_app('myapp.tests')
            TestCase.initiated = True
        super(TestCase, cls).setUpClass(*args, **kwargs)

    @classmethod
    def create_models_from_app(cls, app_name):
        """
        Manually create Models (used only for testing) from the specified string app name.
        Models are loaded from the module "<app_name>.models"
        """
        from django.db import connection, DatabaseError
        from django.db.models.loading import load_app

        app = load_app(app_name)
        from django.core.management import sql
        from django.core.management.color import no_style
        sql = sql.sql_create(app, no_style(), connection)
        cursor = connection.cursor()
        for statement in sql:
            try:
                cursor.execute(statement)
            except DatabaseError as excn:
                logger.debug(excn.message)
With this, you don't try to create db tables more than once, and you don't need to change your INSTALLED_APPS.
If you are writing a reusable django-app, create a minimal test-dedicated app for it!
$ django-admin.py startproject test_myapp_project
$ django-admin.py startapp test_myapp
Add both myapp and test_myapp to INSTALLED_APPS, create your models there, and it's good to go!
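For example, the test project's settings could include something like this (a sketch; the app names follow the commands above):

# test_myapp_project/settings.py
INSTALLED_APPS = [
    # ... Django's default apps ...
    'myapp',       # the reusable app under test
    'test_myapp',  # test-only app that holds the test models
]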
I have gone through all these answers as well as django ticket 7835, and I finally went for a totally different approach.
I wanted my app (somehow extending queryset.values() ) to be able to be tested in isolation; also, my package does include some models and I wanted a clean distinction between test models and package ones.
That's when I realized it was easier to add a very small django project in the package!
This also allows a much cleaner separation of code IMHO: in that small project you can cleanly, and without any hacks, define your models, and you know they will be created when you run your tests from there.
If you are not writing an independent, reusable app you can still go this way: create a test_myapp app, and add it to your INSTALLED_APPS only in a separate settings_test_myapp.py!
Someone already mentioned Django ticket #7835, but there appears to be a more recent reply that looks much more promising for more recent versions of Django. Specifically #42, which proposes a different TestRunner:
from importlib.util import find_spec
import unittest

from django.apps import apps
from django.conf import settings
from django.test.runner import DiscoverRunner


class TestLoader(unittest.TestLoader):
    """ Loader that reports all successful loads to a runner """

    def __init__(self, *args, runner, **kwargs):
        self.runner = runner
        super().__init__(*args, **kwargs)

    def loadTestsFromModule(self, module, pattern=None):
        suite = super().loadTestsFromModule(module, pattern)
        if suite.countTestCases():
            self.runner.register_test_module(module)
        return suite


class RunnerWithTestModels(DiscoverRunner):
    """ Test Runner that will add any test packages with a 'models' module to INSTALLED_APPS.
        Allows test only models to be defined within any package that contains tests.
        All test models should be set with app_label = 'tests'
    """

    def __init__(self, *args, **kwargs):
        self.test_packages = set()
        self.test_loader = TestLoader(runner=self)
        super().__init__(*args, **kwargs)

    def register_test_module(self, module):
        self.test_packages.add(module.__package__)

    def setup_databases(self, **kwargs):
        # Look for test models
        test_apps = set()
        for package in self.test_packages:
            if find_spec('.models', package):
                test_apps.add(package)
        # Add test apps with models to INSTALLED_APPS that aren't already there
        new_installed = settings.INSTALLED_APPS + tuple(ta for ta in test_apps if ta not in settings.INSTALLED_APPS)
        apps.set_installed_apps(new_installed)
        return super().setup_databases(**kwargs)
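To actually use a custom runner like this, you would point Django's TEST_RUNNER setting at it (a sketch; the dotted path is just an example location for the class above):

# settings.py
TEST_RUNNER = 'myproject.testrunner.RunnerWithTestModels'  # hypothetical module path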
