Automatically create django model instances at startup with empty database - python

My django project requires some model instances to be created at startup if they do not exist.
I currently create the required model instances in an app config.
class MyAppConfig(AppConfig):
    name = 'my_app'

    def ready(self):
        create_required_objects()


def create_required_objects():
    from my_app.models import MyObject
    for name in MyObject.reserved_names:
        if not MyObject.objects.filter(name=name).exists():
            MyObject.objects.create(name=name, not_editable=True)
This works perfectly when the sqlite database is initialized; however, if I clear the database and then try to run the server, I get the following error:
django.db.utils.OperationalError: no such table: my_app_object
I would like to be able to clear the database (preferably by just removing db.sqlite3) and run the server.

Use the post_migrate signal to create the required instances whenever the database is migrated.
Like:
from django.apps import AppConfig
from django.db.models.signals import post_migrate


def create_required_objects(sender, **kwargs):
    # Import inside the handler so models are loaded by the time this runs
    from my_app.models import MyObject
    for name in MyObject.reserved_names:
        if not MyObject.objects.filter(name=name).exists():
            MyObject.objects.create(name=name, not_editable=True)


class MyAppConfig(AppConfig):
    name = 'my_app'

    def ready(self):
        post_migrate.connect(create_required_objects, sender=self)
This code automatically creates the required objects after the database is migrated.
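With this wiring in place, resetting the database comes down to the following (assuming the default db.sqlite3 setup from the question):
rm db.sqlite3
python manage.py migrate   # recreates the schema and fires post_migrate, which recreates the objects
python manage.py runserver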

You can use model_bakery to populate some temporary data. You may first need to run makemigrations and migrate to set up all the tables in your database:
python manage.py makemigrations
python manage.py migrate
To populate data, you can try the following code:
from model_bakery import baker
from my_app.models import MyObject

baker.make(MyObject)
Add baker.make(MyObject) to your create_required_objects function after installing model_bakery:
pip install model_bakery
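For example, here is a sketch of create_required_objects that keeps the reserved names from the question and adds a few baker-generated instances (the _quantity argument asks model_bakery for several objects at once):
from model_bakery import baker
from my_app.models import MyObject


def create_required_objects():
    for name in MyObject.reserved_names:
        if not MyObject.objects.filter(name=name).exists():
            MyObject.objects.create(name=name, not_editable=True)
    # a few extra throwaway instances with random field values, for development/testing only
    baker.make(MyObject, _quantity=3)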

Related

Accessing django database from python script

I'm trying to access my Django database from within a regular Python script. So far what I did is:
import os

import django
from django.db import models

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "m_site.settings")
django.setup()

from django.apps import apps

ap = apps.get_model('py', 'Post')
q = ap.objects.all()
for a in q:
    print(a.title)
There is an application in my Django project called py where I have many posts (its models.py contains class Post(models.Model)).
I would like to be able to access and update this database from a regular Python script. So far the script above works fine; I can insert, update, or query, but it looks like it is using a separate database rather than the database of Django's py application. Am I doing something wrong or missing something? Any help would be much appreciated.
Consider writing your script as a custom management command. This will let you run it via manage.py, with Django all properly wired up for you, e.g.
python manage.py print_post_titles
Something like this should be a good start:
from django.core.management.base import BaseCommand

from py.models import Post


class Command(BaseCommand):
    help = 'Prints the titles of all Posts'

    def handle(self, *args, **options):
        for post in Post.objects.all():
            print(post.title)
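For Django to discover the command, it has to live in a management/commands package inside the app; for example (assuming the command module is named print_post_titles.py):
py/
    management/
        __init__.py
        commands/
            __init__.py
            print_post_titles.py   # contains the Command class above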

How do you register a custom management command in a Django unittest

How do you register a custom Django management command that only exists within a unittest?
In Django 1.5, you could do this like:
from django.test import TestCase
from django.core.management import _commands
from myapp.tests.commands import MyTestCommand


class JobTestCase(TestCase):
    fixtures = ['test_jobs.json']

    def setUp(self):
        # Install the test command; this little trick installs the command
        # so that we can refer to it by name
        _commands['test_command'] = MyTestCommand()
However, in Django 1.8, _commands no longer exists. Looking in the code, it seems these are now dynamically retrieved via django.core.management.get_commands, but I don't see a method to add or register an arbitrary command class. Is this possible?

Installing hstore extension in django nose tests

I've installed the hstore extension successfully, and everything works when I syncdb. (I'm using djorm-ext-hstore)
However, nose creates a new temp database to run tests in, and hstore is not installed in it.
I need to run CREATE EXTENSION HSTORE; on the test db right before nose syncs the db, but I can't find any info on how to do that.
Any ideas?
This is a non-issue: the best way to fix this is to apply the hstore extension to the template database, template1:
psql -d template1 -c 'create extension hstore;'
Reference: How to create a new database with the hstore extension already installed?
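Since PostgreSQL copies template1 when creating new databases, any test database created afterwards will already have hstore installed; you can verify this with, for example:
psql -d template1 -c '\dx hstore'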
With Django 1.8 (which is outdated now, but it still exists in 3.2):
from django.contrib.postgres.operations import HStoreExtension


class Migration(migrations.Migration):
    ...

    operations = [
        HStoreExtension(),
        ...
    ]
https://docs.djangoproject.com/en/3.2/ref/contrib/postgres/fields/#hstorefield
EDIT: Just note that there is also a JSONField which handles (un)marshalling of json already and inline search. The HStoreExtension is not necessary for it. Requires Django >=1.11 and Postgres >=9.4.
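For reference, a model using HStoreField might look like this (a sketch; it assumes 'django.contrib.postgres' is in INSTALLED_APPS and the HStoreExtension migration above has been applied):
from django.contrib.postgres.fields import HStoreField
from django.db import models


class Dog(models.Model):
    name = models.CharField(max_length=200)
    data = HStoreField()  # stores string key/value pairs, e.g. {'breed': 'collie'}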
I'm assuming you're using django-nose. In this case you should create your own TestSuiteRunner:
from django.db import connections, DEFAULT_DB_ALIAS

from django_nose import NoseTestSuiteRunner


class MyTestSuiteRunner(NoseTestSuiteRunner):
    def setup_databases(self):
        result = super(MyTestSuiteRunner, self).setup_databases()
        connection = connections[DEFAULT_DB_ALIAS]
        cursor = connection.cursor()
        cursor.execute('CREATE EXTENSION HSTORE')
        return result
Then in settings.py you should specify your custom test runner:
TEST_RUNNER = 'path.to.my.module.MyTestSuiteRunner'
You can also run a SQL command in a migration (Django 1.8):
class Migration(migrations.Migration):
    # ...

    operations = [
        migrations.RunSQL('create extension hstore;'),
        # ...
    ]
Since my last answer, Django has deprecated and removed the pre_syncdb signal. I've updated the answer to accommodate more recent versions. The basic mechanics are identical for newer versions, as both methods rely on signals and on SQL that only executes if the HSTORE extension does not exist.
Django 1.8+
Since Django introduced DB migrations, pre_syncdb signals were marked deprecated in 1.7 and completely removed in 1.9. However, they introduced a new signal called pre_migrate which can be used the same way.
"""
This is an example models.py which contains all model definition.
"""
from django.dispatch import receiver
from django.db import connection, models
from django.db.models.signals import pre_migrate
import sys
# The sender kwarg is optional but will be called for every pre_syncdb signal
# if omitted. Specifying it ensures this callback to be called once.
#receiver(pre_migrate, sender=sys.modules[__name__])
def setup_postgres_hstore(sender, **kwargs):
"""
Always create PostgreSQL HSTORE extension if it doesn't already exist
on the database before syncing the database.
Requires PostgreSQL 9.1 or newer.
"""
cursor = connection.cursor()
cursor.execute("CREATE EXTENSION IF NOT EXISTS hstore")
# ...rest of your model definition goes here
class Foo(models.Model):
# ...field definitions, etc.
Django 1.6+ (original answer)
My suggestion is to use the pre_syncdb signal hook.
See my answer on the other question.
"""
This is an example models.py which contains all model definition.
"""
from django.dispatch import receiver
from django.db import connection, models
from django.db.models.signals import pre_syncdb
import sys
# The sender kwarg is optional but will be called for every pre_syncdb signal
# if omitted. Specifying it ensures this callback to be called once.
#receiver(pre_syncdb, sender=sys.modules[__name__])
def setup_postgres_hstore(sender, **kwargs):
"""
Always create PostgreSQL HSTORE extension if it doesn't already exist
on the database before syncing the database.
Requires PostgreSQL 9.1 or newer.
"""
cursor = connection.cursor()
cursor.execute("CREATE EXTENSION IF NOT EXISTS hstore")
# ...rest of your model definition goes here
class Foo(models.Model):
# ...field definitions, etc.
The pre_syncdb signal is fired before the model table is created, making it ideal to ensure the extension is installed when the test database is being set up every time. The IF NOT EXISTS ensures that PostgreSQL ignores the command if the extension is already installed. You'll get an error if you run CREATE EXTENSION on an extension that already exists.
This works for the default Django unit test framework and will most likely work for Django nose tests.
More info on signals:
https://docs.djangoproject.com/en/1.6/ref/signals/#management-signals
This works for me:
https://gist.github.com/smcoll/bb2533e4b53ae570e11eb2ab011b887b
from django.db.backends.base import creation
from django.test.runner import DiscoverRunner


class CustomDiscovererRunner(DiscoverRunner):
    def setup_databases(self, **kwargs):
        def wrap_create_test_db(function):
            def decorated_create_test_db(self, verbosity, autoclobber, keepdb):
                test_database_name = function(self, verbosity, autoclobber, keepdb)
                self.connection.close()
                self.connection.settings_dict["NAME"] = test_database_name
                cursor = self.connection.cursor()
                cursor.execute('CREATE EXTENSION IF NOT EXISTS hstore')
                return test_database_name
            return decorated_create_test_db

        creation.BaseDatabaseCreation._create_test_db = wrap_create_test_db(
            creation.BaseDatabaseCreation._create_test_db
        )
        return super(CustomDiscovererRunner, self).setup_databases(**kwargs)
settings.py
TEST_RUNNER = 'my_project.settings_test.CustomDiscovererRunner'

Getting missing column error whenever a model is saved

I'm having trouble creating a model in django. I wrote a model like this:
from django.db import models


class FooModel(models.Model):
    name = models.CharField(max_length=255)
I run
manage.py syncdb
But when I'm in the shell, I can't save an instance. Every time I call save(), it tells me a column is missing:
manage.py shell
>>> from app.models import FooModel
>>> foo = FooModel()
>>> foo.name = 'foo'
>>> foo.save()
DatabaseError: column "name" of relation "ecommerce_foomodel" does not exist
LINE 1: INSERT INTO "ecommerce_foomodel" ("name") VALUES (E'123123as...
We're using postgres.
The database table was created before you added the corresponding fields.
So, you can recreate all of the tables of that app (in case you don't have any useful data) using:
python manage.py reset ecommerce
Or, you should migrate the database to the latest version using South.

Django: How to create a model dynamically just for testing

I have a Django app that requires a settings attribute in the form of:
RELATED_MODELS = ('appname1.modelname1.attribute1',
                  'appname1.modelname2.attribute2',
                  'appname2.modelname3.attribute3', ...)
It then hooks their post_save signal to update some other fixed model depending on the attributeN defined.
I would like to test this behaviour and tests should work even if this app is the only one in the project (except for its own dependencies, no other wrapper app need to be installed). How can I create and attach/register/activate mock models just for the test database? (or is it possible at all?)
Solutions that allow me to use test fixtures would be great.
You can put your tests in a tests/ subdirectory of the app (rather than a tests.py file), and include a tests/models.py with the test-only models.
Then provide a test-running script (example) that includes your tests/ "app" in INSTALLED_APPS. (This doesn't work when running app tests from a real project, which won't have the tests app in INSTALLED_APPS, but I rarely find it useful to run reusable app tests from a project, and Django 1.6+ doesn't by default.)
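A minimal test-running script along those lines might look like this (a sketch; "myapp" and "myapp.tests" are placeholders for your app and its tests package):
# runtests.py
import sys

import django
from django.conf import settings
from django.test.utils import get_runner

settings.configure(
    DATABASES={'default': {'ENGINE': 'django.db.backends.sqlite3', 'NAME': ':memory:'}},
    INSTALLED_APPS=[
        'django.contrib.contenttypes',
        'django.contrib.auth',
        'myapp',
        'myapp.tests',  # the tests "app" that holds the test-only models
    ],
)
django.setup()

TestRunner = get_runner(settings)
failures = TestRunner(verbosity=1).run_tests(['myapp.tests'])
sys.exit(bool(failures))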
(NOTE: The alternative dynamic method described below only works in Django 1.1+ if your test case subclasses TransactionTestCase - which slows down your tests significantly - and no longer works at all in Django 1.7+. It's left here only for historical interest; don't use it.)
At the beginning of your tests (i.e. in a setUp method, or at the beginning of a set of doctests), you can dynamically add "myapp.tests" to the INSTALLED_APPS setting, and then do this:
from django.core.management import call_command
from django.db.models import loading
loading.cache.loaded = False
call_command('syncdb', verbosity=0)
Then at the end of your tests, you should clean up by restoring the old version of INSTALLED_APPS and clearing the app cache again.
This class encapsulates the pattern so it doesn't clutter up your test code quite as much.
@paluh's answer requires adding unwanted code to a non-test file and, in my experience, @carl's solution does not work with django.test.TestCase, which is needed to use fixtures. If you want to use django.test.TestCase, you need to make sure you call syncdb before the fixtures get loaded. This requires overriding the _pre_setup method (putting the code in the setUp method is not sufficient). I use my own version of TestCase that lets me add apps with test models. It is defined as follows:
from django.conf import settings
from django.core.management import call_command
from django.db.models import loading
from django import test


class TestCase(test.TestCase):
    apps = ()

    def _pre_setup(self):
        # Add the models to the db.
        self._original_installed_apps = list(settings.INSTALLED_APPS)
        for app in self.apps:
            settings.INSTALLED_APPS.append(app)
        loading.cache.loaded = False
        call_command('syncdb', interactive=False, verbosity=0)
        # Call the original method that does the fixtures etc.
        super(TestCase, self)._pre_setup()

    def _post_teardown(self):
        # Call the original method.
        super(TestCase, self)._post_teardown()
        # Restore the settings.
        settings.INSTALLED_APPS = self._original_installed_apps
        loading.cache.loaded = False
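Usage is then just a matter of subclassing it and listing the test-only apps; a sketch (the app, model, and fixture names here are made up):
class MyModelTests(TestCase):
    apps = ('myapp.tests',)        # test-only app whose models should be synced
    fixtures = ['test_data.json']  # loads after syncdb has created the tables

    def test_test_model_table_exists(self):
        from myapp.tests.models import MyTestModel  # hypothetical test-only model
        self.assertFalse(MyTestModel.objects.filter(pk=0).exists())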
I shared my solution that I use in my projects. Maybe it helps someone.
pip install django-fake-model
Two simple steps to create a fake model:
1) Define the model in any file (I usually define the model in the test file near the test case):
from django.db import models
from django_fake_model import models as f


class MyFakeModel(f.FakeModel):
    name = models.CharField(max_length=100)
2) Add the decorator @MyFakeModel.fake_me to your TestCase or to a test function.
class MyTest(TestCase):

    @MyFakeModel.fake_me
    def test_create_model(self):
        MyFakeModel.objects.create(name='123')
        model = MyFakeModel.objects.get(name='123')
        self.assertEqual(model.name, '123')
This decorator creates the table in your database before each test and removes it after the test.
You can also create/delete the table manually: MyFakeModel.create_table() / MyFakeModel.delete_table()
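The same thing without the decorator, using the manual helpers mentioned above (a sketch):
class MyManualTest(TestCase):
    def setUp(self):
        MyFakeModel.create_table()

    def tearDown(self):
        MyFakeModel.delete_table()

    def test_create_model(self):
        MyFakeModel.objects.create(name='123')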
I've figured out a way for test-only models for django 1.7+.
The basic idea is, make your tests an app, and add your tests to INSTALLED_APPS.
Here's an example:
$ ls common
__init__.py admin.py apps.py fixtures models.py pagination.py tests validators.py views.py
$ ls common/tests
__init__.py apps.py models.py serializers.py test_filter.py test_pagination.py test_validators.py views.py
And I have different settings for different purposes (ref: splitting up the settings file), namely:
settings/default.py: base settings file
settings/production.py: for production
settings/development.py: for development
settings/testing.py: for testing.
And in settings/testing.py, you can modify INSTALLED_APPS:
settings/testing.py:
from .default import *
DEBUG = True
INSTALLED_APPS += ['common', 'common.tests']
And make sure that you have set a proper label for your tests app, namely,
common/tests/apps.py
from django.apps import AppConfig


class CommonTestsConfig(AppConfig):
    name = 'common.tests'
    label = 'common_tests'
In common/tests/__init__.py, set up the proper AppConfig (ref: Django Applications):
default_app_config = 'common.tests.apps.CommonTestsConfig'
Then, generate db migration by
python manage.py makemigrations --settings=<your_project_name>.settings.testing tests
Finally, you can run your test with param --settings=<your_project_name>.settings.testing.
If you use py.test, you can even drop a pytest.ini file along with django's manage.py.
pytest.ini:
[pytest]
DJANGO_SETTINGS_MODULE=kungfu.settings.testing
Quoting from a related answer:
If you want models defined for testing only, then you should check out Django ticket #7835, in particular comment #24, part of which is given below:
Apparently you can simply define models directly in your tests.py.
Syncdb never imports tests.py, so those models won't get synced to the
normal db, but they will get synced to the test database, and can be
used in tests.
This solution works only for earlier versions of django (before 1.7). You can check your version easily:
import django
django.VERSION < (1, 7)
Original response:
It's quite strange, but for me a very simple pattern works:
add tests.py to the app which you are going to test,
in this file just define the testing models,
below, put your testing code (doctest or TestCase definition).
Below I've put some code which defines an Article model that is needed only for tests (it exists in someapp/tests.py and I can test it just with: ./manage.py test someapp):
class Article(models.Model):
    title = models.CharField(max_length=128)
    description = models.TextField()
    document = DocumentTextField(template=lambda i: i.description)

    def __unicode__(self):
        return self.title


__test__ = {"doctest": """
# smuggling model for tests
>>> from .tests import Article

# testing data
>>> by_two = Article.objects.create(title="divisible by two", description="two four six eight")
>>> by_three = Article.objects.create(title="divisible by three", description="three six nine")
>>> by_four = Article.objects.create(title="divisible by four", description="four four eight")

>>> Article.objects.all().search(document='four')
[<Article: divisible by two>, <Article: divisible by four>]
>>> Article.objects.all().search(document='three')
[<Article: divisible by three>]
"""}
Unit tests also work with such a model definition.
I chose a slightly different, albeit more coupled, approach to dynamically creating models just for testing.
I keep all my tests in a tests subdirectory that lives in my files app. The models.py file in the tests subdirectory contains my test-only models. The coupled part comes in here, where I need to add the following to my settings.py file:
# check if we are testing right now
TESTING = 'test' in sys.argv

if TESTING:
    # add test packages that have models
    INSTALLED_APPS += ['files.tests', ]
I also set db_table in my test model, because otherwise Django would have created the table with the name tests_<model_name>, which may have caused a conflict with other test models in another app. Here's my test model:
class Recipe(models.Model):
    '''Test-only model to test out thumbnail registration.'''
    dish_image = models.ImageField(upload_to='recipes/')

    class Meta:
        db_table = 'files_tests_recipe'
Here's the pattern that I'm using to do this.
I've written this method that I use on a subclassed version of TestCase. It goes as follows:
@classmethod
def create_models_from_app(cls, app_name):
    """
    Manually create Models (used only for testing) from the specified string app name.
    Models are loaded from the module "<app_name>.models"
    """
    from django.db import connection, DatabaseError
    from django.db.models.loading import load_app

    app = load_app(app_name)
    from django.core.management import sql
    from django.core.management.color import no_style
    sql = sql.sql_create(app, no_style(), connection)
    cursor = connection.cursor()
    for statement in sql:
        try:
            cursor.execute(statement)
        except DatabaseError as excn:
            # logger is assumed to be defined at module level
            logger.debug(excn.message)
Then, I create a special test-specific models.py file in something like myapp/tests/models.py that's not included in INSTALLED_APPS.
In my setUp method, I call create_models_from_app('myapp.tests') and it creates the proper tables.
The only "gotcha" with this approach is that you don't really want to create the models ever time setUp runs, which is why I catch DatabaseError. I guess the call to this method could go at the top of the test file and that would work a little better.
Combining your answers, especially @slacy's, I did this:
class TestCase(test.TestCase):
    initiated = False

    @classmethod
    def setUpClass(cls, *args, **kwargs):
        if not TestCase.initiated:
            TestCase.create_models_from_app('myapp.tests')
            TestCase.initiated = True
        super(TestCase, cls).setUpClass(*args, **kwargs)

    @classmethod
    def create_models_from_app(cls, app_name):
        """
        Manually create Models (used only for testing) from the specified string app name.
        Models are loaded from the module "<app_name>.models"
        """
        from django.db import connection, DatabaseError
        from django.db.models.loading import load_app

        app = load_app(app_name)
        from django.core.management import sql
        from django.core.management.color import no_style
        sql = sql.sql_create(app, no_style(), connection)
        cursor = connection.cursor()
        for statement in sql:
            try:
                cursor.execute(statement)
            except DatabaseError as excn:
                logger.debug(excn.message)
With this, you don't try to create db tables more than once, and you don't need to change your INSTALLED_APPS.
If you are writing a reusable django-app, create a minimal test-dedicated app for it!
$ django-admin.py startproject test_myapp_project
$ django-admin.py startapp test_myapp
add both myapp and test_myapp to the INSTALLED_APPS, create your models there and it's good to go!
I have gone through all these answers as well as django ticket 7835, and I finally went for a totally different approach.
I wanted my app (somehow extending queryset.values() ) to be able to be tested in isolation; also, my package does include some models and I wanted a clean distinction between test models and package ones.
That's when I realized it was easier to add a very small django project in the package!
This also allows a much cleaner separation of code IMHO:
In there you can cleanly and without any hack define your models, and you know they will be created when you run your tests from in there!
If you are not writing an independent, reusable app you can still go this way: create a test_myapp app, and add it to your INSTALLED_APPS only in a separate settings_test_myapp.py!
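A sketch of such a settings_test_myapp.py, assuming your real settings module is myproject.settings (all names here are placeholders):
# settings_test_myapp.py
from myproject.settings import *

INSTALLED_APPS = list(INSTALLED_APPS) + ['test_myapp']
Then run the tests with something like python manage.py test test_myapp --settings=settings_test_myapp.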
Someone already mentioned Django ticket #7835, but there appears to be a more recent reply that looks much more promising for more recent versions of Django. Specifically #42, which proposes a different TestRunner:
from importlib.util import find_spec
import unittest

from django.apps import apps
from django.conf import settings
from django.test.runner import DiscoverRunner


class TestLoader(unittest.TestLoader):
    """ Loader that reports all successful loads to a runner """

    def __init__(self, *args, runner, **kwargs):
        self.runner = runner
        super().__init__(*args, **kwargs)

    def loadTestsFromModule(self, module, pattern=None):
        suite = super().loadTestsFromModule(module, pattern)
        if suite.countTestCases():
            self.runner.register_test_module(module)
        return suite


class RunnerWithTestModels(DiscoverRunner):
    """ Test Runner that will add any test packages with a 'models' module to INSTALLED_APPS.
        Allows test only models to be defined within any package that contains tests.
        All test models should be set with app_label = 'tests'
    """

    def __init__(self, *args, **kwargs):
        self.test_packages = set()
        self.test_loader = TestLoader(runner=self)
        super().__init__(*args, **kwargs)

    def register_test_module(self, module):
        self.test_packages.add(module.__package__)

    def setup_databases(self, **kwargs):
        # Look for test models
        test_apps = set()
        for package in self.test_packages:
            if find_spec('.models', package):
                test_apps.add(package)

        # Add test apps with models to INSTALLED_APPS that aren't already there
        new_installed = settings.INSTALLED_APPS + tuple(ta for ta in test_apps if ta not in settings.INSTALLED_APPS)
        apps.set_installed_apps(new_installed)

        return super().setup_databases(**kwargs)
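A usage sketch, assuming the runner above is saved as myproject/test_runner.py (the project, package, and model names are placeholders):
# settings.py
TEST_RUNNER = 'myproject.test_runner.RunnerWithTestModels'

# somepackage/tests/models.py -- a test-only model, labeled as the runner's docstring requires
from django.db import models


class Thing(models.Model):
    name = models.CharField(max_length=50)

    class Meta:
        app_label = 'tests'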
