Django: accessing the database through a model in a shutdown hook - Python

Here is my problem: I'm trying to update the database through a Django model in a shutdown signal handler, which is declared in the __init__.py file, but the database on the model object is None.
import logging
import os
import signal
import sys

from django.db import transaction

logger = logging.getLogger("logger")

def my_signal_handler(*args):
    # Only act in the autoreloader's main process; note "!=" rather than
    # "is not", since identity comparison of strings is unreliable.
    if os.environ.get("RUN_MAIN") != "true":
        return
    # Imported lazily to avoid AppRegistryNotReady at module load time.
    from mymodels import MyModel
    logger.info("update models")
    with transaction.atomic():
        for model in MyModel.objects.all():
            if model.my_flag:
                model.my_flag = False
                model.save()
    sys.exit(0)

signal.signal(signal.SIGINT, my_signal_handler)
Also, when I try to import the model outside the my_signal_handler function, the application throws the exception "django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet."
The question is: what is the better way to add a shutdown hook that can access the application context?

If you want to use Django models in standalone mode, you should manually call django.setup(); then you can import and work with your models. Change your code to something like this:
import os

import django

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
django.setup()

from mymodels import MyModel
In short: first run django.setup(), then import and work with your models.
Check the Django docs on this subject.
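For the shutdown-hook question itself, another option (a sketch, not taken from these answers) is to register the handler from an AppConfig.ready() hook: ready() runs after the app registry is populated, so model imports inside the hook are safe. The app label myapp, the model path, and the bulk update are illustrative assumptions:

# myapp/apps.py -- illustrative sketch, not from the original answers
import atexit

from django.apps import AppConfig

class MyAppConfig(AppConfig):
    name = "myapp"

    def ready(self):
        # ready() runs once the app registry is loaded, so registering
        # the shutdown hook here avoids AppRegistryNotReady entirely.
        def flush_flags():
            from django.db import transaction
            from myapp.models import MyModel  # hypothetical model path
            with transaction.atomic():
                MyModel.objects.filter(my_flag=True).update(my_flag=False)

        atexit.register(flush_flags)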

Related

How to import all Django models and more in a script?

I've put the following code at the top of my script file:
import os

import django

os.environ.setdefault("DJANGO_SETTINGS_MODULE", 'momsite.conf.local.settings')
django.setup()
Now I can import my Django apps and run small snippets (mainly to test stuff).
I'd like to import all the models registered through settings.INSTALLED_APPS.
I know https://github.com/django-extensions/django-extensions does this: when running manage.py shell_plus, it automatically imports all the models and more.
I'm looking at their code, though I'm not sure I'll make sense of it:
https://github.com/django-extensions/django-extensions/blob/3355332238910f3f30a3921e604641562c79a0a8/django_extensions/management/commands/shell_plus.py#L137
At the moment, I'm doing the following, and I think it is importing the models, but somehow they are not available in the script:
from django.core.management.base import BaseCommand
from django_extensions.management.shells import import_objects

options = {}
style = BaseCommand().style
import_objects(options, style)
Edit: answer adapted from dirkgroten.
import_objects internally calls importlib.import_module. Apparently, we need to populate globals() with the imported classes:
options = {'quiet_load': True}
style = BaseCommand().style
imported_objects = import_objects(options, style)
globals().update(imported_objects)
After you run django.setup(), do this:
from django.apps import apps

for _class in apps.get_models():
    # skip auto-generated Historical* models (e.g. from django-simple-history)
    if _class.__name__.startswith("Historical"):
        continue
    globals()[_class.__name__] = _class
That will make all model classes available as globals in your script.
Create a management command. It will automagically set up Django and everything.
Then you simply run your command: ./manage.py mytest
# myapp/management/commands/mytest.py
from django.core.management.base import BaseCommand, CommandError

from myapp.sometest import Mycommand

class Command(BaseCommand):
    help = 'my test script'

    def add_arguments(self, parser):
        pass
        # parser.add_argument('poll_ids', nargs='+', type=int)

    def handle(self, *args, **options):
        Mycommand.something(self)
which will call the actual script:
# sometest.py
from .models import *

class Mycommand():
    def something(self):
        print('...something')

Where in Django can I run startup code that requires models?

On Django startup, I need to run some code that requires access to the database. I prefer to do this via models.
Here's what I currently have in apps.py:
from django.apps import AppConfig

from .models import KnowledgeBase

class Pqawv1Config(AppConfig):
    name = 'pqawV1'

    def ready(self):
        to_load = KnowledgeBase.objects.order_by('-timestamp').first()
        # Here should go the file-loading code
However, this gives the following exception:
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
So is there a place in Django to run some startup code after the models are initialized?
The problem is that you import .models at the top of your file. This means that, when the apps.py file is loaded, Python will load the models.py file as it evaluates that line. But that is too early; you should let Django do the loading properly.
You can move the import into the ready() method, so that the models.py file is imported when ready() is called by the Django framework:
from django.apps import AppConfig

class Pqawv1Config(AppConfig):
    name = 'pqawV1'

    def ready(self):
        from .models import KnowledgeBase
        to_load = KnowledgeBase.objects.order_by('-timestamp').first()
        # Here should go the file-loading code

How to correctly override a Django manage.py command?

I need to override the handle method of Django's createsuperuser Command class.
I created myapp/management/commands/createsuperuser.py:
import getpass
import sys

import django.contrib.auth.management.commands.createsuperuser as makesuperuser
from django.contrib.auth.management import get_default_username
from django.contrib.auth.password_validation import validate_password
from django.core import exceptions
from django.core.management.base import CommandError
from django.utils.encoding import force_str
from django.utils.text import capfirst

class Command(makesuperuser.Command):
    def handle(self, *args, **options):
        # The rest of the code is copied from the Django source and is
        # almost standard, except for a few changes to how info about
        # REQUIRED_FIELDS is shown.
        ...
When I run ./manage.py createsuperuser in the terminal, I do not see any changes. If I rename my file to, say, mycmd.py and run ./manage.py mycmd, everything works as I expect.
How do I get the changes I need using ./manage.py createsuperuser?
Put your application name at the top of the INSTALLED_APPS list: when several apps provide a command with the same name, the app listed first in INSTALLED_APPS wins.
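A sketch of what that ordering might look like (the app name myapp is a placeholder):

# settings.py -- illustrative ordering; 'myapp' is a placeholder
INSTALLED_APPS = [
    'myapp',  # listed before django.contrib.auth so its createsuperuser wins
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
]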

Resolving circular imports in Celery and Django

I have a Django app that uses Celery to offload some tasks. Mainly, it defers the computation of some fields in a database table.
So, I have a tasks.py:
from celery import shared_task

from models import MyModel

@shared_task
def my_task(id):
    qs = MyModel.objects.filter(some_field=id)
    for record in qs:
        my_value = ...  # do some computations
        record.my_field = my_value
        record.save()
And in models.py:
from django.db import models

from tasks import my_task

class MyModel(models.Model):
    field1 = models.IntegerField()
    # more fields
    my_field = models.FloatField(null=True)

    @staticmethod
    def load_from_file(file):
        # parse file, set fields from file
        my_task.delay(id)
Now, obviously, this won't work because of a circular import (models imports tasks, and tasks imports models).
I've resolved this for the moment by calling my_task.delay() from views.py, but it seems to make sense to keep the model logic within the model class. Is there a better way of doing this?
The solution posted by joshua is very good, but when I first tried it, I found that my @receiver decorators had no effect. That was because the tasks module wasn't imported anywhere, which was expected, as I used task auto-discovery.
There is, however, another way to decouple tasks.py from models.py: tasks can be sent by name, so they don't have to be evaluated (imported) in the process that sends them:
from django.db import models
# from tasks import my_task
import celery

class MyModel(models.Model):
    field1 = models.IntegerField()
    # more fields
    my_field = models.FloatField(null=True)

    @staticmethod
    def load_from_file(file):
        # parse file, set fields from file
        # my_task.delay(id)
        celery.current_app.send_task('myapp.tasks.my_task', (id,))
send_task() is a method on Celery app objects.
In this solution it is important to take care of correct, predictable names for your tasks.
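One hedged way to keep the name predictable (a sketch; the name string and module path are assumptions) is to pin the task name explicitly instead of relying on the module location:

# myapp/tasks.py -- sketch: pinning the task name so send_task() lookups
# keep working even if the module is moved; the name is an assumption
from celery import shared_task

@shared_task(name='myapp.tasks.my_task')
def my_task(id):
    ...  # computations as in the question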
In your models, instead of importing my_task at the top of the file, you can import it just before you use it. This solves the circular import problem.
from django.db import models

class MyModel(models.Model):
    field1 = models.IntegerField()
    # more fields
    my_field = models.FloatField(null=True)

    @staticmethod
    def load_from_file(file):
        # parse file, set fields from file
        from tasks import my_task  # import here instead of at the top
        my_task.delay(id)
Alternatively, you can do the same thing in your tasks.py: import your models just before you use them instead of at the beginning.
Alternative:
You can use the send_task method to call your task:
from celery import current_app
from django.db import models

class MyModel(models.Model):
    field1 = models.IntegerField()
    # more fields
    my_field = models.FloatField(null=True)

    @staticmethod
    def load_from_file(file):
        # parse file, set fields from file
        current_app.send_task('myapp.tasks.my_task', (id,))
Just to toss one more not-great solution into this list: what I've ended up doing is relying on Django's now-built-in app registry.
So in tasks.py, rather than importing from models, you use apps.get_model() to gain access to the model.
I do this with a helper method with a healthy bit of documentation, just to express why this is painful:
from django.apps import apps

def _model(model_name):
    """Generically retrieve a model class.

    This is a hack around Django/Celery's inherent circular-import
    issues with tasks.py/models.py. In order to keep clean abstractions,
    we use this to avoid importing from models, which would introduce a
    circular import.

    No solutions for this are good so far (unnecessary signals, inline
    imports, serializing the whole object, tasks forced to live in the
    model, this), so we use this because at least the annoyance is
    constrained to tasks.
    """
    return apps.get_model('my_app', model_name)
And then:
@shared_task
def some_task(post_id):
    post = _model('Post').objects.get(pk=post_id)
You could certainly just use apps.get_model() directly, though.
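For instance, a sketch of the direct form, reusing the hypothetical my_app/Post names from above:

from celery import shared_task
from django.apps import apps

@shared_task
def some_task(post_id):
    # same lookup as the helper above, inlined
    Post = apps.get_model('my_app', 'Post')
    post = Post.objects.get(pk=post_id)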
Use signals.
tasks.py
from celery import shared_task
from django.dispatch import receiver

from models import MyModel, my_signal

@shared_task
def my_task(id):
    qs = MyModel.objects.filter(some_field=id)
    for record in qs:
        my_value = ...  # do some computations
        record.my_field = my_value
        record.save()

@receiver(my_signal)
def my_receiver(sender, **kwargs):
    my_task.delay(kwargs['id'])
models.py
from django.db import models
from django.dispatch import Signal

# Note: models.py no longer imports from tasks; the signal decouples them.
my_signal = Signal(providing_args=['id'])

class MyModel(models.Model):
    field1 = models.IntegerField()
    # more fields
    my_field = models.FloatField(null=True)

    @staticmethod
    def load_from_file(file):
        # parse file, set fields from file
        my_signal.send(sender=?, id=?)
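The ? placeholders are the answer's own; a purely illustrative filling (sender is conventionally the class doing the sending, and id is whatever the task needs):

# illustrative only -- not from the original answer
my_signal.send(sender=MyModel, id=instance_id)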
Not sure if this is anyone else's problem, but I took a few hours, and I found a solution... mainly, the key is this passage from the documentation:
Using the @shared_task decorator
The tasks you write will probably live in reusable apps, and reusable apps cannot depend on the project itself, so you also cannot import your app instance directly.
Basically, what I was doing was this...
####
# project/coolapp/tasks.py -- DON'T DO THIS
import os

from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings")

app = Celery("coolapp")
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()

@app.task(bind=True)
def some_task(self, some_id):
    from coolapp.models import CoolPerson

####
# project/coolapp/__init__.py -- DON'T DO THIS
from __future__ import absolute_import, unicode_literals

from .tasks import app as celery_app

__all__ = ("celery_app",)
Therefore, I was getting weird errors about missing app labels (a clear indication of a circular import).
The solution...
Refactor project/coolapp/tasks.py -> project/project/tasks.py and project/coolapp/__init__.py -> project/project/__init__.py.
IMPORTANT: This must not (and need not) be added to INSTALLED_APPS; otherwise, you'll get the circular import back.
Then, to start the worker:
celery -A project.project worker -l INFO
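A sketch of what the refactored files might look like (inferred from the description above, not verbatim from the answer; the tasks themselves stay in the reusable app and use @shared_task):

####
# project/project/tasks.py -- the Celery instance lives at project level now
import os

from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings")

app = Celery("coolapp")
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()

####
# project/coolapp/tasks.py -- the reusable app no longer touches the instance
from celery import shared_task

@shared_task
def some_task(some_id):
    from coolapp.models import CoolPerson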
Also, a little debugging tip...
When you want to find out if your tasks are properly discovered, put this in project/project/app.py:
app.autodiscover_tasks()
assert "project.app.tasks.some_task" in app.tasks
Otherwise, you'll have to start up the worker only to realize your tasks aren't included in the app, and then you'll have to wait for it to shut down.

Where to put Django startup code?

I'd like to have these lines of code executed on server startup (both development and production):
from django.core import management
management.call_command('syncdb', interactive=False)
Putting it in settings.py doesn't work, as it requires the settings to be loaded already.
Putting them in a view and accessing that view externally doesn't work either, as there are some middlewares that use the database and those will fail and not let me access the view.
Putting them in a middleware would work, but that would get called each time my app is accessed. A possible solution might be to create a middleware that does all the work and then removes itself from MIDDLEWARE_CLASSES so it's not called anymore. Can I do that without too much monkey-patching?
Write middleware that does this in __init__ and afterwards raise django.core.exceptions.MiddlewareNotUsed from the __init__; Django will then remove the middleware for all requests. :) By the way, __init__ is called at startup, not at the first request, so it won't block your first user.
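A minimal sketch of that trick (the class name and the startup work are placeholders, shown in the old MIDDLEWARE_CLASSES style this question is about):

from django.core import management
from django.core.exceptions import MiddlewareNotUsed

class StartupMiddleware(object):
    def __init__(self):
        # runs once, when the middleware chain is first built at startup
        management.call_command('syncdb', interactive=False)
        # tell Django to discard this middleware for all requests
        raise MiddlewareNotUsed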
There is talk about adding a startup signal, but that won't be available soon (a major problem, for example, is deciding when this signal should be sent).
Related ticket: https://code.djangoproject.com/ticket/13024
Update: Django 1.7 includes support for this (see the documentation linked from the ticket).
In Django 1.7+, if you want to run startup code and:
1. avoid running it in migrate, makemigrations, shell sessions, ...
2. avoid running it twice or more
a solution would be:
file: myapp/apps.py
from django.apps import AppConfig

def startup():
    # startup code goes here
    pass

class MyAppConfig(AppConfig):
    name = 'myapp'
    verbose_name = "My Application"

    def ready(self):
        import os
        # RUN_MAIN is set by the dev server's autoreloader; this guard
        # prevents startup() from running twice under runserver.
        if os.environ.get('RUN_MAIN'):
            startup()
file: myapp/__init__.py
default_app_config = 'myapp.apps.MyAppConfig'
This post uses suggestions from @Pykler and @bdoering.
If you were using Apache/mod_wsgi for both, use the WSGI script file described in:
http://blog.dscpl.com.au/2010/03/improved-wsgi-script-for-use-with.html
Add what you need after language translations are activated.
Thus:
import sys
sys.path.insert(0, '/usr/local/django/mysite')
import settings
import django.core.management
django.core.management.setup_environ(settings)
utility = django.core.management.ManagementUtility()
command = utility.fetch_command('runserver')
command.validate()
import django.conf
import django.utils
django.utils.translation.activate(django.conf.settings.LANGUAGE_CODE)
# Your line here.
django.core.management.call_command('syncdb', interactive=False)
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
You can create a custom command and write your code in its handle function; details here: https://docs.djangoproject.com/en/dev/howto/custom-management-commands/
Then you can create a startup script that runs the Django server and then executes your new custom command.
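A minimal sketch of such a command (the app and command names are placeholders), following the management-command pattern shown earlier:

# myapp/management/commands/startup.py -- hypothetical name
from django.core import management
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = 'run one-time startup work'

    def handle(self, *args, **options):
        # the syncdb call from the question; swap in your own startup code
        management.call_command('syncdb', interactive=False)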
If you are using mod_wsgi, you can put it in the WSGI script file.
Here is how I work around the missing startup signal for Django:
https://github.com/lsaffre/djangosite/blob/master/djangosite/models.py
The code being called there is specific to my djangosite project, but the trick of getting it called by writing a special app (based on an idea by Ross McFarland) should work for other environments.
Luc
