Django - Use signals to refresh another model's fields

I have two models, one of which uses data from the other to populate its own fields. The issue is that when the first model is updated, the second model's fields are not updated as well; I have to go in and actually edit/save the second model for its fields to update.
Something like this:
models.py:
class ModelA(models.Model):
    ...

class ModelB(models.Model):
    count_number_of_model_A = models.IntegerField()

    def save(self, *args, **kwargs):
        self.count_number_of_model_A = ModelA.objects.all().count()
        super(ModelB, self).save(*args, **kwargs)
(this is a simplified version of what I'm trying to do)
Now I want the field "count_number_of_model_A" in ModelB to update every time ModelA is altered. Right now, it only refreshes if I actually modify+save ModelB.
I think the answer is to use signals (maybe?). I'm trying to set up a signal so that ModelB updates whenever a new object is created in ModelA. I have the following:
from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save, sender=ModelA)
def update_sends(sender, **kwargs):
    if kwargs.get('created', False):
        pass  # some code here to refresh ModelB??
The signal is functioning properly: if I put in something like ModelB.objects.filter(some filter).update(some field), those changes are reflected when I create a new ModelA object. But the rest of the model does not update, and the field I'm actually after ("count_number_of_model_A") does not refresh.
Any help?

Just use:
for model_b in ModelB.objects.filter(<some_filter>):
    model_b.save()
But you should be aware that this pulls all (filtered) objects into Django, does something with them there, and saves them back to the database. That is much slower than using query expressions. They take a little more work to set up, but run much faster, especially as the database grows. A minimal sketch follows.
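For the counter field from the question, one bulk alternative is to issue a single UPDATE from the signal handler, so no ModelB instances are loaded into memory at all. A sketch, assuming the ModelA/ModelB definitions above (the handler name is made up):

from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save, sender=ModelA)
def refresh_model_b_counts(sender, instance, created, **kwargs):
    if created:
        # One UPDATE statement for all rows; no ModelB instances are
        # pulled into Python.
        ModelB.objects.update(count_number_of_model_A=ModelA.objects.count())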

Related

How to create model objects of one model automatically when a model object of another has been added in django

Say, for example, I have a model called Player. I want Player objects to be created whenever new Users are added in django (from django.contrib.auth.models.User), so that each User object has its own object in the Player model. I know that I would technically have to use models.ForeignKey() to create a field in the Player model relating it to the User model, and that I don't have to worry about deletion of users if I use the on_delete=models.CASCADE parameter in models.ForeignKey(). But how do I automatically create these objects, with default values and such?
Initially this was my code to do so:
for name in User.objects.all().values_list('username', flat=True):
    if name not in Player.objects.all().values_list('player_name', flat=True):
        new_player = Player(player_name=name, current_level_no=None, no_of_moves=None)
        new_player.save()
But this gives me a DatabaseIntegrityError saying "Models aren't loaded yet", so I am confused about what to do or how to go forward.
As always, I greatly appreciate all answers!
Check out Django signals, specifically the post_save signal. Essentially, Django lets you define a function that will be run whenever your User model is saved. You'd use it something like this:
from django.contrib.auth.models import User
from django.db.models.signals import post_save

def create_player(sender, instance, created, **kwargs):
    if created:
        Player.objects.create(user=instance)  # instance is the User object

post_save.connect(create_player, sender=User)
Note that if you already have User objects in your database, this won't create Player objects for them; you'll want to backfill those yourself by running a loop similar to the one you posted in the Django shell (python manage.py shell). The signal will run on every subsequent save of a User model. A sketch of such a backfill follows.
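A minimal backfill sketch, assuming Player relates to User through a field named user (the app path myapp is hypothetical); run it once in the shell:

from django.contrib.auth.models import User
from myapp.models import Player  # hypothetical app path

# Create a Player for every User that doesn't have one yet.
for user in User.objects.filter(player__isnull=True):
    Player.objects.create(user=user)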

Duplicated models with Django admin

For a specific model I use the Django admin interface.
I implemented custom validation (clean methods) and a custom save method.
So, I have something like this:
class DailyActivitiesAdmin(admin.ModelAdmin):
    form = MyCustomFormForm

    def save_model(self, request, obj, form, change):
        .... my custom save ....

class MyCustomFormForm(forms.ModelForm):
    ....

    def clean(self):
        ... my custom validation ...

    def clean_my_field(self):
        ... my custom field validation ...
My question is:
Do I have to manage the transaction explicitly, from validation through to saving the model, or is atomicity already handled by the Django admin?
A customer reported a bug about it:
In my clean validation I implemented a check to avoid similar models, yet sometimes he can still create duplicates. I think he probably clicks the save button more than once, probably on a slow internet connection.
Is this a possible scenario? Can I avoid it? For example, can I disable the save buttons while the save request is running?
Can I guarantee atomicity in some way if it is not already managed?
PS: I use Python 3, Django 2 and Postgres.
You have to block rows for updates explicitly. Use transaction.atomic() and select_for_update(). Here is an example:
from time import sleep

from django.db import transaction

@transaction.atomic
def update_bank_account():
    # Another call to update_bank_account will block until the first one is finished
    account = BankAccount.objects.select_for_update().get(id=123)
    sleep(120)
    account.usd += 100
    account.save()
Docs:
https://docs.djangoproject.com/en/2.1/topics/db/transactions/
https://docs.djangoproject.com/en/2.1/ref/models/querysets/#select-for-update
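Applied to the admin from the question, a sketch of how this could look; the DailyActivities model name is an assumption inferred from the admin class:

from django.contrib import admin
from django.db import transaction

class DailyActivitiesAdmin(admin.ModelAdmin):
    form = MyCustomFormForm

    @transaction.atomic
    def save_model(self, request, obj, form, change):
        # Lock the existing rows so a concurrent double-click blocks
        # until this transaction commits. Note select_for_update() only
        # locks rows that already exist; for a hard guarantee against
        # duplicates, also add a database unique constraint.
        list(DailyActivities.objects.select_for_update().all())
        super().save_model(request, obj, form, change)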
"In my clean validation I implemented a check to avoid similar models; sometimes he can still create duplicates."
This sounds like an issue I had. Make sure save() isn't being called from within your clean function.

What is the best way to implement persistent data model in Django?

What I need is basically a database model with version control. So that every time a record is modified/deleted, the data isn't lost, and the change can be undone.
I've been trying to implement it myself with something like this:
from django.db import models

class AbstractPersistentModel(models.Model):
    time_created = models.DateTimeField(auto_now_add=True)
    time_changed = models.DateTimeField(null=True, default=None)
    time_deleted = models.DateTimeField(null=True, default=None)

    class Meta:
        abstract = True
Then every model would inherit from AbstractPersistentModel.
Problem is, if I override save() and delete() to make sure they don't actually touch the existing data, I'll still be left with the original object, and not the new version.
After trying to come up with a clean, safe and easy-to-use solution for some hours, I gave up.
Is there some way to implement this functionality that isn't overwhelming?
It seems like a common enough problem that I thought it would be built into Django itself, or at least that there'd be a well-documented package for it, but I couldn't find any.
When I hear version control for models and Django, I immediately think of django-reversion.
Then, if you want to access the versions of an instance, and not the actual instance, simply use the Version model.
from reversion.models import Version
versions = Version.objects.get_for_object(instance)
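Models have to be registered with django-reversion before versions are recorded, and versions are only saved inside a revision block. A minimal sketch (the Article model is a made-up example):

from django.db import models
import reversion

@reversion.register()
class Article(models.Model):
    title = models.CharField(max_length=100)

# Somewhere in view or shell code:
with reversion.create_revision():
    Article.objects.create(title="first draft")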
I feel you can work around your issue not by modifying your models, but by modifying the logic that accesses them.
You could have two models for the same object: one acting as a staging area, where you store values such as time_created, time_modified and modifying_user. From there, the code in your views goes through that table, selects the records you want according to your design, and stores them in your definitive table. A sketch follows.
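A sketch of that staging idea (all names here are made up):

from django.db import models

class Invoice(models.Model):
    # Definitive table: always holds the current version.
    data = models.TextField()

class InvoiceRevision(models.Model):
    # Staging/history table: one row per change, never updated in place.
    invoice = models.ForeignKey(Invoice, on_delete=models.CASCADE,
                                related_name='revisions')
    data = models.TextField()
    time_created = models.DateTimeField(auto_now_add=True)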

how to add dynamic fields at run time in django

I have to add dynamic fields at run time in my Django application, but I don't know the proper way to add new fields at run time.
I want to add code that will generate the dynamic field and update the database too. I am using a PostgreSQL database. Please help if anyone can.
My "model.py" is simply like this:
class Student(models.Model):
    name = models.CharField(max_length=100)
    school = models.CharField(max_length=100)
    created_at = models.DateField(auto_now_add=True)
    is_active = models.BooleanField(default=False)

    def __str__(self):
        return self.name
Django is not made for dynamic models, and neither are relational databases. A model change at runtime will create a ton of problems.
You have to simulate it, by...
clever use of related models
storing values in a large field, e.g. JSON as text
having a generic model that stores the data as key/value pairs, e.g. a table with a PK, an FK, key and value as columns (a sketch of this option follows)
You should try the first option, and only if that does not work out, try the other two.
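A sketch of the key/value option against the Student model from the question (the StudentAttribute name is made up):

from django.db import models

class StudentAttribute(models.Model):
    student = models.ForeignKey('Student', on_delete=models.CASCADE,
                                related_name='attributes')
    key = models.CharField(max_length=100)
    value = models.TextField()

    class Meta:
        # One value per key per student.
        unique_together = [('student', 'key')]

# Usage:
# student.attributes.create(key='favourite_subject', value='maths')
# student.attributes.get(key='favourite_subject').value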

Nullable ForeignKeys and deleting a referenced model instance

I have a ForeignKey which can be null in my model, to model a loose coupling between the models. It looks somewhat like this:
class Message(models.Model):
    sender = models.ForeignKey(User, null=True, blank=True)
    sender_name = models.CharField(max_length=255)
On save, the sender's name is written to the sender_name attribute. Now I want to be able to delete the User instance referenced by sender and leave the message in place.
Out of the box, this code always results in deleted messages as soon as I delete the User instance. So I thought a signal handler would be a good idea.
from django.db.models.signals import pre_delete

def my_signal_handler(sender, instance, **kwargs):
    instance.message_set.clear()

pre_delete.connect(my_signal_handler, sender=User)
Sadly, it is by no means a solution. Somehow Django first collects what it wants to delete and then fires the pre_delete handler.
Any ideas? Where is the knot in my brain?
Django does indeed emulate SQL's ON DELETE CASCADE behaviour, and there's no out-of-the-box documented way to change this. The docs mention it near the end of the Deleting objects section.
You are right that Django collects all related model instances, then calls the pre-delete handler for each. The sender of the signal will be the model class about to be deleted, in this case Message rather than User, which makes it hard to tell a cascade delete triggered by User apart from a normal delete... especially since the signal for the User deletion comes last, as that's the final deletion :-)
You can, however, get the list of objects that Django proposes to delete before calling User.delete(). Each model instance has a semi-private method called _collect_sub_objects() that compiles the list of instances with foreign keys pointing to it (without deleting them). You can see how this method is called by looking at delete() in django.db.models.base.
If this were one of your own objects, I'd recommend overriding the delete() method on your instance to run _collect_sub_objects() and break the ForeignKeys before calling the superclass delete. Since you're using a built-in Django object that you may find too difficult to subclass (though it is possible to substitute your own User object for Django's), you may have to rely on view logic to run _collect_sub_objects() and break the FKs before deletion.
Here's a quick-and-dirty example:
from django.db.models.query import CollectedObjects

u = User.objects.get(id=1)
instances_to_be_deleted = CollectedObjects()
u._collect_sub_objects(instances_to_be_deleted)

for k in instances_to_be_deleted.ordered_keys():
    inst_dict = instances_to_be_deleted.data[k]
    for i in inst_dict.values():
        i.sender = None  # You will need a more generic way for this
        i.save()

u.delete()
Having just discovered the ON DELETE CASCADE behaviour myself, I see that in Django 1.3 they have made the foreign key behaviour configurable.
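With that change, the Message model from the question no longer needs a signal at all; a SET_NULL foreign key keeps messages when their sender is deleted:

class Message(models.Model):
    # on_delete is configurable from Django 1.3 onward.
    sender = models.ForeignKey(User, null=True, blank=True,
                               on_delete=models.SET_NULL)
    sender_name = models.CharField(max_length=255)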
