Duplicated models with Django admin - python

For a specific model I use the Django admin interface.
I implemented custom validation (clean methods) and a custom save method.
So I have something like this:
class DailyActivitiesAdmin(admin.ModelAdmin):
    form = MyCustomFormForm

    def save_model(self, request, obj, form, change):
        .... my custom save ....

class MyCustomFormForm(forms.ModelForm):
    ....
    def clean(self):
        ... my custom validation ...

    def clean_my_field(self):
        ... my custom field validation ...
My question is:
Do I have to manage the transaction explicitly from validation through to saving the model, or is atomicity already handled by the Django admin?
A customer reported a bug about this to me: in my clean validation I implemented a check to avoid similar models, yet he is sometimes able to create duplicated models. I think he probably clicks the save button more than once, and probably has a slow internet connection.
Is that a possible scenario? Can I avoid it? For example, can I disable the save buttons while a save request is in progress?
Can I guarantee atomicity in some way if it is not already managed?
PS: I use Python 3, Django 2 and Postgres.

You have to lock rows for update explicitly. Use transaction.atomic() and select_for_update(). Here is an example:

from time import sleep
from django.db import transaction

@transaction.atomic
def update_bank_account():
    # Another call to update_bank_account will block until the first one is finished
    account = BankAccount.objects.select_for_update().get(id=123)
    sleep(120)
    account.usd += 100
    account.save()
Docs:
https://docs.djangoproject.com/en/2.1/topics/db/transactions/
https://docs.djangoproject.com/en/2.1/ref/models/querysets/#select-for-update
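Locking aside, a database-level unique constraint is worth adding as a safety net: even if two save requests race past the Python-level check in clean(), the database will reject the second insert. A minimal stdlib sqlite3 sketch of the idea (the daily_activity table and its columns are invented for illustration; in Django you would declare this with unique_together, or a UniqueConstraint in newer versions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE daily_activity (
        id INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL,
        activity_date TEXT NOT NULL,
        UNIQUE (user_id, activity_date)   -- the duplicate guard
    )
""")

def create_activity(user_id, activity_date):
    """Insert a row; return True on success, False if it already exists."""
    try:
        with conn:  # wraps the insert in a transaction
            conn.execute(
                "INSERT INTO daily_activity (user_id, activity_date) VALUES (?, ?)",
                (user_id, activity_date),
            )
        return True
    except sqlite3.IntegrityError:
        # a second click on "save" lands here instead of creating a duplicate
        return False

print(create_activity(1, "2019-01-01"))  # True
print(create_activity(1, "2019-01-01"))  # False: rejected by the constraint
```

With the constraint in place, the worst a double-click can do is surface an IntegrityError, which the form layer can turn into a validation message instead of a second row.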

In my clean validation I implemented a check to avoid similar models; sometimes he can create duplicated models.

This sounds like an issue I once had. Make sure save() isn't being called from within your clean method.

Related

Pass additional attributes with Django signals

Is there any way to pass additional parameters to an instance I'm saving to the DB, so that I can access them after the instance is saved?
Brief example of my case:
I'm using Django's signals as triggers for events, like sending a confirmation email, that are executed by other processes, like workers.
I want to specify which instances should trigger the event and which should not: sometimes I want created/updated records to trigger a series of events, and sometimes I want them to be processed silently or to perform some other actions.
One solution is saving the desired behaviour for a specific instance in a model field such as a JSONField and recovering that behaviour in post_save, but this seems a very ugly way of handling such a problem.
I'm using the post_save signal as verification that the instance was correctly saved in the DB, because I don't want to trigger the event and then a moment later have something go wrong while saving the instance.
Instances are saved through Django forms, backend routines and REST framework serializers.
One solution is to use an arbitrary attribute on the model instance (not a field) to store the desired state. For example:

def my_view(request):
    ...
    instance._send_message = True if ... else False
    instance.save()

@receiver(post_save, sender=MyModel)
def my_handler(sender, instance, **kwargs):
    if instance._send_message:
        ...
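The reason this works: a signal receiver gets the very same instance object the caller saved, so any attribute attached to it in the view is still there in the handler; it just never touches the database. A stripped-down, framework-free sketch of the mechanism (this Signal class is a toy stand-in for django.dispatch.Signal, not Django's real implementation):

```python
class Signal:
    """Toy dispatcher: receivers get the exact instance that was sent."""
    def __init__(self):
        self._receivers = []

    def connect(self, receiver):
        self._receivers.append(receiver)

    def send(self, sender, instance):
        for receiver in self._receivers:
            receiver(sender=sender, instance=instance)

post_save = Signal()
sent_messages = []

def my_handler(sender, instance):
    # the ad-hoc attribute survives because it's the same Python object
    if getattr(instance, "_send_message", False):
        sent_messages.append(instance)

post_save.connect(my_handler)

class MyModel:
    def save(self):
        # stand-in for what Django does after the DB write
        post_save.send(sender=MyModel, instance=self)

quiet = MyModel()
quiet.save()                  # no _send_message: handler stays silent

noisy = MyModel()
noisy._send_message = True    # set in the view, never stored in the DB
noisy.save()

print(len(sent_messages))  # 1
```

Using getattr with a default in the handler is worth copying into real code: instances saved by paths that never set the attribute (admin, shell, serializers) won't raise AttributeError.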

Django decorator @transaction.non_atomic_requests not working in a ViewSet method

I recently ran into the need to disable transaction requests in one of my views, in order to be able to call db.connection.close() and connect() during requests in an effort to improve performance.
I have a DRF ViewSet, and used the following very simple view to verify that the non_atomic_requests decorator seems to have no effect. ATOMIC_REQUESTS=True is enabled in settings.py, and DEBUG=False.
from django.db import transaction

@transaction.non_atomic_requests
def create(self, *args, **kwargs):
    m = MyModel(stuff="hello")
    m.save()
    raise Exception('exception! row should still be saved though')
    return Response()
After calling the view, I open the Django shell and verify that the number of rows in the db has not grown, even though it should have. Also, opening a debugger during the request to halt execution after the line m.save(), I can observe in the Django shell that the new row is not visible yet.
If I set ATOMIC_REQUESTS=False in settings.py, the code works as expected, and the number of rows in the db grows by one, even though an error is raised before returning from the view.
When ATOMIC_REQUESTS=False, using the @transaction.atomic decorator does work as expected, though. So as a workaround, I could use it to mark every other view as atomic instead...
I am currently thinking this is a bug in the framework. Can anybody verify my findings, or point out whether I am misunderstanding how this decorator is meant to function?
I am using Python 3.6, Django 2.0 and DRF 3.7.7.
As documented, non_atomic_requests only works if it's applied to the view itself.
In your case, create is a viewset method; it is not the view itself. With a regular class-based view in Django, you'd need to wrap the dispatch method using method_decorator:

@method_decorator(transaction.non_atomic_requests, name='dispatch')
class MyViewSet(ViewSet):
    ...
    def create(self, *args, **kwargs):
        ...

I'm not familiar enough with the rest framework internals to say whether this will work or not. Note that it will disable atomic requests for all views handled by the viewset, not just the create method.
The non_atomic_requests decorator has this limitation because the Django request handler has to inspect the view before it runs, so that it knows whether to run it in a transaction. The transaction.atomic decorator does not have the same requirement: Django can simply start the transaction as soon as it enters an atomic function or block.
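To illustrate the "inspect before running" point: non_atomic_requests essentially stamps a marker attribute on the view function, and the request handler reads that marker before deciding whether to open a transaction. A framework-free sketch of that pattern (the names non_atomic, handle_request and _non_atomic are invented here; Django's real internals differ in detail):

```python
def non_atomic(view):
    """Mark a view so the handler skips the transaction wrapper."""
    view._non_atomic = True   # just a flag; nothing happens at call time
    return view

def handle_request(view):
    """Toy request handler: inspects the view *before* running it."""
    if getattr(view, "_non_atomic", False):
        return view()                       # run bare, no transaction
    log = ["BEGIN"]                         # stand-in for opening a transaction
    log.append(view())
    log.append("COMMIT")
    return log

@non_atomic
def raw_view():
    return "raw result"

def normal_view():
    return "normal result"

print(handle_request(raw_view))     # raw result
print(handle_request(normal_view))  # ['BEGIN', 'normal result', 'COMMIT']
```

This is also why decorating a viewset method achieves nothing: the handler only ever inspects the view callable it dispatches to, so a flag sitting on the create method is never seen.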
In case you are using a db other than 'default', you need to pass the 'using' argument explicitly; otherwise it defaults to 'default':

transaction.non_atomic_requests(using='db_name')

In the case of class-based views, either apply it on dispatch in views:

@method_decorator(transaction.non_atomic_requests(using='db_name'), name='dispatch')
class MyViewSet(ViewSet):
    ...

or apply it on the as_view method in urls:

path(r'endpoint/', transaction.non_atomic_requests(using='db_name')(MyViewSet.as_view()), name='myview')

Django - Use signals to refresh another model's fields

I have two models, one of which uses data from the other model to populate its own fields. The issue is that when the first model is updated, the second model does not also update its own fields. I have to go in and actually edit/save the 2nd model for its fields to update.
Something like this:
models.py:
class ModelA(models.Model):
    ...

class ModelB(models.Model):
    count_number_of_model_A = models.IntegerField()

    def save(self, *args, **kwargs):
        self.count_number_of_model_A = ModelA.objects.all().count()
        super(ModelB, self).save(*args, **kwargs)
(this is a simplified version of what I'm trying to do)
Now I want the field "count_number_of_model_A" in ModelB to update every time ModelA is altered. Right now, it only refreshes if I actually modify+save ModelB.
I think the answer is to use signals (maybe?). I'm trying to set up a signal so that ModelB updates whenever a new object is created in ModelA. I have the following:
@receiver(post_save, sender=ModelA)
def update_sends(sender, **kwargs):
    if kwargs.get('created', False):
        # some code here to refresh ModelB??

The signal is functioning properly: if I put in something like ModelB.objects.filter(some filter).update(some field), those changes are reflected when I go in and create a new ModelA object. But the model's save() is not run, and the field in question that I'm after ("count_number_of_model_A") does not refresh.
Any help?
Just use:

for model_b in ModelB.objects.filter(<some_filter>):
    model_b.save()

But you should be aware that this pulls all (filtered) objects into Django, does something with them there, and saves them back to the database. This is much slower than using query expressions. Those take a little more work to set up, but they run much faster, especially as the database grows.
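The query-expression idea boils down to letting the database compute the new values itself instead of round-tripping every row through Python. In Django that would be something like a Subquery/Count expression inside .update(); the SQL it compiles to looks roughly like this stdlib sqlite3 sketch (table and column names are invented to mirror the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE model_a (id INTEGER PRIMARY KEY);
    CREATE TABLE model_b (id INTEGER PRIMARY KEY,
                          count_number_of_model_A INTEGER);
    INSERT INTO model_a (id) VALUES (1), (2), (3);
    INSERT INTO model_b (id, count_number_of_model_A) VALUES (1, 0), (2, 0);
""")

# One statement updates every model_b row; no objects are pulled into Python.
conn.execute("""
    UPDATE model_b
    SET count_number_of_model_A = (SELECT COUNT(*) FROM model_a)
""")

print(conn.execute("SELECT count_number_of_model_A FROM model_b").fetchall())
# [(3,), (3,)]
```

The trade-off matches the answer above: the loop-and-save version runs your custom save() and signals for every row, while the set-based version skips both but does all the work in a single statement.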

django assign value to positiveIntegerField programmatically

I am working on a little Django application whose model contains a TextField and a PositiveIntegerField.
I need the PositiveIntegerField to be populated with the number of words in the TextField.
This could be done via custom JavaScript code that counts the words in the TextField's text area and puts the value into the word-count text box before submitting the form, but I do not want to play with custom JavaScript in the admin.
How do I assign a value to a PositiveIntegerField programmatically?
This can be achieved using a pre_save signal.
Create a signal function like this:

def calculate_wordcount(sender, instance, **kwargs):
    # count the words...
    instance.word_count = # your count from the line above

Then attach this function to your model. The preferred way since Django 1.7 is the application configuration (see the docs). You can attach your function in the AppConfig's ready() method:

def ready(self):
    pre_save.connect(calculate_wordcount,
                     sender= ,  # your model
                     weak=False,
                     dispatch_uid='models.your_model.wordcount')

I'll leave all necessary imports up to you; please comment if you need further direction!
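For the counting itself, str.split() is usually enough. A small helper that the signal function above could call (note the caveat: this splits on whitespace only, so punctuation attached to a word counts as part of that word):

```python
def count_words(text):
    """Count whitespace-separated words; empty or blank text counts as 0."""
    return len(text.split())

print(count_words("Django makes word counting easy"))  # 5
print(count_words("   "))                              # 0
```

Calling split() with no arguments also collapses runs of spaces, tabs and newlines, so "one  two" still counts as two words.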
While in general I think save signals are a reasonable idea, in this particular case you can also override .save() yourself:

class MyClass(models.Model):
    [stuff]

    def save(self, *args, **kwargs):
        self.wordcount = # something that calculates wordcount
        super(MyClass, self).save(*args, **kwargs)  # call Django's save!

(As a rule of thumb, I'll override save() if I'm just doing things to one model, but use a signal if I'm using one model to affect another. The idea is to keep all the things affecting a model near it in the code. But this gets into personal philosophy.)
Also, a WARNING: no matter which version you use, the signal or the overridden save(), bulk operations such as bulk_create() and update() will not send the signal or call your custom save method. See the documentation here: https://docs.djangoproject.com/en/1.8/topics/db/models/#overriding-predefined-model-methods

UPDATE doesn't use the Model save method

If I do the following:

obj = Model.objects.get(pk=2)
obj.field = 'new value'
obj.save()

it runs the custom save method that I have written in Django.
However, if I do a normal update statement:

Model.objects.filter(pk=2).update(field='new value')

it does not use the custom save method. My question here is two-fold:
1) Why was that decision made in Django -- why doesn't every 'save' go through the custom save method?
2) Is there a codebase-wide way to make sure that no update statements are ever made? Basically, I just want to ensure that the custom save method is always run whenever doing a save within the Django ORM. How would this be possible?
I'm not a Django developer, but I dabble from time to time, and no one else has answered yet.
Why was that decision made in Django -- why doesn't every 'save' go through the custom save method?
I'm going to guess here that this was done as a speed optimization for the common case of just performing a bulk update. update works at the SQL level, so it is potentially much faster than calling save on lots of objects, each one being its own database transaction.
Is there a codebase-wide way to make sure that no update statements are ever made? Basically, I just want to ensure that the custom save method is always run whenever doing a save within the Django ORM. How would this be possible?
You can use a custom manager with a custom QuerySet that raises some Exception whenever update is called. Of course, you can always loop over the QuerySet and call save on each object if you need the custom behavior.
Forbidding Update on a Model

from django.db import models

class NoUpdateQuerySet(models.QuerySet):
    """Don't let people call update! Muahaha"""
    def update(self, **kwargs):
        # you should raise a more specific Exception.
        raise Exception('You shall not update; use save instead.')

class Person(models.Model):
    first_name = models.CharField(max_length=50)
    last_name = models.CharField(max_length=50)

    # setting the custom manager keeps people from calling update.
    objects = NoUpdateQuerySet.as_manager()

You would just need to set NoUpdateQuerySet as the manager for each model you don't want updated. I don't really think it's necessary to set a custom QuerySet, though; if it were me, I would just not call update, and loop through the objects that need to be saved whenever I need to. You may find a time when you want to call update, and this would end up being very annoying.
Forbidding Update Project-Wide

If you really, really decide you hate update, you can just monkey-patch the update method. Then you can be completely certain it's not being called. You can monkey-patch it in your project's settings.py, since you know that module will be imported:

def no_update(self, **kwargs):
    # probably want a more specific Exception
    raise Exception('NO UPDATING HERE')

from django.db.models.query import QuerySet
QuerySet.update = no_update

Note that the traceback will actually be pretty confusing, since it will point to a function in settings.py. I'm not sure how much, if ever, update is used by other apps; this could have unintended consequences.
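The mechanics of that patch, and why it reaches every caller in the process, can be seen in a framework-free sketch (DummyQuerySet is a stand-in for Django's QuerySet, used here so the example runs without Django):

```python
class DummyQuerySet:
    """Stand-in for django.db.models.query.QuerySet."""
    def update(self, **kwargs):
        return "updated"

def no_update(self, **kwargs):
    # every instance in the process now hits this instead
    raise Exception("NO UPDATING HERE")

# the monkey-patch: rebinding the method on the class itself
DummyQuerySet.update = no_update

qs = DummyQuerySet()
try:
    qs.update(field="new value")
except Exception as exc:
    print(exc)  # NO UPDATING HERE
```

Because the patch rebinds an attribute on the class rather than on one instance, it applies to every QuerySet created anywhere in the process, which is exactly why third-party apps would be affected too.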
