I am using Django and Django Rest Framework with the Serializer Extensions Mixin to expand fields. I have some calculated fields that I only want to call sometimes to minimize hits on my DB. However, I need to be able to call these calculations in both templates (i.e. through the models) and through the serializers (i.e. using DRF's serializers.MethodField + Serializer Extensions Mixin expand feature).
As it stands, the only way I can see to do this is to duplicate the logic in both models.py AND serializers.py, because I can't get serializers.SerializerMethodField to call the method I created in models.py. Not very DRY, and a big potential source of bugs.
When I try to call the method via serializers.SerializerMethodField, it simply returns the property object instead of running it (e.g. "<property object at 0x7f18d78de9a8>").
Is there any way to get DRF to run a method that lives in models.py, but only when triggered? If I include it as a serializers.ReadOnlyField, it runs every time the serializer is used, which I don't want. And the Serializer Extensions Mixin doesn't support serializers.ReadOnlyField.
I suppose I could make a serializer specifically for this instance, but that seems overly complicated.
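For reference, here's a stripped-down sketch of what I'm trying to do (model and field names are made up, and I've left out the extensions-mixin wiring): the calculation lives on the model as a property, and the serializer reads it from the instance via a SerializerMethodField.
from django.db import models
from rest_framework import serializers

class Order(models.Model):  # hypothetical stand-in for my real model
    subtotal = models.DecimalField(max_digits=8, decimal_places=2)
    tax = models.DecimalField(max_digits=8, decimal_places=2)

    @property
    def expensive_total(self):
        # stands in for a calculation that would normally hit the DB
        return self.subtotal + self.tax

class OrderSerializer(serializers.ModelSerializer):
    expensive_total = serializers.SerializerMethodField()

    class Meta:
        model = Order
        fields = ['id', 'expensive_total']

    def get_expensive_total(self, obj):
        # read the property off the instance, not the class; accessing
        # Order.expensive_total is what returns "<property object at ...>"
        return obj.expensive_total
Templates can use the same property directly with {{ order.expensive_total }}, so the logic only lives in models.py.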
Any ideas? Thank you in advance!
Related
Being new to Django REST Framework, I often get puzzled about what exactly the point of a viewset is when we can override CRUD methods in serializers too. Also, how is overriding CRUD methods in serializers different from overriding them in viewsets?
Technically, you can overwrite whatever you like wherever you like. The whole thing is just a convention.
The main idea is separation of concerns.
When you override your views, it's for the purpose of pre-processing the incoming request.
When you override your serializers, it's because you want to change how the incoming data is deserialized and stored in your system (or how your data is serialized to be shown to the front-end).
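As a rough sketch of that split (the Article model and the serializer/viewset names are made up):
from django.db import models
from rest_framework import serializers, viewsets

class Article(models.Model):  # hypothetical model for illustration
    title = models.CharField(max_length=200)
    body = models.TextField()

class ArticleSerializer(serializers.ModelSerializer):
    class Meta:
        model = Article
        fields = ['id', 'title', 'body']

    def create(self, validated_data):
        # serializer concern: how validated data becomes a stored object
        validated_data['title'] = validated_data['title'].strip()
        return super().create(validated_data)

class ArticleViewSet(viewsets.ModelViewSet):
    queryset = Article.objects.all()
    serializer_class = ArticleSerializer

    def create(self, request, *args, **kwargs):
        # view concern: pre-processing the incoming request itself
        # (permissions, headers, logging) before handing off to the serializer
        return super().create(request, *args, **kwargs)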
Django Rest Framework serializers do not call the Model.clean when validating model serializers. The explanation given is that this leads to 'cleaner separation of concerns', from the Django Rest Framework 3.0 release notes:
Differences between ModelSerializer validation and ModelForm.
This change also means that we no longer use the .full_clean() method
on model instances, but instead perform all validation explicitly on
the serializer. This gives a cleaner separation, and ensures that
there's no automatic validation behavior on ModelSerializer classes
that can't also be easily replicated on regular Serializer classes.
But what concerns are the authors of Django Rest Framework attempting to separate?
My guess is that they're saying that a model instance should not be concerned about its own validity. If that's the case, I don't understand why.
There are two major issues with the model's "full_clean".
The first one is technical. There are a couple of cases where the full_clean isn't called at all. For example, you'll bypass it when you do a queryset.update().
The second one is that if you have a complex business logic - which is usually why you'll have a full_clean - it's likely that you should do the validation in the business logic, not go down to the models to validate.
Each layer should be responsible for its own consistency and the storage layer - ie models - shouldn't care about the business layer.
Another thing I can think of is that full_clean would only be called once you have a model instance, which is after the serializer has already done its validation. At that point things start getting messy, because you have a two-step validation with an object created in between.
If you're using nested serializers you might be stuck here, because you can't create the nested models before the primary model has been saved, which makes the full_clean call even messier: some objects will have been created, others not. It's hard to figure out when and which objects should be validated with their full_clean, and you can be sure there would be plenty of complaints from users when they override update/clean and discover that full_clean hasn't been called for every model.
This started becoming a total headache and we prefer to keep things simpler and more explicit.
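If you really do want the model-level checks, the usual workaround is to opt back in explicitly from the serializer. A rough sketch for the create path (the Book model and its clean() rule are made up; updates and nested writes need more care):
from django.core.exceptions import ValidationError as DjangoValidationError
from django.db import models
from rest_framework import serializers

class Book(models.Model):  # hypothetical model
    title = models.CharField(max_length=200, unique=True)

    def clean(self):
        if self.title.islower():
            raise DjangoValidationError({'title': 'Title must not be all lowercase.'})

class BookSerializer(serializers.ModelSerializer):
    class Meta:
        model = Book
        fields = '__all__'

    def validate(self, attrs):
        # Run model-level validation explicitly; DRF won't call full_clean() for us.
        instance = Book(**attrs)
        try:
            instance.full_clean(exclude=['id'])
        except DjangoValidationError as exc:
            raise serializers.ValidationError(exc.message_dict)
        return attrs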
I think Django's model validation is a little inconvenient for models that don't use the built-in ModelForm, though I don't know why it was designed this way.
Firstly, full_clean() needs to be called manually.
Note that full_clean() will not be called automatically when you call
your model's save() method, nor as a result of ModelForm validation. In
the case of ModelForm validation, Model.clean_fields(), Model.clean(),
and Model.validate_unique() are all called individually. You'll need to
call full_clean manually when you want to run one-step model
validation for your own manually created models.
Secondly, validators are used in built-in ModelForm.
Note that validators will not be run automatically when you save a
model, but if you are using a ModelForm, it will run your validators
on any fields that are included in your form.
There is often a need to validate data before saving it to the database, and obviously I'd prefer to do that in the model rather than in the views. So, are there any good ways to implement this gracefully in Django 1.5?
Even though the idea of enforcing validation on Model level seems right, Django does not do this by default for various reasons. Except for some backward-compatibility problems, the authors probably don't want to support this because they fear this could create a false feeling of safety when in fact your data are not guaranteed to be always validated. Some ORM methods (e.g. bulk_create or update) don't call save() and thus are unable to validate your models.
In other words, it is hard to guarantee the validation, thus they've decided not to pretend it.
If you need this for multiple models, you can create a simple mixin that overrides the save() method and calls full_clean() before calling super. Do note that this might cause the validation to run twice in some cases, such as when using a ModelForm. It might not be much of an issue, though, if your validation routines are side-effect free and cheap to run.
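A sketch of such a mixin (class names are illustrative; the super call is written Python 2 style since the question mentions Django 1.5):
from django.db import models

class ValidateOnSaveMixin(models.Model):
    """Call full_clean() before every save(). Bulk operations still bypass it."""

    class Meta:
        abstract = True

    def save(self, *args, **kwargs):
        self.full_clean()
        super(ValidateOnSaveMixin, self).save(*args, **kwargs)

class Invoice(ValidateOnSaveMixin):  # hypothetical model using the mixin
    number = models.CharField(max_length=20, unique=True)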
For more info, please see these answers:
https://stackoverflow.com/a/4441740/2263517
https://stackoverflow.com/a/12945692/2263517
https://stackoverflow.com/a/13039057/2263517
I am using the standard User model (django.contrib.auth) which comes with Django. I have made some of my own models in a Django application and created a relationship between like this:
from django.db import models
from django.contrib.auth.models import User
class GroupMembership(models.Model):
    user = models.ForeignKey(User, null=True, blank=True, related_name='memberships')
    # other irrelevant fields removed from example
So I can now do this to get all of a user's current memberships:
user.memberships.all()
However, I want to be able to do a more complex query, like this:
user.memberships.all().select_related('group__name')
This works fine but I want to fetch this data in a template. It seems silly to try to put this sort of logic inside a template (and I can't seem to make it work anyway), so I want to create a better way of doing it. I could sub-class User, but that doesn't seem like a great solution - I may in future want to move my application into other Django sites, and presumably if there was any another application that sub-classed User I wouldn't be able to get it to work.
Is the best to create a method inside GroupMembership called something like get_by_user(user)? Would I be able to call this from a template?
I would appreciate any advice anybody can give on structuring this - sorry if this is a bit long/vague.
First, calling select_related with arguments doesn't change which objects you get back; it's a hint that the related-object cache should be populated up front.
You would never call select_related in a template, only a view function. And only when you knew you needed all those related objects for other processing.
"Is the best to create a method inside GroupMembership called something like get_by_user(user)?"
You have this. I'm not sure what's wrong with it.
GroupMembership.objects.filter(user=some_user)  # some_user is a User instance
"Would I be able to call this from a template?"
No. That's what view functions are for.
groups = GroupMembership.objects.filter(user=some_user)
Then you provide the groups object to the template for rendering.
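Roughly like this (the view and template names are just placeholders):
from django.shortcuts import render
from myapp.models import GroupMembership  # adjust the import to your app

def membership_list(request):
    # Build the queryset in the view, then hand it to the template.
    groups = GroupMembership.objects.filter(user=request.user).select_related('group')
    return render(request, 'memberships.html', {'memberships': groups})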
Edit
This is one line of code; it doesn't seem that onerous a burden to include this in all your view functions.
If you want this to appear on every page, you have lots of choices that do not involve repeating this line of code.
A view function can call another function.
You might want to try callable objects instead of simple functions; these can subclass a common callable object that fills in this information.
You can add a template context processor to put this into the context of all templates that are rendered (see the sketch after this list).
You could write your own decorator to assure that this is done in every view function that has the decorator.
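For the context processor option, a minimal sketch (module path and import are hypothetical; you also need to register it in your context processors setting, and on older Django versions is_authenticated is a method rather than a property):
# myapp/context_processors.py
from myapp.models import GroupMembership  # adjust the import to your app

def memberships(request):
    # Added to the context of every template rendered with a RequestContext.
    if not request.user.is_authenticated:
        return {}
    return {
        'memberships': GroupMembership.objects.filter(user=request.user).select_related('group'),
    }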
I'm having trouble wrapping my head around this. Right now I have some models that looks kind of like this:
from django.db import models

class Review(models.Model):
    # ...fields...
    overall_score = models.FloatField(blank=True)

class Score(models.Model):
    review = models.ForeignKey(Review)
    question = models.TextField()
    grade = models.IntegerField()
A Review has several Scores; the overall_score is the average of those scores. When a Review or a Score is saved, I need to recalculate the overall_score average. Right now I'm using an overridden save method. Would there be any benefits to using Django's signal dispatcher instead?
Save/delete signals are generally favourable in situations where you need to make changes which aren't completely specific to the model in question, or could be applied to models which have something in common, or could be configured for use across models.
One common task in overridden save methods is automated generation of slugs from some text field in a model. That's an example of something which, if you needed to implement it for a number of models, would benefit from using a pre_save signal, where the signal handler could take the name of the slug field and the name of the field to generate the slug from. Once you have something like that in place, any enhanced functionality you put in place will also apply to all models - e.g. looking up the slug you're about to add for the type of model in question, to ensure uniqueness.
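A bare-bones version of that slug handler might look like this (the Article model is just for illustration; a reusable version would take the field names as configuration rather than hard-coding them):
from django.db import models
from django.db.models.signals import pre_save
from django.utils.text import slugify

class Article(models.Model):  # hypothetical model
    title = models.CharField(max_length=200)
    slug = models.SlugField(blank=True)

def set_slug(sender, instance, **kwargs):
    # Works for any model with `title` and `slug` fields; connect it per model.
    if not instance.slug:
        instance.slug = slugify(instance.title)

pre_save.connect(set_slug, sender=Article)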
Reusable applications often benefit from the use of signals - if the functionality they provide can be applied to any model, they generally (unless it's unavoidable) won't want users to have to directly modify their models in order to benefit from it.
With django-mptt, for example, I used the pre_save signal to manage a set of fields which describe a tree structure for the model which is about to be created or updated and the pre_delete signal to remove tree structure details for the object being deleted and its entire sub-tree of objects before it and they are deleted. Due to the use of signals, users don't have to add or modify save or delete methods on their models to have this management done for them, they just have to let django-mptt know which models they want it to manage.
You asked:
Would there be any benefits to using Django's signal dispatcher?
I found this in the django docs:
Overridden model methods are not called on bulk operations
Note that the delete() method for an object is not necessarily called
when deleting objects in bulk using a QuerySet or as a result of a
cascading delete. To ensure customized delete logic gets executed, you
can use pre_delete and/or post_delete signals.
Unfortunately, there isn’t a workaround when creating or updating
objects in bulk, since none of save(), pre_save, and post_save are
called.
From: Overriding predefined model methods
Small addition from Django docs about bulk delete (.delete() method on QuerySet objects):
Keep in mind that this will, whenever possible, be executed purely in
SQL, and so the delete() methods of individual object instances will
not necessarily be called during the process. If you’ve provided a
custom delete() method on a model class and want to ensure that it is
called, you will need to “manually” delete instances of that model
(e.g., by iterating over a QuerySet and calling delete() on each
object individually) rather than using the bulk delete() method of a
QuerySet.
https://docs.djangoproject.com/en/1.11/topics/db/queries/#deleting-objects
And bulk update (.update() method on QuerySet objects):
Finally, realize that update() does an update at the SQL level and,
thus, does not call any save() methods on your models, nor does it
emit the pre_save or post_save signals (which are a consequence of
calling Model.save()). If you want to update a bunch of records for a
model that has a custom save() method, loop over them and call save()
https://docs.djangoproject.com/en/2.1/ref/models/querysets/#update
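So if a custom save() (or its signals) must run, the docs' advice amounts to looping instead of calling update(). Reusing the Review model from the question as a rough example:
# Instead of Review.objects.update(overall_score=0.0), loop so save() logic runs:
for review in Review.objects.all():
    review.overall_score = 0.0
    review.save()  # runs the overridden save() and emits pre/post_save signals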
If you use signals, you'll be able to update the Review score each time a related Score model is saved. But if you don't need that kind of decoupling, I don't see any reason to put this into a signal; it's pretty model-related stuff.
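A rough version of that signal wiring, assuming the Review and Score models from the question:
from django.db.models import Avg
from django.db.models.signals import post_save, post_delete
from django.dispatch import receiver
from myapp.models import Score  # adjust the import to your app

@receiver(post_save, sender=Score)
@receiver(post_delete, sender=Score)
def update_overall_score(sender, instance, **kwargs):
    # Recalculate the average whenever a Score is added, changed, or removed.
    review = instance.review
    review.overall_score = review.score_set.aggregate(avg=Avg('grade'))['avg'] or 0.0
    review.save()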
It is a sort of denormalisation. Look at this pretty solution: in-place composition field definition.