I have a model that has an owner field.
class MyModel(models.Model):
    owner = models.CharField(...)
I extended the Django User class and added an ownership field:
class AppUser(User):
    ownership = models.CharField(...)
I want to create a Manager for MyModel so it will retrieve only objects that correspond to the ownership of the currently logged-in user.
For example (using Django REST framework):
class MyModelAPI(APIView):
    def get(self, request, format=None):
        # This query will automatically add a filter of owner=request.user.ownership
        objs = MyModel.objects.all()
        # rest of code ...
All of the examples of managers use constant values in their queries, and I'm looking for something more dynamic. Is this even possible?
Thanks
This is not possible with a custom manager, because a model manager is instantiated at class-loading time. Hence, it is stateless with regard to the HTTP request-response cycle and could only provide some custom method that you would have to pass the user to anyway. So why not just add some convenience method/property on your model? (A manager seems unnecessary for this sole purpose.)
class MyModel(models.Model):
    ...

    @classmethod
    def user_objects(cls, user):
        return cls.objects.filter(owner=user.ownership)
Then, in your view:
objs = MyModel.user_objects(request.user)
For a manager-based solution, look at this question. Another interesting solution is a custom middleware that makes the current user available via some function/module attribute, which can be accessed in a custom manager's get_queryset() method, as described here.
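For illustration, a rough, untested sketch of that middleware idea (module layout and names are my own, and the is_authenticated guard is an assumption):

import threading

from django.db import models

_local = threading.local()

class CurrentUserMiddleware:
    """Stashes the current request's user in thread-local storage."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        _local.user = getattr(request, 'user', None)
        try:
            return self.get_response(request)
        finally:
            _local.user = None  # don't leak state into the next request

def get_current_user():
    return getattr(_local, 'user', None)

class OwnedObjectsManager(models.Manager):
    def get_queryset(self):
        queryset = super().get_queryset()
        user = get_current_user()
        if user is not None and user.is_authenticated:
            # Assumes the user model has an `ownership` attribute,
            # as with AppUser in the question.
            queryset = queryset.filter(owner=user.ownership)
        return queryset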
I, as a newbie Django developer, am trying to build a RESTful API for a mobile app. I've taken over an existing project, and the previous developers used Django REST Framework. Super cool package, easy to work with so far. Except for one thing...
There is a problem when I want to create new resources that happen to have nested serializers. I'm not great at explaining software issues with words, so here is the simplified version of my case:
class CompanySerializer(serializers.ModelSerializer):
    ...  # props and functions are unrelated

class UserSerializer(serializers.ModelSerializer):
    company = CompanySerializer()
    ...  # other props and functions are unrelated
Now with this structure, GET /users and GET /users/{id} endpoints work great and I get the results I expect. But with POST /users and PATCH /users/{id} I get a response that says I need to provide an object for company prop and it should resemble a Company object with all the required props, so that it can create the company too. And I'm sure it tries to create a new company because I've tried sending { company: { id: 1 } } and it simply ignores the ID and requires a name to create a new one. This is obviously not what I want because I just want to create a user (who may or may not belong to a company), not both a user and a company.
I've tried switching that CompanySerializer to a serializers.PrimaryKeyRelatedField, and it seems to work on the create endpoint, but now I don't get the Company object on the list and detail endpoints.
What am I missing here? I'm 99% sure that they did not intend this framework to work this way.
You need to override create() and update() methods on nested serializers to make them writable. Otherwise DRF is not sure what to do with nested objects. The simplest override would go something like this:
class UserSerializer(serializers.ModelSerializer):
    company = CompanySerializer()
    ...

    def create(self, validated_data):
        # Nested data can't be passed straight to .create(); handle it first.
        company_data = validated_data.pop('company', None)
        if company_data:
            validated_data['company'] = Company.objects.create(**company_data)
        return User.objects.create(**validated_data)

    def update(self, instance, validated_data):
        validated_data.pop('company', None)  # handle nested data separately if needed
        for attr, value in validated_data.items():
            setattr(instance, attr, value)
        instance.save()
        return instance
Note: I haven't tested this variant of update(); it might need adjustments.
The trick is to use a different serializer class for reading vs. writing - a PrimaryKeyRelatedField for create/update and a nested serializer for retrieval. You can override get_serializer_class to do this. Assuming you are using a viewset:
class BaseUserSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = (...)

class WriteUserSerializer(BaseUserSerializer):
    # Accept a company primary key when creating/updating.
    company = serializers.PrimaryKeyRelatedField(queryset=Company.objects.all())

class ReadUserSerializer(BaseUserSerializer):
    # Return the full nested company object when listing/retrieving.
    company = CompanySerializer()

class UserViewSet(viewsets.ModelViewSet):
    def get_serializer_class(self):
        if self.action in ["create", "update", "partial_update"]:
            return WriteUserSerializer
        else:
            return ReadUserSerializer
Let's say I have this model:
class MyModel(models.Model):
    char_field = models.CharField(max_length=64)
    json_field = LimitedJSONField(default={})
where LimitedJSONField is a custom field for storing JSON strings in the DB.
I would like to do a pre-save check on json_field (e.g. truncate its value if it exceeds some length). I have read about overriding the save method for MyModel, and I also know I can implement a pre-save signal, but I would like to handle it at the field level. Say I use LimitedJSONField on 500 models: do I have to override the save method for each of those 500 models? I implemented a validate method on LimitedJSONField, but it does not get triggered on save (it's triggered only on form validation, i.e. the full_clean routine).
How can I implement a validator for LimitedJSONField, so that whatever Model uses it, this field gets validated with regards to one single business logic written inside LimitedJSONField?
Put simply, I would like to implement the logic within the field class and have no logic written in the Model class, so that the solution scales to new Model classes using this field without boilerplate code.
Thanks a lot for your time!
Could you make a parent class with a single save method and use it as a mixin that is inherited by all of your other models?
Something like:
class SpecialJsonModel(models.Model):
    json_field = LimitedJSONField(default=dict)  # callable default avoids shared mutable state

    class Meta:
        abstract = True  # makes this usable as a mixin without its own table

    def save(self, *args, **kwargs):
        # Specific save logic goes here
        super().save(*args, **kwargs)

class OtherModelA(SpecialJsonModel):
    char_field = models.CharField(max_length=64)

class OtherModelB(SpecialJsonModel):
    char_field = models.CharField(max_length=64)
Then you would only have to write one overridden save method.
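Alternatively, if the logic really must live on the field itself, Django fields have a pre_save() hook that Model.save() invokes for every model using the field. A rough, untested sketch (the size limit and truncation rule are illustrative, and the JSON (de)serialization plumbing is omitted):

import json

from django.db import models

class LimitedJSONField(models.TextField):
    """Illustrative field enforcing a size limit at save time."""

    def __init__(self, *args, max_json_length=1024, **kwargs):
        self.max_json_length = max_json_length
        super().__init__(*args, **kwargs)

    def pre_save(self, model_instance, add):
        # Model.save() calls this for every model using the field,
        # so the business logic lives in exactly one place.
        value = getattr(model_instance, self.attname)
        if len(json.dumps(value)) > self.max_json_length:
            value = {}  # illustrative: discard oversized payloads
            setattr(model_instance, self.attname, value)
        return value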
I'm creating a UserSerializer and want to allow users to create new accounts but forbid them from changing their usernames. There is a read_only attribute that I can apply, but then users won't be able to set a username when creating an account; without it, they can change it freely. There is also a required attribute, which unfortunately cannot be combined with read_only. There is no other relevant attribute.
One solution is to create two different serializers, one for creating a user and another for changing one, but that seems like an ugly and wrong thing to do. Do you have any suggestions on how to accomplish this without writing two serializers?
Thanks for any advice.
PS: I'm using Python 3.6 and Django 2.1.
EDIT: I'm using generics.{ListCreateAPIView|RetrieveUpdateDestroyAPIView} classes for views. Like this:
class UserList(generics.ListCreateAPIView):
    queryset = User.objects.all()
    serializer_class = UserSerializer

class UserDetails(generics.RetrieveUpdateAPIView):
    # this magic means (read only request OR accessing user is the same user being edited OR user is admin)
    permission_classes = (perm_or(ReadOnly, perm_or(IsUserOwner, IsAdmin)),)
    queryset = User.objects.all()
    serializer_class = UserSerializer
EDIT 2: There is a duplicate question (probably mine is the duplicate) here.
Assuming you are using a viewset class for your view, you could override the __init__ method of the serializer:
class UserSerializer(serializers.ModelSerializer):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        if 'view' in self.context and self.context['view'].action in ['update', 'partial_update']:
            self.fields.pop('username', None)

    class Meta:
        ...
If you try to update the username field during an update (HTTP PUT) or partial update (HTTP PATCH), the serializer will have removed the username field from its list of fields, and hence the change won't affect the data/model.
UPDATE
Why doesn't the above answer work with the documentation API?
From the docs:
Note: By default include_docs_urls configures the underlying SchemaView to generate public schemas. This means that views will not be instantiated with a request instance. i.e. Inside the view self.request will be None.
In the answer, the fields are popped out dynamically with the help of the request object.
So, if you wish to handle the API documentation as well, define multiple serializers and use the get_serializer_class() method efficiently. That's the DRF way.
Perhaps one possible approach would be to create a RegistrationSerializer that you use only in the registration process/endpoint.
Then you create another serializer, UserSerializer, where you make username a read_only field, and you use this serializer everywhere else (e.g. when updating a user).
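A minimal sketch of that split (the field lists are illustrative):

class RegistrationSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = ('username', 'email', 'password')

class UserSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = ('username', 'email')
        read_only_fields = ('username',)  # visible in responses, ignored on writes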
The answer from #JPG is pretty accurate, but it has one limitation: you can use the serializer only in DRF views, because anywhere else the context will not have view.action. To fix that, self.instance can be used instead, which makes the code shorter and more versatile. Also, instead of popping the field, it's better to make it read-only, so that it can still be viewed but cannot be changed.
class UserSerializer(serializers.ModelSerializer):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        if self.instance is not None:
            # The instance only exists on update; on create it is None.
            # Making the field read-only (rather than popping it) keeps it
            # visible in responses while preventing changes.
            self.fields['username'].read_only = True

    class Meta:
        ...
Another possible solution is to use CreateOnlyDefault(), which is now a built-in feature in DRF. You can read more about it in the docs.
So I just started using Django Rest Framework and one of my serializers has a MultipleChoiceField in which the choices are simply all the instances of another model.
Here is the serializer in question:
class ObjectTypeSerializer(serializers.ModelSerializer):
    def get_field_choices():
        return sorted([
            (p.id, p.name) for p in Parameter.objects.all()
        ])

    object_fields = serializers.MultipleChoiceField(
        choices=get_field_choices()
    )
    instance_fields = serializers.MultipleChoiceField(
        choices=get_field_choices()
    )
    labels = serializers.SlugRelatedField(
        queryset=Label.objects.all(),
        many=True, allow_null=True, slug_field='name'
    )

    class Meta:
        model = ObjectType
        fields = ('id', 'name', 'object_fields',
                  'instance_fields', 'labels')
However, when I add a new Parameter object, the choices are not updated. In regular Django forms, I solved this simply using
forms.ChoiceField(choices=[(p.id, p.name) for p in Parameter.objects.all()])
and it would update the choices when a new parameter is added without restarting the server. How can I accomplish the same thing with Django Rest Framework serializers?
Any help is appreciated. Thanks!
When your choices are models, the most straightforward approach is to use some derivative of RelatedField. Given that you're using p.id, does PrimaryKeyRelatedField work for you? (Please update your question if it doesn't)
If the default behavior (using model's __unicode__ for the display value) is not what you desire, you can always subclass it and redefine the display_value method:
class CustomPKRelatedField(serializers.PrimaryKeyRelatedField):
    """A PrimaryKeyRelatedField derivative that uses a named field for the display value."""

    def __init__(self, **kwargs):
        self.display_field = kwargs.pop("display_field", "name")
        super(CustomPKRelatedField, self).__init__(**kwargs)

    def display_value(self, instance):
        # Use a specific field rather than model stringification
        return getattr(instance, self.display_field)

...

class ObjectTypeSerializer(serializers.ModelSerializer):
    ...
    object_fields = CustomPKRelatedField(queryset=Parameter.objects.all(), many=True)
    instance_fields = CustomPKRelatedField(queryset=Parameter.objects.all(), many=True)
    ...
If all you need is for the BrowsableAPIRenderer to render a nice-looking <select>, I believe that's all you have to do.
The ChoiceField and MultipleChoiceField are designed to work on a static dataset. They even preprocess things at __init__ to allow grouping. This is why new items don't appear there - those fields essentially "cache" their results forever (until the server restarts).
If, for some reason, you really need a ChoiceField derivative, you can set up post_save and post_delete signal listeners and update the fields' choices attribute (and grouped_choices, if you're not on a very bleeding-edge version where the PR allowing choices to be set dynamically is already included). Check the ChoiceField source code for the details. That would be a dirty hack, though. ;)
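For completeness, a rough, untested sketch of that hack (assumes a DRF version where ChoiceField.choices can be reassigned):

from django.db.models.signals import post_delete, post_save
from django.dispatch import receiver

def refresh_parameter_choices():
    choices = sorted((p.id, p.name) for p in Parameter.objects.all())
    for name in ('object_fields', 'instance_fields'):
        # _declared_fields holds the class-level field instances that each
        # serializer instance deep-copies, so new instances pick this up.
        ObjectTypeSerializer._declared_fields[name].choices = choices

@receiver(post_save, sender=Parameter)
@receiver(post_delete, sender=Parameter)
def parameter_changed(sender, **kwargs):
    refresh_parameter_choices()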
This is a very simple question: Is there any good way to disable calling a bulk-delete (through querysets of course) on all models in an entire Django project?
The reasoning for this is under the premise that completely deleting data is almost always a poor choice, and an accidental bulk-delete can be detrimental.
As the comments on your post say, you have to create a subclass for each of these elements:
The model manager
Queryset class
BaseModel
After some searching, a great example can be found here; all credit to Akshay Shah, the blog author. Before looking at the code, be aware that:
However, it inevitably leads to data corruption. The problem is simple: using a Boolean to store deletion status makes it impossible to enforce uniqueness constraints in your database.
from django.db import models
from django.db.models.query import QuerySet

class SoftDeletionQuerySet(QuerySet):
    def delete(self):
        # Bulk delete bypasses individual objects' delete methods.
        return super(SoftDeletionQuerySet, self).update(alive=False)

    def hard_delete(self):
        return super(SoftDeletionQuerySet, self).delete()

    def alive(self):
        return self.filter(alive=True)

    def dead(self):
        return self.exclude(alive=True)

class SoftDeletionManager(models.Manager):
    def __init__(self, *args, **kwargs):
        self.alive_only = kwargs.pop('alive_only', True)
        super(SoftDeletionManager, self).__init__(*args, **kwargs)

    def get_queryset(self):
        if self.alive_only:
            return SoftDeletionQuerySet(self.model).filter(alive=True)
        return SoftDeletionQuerySet(self.model)

    def hard_delete(self):
        return self.get_queryset().hard_delete()

class SoftDeletionModel(models.Model):
    alive = models.BooleanField(default=True)

    objects = SoftDeletionManager()
    all_objects = SoftDeletionManager(alive_only=False)

    class Meta:
        abstract = True

    def delete(self):
        self.alive = False
        self.save()

    def hard_delete(self):
        super(SoftDeletionModel, self).delete()
Basically, it adds an alive field to record whether the row has been deleted or not, and updates it when the delete() method is called.
Of course, this method only works on projects where you can modify the code base.
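For illustration, with a hypothetical Invoice model inheriting from it:

class Invoice(SoftDeletionModel):
    number = models.CharField(max_length=32)

Invoice.objects.all().delete()           # bulk "delete": only sets alive=False
Invoice.objects.count()                  # 0 - the default manager hides dead rows
Invoice.all_objects.count()              # still counts soft-deleted rows
Invoice.all_objects.all().hard_delete()  # actually removes the rows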
There are nice off-the-shelf applications that allow for restoring deleted models (if that is what you are interested in); here are the ones I've used:
Django softdelete: https://github.com/scoursen/django-softdelete (this is the one I used more)
Django reversion: https://github.com/etianen/django-reversion this one is updated more often, and allows you to revert to any version of your model (not only after delete, but as well after update).
If you really want to forbid bulk deletes, I'd discourage you from this approach as it will:
Break expectations about application behaviour. If I call MyModel.objects.all().delete(), I want the table to be empty afterwards.
Break existing applications.
If you want to do it, please follow the advice from the comments:
I'm guessing this would involve subclassing QuerySet and changing the delete method to your liking, subclassing the default manager and have it use your custom query set, subclassing model - create an abstract model and have it use your custom manager and then finally have all your models subclass your custom abstract model.
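A minimal, untested sketch of that chain:

from django.db import models

class NoBulkDeleteQuerySet(models.QuerySet):
    def delete(self):
        # Refuse queryset-level deletes; instance.delete() still works.
        raise NotImplementedError("Bulk delete is disabled for this model.")

class NoBulkDeleteModel(models.Model):
    objects = NoBulkDeleteQuerySet.as_manager()

    class Meta:
        abstract = True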