I am new to Django and DRF and have been asked to maintain some Django/DRF related code. After a lot of searching I am still unable to find a complete example of how filter_queryset works and how it can be used with different arguments.
At some places I see it used like the following,
self.filter_queryset(queryset)
and in other places it is used with additional arguments. It would be helpful if someone could explain the fundamentals: how and when to use it, what the related attributes (lookup_field, filter_backends, etc.) and arguments are, and how to set them up.
I have searched a lot and also gone through the docs. If I have missed any documentation, kindly let me know.
The filter_queryset() method (source code) is originally implemented in the GenericAPIView class (DRF doc).
def filter_queryset(self, queryset):
    """
    Given a queryset, filter it with whichever filter backend is in use.

    You are unlikely to want to override this method, although you may need
    to call it either from a list view, or from a custom `get_object`
    method if you want to apply the configured filtering backend to the
    default queryset.
    """
    for backend in list(self.filter_backends):
        queryset = backend().filter_queryset(self.request, queryset, self)
    return queryset
I think the method's functionality is clear from its docstring.
".....and at other places it is used with some arguments"
The view's filter_queryset() method takes only one argument: the queryset to be filtered.
But a filter backend's filter_queryset() method takes three arguments: the request, the queryset, and the view itself.
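To make the wiring concrete, here is a hedged sketch (the Book model, its serializer and the field names are invented for illustration) of a list view that relies on the default behaviour, and a detail view that calls self.filter_queryset() from a custom get_object():

from rest_framework import generics
from rest_framework.filters import OrderingFilter, SearchFilter
from rest_framework.generics import get_object_or_404

from .models import Book                    # hypothetical model
from .serializers import BookSerializer     # hypothetical serializer


class BookListView(generics.ListAPIView):
    queryset = Book.objects.all()
    serializer_class = BookSerializer
    # list() calls self.filter_queryset(self.get_queryset()), which in turn
    # runs each backend's filter_queryset(request, queryset, view).
    filter_backends = [SearchFilter, OrderingFilter]
    search_fields = ["title", "author"]
    ordering_fields = ["published_on"]


class BookDetailView(generics.RetrieveAPIView):
    queryset = Book.objects.all()
    serializer_class = BookSerializer
    filter_backends = [SearchFilter]
    search_fields = ["title"]

    def get_object(self):
        # Apply the configured backends before the lookup, so the detail
        # endpoint can only reach objects that pass the filters.
        queryset = self.filter_queryset(self.get_queryset())
        obj = get_object_or_404(queryset, pk=self.kwargs["pk"])
        self.check_object_permissions(self.request, obj)
        return obj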
What are FilterBackends?
Filter backends are classes that help us filter the queryset with complex lookups and other criteria.
DRF has a few built-in backends, which can be found here. The official DRF docs recommend the django-filter package for advanced filtering purposes.
How does a filter backend work?
Take a look at the source code of the DjangoFilterBackend class and its methods. Its filter_queryset(...) method plays the key role in the filtering process.
I would recommend going through the django-filter documentation for more usage examples.
By defining a filterset_class, you can have more control over the filtering process (such as providing lookup_expr, etc.), as in the sketch below.
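A hedged sketch of that, using django-filter (Product and ProductSerializer are invented names; this assumes django-filter is installed and configured in the project):

from django_filters import rest_framework as filters
from rest_framework import generics

from .models import Product                    # hypothetical model
from .serializers import ProductSerializer     # hypothetical serializer


class ProductFilter(filters.FilterSet):
    # lookup_expr controls how the query parameter maps onto the ORM lookup,
    # e.g. ?min_price=10 becomes price__gte=10.
    min_price = filters.NumberFilter(field_name="price", lookup_expr="gte")
    max_price = filters.NumberFilter(field_name="price", lookup_expr="lte")

    class Meta:
        model = Product
        fields = ["category"]


class ProductListView(generics.ListAPIView):
    queryset = Product.objects.all()
    serializer_class = ProductSerializer
    filter_backends = [filters.DjangoFilterBackend]
    filterset_class = ProductFilter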
Say I have a simple application with users and groups, where users are members of groups (an M2M relationship). So I create a viewset which shall do the following:
GET /group lists all groups
GET /group/1 gives detail of a group
POST /group/1/member adds a member specified in the request to group 1
GET /group/1/member lists all members of group 1
and some other CRUD operations on groups/memberships which are not relevant for my question.
To make the code as readable as possible, I came up with this solution:
from rest_framework.decorators import action
from rest_framework.generics import ListCreateAPIView
from rest_framework.viewsets import ModelViewSet

class GroupViewSet(ModelViewSet):
    ...

    @action(detail=True, methods=["get", "post"])
    def member(self, request, *args, **kwargs):
        return MembershipView.as_view()(request, *args, **kwargs)

class MembershipView(ListCreateAPIView):
    ...  # handle the membership somehow here
However, this approach fails with the following error:
The `request` argument must be an instance of `django.http.HttpRequest`, not `rest_framework.request.Request`.
Although manually instantiating a view inside another view and passing the request would most probably work in pure Django, the REST framework refuses to do this, as each DRF view expects a pure Django request and cannot just take an existing DRF request as it is.
I have two questions:
Is my design idea bad? And if yes, how shall I redesign?
Do you think it will be worth creating a pull request to DRF? Allowing this is very easy.
Firstly, I think your URL structure may not really be conventionally RESTful. You should be using plurals to access your resources, i.e.
GET /groups/
returns groups.
GET /groups/1
fetches the group with an id of 1. It makes things a little clearer.
Secondly, I personally would redesign your flow as I think the current design overcomplicates your REST structure.
Are Members and Groups distinct resources? I would say yes, these are distinct. So they should be kept in different endpoints. Have /members/ and /groups/ and use Django's built-in filtering to get the desired behavior, e.g.:
GET /members/?group_id=1
This is cleaner and more RESTful. You are fetching members who have a corresponding group_id of 1.
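A hedged sketch of how that could look (Member, MemberSerializer and the groups relation are assumptions about the asker's models):

from rest_framework import viewsets

from .models import Member                   # hypothetical model
from .serializers import MemberSerializer    # hypothetical serializer


class MemberViewSet(viewsets.ModelViewSet):
    serializer_class = MemberSerializer

    def get_queryset(self):
        queryset = Member.objects.all()
        # Supports GET /members/?group_id=1
        group_id = self.request.query_params.get("group_id")
        if group_id is not None:
            queryset = queryset.filter(groups__id=group_id)
        return queryset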
If you really want to keep this embedded structure, you can probably do some magic with rest_framework.decorators' list_route and detail_route (merged into the single @action decorator in newer DRF versions) to achieve your desired behavior, as sketched further below.
However, in my experience, I've found embedding like that causes issues with some Django mechanics and makes for some gnarly URL routing. I really would advise keeping distinct endpoints.
You're basically doing this:
/list-view/detail-view/list-view/
I can understand use cases where this design could make sense on paper, but I don't think it's ideal.
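If you nevertheless keep the nested route, here is a rough sketch of the decorator-based approach mentioned above, written with the newer @action decorator (Group, GroupSerializer, MembershipSerializer and the members relation are assumptions, not the asker's actual code):

from rest_framework import status, viewsets
from rest_framework.decorators import action
from rest_framework.response import Response

from .models import Group                                        # hypothetical
from .serializers import GroupSerializer, MembershipSerializer   # hypothetical


class GroupViewSet(viewsets.ModelViewSet):
    queryset = Group.objects.all()
    serializer_class = GroupSerializer

    @action(detail=True, methods=["get", "post"])
    def member(self, request, pk=None):
        group = self.get_object()
        if request.method == "POST":
            # POST /group/1/member adds a member to group 1.
            serializer = MembershipSerializer(data=request.data)
            serializer.is_valid(raise_exception=True)
            serializer.save(group=group)
            return Response(serializer.data, status=status.HTTP_201_CREATED)
        # GET /group/1/member lists the members of group 1.
        serializer = MembershipSerializer(group.members.all(), many=True)
        return Response(serializer.data)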
I'm trying to implement simple ordering on ProductCategoryView in Django-Oscar. It should be fairly simple, but it's taking a lot of time to understand. I'm having second thoughts about going forward with Oscar, as it seems difficult to extend.
ProductCategoryView returns the products of a certain category. I want to sort them by a certain field on the product model, say price. First I change the parent generic class from TemplateView to ListView so that I can use the get_queryset method. Then I override get_queryset as below with a simple queryset. The sorting still doesn't happen, even though execution does go through get_queryset.
def get_queryset(self):
    queryset = Product.objects.filter(categories=self.category)
    logger.debug("inside")
    queryset = queryset.order_by("price")
    return queryset
So which methods do I have to override? Will there be this much trouble customizing Oscar every time, or am I missing something?
P.S.: I have asked a lot of questions about Django/Oscar class-based views recently, so I might look like a Help Vampire. Please ignore this question if that is the case.
@Anentropic is right, but let me elaborate a bit.
Oscar bases all its browse views on search engines, so faceting can be used to narrow down the list of products returned. By default, there are search engine integrations for Solr and Elasticsearch.
If you don't use Solr or Elasticsearch, the default implementation is SimpleProductSearchHandler. It does not use any external services but instead calls Product.browsable.get_queryset(). That code lives in oscar.apps.catalogue.managers and could be customized to provide custom ordering.
This all assumes you don't want to use a search engine, as the customizations required to change the order of results in the other search handler classes are backend-specific.
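As a rough sketch of that customization (class, module and setting names are from the Oscar versions I have used and may need adjusting for yours):

# search_handlers.py in your forked catalogue app -- a sketch, not tested
# against every Oscar version.
from oscar.apps.catalogue.search_handlers import SimpleProductSearchHandler


class PriceOrderedProductSearchHandler(SimpleProductSearchHandler):
    def get_queryset(self):
        # Order the browsable products by price before pagination happens.
        return super().get_queryset().order_by("price")

If your Oscar version exposes the OSCAR_PRODUCT_SEARCH_HANDLER setting, pointing it at this class should make ProductCategoryView pick up the ordering without touching the view itself.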
I've been going off the documentation on the django-rest-swagger GitHub page, more specifically the part called "How it works". It shows that you can define your own parameters for your REST API and have those parameters show up in your Swagger doc page.
The commenting example is something like:
"""
This text is the description for this API
param1 -- A first parameter
param2 -- A second parameter
"""
I can get this to work, but my issue is how to specify whether the variable is required, its parameter type, and its data type. The GitHub page shows an example image of how your Swagger doc could look, and it has the information I just mentioned. But when I comment my custom parameters as the example shows, my parameters just show up with parameter type "query", a blank data type, and no "required" marker.
The closest thing I have found to an answer was in this Stack Overflow question. One answerer says that django-rest-swagger generates its documentation by automatically inspecting your serializers (which is fine), and that ModelSerializers won't contain enough information for django-rest-swagger to properly derive the criteria I mentioned above. I get that it can't figure out these criteria, but there must then be some way for me to specify them manually.
Am I correct that django-rest-swagger would only display what I want if I rewrote my ModelSerializers as plain Serializers? Is there no way for me to manually tell django-rest-swagger what a parameter's parameter type and data type should be, and whether it is required?
I know I must be missing something here. I use class-based views and ModelSerializers that are almost identical to the examples in the django-rest-framework tutorials. It seems entirely possible that I'm just missing an understanding of "parameter types" in this context. My API is working great, and I don't want to rewrite my ModelSerializers as plain Serializers just to get better automatic documentation through Swagger.
ModelSerializers are the right way to go with DR-Swagger. It can be a bit tricky chasing down exactly where the different Swagger fields are extracted from, though; I often had to fall back to step-debugging through the page rendering process to figure out where things were coming from.
In turn:
Required? comes from the Field.required parameter (either set on the model or the Serializer field).
Description comes from the Field.help_text parameter.
In the new-style DRF serialization, the description text comes from the ViewSet's docstring. If you want method-specific docs, you need to override the docstring for individual methods, e.g. retrieve:
def retrieve(self, request, *args, **kwargs):
    """Retrieve a FooBar"""
    return super().retrieve(request, *args, **kwargs)
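A hedged sketch of the field-level side (the FooBar model and its field are invented): blank=False keeps the generated serializer field required, and help_text becomes the parameter description.

from django.db import models


class FooBar(models.Model):
    # blank=False (the default) makes the generated serializer field
    # required, which DR-Swagger shows as "Required"; help_text becomes
    # the parameter description.
    name = models.CharField(
        max_length=100,
        help_text="Human-readable name shown in the API docs.",
    )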
One thing to note is that DR-Swagger migrated to the new DRF schema logic in version 2.0 (with DRF 3.5), which still has a few rough edges. I recommend sticking with DR-Swagger version 0.3.x, which (though deprecated) has more features and, in my experience, more reliable serialization.
In most cases ModelSerializer is what you need, because it can be customized extensively to suit your needs. In an ideal situation you should define all your constraints, like the required attribute on a field, in your model class, but when that's not architecturally possible you can override such a field in your ModelSerializer subclass:
from django.contrib.auth import get_user_model
from rest_framework import serializers
class UserSerializer(serializers.ModelSerializer):
    first_name = serializers.CharField(required=True)
    last_name = serializers.CharField(required=True)

    class Meta:
        model = get_user_model()
        # Newer DRF versions require an explicit field list (or `exclude`).
        fields = ("id", "username", "first_name", "last_name")
In the example above I serialize the standard Django User model and override the required attribute so that first_name and last_name are now required.
Of course, there are cases when it's hard or impossible to use ModelSerializer; then you can always fall back to subclassing Serializer.
In the code you have:
"""
This text is the description for this API
param1 -- A first parameter
param2 -- A second parameter
"""
Try:
""" This text is the description for this API
param1 -- A first parameter
param2 -- A second parameter
"""
I have found that some Python and/or Django plugins need the line with the opening three double-quotes to also be the line where the documentation starts. You might even want to try no space between the last double-quote and the T.
We have a soft-delete scheme where we just mark things as deleted and then filter the deleted ones out in various places. I'm trying to figure out how to filter the deleted ones out of the Grappelli autocomplete suggestions.
In the end I went with this:
from grappelli.views.related import AutocompleteLookup
class YPAutocompleteLookup(AutocompleteLookup):
    """ patch grappelli's autocomplete to let us control the queryset
    by creating an autocomplete_queryset function on the model """

    def get_queryset(self):
        if hasattr(self.model, "autocomplete_queryset"):
            qs = self.model.autocomplete_queryset()
        else:
            qs = self.model._default_manager.all()
        qs = self.get_filtered_queryset(qs)
        qs = self.get_searched_queryset(qs)
        return qs.distinct()
It can be installed by overriding the relevant url:
url(r'^grappelli/lookup/autocomplete/$', YPAutocompleteLookup.as_view(), name="grp_autocomplete_lookup"),
Make sure this is ahead of Grappelli in your urls.
If you're working with the admin site, you should take advantage of the ModelAdmin.queryset method:
https://docs.djangoproject.com/en/1.5/ref/contrib/admin/#django.contrib.admin.ModelAdmin.queryset
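A hedged sketch of that override (the hook was renamed get_queryset() in Django 1.6; the Thing model and its is_deleted flag are assumptions about the soft-delete scheme):

from django.contrib import admin

from .models import Thing   # hypothetical soft-deletable model


class ThingAdmin(admin.ModelAdmin):
    # On Django <= 1.5 this hook is called `queryset(self, request)`.
    def get_queryset(self, request):
        qs = super().get_queryset(request)
        # Hide soft-deleted rows from the admin change list.
        return qs.filter(is_deleted=False)


admin.site.register(Thing, ThingAdmin)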
As I found out, changing the default model manager to restrict the results is a bad idea, causing all kinds of nasty problems. For example, it can prevent syncdb, shell or shell_plus from running, or make it impossible to add the first record to a blank database. The exact errors depend on what you're restricting, but you are bound to get a few.
What is needed here is a way to tell Grappelli which queryset manager to use, passed in or via a setting perhaps?
You can specify a simple (constant or related field) filter using ForeignKey.limit_choices_to. Grappelli grabs this value and sends it in the GET as param 'query_string'.
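For example, a minimal sketch with a hypothetical deleted flag on the related model:

from django.db import models


class Membership(models.Model):
    # Grappelli forwards limit_choices_to as the autocomplete query_string,
    # so soft-deleted groups never show up as suggestions.
    group = models.ForeignKey(
        "myapp.Group",
        limit_choices_to={"deleted": False},
        on_delete=models.CASCADE,
    )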
However, this might not be enough. I posted a request to the Grappelli repo I use to add a way to specify the record manager to use, or to just automatically use the admin queryset (ModelAdmin.queryset).
My post is here:
https://github.com/sehmaschine/django-grappelli/issues/362
It looks like you can pass extra search params into the ajax autocompleter somehow, but a frontend hack is likely needed.
https://github.com/sehmaschine/django-grappelli/blob/master/grappelli/views/related.py#L101
OR
You can make the default Manager for the models return an already filtered list, and have places that need to explicitly see deleted items remove that restriction.
This would likely make the default case much easier for you across the board.
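A minimal sketch of that manager-based approach (field and class names are placeholders; keep in mind the caveats about restrictive default managers raised in the other answer):

from django.db import models


class NotDeletedManager(models.Manager):
    def get_queryset(self):
        # Everything that uses the default manager -- including Grappelli's
        # autocomplete lookup -- only sees non-deleted rows.
        return super().get_queryset().filter(deleted=False)


class Thing(models.Model):
    deleted = models.BooleanField(default=False)

    objects = NotDeletedManager()    # first manager defined becomes the default
    all_objects = models.Manager()   # escape hatch for code that needs deleted rows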
I have a utility function in my Django project; it takes a queryset, gets some data from it and returns a result. I'd like to write some tests for this function. Is there any way to 'mock' a QuerySet? I'd like to create an object that doesn't touch the database, that I can provide with a list of values to use (i.e. some fake rows), and that will then act just like a queryset, allowing field lookups, filter, get, all, etc.
Does anything like this exist already?
For an empty QuerySet, I'd simply go for using none(), as keithhackbarth has already stated.
However, to mock a QuerySet that will return a list of values, I prefer to use a Mock with a spec of the model's manager. As an example (Python 2.7 style - I've used the external mock library), here's a simple test where the QuerySet is filtered and then counted:
from django.test import TestCase
from mock import Mock
from .models import Example
def queryset_func(queryset, filter_value):
    """
    An example function to be tested
    """
    return queryset.filter(stuff=filter_value).count()


class TestQuerysetFunc(TestCase):

    def test_happy(self):
        """
        `queryset_func` filters provided queryset and counts result
        """
        m_queryset = Mock(spec=Example.objects)
        m_queryset.filter.return_value = m_queryset
        m_queryset.count.return_value = 97

        result = queryset_func(m_queryset, '__TEST_VALUE__')

        self.assertEqual(result, 97)
        m_queryset.filter.assert_called_once_with(stuff='__TEST_VALUE__')
        m_queryset.count.assert_called_once_with()
However, to fully answer the question: instead of setting a return_value for count, this could easily be adjusted to return a list of model instances from all.
Note that chaining is handled by setting the filter to return the mocked queryset:
m_queryset.filter.return_value = m_queryset
This would need to be applied for any queryset methods used in the function under test, e.g. exclude, etc.
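Continuing the example above, and assuming Example has the stuff field already referenced in the filter call, that could look like:

# Unsaved model instances never touch the database.
m_queryset.all.return_value = [
    Example(stuff="first"),
    Example(stuff="second"),
]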
Of course you can mock a QuerySet, you can mock anything.
You can create an object yourself, and give it the interface you need, and have it return any data you like. At heart, mocking is nothing more than providing a "test double" that acts enough like the real thing for your tests' purposes.
The low-tech way to get started is to define an object:
class MockQuerySet(object):
    pass
then create one of these, and hand it to your test. The test will fail, likely on an AttributeError. That will tell you what you need to implement on your MockQuerySet. Repeat until your object is rich enough for your tests.
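As a rough illustration of where that iteration often ends up (a sketch only; which methods you need depends entirely on what your function calls):

class MockQuerySet(object):
    """A hand-rolled test double backed by a plain list."""

    def __init__(self, items):
        self.items = list(items)

    def filter(self, **lookups):
        # Naive exact-match filtering on attribute names.
        matches = [
            obj for obj in self.items
            if all(getattr(obj, field) == value for field, value in lookups.items())
        ]
        return MockQuerySet(matches)

    def all(self):
        return self

    def count(self):
        return len(self.items)

    def __iter__(self):
        return iter(self.items)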
I am having the same issue, and it looks like some nice person has written a library for mocking QuerySets. It is called mock-django, and the specific code you will need is here: https://github.com/dcramer/mock-django/blob/master/mock_django/query.py. I think you can then just patch your model's objects manager to return one of these QuerySetMock objects that you have set up to return something expected!
For this I use Django's .none() function.
For example:
class Location(models.Model):
    name = models.CharField(max_length=100)

mock_locations = Location.objects.none()
This is the method used frequently in Django's own internal test cases. Based on comments in the code:

Calling none() will create a queryset that never returns any objects and no
query will be executed when accessing the results. A qs.none() queryset
is an instance of ``EmptyQuerySet``.
Try out the django_mock_queries library that lets you mock out the database access, and still use some of the Django query set features like filtering.
Full disclosure: I contributed some features to the project.
Have you looked into FactoryBoy? https://factoryboy.readthedocs.io/en/latest/orms.html
It's a fixtures replacement tool with support for the django orm - factories basically generate orm-like objects (either in memory or in a test database).
Here's a great article for getting started: https://www.caktusgroup.com/blog/2013/07/17/factory-boy-alternative-django-testing-fixtures/
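For instance, a minimal factory for the Location model from the .none() answer above might look like this (a sketch; build() keeps the instance in memory, while create() would hit the test database):

import factory

from .models import Location   # the model from the example above


class LocationFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Location

    name = factory.Sequence(lambda n: "Location %d" % n)


# Build an unsaved, in-memory instance -- no database query involved.
location = LocationFactory.build()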
A first piece of advice would be to split the function into two parts: one that creates the queryset and one that manipulates its output. That way, testing the second part is straightforward.
For the database problem, I investigated whether Django uses an in-memory SQLite database, and it turns out recent versions of Django do. From the Django unittest docs:
When using the SQLite database engine the tests will by default use an
in-memory database (i.e., the database will be created in memory,
bypassing the filesystem entirely!).
Mocking the QuerySet object will not make you exercise its full logic.
You can mock like this:
from mock import patch

@patch('django.db.models.query.QuerySet')
def test_returning_distinct_records_for_city(self, mock_qs):
    # ... call the code under test here ...
    self.assertTrue(mock_qs.called)
Not that I know of, but why not use an actual queryset? The test framework is all set up to allow you to create sample data within your test, and the database is re-created on every test, so there doesn't seem to be any reason not to use the real thing.