According to the Flask documentation for the Flask.make_response method, the only types a view is allowed to return are instances of app.response_class, str, unicode, a WSGI callable, or a tuple whose first item is one of the above.
I'd like to be able to return my own models and queries from a view and have generic code build an adequate response: serializing the object to an accepted format, building collections from queries, applying filtering conditions, and so on. Where's the best place to hook that up?
I considered the following possibilities, but can't figure out which one is the correct way to do it:
Subclassing Response and setting app.response_class
Subclassing Flask redefining Flask.make_response
Wrapping app.route into another decorator
Flask.after_request
?
edit1: I already have many APIs with the behavior I need implemented in the views, but I'd like to avoid repeating that everywhere.
edit2: I'm actually building a Flask-extension with many default practices used in my applications. While a plain decorator would certainly work, I really need a little more magic than that.
Why don't you just create a function make_response_from_custom_object and end your views with
return make_response_from_custom_object(custom_object)
If it is going to be common I would put it into a @response_from_custom_object decorator, but hooking into Flask seems like overkill. You can chain decorators, so wrapping app.route does not make sense either; all you need is
@app.route(..)
@response_from_custom_object
def view(...):
    ...
If you can do it the simple and explicit way, there is no sense in making your code do magic and thus become less comprehensible.
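A minimal sketch of such a decorator (make_response_from_custom_object is the helper suggested above):

from functools import wraps

def response_from_custom_object(view):
    @wraps(view)
    def wrapper(*args, **kwargs):
        rv = view(*args, **kwargs)
        # Serialize models/queries into a proper Flask response.
        return make_response_from_custom_object(rv)
    return wrapper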
The farther up the chain you go (make_response, dispatch_request + handle_user_error, full_dispatch_request, rewrite Flask from scratch) the more functionality you will have to re-create.
The easiest thing to do in this case is to override response_class and do the serialization there. That leaves you with all the magic Flask does in make_response, full_dispatch_request, etc., but still gives you control over how to respond to exceptions and serialize responses. It also leaves all of Flask's hooks in place, so consumers of your extension can override behavior where they need to (and they can re-use their existing knowledge of Flask's request/response lifecycle).
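As a sketch of that approach, assuming a hypothetical Serializable base class for your models (Flask hands return values it does not recognize to response_class.force_type):

from flask import Flask, Response, json

class Serializable(object):
    """Hypothetical marker base class; models expose a dict via .json."""

class ApiResponse(Response):
    @classmethod
    def force_type(cls, rv, environ=None):
        # Serialize our own objects; let Werkzeug handle everything else.
        if isinstance(rv, Serializable):
            rv = cls(json.dumps(rv.json), mimetype='application/json')
        return super(ApiResponse, cls).force_type(rv, environ)

app = Flask(__name__)
app.response_class = ApiResponse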
Python is dynamic by nature, so while this probably isn't the best practice, you can reassign the make_response method on the application to whatever you want.
To avoid having to recreate the default functionality, you can save a reference to the original function and use that to implement your new function.
I used this recently in a project to add the ability to return instances of a custom Serializable class directly from flask views.
from flask import Flask

# `Serializable` and `TYPE_META` come from the project's own code.
app = Flask("StarCorp")

__original_make_response = app.make_response

def convert_custom_object(obj):
    # Check if the returned object is "Serializable"
    if not isinstance(obj, Serializable):
        # Nope, do whatever Flask normally does
        return __original_make_response(obj)
    # It is; get a `dict` from `obj` using the `json` method
    data = obj.json
    data.pop(TYPE_META)  # Don't share the type meta info of an object with users
    # Let Flask turn the `dict` into a JSON response
    return __original_make_response(data)

app.make_response = convert_custom_object
Since flask extensions typically provide an init_app(app) method I'm sure you could build an extension that monkey patches the passed in application object in a similar manner.
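A rough sketch of what such an extension could look like (names are illustrative; Serializable and TYPE_META are the project-specific pieces from the code above):

class CustomObjectResponses(object):

    def __init__(self, app=None):
        if app is not None:
            self.init_app(app)

    def init_app(self, app):
        original = app.make_response

        def make_response(rv):
            if isinstance(rv, Serializable):
                data = rv.json
                data.pop(TYPE_META, None)
                return original(data)
            return original(rv)

        app.make_response = make_response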
Coming from a Java environment I've been quite surprised by how simple Python/Django generally is and how it handles objects in memory. One thing I've never found a clear answer about is the best practice for instantiating objects so that each request made through Django refers to the same object.
An example of this would be a UserService class referenced within my views.py file. What's the best practice for instantiating this class?
Is it safe to do it within the def/class view itself?
def test_view(request, format=None):
    service = UserService()
    service.to_something()
    return Response(status=status.HTTP_200_OK)
Or should it be done at the top of the views.py file then references within the def/class view?
service = UserService()

def test_view(request, format=None):
    service.to_something()
    return Response(status=status.HTTP_200_OK)
Or should it be instantiated elsewhere?
The goal is to ensure that as few objects as possible are created and referred to. I understand that Python keeps a reference count for each object as it's created, but I'm unclear on how that plays into this.
Additionally, when creating objects, say a service: how necessary is it to instantiate the classes you wish to use throughout the service in the __init__ method first? Given the efficiency of the Python GC, it's something I've wondered about when it comes to best practices and being as Pythonic as possible.
With flask-sqlalchemy, does anyone know why the second approach of construction in http://pythonhosted.org/Flask-SQLAlchemy/api.html doesn't suggest db.app = app as well? It seems the major difference between the first and second construction methods is simply that the first does db.app = app whilst the second does db.app = None
Thanks!
The two methods of initialization are pretty standard for Flask extensions and follow an implicit convention on how extensions are to be initialized. In this section of the Flask documentation you can find a note that explains it:
As you noticed, init_app does not assign app to self. This is intentional! Class based Flask extensions must only store the application on the object when the application was passed to the constructor. This tells the extension: I am not interested in using multiple applications.
When the extension needs to find the current application and it does not have a reference to it, it must either use the current_app context local or change the API in a way that you can pass the application explicitly.
The idea can be summarized as follows:
If you use the SQLAlchemy(app) constructor then the extension will assume that app is the only application, so it will store a reference to it in self.app.
If you use the init_app(app) method then the extension will assume that app is one of possibly many applications. So instead of saving a reference, it will rely on current_app to locate the application every time it needs it.
The practical difference between the two ways to initialize extensions is that the first format requires the application to exist, because it must be passed in the constructor. The second format allows the db object to be created before the application exists because you pass nothing to the constructor. In this case you postpone the call to db.init_app(app) until you have an application instance. The typical situation in which the creation of the application instance is delayed is if you use the application factory pattern.
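For example, with the application factory pattern the two steps are separated; a standard sketch:

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()  # no app yet, so db.app stays None

def create_app():
    app = Flask(__name__)
    app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///app.db'
    db.init_app(app)  # bind the extension once the app exists
    return app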
I have a utility function in my Django project; it takes a queryset, gets some data from it and returns a result. I'd like to write some tests for this function. Is there any way to 'mock' a QuerySet? I'd like to create an object that doesn't touch the database, that I can provide with a list of values to use (i.e. some fake rows), and that then acts just like a queryset, allowing field lookups on it/filter/get/all etc.
Does anything like this exist already?
For an empty QuerySet, I'd simply go for using none(), as keithhackbarth has already stated.
However, to mock a Queryset that will return a list of values, I prefer to use a Mock with a spec of the Model's manager. As an example (Python 2.7 style - I've used the external Mock library), here's a simple test where the Queryset is filtered and then counted:
from django.test import TestCase
from mock import Mock

from .models import Example


def queryset_func(queryset, filter_value):
    """
    An example function to be tested
    """
    return queryset.filter(stuff=filter_value).count()


class TestQuerysetFunc(TestCase):

    def test_happy(self):
        """
        `queryset_func` filters provided queryset and counts result
        """
        m_queryset = Mock(spec=Example.objects)
        m_queryset.filter.return_value = m_queryset
        m_queryset.count.return_value = 97

        result = queryset_func(m_queryset, '__TEST_VALUE__')

        self.assertEqual(result, 97)
        m_queryset.filter.assert_called_once_with(stuff='__TEST_VALUE__')
        m_queryset.count.assert_called_once_with()
However, to fulfil the question: instead of setting a return_value for count, this could easily be adjusted to return a list of model instances from all.
Note that chaining is handled by setting the filter to return the mocked queryset:
m_queryset.filter.return_value = m_queryset
This would need to be applied for any queryset methods used in the function under test, e.g. exclude, etc.
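For instance, building on the example above (illustrative values only):

m_queryset.exclude.return_value = m_queryset
m_queryset.all.return_value = [Example(stuff='a'), Example(stuff='b')]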
Of course you can mock a QuerySet, you can mock anything.
You can create an object yourself, and give it the interface you need, and have it return any data you like. At heart, mocking is nothing more than providing a "test double" that acts enough like the real thing for your tests' purposes.
The low-tech way to get started is to define an object:
class MockQuerySet(object):
pass
then create one of these, and hand it to your test. The test will fail, likely on an AttributeError. That will tell you what you need to implement on your MockQuerySet. Repeat until your object is rich enough for your tests.
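For instance, after a few such iterations the stub might look like this; a sketch shaped entirely by what the function under test needs:

class MockQuerySet(object):
    def __init__(self, items):
        self.items = list(items)

    def filter(self, **kwargs):
        # Naive field lookups: keep items whose attributes match exactly.
        matches = [i for i in self.items
                   if all(getattr(i, k) == v for k, v in kwargs.items())]
        return MockQuerySet(matches)

    def count(self):
        return len(self.items)

    def __iter__(self):
        return iter(self.items)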
I am having the same issue, and it looks like some nice person has written a library for mocking QuerySets called mock-django. The specific code you will need is here: https://github.com/dcramer/mock-django/blob/master/mock_django/query.py. I think you can then just patch your model's objects function to return one of these QuerySetMock objects that you have set up to return something expected!
For this I use Django's .none() function.
For example:
class Location(models.Model):
    name = models.CharField(max_length=100)

mock_locations = Location.objects.none()
This is the method used frequently in Django's own internal test cases. Based on comments in the code:
Calling none() will create a queryset that never returns any objects and no query will be executed when accessing the results. A qs.none() queryset is an instance of ``EmptyQuerySet``.
Try out the django_mock_queries library that lets you mock out the database access, and still use some of the Django query set features like filtering.
Full disclosure: I contributed some features to the project.
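Roughly, based on the project's documentation (check it for the exact API):

from django_mock_queries.query import MockSet, MockModel

locations = MockSet(
    MockModel(name='HQ'),
    MockModel(name='Warehouse'),
)
assert locations.filter(name='HQ').count() == 1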
Have you looked into FactoryBoy? https://factoryboy.readthedocs.io/en/latest/orms.html
It's a fixtures replacement tool with support for the django orm - factories basically generate orm-like objects (either in memory or in a test database).
Here's a great article for getting started: https://www.caktusgroup.com/blog/2013/07/17/factory-boy-alternative-django-testing-fixtures/
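A minimal factory for the Location model from the earlier answer might look like this:

import factory

from .models import Location

class LocationFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Location

    name = factory.Sequence(lambda n: 'Location %d' % n)

# LocationFactory() saves to the test database;
# LocationFactory.build() creates an unsaved, in-memory instance.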
A first piece of advice would be to split the function in two parts: one that creates the queryset and one that manipulates its output. That way, testing the second part is straightforward.
As for the database problem, I checked whether Django uses SQLite in memory, and it turns out recent versions of Django do use an in-memory SQLite database for tests. From the Django unittest page:
When using the SQLite database engine the tests will by default use an
in-memory database (i.e., the database will be created in memory,
bypassing the filesystem entirely!).
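So with a plain SQLite configuration nothing special is required; the test runner substitutes :memory: on its own. A sketch of such a settings entry:

# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'dev.db',  # ignored by the test runner, which uses :memory:
    }
}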
Mocking the QuerySet object will not make you exercise its full logic.
You can mock like this:
from mock import patch

@patch('django.db.models.query.QuerySet')
def test_returning_distinct_records_for_city(self, mock_qs):
    # ... call the code under test here so that the patched QuerySet
    # is actually exercised before asserting on it ...
    self.assertTrue(mock_qs.called)
Not that I know of, but why not use an actual queryset? The test framework is all set up to allow you to create sample data within your test, and the database is re-created on every test, so there doesn't seem to be any reason not to use the real thing.
I want to include an initialized data structure in my request object, making it accessible in the context object from my templates. What I'm doing right now is passing it manually, which is tiresome, in all my views:
render_to_response(...., {'menu': RequestContext(request)})
The request object contains the key/value pair, which is injected using a custom context processor. While this works, I had hoped there was a more generic way of passing selected parts of the request object to the template context. I've tried passing it via generic views, but as it turns out the request object isn't instantiated when the urlpatterns list is parsed.
To accomplish this, you will probably have to create your own middleware. That way, you have full control of the request, both before and after the view function.
Middleware is a very powerful concept, and not as hard to implement as it might seem, but don't overdo it, as it makes the program flow hard to follow.
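For example, an old-style middleware sketch that attaches a menu to every request (build_menu is a hypothetical helper):

class MenuMiddleware(object):
    """Make the menu available on every request object."""

    def process_request(self, request):
        request.menu = build_menu(request)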
I'm not sure I understand your question well enough.
Either you are complaining about having to include the RequestContext in all views, in which case you need to write a wrapper that passes the RequestContext for you. But you will still have to pass the request to it; if you don't want to pass that either, you may have to create your own middleware as mikl suggests.
Or you are complaining about having to pass a lot of menu items in each and every view. That is the wrong way to do it; you need to define a template context processor that ensures these are present in the template by default.
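A context processor is just a function that returns a dict to merge into the template context; a minimal sketch (build_menu is again a hypothetical helper):

# myapp/context_processors.py
def menu(request):
    # Whatever this returns becomes available as {{ menu }} in templates
    # rendered with RequestContext.
    return {'menu': build_menu(request)}

# settings.py (pre-Django 1.8 style, matching this thread's era):
# TEMPLATE_CONTEXT_PROCESSORS += ('myapp.context_processors.menu',)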
I am adding MetaWeblog API support to a Django CMS, and am not quite sure how to layer the application.
I am using django_xmlrpc, which allows me to map each request to a parameterised function. It is just a question of at what level the service functions (AddPage, EditPage, etc.) should hook into the Django application.
For django-page-cms, and I suppose many django apps, the business logic and validation is contained within the forms. In this case there is PageForm(forms.ModelForm) and PageAdmin(ModelAdmin), which both contain a lot of logic and validation.
If I am to build an API to allow maintenance of pages and content, does this mean I should be programmatically creating and filling a PageAdmin instance, then catching any exceptions and converting them to their API equivalents? Or would this be a bad idea, misusing what forms are intended for?
The other option is refactoring the code so that the business logic and validation are kept outside the form classes. Then the form and the API would both go through the separate business logic.
Any other alternatives?
What would be the best solution?
Web services API's are just more URL's.
These WS API URL's map to view functions.
The WS view functions handle GET and POST (and possibly PUT and DELETE).
The WS view functions use Forms as well as the Models to make things happen.
It is, in a way, like an admin interface. Except, there's no HTML.
The WS view functions respond with JSON messages or XML messages.
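A sketch of such a WS view, reusing the PageForm from the question (Python 2 style; error handling kept minimal):

import json

from django.http import HttpResponse, HttpResponseNotAllowed

from .forms import PageForm  # the existing form holding the business logic

def page_api(request):
    if request.method != 'POST':
        return HttpResponseNotAllowed(['POST'])
    form = PageForm(request.POST)
    if not form.is_valid():
        # Translate form validation errors into an API-level error payload.
        errors = {field: [unicode(e) for e in errs]
                  for field, errs in form.errors.items()}
        return HttpResponse(json.dumps({'errors': errors}),
                            content_type='application/json', status=400)
    page = form.save()
    return HttpResponse(json.dumps({'id': page.pk}),
                        content_type='application/json')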
It seems Python does not provide this out of the box, but there is the abc module.
To quote from http://www.doughellmann.com/PyMOTW/abc/: "By defining an abstract base class, you can define a common API for a set of subclasses. This capability is especially useful in situations where a third-party is going to provide implementations..." That is exactly the goal of an API: defining a contract.
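A minimal sketch (Python 2 style, matching the rest of this thread):

import abc

class PageAPI(object):
    """Abstract base class defining the contract for page backends."""
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def add_page(self, title, body):
        """Create a page and return its identifier."""

# Instantiating a subclass that does not implement add_page raises
# TypeError, so the contract is enforced at construction time.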