Dependency injection in (Python) Google App Engine

I want to achieve maximum testability in my Google App Engine app, which I'm writing in Python.
Basically what I'm doing is creating an all-purpose base handler which inherits from google.appengine.ext.webapp.RequestHandler. My base handler will expose common functionality in my app, such as repository functions, a session object and the like.
When the WSGIApplication receives a request, it finds the handler class registered for the requested URL, calls its constructor, and then calls a method named initialize, passing in the request and response objects.
Now, for the sake of testability I want to be able to "mock" these objects (along with my own objects). So my question is: how do I go about injecting these mocks? I could override the initialize method in my base handler, check some global "test flag" and initialize dummy request and response objects, but that seems wrong (in my mind at least). And how do I go about initializing my other objects (which may depend on the request and response objects)?
As you can probably tell, I'm a little new to Python, so any recommendations would be most welcome.
EDIT:
It has been pointed out to me that this question was a little hard to answer without some code, so here goes:
from google.appengine.ext import webapp
from ..utils import gmemsess
from .. import errors
_user_id_name = 'userid'

class Handler(webapp.RequestHandler):
    '''
    classdocs
    '''
    def __init__(self):
        '''
        Constructor
        '''
        self.charset = 'utf8'
        self._session = None

    def _getsession(self):
        if not self._session:
            self._session = gmemsess.Session(self)
        return self._session

    def _get_is_logged_in(self):
        return _user_id_name in self.session

    def _get_user_id(self):
        if not self.is_logged_in:
            raise errors.UserNotLoggedInError()
        return self.session[_user_id_name]

    session = property(_getsession)
    is_logged_in = property(_get_is_logged_in)
    user_id = property(_get_user_id)
As you can see, no dependency injection is going on here at all. The session object is created by calling gmemsess.Session(self). The Session class expects an object with a request attribute (it uses this to read a cookie value). In this case self does have such a property, since it inherits from webapp.RequestHandler; but it only has it because, after calling the (empty) constructor, WSGIApplication calls a method named initialize which sets the request (and response) objects. The initialize method is declared on the base class (webapp.RequestHandler).
It looks like this:
def initialize(self, request, response):
    """Initializes this request handler with the given Request and
    Response."""
    self.request = request
    self.response = response
When a request is made, the WSGIApplication class does the following:
def __call__(self, environ, start_response):
    """Called by WSGI when a request comes in."""
    request = self.REQUEST_CLASS(environ)
    response = self.RESPONSE_CLASS()
    WSGIApplication.active_instance = self
    handler = None
    groups = ()
    for regexp, handler_class in self._url_mapping:
        match = regexp.match(request.path)
        if match:
            handler = handler_class()
            handler.initialize(request, response)
            groups = match.groups()
            break
    self.current_request_args = groups
    if handler:
        try:
            method = environ['REQUEST_METHOD']
            if method == 'GET':
                handler.get(*groups)
            elif method == 'POST':
                handler.post(*groups)
            '''SNIP'''
The lines of interest are those that say:
handler = handler_class()
handler.initialize(request, response)
As you can see, it calls the empty constructor on my handler class. And this is a problem for me, because what I think I would like to do is inject, at runtime, the type of my session object, so that my class would look like this instead (fragment shown):
def __init__(self, session_type):
    '''
    Constructor
    '''
    self.charset = 'utf8'
    self._session = None
    self._session_type = session_type

def _getsession(self):
    if not self._session:
        self._session = self._session_type(self)
    return self._session
However, I can't get my head around how I would achieve this, since the WSGIApplication only calls the empty constructor. I guess I could register the session_type in some global variable, but that does not really follow the philosophy of dependency injection (as I understand it). Then again, I'm new to Python, so maybe I'm just thinking about it the wrong way. In any event, I would rather pass in a session object instead of its type, but that looks kind of impossible here.
Any input is appreciated.

The simplest way to achieve what you want would be to create a module-level variable containing the class of the session to create:
# myhandler.py

session_class = gmemsess.Session

class Handler(webapp.RequestHandler):
    def _getsession(self):
        if not self._session:
            self._session = session_class(self)
        return self._session
then, wherever it is that you decide between testing and running:
import myhandler

if testing:
    myhandler.session_class = MyTestingSession
This leaves your handler class nearly untouched, leaves the WSGIApplication completely untouched, and gives you the flexibility to do your testing as you want.

Why not just test your handlers in isolation? That is, create your mock Request and Response objects, instantiate the handler you want to test, and call handler.initialize(request, response) with your mocks. There's no need for dependency injection here.
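For instance, such a test might look like this (a minimal sketch; the FakeRequest/FakeResponse stand-ins and the myapp.handlers import path are hypothetical):

import unittest
from myapp.handlers import Handler  # hypothetical module path

class FakeRequest(object):
    '''Bare-bones stand-in exposing only what the handler reads.'''
    def __init__(self, cookies=None):
        self.cookies = cookies or {}

class FakeResponse(object):
    '''Bare-bones stand-in for the webapp response object.'''

class HandlerTest(unittest.TestCase):
    def test_initialize_uses_injected_objects(self):
        request, response = FakeRequest(), FakeResponse()
        handler = Handler()
        handler.initialize(request, response)
        self.assertIs(handler.request, request)
        self.assertIs(handler.response, response)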


Python3 "Class factory" - ex: API(token).MyClass()

I'm writing a Python REST client for an API.
The API needs authentication, and I would like to have many API client objects running in the same script.
My current code for the API is something like this:
import requests

class RestAPI:
    def __init__(self, id):
        self.id = id
        self.fetch()

    def fetch(self):
        requests.get(self.url + self.id, auth=self.apikey)

class Purchase(RestAPI):
    url = 'http://example.com/purchases/'

class Invoice(RestAPI):
    url = 'http://example.com/invoices/'

...
And I would like to use the API like this:

api_admin = Api('adminmytoken')
api_user = Api('usertoken')
…
amount = api_admin.Purchase(2).amount
api_user.Purchase(2).amount  # raises because api_user is not authorized for this purchase
The problem is that each object needs to know its API key depending on the client I want to use.
That looks to me like a "class factory": all the RestAPI subclasses need to know the provided token.
How can I do this cleanly without manually passing the token to each model?
I think the issue here is that your design is a little backwards. Inheritance might not be the key here. What I might do is take the API token as an argument on the user class, which then gets passed to an instance-level binding on the REST interface:
import requests

class Purchase:
    http_method = 'GET'

    def __init__(self, *args, **kwargs):
        # do some setup here with the params passed in from the
        # json returned by the api
        ...

class Invoice:
    # stub so the methods table below resolves
    http_method = 'GET'

class Interface:
    methods = {
        # call you want: (class, url)
        'PURCHASE': (Purchase, 'https://myexample.com/purchases/'),
        'INVOICE': (Invoice, 'https://myexample.com/invoices/'),
        # add more methods here
    }

    def __init__(self, id, key, **kwargs):
        self.id = id
        self.key = key
        self.session = requests.Session()

    def fetch(self, method, *args, **kwargs):
        # go get the data
        try:
            # use the interface to look up your class objects,
            # which you may or may not need
            _class, url = self.methods[method]
        except KeyError as e:
            supported = '\n'.join(self.methods)
            raise ValueError(f"Got unsupported method, expected:\n{supported}") from e
        headers = kwargs.pop('headers', {})
        # I'm not sure of the actual interface here; maybe you call the
        # url to get metadata to populate the class with first...
        # (auth may need to be a tuple or an AuthBase instance,
        # depending on your API)
        req = requests.Request(_class.http_method, url + str(self.id),
                               auth=self.key, headers=headers).prepare()
        resp = self.session.send(req)
        # this will raise the 401 ahead of time
        resp.raise_for_status()
        # maybe your object uses metadata from the response
        params = resp.json()
        # return the business object only if the user should see it
        return _class(*args, **kwargs, **params)

class APIUser:
    def __init__(self, id, api_key, **kwargs):
        self._rest = Interface(id, api_key, **kwargs)

    def purchase(self, some_arg):
        # the interface itself does the actual legwork, and you are
        # simply using APIUser to call functions with the interface
        return self._rest.fetch('PURCHASE', some_arg)

user = APIUser("token", "key")  # this is my user session
some_purchase = user.purchase(2)  # raises a 401 Unauthorized error from the requests session

admin = APIUser("admintoken", "adminkey")  # admin session
some_purchase = admin.purchase(2)
# returns a purchase object
some_purchase.amount
There are a few reasons why you might want to go this way:

- You don't get the object back if you aren't allowed to see it.
- The REST interface is in control of who sees what, and that's implicitly tied to the user object itself, without every other class needing to be aware of what's going on.
- You can change your URLs in one place (if you need to).
- Your business objects are just business objects; they don't need to do anything else.

By separating out what your objects actually are, you still only need to pass the API keys and tokens once, to the APIUser class. The Interface is bound on the instance, still giving you the flexibility of multiple users within the same script.
You also get the models you call on explicitly. If you try to take a model, you have to call it, and that's when the Interface can enforce your authentication. Your authentication no longer needs to be enforced by your business objects.

How to have one DB URI for read and one for read-write [duplicate]

I have a Flask/SQLAlchemy webapp that uses a single MySQL server. I want to expand the database setup to have a read-only slave server, such that I can spread the reads between master and slave while continuing to write to the master DB server.
I have looked at a few options, and I believe I can't do this with plain SQLAlchemy. Instead I'm planning to create two database handles in my webapp, one each for the master and slave DB servers, and then, using a simple random value, use either the master or slave handle for "SELECT" operations.
However, I'm not sure if this is the right way to go with SQLAlchemy. Any suggestions/tips on how to pull this off?
I have an example of how to do this on my blog at http://techspot.zzzeek.org/2012/01/11/django-style-database-routers-in-sqlalchemy/. Basically you can enhance the Session so that it chooses master or slave on a query-by-query basis. One potential glitch with that approach is that if you have one transaction that calls six queries, you might end up using both slaves in one request... but there we're just trying to imitate Django's feature :)
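A stripped-down sketch of that idea (the recipe in the blog post is more complete; the engine URLs here are placeholders):

import random
from sqlalchemy import create_engine
from sqlalchemy.orm import Session, sessionmaker

master = create_engine("mysql://master-host/mydb")
slaves = [create_engine("mysql://slave1-host/mydb"),
          create_engine("mysql://slave2-host/mydb")]

class RoutingSession(Session):
    def get_bind(self, mapper=None, clause=None):
        # flushes (writes) go to the master; plain reads can go to any slave
        if self._flushing:
            return master
        return random.choice(slaves)

SessionFactory = sessionmaker(class_=RoutingSession)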
A slightly less magic approach, which also establishes the scope of usage more explicitly, is a decorator on view callables (or whatever they're called in Flask), like this:
@with_slave
def my_view(...):
    # ...
with_slave would do something like this, assuming you have a Session and some engines set up:
master = create_engine("some DB")
slave = create_engine("some other DB")

Session = scoped_session(sessionmaker(bind=master))

def with_slave(fn):
    def go(*arg, **kw):
        s = Session(bind=slave)
        return fn(*arg, **kw)
    return go
The idea is that calling Session(bind=slave) invokes the registry to get at the actual Session object for the current thread, creating it if it doesn't exist - however since we're passing an argument, scoped_session will assert that the Session we're making here is definitely brand new.
You point it at the "slave" for all subsequent SQL. Then, when the request is over, you'd ensure that your Flask app is calling Session.remove() to clear out the registry for that thread. When the registry is next used on the same thread, it will be a new Session bound back to the "master".
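In Flask, that cleanup fits naturally into a teardown hook (a sketch, assuming app is your Flask application):

@app.teardown_appcontext
def cleanup_session(exception=None):
    # discard this thread's Session so the next request starts fresh,
    # bound back to the master
    Session.remove()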
Or, a variant that uses the "slave" just for that one call; this is "safer" in that it restores any existing bind back to the Session afterwards:
def with_slave(fn):
    def go(*arg, **kw):
        s = Session()
        oldbind = s.bind
        s.bind = slave
        try:
            return fn(*arg, **kw)
        finally:
            s.bind = oldbind
    return go
For each of these decorators you can reverse things, having the Session bound to a "slave" while the decorator switches it to the "master" for write operations. If you wanted a random slave in that case, Flask's before_request hook is a natural place to set it up, as sketched below.
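A sketch of that, assuming app is the Flask application and slave_engines is a list of read-only engines:

import random

@app.before_request
def bind_random_slave():
    # create this thread's Session up front, bound to a random slave;
    # a decorator like with_slave above can then rebind to the master
    # for write operations
    Session(bind=random.choice(slave_engines))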
Or we can try another way: declare two different model classes with all the same instance attributes but a different __bind_key__ class attribute. Then we can use the rw class for read/write and the r class for read-only. :)
I think this way is easier and more reliable. :)
We declare two db models because we can have tables with the same names in two different databases. This way we also bypass the 'extend_existing' error that occurs when two models share the same __tablename__.
Here is an example:
app = Flask(__name__)
# the bind values here are placeholder database URIs
app.config['SQLALCHEMY_BINDS'] = {'rw': 'rw', 'r': 'r'}

db = SQLAlchemy(app)
# Flask-SQLAlchemy 2.1-style API; newer versions take arguments
db.Model_RW = db.make_declarative_base()

class A(db.Model):
    __tablename__ = 'common'
    __bind_key__ = 'r'
    id = db.Column(db.Integer, primary_key=True)

class A_RW(db.Model_RW):  # named differently so it does not shadow A above
    __tablename__ = 'common'
    __bind_key__ = 'rw'
    id = db.Column(db.Integer, primary_key=True)
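Usage then splits naturally between the two models (a sketch, given the models above; exact behaviour depends on your Flask-SQLAlchemy version):

# reads go through the model bound to the read-only database
items = A.query.all()

# writes go through the read/write model
a = A_RW()
db.session.add(a)
db.session.commit()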
Maybe this answer is too late! I use a slave_session to query the slave DB:

# imports as of Flask-SQLAlchemy 2.x; later versions reorganized
# SignallingSession and get_state
from flask_sqlalchemy import SQLAlchemy, SignallingSession, get_state
from sqlalchemy import orm
from sqlalchemy.orm import Session as SessionBase

class RoutingSession(SignallingSession):
    def __init__(self, db, bind_name=None, autocommit=False, autoflush=True,
                 **options):
        self.app = db.get_app()
        if bind_name:
            bind = options.pop('bind', None)
        else:
            bind = options.pop('bind', None) or db.engine
        self._bind_name = bind_name
        SessionBase.__init__(
            self, autocommit=autocommit, autoflush=autoflush,
            bind=bind, binds=None, **options
        )

    def get_bind(self, mapper=None, clause=None):
        if self._bind_name is not None:
            state = get_state(self.app)
            return state.db.get_engine(self.app, bind=self._bind_name)
        else:
            if mapper is not None:
                try:
                    persist_selectable = mapper.persist_selectable
                except AttributeError:
                    persist_selectable = mapper.mapped_table
                info = getattr(persist_selectable, 'info', {})
                bind_key = info.get('bind_key')
                if bind_key is not None:
                    state = get_state(self.app)
                    return state.db.get_engine(self.app, bind=bind_key)
            return SessionBase.get_bind(self, mapper, clause)

class RouteSQLAlchemy(SQLAlchemy):
    def __init__(self, *args, **kwargs):
        SQLAlchemy.__init__(self, *args, **kwargs)
        self.slave_session = self.create_scoped_session(
            {'bind_name': 'slave'})

    def create_session(self, options):
        return orm.sessionmaker(class_=RoutingSession, db=self, **options)

db = RouteSQLAlchemy(metadata=metadata, query_class=orm.Query)  # metadata defined elsewhere
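With that in place, reads can be sent explicitly through the slave (a sketch; User is a hypothetical model, and a 'slave' entry must exist in SQLALCHEMY_BINDS):

# default session: normal bind resolution
user = db.session.query(User).get(1)

# explicit read from the slave engine
user = db.slave_session.query(User).get(1)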

Automatically adding headers to python Requests requests

I am trying to create a REST API client for talking to one of our services. Every request needs to include an authorisation header which is composed of epoch times, the request verb, data, path, etc.
I'm trying to use the python requests module as seamlessly as possible, but am unsure the best way to "inject" a header into every request.
There seems to be a concept of "hooks" in requests, but currently there is only a "response" hook.
I was considering extending the Session object and overriding the "send" method, adding the header and then passing it up to the super (Session.send) method.
My Python isn't fantastic when it comes to OOP and inheritance, but this is what I have tried:
from requests import Session
from urlparse import urlparse  # Python 2; on Python 3: from urllib.parse import urlparse

class MySession(Session):
    def __init__(self, access_id=None, access_key=None):
        self.access_id = access_id
        self.access_key = access_key
        super(MySession, self).__init__()

    def send(self, request, **kwargs):
        method = request.method
        path = urlparse(request.url).path
        # __create_security_header (defined elsewhere) builds the
        # signature from the per-request method and path
        request.headers['Authorization'] = self.__create_security_header(method, path)
        request.headers['Content-Type'] = "application/json"
        return Session.send(self, request, **kwargs)
For headers that are the same on every request, you don't need to override the send method at all: a requests Session merges its own headers attribute into every request it sends, so you can set the defaults once in __init__:

class MySession(Session):
    def __init__(self, access_id=None, access_key=None):
        super(MySession, self).__init__()
        self.access_id, self.access_key = access_id, access_key
        # default headers merged into every request made through this session
        self.headers['Content-Type'] = "application/json"

Most likely that should do for the static headers. Note that your Authorization header depends on each request's method and path, which aren't known in __init__, so that part still belongs in the send override (or a custom auth callable).
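Usage would then look something like this (a sketch):

s = MySession(access_id='my-id', access_key='my-key')
# the Content-Type default is attached automatically
resp = s.get('https://api.example.com/things')  # hypothetical endpoint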

Mocking render to response with Pyramid

I have a decorator that looks like so:
def validate_something(func):
def validate_s(request):
if request.property:
render_to_response('template.jinja', 'error'
return func(request)
return validate_something
I'm trying to test it like so. I load the local WSGI stack as an app.
from webtest import TestApp

def setUp(self):
    self.app = TestApp(target_app())
    self.config = testing.setUp(request=testing.DummyRequest)

def test_something(self):
    def test_func(request):
        return 1
    request = testing.DummyRequest()
    resp = validate_something(test_func)  # decorate the function itself
    result = resp(request)
The error I'm getting (generated at the innermost render_to_response) is:
ValueError: no such renderer factory .jinja
I understand that I need to mock render_to_response, but I'm at a bit of a loss as to how to exactly do that. If anyone has any suggestions, I would greatly appreciate it.
The Mock library is awesome:
mock provides a core Mock class removing the need to create a host of
stubs throughout your test suite. After performing an action, you can
make assertions about which methods / attributes were used and
arguments they were called with. You can also specify return values
and set needed attributes in the normal way.
Additionally, mock provides a patch() decorator that handles patching
module and class level attributes within the scope of a test
Your code would look like this:
def test_something(self):
    test_func = mock.MagicMock(return_value=1)  # replaced stub function with a mock
    request = testing.DummyRequest()
    # patching a Pyramid method with a mock
    with mock.patch('pyramid.renderers.render_to_response') as r2r:
        resp = validate_something(test_func)
        result = resp(request)
    r2r.assert_called_with('template.jinja', 'error')
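One caveat worth noting: mock.patch replaces the name in the namespace where it is looked up. If the module containing the decorator does from pyramid.renderers import render_to_response, patching 'pyramid.renderers.render_to_response' won't affect that already-imported reference; patch it where it's used instead (a sketch, assuming the decorator lives in a hypothetical myapp.decorators module):

with mock.patch('myapp.decorators.render_to_response') as r2r:
    resp = validate_something(test_func)
    result = resp(request)
r2r.assert_called_with('template.jinja', 'error')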
The following worked for me:
def setUp(self):
    self.app = TestApp(target_app())
    self.config = testing.setUp(request=testing.DummyRequest)
    self.config.include('pyramid_jinja2')
By setting up the config to include pyramid_jinja2, your tests can find your template and the Jinja environment. You may also need to provide a test version of the template in the same folder as your tests; if you get a message such as TemplateNotFound when running the tests, make sure a copy of the template is located in the correct place.

Call external function without sending 'self' arg

I'm writing tests for a Django application and using an attribute on my test class to store the view it's supposed to be testing, like this:
# IN TESTS.PY
class OrderTests(TestCase, ShopTest):
    _VIEW = views.order

    def test_gateway_answer(self):
        url = 'whatever url'
        request = self.request_factory(url, 'GET')
        self._VIEW(request, **{'sku': order.sku})

# IN VIEWS.PY
def order(request, sku):
    ...
My guess is that the problem is caused by the fact that, since I'm calling an attribute of the OrderTests class, Python assumes I want to send self, and then order gets the wrong arguments. Easy to solve by just not using a class attribute, but I was wondering if there's a way to tell Python not to send self in this case.
Thanks.
This happens because in Python functions are descriptors, so when they are accessed on class instances they bind their first (assumed self) parameter to the instance.
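You can watch the binding happen (a small demonstration; note that on Python 2, class-level access still yields an unbound method that type-checks its first argument, which is why the staticmethod variant below is the more portable fix):

def order(request, sku):
    return request, sku

class OrderTests(object):
    _VIEW = order

t = OrderTests()
print(t._VIEW)
# <bound method order of <__main__.OrderTests object at 0x...>>
# so t._VIEW('url') would pass t as `request` and 'url' as `sku`
print(OrderTests.__dict__['_VIEW'])
# <function order at 0x...> -- the plain, unbound function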
You could access _VIEW on the class, not on the instance:
class OrderTests(TestCase, ShopTest):
    _VIEW = views.order

    def test_gateway_answer(self):
        url = 'whatever url'
        request = self.request_factory(url, 'GET')
        OrderTests._VIEW(request, **{'sku': order.sku})
Alternatively, you can wrap it in staticmethod to prevent it being bound to the instance:
class OrderTests(TestCase, ShopTest):
    _VIEW = staticmethod(views.order)

    def test_gateway_answer(self):
        url = 'whatever url'
        request = self.request_factory(url, 'GET')
        self._VIEW(request, **{'sku': order.sku})
