How to write a stub for a classmethod in Python

I have a method which calls a classmethod of another class:

def get_interface_params_by_mac(self, host, mac_unified):
    lines = RemoteCommand.remote_command(host, RemoteCommand.IFCONFIG)
    ...

class RemoteCommand(object):
    @classmethod
    def remote_command(cls, host, cmd, sh=None):
        ...
I'm going to write a unit test for the get_interface_params_by_mac method, in which I'd like to replace the implementation of remote_command (I believe this is called a stub - correct me if I'm wrong).
What is the right way to do this in Python?

Your unit-test code (maybe in its setUp method, if this is needed across several test methods and thus qualifies as a fixture) should do:
def fake_command(cls, host, cmd, sh=None):
    pass  # whatever you want in here

self.save_remote_command = somemodule.RemoteCommand.remote_command
somemodule.RemoteCommand.remote_command = classmethod(fake_command)
and then undo this monkey-patching (e.g. in the tearDown method if the patching is done in setUp) by
somemodule.RemoteCommand.remote_command = self.save_remote_command
It's not always necessary to put things back after a test, but it's good general practice.
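Put together, a minimal self-contained sketch of this monkey-patching approach might look like the following (somemodule and the fake return value are stand-ins for your real module and data):

import unittest
import somemodule  # the module containing RemoteCommand

class TestInterfaceParams(unittest.TestCase):
    def setUp(self):
        # Save the real classmethod, then monkey-patch in a fake.
        self.save_remote_command = somemodule.RemoteCommand.remote_command
        def fake_command(cls, host, cmd, sh=None):
            return ['fake ifconfig output']  # whatever your test needs
        somemodule.RemoteCommand.remote_command = classmethod(fake_command)

    def tearDown(self):
        # Restore the original so other tests see the real method.
        somemodule.RemoteCommand.remote_command = self.save_remote_command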
A more elegant approach would be to design your code for testability via the Dependency Injection (DI) pattern:
def __init__(self, ...):
    ...
    self.remote_command = RemoteCommand.remote_command
    ...

def set_remote_command_function(self, thefunction):
    self.remote_command = thefunction

def get_interface_params_by_mac(self, host, mac_unified):
    lines = self.remote_command(host, RemoteCommand.IFCONFIG)
DI buys you a lot of flexibility (testability-wise, but also in many other contexts) at very little cost, which makes it one of my favorite design patterns (I'd much rather avoid monkey patching wherever I possibly can). Of course, if you design your code under test to use DI, all you need to do in your test is appropriately prepare that instance by calling the instance's set_remote_command_function with whatever fake-function you want to use!
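For instance, a test of the DI variant might look like this (InterfaceInspector is a hypothetical name for the class under test):

def fake_remote_command(host, cmd, sh=None):
    return ['fake ifconfig output']

obj = InterfaceInspector()  # hypothetical class under test
obj.set_remote_command_function(fake_remote_command)
params = obj.get_interface_params_by_mac('myhost', '00:11:22:33:44:55')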

Related

Preferred way of patching multiple methods in Python unit test

I need to patch three methods (_send_reply, _reset_watchdog and _handle_set_watchdog) with mock methods before testing a call to a fourth method (_handle_command) in a unit test of mine.
From looking at the documentation for the mock package, there's a few ways I could go about it:
With patch.multiple as decorator
@patch.multiple(MBG120Simulator,
                _send_reply=DEFAULT,
                _reset_watchdog=DEFAULT,
                _handle_set_watchdog=DEFAULT,
                autospec=True)
def test_handle_command_too_short_v1(self,
                                     _send_reply,
                                     _reset_watchdog,
                                     _handle_set_watchdog):
    simulator = MBG120Simulator()
    simulator._handle_command('XA99')
    _send_reply.assert_called_once_with(simulator, 'X?')
    self.assertFalse(_reset_watchdog.called)
    self.assertFalse(_handle_set_watchdog.called)
    simulator.stop()
With patch.multiple as context manager
def test_handle_command_too_short_v2(self):
    simulator = MBG120Simulator()
    with patch.multiple(simulator,
                        _send_reply=DEFAULT,
                        _reset_watchdog=DEFAULT,
                        _handle_set_watchdog=DEFAULT,
                        autospec=True) as mocks:
        simulator._handle_command('XA99')
        mocks['_send_reply'].assert_called_once_with('X?')
        self.assertFalse(mocks['_reset_watchdog'].called)
        self.assertFalse(mocks['_handle_set_watchdog'].called)
    simulator.stop()
With multiple patch.object decorators
@patch.object(MBG120Simulator, '_send_reply', autospec=True)
@patch.object(MBG120Simulator, '_reset_watchdog', autospec=True)
@patch.object(MBG120Simulator, '_handle_set_watchdog', autospec=True)
def test_handle_command_too_short_v3(self,
                                     _handle_set_watchdog_mock,
                                     _reset_watchdog_mock,
                                     _send_reply_mock):
    simulator = MBG120Simulator()
    simulator._handle_command('XA99')
    _send_reply_mock.assert_called_once_with(simulator, 'X?')
    self.assertFalse(_reset_watchdog_mock.called)
    self.assertFalse(_handle_set_watchdog_mock.called)
    simulator.stop()
Manually replacing methods using create_autospec
def test_handle_command_too_short_v4(self):
    simulator = MBG120Simulator()
    # Mock some methods.
    simulator._send_reply = create_autospec(simulator._send_reply)
    simulator._reset_watchdog = create_autospec(simulator._reset_watchdog)
    simulator._handle_set_watchdog = create_autospec(simulator._handle_set_watchdog)
    # Exercise.
    simulator._handle_command('XA99')
    # Check.
    simulator._send_reply.assert_called_once_with('X?')
    self.assertFalse(simulator._reset_watchdog.called)
    self.assertFalse(simulator._handle_set_watchdog.called)
Personally I think the last one is clearest to read, and will not result in horribly long lines if the number of mocked methods grow. It also avoids having to pass in simulator as the first (self) argument to assert_called_once_with.
But I don't find any of them particularly nice. Especially the multiple patch.object approach, which requires careful matching of the parameter order to the nested decorators.
Is there some approach I've missed, or a way to make this more readable? What do you do when you need to patch multiple methods on the instance/class under test?
No, you haven't missed anything really different from what you proposed.
As for readability, my taste is for the decorator way, because it removes the mocking stuff from the test body... but that is just taste.
You are right: if you patch the method on the class with autospec=True, you must pass the instance to the assert_called_* family of check methods. But your case is a special one: you know exactly what object you need to patch, and you don't need any context for your patch other than the test method.
Often in tests you cannot get hold of the instance to patch before making your call, and in those cases create_autospec cannot be used: you can only patch the method on the class instead.
If you are bothered by passing the instance to the assert_called_* methods, consider using ANY to break the dependency. Finally, I have written hundreds of tests like that and never had a problem with the argument order.
My standard approach to your test is:
from unittest.mock import patch

@patch('mbgmodule.MBG120Simulator._send_reply', autospec=True)
@patch('mbgmodule.MBG120Simulator._reset_watchdog', autospec=True)
@patch('mbgmodule.MBG120Simulator._handle_set_watchdog', autospec=True)
def test_handle_command_too_short(self, mock_handle_set_watchdog,
                                  mock_reset_watchdog,
                                  mock_send_reply):
    simulator = MBG120Simulator()
    simulator._handle_command('XA99')
    # You can use ANY instead of simulator if you don't have it
    mock_send_reply.assert_called_once_with(simulator, 'X?')
    self.assertFalse(mock_reset_watchdog.called)
    self.assertFalse(mock_handle_set_watchdog.called)
    simulator.stop()
Patching is kept out of the test method's body.
Every mock name starts with the mock_ prefix.
I prefer a plain patch call with an absolute path: it is clear and neat what you are doing.
Finally: maybe creating the simulator and stopping it are the responsibility of setUp() and tearDown(), so that the tests only need to patch some methods and do the checks.
I hope this answer is useful, but the question doesn't have a single valid answer, because readability is not an absolute concept and depends on the reader. Moreover, even though the title speaks of the general case, the question's examples are about the specific class of problem where you must patch methods of the very object under test.
[EDIT]
I thought for a while about this question and found what bothers me: you are trying to test and sense on private methods. When this happens, the first thing you should ask is why. Chances are that the answer is that these methods should be public methods of private collaborators (those are not my words).
In that new scenario you would sense on the private collaborators, and you cannot just patch your own object: what you need to do is patch the methods of those other classes.
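As an illustrative sketch (the Watchdog collaborator and its module path are hypothetical, not from the question), sensing on a collaborator's public method might look like:

from unittest.mock import patch

# 'mbgmodule.Watchdog' stands in for a private collaborator of the simulator.
@patch('mbgmodule.Watchdog.reset', autospec=True)
def test_command_resets_watchdog(self, mock_reset):
    simulator = MBG120Simulator()
    simulator._handle_command('XA99')
    # Sense on the collaborator's public method instead of a private one.
    self.assertTrue(mock_reset.called)
    simulator.stop()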

passing dependencies of dependencies using manual constructor injection in python

My Situation
I'm currently writing on a project in python which I want to use to learn a bit more about software architecture. I've read a few texts and watched a couple of talks about dependency injection and learned to love how clear constructor injection shows the dependencies of an object.
However, I'm kind of struggling how to get a dependency passed to an object. I decided NOT to use a DI framework since:
I don't have enough knowledge of DI to specify my requirements and thus cannot choose a framework.
I want to keep the code free of more "magical" stuff since I have the feeling that introducing a seldom used framework drastically decreases readability. (More code to read of which only a small part is used).
Thus, I'm using custom factory functions to create objects and explicitly pass their dependencies:
# Business and Data Objects
class Foo:
    def __init__(self, bar):
        self.bar = bar
    def do_stuff(self):
        print(self.bar)

class Bar:
    def __init__(self, prefix):
        self.prefix = prefix
    def __str__(self):
        return str(self.prefix) + "Hello"

# Wiring up dependencies
def create_bar():
    return Bar("Bar says: ")

def create_foo():
    return Foo(create_bar())

# Starting the application
f = create_foo()
f.do_stuff()
Alternatively, if Foo has to create a number of Bars itself, it gets the creator function passed through its constructor:
# Business and Data Objects
class Foo:
    def __init__(self, create_bar):
        self.create_bar = create_bar
    def do_stuff(self, times):
        for _ in range(times):
            bar = self.create_bar()
            print(bar)

class Bar:
    def __init__(self, greeting):
        self.greeting = greeting
    def __str__(self):
        return self.greeting

# Wiring up dependencies
def create_bar():
    return Bar("Hello World")

def create_foo():
    return Foo(create_bar)

# Starting the application
f = create_foo()
f.do_stuff(3)
While I'd love to hear improvement suggestions on the code, this is not really the point of this post. However, I feel that this introduction is required to understand
My Question
While the above looks rather clear, readable and understandable to me, I run into a problem when the prefix dependency of Bar is required to be identical in the context of each Foo object and thus is coupled to the Foo object lifetime. As an example consider a prefix which implements a counter (See code examples below for implementation details).
I have two ideas for how to realize this; however, neither of them seems perfect to me:
1) Pass Prefix through Foo
The first idea is to add a constructor parameter to Foo and make it store the prefix in each Foo instance.
The obvious drawback is that it mixes up the responsibilities of Foo: it controls the business logic AND provides one of the dependencies to Bar. Once Bar no longer requires the dependency, Foo has to be modified. That seems like a no-go to me. Since I don't really think this should be a solution, I did not post the code here, but provided it on pastebin for the very interested reader ;)
2) Use Functions with State
Instead of placing the Prefix object inside Foo this approach is trying to encapsulate it inside the create_foo function. By creating one Prefix for each Foo object and referencing it in a nameless function using lambda, I keep the details (a.k.a there-is-a-prefix-object) away from Foo and inside my wiring-logic. Of course a named function would work, too (but lambda is shorter).
# Business and Data Objects
class Foo:
    def __init__(self, create_bar):
        self.create_bar = create_bar
    def do_stuff(self, times):
        for _ in range(times):
            bar = self.create_bar()
            print(bar)

class Bar:
    def __init__(self, prefix):
        self.prefix = prefix
    def __str__(self):
        return str(self.prefix) + "Hello"

class Prefix:
    def __init__(self, name):
        self.name = name
        self.count = 0
    def __str__(self):
        self.count += 1
        return self.name + " " + str(self.count) + ": "

# Wiring up dependencies
def create_bar(prefix):
    return Bar(prefix)

def create_prefix(name):
    return Prefix(name)

def create_foo(name):
    prefix = create_prefix(name)
    return Foo(lambda: create_bar(prefix))

# Starting the application
f1 = create_foo("foo1")
f2 = create_foo("foo2")
f1.do_stuff(3)
f2.do_stuff(2)
f1.do_stuff(2)
This approach seems much more useful to me. However, I'm not sure about common practices and thus fear that having state inside functions is not really recommended. Coming from a Java/C++ background, I'd expect a function to depend on its parameters, its class members (if it's a method), or some global state. Thus, a parameterless function that does not use global state would have to return exactly the same value every time it is called. This is not the case here. Once the returned object is modified (which means that the counter in prefix has been increased), the function returns an object which has a different state than it had when being returned the first time.
Is this assumption just caused by my restricted experience in python and do I have to change my mindset, i.e. don't think of functions but of something callable? Or is supplying functions with state an unintended misuse of lambda?
3) Using a Callable Class
To overcome my doubts on stateful functions I could use callable classes where the create_foo function of approach 2 would be replaced by this:
class BarCreator:
    def __init__(self, prefix):
        self.prefix = prefix
    def __call__(self):
        return create_bar(self.prefix)

def create_foo(name):
    return Foo(BarCreator(create_prefix(name)))
While this seems a usable solution for me, it is sooo much more verbose.
Summary
I'm not absolutely sure how to handle the situation. Although I prefer number 2, I still have my doubts. Furthermore, I still hope that someone comes up with a more elegant way.
Please comment, if there is anything you think is too vague or can be possibly misunderstood. I will improve the question as far as my abilities allow me to do :)
All examples should run under python2.7 and python3 - if you experience any problems, please report them in the comments and I'll try to fix my code.
If you want to inject a callable object but don't want it to have a complex setup -- if, as in your example, it's really just binding to a single input value -- you could try using functools.partial to pair the function with the value:
import functools

def factory_function(arg):
    # processing here
    return configured_object_based_on_arg

class Consumer(object):
    def __init__(self, injection):
        self._injected = injection
    def use_injected_value(self):
        print(self._injected())

injectable = functools.partial(factory_function, 'this is the configuration argument')
example = Consumer(injectable)
example.use_injected_value()  # prints the result of your factory function and argument
As an aside, if you're creating a dependency injection setup like your option 3, you probably want to put the knowledge about how to do the configuration into a factory class rather than doing it inline as you're doing here. That way you can swap out factories if you want to choose between strategies. It's not functionally very different (unless the creation is more complex than this example and involves persistent state), but it's more flexible down the road if the code looks like:
factory = FooBarFactory()
bar1 = factory.create_bar()
alt_factory = FooBlahFactory(extra_info)
bar2 = alt_factory.create_bar()
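A minimal sketch of what such factory classes might look like, reusing the Bar and Prefix classes from the question (the factory names and their configuration are assumptions, not code from the question):

class FooBarFactory(object):
    """Builds Bar objects with the standard prefix."""
    def create_bar(self):
        return Bar(Prefix("bar"))

class FooBlahFactory(object):
    """Alternate strategy: builds Bar objects configured with extra info."""
    def __init__(self, extra_info):
        self.extra_info = extra_info
    def create_bar(self):
        return Bar(Prefix(self.extra_info))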

python unittests with multiple setups?

I'm working on a module using sockets with hundreds of test cases. Which is nice. Except now I need to test all of the cases with and without socket.setdefaulttimeout(60)... Please don't tell me to cut and paste all the tests and set/remove a default timeout in setup/teardown.
Honestly, I get that having each test case laid out on its own is good practice, but I also don't like to repeat myself. This is really just testing in a different context, not different tests.
I see that unittest supports module-level setup/teardown fixtures, but it isn't obvious to me how to convert my one test module into testing itself twice with two different setups.
Any help would be much appreciated.
you could do something like this:
class TestCommon(unittest.TestCase):
    def method_one(self):
        # code for your first test
        pass
    def method_two(self):
        # code for your second test
        pass

class TestWithSetupA(TestCommon):
    def setUp(self):
        # setup for context A
        do_setup_a_stuff()
    def test_method_one(self):
        self.method_one()
    def test_method_two(self):
        self.method_two()

class TestWithSetupB(TestCommon):
    def setUp(self):
        # setup for context B
        do_setup_b_stuff()
    def test_method_one(self):
        self.method_one()
    def test_method_two(self):
        self.method_two()
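Applied to the socket question, the two setUp methods might simply differ in the default timeout (a sketch, assuming no other fixtures are involved):

import socket
import unittest

class TestWithTimeout(TestCommon):
    def setUp(self):
        socket.setdefaulttimeout(60)
    def tearDown(self):
        socket.setdefaulttimeout(None)  # restore the no-timeout default

class TestWithoutTimeout(TestCommon):
    def setUp(self):
        socket.setdefaulttimeout(None)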
The other answers on this question are valid in as much as they make it possible to actually perform the tests under multiple environments, but in playing around with the options I think I like a more self-contained approach. I'm using suites and results to organize and display the results of tests. In order to run one set of tests under two environments, rather than duplicating the tests, I took this approach - create a TestSuite subclass:
class FixtureSuite(unittest.TestSuite):
    def run(self, result, debug=False):
        socket.setdefaulttimeout(30)
        super().run(result, debug)
        socket.setdefaulttimeout(None)
...
suite1 = unittest.TestSuite(testCases)
suite2 = FixtureSuite(testCases)
fullSuite = unittest.TestSuite([suite1, suite2])
unittest.TextTestRunner(verbosity=2).run(fullSuite)
I would do it like this:
Make all of your tests derive from your own TestCase class, let's call it SynapticTestCase.
In SynapticTestCase.setUp(), examine an environment variable to determine whether to set the socket timeout or not.
Run your entire test suite twice, once with the environment variable set one way, then again with it set the other way.
Write a small shell script to invoke the test suite both ways.
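A minimal sketch of that approach (SYNAPTIC_SOCKET_TIMEOUT is a hypothetical environment variable name):

import os
import socket
import unittest

class SynapticTestCase(unittest.TestCase):
    def setUp(self):
        # Set the socket timeout only when the environment requests it.
        if os.environ.get('SYNAPTIC_SOCKET_TIMEOUT'):
            socket.setdefaulttimeout(60)
    def tearDown(self):
        socket.setdefaulttimeout(None)

The shell script then invokes the suite twice, e.g. once plain and once with SYNAPTIC_SOCKET_TIMEOUT=1 set.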
If your code does not call socket.setdefaulttimeout then you can run tests the following way:
import socket
import unittest

socket.setdefaulttimeout(60)
# Disable the function so nothing can change the default during the run.
old_setdefaulttimeout, socket.setdefaulttimeout = socket.setdefaulttimeout, None
unittest.main(exit=False)  # exit=False so the restore below still runs
socket.setdefaulttimeout = old_setdefaulttimeout
It is a hack, but it can work.
You could also inherit and rerun the original suite, but override the whole setUp or a part of it:
class TestOriginal(TestCommon):
    def setUp(self):
        # common setUp here
        self.current_setUp()
    def current_setUp(self):
        # your first setUp
        pass
    def test_one(self):
        # your test
        pass
    def test_two(self):
        # another test
        pass

class TestWithNewSetup(TestOriginal):
    def current_setUp(self):
        # overwrite the first current_setUp here
        pass

Solving AttributeErrors in nested attributes

I am writing a small mocking class to do some tests.
But this class needs to support the idea of having nested attributes.
This example should provide some insight to the problem:
class Foo(object):
    def __init__(self):
        self.x = True
From the above class, we can have:
f = Foo()
f.x
I know I can add attributes falling back to __getattr__ to avoid an AttributeError, but what if I need something like this to be valid:
f = Foo()
f.x
f.x.y
f.x.y.z()
I know what to return if the object gets called as f.x.y.z() but I just need to find a way to get to z() that makes sense.
You can "mock anything" by returning, on each attribute access, another instance of the "mock anything" class (which must also be callable, if you want to have the .z() part work;-).
E.g.:
class MockAny(object):
    # mock special methods by making them noops
    def __init__(self, *a, **k): pass
    # or returning fixed values
    def __len__(self): return 0
    # mock attributes:
    def __getattr__(self, name):
        return MockAny()
    # make it callable, if you need to
    def __call__(self, *a, **k):
        return MockAny()
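For instance, with the class above, the nested access from the question works without raising AttributeError:

f = MockAny()
f.x        # a MockAny instance
f.x.y      # another MockAny instance
f.x.y.z()  # calling it returns yet another MockAny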
The alternative, of course, is to know what it is that you're mocking (by introspection, or by some form of "declarative description", or simply by coding mock for specific things;-) rather than take the catch-all approach; but, the latter is also feasible, as you see in the above (partial) example.
Personally, I'd recommend using an existing mocking framework such as pymox rather than reinventing this particular wheel (also, the source code for such frameworks can be more instructive than a reasonably terse response on SO, like this one;-).
If you are calling something like f.x.y.z() in your unit tests, the chances are you're trying to test too much. Each of these nested attributes should be covered by the unit tests for their particular classes.
Take another look at your Foo class and see if you can test its own behaviour in your unit tests.
Perhaps not the answer you were looking for, but hopefully one that will help in the long run.

In Python, what are some examples of when decorators greatly simplify a task?

Trying to find examples of when decorators might be really beneficial, and when not so much.
Sample code is appreciated.
Decorators are simple syntax for a specific way to call higher-order functions, so if you're focusing just on the syntax it's unlikely to make a great difference. IOW, wherever you can say
@mydecorator
def f(...):
    # body of f
you could identically say
def f(...):
    # body of f
f = mydecorator(f)
The decorator syntax's advantage is that it's a wee bit more concise (no repeating f three times;-) and that it comes before the def (or class, for class decorators) statement, thus immediately alerting the reader of the code. It's important, but it just can't be great!
What can be great is the semantics of decorators (and, identically, of higher-order function calls that match this pattern, if there were no decorators;-). For example,
@classmethod
def f(cls, ...):
lets you make class methods (very useful esp. for alternate constructors), and
@property
def foo(self, ...):
lets you make read-only properties (with other related decorators in 2.6 for non-read-only properties;-), which are extremely useful even when not used (since they save you from writing lot of dumb "boilerplate" accessors for what are essentially attributes... just because access to the attribute might require triggering some computation in the future!-).
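For instance, a common alternate-constructor sketch with @classmethod (the Point class is purely illustrative):

class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y

    @classmethod
    def from_tuple(cls, pair):
        # alternate constructor: build a Point from an (x, y) tuple
        return cls(pair[0], pair[1])

p = Point.from_tuple((3, 4))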
Beyond the ones built into Python, your own decorators can be just as important -- depending on what your application is, of course. In general, they make it easy to refactor some part of the code (which would otherwise have to be duplicated in many functions and classes [[or you might have to resort to metaclasses for the class case, but those are richer and more complicated to use correctly]]) into the decorator. Therefore, they help you avoid repetitious, boilerplatey code -- and since DRY, "Don't Repeat Yourself", is a core principle of software development, any help you can get towards it should be heartily welcome.
The easiest way to understand the usefulness of decorators is to see some examples. Here is one, for instance:
Suppose you are studying some code and wish to understand when and how a function is called. You can use a decorator to alter the function so it prints some debugging information each time the function is called:
import functools

def trace(f):
    '''This decorator shows how the function was called'''
    @functools.wraps(f)
    def wrapper(*arg, **kw):
        arg_str = ','.join(['%r' % a for a in arg] +
                           ['%s=%s' % (key, kw[key]) for key in kw])
        print("%s(%s)" % (f.__name__, arg_str))
        return f(*arg, **kw)
    return wrapper
@trace
def foo(*args):
    pass

for n in range(3):
    foo(n)
prints:
# foo(0)
# foo(1)
# foo(2)
If you only wished to trace the one function foo, you could of course add the code more simply to the definition of foo:
def foo(*args):
    print('foo({0})'.format(args))
but if you had many functions that you wished to trace, or did not want to mess with the original code, then the decorator becomes useful.
For other examples of useful decorators, see the decorator library.
The usual example is using the @property decorator to make a read-only property:
@property
def count(self):
    return self._events
instead of:
def _get_count(self):
    return self._events
count = property(_get_count)
Decorators are for design choices where you are merging two concepts, like "Logging" and "Inventory Management", or "is registered user" and "View Latest Messages".
One of those concepts is wrapping the other, controlling how it is called. That concept is the decorator.
The second concept is permanently joined to the first, so much so that it is OK to lose the ability to call the second concept directly. For example, losing the ability to call "View Latest Messages" without also calling "Is registered user".
When the design choice is correct, the decorator syntax (or syntactic sugar for decorators) reads cleanly and helps eliminate errors born of misunderstanding.
The usual decorator concepts include:
Logging. This might be joined with a transaction processor to log each successful or unsuccessful transaction.
Requiring security. This might be coupled with changing a price in an inventory.
Caching (or memoizing). This might be coupled with Net Present Value computations or any expensive, static, read-only operation (see the sketch after this list).
Language fix-ups like "@classmethod" or "convert error return values to exceptions".
Registration with frameworks, such as "this function gets called when that button is pressed".
State machine processing, where the decorator decides which state to process next.
etc. etc.
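As a minimal sketch of the caching idea (a deliberately simple memoizer; in modern Python you would likely reach for functools.lru_cache instead):

import functools

def memoize(f):
    '''Cache results of f, keyed by its positional arguments.'''
    cache = {}
    @functools.wraps(f)
    def wrapper(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return wrapper

@memoize
def net_present_value(rate, *cashflows):
    # expensive, static, read-only computation (illustrative)
    return sum(cf / (1 + rate) ** i for i, cf in enumerate(cashflows))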
You might look at http://wiki.python.org/moin/PythonDecoratorLibrary (a dated wiki page) or the dectools library (http://pypi.python.org/pypi/dectools) for more documentation and examples.
In the AppEngine API, there's the nice @login_required decorator, which can clean up code quite a bit:
class MyPage(webapp.RequestHandler):
    @login_required
    def get(self):
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.out.write("Hello, world!")
As opposed to:
class MyPage(webapp.RequestHandler):
    def get(self):
        user = users.get_current_user()
        if not user:
            return self.redirect(users.create_login_url(self.request.uri))
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.out.write("Hello, world!")
The other ones I find myself using most are @classmethod for class methods, @staticmethod for static methods, and (as Mike DeSimone said) @property for read-only properties. It just reads nicer to have the decorator before the function rather than after it, as in:
class Bar(object):
    @classmethod
    def foo(cls):
        return id(cls)
instead of:
class Bar(object):
    def foo(cls):
        return id(cls)
    foo = classmethod(foo)
It just saves boilerplate code.
