I need to patch three methods (_send_reply, _reset_watchdog and _handle_set_watchdog) with mock methods before testing a call to a fourth method (_handle_command) in a unit test of mine.
From looking at the documentation for the mock package, there are a few ways I could go about it:
With patch.multiple as a decorator

@patch.multiple(MBG120Simulator,
                _send_reply=DEFAULT,
                _reset_watchdog=DEFAULT,
                _handle_set_watchdog=DEFAULT,
                autospec=True)
def test_handle_command_too_short_v1(self,
                                     _send_reply,
                                     _reset_watchdog,
                                     _handle_set_watchdog):
    simulator = MBG120Simulator()
    simulator._handle_command('XA99')
    _send_reply.assert_called_once_with(simulator, 'X?')
    self.assertFalse(_reset_watchdog.called)
    self.assertFalse(_handle_set_watchdog.called)
    simulator.stop()
With patch.multiple as a context manager

def test_handle_command_too_short_v2(self):
    simulator = MBG120Simulator()
    with patch.multiple(simulator,
                        _send_reply=DEFAULT,
                        _reset_watchdog=DEFAULT,
                        _handle_set_watchdog=DEFAULT,
                        autospec=True) as mocks:
        simulator._handle_command('XA99')
        mocks['_send_reply'].assert_called_once_with('X?')
        self.assertFalse(mocks['_reset_watchdog'].called)
        self.assertFalse(mocks['_handle_set_watchdog'].called)
    simulator.stop()
With multiple patch.object decorators

@patch.object(MBG120Simulator, '_send_reply', autospec=True)
@patch.object(MBG120Simulator, '_reset_watchdog', autospec=True)
@patch.object(MBG120Simulator, '_handle_set_watchdog', autospec=True)
def test_handle_command_too_short_v3(self,
                                     _handle_set_watchdog_mock,
                                     _reset_watchdog_mock,
                                     _send_reply_mock):
    simulator = MBG120Simulator()
    simulator._handle_command('XA99')
    _send_reply_mock.assert_called_once_with(simulator, 'X?')
    self.assertFalse(_reset_watchdog_mock.called)
    self.assertFalse(_handle_set_watchdog_mock.called)
    simulator.stop()
Manually replacing methods using create_autospec

def test_handle_command_too_short_v4(self):
    simulator = MBG120Simulator()
    # Mock some methods.
    simulator._send_reply = create_autospec(simulator._send_reply)
    simulator._reset_watchdog = create_autospec(simulator._reset_watchdog)
    simulator._handle_set_watchdog = create_autospec(simulator._handle_set_watchdog)
    # Exercise.
    simulator._handle_command('XA99')
    # Check.
    simulator._send_reply.assert_called_once_with('X?')
    self.assertFalse(simulator._reset_watchdog.called)
    self.assertFalse(simulator._handle_set_watchdog.called)
Personally I think the last one is the clearest to read, and it will not result in horribly long lines if the number of mocked methods grows. It also avoids having to pass in simulator as the first (self) argument to assert_called_once_with.
But I don't find any of them particularly nice. Especially the multiple patch.object approach, which requires careful matching of the parameter order to the nested decorators.
Is there some approach I've missed, or a way to make this more readable? What do you do when you need to patch multiple methods on the instance/class under test?
No, you haven't missed anything really different from what you proposed.
Regarding readability, my taste is for the decorator way because it keeps the mocking stuff out of the test body... but that is just taste.
You are right: if you patch the static instance of the method with autospec=True, you must pass self to the assert_called_* family of check methods. But your case is simple: you know exactly what object you need to patch, and you don't need any wider context for your patch than the test method.
You can just patch your object and use it for all your tests. Often in tests you cannot get hold of the instance to patch before making your call, and in those cases create_autospec cannot be used: you can only patch the static instance of the methods instead.
If you are bothered by passing the instance to the assert_called_* methods, consider using ANY to break the dependency. Finally, I have written hundreds of tests like that and I have never had a problem with the argument order.
My standard approach to your test is:

from unittest.mock import patch

@patch('mbgmodule.MBG120Simulator._send_reply', autospec=True)
@patch('mbgmodule.MBG120Simulator._reset_watchdog', autospec=True)
@patch('mbgmodule.MBG120Simulator._handle_set_watchdog', autospec=True)
def test_handle_command_too_short(self, mock_handle_set_watchdog,
                                  mock_reset_watchdog,
                                  mock_send_reply):
    simulator = MBG120Simulator()
    simulator._handle_command('XA99')
    # You can use ANY instead of simulator if you don't know it
    mock_send_reply.assert_called_once_with(simulator, 'X?')
    self.assertFalse(mock_reset_watchdog.called)
    self.assertFalse(mock_handle_set_watchdog.called)
    simulator.stop()
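For instance, with ANY the assertion no longer needs the instance at all (a minimal sketch):

from unittest.mock import ANY

mock_send_reply.assert_called_once_with(ANY, 'X?')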
Patching happens outside the test method's code
Every mock name starts with a mock_ prefix
I prefer a plain patch call with an absolute path: it is clear and neat what you are doing
Finally: maybe creating the simulator and stopping it are setUp() and tearDown() responsibilities, so the tests only need to patch some methods and do the checks.
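A minimal sketch of that split (the test class name is made up); addCleanup registers the stop right after creation, so it runs even if a test fails:

class MBG120SimulatorTest(TestCase):
    def setUp(self):
        self.simulator = MBG120Simulator()
        self.addCleanup(self.simulator.stop)  # plays the tearDown role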
I hope this answer is useful, but the question doesn't have a single valid answer, because readability is not an absolute concept and depends on the reader. Moreover, even though the title speaks about the general case, the question's examples are about the specific class of problem where you patch methods of the object under test.
[EDIT]
I thought a while about this question and found what bothers me: you are trying to test and sense on private methods. When this happens, the first thing you should ask is why? There is a good chance the answer is that these methods should be public methods of private collaborators (those are not my words).
In that new scenario you would sense on the private collaborators, and you cannot change just your own object: what you need to do is patch the static instances of some other classes.
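For instance (a sketch, assuming a hypothetical WatchdogTimer collaborator that the simulator creates internally):

@patch('mbgmodule.WatchdogTimer', autospec=True)
def test_handle_command_too_short(self, MockWatchdogTimer):
    simulator = MBG120Simulator()  # builds its own (hypothetical) WatchdogTimer
    simulator._handle_command('XA99')
    # sense on the collaborator's public method instead of a private method
    MockWatchdogTimer.return_value.reset.assert_not_called()
    simulator.stop()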
So lately in Python I started using the unittest library. However, one thing that I cannot understand (and I've tried looking this up for hours and hours...) is why you would use the patch decorator over explicit MagicMock objects.
To be more specific, below is my code that I am attempting to test. Some quick notes:
The code is attempting to test a simple menu class for some restaurant.
In the setUp method, I am preparing the instantiated Menu object by storing some instantiated Food objects (which in this case are replaced by MagicMock objects).
In the testFindItem method, I am attempting to find and return a Food object from the menu by searching for its name. Then I compare the found object with the Food object (a MagicMock object in this case) it is supposed to be.
Now that being said, observe how in the setUp method I replaced self.bread and self.cardboard with MagicMock objects instead of Food objects. The code works, and that is great, but an alternative would be to use a patch decorator that overrides the Food class.
TL;DR: Why would that (i.e. using patch) be better or worse in this case? Or rather, as mentioned before, why use the patch decorator over explicit MagicMock objects?
Oh, on a side note, the closest answer that I have found is another post which discusses the difference between patch and mock, but not why you would use one over the other: Mocking a class: Mock() or patch()?
class MenuTest(unittest.TestCase):
    """
    Unit test class for Menu class.
    """
    def setUp(self):
        """
        Prepares a menu to be tested against using mock objects.
        """
        self.bread = MagicMock()
        self.cardboard = MagicMock()

        self.bread.name = "bread"
        self.cardboard.name = "cardboard"

        foodItems = [self.cardboard, self.bread]
        self.menu = Menu(foodItems)

    def testFindItem(self):
        """
        Tests whether a specified food item can be found on the menu.
        """
        # Items on the menu
        self.assertEqual(self.menu.findItem("bread"), self.bread)
        self.assertEqual(self.menu.findItem("cardboard"), self.cardboard)

        # Items not on the menu
        with self.assertRaises(NameError):
            self.menu.findItem("salvation")
This isn't the use case for patch. The reason you use that is when you want to replace an object that is defined elsewhere. Here, you're explicitly instantiating the Menu and passing in the things you want to call assertions on, so patch is not useful; but there are plenty of times when the class under test creates its own objects, or gets them from other parts of the code, and that's when you'd use patch.
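A sketch of such a case, assuming a hypothetical Menu.from_config constructor that creates Food objects itself (the module path is made up too); with no way to inject the mocks, patching the name is the only handle you have:

@patch('menumodule.Food', autospec=True)
def testLoadMenu(self, MockFood):
    menu = Menu.from_config("menu.cfg")  # Menu instantiates Food internally
    self.assertTrue(MockFood.called)     # the mock class was used in place of Food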
When mocking classes or methods while writing unit tests in Python, why do I need to use the @patch decorator? I could just replace the method with a Mock object without any patch annotation.
Examples:
class TestFoobar(unittest.TestCase):
    def setUp(self):
        self.foobar = FooBar()

    # 1) With patch decorator:
    @patch.object(FooBar, "_get_bar")
    @patch.object(FooBar, "_get_foo")
    def test_get_foobar_with_patch(self, mock_get_foo, mock_get_bar):
        mock_get_bar.return_value = "bar1"
        mock_get_foo.return_value = "foo1"

        actual = self.foobar.get_foobar()

        self.assertEqual("foo1bar1", actual)

    # 2) Just replacing the real methods with Mock with proper return_value:
    def test_get_foobar_with_replacement(self):
        self.foobar._get_foo = Mock(return_value="foo2")
        self.foobar._get_bar = Mock(return_value="bar2")

        actual = self.foobar.get_foobar()

        self.assertEqual("foo2bar2", actual)
Could someone produce an example where the patch decorator is good and replacing is bad?
We have always used the patch decorator with our team, but after reading this comment on a post, I got the idea that maybe we could write nicer-looking code without the need for patch decorators.
I understand that patching is temporary, so maybe in some cases it is dangerous not to use the patch decorator and to replace methods with mocks instead? Could replacing objects in one test method affect the result of the next test method?
I tried to prove this, but came up empty: both tests pass in the following code:
def test_get_foobar_with_replacement(self):
    self.foobar._get_foo = Mock(return_value="foo2")
    self.foobar._get_bar = Mock(return_value="bar2")

    actual = self.foobar.get_foobar()

    self.assertIsInstance(self.foobar._get_bar, Mock)
    self.assertIsInstance(self.foobar._get_foo, Mock)
    self.assertEqual("foo2bar2", actual)

def test_get_foobar_with_real_methods(self):
    actual = self.foobar.get_foobar()

    self.assertNotIsInstance(self.foobar._get_bar, Mock)
    self.assertNotIsInstance(self.foobar._get_foo, Mock)
    self.assertIsInstance(self.foobar._get_bar, types.MethodType)
    self.assertIsInstance(self.foobar._get_foo, types.MethodType)
    self.assertEqual("foobar", actual)
Full source code (Python 3.3): dropbox.com/s/t8bewsdaalzrgke/test_foobar.py?dl=0
patch.object will restore the item you patched to its original state after the test method returns. If you monkey-patch the object yourself, you need to restore the original value if that object will be used in another test.
In your two examples, you are actually patching two different things. Your call to patch.object patches the class FooBar, while your monkey patch patches a specific instance of FooBar.
Restoring the original object isn't important if the object will be created from scratch each time. (You don't show it, but I assume self.foobar is being created in a setUp method, so that even though you replace its _get_foo method, you aren't reusing that specific object in multiple tests.)
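To make the difference concrete, here is a sketch using the question's names; the first test leaks its patch into later tests, the second cleans up after itself:

def test_leaky(self):
    FooBar._get_foo = Mock(return_value="foo2")  # patches the class, never restored
    # every FooBar instance created after this still sees the mock

def test_clean(self):
    with patch.object(FooBar, "_get_foo", return_value="foo2"):
        self.assertEqual("foo2", FooBar()._get_foo())
    # the real _get_foo is back as soon as the block exits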
I have a Python TestCase class where all test methods, except one, need to patch an object the same way. The other method needs some other behavior from the same object. I'm using mock, so I did:
@mock.patch('method_to_patch', mock.Mock(return_value=1))
class Tests(TestCase):
    @mock.patch('method_to_patch', mock.Mock(return_value=2))
    def test_override(self):
        (....)
But that's not working. When test_override is run, it still calls the patched behavior from the class decorator.
After a lot of debugging, I found out that during the TestSuite build, the @patch around test_override is applied before the one around Tests, and since mock applies the patches in order, the class decorator is overriding the method decorator.
Is this order correct? I was expecting the opposite, and I'm not really sure how to override the patching... Maybe with a with statement?
Well, it turns out that a good night's sleep and a cold shower made me rethink the whole issue.
I'm still very new to the concept of mocking, so it still hasn't sunk in quite right.
The thing is, there's no need to override the patch to a mocked object. It's a mocked object and that means I can make it do anything. So my first try was:
@mock.patch('method_to_patch', mock.Mock(return_value=1))
class Tests(TestCase):
    def test_override(self):
        method_to_patch.return_value = 2
        (....)
That worked, but had the side effect of changing the return value for all following tests. So then I tried:
@mock.patch('method_to_patch', mock.Mock(return_value=1))
class Tests(TestCase):
    def test_override(self):
        method_to_patch.return_value = 2
        (....)
        method_to_patch.return_value = 1
And it worked like a charm. But it seemed like too much code. So then I went down the road of context management, like this:
@mock.patch('method_to_patch', mock.Mock(return_value=1))
class Tests(TestCase):
    def test_override(self):
        with mock.patch('method_to_patch', mock.Mock(return_value=2)):
            (....)
I think it seems clearer and more concise.
About the order in which the patch decorators were being applied: it's actually the correct order. Just as stacked decorators are applied from the bottom up, a method decorator is supposed to be called before the class decorator. I guess it makes sense; I was just expecting the opposite behavior.
Anyway, I hope this helps some poor newbie soul like mine in the future.
I have a class like the following:
class A:
    def __init__(self, arg1, arg2, arg3):
        self.a = arg1
        self.b = arg2
        self.c = arg3
        # ...
        self.x = do_something(arg1, arg2, arg3)
        self.y = do_something(arg1, arg2, arg3)
        self.m = self.func1(self.x)
        self.n = self.func2(self.y)
        # ...

    def func1(self, arg):
        # do something here

    def func2(self, arg):
        # do something here
As you can see, initializing the class requires feeding in arg1, arg2, and arg3. However, testing func1 and func2 does not directly require such inputs; rather, each is simply input/output logic.
In my test I can of course instantiate and initialize a test object in the regular way, and then test func1 and func2 individually. But the initialization requires the inputs arg1, arg2, and arg3, which are really not relevant to testing func1 and func2.
Therefore, I want to test func1 and func2 individually, without first calling __init__. So I have the following two questions:
What's the best way of designing such tests? (preferably in py.test)
I want to test func1 and func2 without invoking __init__. I read from here that A.__new__() can skip invoking __init__ while still instantiating the class. Is there a better way to achieve what I need without doing this?
NOTE:
Two questions have been raised regarding what I'm asking here:
Is it necessary to test individual member functions?
(for testing purposes) Is it necessary to instantiate a class without initializing the object with __init__?
For question 1, I did a quick Google search and found some relevant studies and discussions on this:
Unit Testing Non Public Member Functions
(PDF) Incremental Testing of Object-Oriented Class Structures.
"We initially test base classes having no parents by designing a test suite that tests each member function individually and also tests the interactions among member functions."
For question 2, I'm not sure. But I think it is necessary: as shown in the sample code, func1 and func2 are called in __init__. I feel more comfortable testing them on a class A object on which __init__ has not been called (and which therefore has no previous calls to func1 and func2).
Of course, one could just instantiate a class A object by regular means (testobj = A(arg1, arg2, arg3)) and then perform individual tests on func1 and func2. But is that good? :) I'm just discussing what the best way to test such a scenario is, and what the pros and cons are.
On the other hand, one might also argue that from a design perspective one should NOT put calls to func1 and func2 in __init__ in the first place. Is this a reasonable design option?
It is not usually useful or even possible to test methods of a class without instantiating the class (including running __init__). Typically your class methods will refer to attributes of the class (e.g., self.a). If you don't run __init__, those attributes won't exist, so your methods won't work. (If your methods don't rely on the attributes of their instance, then why are they methods and not just standalone functions?) In your example, it looks like func1 and func2 are part of the initialization process, so they should be tested as part of that.
In theory it is possible to "quasi-instantiate" the class by using __new__ and then adding just the members that you need, e.g.:
obj = A.__new__(A)
obj.a = "test value"
obj.func1(obj.a)
However, this is probably not a very good way to do tests. For one thing, it results in you duplicating code that presumably already exists in the initialization code, which means your tests are more likely to get out of sync with the real code. For another, you may have to duplicate many initialization calls this way, since you'll have to manually re-do what would otherwise be done by any base-class __init__ methods called from your class.
As for how to design tests, you can take a look at the unittest module and/or the nose module. That gives you the basics of how to set up tests. What to actually put in the tests obviously depends on what your code is supposed to do.
Edit: The answer to your question 1 is "definitely yes, but not necessarily every single one". The answer to your question 2 is "probably not". Even at the first link you give, there is debate about whether methods that are not part of the class's public API should be tested at all. If your func1 and func2 are purely internal methods that are just part of the initialization, then there is probably no need to test them separately from the initialization.
This gets to your last question about whether it's appropriate to call func1 and func2 from within __init__. As I've stated repeatedly in my comments, it depends on what these functions do. If func1 and func2 perform part of the initialization (i.e., do some "setting-up" work for the instance), then it's perfectly reasonable to call them from __init__; but in that case they should be tested as part of the initialization process, and there is no need to test them independently. If func1 and func2 are not part of the initialization, then yes, you should test them independently; but in that case, why are they in __init__?
Methods that form an integral part of instantiating your class should be tested as part of testing the instantiation of your class. Methods that do not form an integral part of instantiating your class should not be called from within __init__.
If func1 and func2 are "simply an input/output logic" and do not require access to the instance, then they don't need to be methods of the class at all; they can just be standalone functions. If you want to keep them in the class you can mark them as staticmethods and then call them on the class directly without instantiating it. Here's an example:
>>> class Foo(object):
...     def __init__(self, num):
...         self.numSquared = self.square(num)
...
...     @staticmethod
...     def square(num):
...         return num**2
...
>>> Foo.square(2)  # you can test the square "method" this way without instantiating Foo
4
>>> Foo(8).numSquared
64
It is just about imaginable that you might have some monster class which requires a hugely complex initialization process. In such a case, you might find it necessary to test parts of that process individually. However, such a giant init sequence would itself be a warning sign of an unwieldy design.
If you have a choice, I'd go for declaring your initialization helper functions as staticmethods and just calling them from tests.
If you have different input/output values to assert on, you could look into some parametrizing examples with py.test.
If your class instantiation is somewhat heavy, you might want to look into dependency injection and cache the instance like this:
# content of test_module.py
def pytest_funcarg__a(request):
    return request.cached_setup(lambda: A(...), scope="class")

class TestA:
    def test_basic(self, a):
        assert ...  # check properties/non-init functions
This would re-use the same "a" instance across each test class. Other possible scopes are "session", "function" or "module". You can also define a command line option to set the scope, so that for quick development you use more caching and for Continuous Integration you use more isolated resource setup, without needing to change the test source code.
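(Note that the pytest_funcarg__ hook and cached_setup belong to an older pytest API and were removed in later releases; in current pytest the same caching is written as a scoped fixture. A sketch:)

import pytest

@pytest.fixture(scope="class")
def a():
    return A(...)  # one instance shared by all tests in the class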
Personally, in the last 12 years I went from fine-grained unit testing to more functional/integration types of testing, because it eases refactoring and seemed to make better use of my time overall. It's of course crucial to have good support and reports when failures occur, like dropping to PDB, concise tracebacks, etc. And for some intricate algorithms I still write very fine-grained unit tests, but then I usually separate the algorithm out into a very independently testable thing.
HTH, holger
I agree with the previous comments that it is generally better to avoid this problem by reducing the amount of work done at instantiation, e.g. by moving the func1 etc. calls into a configure(self) method which should be called after instantiation.
If you have strong reasons for keeping calls to self.func1 etc in __init__, there is an approach in pytest which might help.
(1) Put this in the module:
_called_from_test = False
(2) Put the following in conftest.py
import your_module

def pytest_configure(config):
    your_module._called_from_test = True
with the appropriate name for your_module.
(3) Insert an if statement to end the execution of __init__ early when you are running tests:

if _called_from_test:
    pass
else:
    self.func1(....)
You can then step through the individual function calls, testing them as you go.
The same could be achieved by making _called_from_test an optional argument of __init__.
More context is given in the Detect if running from within a pytest run section of pytest documentation.
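The optional-argument variant mentioned above might look like this (a sketch based on the question's class A):

def __init__(self, arg1, arg2, arg3, _called_from_test=False):
    self.a = arg1
    self.b = arg2
    self.c = arg3
    self.x = do_something(arg1, arg2, arg3)
    self.y = do_something(arg1, arg2, arg3)
    if not _called_from_test:
        self.m = self.func1(self.x)
        self.n = self.func2(self.y)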
I have a method which calls a classmethod of another class:
def get_interface_params_by_mac(self, host, mac_unified):
    lines = RemoteCommand.remote_command(host, cls.IFCONFIG)
    ...

class RemoteCommand(object):
    @classmethod
    def remote_command(cls, host, cmd, sh=None):
        ...
I'm going to write a unit test for the get_interface_params_by_mac method, in which I'd like to change the implementation of remote_command (I think that's called a stub; correct me if I'm wrong).
What's the right way to do this in Python?
Your unit-test code (maybe in its setUp method, if this is needed across several test methods and thus qualifies as a fixture) should do:
def fake_command(cls, host, cmd, sh=None):
    pass  # whatever you want in here

self.save_remote_command = somemodule.RemoteCommand.remote_command
somemodule.RemoteCommand.remote_command = classmethod(fake_command)
and then undo this monkey-patching (e.g. in the tearDown method if the patching is done in setUp) by
somemodule.RemoteCommand.remote_command = self.save_remote_command
It's not always necessary to put things back after a test, but it's good general practice.
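Note that unittest.mock's patch.object automates exactly this save-and-restore; a sketch of the equivalent setUp (assuming the class lives in somemodule):

from unittest.mock import patch

def setUp(self):
    patcher = patch.object(somemodule.RemoteCommand, 'remote_command',
                           return_value=[])
    self.mock_remote_command = patcher.start()
    self.addCleanup(patcher.stop)  # restores the real classmethod automatically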
A more elegant approach would be to design your code for testability via the Dependency Injection (DI) pattern:
def __init__(self, ...):
    ...
    self.remote_command = RemoteCommand.remote_command
    ...

def set_remote_command_function(self, thefunction):
    self.remote_command = thefunction

def get_interface_params_by_mac(self, host, mac_unified):
    lines = self.remote_command(host, cls.IFCONFIG)
DI buys you a lot of flexibility (testability-wise, but also in many other contexts) at very little cost, which makes it one of my favorite design patterns (I'd much rather avoid monkey-patching wherever I possibly can). Of course, if you design your code under test to use DI, all you need to do in your test is prepare the instance appropriately by calling its set_remote_command_function with whatever fake function you want to use!
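With the DI version in place, a test needs no patching at all; a sketch (the class name and the fake output are made up):

def test_get_interface_params_by_mac(self):
    checker = InterfaceChecker()  # hypothetical owner of get_interface_params_by_mac
    fake_lines = ['eth0 Link encap:Ethernet HWaddr 00:11:22:33:44:55']
    checker.set_remote_command_function(lambda host, cmd, sh=None: fake_lines)
    params = checker.get_interface_params_by_mac('somehost', '00:11:22:33:44:55')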