Mock has a helpful assert_called_with() method. However, as far as I understand it, this only checks the last call to a method.
If I have code that calls the mocked method 3 times successively, each time with different parameters, how can I assert these 3 calls with their specific parameters?
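For concreteness, here is a minimal sketch of that situation (the mock and the method name are made up for illustration):

from unittest.mock import Mock

mock_obj = Mock()

# The code under test calls the mocked method three times with different arguments.
mock_obj.some_method(1)
mock_obj.some_method(2)
mock_obj.some_method(3)

# assert_called_with only checks the most recent call, so this passes ...
mock_obj.some_method.assert_called_with(3)
# ... while this would raise an AssertionError even though the call happened:
# mock_obj.some_method.assert_called_with(1)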
assert_has_calls is another approach to this problem.
From the docs:
assert_has_calls(calls, any_order=False)

assert the mock has been called with the specified calls. The mock_calls list is checked for the calls.

If any_order is False (the default) then the calls must be sequential. There can be extra calls before or after the specified calls.

If any_order is True then the calls can be in any order, but they must all appear in mock_calls.
Example:
>>> from unittest.mock import call, Mock
>>> mock = Mock(return_value=None)
>>> mock(1)
>>> mock(2)
>>> mock(3)
>>> mock(4)
>>> calls = [call(2), call(3)]
>>> mock.assert_has_calls(calls)
>>> calls = [call(4), call(2), call(3)]
>>> mock.assert_has_calls(calls, any_order=True)
Source: https://docs.python.org/3/library/unittest.mock.html#unittest.mock.Mock.assert_has_calls
Usually, I don't care about the order of the calls, only that they happened. In that case, I combine assert_any_call with an assertion about call_count.
>>> import mock
>>> m = mock.Mock()
>>> m(1)
<Mock name='mock()' id='37578160'>
>>> m(2)
<Mock name='mock()' id='37578160'>
>>> m(3)
<Mock name='mock()' id='37578160'>
>>> m.assert_any_call(1)
>>> m.assert_any_call(2)
>>> m.assert_any_call(3)
>>> assert 3 == m.call_count
>>> m.assert_any_call(4)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "[python path]\lib\site-packages\mock.py", line 891, in assert_any_call
'%s call not found' % expected_string
AssertionError: mock(4) call not found
I find doing it this way to be easier to read and understand than a large list of calls passed into a single method.
If you do care about order or you expect multiple identical calls, assert_has_calls might be more appropriate.
Edit
Since I posted this answer, I've rethought my approach to testing in general. I think it's worth mentioning that if your test is getting this complicated, you may be testing inappropriately or have a design problem. Mocks are designed for testing inter-object communication in an object-oriented design. If your design is not object-oriented (as in more procedural or functional), the mock may be totally inappropriate. You may also have too much going on inside the method, or you might be testing internal details that are best left unmocked. I developed the strategy mentioned in this answer when my code was not very object-oriented, and I believe I was also testing internal details that would have been best left unmocked.
You can use the Mock.call_args_list attribute to compare parameters to previous method calls. That, in conjunction with the Mock.call_count attribute, should give you full control.
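A minimal sketch of that approach (the mock and its arguments are illustrative):

from unittest.mock import Mock, call

m = Mock()
m(1)
m(2, key="value")
m(3)

# call_args_list records every call, in order, as call objects.
assert m.call_args_list == [call(1), call(2, key="value"), call(3)]
# call_count guards against unexpected extra calls.
assert m.call_count == 3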
I always have to look this one up time and time again, so here is my answer.
Asserting multiple method calls on different objects of the same class
Suppose we have a heavy duty class (which we want to mock):
In [1]: class HeavyDuty(object):
...: def __init__(self):
...: import time
...: time.sleep(2) # <- Spends a lot of time here
...:
...: def do_work(self, arg1, arg2):
...: print("Called with %r and %r" % (arg1, arg2))
...:
Here is some code that uses two instances of the HeavyDuty class:
In [2]: def heavy_work():
...: hd1 = HeavyDuty()
...: hd1.do_work(13, 17)
...: hd2 = HeavyDuty()
...: hd2.do_work(23, 29)
...:
Now, here is a test case for the heavy_work function:
In [3]: from unittest.mock import patch, call
...: def test_heavy_work():
...: expected_calls = [call.do_work(13, 17), call.do_work(23, 29)]
...:
...: with patch('__main__.HeavyDuty') as MockHeavyDuty:
...: heavy_work()
...: MockHeavyDuty.return_value.assert_has_calls(expected_calls)
...:
We are mocking the HeavyDuty class with MockHeavyDuty. To assert method calls coming from every HeavyDuty instance we have to refer to MockHeavyDuty.return_value.assert_has_calls, instead of MockHeavyDuty.assert_has_calls. In addition, in the list of expected_calls we have to specify which method name we are interested in asserting calls for. So our list is made of calls to call.do_work, as opposed to simply call.
Exercising the test case shows us it is successful:
In [4]: print(test_heavy_work())
None
If we modify the heavy_work function, the test fails and produces a helpful error message:
In [5]: def heavy_work():
...: hd1 = HeavyDuty()
...: hd1.do_work(113, 117) # <- call args are different
...: hd2 = HeavyDuty()
...: hd2.do_work(123, 129) # <- call args are different
...:
In [6]: print(test_heavy_work())
---------------------------------------------------------------------------
(traceback omitted for clarity)
AssertionError: Calls not found.
Expected: [call.do_work(13, 17), call.do_work(23, 29)]
Actual: [call.do_work(113, 117), call.do_work(123, 129)]
Asserting multiple calls to a function
To contrast with the above, here is an example that shows how to mock multiple calls to a function:
In [7]: def work_function(arg1, arg2):
...: print("Called with args %r and %r" % (arg1, arg2))
In [8]: from unittest.mock import patch, call
...: def test_work_function():
...: expected_calls = [call(13, 17), call(23, 29)]
...: with patch('__main__.work_function') as mock_work_function:
...: work_function(13, 17)
...: work_function(23, 29)
...: mock_work_function.assert_has_calls(expected_calls)
...:
In [9]: print(test_work_function())
None
There are two main differences. The first one is that when mocking a function we set up our expected calls using call, instead of using call.some_method. The second one is that we call assert_has_calls on mock_work_function, instead of on mock_work_function.return_value.
Related
I am confused by the following difference. Say I have this class with some use case:
class C:
def f(self, a, b, c=None):
print(f"Real f called with {a=}, {b=} and {c=}.")
my_c = C()
my_c.f(1, 2, c=3) # Output: Real f called with a=1, b=2 and c=3.
I can monkey-patch it for testing purposes like this:
class C:
def f(self, a, b, c=None):
print(f"Real f called with {a=}, {b=} and {c=}.")
def f_monkey_patched(self, *args, **kwargs):
print(f"Patched f called with {args=} and {kwargs=}.")
C.f = f_monkey_patched
my_c = C()
my_c.f(1, 2, c=3) # Output: Patched f called with args=(1, 2) and kwargs={'c': 3}.
So far so good. But I would like to patch only a single instance, and then the patch somehow consumes the first argument:
class C:
def f(self, a, b, c=None):
print(f"Real f called with {a=}, {b=} and {c=}.")
def f_monkey_patched(self, *args, **kwargs):
print(f"Patched f called with {args=} and {kwargs=}.")
my_c = C()
my_c.f = f_monkey_patched
my_c.f(1, 2, c=3) # Output: Patched f called with args=(2,) and kwargs={'c': 3}.
Why has the first argument been consumed as self, instead of the instance itself?
Functions in Python are descriptors; when they're attached to a class, but looked up on an instance of the class, the descriptor protocol gets invoked, producing a bound method on your behalf (so my_c.f, where f is defined on the class, is distinct from the actual function f you originally defined, and implicitly passes my_c as self).
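You can see that binding step spelled out by hand (using the C class from the question; purely an illustration):

class C:
    def f(self, a, b, c=None):
        print(f"Real f called with {a=}, {b=} and {c=}.")

my_c = C()

print(C.f)      # <function C.f at 0x...>  -- the plain function stored on the class
print(my_c.f)   # <bound method C.f of <__main__.C object at 0x...>>

# Attribute lookup on the instance performs exactly this descriptor call:
print(my_c.f == C.f.__get__(my_c, C))   # True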
If you want to make a replacement that shadows the class f only for a specific instance, but still passes along the instance as self like you expect, you need to manually bind the instance to the function to create the bound method using the (admittedly terribly documented) types.MethodType:
from types import MethodType # The class implementing bound methods in Python 3
# ... Definition of C and f_monkey_patched unchanged
my_c = C()
my_c.f = MethodType(f_monkey_patched, my_c) # Creates a pre-bound method from the function and
# the instance to bind to
Being bound, my_c.f will now behave as a function that does not receive self from the caller; instead, self will be the instance that was bound to it at the time the MethodType was constructed (my_c in this case).
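For example, with that binding in place, the call from the question behaves as you would expect (assuming the f_monkey_patched defined there):

my_c.f(1, 2, c=3)
# Patched f called with args=(1, 2) and kwargs={'c': 3}.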
Update with performance comparisons:
Performance-wise, all the solutions are similar enough that the difference is irrelevant (Kedar's explicit use of the descriptor protocol and my use of MethodType are equivalent, and the fastest, but the margin over functools.partial is so small that it won't matter under the weight of any useful work you're doing):
>>> # ... define C as per OP
>>> def f_monkey_patched(self, a): # Reduce argument count to reduce unrelated overhead
... pass
>>> from types import MethodType
>>> from functools import partial
>>> partial_c, mtype_c, desc_c = C(), C(), C()
>>> partial_c.f = partial(f_monkey_patched, partial_c)
>>> mtype_c.f = MethodType(f_monkey_patched, mtype_c)
>>> desc_c.f = f_monkey_patched.__get__(desc_c, C)
>>> %%timeit x = partial_c # Swapping in partial_c, mtype_c or desc_c
... x.f(1)
...
I'm not even going to give exact timing outputs for the IPython %%timeit magic, as they varied across runs, even on a desktop without CPU throttling involved. All I can say for sure is that partial was reliably a little slower, but only by about 1 ns (the other two typically ran in 56-56.5 ns, while the partial solution typically took 56.5-57.5 ns). Even getting differences that predictable took quite a lot of paring away of extraneous overhead (e.g. switching from %timeit, which reads the names from global scope and incurs dict lookups, to caching them in a local name for %%timeit so the lookups are simple array accesses).
Point is, any of them work, performance-wise. I'd personally recommend either my MethodType or Kedar's explicit use of descriptor protocol approach (they are identical in end result AFAICT; both produce the same bound method class), whichever one looks prettier to you, as it means the bound method is actually a bound method (so you can extract .__self__ and .__func__ like you would on any bound method constructed the normal way, where partial requires you to switch to .args[0] and .func to get the same info).
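For what it's worth, here is the introspection difference just mentioned, using the partial_c and mtype_c instances from the timing setup (illustrative only):

# MethodType (and __get__) produce a real bound method:
print(mtype_c.f.__self__ is mtype_c)            # True
print(mtype_c.f.__func__ is f_monkey_patched)   # True

# functools.partial exposes the same information under different names:
print(partial_c.f.args[0] is partial_c)         # True
print(partial_c.f.func is f_monkey_patched)     # True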
You can convert the function to a bound method by calling its __get__ method (since all functions are descriptors as well, they all have this method):
def t(*args, **kwargs):
print(args)
print(kwargs)
class Test():
pass
Test.t = t.__get__(Test(), Test) # binding to the instance of Test
For example
Test().t(1,2, x=1, y=2)
(<__main__.Test object at 0x7fd7f6d845f8>, 1, 2)
{'y': 2, 'x': 1}
Note that the instance is also passed as a positional argument. That is, if you want your function to be an instance method, the function should be written in such a way that its first argument behaves as the instance of the class. Otherwise, you can bind the function to None and the class, which behaves like a staticmethod:
Test.tt = t.__get__(None, Test)
Test.tt(1,2,x=1, y=2)
(1, 2)
{'y': 2, 'x': 1}
Furthermore, to make it a classmethod (first argument is class):
Test.ttt = t.__get__(Test, None) # bind to class
Test.ttt(1,2, x=1, y=2)
(<class '__main__.Test'>, 1, 2)
{'y': 2, 'x': 1}
When you do C.f = f_monkey_patched and later look up f on an instance of C, the function gets bound to that instance, effectively doing something like
obj.f = functools.partial(C.f, obj)
When you call obj.f(...), you are actually calling the partially bound function, i.e. f_monkey_patched(obj, ...)
On the other hand, doing my_c.f = f_monkey_patched, you assign the function as-is to the attribute my_c.f. When you call my_c.f(...), those arguments are passed to the function as-is, so self is the first argument you passed, i.e. 1, and the remaining arguments go to *args
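To make the two cases concrete, here is a small sketch (the partial line mirrors what attribute lookup does for you; it is not literally what Python executes):

from functools import partial

class C:
    def f(self, a, b, c=None):
        print(f"Real f called with {a=}, {b=} and {c=}.")

def f_monkey_patched(self, *args, **kwargs):
    print(f"Patched f called with {args=} and {kwargs=}.")

obj = C()

# Looking f up through the class: roughly like a partial with obj pre-bound.
bound_like = partial(C.f, obj)
bound_like(1, 2, c=3)   # Real f called with a=1, b=2 and c=3.

# Assigning the plain function on the instance: no binding happens,
# so the first positional argument (1) ends up in `self`.
obj.f = f_monkey_patched
obj.f(1, 2, c=3)        # Patched f called with args=(2,) and kwargs={'c': 3}.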
I have a unit test where the setup mocks a client like so:
def setUp(self):
self.mock_client = mock.patch.object(module_name, 'ClientClassName', autospec=True).start()
Then in my test I have a faked return value:
def myTest(self):
self.mock_client.my_method.return_value = ...
Now I want to get the arguments that my_method was called with, but I've been tearing my hair out trying to access them. It seems that I can't just do:
mock_args, mock_kwargs = self.mock_client.my_method.call_args
This doesn't give me back what I expect. First off, why doesn't this work? I did make a little headway and found that:
self.mock_client.method_calls[0]
will give me back a call object that looks like call().my_method(...the arguments), but I have been trying for hours to get access to the individual arguments and can't seem to do it. Where am I going wrong?
Call args are just accessed with subscription on the mock.call object, i.e. __getitem__.
>>> from unittest.mock import MagicMock
>>> m = MagicMock()
>>> m(123, xyz="hello")
<MagicMock name='mock()' id='140736989479888'>
>>> m("another call")
<MagicMock name='mock()' id='140736989479888'>
>>> m.call_args_list
[call(123, xyz='hello'), call('another call')]
>>> m.call_args_list[0][0]
(123,)
>>> m.call_args_list[0][1]
{'xyz': 'hello'}
Item 0 will be a tuple of args, and item 1 will be a dict of kwargs. Attribute access also works, like a namedtuple (item 0 is attribute "args", and item 1 is attribute "kwargs"). If you only need to access the most recent call, you can use call_args instead of call_args_list.
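Continuing the session above, the attribute form (available since Python 3.8) reads a little more clearly:

>>> m.call_args_list[0].args
(123,)
>>> m.call_args_list[0].kwargs
{'xyz': 'hello'}
>>> m.call_args.args   # most recent call only
('another call',)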
Note that accessing the call args items directly is usually not required, you can use an assertion against another call instance in the tests:
>>> from unittest.mock import call
>>> m(k=123)
<MagicMock name='mock()' id='140736989479888'>
>>> assert m.call_args == call(k=123) # pass
>>> assert m.call_args == call(k=124) # fail
AssertionError
Or an even higher level, you can use m.assert_has_calls on the mock directly.
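For example, still with the mock from the session above (the expected calls must appear consecutively in mock_calls unless any_order=True):

>>> m.assert_has_calls([call(123, xyz='hello'), call('another call')])   # passes
>>> # m.assert_has_calls([call('never made')]) would raise AssertionError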
When mocking methods, whether the recorded calls include self or not can be influenced by autospec:
>>> from unittest import mock
>>> class A(object):
... def f(self, *args, **kwargs):
... pass
...
>>> with mock.patch("__main__.A.f") as m:
... a = A()
... a.f('without autospec', n=1)
...
>>> m.call_args
call('without autospec', n=1)
>>> with mock.patch("__main__.A.f", autospec=True) as m:
... a = A()
... a.f('with autospec', n=2)
...
>>> m.call_args
call(<__main__.A object at 0x7fffe3d4e6a0>, 'with autospec', n=2)
This is discussed in more detail in the docs here.
In addition to wim's answer, you can actually dig down fairly deeply into these arguments, although sometimes you seem to find a string instead of a real object.
The main point to understand is that when you iterate through my_mock.call_args_list you get objects of type unittest.mock._Call. These can indeed be compared to call(...) objects which you have created yourself. But that's not all.
unittest.mock._Call is itself iterable, and consists of 2 elements: one is a tuple, the other is a dict. These are none other than the *args and **kwargs passed to the mock method.
Given that this is, as far as I can make out, completely undocumented in the docs, I suppose it is not beyond the bounds of possibility that this could break one day. It does often prove handy to know though, in my experience.
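A quick illustration of that unpacking, reusing the m mock from the answer above:

>>> args, kwargs = m.call_args_list[0]
>>> args
(123,)
>>> kwargs
{'xyz': 'hello'}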
I have gone through the page https://docs.python.org/3/library/unittest.mock-examples.html and I see that they have listed an example of how to mock generators.
I have code where I call a generator to give me a set of values that I save as a dictionary. I want to mock the calls to this generator in my unit test.
I have written the following code and it does not work.
Where am I going wrong?
In [7]: items = [(1,'a'),(2,'a'),(3,'a')]
In [18]: def f():
print "here"
for i in [1,2,3]:
yield i,'a'
In [8]: def call_f():
...: my_dict = dict(f())
...: print my_dict[1]
...:
In [9]: call_f()
"here"
a
In [10]: import mock
In [18]: def test_call_f():
with mock.patch('__main__.f') as mock_f:
mock_f.iter.return_value = items
call_f()
....:
In [19]: test_call_f()
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-19-33ca65a4f3eb> in <module>()
----> 1 test_call_f()
<ipython-input-18-92ff5f1363c8> in test_call_f()
2 with mock.patch('__main__.f') as mock_f:
3 mock_f.iter.return_value = items
----> 4 call_f()
<ipython-input-8-a5cff08ebf69> in call_f()
1 def call_f():
2 my_dict = dict(f())
----> 3 print my_dict[1]
KeyError: 1
Change this line:
mock_f.iter.return_value = items
To this:
mock_f.return_value = iter(items)
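Applied to the question's test, the fix looks roughly like this (a sketch, with items, f and call_f as defined in the question):

def test_call_f():
    with mock.patch('__main__.f') as mock_f:
        mock_f.return_value = iter(items)
        call_f()   # now prints 'a' instead of raising KeyError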
Wim's answer:
mock_f.return_value = iter(items)
works as long as your mock gets called only once. In unit testing, we often want to call a function or method multiple times with different arguments. That will fail in this case, because the first call exhausts the iterator, so the second call gets back an already-exhausted iterator. With Alexandre Paes' answer I was getting an AttributeError: 'function' object has no attribute '__iter__' when my mock was coming from unittest.mock.patch.
As an alternative, we can create a "fake" generator function and assign it as the side_effect:
@unittest.mock.patch("mymod.my_generator", autospec=True)
def test_my_func(mm):
    from mymod import my_func
    def fake():
        yield from items
    mm.side_effect = fake
    my_func()  # which calls mymod.my_generator
    my_func()  # subsequent calls work without unwanted memory from the first call
I have another approach:
mock_f.__iter__.return_value = items
This way you really mock the returned iterator.
This approach works even when you are mocking complex objects which are iterables and have methods (my case).
I tried the chosen answer but it didn't work in my case; it only worked when I mocked it the way explained above.
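A small standalone sketch of that idea (the names are illustrative): MagicMock calls iter() on the configured return value for each new iteration, so a plain list is not exhausted between passes.

from unittest.mock import MagicMock

items = [(1, 'a'), (2, 'a'), (3, 'a')]

fake_iterable = MagicMock()
fake_iterable.__iter__.return_value = items

print(list(fake_iterable))   # [(1, 'a'), (2, 'a'), (3, 'a')]
print(list(fake_iterable))   # same result again -- a fresh iterator is created each pass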
I need to patch current datetime in tests. I am using this solution:
def _utcnow():
return datetime.datetime.utcnow()
def utcnow():
"""A proxy which can be patched in tests.
"""
# another level of indirection, because some modules import utcnow
return _utcnow()
Then in my tests I do something like:
with mock.patch('***.utils._utcnow', return_value=***):
...
But today an idea came to me: I could make the implementation simpler by patching __call__ of the utcnow function, instead of having an additional _utcnow.
This does not work for me:
from ***.utils import utcnow
with mock.patch.object(utcnow, '__call__', return_value=***):
...
How to do this elegantly?
When you patch __call__ of a function, you are setting the __call__ attribute of that instance. Python actually calls the __call__ method defined on the class.
For example:
>>> class A(object):
... def __call__(self):
... print 'a'
...
>>> a = A()
>>> a()
a
>>> def b(): print 'b'
...
>>> b()
b
>>> a.__call__ = b
>>> a()
a
>>> a.__call__ = b.__call__
>>> a()
a
Assigning anything to a.__call__ is pointless.
However:
>>> A.__call__ = b.__call__
>>> a()
b
TLDR;
a() does not call a.__call__. It calls type(a).__call__(a).
Links
There is a good explanation of why that happens in answer to "Why type(x).__enter__(x) instead of x.__enter__() in Python standard contextlib?".
This behaviour is documented in Python documentation on Special method lookup.
[EDIT]
Maybe the most interesting part of this question is: why can't I patch somefunction.__call__?
Because a function doesn't use __call__'s code; rather, __call__ (a method-wrapper object) uses the function's code.
I can't find any well-sourced documentation about this, but I can demonstrate it (Python 2.7):
>>> def f():
... return "f"
...
>>> def g():
... return "g"
...
>>> f
<function f at 0x7f1576381848>
>>> f.__call__
<method-wrapper '__call__' of function object at 0x7f1576381848>
>>> g
<function g at 0x7f15763817d0>
>>> g.__call__
<method-wrapper '__call__' of function object at 0x7f15763817d0>
Replace f's code by g's code:
>>> f.func_code = g.func_code
>>> f()
'g'
>>> f.__call__()
'g'
Of course, the f and f.__call__ references are unchanged:
>>> f
<function f at 0x7f1576381848>
>>> f.__call__
<method-wrapper '__call__' of function object at 0x7f1576381848>
Restore the original implementation and copy the __call__ reference instead:
>>> def f():
... return "f"
...
>>> f()
'f'
>>> f.__call__ = g.__call__
>>> f()
'f'
>>> f.__call__()
'g'
This doesn't have any effect on the f function. Note: in Python 3 you should use __code__ instead of func_code.
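For reference, the same demonstration in Python 3, where only the attribute name changes:

>>> def f():
...     return "f"
...
>>> def g():
...     return "g"
...
>>> f.__code__ = g.__code__
>>> f()
'g'
>>> f.__call__()
'g'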
I hope somebody can point me to the documentation that explains this behavior.
There is a way to work around that: in utils you can define
class Utcnow(object):
def __call__(self):
return datetime.datetime.utcnow()
utcnow = Utcnow()
And now your patch works like a charm.
That said, I would still follow the original approach, which I consider the best way to implement your tests.
I have my own golden rule: never patch protected methods. In this case things are a little smoother because the protected method was introduced just for testing, but I still don't see why it should be necessary.
The real problem here is that you cannot patch datetime.datetime.utcnow directly (it is a C extension, as you wrote in the comment above). What you can do is patch datetime.datetime by wrapping the standard behavior and overriding the utcnow function:
>>> with mock.patch("datetime.datetime", mock.Mock(wraps=datetime.datetime, utcnow=mock.Mock(return_value=3))):
... print(datetime.datetime.utcnow())
...
3
OK, that is not really clear and neat, but you can introduce your own helper function like
def mock_utcnow(return_value):
    return mock.Mock(wraps=datetime.datetime,
                     utcnow=mock.Mock(return_value=return_value))
and now
mock.patch("datetime.datetime", mock_utcnow(***))
does exactly what you need, without any other layer and for every kind of import.
Another solution is to import datetime in utils and patch ***.utils.datetime; that gives you some freedom to change the datetime reference implementation without changing your tests (in that case, take care to change the mock_utcnow() wraps argument too).
As commented on the question, since datetime.datetime is written in C, Mock can't replace attributes on the class (see Mocking datetime.today by Ned Batchelder). Instead you can use freezegun.
$ pip install freezegun
Here's an example:
import datetime
from freezegun import freeze_time
def my_now():
return datetime.datetime.utcnow()
@freeze_time('2000-01-01 12:00:01')
def test_freezegun():
assert my_now() == datetime.datetime(2000, 1, 1, 12, 00, 1)
As you mention, an alternative is to track each module importing datetime and patch them. This is in essence what freezegun does. It takes an object mocking datetime, iterates through sys.modules to find where datetime has been imported and replaces every instance. I guess it's arguable whether you can do this elegantly in one function.
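A very rough sketch of that idea, just to show the shape of it (this is an illustration, not freezegun's actual implementation; the function name and parameter are made up):

import datetime
import sys
from unittest import mock

def freeze_utcnow_everywhere(fixed_now):
    """Patch every already-imported module that did `from datetime import datetime`."""
    fake_cls = mock.Mock(wraps=datetime.datetime,
                         utcnow=mock.Mock(return_value=fixed_now))
    patchers = []
    for module in list(sys.modules.values()):
        if module is not None and getattr(module, 'datetime', None) is datetime.datetime:
            patchers.append(mock.patch.object(module, 'datetime', fake_cls))
    for patcher in patchers:
        patcher.start()
    return patchers   # call .stop() on each one to undo the patching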
Can I somehow call a function without the ()? Maybe abusing the magic methods such as __call__() somehow?
I'd like to be able to do something similar to
from IPython import embed as qq
but call embed() only via qq rather than qq()
This is more out of curiosity, and as a learning exercise for python, rather than practical purposes.
If you are using the REPL (the Python shell), then you can hack your way around this, because the REPL will call repr() on objects for you (which in turn invokes their __repr__ method):
from IPython import embed
class WrappedFunctionCall(object):
def __init__(self, fn):
self.fn = fn
def __repr__(self):
self.fn()
return "" # `__repr__` must return a string
qq = WrappedFunctionCall(embed)
# Typing "qq" will invoke embed now and load iPython.
But really, you should not be doing this!
And of course, it won't work outside of the REPL, because there won't be anything to call __repr__ in that case. Obviously, passing arguments isn't "supported" either.
__call__ will be invoked only if the function is invoked with (). If the function is in a class, then you can use the @property decorator to do something like this:
import math
class Circle(object):
def __init__(self, radius):
self.radius = radius
    @property
def area(self):
return math.pi * (self.radius ** 2)
print(Circle(5).area)
# 78.53981633974483
Read more about getter and setter here
If you want to learn, play around with Python.
In [1]: def foo():
...: pass
...:
In [2]: foo
Out[2]: <function __main__.foo>
In [3]: foo()
In [4]: bar = foo
In [5]: bar
Out[5]: <function __main__.foo>
In [6]: bar()
As you can see, foo does not call the function; it just evaluates to the function object. And that is a good thing, because you can pass it as an argument and assign it, for example bar = foo.
In pure Python, the only way I can think of is to use an object and a property:
>>> class Wtf(object):
...     @property
... def yadda(self):
... print "Yadda"
...
>>> w = Wtf()
>>> w.yadda
Yadda
>>>
Otherwise, you might want to check IPython's docs on how to define your own custom "magic" commands: http://ipython.org/ipython-doc/dev/config/custommagics.html
You can call the function foo without using () (on that function):
def call_function(fun_name,*args):
return fun_name(*args)
def foo(a,b):
return a+b
print call_function(foo,1,2)
# Prints 3
Note that this answer isn't entirely serious, but it does contain a snippet of interesting Python code.