Suppose I have a python function
def func(self):
    self.method_1()
    self.method_2()
How can I write a unit test that asserts method_1 is called before method_2?
@mock.patch('method_1')
@mock.patch('method_2')
def test_call_order(method_2_mock, method_1_mock):
    # Test the order
Your case is a slight variation of "Python Unit Testing with two mock objects, how to verify call-order?". What you should do is set method_1_mock and method_2_mock as children of one new parent mock and then inspect its mock_calls attribute or use assert_has_calls:
@mock.patch('method_1')
@mock.patch('method_2')
def test_call_order(method_2_mock, method_1_mock):
    mock_parent = Mock()
    mock_parent.m1, mock_parent.m2 = method_1_mock, method_2_mock
    <test code>
    # Check that method_1 is called before method_2
    mock_parent.assert_has_calls([call.m1(), call.m2()])
There are a lot of details omitted in this code, like call arguments. Take a look at call and the very useful ANY helper.
ATTENTION: This is valid only for unittest.mock in Python 3. For Python 2.7 and mock 1.0.1 you should use attach_mock instead.
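For reference, a minimal sketch of the attach_mock variant (attach_mock also works in Python 3, and is needed whenever the patched mocks already have names); the patch targets here are placeholder paths, not real modules:

from unittest.mock import Mock, call, patch

@patch('mymodule.MyClass.method_1')  # hypothetical target
@patch('mymodule.MyClass.method_2')  # hypothetical target
def test_call_order(method_2_mock, method_1_mock):
    mock_parent = Mock()
    # attach_mock records the children's calls in the parent's mock_calls
    mock_parent.attach_mock(method_1_mock, 'm1')
    mock_parent.attach_mock(method_2_mock, 'm2')

    MyClass().func()  # exercise the code under test

    mock_parent.assert_has_calls([call.m1(), call.m2()])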
Another option is to create a simple list and append each mock to it through side_effect.
@mock.patch('method_1')
@mock.patch('method_2')
def test_call_order(method_2_mock, method_1_mock):
    call_order = []
    method_1_mock.side_effect = lambda *a, **kw: call_order.append(method_1_mock)
    method_2_mock.side_effect = lambda *a, **kw: call_order.append(method_2_mock)
    # Run test code...
    assert call_order == [method_1_mock, method_2_mock]
Each time the method is called, the side_effect lambda function is called. Since lists are ordered, this is a clean way to check the call order of your methods.
Improving on the second approach, by Chris Collett.
Using side_effect like that makes the call return None. This is a problem if you need the mock method to return a value. A simple solution is to use a helper function:
call_order = []

def log_ret(func, ret_val):
    call_order.append(func)
    return ret_val

method_1_mock.side_effect = lambda *a, **kw: log_ret(method_1_mock, 'return_value_1')
method_2_mock.side_effect = lambda *a, **kw: log_ret(method_2_mock, 'return_value_2')
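A minimal usage sketch, assuming the same patched mocks as above: the order check stays the same, and the mocked methods now return the stubbed values instead of None:

# Run the code under test, e.g. obj.func(), then:
assert call_order == [method_1_mock, method_2_mock]
# method_1_mock(...) now returns 'return_value_1' and method_2_mock(...) returns 'return_value_2'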
I have written the following decorator:
def partializable(fn):
    def arg_partializer(*fixable_parameters):
        def partialized_fn(dynamic_arg):
            return fn(dynamic_arg, *fixable_parameters)
        return partialized_fn
    return arg_partializer
The purpose of this decorator is to break the function call into two calls. If I decorate the following:
@partializable
def my_fn(dyn, fix1, fix2):
    return dyn + fix1 + fix2
I then can do:
core_accepting_dynamic_argument = my_fn(my_fix_1, my_fix_2)
final_result = core_accepting_dynamic_argument(my_dyn)
My problem is that the now decorated my_fn exhibits the following signature: my_fn(*fixable_parameters)
I want it to be: my_fn(fix1, fix2)
How can I accomplish this? I probably have to use wraps or the decorator module, but I need to preserve only part of the original signature and I don't know if that's possible.
Taking inspiration from https://stackoverflow.com/a/33112180/9204395, it's possible to accomplish this by manually overriding the signature of arg_partializer, since the signature of fn is known in the relevant scope and can be manipulated with inspect.
from inspect import signature

def partializable(fn):
    def arg_partializer(*fixable_parameters):
        def partialized_fn(dynamic_arg):
            return fn(dynamic_arg, *fixable_parameters)
        return partialized_fn

    # Override the signature: drop the first (dynamic) parameter of fn
    sig = signature(fn)
    sig = sig.replace(parameters=tuple(sig.parameters.values())[1:])
    arg_partializer.__signature__ = sig
    return arg_partializer
This is not particularly elegant, but as I think about the problem I'm starting to suspect that this (or a conceptual equivalent) is the only possible way to pull this stunt. Feel free to contradict me.
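A quick sanity check, reusing the my_fn example from the question, suggests inspect picks up the override while the behaviour stays the same:

from inspect import signature

@partializable
def my_fn(dyn, fix1, fix2):
    return dyn + fix1 + fix2

print(signature(my_fn))  # (fix1, fix2) -- the dynamic parameter is gone
print(my_fn(1, 2)(10))   # 13 -- i.e. 10 + 1 + 2, behaviour unchanged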
Below is an example of a decorator in Python. I don't quite get how it actually works when the decorator is itself decorated.
from functools import update_wrapper

def decorator(d):
    print(d.__name__)
    return lambda fn: update_wrapper(d(fn), fn)

decorator = decorator(decorator)  # I don't understand how this works.

@decorator
def n_ary(f):
    print(f.__name__)
    def n_ary_f(x, *args):
        return x if not args else f(x, n_ary_f(*args))
    return n_ary_f

@n_ary
def seq(x, y): return ('seq', x, y)
It seems that the flow should be (I am not sure about it):
decorator is decorated, so it returns lambda fn: update_wrapper(decorator(fn), fn).
n_ary = decorator(n_ary), so n_ary is now updated by update_wrapper(decorator(n_ary), n_ary).
The third part should be the update of seq, but I don't understand when the update_wrapper function is used.
Decoration is just syntactic sugar for calling another function, and replacing the current function object with the result. The decorator dance you are trying to understand is over-using that fact. Even though it tries to make it easier to produce decorators, I find it doesn't actually add anything and is only creating confusion by not following standard practice.
To understand what is going on, you can substitute the function calls (including decorators being applied) with their return values, and track the d references by imagining saved references to the original decorated function object:
decorator = decorator(decorator) replaces the original decorator function with the result of calling it on itself. We'll just ignore the print() call here to make substitution easier.
The decorator(decorator) call returns lambda fn: update_wrapper(d(fn), fn), where d is bound to the original decorator, so now we have:
_saved_reference_to_decorator = decorator
decorator = lambda fn: update_wrapper(_saved_reference_to_decorator(fn), fn)
so update_wrapper() is not actually called yet. It'll only be called when this new decorator lambda is called.
@decorator then calls the above lambda (the one calling _saved_reference_to_decorator(fn) and passing the result to update_wrapper()) and applies that lambda to the def n_ary(f) function:
n_ary = decorator(n_ary)
which expands to:
n_ary = update_wrapper(_saved_reference_to_decorator(n_ary), n_ary)
which is:
_saved_reference_to_n_ary = n_ary
n_ary = update_wrapper(lambda fn: update_wrapper(_saved_reference_to_n_ary(fn), fn), n_ary)
Now, update_wrapper() just copies metadata from the second argument to the first and returns the first argument, so that leaves:
n_ary = lambda fn: update_wrapper(_saved_reference_to_n_ary(fn), fn)
with the right __name__ and such set on the lambda function object.
@n_ary is again a decorator being applied, this time to def seq(x, y), so we get:
seq = n_ary(seq)
which can be expanded to:
seq = update_wrapper(_saved_reference_to_n_ary(seq), seq)
which if we take the return value of update_wrapper() is
seq = _saved_reference_to_n_ary(seq)
with the metadata copied over from the original seq to whatever the original n_ary function returns.
So in the end, all this dance gets you is update_wrapper() being applied to the return value from a decorator, which is the contained wrapper function.
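To see the net effect, a quick check against the snippet above (expected output in the comments):

print(seq.__name__)    # 'seq' -- metadata copied over by update_wrapper
print(seq(1, 2, 3))    # ('seq', 1, ('seq', 2, 3)) -- the n_ary folding still works
print(n_ary.__name__)  # 'n_ary' -- the decorator lambda was wrapped as well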
This is all way, way too complicated. The update_wrapper() function has a far more readable helper decorator already provided: @functools.wraps(). Your piece of code could be rewritten to:
import functools

def n_ary(f):
    print(f.__name__)
    @functools.wraps(f)
    def n_ary_f(x, *args):
        return x if not args else f(x, n_ary_f(*args))
    return n_ary_f

@n_ary
def seq(x, y): return ('seq', x, y)
I simply replaced the @decorator decorator on the n_ary() function definition with a @functools.wraps() decorator on the contained wrapper function that is returned.
I'm using unittest and nose-parameterized, and want to apply different decorators to a test based on a condition.
I have a test and I want to either skip it with unittest.skip or execute it with @parameterized.expand(args), based on the arguments passed in args.
I think I need another decorator which applies the proper decorator to the test, but I'm not sure how.
Pseudocode could be something like this:
@validate_data(args)
def test(args):
    ...
where @validate_data(args) is a decorator which applies unittest.skip if args is None, or @parameterized.expand(args) otherwise.
Any comments/suggestions are appreciated.
A decorator can also be called as a function: @decorator is equivalent to decorator(func), and @decorator(args) to decorator(args)(func). So you can return whichever decorator applies, conditionally, from your own factory function. Here is an example:
def parameterized_or_skip(args=None):
    if args:
        return parameterized.expand(args)
    return unittest.skip(reason='No args')

...

@parameterized_or_skip(args)
def my_testcase(self, a, b):
    pass
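For completeness, here is a minimal runnable sketch of how that could look inside a test class; it assumes the parameterized package (the successor to nose-parameterized), and ARGS is hypothetical test data:

import unittest
from parameterized import parameterized

def parameterized_or_skip(args=None):
    if args:
        return parameterized.expand(args)
    return unittest.skip('No args')

ARGS = [(1, 2), (3, 4)]  # hypothetical data; set to None to skip the test

class MyTests(unittest.TestCase):
    @parameterized_or_skip(ARGS)
    def test_addition(self, a, b):
        self.assertEqual(a + b, b + a)

if __name__ == '__main__':
    unittest.main()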
I have a decorator function my_fun(I, k) and it is applied to a function add(x, y) as such:
@my_fun(4, 5)
def add(x, y): return x + y
I am new to Python and would like to know, as I write the my_fun function:
How can I access x,y in the add method in my_fun?
How can I access the return value of add in the decorator function?
I am a little confused about the syntax and concepts; any explanation would help.
A decorator consists of the decorator function and a function wrapper (and if you want additional arguments for the decorator another outer layer of function around it):
# Takes the arguments for the decorator and makes them accessible inside
def my_fun(decorator_argument1, decorator_argument2):
    # Takes the function so that it can be wrapped.
    def wrapfunc(func):
        # Here we are actually going to wrap the function ... finally
        def wrapper(*args, **kwargs):
            # Call the function with the args and kwargs
            res = func(*args, **kwargs)
            # return this result
            return res
        # Replace the decorated function with the wrapper
        return wrapper
    # Return the wrapper for the function wrapper :-)
    return wrapfunc
In your case, if you only want to use the decorator with this one function, you don't need to bother with *args, **kwargs and can replace the wrapper with:
def wrapper(x, y):
    # Here you can do stuff with x and y, e.g. print(x)
    # Call the function with x and y
    res = func(x, y)
    # Here you can do stuff with the result, e.g. res = res * decorator_argument1
    return res
I indicated the places where you can access x and y and the result.
If you want to predefine values for x and y, a custom decorator is not the best way. You could use defaults:
def add(x=4, y=5): return x + y

add()       # returns 9
add(2)      # returns 7
add(5, 10)  # returns 15
Or, if you want to fix an argument, you should use functools.partial.
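A minimal sketch of the functools.partial option, assuming a plain (undecorated) add(x, y):

from functools import partial

def add(x, y):
    return x + y

add_4_5 = partial(add, 4, 5)  # fixes x=4 and y=5
print(add_4_5())              # 9

add_to_4 = partial(add, 4)    # fixes only x=4
print(add_to_4(10))           # 14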
If you're passing arguments to the decorator with @my_fun(4, 5), you need three levels of nested functions to implement the decorator in the simplest way. The outer level is the "decorator factory". It returns the middle-level function, the decorator. The decorator gets called with the function it's decorating as an argument and needs to return the innermost nested function, the wrapper. The wrapper function is the one that gets called by the user.
def decorator_factory(deco_arg, deco_arg2):  # name this whatever you want to use with @ syntax
    def decorator(func):
        def wrapper(func_arg, func_arg2):
            # This is a closure!
            # In here you can write code using the arguments from the enclosing scopes, e.g.:
            return func(func_arg * deco_arg, func_arg2 * deco_arg2)  # uses args from all levels
        return wrapper
    return decorator
The inner functions here are closures. They can see the variables in the scope surrounding the place they were defined in, even after the functions those scopes belonged to have finished running.
(Note, if you want your decorator to be able to decorate many different functions, you may want the wrapper function to accept *args and **kwargs and pass them along to func. The example above only works for functions that accept exactly two arguments. A limitation like that may be perfectly reasonable for some uses, but not always.)
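A minimal sketch of that generic variant (the names are placeholders), which also answers both of the original questions: the wrapper sees the call arguments, and res holds the decorated function's return value:

import functools

def my_fun(deco_arg1, deco_arg2):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # 1. The call arguments (x, y, ...) are visible here as args/kwargs.
            print('called with', args, kwargs, 'decorator args:', deco_arg1, deco_arg2)
            res = func(*args, **kwargs)
            # 2. The return value of the decorated function is visible here.
            print('returned', res)
            return res
        return wrapper
    return decorator

@my_fun(4, 5)
def add(x, y):
    return x + y

print(add(1, 2))  # prints the call info, then 'returned 3', then 3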
I'd like to modify the arguments passed to a method in a module, as opposed to replacing its return value.
I've found a way around this, but it seems like something useful and has turned into a lesson in mocking.
module.py
from third_party import ThirdPartyClass
ThirdPartyClass.do_something('foo', 'bar')
ThirdPartyClass.do_something('foo', 'baz')
tests.py
@mock.patch('module.ThirdPartyClass.do_something')
def test(do_something):
    # Instead of directly overriding its return value,
    # I'd like to modify the arguments passed to this function.

    # change return value, no matter the inputs
    do_something.return_value = 'foo'

    # change return value based on inputs, but without access to the original function
    do_something.side_effect = lambda x, y: (y, x)

    # how can I wrap do_something, so that I can modify its inputs and
    # pass them on to the original function? much like a decorator?
I've tried something like the following, but not only is it repetitive and ugly, it doesn't work. After some PDB introspection, I'm wondering if it's simply due to how this third-party library works, as I do see the original functions being called successfully when I drop a pdb inside the side_effect.
Either that, or some auto mocking magic I'm just not following that I'd love to learn about.
def test():
    from third_party import ThirdPartyClass
    original_do_something = ThirdPartyClass.do_something

    with mock.patch('module.ThirdPartyClass.do_something') as mocked_do_something:
        def side_effect(arg1, arg2):
            return original_do_something(arg1, 'overridden')

        mocked_do_something.side_effect = side_effect
        # execute module.py
Any guidance is appreciated!
You may want to use the wraps parameter of the mock call (see the docs for reference). This way the original function will still be called, but it will also have everything from the Mock interface.
So, to change the parameters passed to the original function, you could try it like this:
org.py:
def func(x):
    print(x)
main.py:
from unittest import mock
import org
of = org.func
def wrapped(a):
    of('--{}--'.format(a))

with mock.patch('org.func', wraps=wrapped):
    org.func('x')
    org.func.assert_called_with('x')
result:
--x--
The trick is to pass the original underlying function that you still want to access as a default parameter of the replacement function.
E.g., for race-condition testing, have tempfile.mktemp return an existing pathname:
import tempfile

def mock_mktemp(*, orig_mktemp=tempfile.mktemp, **kwargs):
    """Ensure mktemp returns an existing pathname."""
    temp = orig_mktemp(**kwargs)
    open(temp, 'w').close()
    return temp
Above, the default value for orig_mktemp is evaluated when the function is defined, not when it is called, so all invocations will have access to the original tempfile.mktemp via orig_mktemp.
I used it as follows:
@unittest.mock.patch('tempfile.mktemp', side_effect=mock_mktemp)
def test_retry_on_existing_temp_path(self, mock_mktemp):
    # Simulate race condition: creation of temp path after tempfile.mktemp
    ...