Changing decorator params in unit tests [duplicate]

I have the following code that I'm trying to test:
great_report.py
from retry import retry

@retry((ReportNotReadyException), tries=3, delay=10, backoff=3)
def get_link(self):
    report_link = _get_report_link_from_3rd_party(params)
    if report_link:
        return report_link
    else:
        stats.count("report_not_ready", 1)
        raise ReportNotReadyException
I've got my testing function which mocks _get_report_link_from_3rd_party and tests everything, but I don't want this function to actually pause execution when I run tests.
@mock.patch('repo.great_report._get_report_link_from_3rd_party', return_value=None)
def test_get_link_raises_exception(self, mock_get_report_link):
    self.assertRaises(ReportNotReadyException, get_link)
I tried mocking the retry parameters, but I'm running into issues where get_link keeps retrying over and over, which causes long build times instead of just raising the exception and continuing. How can I mock the parameters of the @retry call in my test?

As hinted here, an easy way to prevent the actual sleeping is by patching the time.sleep function. Here is the code that did that for me:
@patch('time.sleep', side_effect=lambda _: None)
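For context, here is a minimal sketch of how that patch could sit alongside the existing mock in the test above (the repo.great_report paths are taken from the question; the class and test names are placeholders):

from unittest import TestCase, mock

from repo.great_report import get_link, ReportNotReadyException

class GetLinkTests(TestCase):
    # Patching time.sleep stops the retry decorator from actually pausing between attempts.
    @mock.patch('time.sleep', side_effect=lambda _: None)
    @mock.patch('repo.great_report._get_report_link_from_3rd_party', return_value=None)
    def test_get_link_raises_exception(self, mock_get_report_link, mock_sleep):
        self.assertRaises(ReportNotReadyException, get_link)

Note that the retries still happen; only the sleeping between them is skipped.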

There is no way to change a decorator's parameters after the module has been loaded. Decorators wrap the original function and replace it at module load time.
First, I would encourage you to change your design a little to make it more testable.
If you extract the body of the get_link() method into a new method, test that new method, and trust the retry decorator, you will achieve your goal.
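For instance, a minimal sketch of that extraction (the helper name _fetch_link is hypothetical; the rest is taken from the question):

@retry((ReportNotReadyException), tries=3, delay=10, backoff=3)
def get_link(self):
    return self._fetch_link()

def _fetch_link(self):
    # Undecorated body: unit tests exercise this directly, with no retries involved.
    report_link = _get_report_link_from_3rd_party(params)
    if report_link:
        return report_link
    stats.count("report_not_ready", 1)
    raise ReportNotReadyException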
If you don't want to add a new method to your class, you can use a config module that stores the variables you pass to the retry decorator. You can then use two different config modules, one for testing and one for production, as sketched below.
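A rough sketch of that config-module idea (the module name retry_config and the variable names are assumptions):

# retry_config.py (production values; the test environment provides a module with tries=1, delay=0 instead)
RETRY_TRIES = 3
RETRY_DELAY = 10
RETRY_BACKOFF = 3

# great_report.py
import retry_config

@retry((ReportNotReadyException), tries=retry_config.RETRY_TRIES,
       delay=retry_config.RETRY_DELAY, backoff=retry_config.RETRY_BACKOFF)
def get_link(self):
    ...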
The last way is the hacking way, where you replace retry.api.__retry_internal with your own version that invokes the original one with different parameters:
import unittest
from unittest.mock import *
from pd import get_link, ReportNotReadyException
import retry

orig_retry_internal = retry.api.__retry_internal

def _force_retry_params(new_tries=-1, new_delay=0, new_max_delay=None, new_backoff=1, new_jitter=0):
    def my_retry_internals(f, exceptions, tries, delay, max_delay, backoff, jitter, logger):
        # call the original __retry_internal with the new parameters
        return orig_retry_internal(f, exceptions, tries=new_tries, delay=new_delay, max_delay=new_max_delay,
                                   backoff=new_backoff, jitter=new_jitter, logger=logger)
    return my_retry_internals

class MyTestCase(unittest.TestCase):
    @patch("retry.api.__retry_internal", side_effect=_force_retry_params(new_tries=1))
    def test_something(self, m_retry):
        self.assertRaises(ReportNotReadyException, get_link, None)
IMHO you should use that hacking solution only if your back is against the wall and you have no chance to redesign your code to make it more testable. Internal functions/classes/methods can change without notice, and your test can become difficult to maintain in the future.

Related

pytest mocker.patch.object's return_value uses different mock than the one I passed it

I'm using pytest to patch the os.makedirs method for a test. In a particular test I wanted to add a side effect of an exception.
So I import the os object that I've imported in my script under test, patch it, and then set the side effect in my test:
from infrastructure.scripts.src.server_administrator import os

def mock_makedirs(self, mocker):
    mock = MagicMock()
    mocker.patch.object(os, "makedirs", return_value=mock)
    return mock
def test_if_directory_exist_exception_is_not_raised(self, administrator, mock_makedirs):
    mock_makedirs.side_effect = Exception("Directory already exists.")
    with pytest.raises(Exception) as exception:
        administrator.initialize_server()
    assert exception.value == "Directory already exists."
The problem I ran into was that when the mock got called in my script under test, the side effect no longer existed. While troubleshooting, I stopped the tests in the debugger to look at the ID values of the mock I created and of the mock that the patch should have set as the return value, and found that they are different instances.
I'm still relatively new to some of the testing tools in Python, so this may be me missing something in the documentation, but shouldn't the mock patched in here be the mock I created? Am I patching it wrong?
UPDATE
I even adjusted the import style to grab makedirs directly to patch it:
def mock_makedirs(self, mocker):
    mock = MagicMock()
    mocker.patch("infrastructure.scripts.src.server_administrator.makedirs", return_value=mock)
    return mock
And I still run into the same "different mocks" issue.
:facepalm:
I was patching incorrectly. I'm considering just deleting the whole question/answer, but I figured I'd leave it here in case someone runs into the same situation.
I'm defining the patch like this:
mocker.patch.object(os, "makedirs", return_value=mock)
Which would be a valid structure if I were patching the result of a function/method. That is, what this patch is saying is "when makedirs is called, return this".
What I actually want to do is return a mock in place of the method. In its current form it makes sense that I see two different mocks, because the patch logic is currently "replace makedirs with a new mock, and when that new mock is called, return this other mock (the mock I made)".
What I really want is just:
mocker.patch.object(os, "makedirs", mock)
Where my third argument (in the patch.object form) is the replacement object itself (vs. the named return_value parameter).
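To make the contrast concrete, a short sketch (os and the mock reuse the names from the snippets above):

# "When makedirs() is called, return this" -- makedirs itself becomes a fresh MagicMock:
mocker.patch.object(os, "makedirs", return_value=mock)

# "Replace makedirs with this exact mock" -- attributes set on it later, like side_effect, now take effect:
mocker.patch.object(os, "makedirs", mock)
mock.side_effect = Exception("Directory already exists.")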
In retrospect, it's pretty obvious when I think about it which is why I'm considering deleting the question, but it's an easy enough trip-up that I'm going to leave it live for now.

"Mocking where it's defined" in python mock?

I want to test an evolving SQLite database application, which is also used "productively" in parallel. In fact, I am investigating a pile of large text files by importing them into the database and fiddling around with it. I am used to developing test-driven, and I do not want to drop that for this investigation. But running tests against the "production" database feels somewhat strange. So my objective is to run the tests against a test database (a real SQLite database, not a mock) containing a controlled but considerable amount of real data showing all the kinds of variability I have met during the investigation.
To support this approach, I have a central module myconst.py containing a function returning the name of the database that is used like so:
import myconst
conn = sqlite3.connect(myconst.get_db_name())
Now in the unittest TestCases, I thought about mocking like so:
@patch("myconst.get_db_name", return_value="../test.sqlite")
def test_this_and_that(self, mo):
    ...
where the test calls functions that will, in nested functions, access the database using myconst.get_db_name().
I have tried to do a little mocking for myself first, but it tends to be clumsy and error-prone, so I decided to dive into the Python mock module as shown before.
Unfortunately, I found warnings all over that I am supposed to "mock where it's used and not where it's defined", like so:
@patch("mymodule.myconst.get_db_name", return_value="../test.sqlite")
def test_this_and_that(self, mo):
    self.assertEqual(mymodule.func_to_be_tested(), 1)
But mymodule will likely not call database functions itself but delegate that to another module. This in turn would imply that my unit tests have to know the call tree where the database is actually accessed – something I really want to avoid because it would lead to unnecessary test refactoring when the code is refactored.
So I tried to create a minimal example to understand the behavior of mock and where it fails to allow me to mock "at the source". Because a multi module setup is clumsy here, I have provided the original code also on github for everybody's convenience. See this:
myconst.py
----------
# global definition of the database name
def get_db_name():
    return "../production.sqlite"

# this will replace get_db_name()
TEST_VALUE = "../test.sqlite"
def fun():
    return TEST_VALUE

inner.py
--------
import myconst

def functio():
    return myconst.get_db_name()

print "inner:", functio()
test_inner.py
-------------
from mock import patch
import unittest
import myconst, inner

class Tests(unittest.TestCase):
    @patch("inner.myconst.get_db_name", side_effect=myconst.fun)
    def test_inner(self, mo):
        """mocking where used"""
        self.assertEqual(inner.functio(), myconst.TEST_VALUE)
        self.assertTrue(mo.called)
outer.py
--------
import inner

def functio():
    return inner.functio()

print "outer:", functio()
test_outer.py
-------------
from mock import patch
import unittest
import myconst, outer

class Tests(unittest.TestCase):
    @patch("myconst.get_db_name", side_effect=myconst.fun)
    def test_outer(self, mo):
        """mocking where it comes from"""
        self.assertEqual(outer.functio(), myconst.TEST_VALUE)
        self.assertTrue(mo.called)
unittests.py
------------
"""Deeply mocking a database name..."""
import unittest
print(__doc__)
suite = unittest.TestLoader().discover('.', pattern='test_*.py')
unittest.TextTestRunner(verbosity=2).run(suite)
test_inner.py works the way the sources linked above describe, so I was expecting it to pass. test_outer.py should fail if I understand the caveats correctly. But all the tests pass without complaint! So my mock is used all the time, even when the mocked function is called from further down the call stack, as in test_outer.py. From that example I would conclude that my approach is safe, but on the other hand the warnings are consistent across quite a few sources, and I do not want to recklessly risk my "production" database by using concepts that I do not grok.
So my question is: Do I misunderstand the warnings or are these warnings just over-cautious?
Finally I sorted it out. Maybe this will help future visitors, so I will share my findings:
When changing the code like so:
inner.py
--------
from myconst import get_db_name

def functio():
    return get_db_name()

test_inner.py
-------------
@patch("inner.get_db_name", side_effect=myconst.fun)
def test_inner(self, mo):
    self.assertEqual(inner.functio(), myconst.TEST_VALUE)
test_inner will succeed, but test_outer will break with
AssertionError: '../production.sqlite' != '../test.sqlite'
This is because mock.patch does not replace the referenced object, which is the function get_db_name in module myconst in both cases. Instead, mock replaces what the given name refers to with the Mock object that is passed as the second parameter to the test.
test_outer.py
-------------
@patch("myconst.get_db_name", side_effect=myconst.fun)
def test_outer(self, mo):
    self.assertEqual(outer.functio(), myconst.TEST_VALUE)
Since I mock only "myconst.get_db_name" here and inner.py accesses get_db_name via "inner.get_db_name", the test will fail.
By using the proper name, however, this can be fixed:
@patch("outer.inner.get_db_name", return_value=myconst.TEST_VALUE)
def test_outer(self, mo):
    self.assertEqual(outer.functio(), myconst.TEST_VALUE)
So the conclusion is that my approach is safe as long as I make sure that all modules accessing the database import myconst and use myconst.get_db_name. Alternatively, all modules could do from myconst import get_db_name and use get_db_name. But I have to make this decision globally.
Because I control all code accessing get_db_name, I am safe. One can argue whether this is good style or not (presumably the latter), but technically it's safe. If I were to mock a library function instead, I could hardly control access to that function, and so mocking "where it's defined" becomes risky. This is why the sources cited give that warning.
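To summarize the two situations in one sketch (module names reused from the example above):

# Consumer style A: inner.py does `import myconst` and calls myconst.get_db_name()
# -> patching the definition site works, because every call looks the name up on myconst:
@patch("myconst.get_db_name", return_value="../test.sqlite")

# Consumer style B: inner.py does `from myconst import get_db_name`
# -> inner holds its own reference, so you have to patch where it is used:
@patch("inner.get_db_name", return_value="../test.sqlite")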

Override a "private" method in a python module

I want to test a function in Python, but it relies on a module-level "private" function that I don't want called, and I'm having trouble overriding/mocking it. Scenario:
module.py
def _cmd(command, args):
    # do something nasty
    ...

def function_to_be_tested():
    # do cool things
    _cmd('rm', '-rf /')
    return 1

test_module.py
import module

def test_function():
    assert module.function_to_be_tested() == 1
Ideally, in this test I don't want to call _cmd. I've looked at some other threads, and I've tried the following with no luck:
def test_function():
    def _cmd(command, args):
        # do nothing
        pass
    module._cmd = _cmd
although checking module._cmd against _cmd doesn't give the correct reference. Using mock:
from mock import patch

def _cmd_mock(command, args):
    # do nothing
    pass

@patch('module._cmd', _cmd_mock)
def test_function():
    ...
gives the correct reference when checking module._cmd, although function_to_be_tested still uses the original _cmd (as evidenced by it doing nasty things).
This is tricky because _cmd is a module-level function, and I don't want to move it into a module.
[Disclaimer]
The synthetic example posted in this question works; the described issue comes from a specific implementation in the production code. Maybe this question should be closed as off topic because the issue is not reproducible.
[Note] For impatient people: the solution is at the end of the answer.
Anyway, that question gave me a good point to think about: how can we patch a function reference when we cannot access the variable that holds the reference?
I have hit issues like this many times. There are lots of ways to end up in this situation, and the common ones are:
Decorators: the object we would like to replace is passed as a decorator argument or used in the decorator's implementation
What we would like to patch is a default argument of a method
In both cases, refactoring the code is perhaps the best way to deal with it, but what if we are working with legacy code or the decorator is a third-party decorator?
OK, our back is against the wall, but we are using Python, and in Python nothing is impossible. All we need is a reference to the function/method to patch, and instead of patching that reference we can patch its __code__: yes, I'm speaking about patching the bytecode instead of the function.
Let's take a real example. I'm using the default-parameter case because it is simple, but it works in the decorator case as well.
def cmd(a):
    print("ORIG {}".format(a))

def cmd_fake(a):
    print("NEW {}".format(a))

def do_work(a, c=cmd):
    c(a)

do_work("a")
cmd = cmd_fake
do_work("b")
Output:
ORIG a
ORIG b
OK, in this case we can test do_work by passing cmd_fake, but there are cases where that is impossible: for instance, what if we need to call something like this:
def what_the_hell():
    list(map(lambda a: do_work(a), ["c", "d"]))
What we can do is patch cmd.__code__ instead of cmd, by
cmd.__code__ = cmd_fake.__code__
So the following code
do_work("a")
what_the_hell()
cmd.__code__ = cmd_fake.__code__
do_work("b")
what_the_hell()
gives the following output:
ORIG a
ORIG c
ORIG d
NEW b
NEW c
NEW d
Moreover, if we want to use a mock, we can do it by adding the following lines:
from unittest.mock import Mock, call

cmd_mock = Mock()

def cmd_mocker(a):
    cmd_mock(a)

cmd.__code__ = cmd_mocker.__code__
what_the_hell()
cmd_mock.assert_has_calls([call("c"), call("d")])
print("WORKS")
That prints out:
WORKS
Maybe I'm done... but the OP is still waiting for a solution to his issue:
from mock import patch, Mock

cmd_mock = Mock()

# A closure for grabbing the right function code
def cmd_mocker(a):
    cmd_mock(a)

@patch.object(module._cmd, '__code__', new=cmd_mocker.__code__)
def test_function():
    ...
Now I should say: never use this trick unless your back is against the wall. Tests should be simple to understand and to debug... try to debug something like this and you will go mad!

Pythonic way of replacing real return values and implementation of functions with mock ones

My Flask + Python app pulls JSON from native binaries via a third-party module, nativelib.
Essentially, my functions look like this:
def list_users():
    return nativelib.all_users()
However, the strong dependency on this third-party module and a huge native component is proving to be a big impediment to rapid development.
What I would like to do is mock the return value of my list_users function.
Additionally, I should be able to switch back to 'real data' by simply toggling some boolean.
This boolean could be some attribute within code or some command line argument.
The solution I have currently devised looks something like this:
@mockable('users_list')
def list_users():
    return nativelib.all_users()
I have implemented the aforementioned mockable as a class:
from functools import wraps
import json

class mockable:
    mock = False
    # A dictionary that contains a mapping between the api method
    # ..and the file which contains the corresponding mock json response
    __mock_json_resps = {'users_list': '/var/mock/list_user.json', 'user_by_id': '/var/mock/user_1.json'}

    def __init__(self, api_method):
        self.api_method = api_method

    def __call__(self, fn):
        @wraps(fn)
        def wrapper():
            if mockable.mock:
                # If mocking is enabled, read mock data from the json file
                mock_resp = self.__mock_json_resps[self.api_method]
                with open(mock_resp) as json_file:
                    return json.load(json_file)
            else:
                return fn()
        return wrapper
In my entry point module I can then enable mocking using
mockable.mock = True
Now although this might work, I am interested in knowing if this is the 'pythonic' way of doing it.
If not, what is the best way to achieve this?
Personally, I prefer using the mock library, despite it being an additional dependency. Also, mocking is used in my unittests only, so the actual code is not mixed with code for testing. So, in your place, I'd leave the function you have as it is:
def list_users():
    return nativelib.all_users()
And then in my unittest, I would patch the native library "temporarily":
def test_list_users(self):
    with mock.patch.object(nativelib,
                           'all_users',
                           return_value='json_data_here'):
        result = list_users()
    self.assertEqual(result, 'expected_result')
In addition, for writing multiple tests, I usually turn the mocking code into a decorator, like:
def mock_native_all_users(return_value):
    def _mock_native_all_users(func):
        def __mock_native_all_users(self, *args, **kwargs):
            with mock.patch.object(nativelib,
                                   'all_users',
                                   return_value=return_value):
                return func(self, *args, **kwargs)
        return __mock_native_all_users
    return _mock_native_all_users
and then use it:
@mock_native_all_users('json_data_to_return')
def test_list_users(self):
    result = list_users()
    self.assertEqual(result, 'expected_result')
You set mockable.mock in your entry code; that is, you don't dynamically switch it on and off at different times during a single run. So I'm inclined to think of this as a dependency configuration issue. This isn't necessarily better, just a slightly different way of framing the problem.
If you aren't bothered about inspection on nativelib.all_users then you don't need to touch list_users, just replace its dependency. For example as a quick one-shot if nativelib is only imported in one place:
if mock:
    class NativeLibMock:
        def all_users(self):
            return whatever  # maybe configure the mock with init params
    nativelib = NativeLibMock()
else:
    import nativelib
If you are bothered about inspection then there's a potential problem with the extra self parameter and so on. In practice you'd probably not want to duplicate my code above or have different instances of NativeLibMock kicking around in different modules that depend on nativelib. If you're defining a module anyway to contain your mock object then you might as well just implement a mock module.
So you can of course re-implement the module to return the mock data and either adjust PYTHONPATH (or sys.path) according to whether you want the mock, or do import mocklib as nativelib. Which you choose depends on whether you want to mock all uses of nativelib or just this one. Basically, the same thing you'd do during rapid development if the reason that you want to mock nativelib is because it hasn't been written yet ;-)
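A minimal sketch of that mock-module route (the file name mocklib.py, its contents, and the use_mock flag are assumptions):

# mocklib.py -- stands in for nativelib during rapid development
def all_users():
    return [{"id": 1, "name": "test user"}]

# in the entry point, pick the implementation based on whatever flag you prefer
if use_mock:
    import mocklib as nativelib
else:
    import nativelib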
Consider how important it is that the specification of 'users_list' as the tag for the mock data for list_users is located with list_users. If it's useful, then decorating list_users is a benefit, but you might still want to just provide a different definition of the decorator in the mock vs non-mock cases. The non-mock "decorator" could just return fn instead of wrapping it, as sketched below.
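For example, a sketch of that pass-through variant (names reused from the question):

class mockable:
    def __init__(self, api_method):
        self.api_method = api_method

    def __call__(self, fn):
        # Non-mock case: hand back the original function untouched.
        return fn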

Mock method that is imported into the module under test

Say I want to test this module:
import osutils

def check_ib6(xml):
    ib_output = osutils.call('iconfig ib0')
    # process and validate ib_output (to be unit tested)
    ...
This method is dependent on the environment because it makes a system call (which expects a specific network interface), so it's not callable on a test machine.
I want to write a unit test for that method which checks that the processing of ib_output works as expected. Therefore I want to mock osutils.call and let it just return test data. What is the preferred way to do that? Do I have to do mocking or (monkey) patching?
Example test:
def test_ib6_check():
    from migration import check_ib6
    # how to mock osutils.call used by the check_ib6 method?
    assert check_ib6(test_xml) == True
One solution would be to do from osutils import call in the module under test, and then, when patching, replace yourmodule.call with something else before calling test_ib6_check.
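As a concrete sketch of that suggestion, assuming migration.py does from osutils import call (the module name migration is taken from the question; test_xml stands for whatever input check_ib6 expects):

from unittest import mock
from migration import check_ib6

test_xml = "..."  # placeholder input for check_ib6

@mock.patch("migration.call", return_value="testdata")
def test_ib6_check(mock_call):
    # migration's `call` name now returns canned data instead of making the system call
    assert check_ib6(test_xml) == True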
OK, I found out that this has nothing to do with mocks; AFAIK I just need a monkey patch: I need to import and change the osutils.call method, and then import the method under test afterwards (and NOT the whole module, since it would then import the original call method). This method will then use my changed call method:
import osutils

def test_ib6_check():
    def call_mock(cmd):
        return "testdata"
    osutils.call = call_mock

    from migration import check_ib6
    # check_ib6 now uses the mocked method
    assert check_ib6(test_xml) == True
