I am using the tenacity library for its @retry decorator.
I am using it to make a function that performs an HTTP request retry multiple times in case of failure.
Here is a simple code snippet:
@retry(stop=stop_after_attempt(7), wait=wait_random_exponential(multiplier=1, max=60))
def func():
    ...
    requests.post(...)
The function uses the tenacity wait argument to wait some time between calls.
The function together with the @retry decorator seems to work fine.
But I also have a unit test which checks that the function indeed gets called 7 times in case of a failure. This test takes a lot of time because of the wait between tries.
Is it possible to somehow disable the wait time only in the unit test?
The solution came from the maintainer of tenacity himself in this GitHub issue: https://github.com/jd/tenacity/issues/106
You can simply change the wait function temporarily for your unit test:
from tenacity import wait_none
func.retry.wait = wait_none()
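For instance, a minimal pytest-style sketch of that idea (the test name is made up, and it assumes the request keeps failing so all 7 attempts are used; monkeypatch restores the original wait strategy afterwards):

import pytest
import tenacity
from tenacity import wait_none

def test_func_retries_without_waiting(monkeypatch):
    # Swap in a no-op wait so the 7 attempts run back to back.
    monkeypatch.setattr(func.retry, "wait", wait_none())
    # Without reraise=True, tenacity wraps the last error in RetryError.
    with pytest.raises(tenacity.RetryError):
        func()
    assert func.retry.statistics["attempt_number"] == 7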
After reading the thread in the tenacity repo (thanks @DanEEStar for starting it!), I came up with the following code:
@retry(
    stop=stop_after_delay(20.0),
    wait=wait_incrementing(
        start=0,
        increment=0.25,
    ),
    retry=retry_if_exception_type(SomeExpectedException),
    reraise=True,
)
def func() -> None:
    raise SomeExpectedException()
def test_func_should_retry(monkeypatch: MonkeyPatch) -> None:
    # Use monkeypatch to patch retry behavior.
    # It will automatically revert patches when the test finishes.
    # Also, it doesn't create nested blocks as `unittest.mock.patch` does.

    # Originally, it was `stop_after_delay` but the test could be
    # unreasonably slow this way. After all, I don't care so much
    # about which policy is applied exactly in this test.
    monkeypatch.setattr(func.retry, "stop", stop_after_attempt(3))

    # Disable pauses between retries.
    monkeypatch.setattr(func.retry, "wait", wait_none())

    with pytest.raises(SomeExpectedException):
        func()

    # Ensure that there were retries.
    stats: Dict[str, Any] = func.retry.statistics
    assert "attempt_number" in stats
    assert stats["attempt_number"] == 3
I use pytest-specific features in this test. Probably, it could be useful as an example for someone, at least for future me.
Thanks to the discussion here, I found an elegant way to do this based on code from @steveb:
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(reraise=True, stop=stop_after_attempt(5), wait=wait_exponential(multiplier=1, min=4, max=10))
def do_something_flaky(succeed):
    print('Doing something flaky')
    if not succeed:
        print('Failed!')
        raise Exception('Failed!')
And tests:
from unittest import TestCase, mock, skip

from main import do_something_flaky


class TestFlakyRetry(TestCase):
    def test_succeeds_instantly(self):
        try:
            do_something_flaky(True)
        except Exception:
            self.fail('Flaky function should not have failed.')

    def test_raises_exception_immediately_with_direct_mocking(self):
        do_something_flaky.retry.sleep = mock.Mock()
        with self.assertRaises(Exception):
            do_something_flaky(False)

    def test_raises_exception_immediately_with_indirect_mocking(self):
        with mock.patch('main.do_something_flaky.retry.sleep'):
            with self.assertRaises(Exception):
                do_something_flaky(False)

    @skip('Takes way too long to run!')
    def test_raises_exception_after_full_retry_period(self):
        with self.assertRaises(Exception):
            do_something_flaky(False)
Mock the base class's wait function with:
mock.patch('tenacity.BaseRetrying.wait', side_effect=lambda *args, **kwargs: 0)
so it never waits.
You can use the unittest.mock module to mock some elements of the tenacity library.
In your case all the decorators you use are classes, e.g. retry is a decorator class defined here. So it might be a little bit tricky, but I think trying to
mock.patch('tenacity.wait.wait_random_exponential.__call__', ...)
may help.
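For example, a hedged sketch of that idea (the test name and the expected exception are assumptions; with return_value=0 the computed wait is always zero, so the retries run without pauses):

from unittest import mock

import pytest
import tenacity

def test_func_retries_without_sleeping():
    # Patch the wait strategy's __call__ so every computed wait is 0 seconds.
    with mock.patch('tenacity.wait.wait_random_exponential.__call__', return_value=0):
        # Without reraise=True, exhausted retries surface as RetryError.
        with pytest.raises(tenacity.RetryError):
            func()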
I wanted to override the retry function of the retry attribute. While that sounds obvious, if you are playing with this for the first time it doesn't look right, but it is:
sut.my_func.retry.retry = retry_if_not_result(lambda x: True)
Thanks to the others for pointing me in the right direction.
You can mock tenacity.nap.time in the conftest.py in the root folder of your unit tests:
@pytest.fixture(autouse=True)
def tenacity_wait(mocker):
    mocker.patch('tenacity.nap.time')
I need to use pytest-stress to run tests for a period of time. Previously, I was using pytest-repeat which is based on the number of iterations. I am using pytest-html for reporting results.
When I switched from repeat to stress, all iterations of a test appeared inside one test row; repeat was able to separate the tests into different iterations.
I am trying to get stress to just list the test and the current iteration count in a separate row for pytest-html.
I know that pytest-stress is overriding pytest_runtestloop so it should be running at the session scope level.
I did try adding in some of the functionality from pytest_generate_tests that pytest-repeat overrides because it runs at the function scope level.
I would like the results to report each iteration separately (I removed the preceding path for readability here), for example:
test_report.py::TestReport::test_fail[1]
test_report.py::TestReport::test_fail[2]
test_report.py::TestReport::test_fail[3]
(Screenshots: pytest-repeat report vs. pytest-stress report.)
Example Code:
import logging


class TestReport:
    def test_pass(self):
        logging.info("assert(True)")
        assert True

    def test_fail(self):
        logging.info("assert(False)")
        assert False
conftest.py:
import os
from datetime import datetime

import pytest
from py.xml import html  # used by the pytest-html table hooks


def pytest_html_results_table_header(cells):
    cells.insert(2, html.th("Time", class_="sortable time", col="time"))
    cells.pop()


def pytest_html_results_table_row(report, cells):
    cells.insert(2, html.td(datetime.utcnow(), class_="col-time"))
    cells.pop()


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    setattr(report, "duration_formatter", "%H:%M:%S.%f")


@pytest.hookimpl(tryfirst=True)
def pytest_configure(config):
    if not os.path.exists('results'):
        os.makedirs('results')
    config.option.htmlpath = 'results/results_' + datetime.now().strftime("%Y-%m-%d_T%H-%M-%S") + ".html"


# Change the reported name based on the iteration.
def pytest_itemcollected(item):
    cls = item.getparent(pytest.Class)
    iter = 0
    if hasattr(cls.obj, "iteration"):
        iter = int(getattr(cls.obj, "iteration"))
    item._nodeid = item._nodeid + f"[{iter}]"
I do not know if this is possible through a minor edit to conftest.py or if I would have to create my own plugin for pytest to run based on a period of time.
I created my own pytest plugin called pytest-loop. It merges pytest-stress and pytest-repeat with a fix to clear previous reports:
https://github.com/anogowski/pytest-loop
https://pypi.org/project/pytest-loop/
I have an endpoint that returns a list from my database. If something goes wrong along the way, I return an internal server error response, which has a 500 status code and takes a message as a parameter.
def get_general_ranking():
    try:
        ranking_list = GamificationService.get_general_ranking()
        return basic_response(ranking_list, 200)
    except Exception as e:
        logging.error(str(e))
        cache.delete()
        return internal_server_error_response('Could not get ranking. Check log for reasons.')
I am implementing a unit test for this endpoint. Right now, I have this implementation:
class TestGamificationController(unittest.TestCase):
    def setUp(self):
        """
        Function called when the class is initialized.
        """
        test_app = app.test_client()
        self.general_ranking = test_app.get('/v1/gamification/general_ranking')

    def test_endpoint_general_ranking(self):
        """
        Testing the endpoint '/v1/gamification/general_ranking'.
        """
        assert self.general_ranking.status_code == 200, "Wrong status code."
        assert len(self.general_ranking.json) > 0, \
            "/v1/gamification/general_ranking is returning an empty list."
        assert self.general_ranking.content_type == 'application/json', "Wrong content_type"
But, as you can see below, when I run the test with coverage to check if I am covering 100% of my code, I get 75%. The missing lines are the exception ones.
---------- coverage: platform darwin, python 3.8.0-final-0 -----------
Name                                        Stmts   Miss  Cover   Missing
--------------------------------------------------------------------------
api/controller/GamificationController.py      16      4    75%   18-21
Missing lines:
    except Exception as e:
        logging.error(str(e))
        cache.delete()
        return internal_server_error_response('Could not get ranking. Check log for reasons.')
How can I cover this exception too using pytest? Or should I use something else?
I see three possible fixes for this:
1. Create a custom route in your app that just raises said exception.
2. Programmatically add this custom route to your app when you start your tests.
3. Move your global error handler function to its own file and exclude it from your coverage.
Personally, 1 is the easiest: just apply a debug/dev environment check that raises a route-not-found error when the flag is off.
2 is doable if you use your Flask app factory to generate the app and register the custom route during test setup, although I'm not sure whether that is enough to avoid the AssertionError Flask throws if you modify its routes after the first request has been received, which is usually the case when you instantiate your app in your conftest.py.
3 is kind of cheating, I suppose.
Hope you got this sorted out. Would like to know how you solved this.
Update: the best way I've found to do this is to use pytest's sessionstart hook, which runs before any tests. I use it to initialize the custom error endpoints. This approach works best since you don't have to pollute the codebase with test-specific logic.
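A rough sketch of that approach (create_app, the module path, and the route name are assumptions, not from the original code):

# conftest.py
import pytest
from myapp import create_app  # hypothetical Flask app factory

app = create_app()


def pytest_sessionstart(session):
    # Runs before any test, so the route is registered before the first
    # request (Flask refuses URL map changes after that point).
    def _always_fails():
        raise RuntimeError("forced failure to exercise the error handling")
    app.add_url_rule("/test/force-error", "force_error", _always_fails)


@pytest.fixture
def client():
    return app.test_client()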
Something similar has been asked before, but I'm struggling to get this to work.
How do I mock an import module from another file
I have one file:
b.py (named to be consistent with the linked docs)
import cv2  # module 'a' in the linked docs

def get_video_frame(path):
    vidcap = cv2.VideoCapture(path)  # `a.SomeClass` in the linked docs
    vidcap.isOpened()
    ...
test_b.py
import b
import pytest  # with pytest-mock installed

def test_get_frame(mocker):
    mock_vidcap = mocker.Mock()
    mock_vidcap.isOpened.side_effect = AssertionError
    mock_cv2 = mocker.patch('cv2.VideoCapture')
    mock_cv2.return_value = mock_vidcap
    b.get_video_frame('foo')  # Doesn't fail
    mock_vidcap.isOpened.assert_called()  # fails
I set the tests up like this because the "where to patch" docs specify that:
In this case the class we want to patch is being looked up in the a module and so we have to patch a.SomeClass instead:
@patch('a.SomeClass')
I've tried a few other combinations of patching, but it exhibits the same behavior, which suggests I'm not successfully patching the module. If the patch were applied, b.get_video_frame('foo') would fail due to the side_effect; having assert_called fail supports this.
Edit: in an effort to reduce the length of the question I left off the rest of get_video_frame. Unfortunately, the parts left off were the critical parts. The full function is:
def get_video_frame(path):
    vidcap = cv2.VideoCapture(path)  # `a.SomeClass` in the linked docs
    is_open = vidcap.isOpened()
    while True:
        is_open, frame = vidcap.read()
        if is_open:
            yield frame
        else:
            break
This line just creates a generator:
b.get_video_frame('foo')
The line is_open = vidcap.isOpened() is never reached, because in the test function the generator remains frozen at the start, therefore the side effect never raises.
You are otherwise using mocker and patch correctly.
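A sketch of the adjusted test (same names as above): advancing the generator with next() makes the function body run, so the patched isOpened() gets called and its side effect fires.

import b
import pytest

def test_get_frame(mocker):
    mock_vidcap = mocker.Mock()
    mock_vidcap.isOpened.side_effect = AssertionError
    mocker.patch('cv2.VideoCapture', return_value=mock_vidcap)
    gen = b.get_video_frame('foo')  # only creates the generator, runs nothing
    with pytest.raises(AssertionError):
        next(gen)  # executes the body up to isOpened(), which raises
    mock_vidcap.isOpened.assert_called()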
The application is written with Kivy.
I want to test a function via pytest, but in order to test it I need to initialize the object first. The object needs something from the UI when it is initialized, but I am at the testing phase, so I don't know how to retrieve that value from the UI.
This is the class in which the error occurs and is handled:
class SaltConfig(GridLayout):
    def check_phone_number_on_first_contact(self, button):
        s = self.instanciate_ServerMsg(tt)
        try:
            s.send()
        except HTTPError as err:
            print("[HTTPError] : " + str(err.code))
            return
        # some code when running without error

    def instanciate_ServerMsg():
        return ServerMsg()
This is the helper class which generates the ServerMsg object used by the former class.
class ServerMsg(OrderedDict):
    def send(self, answerCallback=None):
        # send something to the server via urllib.urlopen
        ...
This is my test code:
class TestSaltConfig:
    def test_check_phone_number_on_first_contact(self):
        myError = HTTPError(url="http://127.0.0.1", code=500,
                            msg="HTTP Error Occurs", hdrs="donotknow", fp=None)
        mockServerMsg = mock.Mock(spec=ServerMsg)
        mockServerMsg.send.side_effect = myError

        sc = SaltConfig(ds_config_file_missing.data_store)

        def mockreturn():
            return mockServerMsg

        monkeypatch.setattr(sc, 'instanciate_ServerMsg', mockreturn)
        sc.check_phone_number_on_first_contact()
I can't initialize the object; it throws an AttributeError when initializing since it needs some value from the UI.
So I get stuck.
I tried to mock the object and then patch the function back to the original one, but that won't work either, since the function itself has logic related to the UI.
How can I solve this? Thanks.
I wrote an article about testing Kivy apps together with a simple runner - KivyUnitTest. It works with unittest, not with pytest, but it shouldn't be hard to rewrite it so that it fits your needs. In the article I explain how to "penetrate" the main loop of the UI, and this way you can happily go and do this with a button:
button = <button you found in widget tree>
button.dispatch('on_release')
and many more. Basically you can do anything with such a test and you don't need to test each function independently. I mean... it's a good practice, but sometimes (mainly when testing UI), you can't just rip the thing out and put it into a nice 50-line test.
This way you can do exactly the same thing as a casual user would do when using your app and therefore you can even catch issues you'd have trouble with when testing the casual way e.g. some weird/unexpected user behavior.
Here's the skeleton:
import unittest
import os
import sys
import time
import os.path as op
from functools import partial
from kivy.clock import Clock

# when you have a test in <root>/tests/test.py
main_path = op.dirname(op.dirname(op.abspath(__file__)))
sys.path.append(main_path)
from main import My


class Test(unittest.TestCase):
    def pause(*args):
        time.sleep(0.000001)

    # main test function
    def run_test(self, app, *args):
        Clock.schedule_interval(self.pause, 0.000001)

        # Do something

        # Comment out if you are editing the test; it'll leave the
        # window opened.
        app.stop()

    def test_example(self):
        app = My()
        p = partial(self.run_test, app)
        Clock.schedule_once(p, 0.000001)
        app.run()


if __name__ == '__main__':
    unittest.main()
However, as Tomas said, you should separate UI and logic when possible, or better said, when it's an efficient thing to do. You don't want to mock your whole big application just to test a single function that requires communication with UI.
I finally made it. This just gets things done; I think there must be a more elegant solution. The idea is simple, given the fact that all the lines are just simple value assignments except the s.send() statement.
We mock the original object, and every time some error pops up in the testing phase (since the object lacks some values from the UI), we mock the missing piece, repeating this step until the test method can finally check whether the function handles the HTTPError or not.
In this example we only need to mock a PhoneNumber class, which is lucky, but sometimes we may need to handle more, so obviously @KeyWeeUsr's answer is a more suitable choice for a production environment. But I list my approach here for somebody who wants a quick solution.
@pytest.fixture
def myHTTPError(request):
    """
    Generate an HTTPError with the parameters passed in
    from pytest_generate_tests(metafunc).
    """
    httpError = HTTPError(url="http://127.0.0.1", code=request.param,
                          msg="HTTP Error Occurs", hdrs="donotknow", fp=None)
    return httpError


class TestSaltConfig:
    def test_check_phone_number(self, myHTTPError, ds_config_file_missing):
        """
        Raise an HTTP 500 error, and invoke the original function with this error.
        Test to see if it can handle it; if it can't, the test will fail.
        The function is located in configs.py, line 211.
        This test will run 2 times with different HTTP status codes, 404 and 500.
        """
        # A helper class used to cover the runtime error,
        # since a Mock object can't fake properties created via __init__().
        class PhoneNumber:
            text = "610274598038"

        # Mock the ServerMsg class, and apply the custom
        # HTTPError to the send() method.
        mockServerMsg = mock.Mock(spec=ServerMsg)
        mockServerMsg.send.side_effect = myHTTPError

        # Mock the SaltConfig class and change some of its
        # members to our custom ones.
        mockSalt = mock.Mock(spec=SaltConfig)
        mockSalt.phoneNumber = PhoneNumber()
        mockSalt.instanciate_ServerMsg.return_value = mockServerMsg
        mockSalt.dataStore = ds_config_file_missing.data_store

        # Make check_phone_number_on_first_contact() refer to the
        # original (unmocked) method.
        mockSalt.check_phone_number_on_first_contact = SaltConfig.check_phone_number_on_first_contact

        # Call the function to do the test.
        mockSalt.check_phone_number_on_first_contact(mockSalt, "button")
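For completeness, a short sketch of the pytest_generate_tests hook the fixture docstring refers to (hypothetical, matching the two status codes mentioned above); it feeds the codes to myHTTPError via indirect parametrization:

def pytest_generate_tests(metafunc):
    # Parametrize the myHTTPError fixture so the test runs once per status code.
    if "myHTTPError" in metafunc.fixturenames:
        metafunc.parametrize("myHTTPError", [404, 500], indirect=True)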
In the process of hunting down performance bugs I finally identified that the source of the problem is the contextlib wrapper. The overhead is quite staggering and I did not expect that to be the source of the slowdown. The slowdown is in the range of 50X, I cannot afford to have that in a loop. I sure would have appreciated a warning in the docs if it has the potential of slowing things down so significantly.
It seems this has been known since 2010 https://gist.github.com/bdarnell/736778
It has a set of benchmarks you can try. Please change fn to fn() in simple_catch() before running. Thanks to DSM for pointing this out.
I am surprised that the situation has not improved since those times. What can I do about it? I can drop down to try/except, but I hope there are other ways to deal with it.
Here are some new timings:
import contextlib
import timeit

def work_pass():
    pass

def work_fail():
    1/0

def simple_catch(fn):
    try:
        fn()
    except Exception:
        pass

@contextlib.contextmanager
def catch_context():
    try:
        yield
    except Exception:
        pass

def with_catch(fn):
    with catch_context():
        fn()

class ManualCatchContext(object):
    def __enter__(self):
        pass

    def __exit__(self, exc_type, exc_val, exc_tb):
        return True

def manual_with_catch(fn):
    with ManualCatchContext():
        fn()

preinstantiated_manual_catch_context = ManualCatchContext()
def manual_with_catch_cache(fn):
    with preinstantiated_manual_catch_context:
        fn()

setup = 'from __main__ import simple_catch, work_pass, work_fail, with_catch, manual_with_catch, manual_with_catch_cache'
commands = [
    'simple_catch(work_pass)',
    'simple_catch(work_fail)',
    'with_catch(work_pass)',
    'with_catch(work_fail)',
    'manual_with_catch(work_pass)',
    'manual_with_catch(work_fail)',
    'manual_with_catch_cache(work_pass)',
    'manual_with_catch_cache(work_fail)',
]
for c in commands:
    print c, ': ', timeit.timeit(c, setup)
I've made simple_catch actually call the function and I've added two new benchmarks.
Here's what I got:
>>> python2 bench.py
simple_catch(work_pass) : 0.413918972015
simple_catch(work_fail) : 3.16218209267
with_catch(work_pass) : 6.88726496696
with_catch(work_fail) : 11.8109841347
manual_with_catch(work_pass) : 1.60508012772
manual_with_catch(work_fail) : 4.03651213646
manual_with_catch_cache(work_pass) : 1.32663416862
manual_with_catch_cache(work_fail) : 3.82525682449
python2 p.py.py 33.06s user 0.00s system 99% cpu 33.099 total
And for PyPy:
>>> pypy bench.py
simple_catch(work_pass) : 0.0104489326477
simple_catch(work_fail) : 0.0212869644165
with_catch(work_pass) : 0.362847089767
with_catch(work_fail) : 0.400238037109
manual_with_catch(work_pass) : 0.0223228931427
manual_with_catch(work_fail) : 0.0208241939545
manual_with_catch_cache(work_pass) : 0.0138869285583
manual_with_catch_cache(work_fail) : 0.0213649272919
The overhead is much smaller than you claimed. Further, the only overhead PyPy doesn't seem to be able to remove for the manual variant, relative to the plain try/except, is object creation, which is trivially removed in this case.
Unfortunately with is way too involved for good optimization by CPython, especially with regards to contextlib which even PyPy finds hard to optimize. This is normally OK because although object creation + a function call + creating a generator is expensive, it's cheap compared to what is normally done.
If you are sure that with is causing most of your overhead, convert the context managers into cached instances like I have. If that's still too much overhead, you've likely got a bigger problem with how your system is designed. Consider making the scope of the withs bigger (not normally a good idea, but acceptable if need be).
Also, PyPy. Dat JIT be fast.