Something similar has been asked before, but I'm struggling to get this to work.
How do I mock an imported module from another file?
I have one file:
b.py (named to be consistent with the linked docs)
import cv2  # module 'a' in the linked docs

def get_video_frame(path):
    vidcap = cv2.VideoCapture(path)  # `a.SomeClass` in the linked docs
    vidcap.isOpened()
    ...
test_b.py
import b
import pytest  # with pytest-mock installed

def test_get_frame(mocker):
    mock_vidcap = mocker.Mock()
    mock_vidcap.isOpened.side_effect = AssertionError
    mock_cv2 = mocker.patch('cv2.VideoCapture')
    mock_cv2.return_value = mock_vidcap
    b.get_video_frame('foo')  # Doesn't fail
    mock_vidcap.isOpened.assert_called()  # fails
I set the tests up like this because the where to patch section of the docs specifies that:
In this case the class we want to patch is being looked up on the a module and so we have to patch a.SomeClass instead:
@patch('a.SomeClass')
I've tried a few other combinations of patching, but they exhibit the same behavior, which suggests I'm not successfully patching the module. If the patch were applied, b.get_video_frame('foo') would fail due to the side_effect; having assert_called fail supports this.
Edit: in an effort to reduce the length of the question I left off the rest of get_video_frame. Unfortunately, the parts left off were the critical parts. The full function is:
def get_video_frame(path):
    vidcap = cv2.VideoCapture(path)  # `a.SomeClass` in the linked docs
    is_open = vidcap.isOpened()
    while True:
        is_open, frame = vidcap.read()
        if is_open:
            yield frame
        else:
            break
This line just creates a generator:
b.get_video_frame('foo')
The line is_open = vidcap.isOpened() is never reached, because the generator is never advanced in the test, so its body never runs and the side effect never raises.
You are otherwise using mocker and patch correctly.
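For example, a minimal version of the test that actually drives the generator (the names and the patch target are taken from the question):
import b
import pytest

def test_get_frame(mocker):
    mock_vidcap = mocker.Mock()
    mock_vidcap.isOpened.side_effect = AssertionError
    mock_cv2 = mocker.patch('cv2.VideoCapture')
    mock_cv2.return_value = mock_vidcap
    frames = b.get_video_frame('foo')     # nothing has executed yet
    with pytest.raises(AssertionError):
        next(frames)                      # the body runs and the side effect raises
    mock_vidcap.isOpened.assert_called()  # now passes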
I am seeking to better understand the following behavior when using dask.delayed to call a function that depends on parameters. The issue seems to arise when parameters are specified in a parameters file read by configparser. Here is a complete example:
parameter file:
#zpar.ini: parameter file for configparser
[my pars]
my_zpar = 2.
parser:
# zippy_parser
import configparser

def read(_rundir):
    global rundir
    rundir = _rundir
    cp = configparser.ConfigParser()
    cp.read(rundir + '/zpar.ini')
    # [my pars]
    global my_zpar
    my_zpar = cp['my pars'].getfloat('my_zpar')
and the main python file:
# dask test with configparser
import dask
from dask.distributed import Client

import zippy_parser as zpar

def my_func(x, y):
    # print stuff
    print("parameter from main is: {}".format(main_par))
    print("parameter from configparser is: {}".format(zpar.my_zpar))
    # do stuff
    return x + y

if __name__ == '__main__':
    client = Client(n_workers=4)

    # read parameters from input file
    rundir = '/path/to/parameter/file'
    zpar.read(rundir)

    # test zpar
    print("zpar is {}".format(zpar.my_zpar))

    # define parameter and call my_func
    main_par = 5.
    z = dask.delayed(my_func)(1., 2.)
    z.compute()

    client.close()
The first print statement in my_func() executes just fine, but the second print statement raises an exception. The output is:
zpar is 2.0
parameter from main is: 5.0
distributed.worker - WARNING - Compute Failed
Function: my_func
args: (1.0, 2.0)
kwargs: {}
Exception: AttributeError("module 'zippy_parser' has no attribute 'my_zpar'",)
I am new to dask. I suppose this has something to do with the serialization, which I do not understand. Can someone enlighten me and/or point to relevant documentation? Thanks!
I will try to keep this brief.
When a function is serialised in order to be sent to workers, Python also sends the local variables and functions needed by the function (its "closure"). However, it stores the modules the function references by name; it does not try to serialise your whole runtime.
This means that zippy_parser is imported in the worker, not deserialised. Since the function read has never been called in the worker, the module-level variable my_zpar is never initialised.
So you could call read in the workers, as part of your function or otherwise, but the pattern of setting module-global variables from within a function probably isn't great anyway. Dask's delayed mechanism prefers functional purity: the result you get should not depend on the current state of the runtime.
(note that if you had created the client after calling read in the main script, the workers might have got the in-memory version, depending on how subprocesses are configured to be created on your system)
I encourage you to pass in all parameters to your dask delayed functions explicitly, rather than relying on the global namespace.
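For example, a sketch of that last suggestion using the names from the question (the config value is read once in the main process and travels with the task as an ordinary argument, so the workers never need zippy_parser's globals):
import dask
from dask.distributed import Client

import zippy_parser as zpar

def my_func(x, y, my_zpar, main_par):
    print("parameter from main is: {}".format(main_par))
    print("parameter from configparser is: {}".format(my_zpar))
    return x + y

if __name__ == '__main__':
    client = Client(n_workers=4)
    rundir = '/path/to/parameter/file'
    zpar.read(rundir)  # still read in the main process only
    main_par = 5.
    # pass the values explicitly instead of relying on module globals
    z = dask.delayed(my_func)(1., 2., zpar.my_zpar, main_par)
    z.compute()
    client.close()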
I am using the tenacity library for its @retry decorator.
I am using it to make a function that performs an HTTP request retry multiple times in case of failure.
Here is a simple code snippet:
import requests
from tenacity import retry, stop_after_attempt, wait_random_exponential

@retry(stop=stop_after_attempt(7), wait=wait_random_exponential(multiplier=1, max=60))
def func():
    ...
    requests.post(...)
The function uses the tenacity wait-argument to wait some time in between calls.
The function together with the @retry decorator seems to work fine.
But I also have a unit test which checks that the function does indeed get called 7 times in case of failure. This test takes a lot of time because of the wait between tries.
Is it possible to somehow disable the wait time only in the unit test?
The solution came from the maintainer of tenacity himself in this Github issue: https://github.com/jd/tenacity/issues/106
You can simply change the wait function temporarily for your unit test:
from tenacity import wait_none
func.retry.wait = wait_none()
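For example, a minimal sketch of such a test (assuming, as in the question, that the requests.post call inside func is made to fail, and restoring the original wait strategy afterwards so other code keeps the real back-off):
from unittest import mock

import pytest
from tenacity import RetryError, wait_none

def test_func_retries_seven_times():
    original_wait = func.retry.wait
    func.retry.wait = wait_none()  # no pause between attempts
    try:
        # make every attempt fail so the retry loop is exhausted
        with mock.patch('requests.post', side_effect=ConnectionError):
            with pytest.raises(RetryError):
                func()
        assert func.retry.statistics['attempt_number'] == 7
    finally:
        func.retry.wait = original_wait  # restore the real back-off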
After reading the thread in the tenacity repo (thanks @DanEEStar for starting it!), I came up with the following code:
@retry(
    stop=stop_after_delay(20.0),
    wait=wait_incrementing(
        start=0,
        increment=0.25,
    ),
    retry=retry_if_exception_type(SomeExpectedException),
    reraise=True,
)
def func() -> None:
    raise SomeExpectedException()
def test_func_should_retry(monkeypatch: MonkeyPatch) -> None:
    # Use monkeypatch to patch retry behavior.
    # It will automatically revert patches when the test finishes.
    # Also, it doesn't create nested blocks as `unittest.mock.patch` does.

    # Originally, it was `stop_after_delay`, but the test could be
    # unreasonably slow this way. After all, I don't care so much
    # about which policy is applied exactly in this test.
    monkeypatch.setattr(
        func.retry, "stop", stop_after_attempt(3)
    )
    # Disable pauses between retries.
    monkeypatch.setattr(func.retry, "wait", wait_none())

    with pytest.raises(SomeExpectedException):
        func()

    # Ensure that there were retries.
    stats: Dict[str, Any] = func.retry.statistics
    assert "attempt_number" in stats
    assert stats["attempt_number"] == 3
I use pytest-specific features in this test. Probably, it could be useful as an example for someone, at least for future me.
Thanks to the discussion here, I found an elegant way to do this based on code from @steveb:
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(reraise=True, stop=stop_after_attempt(5), wait=wait_exponential(multiplier=1, min=4, max=10))
def do_something_flaky(succeed):
    print('Doing something flaky')
    if not succeed:
        print('Failed!')
        raise Exception('Failed!')
And tests:
from unittest import TestCase, mock, skip

from main import do_something_flaky

class TestFlakyRetry(TestCase):
    def test_succeeds_instantly(self):
        try:
            do_something_flaky(True)
        except Exception:
            self.fail('Flaky function should not have failed.')

    def test_raises_exception_immediately_with_direct_mocking(self):
        do_something_flaky.retry.sleep = mock.Mock()
        with self.assertRaises(Exception):
            do_something_flaky(False)

    def test_raises_exception_immediately_with_indirect_mocking(self):
        with mock.patch('main.do_something_flaky.retry.sleep'):
            with self.assertRaises(Exception):
                do_something_flaky(False)

    @skip('Takes way too long to run!')
    def test_raises_exception_after_full_retry_period(self):
        with self.assertRaises(Exception):
            do_something_flaky(False)
Mock the base class wait function with:
mock.patch('tenacity.BaseRetrying.wait', side_effect=lambda *args, **kwargs: 0)
so it never waits.
You can use the unittest.mock module to mock some elements of the tenacity library. In your case all the decorators you use are classes, e.g. retry is a decorator class defined here. So it might be a little bit tricky, but I think trying to
mock.patch('tenacity.wait.wait_random_exponential.__call__', ...)
may help.
I wanted to override the retry function of the retry attribute, and while that sounds obvious, if you are playing with this for the first time it doesn't look right, but it is:
sut.my_func.retry.retry = retry_if_not_result(lambda x: True)
Thanks to the others for pointing me in the right direction.
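For example, in a test this could look roughly like the following (a sketch only; sut is a hypothetical module holding the decorated my_func, and the original strategy is restored afterwards):
from tenacity import retry_if_not_result

import sut  # hypothetical module containing the decorated my_func

def test_my_func_without_retries():
    original_retry = sut.my_func.retry.retry
    # a predicate that accepts any result, so nothing triggers another attempt
    sut.my_func.retry.retry = retry_if_not_result(lambda result: True)
    try:
        sut.my_func()
    finally:
        sut.my_func.retry.retry = original_retry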
You can mock tenacity.nap.time in a conftest.py in the root folder of your unit tests:
@pytest.fixture(autouse=True)
def tenacity_wait(mocker):
    mocker.patch('tenacity.nap.time')
I am wondering if there is a way to create macros or aliases for functions in Python 2.7.
Example: I am trying to use the logging module and create aliases/macros for functions logging.debug, logging.info, logging.error, etc. If I use those functions as they are in the place where I want the log, everything works fine. But if I try to create an 'alias' function wrapper like this:
def debugLog(message):
    logging.debug(message)
... then the line number reporting no longer works as intended: the line reported is always the location of the wrapper rather than the actual log call, which isn't of any real use.
I did find this solution:
import logging
from logging import info as infoLog
from logging import debug as debugLog
from logging import error as errorLog
....
... but it is not suitable for me since I also create my own logging severity:
logging.addLevelName(60, "NORMAL")
... and I'd like to create an alias/macro like normalLog(message) = logging.log(60, message) for it as well, if that's possible. I couldn't find anything comprehensive in the Python docs or online.
You can use functools.partial:
import functools
import logging
normalLog = functools.partial(logging.log, 60)
It works pretty well:
normalLog("Hey!!")
Level 60:root:Hey!!
partial binds arguments to a function call and returns a partial object (a callable that holds the necessary information), so you can also use it with addLevelName:
activateLevel = functools.partial(logging.addLevelName, 60, "NORMAL")
activateLevel()
Here you have a live working example; notice that the log line is properly reported.
You can use a frame object to get the line number. You can get a frame object in a number of ways; in the example below I use sys._getframe(), where the argument 1 gives the previous stack frame. Note that sys._getframe() is not guaranteed to be present on non-CPython implementations. Several other functions return frame objects, including those in the inspect module.
import sys

def debugLog(message):
    line = sys._getframe(1).f_lineno
    print line, ':', message


x = 42
print x
debugLog("A")
y = x + 1
print y
debugLog("B")
Gives:
42
10 : A
43
13 : B
The application is written with Kivy.
I want to test a function via pytest, but in order to test it I need to initialize the object first. However, the object needs something from the UI when initializing, and since I am in the testing phase I don't know how to retrieve that something from the UI.
This is the class in which the error occurs and is handled:
class SaltConfig(GridLayout):
    def check_phone_number_on_first_contact(self, button):
        s = self.instanciate_ServerMsg(tt)
        try:
            s.send()
        except HTTPError as err:
            print("[HTTPError] : " + str(err.code))
            return
        # some code when running without error

    def instanciate_ServerMsg():
        return ServerMsg()
This is the helper class which generates the ServerMsg object used by the former class.
class ServerMsg(OrderedDict):
    def send(self, answerCallback=None):
        # send something to the server via urllib.urlopen
This is my test code:
class TestSaltConfig:
    def test_check_phone_number_on_first_contact(self):
        myError = HTTPError(url="http://127.0.0.1", code=500,
                            msg="HTTP Error Occurs", hdrs="donotknow", fp=None)
        mockServerMsg = mock.Mock(spec=ServerMsg)
        mockServerMsg.send.side_effect = myError

        sc = SaltConfig(ds_config_file_missing.data_store)

        def mockreturn():
            return mockServerMsg

        monkeypatch.setattr(sc, 'instanciate_ServerMsg', mockreturn)
        sc.check_phone_number_on_first_contact()
I can't initialize the object; it throws an AttributeError when initializing since it needs some values from the UI.
So I'm stuck.
I tried to mock the object and then patch the function back to the original one, but that doesn't work either, since the function itself has logic related to the UI.
How can I solve this? Thanks.
I wrote an article about testing Kivy apps together with a simple runner - KivyUnitTest. It works with unittest, not with pytest, but it shouldn't be hard to rewrite it so that it fits your needs. In the article I explain how to "penetrate" the main loop of the UI, and this way you can happily do things like this with a button:
button = <button you found in widget tree>
button.dispatch('on_release')
and much more. Basically you can do anything with such a test, and you don't need to test each function independently. I mean, that's good practice, but sometimes (mainly when testing UI) you can't just rip the thing out and put it into a nice 50-line test.
This way you do exactly the same thing a casual user would do when using your app, and therefore you can even catch issues you'd have trouble with when testing the usual way, e.g. weird/unexpected user behaviour.
Here's the skeleton:
import unittest
import os
import sys
import time
import os.path as op
from functools import partial
from kivy.clock import Clock

# when you have a test in <root>/tests/test.py
main_path = op.dirname(op.dirname(op.abspath(__file__)))
sys.path.append(main_path)

from main import My

class Test(unittest.TestCase):
    def pause(*args):
        time.sleep(0.000001)

    # main test function
    def run_test(self, app, *args):
        Clock.schedule_interval(self.pause, 0.000001)

        # Do something

        # Comment out if you are editing the test, it'll leave the
        # Window opened.
        app.stop()

    def test_example(self):
        app = My()
        p = partial(self.run_test, app)
        Clock.schedule_once(p, 0.000001)
        app.run()

if __name__ == '__main__':
    unittest.main()
However, as Tomas said, you should separate UI and logic where possible, or better said, where it's efficient to do so. You don't want to mock your whole big application just to test a single function that requires communication with the UI.
I finally made it work. This just gets things done; I think there must be a more elegant solution. The idea is simple, given that all the lines are simply value assignments except for the s.send() statement.
We just mock the original object: every time an error pops up in the testing phase (because the object lacks some values from the UI), we mock that part, and we repeat this step until the test method can finally check whether the function handles the HTTPError or not.
In this example we only need to mock a PhoneNumber class, which is lucky, but sometimes we may need to handle more, so obviously @KeyWeeUsr's answer is a better choice for a production environment. But I list my thinking here for somebody who wants a quick solution.
@pytest.fixture
def myHTTPError(request):
    """
    Generate an HTTPError with the passed-in parameters
    from pytest_generate_tests(metafunc).
    """
    httpError = HTTPError(url="http://127.0.0.1", code=request.param,
                          msg="HTTP Error Occurs", hdrs="donotknow", fp=None)
    return httpError

class TestSaltConfig:
    def test_check_phone_number(self, myHTTPError, ds_config_file_missing):
        """
        Raise an HTTP 500 error, and invoke the original function with this error.
        Test to see if it can handle it; if it can't, the test will fail.
        The function is located in configs.py, line 211.
        This test will run 2 times with different HTTP status codes, 404 and 500.
        """
        # A setup class used to cover the runtime error,
        # since a Mock object can't fake properties created via __init__()
        class PhoneNumber:
            text = "610274598038"

        # Mock the ServerMsg class, and apply the custom
        # HTTPError to the send() method
        mockServerMsg = mock.Mock(spec=ServerMsg)
        mockServerMsg.send.side_effect = myHTTPError

        # Mock the SaltConfig class and change some of its
        # members to our custom ones
        mockSalt = mock.Mock(spec=SaltConfig)
        mockSalt.phoneNumber = PhoneNumber()
        mockSalt.instanciate_ServerMsg.return_value = mockServerMsg
        mockSalt.dataStore = ds_config_file_missing.data_store

        # Make check_phone_number_on_first_contact()
        # refer to the original function
        mockSalt.check_phone_number_on_first_contact = SaltConfig.check_phone_number_on_first_contact

        # Call the function to do the test
        mockSalt.check_phone_number_on_first_contact(mockSalt, "button")
I am generating LLVM IR code using llvmlite and Python. I generate code for many functions inside a single module. The problem is that when an exception occurs while the code for one of those functions is being generated, the whole module's code generation gets corrupted. I would like a way to recover from the exception by telling the module: "Hey, forget that function altogether" before taking other actions. For example:
# Create function
func = ir.Function(module, functype, funcname)

# Create the entry BB in the function and set a new builder to it.
bb_entry = func.append_basic_block('entry')
builder = ir.IRBuilder(bb_entry)

try:
    # Generate code for func with the builder ...
    ...
except:
    # Oops, a problem occurred while generating code.
    # Remove func from the module: how to do that?
    del module.globals[funcname]  # does not work...
Any help?