Skipping an exception in all Python tests

I'm using Python's unittest with pytest for integration testing a library against a third-party API.
Some of the API calls are temporarily returning an error which raises a specific exception in my code. This behaviour is fine in the code.
However, rather than having the tests fail, I'd rather skip these temporary errors.
I have over 150 tests. Rather than rewriting each and every test like this:
class TestMyLibrary(unittest.TestCase):
    def test_some_test(self):
        try:
            # run the test as normal
            # assert the normal behaviour
        except SomeException:
            # skip the test

    def test_some_other_test(self):
        try:
            # run the test as normal
            # assert the normal behaviour
        except SomeException:
            # skip the test
Can I rather wrap them all somehow at the class level, or similar?

If you expect this exception, why don't you check that it's raised when it should be?
You can use:
pytest.raises(SomeException, foo)  # pass the callable itself, not foo()
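For reference, the context-manager form usually reads more naturally inside a test (a sketch; run_the_call_that_should_fail is a hypothetical stand-in for the question's API call):

    def test_some_test(self):
        with pytest.raises(SomeException):
            run_the_call_that_should_fail()  # hypothetical call that raises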

This can be done with a decorator. For example:
import functools

import pylast
import pytest

def handle_lastfm_exceptions(f):
    @functools.wraps(f)
    def wrapper(*args, **kw):
        try:
            return f(*args, **kw)
        except pylast.WSError as e:
            if (str(e) == "Invalid Method - "
                    "No method with that name in this package"):
                msg = "Ignore broken Last.fm API: " + str(e)
                print(msg)
                pytest.skip(msg)
            else:
                raise
    return wrapper
And then decorate the problematic functions:
class TestMyLibrary(unittest.TestCase):
    @handle_lastfm_exceptions
    def test_some_bad_test(self):
        # run the test as normal
        # assert the normal behaviour

    def test_some_good_test(self):
        # run the test as normal
        # assert the normal behaviour
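To the class-level part of the question: the same decorator can be applied to every test method in one pass with a class decorator. A minimal sketch, assuming the handle_lastfm_exceptions decorator above (the skip_lastfm_errors_in_all_tests name is my own illustration):

    def skip_lastfm_errors_in_all_tests(cls):
        # Wrap every test_* method with handle_lastfm_exceptions.
        for name, attr in list(vars(cls).items()):
            if name.startswith("test") and callable(attr):
                setattr(cls, name, handle_lastfm_exceptions(attr))
        return cls

    @skip_lastfm_errors_in_all_tests
    class TestMyLibrary(unittest.TestCase):
        def test_some_test(self):
            ...  # runs with the exception handling applied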

I had the same problem (unstable 3rd-party library, waiting for a fix...) and ended up with something like this in conftest.py:
def pytest_runtest_makereport(item, call):
    from _pytest.runner import pytest_runtest_makereport as orig_pytest_runtest_makereport
    tr = orig_pytest_runtest_makereport(item, call)
    if call.excinfo is not None:
        if call.excinfo.type == SomeExceptionFromLibrary:
            tr.outcome = 'skipped'
            tr.wasxfail = "reason: SomeExceptionFromLibrary. shame on them..."
    return tr
works like a charm
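On newer pytest versions the same idea is usually written as a hook wrapper in conftest.py, rather than by importing the original hook from _pytest.runner. A sketch of that variant, assuming the same SomeExceptionFromLibrary:

    import pytest

    @pytest.hookimpl(hookwrapper=True)
    def pytest_runtest_makereport(item, call):
        outcome = yield
        report = outcome.get_result()
        if call.excinfo is not None and call.excinfo.type is SomeExceptionFromLibrary:
            # Report the test as skipped instead of failed.
            report.outcome = "skipped"
            report.wasxfail = "reason: flaky third-party library"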

Related

How can I provide a non-fixture pytest parameter via a custom decorator?

We have unit tests running via Pytest, which use a custom decorator to start up a context-managed mock echo server before each test, and provide its address to the test as an extra parameter. This works on Python 2.
However, if we try to run them on Python 3, then Pytest complains that it can't find a fixture matching the name of the extra parameter, and the tests fail.
Our tests look similar to this:
@with_mock_url('?status=404&content=test&content-type=csv')
def test_file_not_found(self, url):
    res_id = self._test_resource(url)['id']
    result = update_resource(None, res_id)
    assert not result, result
    self.assert_archival_error('Server reported status error: 404 Not Found', res_id)
With a decorator function like this:
from functools import wraps

def with_mock_url(url=''):
    """
    Start a MockEchoTestServer and call the decorated function with the
    server's address prepended to ``url``.
    """
    def decorator(func):
        @wraps(func)
        def decorated(*args, **kwargs):
            with MockEchoTestServer().serve() as serveraddr:
                return func(*(args + ('%s/%s' % (serveraddr, url),)), **kwargs)
        return decorated
    return decorator
On Python 2 this works; the mock server starts, the test gets a URL similar to "http://localhost:1234/?status=404&content=test&content-type=csv", and then the mock is shut down afterward.
On Python 3, however, we get an error, "fixture 'url' not found".
Is there perhaps a way to tell Python, "This parameter is supplied from elsewhere and doesn't need a fixture"? Or is there, perhaps, an easy way to turn this into a fixture?
You can take url as a *args parameter:
@with_mock_url('?status=404&content=test&content-type=csv')
def test_file_not_found(self, *url):
    url[0]  # the test url
Looks like Pytest is content to ignore it if I add a default value for the injected parameter, to make it non-mandatory:
@with_mock_url('?status=404&content=test&content-type=csv')
def test_file_not_found(self, url=None):
The decorator can then inject the value as intended.
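As for an easy way to turn this into a fixture: a minimal sketch of a yield fixture that replaces the decorator entirely, assuming the question's MockEchoTestServer; the use of indirect parametrization to pass the query string is my own illustration:

    import pytest

    @pytest.fixture
    def url(request):
        # The query string arrives via indirect parametrization, if any.
        query = getattr(request, 'param', '')
        with MockEchoTestServer().serve() as serveraddr:
            yield '%s/%s' % (serveraddr, query)

    @pytest.mark.parametrize(
        'url', ['?status=404&content=test&content-type=csv'], indirect=True)
    def test_file_not_found(url):
        ...  # same assertions as before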
Consider separating the URL from the service behind it. Using marks, and changing fixture behavior based on the presence of those marks, is clear enough. A mock should not really involve any communication, but if you must start some service, make it a separate fixture:
with_mock_url = pytest.mark.mock_url('http://www.darknet.go')

@pytest.fixture
def url(request):
    marker = request.get_closest_marker('mock_url')
    if marker:
        earl = marker.args[0] if marker.args else marker.kwargs['fake']
        if earl:
            return earl
    try:
        earl = request.param
    except AttributeError:
        earl = None
    return earl

@pytest.fixture
def server(request):
    marker = request.get_closest_marker('mock_url')
    if marker:
        # start fake_server
        ...

@with_mock_url
def test_resolve(url, server):
    server.request(url)

Implementing retry decorator one method higher than exception

I am trying to implement the retry decorator on a serial query. A general idea of my code is shown below. I am struggling to get it to retry when the decorator is one method up in the hierarchy. How can I have the method be retried when it's one method up from the method that throws the exception?
One complication that is frustrating is that my increment time per retry depends on the actual command. Some commands require more time than others. That's why I have the extra_time_per_retry passed in, and couldn't implement the retry decorator in the traditional @retry style.
FYI the _serial is created in the class on init via pySerial.
I got it to work with the retry decorator directly above the method that throws the exception. I would like it to be two above to keep my code clean.
I have tried feeding the retry decorator the exact exception type, but couldn't get it to work.
def _query_with_retries(self, cmd, extra_time_per_retry):
    # assuming the `retrying` package: from retrying import retry
    _retriable_query = retry(stop_max_attempt_number=5,
                             wait_incrementing_start=self._serial.timeout + extra_time_per_retry,
                             wait_incrementing_increment=10)(self._query)
    return _retriable_query(cmd)

def _query(self, cmd):
    cmd_msg = cmd + '\r'
    self._serial.reset_input_buffer()
    self._serial.reset_output_buffer()
    self._serial.write(cmd_msg)
    return self._readlines()

def _readlines(self):
    response_str = self._serial.read_until('\r', 256)  # Max 256 bytes
    # Parse response here; if a bad one, set bad_response = True
    if bad_response:
        raise ResponseError("Response had custom error xyz")
I guess you could wrap your call to _readlines() in an exception-handling block, re-raising the error:
python 3.x
@retry
def _query(self, cmd):
    cmd_msg = cmd + '\r'
    self._serial.reset_input_buffer()
    self._serial.reset_output_buffer()
    self._serial.write(cmd_msg)
    try:
        answer = self._readlines()
    except Exception as e:
        raise e
    return answer
python 2.x
import sys

@retry
def _query(self, cmd):
    cmd_msg = cmd + '\r'
    self._serial.reset_input_buffer()
    self._serial.reset_output_buffer()
    self._serial.write(cmd_msg)
    try:
        answer = self._readlines()
    except Exception:
        t, v, tb = sys.exc_info()
        raise t, v, tb
    return answer
This way, you catch the exception directly where it occurs and re-raise it inside the method that will be retried. I am not sure whether this declutters things enough for you, but it should work.
Some might complain about using a bare except Exception; however, since I always re-raise it immediately, I do not see any harm.
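For what it's worth, the `retrying` package also accepts a `retry_on_exception` predicate, so the wrapping in `_query_with_retries` can be restricted to the custom error without touching `_query` at all. A sketch under that assumption, reusing the question's numbers (note that `retrying` interprets wait times in milliseconds, which the question's values may not account for):

    from retrying import retry

    def _is_response_error(exc):
        # Only retry on the custom parse error; anything else propagates.
        return isinstance(exc, ResponseError)

    def _query_with_retries(self, cmd, extra_time_per_retry):
        retriable_query = retry(
            retry_on_exception=_is_response_error,
            stop_max_attempt_number=5,
            wait_incrementing_start=self._serial.timeout + extra_time_per_retry,
            wait_incrementing_increment=10,
        )(self._query)
        return retriable_query(cmd)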

How to throw exception from mocked instance's method?

This demo function I want to test is pretty straightforward.
def is_email_deliverable(email):
    try:
        return external.verify(email)
    except Exception:
        logger.error("External verify failed")
        return False
This function uses an external service which I want to mock out.
But I can't figure out how to throw an exception from external.verify(email), i.e. how to force the except clause to be executed.
My attempt:
@patch.object(other_module, 'external')
def test_is_email_deliverable(patched_external):
    def my_side_effect(email):
        raise Exception("Test")
    patched_external.verify.side_effects = my_side_effect
    # Or,
    # patched_external.verify.side_effects = Exception("Test")
    # Or,
    # patched_external.verify.side_effects = Mock(side_effect=Exception("Test"))
    assert is_email_deliverable("some_mail@domain.com") == False
This question claims to have the answer, but it didn't work for me.
You have used side_effects instead of side_effect.
It's something like this:
@patch.object(Class, "attribute")
def foo(attribute):
    attribute.side_effect = Exception()
    # Other things can go here
BTW, it's not a good approach to catch every Exception and handle them all the same way.
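Putting that together with the question's test (a sketch assuming Python 3's unittest.mock; the other_module import layout is assumed from the question):

    from unittest.mock import patch

    import other_module
    from other_module import is_email_deliverable  # hypothetical layout

    @patch.object(other_module, 'external')
    def test_is_email_deliverable(patched_external):
        # side_effect (singular) makes the mocked call raise.
        patched_external.verify.side_effect = Exception("Test")
        assert is_email_deliverable("some_mail@domain.com") is False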
You can also set the side_effect value back to None to clear it.

Handling Exceptions in Python Behave Testing framework

I've been thinking about switching from nose to behave for testing (mocha/chai etc have spoiled me). So far so good, but I can't seem to figure out any way of testing for exceptions besides:
#then("It throws a KeyError exception")
def step_impl(context):
try:
konfigure.load_env_mapping("baz", context.configs)
except KeyError, e:
assert (e.message == "No baz configuration found")
With nose I can annotate a test with
@raises(KeyError)
I can't find anything like this in behave (not in the source, not in the examples, not here). It sure would be grand to be able to specify exceptions that might be thrown in the scenario outlines.
Anyone been down this path?
I'm pretty new to BDD myself, but generally, the idea would be that the tests document what behaves the client can expect - not the step implementations. So I'd expect the canonical way to test this would be something like:
    When I try to load config baz
    Then it throws a KeyError with message "No baz configuration found"
With steps defined like:
@when('...')
def step(context):
    try:
        # do some loading here
        context.exc = None
    except Exception as e:
        context.exc = e

@then('it throws a {type} with message "{msg}"')
def step(context, type, msg):
    assert isinstance(context.exc, eval(type)), "Invalid exception - expected " + type
    assert context.exc.message == msg, "Invalid message - expected " + msg
If that's a common pattern, you could just write your own decorator:
def catch_all(func):
    def wrapper(context, *args, **kwargs):
        try:
            func(context, *args, **kwargs)
            context.exc = None
        except Exception as e:
            context.exc = e
    return wrapper

@when('... ...')
@catch_all
def step(context):
    # do some loading here - same as before
This try/except approach by Barry works, but I see some issues:
- Adding a try/except to your steps means that errors will be hidden.
- Adding an extra decorator is inelegant. I would like my decorator to be a modified @when.
My suggestion is to:
- declare the expected exception before the failing statement
- in the try/except, re-raise if the error was not expected
- in after_scenario, raise an error if the expected error was not found
- use the modified given/when/then everywhere
Code:
import behave

def given(regexp):
    return _wrapped_step(behave.given, regexp)  # pylint: disable=no-member

def then(regexp):
    return _wrapped_step(behave.then, regexp)  # pylint: disable=no-member

def when(regexp):
    return _wrapped_step(behave.when, regexp)  # pylint: disable=no-member

def _wrapped_step(step_function, regexp):
    def wrapper(func):
        """
        This corresponds to, for step_function=given:

        @given(regexp)
        @_accept_expected_exception
        def a_given_step_function(context, ...
        """
        return step_function(regexp)(_accept_expected_exception(func))
    return wrapper

def _accept_expected_exception(func):
    """
    If an error is expected, check if it matches the error.
    Otherwise raise it again.
    """
    def wrapper(context, *args, **kwargs):
        try:
            func(context, *args, **kwargs)
        except Exception as e:  # pylint: disable=W0703
            expected_fail = context.expected_fail
            # Reset expected fail; only try matching once.
            context.expected_fail = None
            if expected_fail:
                expected_fail.assert_exception(e)
            else:
                raise
    return wrapper

class ErrorExpected(object):
    def __init__(self, message):
        self.message = message

    def get_message_from_exception(self, exception):
        return str(exception)

    def assert_exception(self, exception):
        actual_msg = self.get_message_from_exception(exception)
        assert self.message == actual_msg, self.failmessage(exception)

    def failmessage(self, exception):
        msg = "Not getting expected error: {0}\nInstead got {1}"
        msg = msg.format(self.message, self.get_message_from_exception(exception))
        return msg

@given('the next step shall fail with')
def expect_fail(context):
    if context.expected_fail:
        msg = 'Already expecting failure:\n {0}'.format(context.expected_fail.message)
        context.expected_fail = None
        util.show_gherkin_error(msg)
    context.expected_fail = ErrorExpected(context.text)
I import my modified given/then/when instead of behave's, and in my environment.py I initialize context.expected_fail before each scenario and check it after:
def after_scenario(context, scenario):
    if context.expected_fail:
        msg = "Expected failure not found: %s" % (context.expected_fail.message)
        util.show_gherkin_error(msg)
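In a feature file, the expectation then reads something like this (my own sketch; the multi-line step text is what ends up in context.text):

    Given the next step shall fail with
        """
        No baz configuration found
        """
    When I try to load config baz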
The try/except approach you show is completely correct, because it reflects the way you would actually use the code in real life. However, there's a reason that you don't completely like it. It leads to ugly problems with things like the following:
Scenario: correct password accepted
    Given that I have a correct password
    When I attempt to log in
    Then I should get a prompt

Scenario: incorrect password rejected
    Given that I have an incorrect password
    When I attempt to log in
    Then I should get an exception
If I write the step definition without try/except then the second scenario will fail. If I write it with try/except then the first scenario risks hiding an exception, especially if the exception happens after the prompt has already been printed.
Instead those scenarios should, IMHO, be written as something like
Scenario: correct password accepted
    Given that I have a correct password
    When I log in
    Then I should get a prompt

Scenario: incorrect password rejected
    Given that I have an incorrect password
    When I try to log in
    Then I should get an exception
The "I log in" step should not use try; The "I try to log in" matches neatly to try and gives away the fact that there might not be success.
Then there comes the question about code reuse between the two almost, but not quite identical steps. Probably we don't want to have two functions which both login. Apart from simply having a common other function you call, you could also do something like this near the end of your step file.
@when(u'{who} try to {what}')
def step_impl(context, who, what):
    try:
        context.execute_steps(u'when ' + who + ' ' + what)
        context.exception = None
    except Exception as e:
        context.exception = e
This will automatically convert all steps containing the words "try to" into steps with the same name but with "try to" deleted, and then protect them with a try/except.
There are some questions about when you actually should deal with exceptions in BDD, since they aren't user-visible. That's not part of the answer to this question, though, so I've put them in a separate posting.
Behave is not in the assertion matcher business. Therefore, it does not provide a solution for this. There are already enough Python packages that solve this problem.
SEE ALSO: behave.example: Select an assertion matcher library
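For example, with PyHamcrest (one of those assertion-matcher packages; a sketch reusing the question's konfigure call):

    from hamcrest import assert_that, calling, raises

    @then("It throws a KeyError exception")
    def step_impl(context):
        assert_that(
            calling(konfigure.load_env_mapping).with_args("baz", context.configs),
            raises(KeyError, "No baz configuration found"))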

PyUnit: stop after first failing test?

I'm using the following code in my testing framework:
testModules = ["test_foo", "test_bar"]
suite = unittest.TestLoader().loadTestsFromNames(testModules)
runner = unittest.TextTestRunner(sys.stdout, verbosity=2)
results = runner.run(suite)
return results.wasSuccessful()
Is there a way to make the reporting (runner.run?) abort after the first failure to prevent excessive verbosity?
Nine years after the question was asked, this is still one of the top search results for "python unit test fail early" and, as I discovered when looking at the other search results, these answers are no longer correct for more recent versions of the unittest module.
The documentation for the unittest module (https://docs.python.org/3/library/unittest.html#command-line-options and https://docs.python.org/2.7/library/unittest.html#command-line-options) shows that there is an argument, failfast=True, that can be added to unittest.main, or equivalently a command-line option, -f or --failfast, to stop the test run on the first error or failure. This option was added in version 2.7, and using it is a lot easier than the previously necessary workarounds suggested in the other answers.
That is, simply change your
unittest.main()
to
unittest.main(failfast=True)
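Since the question's snippet builds a TextTestRunner directly rather than calling unittest.main, the same keyword applies there as well (failfast is a documented TextTestRunner argument since 2.7):

    runner = unittest.TextTestRunner(sys.stdout, verbosity=2, failfast=True)
    results = runner.run(suite)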
It's a feature. If you want to override this, you'll need to subclass TestCase and/or TestSuite classes and override logic in the run() method.
P.S.:
I think you have to subclass unittest.TestCase and override the run() method in your class:
def run(self, result=None):
    if result is None:
        result = self.defaultTestResult()
    result.startTest(self)
    testMethod = getattr(self, self._testMethodName)
    try:
        try:
            self.setUp()
        except KeyboardInterrupt:
            raise
        except:
            result.addError(self, self._exc_info())
            return
        ok = False
        try:
            testMethod()
            ok = True
        except self.failureException:
            result.addFailure(self, self._exc_info())
            result.stop()
        except KeyboardInterrupt:
            raise
        except:
            result.addError(self, self._exc_info())
            result.stop()
        try:
            self.tearDown()
        except KeyboardInterrupt:
            raise
        except:
            result.addError(self, self._exc_info())
            ok = False
        if ok:
            result.addSuccess(self)
    finally:
        result.stopTest(self)
(I've added two result.stop() calls to the default run definition).
Then you'll have to modify all your testcases to make them subclasses of this new class, instead of unittest.TestCase.
WARNING: I didn't test this code. :)
Based on Eugene's guidance, I've come up with the following:
class TestCase(unittest.TestCase):
    def run(self, result=None):
        if result.failures or result.errors:
            print "aborted"
        else:
            super(TestCase, self).run(result)
While this works fairly well, it's a bit annoying that each individual test module has to define whether it wants to use this custom class or the default one (a command-line switch, similar to py.test's --exitfirst, would be ideal)...
Building on AnC's answer, this is what I'm using...
original_run = unittest.TestCase.run

def aborting_run(self, result=None):
    if result.failures or result.errors:
        print "aborted"
    else:
        original_run(self, result)

unittest.TestCase.run = aborting_run
