side_effect function of PropertyMock gets called only once - python

I have two questions regarding the mocking of a property with unittest.mock.PropertyMock (code will follow below):
Why does the test test_prop output "text_0, text_0" and not "text_0, text_1" like the test test_func does? The output indicates that the function side_effect_func in test_prop gets called only once, while I would have expected it to be called twice, as in test_func.
How can I specify a side_effect function for a property that gets called every time the property is accessed?
My use case is that I would like a mock that returns a different name (which is a property) depending on how often it was called. This would "simulate" two different instances of Class1 being passed to Class2 in the following minimal example.
The code:
File dut.py:
class Class1():
    def __init__(self, name):
        self.__name = name

    @property
    def name(self):
        return self.__name

    def name_func(self):
        return self.__name


class Class2():
    def __init__(self, name, class1):
        self.__name = name
        self.__class1 = class1

    @property
    def name(self):
        return self.__name

    @property
    def class1(self):
        return self.__class1
File test\test_dut.py (the second with-statement produces the exact same behavior when swapped with the first one):
import dut
import unittest
from unittest.mock import patch, PropertyMock


class TestClass2(unittest.TestCase):
    def test_func(self):
        side_effect_counter = -1

        def side_effect_func(_):
            nonlocal side_effect_counter
            side_effect_counter += 1
            return f'text_{side_effect_counter}'

        c2_1 = dut.Class2('class2', dut.Class1('class1'))
        c2_2 = dut.Class2('class2_2', dut.Class1('class1_2'))
        with patch('test_dut.dut.Class1.name_func', side_effect=side_effect_func, autospec=True):
            print(f'{c2_2.class1.name_func()}, {c2_1.class1.name_func()}')

    def test_prop(self):
        side_effect_counter = -1

        def side_effect_func():
            nonlocal side_effect_counter
            side_effect_counter += 1
            return f'text_{side_effect_counter}'

        c2_1 = dut.Class2('class2', dut.Class1('class1'))
        c2_2 = dut.Class2('class2_2', dut.Class1('class1_2'))
        with patch.object(dut.Class1, 'name', new_callable=PropertyMock(side_effect=side_effect_func)):
        # with patch('test_dut.dut.Class1.name', new_callable=PropertyMock(side_effect=side_effect_func)):
            print(f'{c2_2.class1.name}, {c2_1.class1.name}')
Call from command line: pytest -rP test\test_dut.py
This leads to the following output (problematic line marked by me):
============================================================================================== test session starts ==============================================================================================
platform win32 -- Python 3.9.12, pytest-7.1.2, pluggy-1.0.0
rootdir: C:\Users\klosemic\Documents\playground_mocks
plugins: hypothesis-6.46.5, cov-3.0.0, forked-1.4.0, html-3.1.1, metadata-2.0.1, xdist-2.5.0
collected 2 items
test\test_dut.py .. [100%]
==================================================================================================== PASSES =====================================================================================================
_____________________________________________________________________________________________ TestClass2.test_func ______________________________________________________________________________________________
--------------------------------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------------------------------
text_0, text_1
_____________________________________________________________________________________________ TestClass2.test_prop ______________________________________________________________________________________________
--------------------------------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------------------------------
text_0, text_0 <<<<<< HERE IS THE PROBLEM
=============================================================================================== 2 passed in 0.46s ===============================================================================================

The issue has to do with how you are instantiating the PropertyMock. To answer your first question about why the second test prints text_0 for both calls: you instantiate the PropertyMock class yourself in the line with patch.object(dut.Class1, 'name', new_callable=PropertyMock(side_effect=side_effect_func)).
Because you pass an already-created instance, it gets called once, immediately, at the end of that line: the __enter__ method of the patch object treats new_callable as a factory and calls it to build the replacement attribute. As a result, what actually gets patched in is a plain string, essentially the output of the first (and only) call to the PropertyMock. This can be confirmed by changing your code to the following and observing the output:
with patch.object(dut.Class1, 'name', new_callable=PropertyMock(side_effect=side_effect_func)) as mock_prop:
    # print(f'{c2_2.class1.name}, {c2_1.class1.name}')
    print(mock_prop)
You will notice that this prints text_0 in the console, confirming what has been mentioned above.
To answer your second question, the way to use PropertyMock in this case would be to change the second test to the following:
with patch.object(dut.Class1, 'name', new_callable=PropertyMock) as mock_prop:
    mock_prop.side_effect = side_effect_func
    print(f'{c2_2.class1.name}, {c2_1.class1.name}')
Then when you run the tests you get the correct output as shown below.
============================================================= test session starts =============================================================
platform darwin -- Python 3.8.9, pytest-7.0.1, pluggy-1.0.0
rootdir: ***
plugins: asyncio-0.18.3, mock-3.7.0
asyncio: mode=strict
collected 2 items
tests/test_dut.py .. [100%]
=================================================================== PASSES ====================================================================
____________________________________________________________ TestClass2.test_func _____________________________________________________________
------------------------------------------------------------ Captured stdout call -------------------------------------------------------------
text_0, text_1
____________________________________________________________ TestClass2.test_prop _____________________________________________________________
------------------------------------------------------------ Captured stdout call -------------------------------------------------------------
text_0, text_1
============================================================== 2 passed in 0.01s ==============================================================
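An equivalent variant, relying on mock's documented behavior of forwarding extra keyword arguments of patch on to new_callable, is to let patch construct the PropertyMock itself and hand it the side_effect directly. A minimal sketch, reusing the objects from the question:
import dut
from unittest.mock import patch, PropertyMock

def demo():
    counter = -1

    def side_effect_func():
        nonlocal counter
        counter += 1
        return f'text_{counter}'

    c2_1 = dut.Class2('class2', dut.Class1('class1'))
    c2_2 = dut.Class2('class2_2', dut.Class1('class1_2'))
    # PropertyMock is only instantiated inside patch's __enter__, so the
    # side_effect stays a callable and runs on every property access
    with patch.object(dut.Class1, 'name', new_callable=PropertyMock,
                      side_effect=side_effect_func):
        print(f'{c2_2.class1.name}, {c2_1.class1.name}')  # expected: text_0, text_1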

Related

Mocking class method with pytest-mock returns error

I'm trying to mock a class method with pytest-mock. I have the code below in a single file, and when the test is run I get ModuleNotFoundError: No module named 'RealClass' in the patch function. How to make this work?
class RealClass:
    def some_function():
        return 'real'

def function_to_test():
    x = RealClass()
    return x.some_function()

def test_the_function(mocker):
    mock_function = mocker.patch('RealClass.some_function')
    mock_function.return_value = 'mocked'
    ret = function_to_test()
    assert ret == 'mocked'
In your case, since you are patching a class that is defined within the test file itself, you should use mocker.patch.object.
mock_function = mocker.patch.object(RealClass, 'some_function')
collected 1 item
tests/test_grab.py::test_the_function PASSED [100%]
============================== 1 passed in 0.03s ===============================
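For reference, a self-contained sketch of the fixed test file (assuming the pytest-mock plugin is installed, which provides the mocker fixture; the file name test_grab.py matches the output above):
# test_grab.py
class RealClass:
    def some_function(self):
        return 'real'

def function_to_test():
    x = RealClass()
    return x.some_function()

def test_the_function(mocker):
    # patch.object works on the class object itself, so no importable
    # dotted module path such as 'RealClass.some_function' is needed
    mock_function = mocker.patch.object(RealClass, 'some_function')
    mock_function.return_value = 'mocked'
    assert function_to_test() == 'mocked'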

How to access marks list outside test script

I have a mark, let's say specific_case = pytest.mark.skipif(<CONDITION>), which I need to apply to some test cases. I want the property value to return a different value when the mark is applied. This is my simplified code:
module.py:
import pytest

class A():
    @property
    def value(self):
        _marks = pytest.mark._markers  # current code to get applied marks list
        if 'specific_case' in _marks:
            return 1
        else:
            return 2
test_1.py:
import pytest
from module import A

pytestmark = [pytest.mark.test_id.TC_1, pytest.mark.specific_case]

def test_1():
    a = A()
    assert a.value == 1
But that doesn't work, as pytest.mark._markers returns set(['TC_1', 'skipif']) rather than the exact pytestmark list (I expect set(['TC_1', 'specific_case']), or at least pytestmark as it is: [pytest.mark.test_id.TC_1, pytest.mark.specific_case]).
So is there any way I can access the exact pytestmark list outside the test function?
P.S. I also found some tips on how to get the mark list using fixtures, but I have to stick to the current implementation of module.py and test_1.py, so I cannot use a fixture.
Also, there are many other marks with skip conditions (specific_case_2 = pytest.mark.skipif(<CONDITION_2>), specific_case_3 = pytest.mark.skipif(<CONDITION_3>), ...), so I cannot just use an if 'skipif' in _marks check.
Since your module.py accesses pytest marks, it is safe to assume that it is part of the test code.
With that said, if you are open to changing the class property A.value into a pytest fixture, the alternative solution below might work for you. Otherwise, it won't suffice.
Alternative Solution
Instead of using pytest.mark._markers to retrieve the marks list, use request.keywords.
From the docs for class FixtureRequest:
keywords: Keywords/markers dictionary for the underlying node.
import pytest

# Data
class A():
    @property
    def value(self):
        _marks = pytest.mark._markers  # Current code to get applied marks list
        print("Using class property A.value:", list(_marks))
        if 'specific_case' in _marks:
            return 1
        else:
            return 2

@pytest.fixture
def a_value(request):  # This fixture can be in conftest.py so all test files can see it. Or use pytest_plugins to include the file containing this.
    _marks = request.keywords  # Alternative style of getting applied marks list
    print("Using pytest fixture a_value:", list(_marks))
    if 'specific_case' in _marks:
        return 1
    else:
        return 2

# Tests
pytestmark = [pytest.mark.test_id, pytest.mark.specific_case]

def test_first():
    a = A()
    assert a.value != 1  # 'specific_case' was not recognized as a marker

def test_second(a_value):
    assert a_value == 1  # 'specific_case' was recognized as a marker
Output:
pytest -q -rP --disable-pytest-warnings
.. [100%]
================================================================================================= PASSES ==================================================================================================
_______________________________________________________________________________________________ test_first ________________________________________________________________________________________________
------------------------------------------------------------------------------------------ Captured stdout call -------------------------------------------------------------------------------------------
Using class property A.value: ['parametrize', 'skipif', 'skip', 'trylast', 'filterwarnings', 'tryfirst', 'usefixtures', 'xfail']
_______________________________________________________________________________________________ test_second _______________________________________________________________________________________________
------------------------------------------------------------------------------------------ Captured stdout setup ------------------------------------------------------------------------------------------
Using pytest fixture a_value: ['specific_case', '2', 'test_1.py', 'test_second', 'test_id']
2 passed, 2 warnings in 0.01s
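On reasonably recent pytest versions, the same check can be written a bit more explicitly from a fixture with request.node.get_closest_marker(). A minimal sketch, using the fixture and marker names from the example above:
import pytest

@pytest.fixture
def a_value(request):
    # get_closest_marker returns the Mark object if the test (or its class/module)
    # carries the marker, and None otherwise
    if request.node.get_closest_marker('specific_case') is not None:
        return 1
    return 2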

How to run unittest test cases in the order they are declared

I fully realize that the order of unit tests should not matter. But these unit tests are as much for instructional use as for actual unit testing, so I would like the test output to match up with the test case source code.
I see that there is a way to set the sort order by setting the sortTestMethodsUsing attribute on the test loader. The default is a simple cmp() call to lexically compare names. So I tried writing a cmp-like function that would take two names, find their declaration line numbers and then return the cmp()-equivalent comparison of them:
import unittest

class TestCaseB(unittest.TestCase):
    def test(self):
        print("running test case B")

class TestCaseA(unittest.TestCase):
    def test(self):
        print("running test case A")

import inspect

def get_decl_line_no(cls_name):
    cls = globals()[cls_name]
    return inspect.getsourcelines(cls)[1]

def sgn(x):
    return -1 if x < 0 else 1 if x > 0 else 0

def cmp_class_names_by_decl_order(cls_a, cls_b):
    a = get_decl_line_no(cls_a)
    b = get_decl_line_no(cls_b)
    return sgn(a - b)

unittest.defaultTestLoader.sortTestMethodsUsing = cmp_class_names_by_decl_order

unittest.main()
When I run this, I get this output:
running test case A
.running test case B
.
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK
indicating that the test cases are not running in the declaration order.
My sort function is just not being called, so I suspect that main() is building a new test loader, which is wiping out my sort function.
The solution is to create a TestSuite explicitly, instead of letting unittest.main() follow all its default test discovery and ordering behavior. Here's how I got it to work:
import unittest

class TestCaseB(unittest.TestCase):
    def runTest(self):
        print("running test case B")

class TestCaseA(unittest.TestCase):
    def runTest(self):
        print("running test case A")

import inspect

def get_decl_line_no(cls):
    return inspect.getsourcelines(cls)[1]

# get all test cases defined in this module
test_case_classes = list(filter(lambda c: c.__name__ in globals(),
                                unittest.TestCase.__subclasses__()))

# sort them by decl line no
test_case_classes.sort(key=get_decl_line_no)

# make into a suite and run it
suite = unittest.TestSuite(cls() for cls in test_case_classes)
unittest.TextTestRunner().run(suite)
This gives the desired output:
running test case B
.running test case A
.
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK
It is important to note that the test method in each class must be named runTest.
You can manually build a TestSuite where your TestCases and all tests inside them run by line number:
# Python 3.8.3
import unittest
import sys
import inspect

def isTestClass(x):
    return inspect.isclass(x) and issubclass(x, unittest.TestCase)

def isTestFunction(x):
    return inspect.isfunction(x) and x.__name__.startswith("test")

class TestB(unittest.TestCase):
    def test_B(self):
        print("Running test_B")
        self.assertEqual((2+2), 4)

    def test_A(self):
        print("Running test_A")
        self.assertEqual((2+2), 4)

    def setUpClass():
        print("TestB Class Setup")

class TestA(unittest.TestCase):
    def test_A(self):
        print("Running test_A")
        self.assertEqual((2+2), 4)

    def test_B(self):
        print("Running test_B")
        self.assertEqual((2+2), 4)

    def setUpClass():
        print("TestA Class Setup")

def suite():
    # get current module object
    module = sys.modules[__name__]
    # get all test className,class tuples in current module
    testClasses = [
        tup for tup in
        inspect.getmembers(module, isTestClass)
    ]
    # sort classes by line number
    testClasses.sort(key=lambda t: inspect.getsourcelines(t[1])[1])
    testSuite = unittest.TestSuite()
    for testClass in testClasses:
        # get list of testFunctionName,testFunction tuples in current class
        classTests = [
            tup for tup in
            inspect.getmembers(testClass[1], isTestFunction)
        ]
        # sort TestFunctions by line number
        classTests.sort(key=lambda t: inspect.getsourcelines(t[1])[1])
        # create TestCase instances and add to testSuite
        for test in classTests:
            testSuite.addTest(testClass[1](test[0]))
    return testSuite

if __name__ == '__main__':
    runner = unittest.TextTestRunner()
    runner.run(suite())
Output:
TestB Class Setup
Running test_B
.Running test_A
.TestA Class Setup
Running test_A
.Running test_B
.
----------------------------------------------------------------------
Ran 4 tests in 0.000s
OK
As stated in the name, sortTestMethodsUsing is used to sort test methods. It is not used to sort classes. (It is not used to sort methods in different classes either; separate classes are handled separately.)
If you had two test methods in the same class, sortTestMethodsUsing would be used to determine their order. (At that point, you would get an exception because your function expects class names.)
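For completeness, a minimal sketch of what sortTestMethodsUsing is actually meant for: ordering the test methods of a single TestCase class. The loader expects a cmp-style function over method names; the method names below are illustrative:
import unittest

class TestOrdering(unittest.TestCase):
    def test_alpha(self):
        print("alpha")

    def test_beta(self):
        print("beta")

def reverse_name_cmp(a, b):
    # cmp()-style result: negative, zero or positive; this reverses lexical order
    return (a < b) - (a > b)

unittest.defaultTestLoader.sortTestMethodsUsing = reverse_name_cmp

if __name__ == '__main__':
    unittest.main()  # runs test_beta before test_alpha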

How do I get the name of a pytest mark for a test function?

import pytest

class TestSomething(object):
    @pytest.mark.somethinga
    def test_something(self):
        ...
I think I should use inspect.py (introspection) but I haven't found how to do it. Many thanks guys!
There is no need to use inspect here; the decorator adds an attribute to the function object, it doesn't wrap the object. From the source code:
holder = getattr(func, self.name, None)
if holder is None:
    holder = MarkInfo(
        self.name, self.args, self.kwargs
    )
    setattr(func, self.name, holder)
else:
    holder.add(self.args, self.kwargs)
you can detect what names were set by iterating over the function attributes:
from _pytest.mark import MarkInfo

def function_marks(func):
    return [name for name, ob in vars(func).items() if isinstance(ob, MarkInfo)]
or you can just test for the attribute name and assume it is a marker instance:
if hasattr(TestSomething.test_something, 'somethinga'):
    # the somethinga mark is set
    ...
Demo:
>>> import pytest
>>> from _pytest.mark import MarkInfo
>>> @pytest.mark.foo
... @pytest.mark.bar
... def demo(): pass
...
>>> [name for name, ob in vars(demo).items() if isinstance(ob, MarkInfo)]
['foo', 'bar']
>>> demo.foo
<MarkInfo 'foo' args=() kwargs={}>
>>> demo.bar
<MarkInfo 'bar' args=() kwargs={}>
So basically, when you call pytest.mark.<attribute> it sets that <attribute> as an instance of _pytest.mark.MarkInfo() on the function you've decorated / marked.
Example:
Code:
import pytest
from _pytest.mark import MarkInfo

@pytest.mark.foo
def test_foo():
    assert hasattr(test_foo, "foo")
    assert isinstance(test_foo.foo, MarkInfo)

def test_bar():
    assert not hasattr(test_bar, "foo")
Output:
$ py.test -x -s -v test_foo.py
======================================= test session starts ========================================
platform linux2 -- Python 2.7.9 -- py-1.4.20 -- pytest-2.5.2 -- /home/prologic/bin/python
cachedir: /home/prologic/tmp/.cache
plugins: pylama, cov, cache, pep8, flakes
collected 2 items
test_foo.py:5: test_foo PASSED
test_foo.py:11: test_bar PASSED
===================================== 2 passed in 0.01 seconds =====================================
The "name" of the "mark" is also stored in MarkInfo() as the attribute name:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> PDB set_trace (IO-capturing turned off) >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /home/prologic/tmp/test_foo.py(8)test_foo()
-> assert hasattr(test_foo, "foo")
(Pdb) dir(test_foo.foo)
['__doc__', '__init__', '__iter__', '__module__', '__repr__', '_arglist', 'add', 'args', 'kwargs', 'name']
(Pdb) test_foo.foo.name
'foo'
Not sure if this could help you, but I had a similar issue with a function that I had parametrized to run in the same way over different inputs. Some of these inputs I was expecting to xfail, and for those I wanted to perform a particular follow-up action. The parameter zipped_test_parameters_list contained all the zipped input parameters together with the specific pytest.marks I had created for each input set; these are applied when you use the pytest.mark.parametrize decorator. In this case, I could not find the mark specific to the run with that particular set of parameters using the approaches in the answers above. Here's how I got it to do what I wanted, which was to see whether this particular test case had been marked with xfail.
@pytest.mark.parametrize("stage_sql_data,stage_mongo_data,test_data",
                         zipped_test_parameters_list,
                         ids=test_ids)
def test_my_test(stage_sql_data, stage_mongo_data, test_data,
                 tmpdir_factory, request):
    # Do my test steps
    # Assertions
    if 'xfail' in [marker.name for marker in request.node.own_markers]:
        do_some_specific_action()
Test function marker:
import pytest

@pytest.mark.major_test
def test_foo():
    print("Marker:")
    print(test_foo.pytestmark[0].name)
    assert(True)
Pytest input marker:
import pytest

@pytest.mark.major_test
def test_bar():
    print("Marker:")
    print(pytest.config.option.markexpr)
    assert(True)
I wondered the same thing until I discovered pytest has a request fixture which contains its own markers you can iterate over
@pytest.mark.my_custom_mark
def test_my_method(request):
    using_my_custom_mark = False
    for mark in request.node.own_markers:
        if mark.name == 'my_custom_mark':
            using_my_custom_mark = True
            break
    assert using_my_custom_mark is True
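On current pytest versions the same check can be written more directly with request.node.get_closest_marker() or request.node.iter_markers(). A minimal sketch of the former:
import pytest

@pytest.mark.my_custom_mark
def test_my_method(request):
    # get_closest_marker returns the Mark object if the marker is applied
    # to the test (or its class/module), and None otherwise
    assert request.node.get_closest_marker('my_custom_mark') is not None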

Getting Python's unittest results in a tearDown() method

Is it possible to get the results of a test (i.e. whether all assertions have passed) in a tearDown() method? I'm running Selenium scripts, and I'd like to do some reporting from inside tearDown(), however I don't know if this is possible.
As of March 2022 this answer is updated to support Python versions between 3.4 and 3.11 (including the newest development Python version). Classification of errors / failures is the same that is used in the output unittest. It works without any modification of code before tearDown(). It correctly recognizes decorators skipIf() and expectedFailure. It is compatible also with pytest.
Code:
import unittest

class MyTest(unittest.TestCase):
    def tearDown(self):
        if hasattr(self._outcome, 'errors'):
            # Python 3.4 - 3.10 (These two methods have no side effects)
            result = self.defaultTestResult()
            self._feedErrorsToResult(result, self._outcome.errors)
        else:
            # Python 3.11+
            result = self._outcome.result
        ok = all(test != self for test, text in result.errors + result.failures)

        # Demo output: (print short info immediately - not important)
        if ok:
            print('\nOK: %s' % (self.id(),))
        for typ, errors in (('ERROR', result.errors), ('FAIL', result.failures)):
            for test, text in errors:
                if test is self:
                    # the full traceback is in the variable `text`
                    msg = [x for x in text.split('\n')[1:]
                           if not x.startswith(' ')][0]
                    print("\n\n%s: %s\n %s" % (typ, self.id(), msg))
If you don't need the exception info, the second half can be removed. If you also want the tracebacks, use the whole variable text instead of msg. The only thing it cannot recognize is an unexpected success in an expectedFailure block.
Example test methods:
def test_error(self):
    self.assertEqual(1 / 0, 1)

def test_fail(self):
    self.assertEqual(2, 1)

def test_success(self):
    self.assertEqual(1, 1)
Example output:
$ python3 -m unittest test
ERROR: q.MyTest.test_error
ZeroDivisionError: division by zero
E
FAIL: q.MyTest.test_fail
AssertionError: 2 != 1
F
OK: q.MyTest.test_success
.
======================================================================
... skipped the usual output from unittest with tracebacks ...
...
Ran 3 tests in 0.001s
FAILED (failures=1, errors=1)
Complete code including expectedFailure decorator example
EDIT: When I updated this solution to Python 3.11, I dropped everything related to old Python < 3.4 and also many minor notes.
If you take a look at the implementation of unittest.TestCase.run, you can see that all test results are collected in the result object (typically a unittest.TestResult instance) passed as argument. No result status is left in the unittest.TestCase object.
So there isn't much you can do in the unittest.TestCase.tearDown method unless you mercilessly break the elegant decoupling of test cases and test results with something like this:
import unittest

class MyTest(unittest.TestCase):
    currentResult = None  # Holds last result object passed to run method

    def setUp(self):
        pass

    def tearDown(self):
        ok = self.currentResult.wasSuccessful()
        errors = self.currentResult.errors
        failures = self.currentResult.failures
        print(' All tests passed so far!' if ok else
              ' %d errors and %d failures so far' %
              (len(errors), len(failures)))

    def run(self, result=None):
        self.currentResult = result  # Remember result for use in tearDown
        unittest.TestCase.run(self, result)  # call superclass run method

    def test_onePlusOneEqualsTwo(self):
        self.assertTrue(1 + 1 == 2)  # Succeeds

    def test_onePlusOneEqualsThree(self):
        self.assertTrue(1 + 1 == 3)  # Fails

    def test_onePlusNoneIsNone(self):
        self.assertTrue(1 + None is None)  # Raises TypeError

if __name__ == '__main__':
    unittest.main()
This works for Python 2.6 - 3.3 (modified for new Python below).
CAVEAT: I have no way of double checking the following theory at the moment, being away from a dev box. So this may be a shot in the dark.
Perhaps you could check the return value of sys.exc_info() inside your tearDown() method; if it returns (None, None, None), you know the test case succeeded. Otherwise, you could use the returned tuple to interrogate the exception object.
See sys.exc_info documentation.
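A minimal sketch of that idea follows. Note that it is as speculative as the caveat above: in current unittest the assertion error has usually already been caught and recorded before tearDown() runs, so sys.exc_info() may report (None, None, None) even for a failing test.
import sys
import unittest

class ExcInfoTest(unittest.TestCase):
    def tearDown(self):
        exc_type, exc_value, _ = sys.exc_info()
        if exc_type is None:
            print('no active exception visible in tearDown')
        else:
            print('active exception: %s: %s' % (exc_type.__name__, exc_value))

    def test_fails(self):
        self.assertEqual(1, 2)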
Another more explicit approach is to write a method decorator that you could slap onto all your test case methods that require this special handling. This decorator can intercept assertion exceptions and based on that modify some state in self allowing your tearDown method to learn what's up.
@assertion_tracker
def test_foo(self):
    # some test logic
    ...
It depends what kind of reporting you'd like to produce.
In case you'd like to perform some actions on failure (such as generating screenshots), instead of using tearDown(), you may achieve that by overriding failureException.
For example:
@property
def failureException(self):
    class MyFailureException(AssertionError):
        def __init__(self_, *args, **kwargs):
            screenshot_dir = 'reports/screenshots'
            if not os.path.exists(screenshot_dir):
                os.makedirs(screenshot_dir)
            self.driver.save_screenshot('{0}/{1}.png'.format(screenshot_dir, self.id()))
            return super(MyFailureException, self_).__init__(*args, **kwargs)
    MyFailureException.__name__ = AssertionError.__name__
    return MyFailureException
If you are using Python 2 you can use the method _resultForDoCleanups. This method returns a TextTestResult object:
<unittest.runner.TextTestResult run=1 errors=0 failures=0>
You can use this object to check the result of your tests:
def tearDown(self):
    if self._resultForDoCleanups.failures:
        ...
    elif self._resultForDoCleanups.errors:
        ...
    else:
        # Success
        ...
If you are using Python 3 you can use _outcomeForDoCleanups:
def tearDown(self):
    if not self._outcomeForDoCleanups.success:
        ...
Following on from amatellanes' answer, if you're on Python 3.4, you can't use _outcomeForDoCleanups. Here's what I managed to hack together:
def _test_has_failed(self):
    for method, error in self._outcome.errors:
        if error:
            return True
    return False
It is yucky, but it seems to work.
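For context, a usage sketch, assuming the helper above is defined on your TestCase subclass (and that you are on a Python version where self._outcome.errors exists):
def tearDown(self):
    if self._test_has_failed():
        # e.g. capture a screenshot, dump logs, mark the run for follow-up, ...
        print('test %s failed' % self.id())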
Here's a solution for those of us who are uncomfortable using solutions that rely on unittest internals:
First, we create a decorator that will set a flag on the TestCase instance to determine whether or not the test case failed or passed:
import unittest
import functools

def _tag_error(func):
    """Decorates a unittest test function to add failure information to the TestCase."""
    @functools.wraps(func)
    def decorator(self, *args, **kwargs):
        """Add failure information to `self` when `func` raises an exception."""
        self.test_failed = False
        try:
            func(self, *args, **kwargs)
        except unittest.SkipTest:
            raise
        except Exception:  # pylint: disable=broad-except
            self.test_failed = True
            raise  # re-raise the error with the original traceback.
    return decorator
This decorator is actually pretty simple. It relies on the fact that unittest detects failed tests via Exceptions. As far as I'm aware, the only special exception that needs to be handled is unittest.SkipTest (which does not indicate a test failure). All other exceptions indicate test failures so we mark them as such when they bubble up to us.
We can now use this decorator directly:
class MyTest(unittest.TestCase):
    test_failed = False

    def tearDown(self):
        super(MyTest, self).tearDown()
        print(self.test_failed)

    @_tag_error
    def test_something(self):
        self.fail('Bummer')
It's going to get really annoying writing this decorator all the time. Is there a way we can simplify? Yes there is!* We can write a metaclass to handle applying the decorator for us:
class _TestFailedMeta(type):
    """Metaclass to decorate test methods to append error information to the TestCase instance."""
    def __new__(cls, name, bases, dct):
        for name, prop in dct.items():
            # assume that TestLoader.testMethodPrefix hasn't been messed with -- otherwise, we're hosed.
            if name.startswith('test') and callable(prop):
                dct[name] = _tag_error(prop)
        return super(_TestFailedMeta, cls).__new__(cls, name, bases, dct)
Now we apply this to our base TestCase subclass and we're all set:
import six  # For python2.x/3.x compatibility

class BaseTestCase(six.with_metaclass(_TestFailedMeta, unittest.TestCase)):
    """Base class for all our other tests.

    We don't really need this, but it demonstrates that the
    metaclass gets applied to all subclasses too.
    """

class MyTest(BaseTestCase):
    def tearDown(self):
        super(MyTest, self).tearDown()
        print(self.test_failed)

    def test_something(self):
        self.fail('Bummer')
There are likely a number of cases that this doesn't handle properly. For example, it does not correctly detect failed subtests or expected failures. I'd be interested in other failure modes of this, so if you find a case that I'm not handling properly, let me know in the comments and I'll look into it.
*If there wasn't an easier way, I wouldn't have made _tag_error a private function ;-)
I think the proper answer to your question is that there isn't a clean way to get test results in tearDown(). Most of the answers here involve accessing some private parts of the Python unittest module and in general feel like workarounds. I'd strongly suggest avoiding these since the test results and test cases are decoupled and you should not work against that.
If you are in love with clean code (like I am) I think what you should do instead is instantiating your TestRunner with your own TestResult class. Then you could add whatever reporting you wanted by overriding these methods:
addError(test, err)
Called when the test case test raises an unexpected exception. err is a tuple of the form returned by sys.exc_info(): (type, value, traceback).
The default implementation appends a tuple (test, formatted_err) to the instance’s errors attribute, where formatted_err is a formatted traceback derived from err.
addFailure(test, err)
Called when the test case test signals a failure. err is a tuple of the form returned by sys.exc_info(): (type, value, traceback).
The default implementation appends a tuple (test, formatted_err) to the instance’s failures attribute, where formatted_err is a formatted traceback derived from err.
addSuccess(test)
Called when the test case test succeeds.
The default implementation does nothing.
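A minimal sketch of that approach; the class name and the printed messages are illustrative, not part of the original answer:
import unittest

class ReportingResult(unittest.TextTestResult):
    """Keeps reporting with the result object instead of the test case."""

    def addSuccess(self, test):
        super().addSuccess(test)
        print('PASS: %s' % test.id())

    def addFailure(self, test, err):
        super().addFailure(test, err)
        print('FAIL: %s' % test.id())

    def addError(self, test, err):
        super().addError(test, err)
        print('ERROR: %s' % test.id())

class DemoTest(unittest.TestCase):
    def test_passes(self):
        self.assertTrue(True)

if __name__ == '__main__':
    runner = unittest.TextTestRunner(resultclass=ReportingResult, verbosity=0)
    unittest.main(testRunner=runner)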
Python 2.7.
You can also get result after unittest.main():
t = unittest.main(exit=False)
print t.result
Or use suite:
suite.addTests(tests)
result = unittest.result.TestResult()
suite.run(result)
print result
Inspired by scoffey’s answer, I decided to take mercilessness to the next level, and have come up with the following.
It works in both vanilla unittest and when run via nosetests, and it works in Python versions 2.7, 3.2, 3.3, and 3.4 (I did not specifically test 3.0, 3.1, or 3.5, as I don’t have these installed at the moment, but if I read the source code correctly, it should work in 3.5 as well):
#! /usr/bin/env python

from __future__ import unicode_literals

import logging
import os
import sys
import unittest

# Log file to see squawks during testing
formatter = logging.Formatter(fmt='%(levelname)-8s %(name)s: %(message)s')
log_file = os.path.splitext(os.path.abspath(__file__))[0] + '.log'
handler = logging.FileHandler(log_file)
handler.setFormatter(formatter)
logging.root.addHandler(handler)
logging.root.setLevel(logging.DEBUG)
log = logging.getLogger(__name__)

PY = tuple(sys.version_info)[:3]


class SmartTestCase(unittest.TestCase):
    """Knows its state (pass/fail/error) by the time its tearDown is called."""

    def run(self, result):
        # Store the result on the class so tearDown can behave appropriately
        self.result = result.result if hasattr(result, 'result') else result
        if PY >= (3, 4, 0):
            self._feedErrorsToResultEarly = self._feedErrorsToResult
            self._feedErrorsToResult = lambda *args, **kwargs: None  # no-op
        super(SmartTestCase, self).run(result)

    @property
    def errored(self):
        if (3, 0, 0) <= PY < (3, 4, 0):
            return bool(self._outcomeForDoCleanups.errors)
        return self.id() in [case.id() for case, _ in self.result.errors]

    @property
    def failed(self):
        if (3, 0, 0) <= PY < (3, 4, 0):
            return bool(self._outcomeForDoCleanups.failures)
        return self.id() in [case.id() for case, _ in self.result.failures]

    @property
    def passed(self):
        return not (self.errored or self.failed)

    def tearDown(self):
        if PY >= (3, 4, 0):
            self._feedErrorsToResultEarly(self.result, self._outcome.errors)


class TestClass(SmartTestCase):

    def test_1(self):
        self.assertTrue(True)

    def test_2(self):
        self.assertFalse(True)

    def test_3(self):
        self.assertFalse(False)

    def test_4(self):
        self.assertTrue(False)

    def test_5(self):
        self.assertHerp('Derp')

    def tearDown(self):
        super(TestClass, self).tearDown()
        log.critical('---- RUNNING {} ... -----'.format(self.id()))
        if self.errored:
            log.critical('----- ERRORED -----')
        elif self.failed:
            log.critical('----- FAILED -----')
        else:
            log.critical('----- PASSED -----')


if __name__ == '__main__':
    unittest.main()
When run with unittest:
$ ./test.py -v
test_1 (__main__.TestClass) ... ok
test_2 (__main__.TestClass) ... FAIL
test_3 (__main__.TestClass) ... ok
test_4 (__main__.TestClass) ... FAIL
test_5 (__main__.TestClass) ... ERROR
[…]
$ cat ./test.log
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_1 ... -----
CRITICAL __main__: ----- PASSED -----
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_2 ... -----
CRITICAL __main__: ----- FAILED -----
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_3 ... -----
CRITICAL __main__: ----- PASSED -----
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_4 ... -----
CRITICAL __main__: ----- FAILED -----
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_5 ... -----
CRITICAL __main__: ----- ERRORED -----
When run with nosetests:
$ nosetests ./test.py -v
test_1 (test.TestClass) ... ok
test_2 (test.TestClass) ... FAIL
test_3 (test.TestClass) ... ok
test_4 (test.TestClass) ... FAIL
test_5 (test.TestClass) ... ERROR
$ cat ./test.log
CRITICAL test: ---- RUNNING test.TestClass.test_1 ... -----
CRITICAL test: ----- PASSED -----
CRITICAL test: ---- RUNNING test.TestClass.test_2 ... -----
CRITICAL test: ----- FAILED -----
CRITICAL test: ---- RUNNING test.TestClass.test_3 ... -----
CRITICAL test: ----- PASSED -----
CRITICAL test: ---- RUNNING test.TestClass.test_4 ... -----
CRITICAL test: ----- FAILED -----
CRITICAL test: ---- RUNNING test.TestClass.test_5 ... -----
CRITICAL test: ----- ERRORED -----
Background
I started with this:
class SmartTestCase(unittest.TestCase):
    """Knows its state (pass/fail/error) by the time its tearDown is called."""

    def run(self, result):
        # Store the result on the class so tearDown can behave appropriately
        self.result = result.result if hasattr(result, 'result') else result
        super(SmartTestCase, self).run(result)

    @property
    def errored(self):
        return self.id() in [case.id() for case, _ in self.result.errors]

    @property
    def failed(self):
        return self.id() in [case.id() for case, _ in self.result.failures]

    @property
    def passed(self):
        return not (self.errored or self.failed)
However, this only works in Python 2. In Python 3, up to and including 3.3, the control flow appears to have changed a bit: Python 3’s unittest package processes results after calling each test’s tearDown() method… this behavior can be confirmed if we simply add an extra line (or six) to our test class:
@@ -63,6 +63,12 @@
             log.critical('----- FAILED -----')
         else:
             log.critical('----- PASSED -----')
+        log.warning(
+            'ERRORS THUS FAR:\n'
+            + '\n'.join(tc.id() for tc, _ in self.result.errors))
+        log.warning(
+            'FAILURES THUS FAR:\n'
+            + '\n'.join(tc.id() for tc, _ in self.result.failures))
 
 if __name__ == '__main__':
Then just rerun the tests:
$ python3.3 ./test.py -v
test_1 (__main__.TestClass) ... ok
test_2 (__main__.TestClass) ... FAIL
test_3 (__main__.TestClass) ... ok
test_4 (__main__.TestClass) ... FAIL
test_5 (__main__.TestClass) ... ERROR
[…]
…and you will see that you get this as a result:
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_1 ... -----
CRITICAL __main__: ----- PASSED -----
WARNING __main__: ERRORS THUS FAR:
WARNING __main__: FAILURES THUS FAR:
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_2 ... -----
CRITICAL __main__: ----- PASSED -----
WARNING __main__: ERRORS THUS FAR:
WARNING __main__: FAILURES THUS FAR:
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_3 ... -----
CRITICAL __main__: ----- PASSED -----
WARNING __main__: ERRORS THUS FAR:
WARNING __main__: FAILURES THUS FAR:
__main__.TestClass.test_2
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_4 ... -----
CRITICAL __main__: ----- PASSED -----
WARNING __main__: ERRORS THUS FAR:
WARNING __main__: FAILURES THUS FAR:
__main__.TestClass.test_2
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_5 ... -----
CRITICAL __main__: ----- PASSED -----
WARNING __main__: ERRORS THUS FAR:
WARNING __main__: FAILURES THUS FAR:
__main__.TestClass.test_2
__main__.TestClass.test_4
Now, compare the above to Python 2’s output:
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_1 ... -----
CRITICAL __main__: ----- PASSED -----
WARNING __main__: ERRORS THUS FAR:
WARNING __main__: FAILURES THUS FAR:
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_2 ... -----
CRITICAL __main__: ----- FAILED -----
WARNING __main__: ERRORS THUS FAR:
WARNING __main__: FAILURES THUS FAR:
__main__.TestClass.test_2
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_3 ... -----
CRITICAL __main__: ----- PASSED -----
WARNING __main__: ERRORS THUS FAR:
WARNING __main__: FAILURES THUS FAR:
__main__.TestClass.test_2
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_4 ... -----
CRITICAL __main__: ----- FAILED -----
WARNING __main__: ERRORS THUS FAR:
WARNING __main__: FAILURES THUS FAR:
__main__.TestClass.test_2
__main__.TestClass.test_4
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_5 ... -----
CRITICAL __main__: ----- ERRORED -----
WARNING __main__: ERRORS THUS FAR:
__main__.TestClass.test_5
WARNING __main__: FAILURES THUS FAR:
__main__.TestClass.test_2
__main__.TestClass.test_4
Since Python 3 processes errors/failures after the test is torn down, we can’t readily infer the result of a test using result.errors or result.failures in every case. (I think it probably makes more sense architecturally to process a test’s results after tearing it down, however, it does make the perfectly valid use-case of following a different end-of-test procedure depending on a test’s pass/fail status a bit harder to meet…)
Therefore, instead of relying on the overall result object, we can reference _outcomeForDoCleanups as others have already mentioned, which contains the result object for the currently running test and has the necessary errors and failures attributes, which we can use to infer a test’s status by the time tearDown() has been called:
@@ -3,6 +3,7 @@
 from __future__ import unicode_literals
 
 import logging
 import os
+import sys
 import unittest
@@ -16,6 +17,9 @@
 log = logging.getLogger(__name__)
 
+PY = tuple(sys.version_info)[:3]
+
+
 class SmartTestCase(unittest.TestCase):
     """Knows its state (pass/fail/error) by the time its tearDown is called."""
@@ -27,10 +31,14 @@
     @property
     def errored(self):
+        if PY >= (3, 0, 0):
+            return bool(self._outcomeForDoCleanups.errors)
         return self.id() in [case.id() for case, _ in self.result.errors]
 
     @property
     def failed(self):
+        if PY >= (3, 0, 0):
+            return bool(self._outcomeForDoCleanups.failures)
         return self.id() in [case.id() for case, _ in self.result.failures]
 
     @property
This adds support for the early versions of Python 3.
As of Python 3.4, however, this private member variable no longer exists, and instead, a new (albeit also private) method was added: _feedErrorsToResult.
This means that for versions 3.4 (and later), if the need is great enough, one can — very hackishly — force one’s way in to make it all work again like it did in version 2…
@@ -27,17 +27,20 @@
     def run(self, result):
         # Store the result on the class so tearDown can behave appropriately
         self.result = result.result if hasattr(result, 'result') else result
+        if PY >= (3, 4, 0):
+            self._feedErrorsToResultEarly = self._feedErrorsToResult
+            self._feedErrorsToResult = lambda *args, **kwargs: None  # no-op
         super(SmartTestCase, self).run(result)
 
     @property
     def errored(self):
-        if PY >= (3, 0, 0):
+        if (3, 0, 0) <= PY < (3, 4, 0):
             return bool(self._outcomeForDoCleanups.errors)
         return self.id() in [case.id() for case, _ in self.result.errors]
 
     @property
     def failed(self):
-        if PY >= (3, 0, 0):
+        if (3, 0, 0) <= PY < (3, 4, 0):
             return bool(self._outcomeForDoCleanups.failures)
         return self.id() in [case.id() for case, _ in self.result.failures]
@@ -45,6 +48,10 @@
     def passed(self):
         return not (self.errored or self.failed)
 
+    def tearDown(self):
+        if PY >= (3, 4, 0):
+            self._feedErrorsToResultEarly(self.result, self._outcome.errors)
+
 
 class TestClass(SmartTestCase):
@@ -64,6 +71,7 @@
         self.assertHerp('Derp')
 
     def tearDown(self):
+        super(TestClass, self).tearDown()
         log.critical('---- RUNNING {} ... -----'.format(self.id()))
         if self.errored:
             log.critical('----- ERRORED -----')
…provided, of course, all consumers of this class remember to super(…, self).tearDown() in their respective tearDown methods…
Disclaimer: This is purely educational, don’t try this at home, etc. etc. etc. I’m not particularly proud of this solution, but it seems to work well enough for the time being, and is the best I could hack up after fiddling for an hour or two on a Saturday afternoon…
The name of the current test can be retrieved with the unittest.TestCase.id() method. So in tearDown you can check self.id().
The example shows how to:
find if the current test has an error or failure in errors or failures list
print the test id with PASS or FAIL or EXCEPTION
The tested example here works with scoffey's nice example.
def tearDown(self):
    result = "PASS"

    #### Find and show result for current test
    # I did not find any nicer/neater way of comparing self.id() with test id stored in errors or failures lists :-7
    id = str(self.id()).split('.')[-1]
    # id() e.g. tup[0]: <__main__.MyTest testMethod=test_onePlusNoneIsNone>
    # str(tup[0]): "test_onePlusOneEqualsThree (__main__.MyTest)"
    # str(self.id()) = __main__.MyTest.test_onePlusNoneIsNone
    for tup in self.currentResult.failures:
        if str(tup[0]).startswith(id):
            print ' test %s failure:%s' % (self.id(), tup[1])
            ## DO TEST FAIL ACTION HERE
            result = "FAIL"
    for tup in self.currentResult.errors:
        if str(tup[0]).startswith(id):
            print ' test %s error:%s' % (self.id(), tup[1])
            ## DO TEST EXCEPTION ACTION HERE
            result = "EXCEPTION"
    print "Test:%s Result:%s" % (self.id(), result)
Example of result:
python run_scripts/tut2.py 2>&1
E test __main__.MyTest.test_onePlusNoneIsNone error:Traceback (most recent call last):
File "run_scripts/tut2.py", line 80, in test_onePlusNoneIsNone
self.assertTrue(1 + None is None) # raises TypeError
TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'
Test:__main__.MyTest.test_onePlusNoneIsNone Result:EXCEPTION
F test __main__.MyTest.test_onePlusOneEqualsThree failure:Traceback (most recent call last):
File "run_scripts/tut2.py", line 77, in test_onePlusOneEqualsThree
self.assertTrue(1 + 1 == 3) # fails
AssertionError: False is not true
Test:__main__.MyTest.test_onePlusOneEqualsThree Result:FAIL
Test:__main__.MyTest.test_onePlusOneEqualsTwo Result:PASS
.
======================================================================
ERROR: test_onePlusNoneIsNone (__main__.MyTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "run_scripts/tut2.py", line 80, in test_onePlusNoneIsNone
self.assertTrue(1 + None is None) # raises TypeError
TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'
======================================================================
FAIL: test_onePlusOneEqualsThree (__main__.MyTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "run_scripts/tut2.py", line 77, in test_onePlusOneEqualsThree
self.assertTrue(1 + 1 == 3) # fails
AssertionError: False is not true
----------------------------------------------------------------------
Ran 3 tests in 0.001s
FAILED (failures=1, errors=1)
Tested on Python 3.7 - sample code for getting information about failing assertions, but it can also give an idea of how to deal with errors:
def tearDown(self):
    if self._outcome.errors[1][1] and hasattr(self._outcome.errors[1][1][1], 'actual'):
        print(self._testMethodName)
        print(self._outcome.errors[1][1][1].actual)
        print(self._outcome.errors[1][1][1].expected)
In a few words, this gives True if all tests run so far exited with no errors or failures:
from unittest import TestCase

class WatheverTestCase(TestCase):
    def tear_down(self):
        return not self._outcome.result.errors and not self._outcome.result.failures
Explore _outcome's properties to access more detailed possibilities.
This is simple, makes use of the public API only, and shall work on any python version:
import unittest

class MyTest(unittest.TestCase):
    def defaultTestResult(self):
        self.lastResult = unittest.result.TestResult()
        return self.lastResult

    ...
Python version independent code using global variable
import unittest

global test_case_id
global test_title
global test_result

test_case_id = ''
test_title = ''
test_result = ''

class Dummy(unittest.TestCase):
    def setUp(self):
        pass

    def tearDown(self):
        global test_case_id
        global test_title
        global test_result
        self.test_case_id = test_case_id
        self.test_title = test_title
        self.test_result = test_result
        print('Test case id is : ', self.test_case_id)
        print('test title is : ', self.test_title)
        print('Test test result is : ', self.test_result)

    def test_a(self):
        global test_case_id
        global test_title
        global test_result
        test_case_id = 'test1'
        test_title = 'To verify test1'
        test_result = self.assertTrue(True)

    def test_b(self):
        global test_case_id
        global test_title
        global test_result
        test_case_id = 'test2'
        test_title = 'To verify test2'
        test_result = self.assertFalse(False)

if __name__ == "__main__":
    unittest.main()
