Outputting data from unit test in Python

If I'm writing unit tests in Python (using the unittest module), is it possible to output data from a failed test, so I can examine it to help deduce what caused the error?
I am aware of the ability to create a customized message, which can carry some information, but sometimes you are dealing with more complex data that can't easily be represented as a string.
For example, suppose you had a class Foo, and were testing a method bar, using data from a list called testdata:
class TestBar(unittest.TestCase):
    def runTest(self):
        for t1, t2 in testdata:
            f = Foo(t1)
            self.assertEqual(f.bar(t2), 2)
If the test failed, I might want to output t1, t2 and/or f, to see why this particular data resulted in a failure. By output, I mean that the variables can be accessed like any other variables, after the test has been run.

We use the logging module for this.
For example:
import logging
import sys
import unittest

class SomeTest(unittest.TestCase):
    def testSomething(self):
        log = logging.getLogger("SomeTest.testSomething")
        log.debug("this= %r", self.this)
        log.debug("that= %r", self.that)
        self.assertEqual(3.14, pi)

if __name__ == "__main__":
    logging.basicConfig(stream=sys.stderr)
    logging.getLogger("SomeTest.testSomething").setLevel(logging.DEBUG)
    unittest.main()
That allows us to turn on debugging for specific tests which we know are failing and for which we want additional debugging information.
My preferred method, however, isn't to spend a lot of time on debugging, but spend it writing more fine-grained tests to expose the problem.

In Python 2.7 you could use an additional parameter, msg, to add information to the error message like this:
self.assertEqual(f.bar(t2), 2, msg='{0}, {1}'.format(t1, t2))
The official documentation is here.
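If the data is more complex, one hedged variation on the same idea is to build msg from repr() of the objects involved; since TestCase.longMessage defaults to True in 2.7+, the standard "x != y" text is still shown alongside the custom message. A sketch reusing the names from the question's example (f, t1, t2):
self.assertEqual(f.bar(t2), 2,
                 msg='f={0!r}, t1={1!r}, t2={2!r}'.format(f, t1, t2))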

You can use simple print statements, or any other way of writing to standard output. You can also invoke the Python debugger anywhere in your tests.
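For example, dropping into the standard pdb debugger right where the interesting data lives (a minimal sketch reusing the question's Foo/testdata names):
import pdb

class TestBar(unittest.TestCase):
    def runTest(self):
        for t1, t2 in testdata:
            f = Foo(t1)
            if f.bar(t2) != 2:
                pdb.set_trace()  # inspect f, t1 and t2 interactively
            self.assertEqual(f.bar(t2), 2)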
If you use nose to run your tests (which I recommend), it will collect the standard output for each test and only show it to you if the test failed, so you don't have to live with the cluttered output when the tests pass.
nose also has switches to automatically show variables mentioned in asserts, or to invoke the debugger on failed tests. For example, -s (--nocapture) prevents the capture of standard output.
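For reference, the relevant nose switches are, to the best of my knowledge, the following (names come from nose's built-in plugins; double-check them against your nose version):
nosetests -s tests.py                     # -s / --nocapture: don't capture stdout
nosetests -d tests.py                     # -d / --detailed-errors: show values used in failing asserts
nosetests --pdb --pdb-failures tests.py   # drop into pdb on errors / failures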

I don't think this is quite what you're looking for. There's no way to display the values of variables from tests that don't fail, but this may help you get closer to outputting the results the way you want.
You can use the TestResult object returned by TestRunner.run() for results analysis and processing, in particular TestResult.errors and TestResult.failures.
About the TestResults Object:
http://docs.python.org/library/unittest.html#id3
And some code to point you in the right direction:
>>> import random
>>> import unittest
>>>
>>> class TestSequenceFunctions(unittest.TestCase):
...     def setUp(self):
...         self.seq = range(5)
...     def testshuffle(self):
...         # make sure the shuffled sequence does not lose any elements
...         random.shuffle(self.seq)
...         self.seq.sort()
...         self.assertEqual(self.seq, range(10))
...     def testchoice(self):
...         element = random.choice(self.seq)
...         error_test = 1/0
...         self.assert_(element in self.seq)
...     def testsample(self):
...         self.assertRaises(ValueError, random.sample, self.seq, 20)
...         for element in random.sample(self.seq, 5):
...             self.assert_(element in self.seq)
...
>>> suite = unittest.TestLoader().loadTestsFromTestCase(TestSequenceFunctions)
>>> testResult = unittest.TextTestRunner(verbosity=2).run(suite)
testchoice (__main__.TestSequenceFunctions) ... ERROR
testsample (__main__.TestSequenceFunctions) ... ok
testshuffle (__main__.TestSequenceFunctions) ... FAIL
======================================================================
ERROR: testchoice (__main__.TestSequenceFunctions)
----------------------------------------------------------------------
Traceback (most recent call last):
File "<stdin>", line 11, in testchoice
ZeroDivisionError: integer division or modulo by zero
======================================================================
FAIL: testshuffle (__main__.TestSequenceFunctions)
----------------------------------------------------------------------
Traceback (most recent call last):
File "<stdin>", line 8, in testshuffle
AssertionError: [0, 1, 2, 3, 4] != [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
----------------------------------------------------------------------
Ran 3 tests in 0.031s
FAILED (failures=1, errors=1)
>>>
>>> testResult.errors
[(<__main__.TestSequenceFunctions testMethod=testchoice>, 'Traceback (most recent call last):\n File "<stdin>"
, line 11, in testchoice\nZeroDivisionError: integer division or modulo by zero\n')]
>>>
>>> testResult.failures
[(<__main__.TestSequenceFunctions testMethod=testshuffle>, 'Traceback (most recent call last):\n File "<stdin>
", line 8, in testshuffle\nAssertionError: [0, 1, 2, 3, 4] != [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n')]
>>>

The method I use is really simple. I just log it as a warning so it will actually show up.
import logging

class TestBar(unittest.TestCase):
    def runTest(self):
        # this line is important
        logging.basicConfig()
        log = logging.getLogger("LOG")
        for t1, t2 in testdata:
            f = Foo(t1)
            self.assertEqual(f.bar(t2), 2)
            log.warning(t1)

In these cases I use a log.debug() with some messages in my application. Since the default logging level is WARNING, such messages don't show in the normal execution.
Then, in the unit test I change the logging level to DEBUG, so that such messages are shown while running them.
import logging
log.debug("Some messages to be shown just when debugging or unit testing")
In the unit tests:
# Set log level
loglevel = logging.DEBUG
logging.basicConfig(level=loglevel)
See a full example:
This is daikiri.py, a basic class that implements a daikiri with its name and price. There is a method make_discount() that returns the price of that specific daikiri after applying a given discount:
import logging

log = logging.getLogger(__name__)

class Daikiri(object):
    def __init__(self, name, price):
        self.name = name
        self.price = price

    def make_discount(self, percentage):
        log.debug("Deducting discount...")  # I want to see this message
        return self.price * percentage
Then, I create a unit test, test_daikiri.py, that checks its usage:
import unittest
import logging
from .daikiri import Daikiri

class TestDaikiri(unittest.TestCase):
    def setUp(self):
        # Changing log level to DEBUG
        loglevel = logging.DEBUG
        logging.basicConfig(level=loglevel)
        self.mydaikiri = Daikiri("cuban", 25)

    def test_drop_price(self):
        new_price = self.mydaikiri.make_discount(0)
        self.assertEqual(new_price, 0)

if __name__ == "__main__":
    unittest.main()
So when I execute it I get the log.debug messages:
$ python -m test_daikiri
DEBUG:daikiri:Deducting discount...
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK

Another option - start a debugger where the test fails.
Try running your tests with Testoob (it will run your unit test suite without changes), and you can use the '--debug' command line switch to open a debugger when a test fails.
Here's a terminal session on Windows:
C:\work> testoob tests.py --debug
F
Debugging for failure in test: test_foo (tests.MyTests.test_foo)
> c:\python25\lib\unittest.py(334)failUnlessEqual()
-> (msg or '%r != %r' % (first, second))
(Pdb) up
> c:\work\tests.py(6)test_foo()
-> self.assertEqual(x, y)
(Pdb) l
  1     from unittest import TestCase
  2     class MyTests(TestCase):
  3       def test_foo(self):
  4         x = 1
  5         y = 2
  6  ->     self.assertEqual(x, y)
[EOF]
(Pdb)

I think I might have been overthinking this. One way I've come up with that does the job is simply to have a global variable that accumulates the diagnostic data.
Something like this:
log1 = dict()

class TestBar(unittest.TestCase):
    def runTest(self):
        for t1, t2 in testdata:
            f = Foo(t1)
            if f.bar(t2) != 2:
                log1["TestBar.runTest"] = (f, t1, t2)
                self.fail("f.bar(t2) != 2")

Use logging:
import unittest
import logging
import inspect
import os

logging_level = logging.INFO

try:
    log_file = os.environ["LOG_FILE"]
except KeyError:
    log_file = None

def logger(stack=None):
    if not hasattr(logger, "initialized"):
        logging.basicConfig(filename=log_file, level=logging_level)
        logger.initialized = True
    if not stack:
        stack = inspect.stack()
    name = stack[1][3]
    try:
        name = stack[1][0].f_locals["self"].__class__.__name__ + "." + name
    except KeyError:
        pass
    return logging.getLogger(name)

def todo(msg):
    logger(inspect.stack()).warning("TODO: {}".format(msg))

def get_pi():
    logger().info("sorry, I know only three digits")
    return 3.14

class Test(unittest.TestCase):
    def testName(self):
        todo("use a better get_pi")
        pi = get_pi()
        logger().info("pi = {}".format(pi))
        todo("check more digits in pi")
        self.assertAlmostEqual(pi, 3.14)
        logger().debug("end of this test")
Usage:
# LOG_FILE=/tmp/log python3 -m unittest LoggerDemo
.
----------------------------------------------------------------------
Ran 1 test in 0.047s
OK
# cat /tmp/log
WARNING:Test.testName:TODO: use a better get_pi
INFO:get_pi:sorry, I know only three digits
INFO:Test.testName:pi = 3.14
WARNING:Test.testName:TODO: check more digits in pi
If you do not set LOG_FILE, logging will go to stderr.

You can use the logging module for that.
So in the unit test code, use:
import logging as log
def test_foo(self):
    log.debug("Some debug message.")
    log.info("Some info message.")
    log.warning("Some warning message.")
    log.error("Some error message.")
By default warnings and errors are outputted to /dev/stderr, so they should be visible on the console.
To customize logs (such as formatting), try the following sample:
# Set-up logger
if args.verbose or args.debug:
    logging.basicConfig(stream=sys.stdout)
    root = logging.getLogger()
    root.setLevel(logging.INFO if args.verbose else logging.DEBUG)
    ch = logging.StreamHandler(sys.stdout)
    ch.setLevel(logging.INFO if args.verbose else logging.DEBUG)
    ch.setFormatter(logging.Formatter('%(asctime)s %(levelname)s: %(name)s: %(message)s'))
    root.addHandler(ch)
else:
    logging.basicConfig(stream=sys.stderr)

inspect.trace will let you get local variables after an exception has been thrown. You can then wrap the unit tests with a decorator like the following one to save off those local variables for examination during the post mortem.
import random
import unittest
import inspect

def store_result(f):
    """
    Store the results of a test.

    On success, store the return value.
    On failure, store the local variables where the exception was thrown.
    """
    def wrapped(self):
        if 'results' not in self.__dict__:
            self.results = {}
        # If a test throws an exception, store local variables in results:
        try:
            result = f(self)
        except Exception as e:
            self.results[f.__name__] = {'success': False, 'locals': inspect.trace()[-1][0].f_locals}
            raise e
        self.results[f.__name__] = {'success': True, 'result': result}
        return result
    return wrapped

def suite_results(suite):
    """
    Get all the results from a test suite.
    """
    ans = {}
    for test in suite:
        if 'results' in test.__dict__:
            ans.update(test.results)
    return ans

# Example:
class TestSequenceFunctions(unittest.TestCase):
    def setUp(self):
        self.seq = range(10)

    @store_result
    def test_shuffle(self):
        # make sure the shuffled sequence does not lose any elements
        random.shuffle(self.seq)
        self.seq.sort()
        self.assertEqual(self.seq, range(10))
        # should raise an exception for an immutable sequence
        self.assertRaises(TypeError, random.shuffle, (1, 2, 3))
        return {1: 2}

    @store_result
    def test_choice(self):
        element = random.choice(self.seq)
        self.assertTrue(element in self.seq)
        return {7: 2}

    @store_result
    def test_sample(self):
        x = 799
        with self.assertRaises(ValueError):
            random.sample(self.seq, 20)
        for element in random.sample(self.seq, 5):
            self.assertTrue(element in self.seq)
        return {1: 99999}

suite = unittest.TestLoader().loadTestsFromTestCase(TestSequenceFunctions)
unittest.TextTestRunner(verbosity=2).run(suite)

from pprint import pprint
pprint(suite_results(suite))
The last line will print the returned values where the test succeeded and the local variables, in this case x, when it fails:
{'test_choice': {'result': {7: 2}, 'success': True},
'test_sample': {'locals': {'self': <__main__.TestSequenceFunctions testMethod=test_sample>,
'x': 799},
'success': False},
'test_shuffle': {'result': {1: 2}, 'success': True}}

Catch the exception that gets generated from the assertion failure. In your catch block you could output the data however you wanted to wherever. Then when you were done you could rethrow the exception. The test runner probably wouldn't know the difference.
Disclaimer: I haven't tried this with Python's unit test framework, but I have with other unit test frameworks.
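A minimal sketch of that idea with Python's unittest (names reused from the question; this is just a plain try/except around the assertion, not a framework feature):
class TestBar(unittest.TestCase):
    def runTest(self):
        for t1, t2 in testdata:
            f = Foo(t1)
            try:
                self.assertEqual(f.bar(t2), 2)
            except AssertionError:
                # Output the offending data, then re-raise so the
                # runner still records the failure.
                print("failed for f=%r, t1=%r, t2=%r" % (f, t1, t2))
                raise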

Expanding on Facundo Casco's answer, this works quite well for me:
class MyTest(unittest.TestCase):
    def messenger(self, message):
        try:
            self.assertEqual(1, 2, msg=message)
        except AssertionError as e:
            print "\nMESSENGER OUTPUT: %s" % str(e),

Related

What is the best practice for testing a function with a list of inputs and outputs with python's unittest module? [duplicate]

I have some kind of test data and want to create a unit test for each item. My first idea was to do it like this:
import unittest

l = [["foo", "a", "a"], ["bar", "a", "b"], ["lee", "b", "b"]]

class TestSequence(unittest.TestCase):
    def testsample(self):
        for name, a, b in l:
            print "test", name
            self.assertEqual(a, b)

if __name__ == '__main__':
    unittest.main()
The downside of this is that it handles all data in one test. I would like to generate one test for each item on the fly. Any suggestions?
This is called "parametrization".
There are several tools that support this approach. E.g.:
pytest's decorator
parameterized
The resulting code looks like this:
import unittest

from parameterized import parameterized

class TestSequence(unittest.TestCase):
    @parameterized.expand([
        ["foo", "a", "a"],
        ["bar", "a", "b"],
        ["lee", "b", "b"],
    ])
    def test_sequence(self, name, a, b):
        self.assertEqual(a, b)
Which will generate the tests:
test_sequence_0_foo (__main__.TestSequence) ... ok
test_sequence_1_bar (__main__.TestSequence) ... FAIL
test_sequence_2_lee (__main__.TestSequence) ... ok
======================================================================
FAIL: test_sequence_1_bar (__main__.TestSequence)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/parameterized/parameterized.py", line 233, in <lambda>
standalone_func = lambda *a: func(*(a + p.args), **p.kwargs)
File "x.py", line 12, in test_sequence
self.assertEqual(a,b)
AssertionError: 'a' != 'b'
For historical reasons I'll leave the original answer (circa 2008):
I use something like this:
import unittest

l = [["foo", "a", "a"], ["bar", "a", "b"], ["lee", "b", "b"]]

class TestSequence(unittest.TestCase):
    pass

def test_generator(a, b):
    def test(self):
        self.assertEqual(a, b)
    return test

if __name__ == '__main__':
    for t in l:
        test_name = 'test_%s' % t[0]
        test = test_generator(t[1], t[2])
        setattr(TestSequence, test_name, test)
    unittest.main()
Using unittest (since 3.4)
Since Python 3.4, the standard library unittest package has the subTest context manager.
See the documentation:
26.4.7. Distinguishing test iterations using subtests
subTest
Example:
from unittest import TestCase

param_list = [('a', 'a'), ('a', 'b'), ('b', 'b')]

class TestDemonstrateSubtest(TestCase):
    def test_works_as_expected(self):
        for p1, p2 in param_list:
            with self.subTest():
                self.assertEqual(p1, p2)
You can also specify a custom message and parameter values to subTest():
with self.subTest(msg="Checking if p1 equals p2", p1=p1, p2=p2):
Using nose
The nose testing framework supports this.
Example (the code below is the entire contents of the file containing the test):
param_list = [('a', 'a'), ('a', 'b'), ('b', 'b')]

def test_generator():
    for params in param_list:
        yield check_em, params[0], params[1]

def check_em(a, b):
    assert a == b
The output of the nosetests command:
> nosetests -v
testgen.test_generator('a', 'a') ... ok
testgen.test_generator('a', 'b') ... FAIL
testgen.test_generator('b', 'b') ... ok
======================================================================
FAIL: testgen.test_generator('a', 'b')
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.5/site-packages/nose-0.10.1-py2.5.egg/nose/case.py", line 203, in runTest
self.test(*self.arg)
File "testgen.py", line 7, in check_em
assert a == b
AssertionError
----------------------------------------------------------------------
Ran 3 tests in 0.006s
FAILED (failures=1)
This can be solved elegantly using Metaclasses:
import unittest

l = [["foo", "a", "a"], ["bar", "a", "b"], ["lee", "b", "b"]]

class TestSequenceMeta(type):
    def __new__(mcs, name, bases, dict):
        def gen_test(a, b):
            def test(self):
                self.assertEqual(a, b)
            return test

        for tname, a, b in l:
            test_name = "test_%s" % tname
            dict[test_name] = gen_test(a, b)

        return type.__new__(mcs, name, bases, dict)

class TestSequence(unittest.TestCase):
    __metaclass__ = TestSequenceMeta

if __name__ == '__main__':
    unittest.main()
As of Python 3.4, subtests have been introduced to unittest for this purpose. See the documentation for details. TestCase.subTest is a context manager which allows one to isolate asserts in a test so that a failure will be reported with parameter information, but it does not stop the test execution. Here's the example from the documentation:
class NumbersTest(unittest.TestCase):
    def test_even(self):
        """
        Test that numbers between 0 and 5 are all even.
        """
        for i in range(0, 6):
            with self.subTest(i=i):
                self.assertEqual(i % 2, 0)
The output of a test run would be:
======================================================================
FAIL: test_even (__main__.NumbersTest) (i=1)
----------------------------------------------------------------------
Traceback (most recent call last):
File "subtests.py", line 32, in test_even
self.assertEqual(i % 2, 0)
AssertionError: 1 != 0
======================================================================
FAIL: test_even (__main__.NumbersTest) (i=3)
----------------------------------------------------------------------
Traceback (most recent call last):
File "subtests.py", line 32, in test_even
self.assertEqual(i % 2, 0)
AssertionError: 1 != 0
======================================================================
FAIL: test_even (__main__.NumbersTest) (i=5)
----------------------------------------------------------------------
Traceback (most recent call last):
File "subtests.py", line 32, in test_even
self.assertEqual(i % 2, 0)
AssertionError: 1 != 0
This is also part of unittest2, so it is available for earlier versions of Python.
load_tests is a little known mechanism introduced in 2.7 to dynamically create a TestSuite. With it, you can easily create parametrized tests.
For example:
import unittest

class GeneralTestCase(unittest.TestCase):
    def __init__(self, methodName, param1=None, param2=None):
        super(GeneralTestCase, self).__init__(methodName)
        self.param1 = param1
        self.param2 = param2

    def runTest(self):
        pass  # Test that depends on param 1 and 2.

def load_tests(loader, tests, pattern):
    test_cases = unittest.TestSuite()
    for p1, p2 in [(1, 2), (3, 4)]:
        test_cases.addTest(GeneralTestCase('runTest', p1, p2))
    return test_cases
That code will run all the TestCases in the TestSuite returned by load_tests. No other tests are automatically run by the discovery mechanism.
Alternatively, you can also use inheritance as shown in this ticket: http://bugs.python.org/msg151444
It can be done by using pytest. Just write the file test_me.py with content:
import pytest

@pytest.mark.parametrize('name, left, right', [['foo', 'a', 'a'],
                                               ['bar', 'a', 'b'],
                                               ['baz', 'b', 'b']])
def test_me(name, left, right):
    assert left == right, name
And run your test with command py.test --tb=short test_me.py. Then the output will look like:
=========================== test session starts ============================
platform darwin -- Python 2.7.6 -- py-1.4.23 -- pytest-2.6.1
collected 3 items
test_me.py .F.
================================= FAILURES =================================
_____________________________ test_me[bar-a-b] _____________________________
test_me.py:8: in test_me
assert left == right, name
E AssertionError: bar
==================== 1 failed, 2 passed in 0.01 seconds ====================
It is simple! pytest also has more features, such as fixtures, marks, and plain assert introspection.
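For instance, a small sketch of a pytest fixture (the fixture and test names here are made up purely for illustration):
import pytest

@pytest.fixture
def numbers():
    # pytest calls this and passes the return value to any test
    # that names "numbers" as an argument.
    return [1, 2, 3]

def test_sum(numbers):
    assert sum(numbers) == 6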
Use the ddt library. It adds simple decorators for the test methods:
import unittest
from ddt import ddt, data
from mycode import larger_than_two

@ddt
class FooTestCase(unittest.TestCase):
    @data(3, 4, 12, 23)
    def test_larger_than_two(self, value):
        self.assertTrue(larger_than_two(value))

    @data(1, -3, 2, 0)
    def test_not_larger_than_two(self, value):
        self.assertFalse(larger_than_two(value))
This library can be installed with pip. It doesn't require nose, and it works well with the standard library unittest module.
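For example (the package name on PyPI is ddt):
pip install ddt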
You would benefit from trying the TestScenarios library.
testscenarios provides clean dependency injection for python unittest style tests. This can be used for interface testing (testing many implementations via a single test suite) or for classic dependency injection (provide tests with dependencies externally to the test code itself, allowing easy testing in different situations).
There's also Hypothesis, which adds fuzz or property-based testing.
This is a very powerful testing method.
This is effectively the same as parameterized as mentioned in a previous answer, but specific to unittest:
import functools

def sub_test(param_list):
    """Decorates a test case to run it as a set of subtests."""
    def decorator(f):
        @functools.wraps(f)
        def wrapped(self):
            for param in param_list:
                with self.subTest(**param):
                    f(self, **param)
        return wrapped
    return decorator
Example usage:
class TestStuff(unittest.TestCase):
    @sub_test([
        dict(arg1='a', arg2='b'),
        dict(arg1='x', arg2='y'),
    ])
    def test_stuff(self, arg1, arg2):
        ...
You can use the nose-ittr plugin (pip install nose-ittr).
It's very easy to integrate with existing tests, and minimal changes (if any) are required. It also supports the nose multiprocessing plugin.
Note that you can also have a customized setup function per test.
@ittr(number=[1, 2, 3, 4])
def test_even(self):
    assert_equal(self.number % 2, 0)
It is also possible to pass nosetests parameters as with their built-in plugin attrib. This way you can run only a specific test with a specific parameter:
nosetests -a number=2
I use metaclasses and decorators to generate tests. You can check my implementation python_wrap_cases. This library doesn't require any test frameworks.
Your example:
import unittest
from python_wrap_cases import wrap_case

@wrap_case
class TestSequence(unittest.TestCase):
    @wrap_case("foo", "a", "a")
    @wrap_case("bar", "a", "b")
    @wrap_case("lee", "b", "b")
    def testsample(self, name, a, b):
        print "test", name
        self.assertEqual(a, b)
Console output:
testsample_u'bar'_u'a'_u'b' (tests.example.test_stackoverflow.TestSequence) ... test bar
FAIL
testsample_u'foo'_u'a'_u'a' (tests.example.test_stackoverflow.TestSequence) ... test foo
ok
testsample_u'lee'_u'b'_u'b' (tests.example.test_stackoverflow.TestSequence) ... test lee
ok
You may also use generators. For example, this code generates all possible combinations of tests with the arguments a__list and b__list:
import unittest
from python_wrap_cases import wrap_case

@wrap_case
class TestSequence(unittest.TestCase):
    @wrap_case(a__list=["a", "b"], b__list=["a", "b"])
    def testsample(self, a, b):
        self.assertEqual(a, b)
Console output:
testsample_a(u'a')_b(u'a') (tests.example.test_stackoverflow.TestSequence) ... ok
testsample_a(u'a')_b(u'b') (tests.example.test_stackoverflow.TestSequence) ... FAIL
testsample_a(u'b')_b(u'a') (tests.example.test_stackoverflow.TestSequence) ... FAIL
testsample_a(u'b')_b(u'b') (tests.example.test_stackoverflow.TestSequence) ... ok
I came across ParamUnittest the other day when looking at the source code for radon (example usage on the GitHub repository). It should work with other frameworks that extend TestCase (like Nose).
Here is an example:
import unittest
import paramunittest

@paramunittest.parametrized(
    ('1', '2'),
    # (4, 3), <---- Uncomment to have a failing test
    ('2', '3'),
    (('4', ), {'b': '5'}),
    ((), {'a': 5, 'b': 6}),
    {'a': 5, 'b': 6},
)
class TestBar(TestCase):
    def setParameters(self, a, b):
        self.a = a
        self.b = b

    def testLess(self):
        self.assertLess(self.a, self.b)
import unittest

def generator(test_class, a, b):
    def test(self):
        self.assertEqual(a, b)
    return test

def _add_test_methods(test_class):
    # The first element of the list is variable "a", then variable "b", then the name of the test case that will be used as a suffix.
    test_list = [[2, 3, 'one'], [5, 5, 'two'], [0, 0, 'three']]
    for case in test_list:
        test = generator(test_class, case[0], case[1])
        setattr(test_class, "test_%s" % case[2], test)

class TestAuto(unittest.TestCase):
    def setUp(self):
        print 'Setup'

    def tearDown(self):
        print 'TearDown'

_add_test_methods(TestAuto)  # It's better to start with an underscore so it is not detected as a test itself

if __name__ == '__main__':
    unittest.main(verbosity=1)
RESULT:
>>>
Setup
FTearDown
Setup
TearDown
.Setup
TearDown
.
======================================================================
FAIL: test_one (__main__.TestAuto)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:/inchowar/Desktop/PyTrash/test_auto_3.py", line 5, in test
self.assertEqual(a, b)
AssertionError: 2 != 3
----------------------------------------------------------------------
Ran 3 tests in 0.019s
FAILED (failures=1)
Just use metaclasses, as seen here:
from unittest import TestCase

class DocTestMeta(type):
    """
    Test functions are generated in the metaclass due to the way some
    test loaders work. For example, setUpClass() won't get called
    unless there are other existing test methods, and it will also
    prevent unit test loader logic being called before the test
    methods have been defined.
    """
    def __init__(self, name, bases, attrs):
        super(DocTestMeta, self).__init__(name, bases, attrs)

    def __new__(cls, name, bases, attrs):
        def func(self):
            """Inner test method goes here"""
            self.assertTrue(1)
        func.__name__ = 'test_sample'
        attrs[func.__name__] = func
        return super(DocTestMeta, cls).__new__(cls, name, bases, attrs)

class ExampleTestCase(TestCase):
    """Our example test case, with no methods defined"""
    __metaclass__ = DocTestMeta
Output:
test_sample (ExampleTestCase) ... OK
I'd been having trouble with a very particular style of parameterized tests. All our Selenium tests can run locally, but they also should be able to be run remotely against several platforms on SauceLabs. Basically, I wanted to take a large amount of already-written test cases and parameterize them with the fewest changes to code possible. Furthermore, I needed to be able to pass the parameters into the setUp method, something which I haven't seen any solutions for elsewhere.
Here's what I've come up with:
import inspect
import types

test_platforms = [
    {'browserName': "internet explorer", 'platform': "Windows 7", 'version': "10.0"},
    {'browserName': "internet explorer", 'platform': "Windows 7", 'version': "11.0"},
    {'browserName': "firefox", 'platform': "Linux", 'version': "43.0"},
]

def sauce_labs():
    def wrapper(cls):
        return test_on_platforms(cls)
    return wrapper

def test_on_platforms(base_class):
    for name, function in inspect.getmembers(base_class, inspect.isfunction):
        if name.startswith('test_'):
            for platform in test_platforms:
                new_name = '_'.join(list([name, ''.join(platform['browserName'].title().split()), platform['version']]))
                new_function = types.FunctionType(function.__code__, function.__globals__, new_name,
                                                  function.__defaults__, function.__closure__)
                setattr(new_function, 'platform', platform)
                setattr(base_class, new_name, new_function)
            delattr(base_class, name)
    return base_class
With this, all I had to do was add a simple decorator @sauce_labs() to each regular old TestCase. When they run, they're wrapped up and rewritten, so that all the test methods are parameterized and renamed: LoginTests.test_login(self) runs as LoginTests.test_login_InternetExplorer_10.0(self), LoginTests.test_login_InternetExplorer_11.0(self), and LoginTests.test_login_Firefox_43.0(self), and each one has the parameter self.platform to decide which browser/platform to run against, even in LoginTests.setUp, which is crucial for my task since that's where the connection to SauceLabs is initialized.
Anyway, I hope this might be of help to someone looking to do a similar "global" parameterization of their tests!
You can use TestSuite and custom TestCase classes.
import unittest

class CustomTest(unittest.TestCase):
    def __init__(self, name, a, b):
        super().__init__()
        self.name = name
        self.a = a
        self.b = b

    def runTest(self):
        print("test", self.name)
        self.assertEqual(self.a, self.b)

if __name__ == '__main__':
    suite = unittest.TestSuite()
    suite.addTest(CustomTest("Foo", 1337, 1337))
    suite.addTest(CustomTest("Bar", 0xDEAD, 0xC0DE))
    unittest.TextTestRunner().run(suite)
I have found that this works well for my purposes, especially if I need to generate tests that do slightly different processing on a collection of data.
import unittest

def rename(newName):
    def renamingFunc(func):
        func.__name__ = newName
        return func
    return renamingFunc

class TestGenerator(unittest.TestCase):
    TEST_DATA = {}

    @classmethod
    def generateTests(cls):
        for dataName, dataValue in cls.TEST_DATA.items():
            for func in cls.getTests(dataName, dataValue):
                setattr(cls, "test_{:s}_{:s}".format(func.__name__, dataName), func)

    @classmethod
    def getTests(cls, dataName, dataValue):
        raise NotImplementedError("This must be implemented")

class TestCluster(TestGenerator):
    TEST_CASES = []

    @staticmethod
    def getTests(dataName, dataValue):
        def makeTest(case):
            @rename("{:s}".format(case["name"]))
            def test(self):
                # Do things with self, case, data
                pass
            return test
        return [makeTest(c) for c in TestCluster.TEST_CASES]

TestCluster.generateTests()
The TestGenerator class can be used to spawn different sets of test cases like TestCluster.
TestCluster can be thought of as an implementation of the TestGenerator interface.
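As a hypothetical usage sketch (all names and data below are invented purely for illustration), a concrete subclass fills in TEST_DATA, TEST_CASES and getTests(), then generates its tests before unittest collects the class:
class MyCluster(TestGenerator):
    # Invented example values; a real subclass would supply its own data and cases.
    TEST_DATA = {"small": [3, 1, 2], "empty": []}
    TEST_CASES = [{"name": "is_a_list"}]

    @staticmethod
    def getTests(dataName, dataValue):
        def makeTest(case):
            def test(self):
                # Invented check just to show where the real assertions would go.
                self.assertIsInstance(dataValue, list)
            test.__name__ = case["name"]
            return test
        return [makeTest(c) for c in MyCluster.TEST_CASES]

MyCluster.generateTests()  # adds test_is_a_list_small and test_is_a_list_empty

if __name__ == "__main__":
    unittest.main()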
This solution works with unittest and nose for Python 2 and Python 3:
#!/usr/bin/env python
import unittest

def make_function(description, a, b):
    def ghost(self):
        self.assertEqual(a, b, description)
        print(description)
    ghost.__name__ = 'test_{0}'.format(description)
    return ghost

class TestsContainer(unittest.TestCase):
    pass

testsmap = {
    'foo': [1, 1],
    'bar': [1, 2],
    'baz': [5, 5]}

def generator():
    for name, params in testsmap.items():  # .items() works on both Python 2 and 3
        test_func = make_function(name, params[0], params[1])
        setattr(TestsContainer, 'test_{0}'.format(name), test_func)

generator()

if __name__ == '__main__':
    unittest.main()
Meta-programming is fun, but it can get in the way. Most solutions here make it difficult to:
selectively launch a test
point back to the code given the test's name
So, my first suggestion is to follow the simple/explicit path (works with any test runner):
import unittest

class TestSequence(unittest.TestCase):
    def _test_complex_property(self, a, b):
        self.assertEqual(a, b)

    def test_foo(self):
        self._test_complex_property("a", "a")

    def test_bar(self):
        self._test_complex_property("a", "b")

    def test_lee(self):
        self._test_complex_property("b", "b")

if __name__ == '__main__':
    unittest.main()
Since we shouldn't repeat ourselves, my second suggestion builds on Javier's answer: embrace property based testing. Hypothesis library:
is "more relentlessly devious about test case generation than us mere humans"
will provide simple counterexamples
works with any test runner
has many more interesting features (statistics, additional test output, ...)
from hypothesis import given, example
import hypothesis.strategies as st

class TestSequence(unittest.TestCase):
    @given(st.text(), st.text())
    def test_complex_property(self, a, b):
        self.assertEqual(a, b)
To test your specific examples, just add:
@example("a", "a")
@example("a", "b")
@example("b", "b")
To run only one particular example, you can comment out the other examples (the provided examples are run first). You may want to use @given(st.nothing()). Another option is to replace the whole block by:
@given(st.just("a"), st.just("b"))
OK, you don't have distinct test names. But maybe you just need:
a descriptive name of the property under test.
which input leads to failure (falsifying example).
Funnier example
I had trouble making these work for setUpClass.
Here's a version of Javier's answer that gives setUpClass access to dynamically allocated attributes.
import unittest

class GeneralTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        print ''
        print cls.p1
        print cls.p2

    def runTest1(self):
        self.assertTrue((self.p2 - self.p1) == 1)

    def runTest2(self):
        self.assertFalse((self.p2 - self.p1) == 2)

def load_tests(loader, tests, pattern):
    test_cases = unittest.TestSuite()
    for p1, p2 in [(1, 2), (3, 4)]:
        clsname = 'TestCase_{}_{}'.format(p1, p2)
        dct = {
            'p1': p1,
            'p2': p2,
        }
        cls = type(clsname, (GeneralTestCase,), dct)
        test_cases.addTest(cls('runTest1'))
        test_cases.addTest(cls('runTest2'))
    return test_cases
Outputs
1
2
..
3
4
..
----------------------------------------------------------------------
Ran 4 tests in 0.000s
OK
The metaclass-based answers still work in Python 3, but instead of the __metaclass__ attribute, one has to use the metaclass parameter, as in:
class ExampleTestCase(TestCase, metaclass=DocTestMeta):
    pass
import unittest

def generator(test_class, a, b, c, d, name):
    def test(self):
        print('Testexecution=', name)
        print('a=', a)
        print('b=', b)
        print('c=', c)
        print('d=', d)
    return test

def add_test_methods(test_class):
    test_list = [[3, 3, 5, 6, 'one'], [5, 5, 8, 9, 'two'], [0, 0, 5, 6, 'three'], [0, 0, 2, 3, 'Four']]
    for case in test_list:
        print('case=', case[0], case[1], case[2], case[3], case[4])
        test = generator(test_class, case[0], case[1], case[2], case[3], case[4])
        setattr(test_class, "test_%s" % case[4], test)

class TestAuto(unittest.TestCase):
    def setUp(self):
        print('Setup')

    def tearDown(self):
        print('TearDown')

add_test_methods(TestAuto)

if __name__ == '__main__':
    unittest.main(verbosity=1)
The following is my solution. I find it useful when:
it should work with unittest.TestCase and unittest discover
there is a set of tests to be run with different parameter settings
it is very simple, with no dependency on other packages
import unittest

class BaseClass(unittest.TestCase):
    def setUp(self):
        self.param = 2
        self.base = 2

    def test_me(self):
        self.assertGreaterEqual(5, self.param + self.base)

    def test_me_too(self):
        self.assertLessEqual(3, self.param + self.base)

class Child_One(BaseClass):
    def setUp(self):
        BaseClass.setUp(self)
        self.param = 4

class Child_Two(BaseClass):
    def setUp(self):
        BaseClass.setUp(self)
        self.param = 1
Besides using setattr, we can use load_tests with Python 3.2 and later.
import os
import unittest

class Test(unittest.TestCase):
    pass

def _test(self, file_name):
    with open(file_name, 'r') as f:
        self.assertEqual('test result', f.read())

def _generate_test(file_name):
    def test(self):
        _test(self, file_name)
    return test

def _generate_tests():
    # `files` is assumed to be a list of file paths defined elsewhere.
    for file in files:
        file_name = os.path.splitext(os.path.basename(file))[0]
        setattr(Test, 'test_%s' % file_name, _generate_test(file))

test_cases = (Test,)

def load_tests(loader, tests, pattern):
    _generate_tests()
    suite = unittest.TestSuite()
    for test_class in test_cases:
        tests = loader.loadTestsFromTestCase(test_class)
        suite.addTests(tests)
    return suite

if __name__ == '__main__':
    _generate_tests()
    unittest.main()

How to run unittest test cases in the order they are declared

I fully realize that the order of unit tests should not matter. But these unit tests are as much for instructional use as for actual unit testing, so I would like the test output to match up with the test case source code.
I see that there is a way to set the sort order by setting the sortTestMethodsUsing attribute on the test loader. The default is a simple cmp() call to lexically compare names. So I tried writing a cmp-like function that would take two names, find their declaration line numbers and then return the cmp()-equivalent of them:
import unittest

class TestCaseB(unittest.TestCase):
    def test(self):
        print("running test case B")

class TestCaseA(unittest.TestCase):
    def test(self):
        print("running test case A")

import inspect

def get_decl_line_no(cls_name):
    cls = globals()[cls_name]
    return inspect.getsourcelines(cls)[1]

def sgn(x):
    return -1 if x < 0 else 1 if x > 0 else 0

def cmp_class_names_by_decl_order(cls_a, cls_b):
    a = get_decl_line_no(cls_a)
    b = get_decl_line_no(cls_b)
    return sgn(a - b)

unittest.defaultTestLoader.sortTestMethodsUsing = cmp_class_names_by_decl_order
unittest.main()
When I run this, I get this output:
running test case A
.running test case B
.
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK
indicating that the test cases are not running in the declaration order.
My sort function is just not being called, so I suspect that main() is building a new test loader, which is wiping out my sort function.
The solution is to create a TestSuite explicitly, instead of letting unittest.main() follow all its default test discovery and ordering behavior. Here's how I got it to work:
import unittest

class TestCaseB(unittest.TestCase):
    def runTest(self):
        print("running test case B")

class TestCaseA(unittest.TestCase):
    def runTest(self):
        print("running test case A")

import inspect

def get_decl_line_no(cls):
    return inspect.getsourcelines(cls)[1]

# get all test cases defined in this module
test_case_classes = list(filter(lambda c: c.__name__ in globals(),
                                unittest.TestCase.__subclasses__()))

# sort them by decl line no
test_case_classes.sort(key=get_decl_line_no)

# make into a suite and run it
suite = unittest.TestSuite(cls() for cls in test_case_classes)
unittest.TextTestRunner().run(suite)
This gives the desired output:
running test case B
.running test case A
.
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK
It is important to note that the test method in each class must be named runTest.
You can manually build a TestSuite where your TestCases and all tests inside them run by line number:
# Python 3.8.3
import unittest
import sys
import inspect

def isTestClass(x):
    return inspect.isclass(x) and issubclass(x, unittest.TestCase)

def isTestFunction(x):
    return inspect.isfunction(x) and x.__name__.startswith("test")

class TestB(unittest.TestCase):
    def test_B(self):
        print("Running test_B")
        self.assertEqual((2+2), 4)

    def test_A(self):
        print("Running test_A")
        self.assertEqual((2+2), 4)

    def setUpClass():
        print("TestB Class Setup")

class TestA(unittest.TestCase):
    def test_A(self):
        print("Running test_A")
        self.assertEqual((2+2), 4)

    def test_B(self):
        print("Running test_B")
        self.assertEqual((2+2), 4)

    def setUpClass():
        print("TestA Class Setup")

def suite():
    # get current module object
    module = sys.modules[__name__]

    # get all test className,class tuples in current module
    testClasses = [
        tup for tup in
        inspect.getmembers(module, isTestClass)
    ]

    # sort classes by line number
    testClasses.sort(key=lambda t: inspect.getsourcelines(t[1])[1])

    testSuite = unittest.TestSuite()
    for testClass in testClasses:
        # get list of testFunctionName,testFunction tuples in current class
        classTests = [
            tup for tup in
            inspect.getmembers(testClass[1], isTestFunction)
        ]
        # sort TestFunctions by line number
        classTests.sort(key=lambda t: inspect.getsourcelines(t[1])[1])
        # create TestCase instances and add to testSuite
        for test in classTests:
            testSuite.addTest(testClass[1](test[0]))
    return testSuite

if __name__ == '__main__':
    runner = unittest.TextTestRunner()
    runner.run(suite())
Output:
TestB Class Setup
Running test_B
.Running test_A
.TestA Class Setup
Running test_A
.Running test_B
.
----------------------------------------------------------------------
Ran 4 tests in 0.000s
OK
As stated in the name, sortTestMethodsUsing is used to sort test methods. It is not used to sort classes. (It is not used to sort methods in different classes either; separate classes are handled separately.)
If you had two test methods in the same class, sortTestMethodsUsing would be used to determine their order. (At that point, you would get an exception because your function expects class names.)
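If the goal is instead to order the test methods inside one class by declaration line, here is a hedged sketch along the same lines as the question's comparator (the comparator only receives method names, so it has to know which class to look them up on):
import inspect
import unittest

class TestInOrder(unittest.TestCase):
    def test_beta(self):
        print("declared first")

    def test_alpha(self):
        print("declared second")

def cmp_by_decl_line(name_a, name_b):
    # Tied to one class here, because only the method names are passed in.
    a = inspect.getsourcelines(getattr(TestInOrder, name_a))[1]
    b = inspect.getsourcelines(getattr(TestInOrder, name_b))[1]
    return (a > b) - (a < b)

if __name__ == '__main__':
    loader = unittest.TestLoader()
    loader.sortTestMethodsUsing = cmp_by_decl_line
    unittest.main(testLoader=loader)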

How to capture assert in python script

Is there any way in Python 2.7 to capture and log assert statements in general Python scripts, regardless of whether the assert is True or False?
Suppose I have the following assert in the code:
assert len(lst) == 4
Is there a way to log which assert statement was executed, at what line, and whether it was true or false? I do not want to use a wrapper function; I'm looking for something built into Python.
Note: what I want to achieve is this: suppose I have legacy code with thousands of assert statements. Without changing the code, I should be able to log which assert statement is executed and what the outcome is. Is this achievable in Python 2.7?
try:
    assert len(lst) == 4
    print "True"
except AssertionError as e:
    print "False"
Yes, you can define a custom excepthook to log some extra information:
import sys
import logging

def excepthook(*args):
    logging.getLogger().error('Uncaught exception:', exc_info=args)

sys.excepthook = excepthook

assert 1 == 2
EDIT: Whoops I forgot you wanted to log even if it's true :) oh well I'll leave this for a little bit in case it informs you or someone else...
This is the closest I could get, as it does not seem to be possible in Python 2.7. I created a wrapper function.
import inspect

def assertit(condition, message):
    # The caller method
    (frame, filename, line_number, function_name, lines, index) = inspect.getouterframes(inspect.currentframe())[1]
    detailed_message = " => {message} [ {filename} : {line_number}]".format(message=message,
                                                                            line_number=line_number,
                                                                            filename=filename)
    if condition:
        print "True %s" % detailed_message
        return
    raise AssertionError("False: %s" % detailed_message)

assertit(1 == 1, "Check if 1 equal 1")
assertit(1 == 2, "Check if 1 equal 2")
### HERE IS THE OUTPUT
True => Check if 1 equal 1 [ test_cases.py : 20]
Traceback (most recent call last):
File "test_cases.py", line 21, in <module>
assertit(1==2,"Check if 1 equal 2")
File "test_cases.py", line 19, in assertit
raise AssertionError("False: %s"%detailed_message)
AssertionError: False: => Check if 1 equal 2 [ test_cases.py : 21]
This is a proof of concept. So please downvote, or I will delete it...
The idea is to replace the assert statement with a print statement at execution time.
import ast
import inspect
from your_module import main

def my_assert(test_result, msg):
    assert test_result, msg
    # if test_result is True, just print the msg
    return "assert: {}, {}".format(test_result, msg)

class Transformer(ast.NodeTransformer):
    def visit_Assert(self, node):
        f = ast.Name(id='my_assert', ctx=ast.Load())
        c = ast.Call(func=f, args=[node.test, node.msg], keywords=[])
        p = ast.Print(values=[c], nl=True)
        # set line#, column offset same as the original assert statement
        f.lineno = c.lineno = p.lineno = node.lineno
        f.col_offset = c.col_offset = p.col_offset = node.col_offset
        return p

def replace_assert_with_print(func):
    source = inspect.getsource(func)
    ast_tree = ast.parse(source)
    Transformer().visit(ast_tree)
    return compile(ast_tree, filename="<ast>", mode="exec")

if __name__ == '__main__':
    exec(replace_assert_with_print(main))
    main(4)
And here is your_module.py
def main(x):
    assert x == 4, "hey {} is not 4".format(x)
    return x

Getting Python's unittest results in a tearDown() method

Is it possible to get the results of a test (i.e. whether all assertions have passed) in a tearDown() method? I'm running Selenium scripts, and I'd like to do some reporting from inside tearDown(), however I don't know if this is possible.
As of March 2022 this answer is updated to support Python versions between 3.4 and 3.11 (including the newest development Python version). Classification of errors / failures is the same that is used in the output unittest. It works without any modification of code before tearDown(). It correctly recognizes decorators skipIf() and expectedFailure. It is compatible also with pytest.
Code:
import unittest

class MyTest(unittest.TestCase):
    def tearDown(self):
        if hasattr(self._outcome, 'errors'):
            # Python 3.4 - 3.10 (These two methods have no side effects)
            result = self.defaultTestResult()
            self._feedErrorsToResult(result, self._outcome.errors)
        else:
            # Python 3.11+
            result = self._outcome.result
        ok = all(test != self for test, text in result.errors + result.failures)

        # Demo output: (print short info immediately - not important)
        if ok:
            print('\nOK: %s' % (self.id(),))
        for typ, errors in (('ERROR', result.errors), ('FAIL', result.failures)):
            for test, text in errors:
                if test is self:
                    # the full traceback is in the variable `text`
                    msg = [x for x in text.split('\n')[1:]
                           if not x.startswith(' ')][0]
                    print("\n\n%s: %s\n     %s" % (typ, self.id(), msg))
If you don't need the exception info, the second half can be removed. If you also want the tracebacks, use the whole variable text instead of msg. The only thing it can't recognize is an unexpected success in an expectedFailure block.
Example test methods:
def test_error(self):
    self.assertEqual(1 / 0, 1)

def test_fail(self):
    self.assertEqual(2, 1)

def test_success(self):
    self.assertEqual(1, 1)
Example output:
$ python3 -m unittest test
ERROR: q.MyTest.test_error
ZeroDivisionError: division by zero
E
FAIL: q.MyTest.test_fail
AssertionError: 2 != 1
F
OK: q.MyTest.test_success
.
======================================================================
... skipped the usual output from unittest with tracebacks ...
...
Ran 3 tests in 0.001s
FAILED (failures=1, errors=1)
Complete code including expectedFailure decorator example
EDIT: When I updated this solution to Python 3.11, I dropped everything related to old Python < 3.4 and also many minor notes.
If you take a look at the implementation of unittest.TestCase.run, you can see that all test results are collected in the result object (typically a unittest.TestResult instance) passed as argument. No result status is left in the unittest.TestCase object.
So there isn't much you can do in the unittest.TestCase.tearDown method unless you mercilessly break the elegant decoupling of test cases and test results with something like this:
import unittest

class MyTest(unittest.TestCase):
    currentResult = None  # Holds last result object passed to run method

    def setUp(self):
        pass

    def tearDown(self):
        ok = self.currentResult.wasSuccessful()
        errors = self.currentResult.errors
        failures = self.currentResult.failures
        print ' All tests passed so far!' if ok else \
              ' %d errors and %d failures so far' % \
              (len(errors), len(failures))

    def run(self, result=None):
        self.currentResult = result  # Remember result for use in tearDown
        unittest.TestCase.run(self, result)  # call superclass run method

    def test_onePlusOneEqualsTwo(self):
        self.assertTrue(1 + 1 == 2)  # Succeeds

    def test_onePlusOneEqualsThree(self):
        self.assertTrue(1 + 1 == 3)  # Fails

    def test_onePlusNoneIsNone(self):
        self.assertTrue(1 + None is None)  # Raises TypeError

if __name__ == '__main__':
    unittest.main()
This works for Python 2.6 - 3.3 (modified for new Python below).
CAVEAT: I have no way of double checking the following theory at the moment, being away from a dev box. So this may be a shot in the dark.
Perhaps you could check the return value of sys.exc_info() inside your tearDown() method, if it returns (None, None, None), you know the test case succeeded. Otherwise, you could use returned tuple to interrogate the exception object.
See sys.exc_info documentation.
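A minimal sketch of that idea (whether the assertion failure is still visible to sys.exc_info() inside tearDown() depends on the Python/unittest version, per the caveat above):
import sys
import unittest

class MyTest(unittest.TestCase):
    def tearDown(self):
        exc_type, exc_value, _ = sys.exc_info()
        if exc_type is None:
            print("tearDown: no exception recorded, test presumably passed")
        else:
            print("tearDown: %s: %s" % (exc_type.__name__, exc_value))

    def test_something(self):
        self.assertEqual(1, 2)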
Another more explicit approach is to write a method decorator that you could slap onto all your test case methods that require this special handling. This decorator can intercept assertion exceptions and based on that modify some state in self allowing your tearDown method to learn what's up.
@assertion_tracker
def test_foo(self):
    # some test logic
It depends what kind of reporting you'd like to produce.
In case you'd like to do some actions on failure (such as generating a screenshots), instead of using tearDown(), you may achieve that by overriding failureException.
For example:
@property
def failureException(self):
    class MyFailureException(AssertionError):
        def __init__(self_, *args, **kwargs):
            screenshot_dir = 'reports/screenshots'
            if not os.path.exists(screenshot_dir):
                os.makedirs(screenshot_dir)
            self.driver.save_screenshot('{0}/{1}.png'.format(screenshot_dir, self.id()))
            return super(MyFailureException, self_).__init__(*args, **kwargs)
    MyFailureException.__name__ = AssertionError.__name__
    return MyFailureException
If you are using Python 2 you can use the method _resultForDoCleanups. This method returns a TextTestResult object:
<unittest.runner.TextTestResult run=1 errors=0 failures=0>
You can use this object to check the result of your tests:
def tearDown(self):
    if self._resultForDoCleanups.failures:
        ...
    elif self._resultForDoCleanups.errors:
        ...
    else:
        # Success
        ...
If you are using Python 3 you can use _outcomeForDoCleanups:
def tearDown(self):
    if not self._outcomeForDoCleanups.success:
        ...
Following on from amatellanes' answer, if you're on Python 3.4, you can't use _outcomeForDoCleanups. Here's what I managed to hack together:
def _test_has_failed(self):
    for method, error in self._outcome.errors:
        if error:
            return True
    return False
It is yucky, but it seems to work.
Here's a solution for those of us who are uncomfortable using solutions that rely on unittest internals:
First, we create a decorator that will set a flag on the TestCase instance to determine whether or not the test case failed or passed:
import unittest
import functools
def _tag_error(func):
    """Decorates a unittest test function to add failure information to the TestCase."""
    @functools.wraps(func)
    def decorator(self, *args, **kwargs):
        """Add failure information to `self` when `func` raises an exception."""
        self.test_failed = False
        try:
            func(self, *args, **kwargs)
        except unittest.SkipTest:
            raise
        except Exception:  # pylint: disable=broad-except
            self.test_failed = True
            raise  # re-raise the error with the original traceback.
    return decorator
This decorator is actually pretty simple. It relies on the fact that unittest detects failed tests via Exceptions. As far as I'm aware, the only special exception that needs to be handled is unittest.SkipTest (which does not indicate a test failure). All other exceptions indicate test failures so we mark them as such when they bubble up to us.
We can now use this decorator directly:
class MyTest(unittest.TestCase):
    test_failed = False

    def tearDown(self):
        super(MyTest, self).tearDown()
        print(self.test_failed)

    @_tag_error
    def test_something(self):
        self.fail('Bummer')
It's going to get really annoying writing this decorator all the time. Is there a way we can simplify? Yes there is!* We can write a metaclass to handle applying the decorator for us:
class _TestFailedMeta(type):
    """Metaclass to decorate test methods to append error information to the TestCase instance."""
    def __new__(cls, name, bases, dct):
        for name, prop in dct.items():
            # assume that TestLoader.testMethodPrefix hasn't been messed with -- otherwise, we're hosed.
            if name.startswith('test') and callable(prop):
                dct[name] = _tag_error(prop)
        return super(_TestFailedMeta, cls).__new__(cls, name, bases, dct)
Now we apply this to our base TestCase subclass and we're all set:
import six  # For python2.x/3.x compatibility

class BaseTestCase(six.with_metaclass(_TestFailedMeta, unittest.TestCase)):
    """Base class for all our other tests.

    We don't really need this, but it demonstrates that the
    metaclass gets applied to all subclasses too.
    """

class MyTest(BaseTestCase):
    def tearDown(self):
        super(MyTest, self).tearDown()
        print(self.test_failed)

    def test_something(self):
        self.fail('Bummer')
There are likely a number of cases that this doesn't handle properly. For example, it does not correctly detect failed subtests or expected failures. I'd be interested in other failure modes of this, so if you find a case that I'm not handling properly, let me know in the comments and I'll look into it.
*If there wasn't an easier way, I wouldn't have made _tag_error a private function ;-)
I think the proper answer to your question is that there isn't a clean way to get test results in tearDown(). Most of the answers here involve accessing some private parts of the Python unittest module and in general feel like workarounds. I'd strongly suggest avoiding these since the test results and test cases are decoupled and you should not work against that.
If you are in love with clean code (like I am), I think what you should do instead is instantiate your TestRunner with your own TestResult class. Then you can add whatever reporting you want by overriding these methods:
addError(test, err)
Called when the test case test raises an unexpected exception. err is a tuple of the form returned by sys.exc_info(): (type, value, traceback).
The default implementation appends a tuple (test, formatted_err) to the instance’s errors attribute, where formatted_err is a formatted traceback derived from err.
addFailure(test, err)
Called when the test case test signals a failure. err is a tuple of the form returned by sys.exc_info(): (type, value, traceback).
The default implementation appends a tuple (test, formatted_err) to the instance’s failures attribute, where formatted_err is a formatted traceback derived from err.
addSuccess(test)
Called when the test case test succeeds.
The default implementation does nothing.
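A hedged sketch of that approach: subclass the stock TextTestResult (so the normal console output is kept), override the hooks, and hand the class to the runner via the resultclass argument:
import unittest

class ReportingResult(unittest.TextTestResult):
    def addSuccess(self, test):
        super(ReportingResult, self).addSuccess(test)
        print("PASS: %s" % test.id())

    def addFailure(self, test, err):
        super(ReportingResult, self).addFailure(test, err)
        print("FAIL: %s" % test.id())

    def addError(self, test, err):
        super(ReportingResult, self).addError(test, err)
        print("ERROR: %s" % test.id())

if __name__ == '__main__':
    unittest.main(testRunner=unittest.TextTestRunner(resultclass=ReportingResult))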
In Python 2.7 you can also get the result after unittest.main():
t = unittest.main(exit=False)
print t.result
Or use suite:
suite.addTests(tests)
result = unittest.result.TestResult()
suite.run(result)
print result
Inspired by scoffey's answer, I decided to take mercilessness to the next level, and have come up with the following.
It works both in vanilla unittest and when run via nosetests, and in Python versions 2.7, 3.2, 3.3, and 3.4 (I did not specifically test 3.0, 3.1, or 3.5, as I don't have these installed at the moment, but if I read the source code correctly, it should work in 3.5 as well):
#! /usr/bin/env python
from __future__ import unicode_literals

import logging
import os
import sys
import unittest

# Log file to see squawks during testing
formatter = logging.Formatter(fmt='%(levelname)-8s %(name)s: %(message)s')
log_file = os.path.splitext(os.path.abspath(__file__))[0] + '.log'
handler = logging.FileHandler(log_file)
handler.setFormatter(formatter)
logging.root.addHandler(handler)
logging.root.setLevel(logging.DEBUG)
log = logging.getLogger(__name__)

PY = tuple(sys.version_info)[:3]

class SmartTestCase(unittest.TestCase):
    """Knows its state (pass/fail/error) by the time its tearDown is called."""

    def run(self, result):
        # Store the result on the class so tearDown can behave appropriately
        self.result = result.result if hasattr(result, 'result') else result
        if PY >= (3, 4, 0):
            self._feedErrorsToResultEarly = self._feedErrorsToResult
            self._feedErrorsToResult = lambda *args, **kwargs: None  # no-op
        super(SmartTestCase, self).run(result)

    @property
    def errored(self):
        if (3, 0, 0) <= PY < (3, 4, 0):
            return bool(self._outcomeForDoCleanups.errors)
        return self.id() in [case.id() for case, _ in self.result.errors]

    @property
    def failed(self):
        if (3, 0, 0) <= PY < (3, 4, 0):
            return bool(self._outcomeForDoCleanups.failures)
        return self.id() in [case.id() for case, _ in self.result.failures]

    @property
    def passed(self):
        return not (self.errored or self.failed)

    def tearDown(self):
        if PY >= (3, 4, 0):
            self._feedErrorsToResultEarly(self.result, self._outcome.errors)

class TestClass(SmartTestCase):
    def test_1(self):
        self.assertTrue(True)

    def test_2(self):
        self.assertFalse(True)

    def test_3(self):
        self.assertFalse(False)

    def test_4(self):
        self.assertTrue(False)

    def test_5(self):
        self.assertHerp('Derp')

    def tearDown(self):
        super(TestClass, self).tearDown()
        log.critical('---- RUNNING {} ... -----'.format(self.id()))
        if self.errored:
            log.critical('----- ERRORED -----')
        elif self.failed:
            log.critical('----- FAILED -----')
        else:
            log.critical('----- PASSED -----')

if __name__ == '__main__':
    unittest.main()
When run with unittest:
$ ./test.py -v
test_1 (__main__.TestClass) ... ok
test_2 (__main__.TestClass) ... FAIL
test_3 (__main__.TestClass) ... ok
test_4 (__main__.TestClass) ... FAIL
test_5 (__main__.TestClass) ... ERROR
[…]
$ cat ./test.log
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_1 ... -----
CRITICAL __main__: ----- PASSED -----
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_2 ... -----
CRITICAL __main__: ----- FAILED -----
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_3 ... -----
CRITICAL __main__: ----- PASSED -----
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_4 ... -----
CRITICAL __main__: ----- FAILED -----
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_5 ... -----
CRITICAL __main__: ----- ERRORED -----
When run with nosetests:
$ nosetests ./test.py -v
test_1 (test.TestClass) ... ok
test_2 (test.TestClass) ... FAIL
test_3 (test.TestClass) ... ok
test_4 (test.TestClass) ... FAIL
test_5 (test.TestClass) ... ERROR
$ cat ./test.log
CRITICAL test: ---- RUNNING test.TestClass.test_1 ... -----
CRITICAL test: ----- PASSED -----
CRITICAL test: ---- RUNNING test.TestClass.test_2 ... -----
CRITICAL test: ----- FAILED -----
CRITICAL test: ---- RUNNING test.TestClass.test_3 ... -----
CRITICAL test: ----- PASSED -----
CRITICAL test: ---- RUNNING test.TestClass.test_4 ... -----
CRITICAL test: ----- FAILED -----
CRITICAL test: ---- RUNNING test.TestClass.test_5 ... -----
CRITICAL test: ----- ERRORED -----
Background
I started with this:
class SmartTestCase(unittest.TestCase):
    """Knows its state (pass/fail/error) by the time its tearDown is called."""

    def run(self, result):
        # Store the result on the class so tearDown can behave appropriately
        self.result = result.result if hasattr(result, 'result') else result
        super(SmartTestCase, self).run(result)

    @property
    def errored(self):
        return self.id() in [case.id() for case, _ in self.result.errors]

    @property
    def failed(self):
        return self.id() in [case.id() for case, _ in self.result.failures]

    @property
    def passed(self):
        return not (self.errored or self.failed)
However, this only works in Python 2. In Python 3, up to and including 3.3, the control flow appears to have changed a bit: Python 3’s unittest package processes results after calling each test’s tearDown() method… this behavior can be confirmed if we simply add an extra line (or six) to our test class:
@@ -63,6 +63,12 @@
             log.critical('----- FAILED -----')
         else:
             log.critical('----- PASSED -----')
+        log.warning(
+            'ERRORS THUS FAR:\n'
+            + '\n'.join(tc.id() for tc, _ in self.result.errors))
+        log.warning(
+            'FAILURES THUS FAR:\n'
+            + '\n'.join(tc.id() for tc, _ in self.result.failures))
 if __name__ == '__main__':
Then just rerun the tests:
$ python3.3 ./test.py -v
test_1 (__main__.TestClass) ... ok
test_2 (__main__.TestClass) ... FAIL
test_3 (__main__.TestClass) ... ok
test_4 (__main__.TestClass) ... FAIL
test_5 (__main__.TestClass) ... ERROR
[…]
…and you will see that you get this as a result:
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_1 ... -----
CRITICAL __main__: ----- PASSED -----
WARNING __main__: ERRORS THUS FAR:
WARNING __main__: FAILURES THUS FAR:
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_2 ... -----
CRITICAL __main__: ----- PASSED -----
WARNING __main__: ERRORS THUS FAR:
WARNING __main__: FAILURES THUS FAR:
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_3 ... -----
CRITICAL __main__: ----- PASSED -----
WARNING __main__: ERRORS THUS FAR:
WARNING __main__: FAILURES THUS FAR:
__main__.TestClass.test_2
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_4 ... -----
CRITICAL __main__: ----- PASSED -----
WARNING __main__: ERRORS THUS FAR:
WARNING __main__: FAILURES THUS FAR:
__main__.TestClass.test_2
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_5 ... -----
CRITICAL __main__: ----- PASSED -----
WARNING __main__: ERRORS THUS FAR:
WARNING __main__: FAILURES THUS FAR:
__main__.TestClass.test_2
__main__.TestClass.test_4
Now, compare the above to Python 2’s output:
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_1 ... -----
CRITICAL __main__: ----- PASSED -----
WARNING __main__: ERRORS THUS FAR:
WARNING __main__: FAILURES THUS FAR:
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_2 ... -----
CRITICAL __main__: ----- FAILED -----
WARNING __main__: ERRORS THUS FAR:
WARNING __main__: FAILURES THUS FAR:
__main__.TestClass.test_2
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_3 ... -----
CRITICAL __main__: ----- PASSED -----
WARNING __main__: ERRORS THUS FAR:
WARNING __main__: FAILURES THUS FAR:
__main__.TestClass.test_2
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_4 ... -----
CRITICAL __main__: ----- FAILED -----
WARNING __main__: ERRORS THUS FAR:
WARNING __main__: FAILURES THUS FAR:
__main__.TestClass.test_2
__main__.TestClass.test_4
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_5 ... -----
CRITICAL __main__: ----- ERRORED -----
WARNING __main__: ERRORS THUS FAR:
__main__.TestClass.test_5
WARNING __main__: FAILURES THUS FAR:
__main__.TestClass.test_2
__main__.TestClass.test_4
Since Python 3 processes errors and failures after the test is torn down, we can’t readily infer the result of a test from result.errors or result.failures in every case. (I think it probably makes more sense architecturally to process a test’s results after tearing it down; however, it does make the perfectly valid use case of running a different end-of-test procedure depending on a test’s pass/fail status a bit harder to meet…)
Therefore, instead of relying on the overall result object, we can reference _outcomeForDoCleanups, as others have already mentioned. It contains the result object for the currently running test and has the necessary errors and failures attributes, which we can use to infer a test’s status by the time tearDown() has been called:
@@ -3,6 +3,7 @@
 from __future__ import unicode_literals
 import logging
 import os
+import sys
 import unittest
@@ -16,6 +17,9 @@
 log = logging.getLogger(__name__)
 
+PY = tuple(sys.version_info)[:3]
+
+
 class SmartTestCase(unittest.TestCase):
     """Knows its state (pass/fail/error) by the time its tearDown is called."""
@@ -27,10 +31,14 @@
     @property
     def errored(self):
+        if PY >= (3, 0, 0):
+            return bool(self._outcomeForDoCleanups.errors)
         return self.id() in [case.id() for case, _ in self.result.errors]
     @property
     def failed(self):
+        if PY >= (3, 0, 0):
+            return bool(self._outcomeForDoCleanups.failures)
         return self.id() in [case.id() for case, _ in self.result.failures]
     @property
This adds support for the early versions of Python 3.
As of Python 3.4, however, this private member variable no longer exists, and instead, a new (albeit also private) method was added: _feedErrorsToResult.
This means that for versions 3.4 (and later), if the need is great enough, one can — very hackishly — force one’s way in to make it all work again like it did in version 2…
@@ -27,17 +27,20 @@
     def run(self, result):
         # Store the result on the class so tearDown can behave appropriately
         self.result = result.result if hasattr(result, 'result') else result
+        if PY >= (3, 4, 0):
+            self._feedErrorsToResultEarly = self._feedErrorsToResult
+            self._feedErrorsToResult = lambda *args, **kwargs: None  # no-op
         super(SmartTestCase, self).run(result)
     @property
     def errored(self):
-        if PY >= (3, 0, 0):
+        if (3, 0, 0) <= PY < (3, 4, 0):
             return bool(self._outcomeForDoCleanups.errors)
         return self.id() in [case.id() for case, _ in self.result.errors]
     @property
     def failed(self):
-        if PY >= (3, 0, 0):
+        if (3, 0, 0) <= PY < (3, 4, 0):
             return bool(self._outcomeForDoCleanups.failures)
         return self.id() in [case.id() for case, _ in self.result.failures]
@@ -45,6 +48,10 @@
     def passed(self):
         return not (self.errored or self.failed)
+    def tearDown(self):
+        if PY >= (3, 4, 0):
+            self._feedErrorsToResultEarly(self.result, self._outcome.errors)
+
 class TestClass(SmartTestCase):
@@ -64,6 +71,7 @@
         self.assertHerp('Derp')
     def tearDown(self):
+        super(TestClass, self).tearDown()
         log.critical('---- RUNNING {} ... -----'.format(self.id()))
         if self.errored:
             log.critical('----- ERRORED -----')
…provided, of course, that all consumers of this class remember to call super(…, self).tearDown() in their respective tearDown methods…
Disclaimer: This is purely educational, don’t try this at home, etc. etc. etc. I’m not particularly proud of this solution, but it seems to work well enough for the time being, and is the best I could hack up after fiddling for an hour or two on a Saturday afternoon…
The name of the current test can be retrieved with the unittest.TestCase.id() method. So in tearDown you can check self.id().
The example shows how to:
find out whether the current test has an error or failure in the errors or failures list
print the test id with PASS or FAIL or EXCEPTION
The tested example here works with scoffey's nice example.
def tearDown(self):
    result = "PASS"
    #### Find and show result for current test
    # I did not find any nicer/neater way of comparing self.id() with test id stored in errors or failures lists :-7
    id = str(self.id()).split('.')[-1]
    # id() e.g. tup[0]: <__main__.MyTest testMethod=test_onePlusNoneIsNone>
    # str(tup[0]): "test_onePlusOneEqualsThree (__main__.MyTest)"
    # str(self.id()) = __main__.MyTest.test_onePlusNoneIsNone
    for tup in self.currentResult.failures:
        if str(tup[0]).startswith(id):
            print ' test %s failure:%s' % (self.id(), tup[1])
            ## DO TEST FAIL ACTION HERE
            result = "FAIL"
    for tup in self.currentResult.errors:
        if str(tup[0]).startswith(id):
            print ' test %s error:%s' % (self.id(), tup[1])
            ## DO TEST EXCEPTION ACTION HERE
            result = "EXCEPTION"
    print "Test:%s Result:%s" % (self.id(), result)
Example of result:
python run_scripts/tut2.py 2>&1
E test __main__.MyTest.test_onePlusNoneIsNone error:Traceback (most recent call last):
File "run_scripts/tut2.py", line 80, in test_onePlusNoneIsNone
self.assertTrue(1 + None is None) # raises TypeError
TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'
Test:__main__.MyTest.test_onePlusNoneIsNone Result:EXCEPTION
F test __main__.MyTest.test_onePlusOneEqualsThree failure:Traceback (most recent call last):
File "run_scripts/tut2.py", line 77, in test_onePlusOneEqualsThree
self.assertTrue(1 + 1 == 3) # fails
AssertionError: False is not true
Test:__main__.MyTest.test_onePlusOneEqualsThree Result:FAIL
Test:__main__.MyTest.test_onePlusOneEqualsTwo Result:PASS
.
======================================================================
ERROR: test_onePlusNoneIsNone (__main__.MyTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "run_scripts/tut2.py", line 80, in test_onePlusNoneIsNone
self.assertTrue(1 + None is None) # raises TypeError
TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'
======================================================================
FAIL: test_onePlusOneEqualsThree (__main__.MyTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "run_scripts/tut2.py", line 77, in test_onePlusOneEqualsThree
self.assertTrue(1 + 1 == 3) # fails
AssertionError: False is not true
----------------------------------------------------------------------
Ran 3 tests in 0.001s
FAILED (failures=1, errors=1)
Tested on Python 3.7. This is sample code for getting information about failing assertions, but it can give an idea of how to deal with errors as well:
def tearDown(self):
    if self._outcome.errors[1][1] and hasattr(self._outcome.errors[1][1][1], 'actual'):
        print(self._testMethodName)
        print(self._outcome.errors[1][1][1].actual)
        print(self._outcome.errors[1][1][1].expected)
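A slightly more defensive variant of the same idea (my own sketch, still relying on the private _outcome attribute, which exists roughly in CPython 3.4–3.10 and may change again):
import unittest

class DiagnosticTestCase(unittest.TestCase):

    def tearDown(self):
        # _outcome.errors holds (test, exc_info) pairs; exc_info is None for
        # the parts that succeeded, so skip those.
        for test, exc_info in getattr(self._outcome, 'errors', []):
            if exc_info is not None:
                exc_type, exc_value, _ = exc_info
                print('%s -> %s: %s' % (self.id(), exc_type.__name__, exc_value))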
In a few words, this gives True if all tests run so far exited with no errors or failures:
from unittest import TestCase

class WatheverTestCase(TestCase):

    def tear_down(self):
        return not self._outcome.result.errors and not self._outcome.result.failures
Explore _outcome's properties to access more detailed possibilities.
This is simple, makes use of the public API only, and should work on any Python version:
import unittest

class MyTest(unittest.TestCase):

    def defaultTestResult(self):
        self.lastResult = unittest.result.TestResult()
        return self.lastResult

    ...
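Note that defaultTestResult() is only consulted when the case is run without an explicit result object, for example when the test case is called directly rather than through a runner. A hypothetical usage sketch ('test_something' is an assumed test method name):
test = MyTest('test_something')   # 'test_something' would be a real test method
test()                            # run() gets result=None and falls back to defaultTestResult()
print(test.lastResult.failures)
print(test.lastResult.errors)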
Python-version-independent code using global variables:
import unittest

global test_case_id
global test_title
global test_result

test_case_id = ''
test_title = ''
test_result = ''

class Dummy(unittest.TestCase):

    def setUp(self):
        pass

    def tearDown(self):
        global test_case_id
        global test_title
        global test_result
        self.test_case_id = test_case_id
        self.test_title = test_title
        self.test_result = test_result
        print('Test case id is : ', self.test_case_id)
        print('test title is : ', self.test_title)
        print('Test test result is : ', self.test_result)

    def test_a(self):
        global test_case_id
        global test_title
        global test_result
        test_case_id = 'test1'
        test_title = 'To verify test1'
        test_result = self.assertTrue(True)

    def test_b(self):
        global test_case_id
        global test_title
        global test_result
        test_case_id = 'test2'
        test_title = 'To verify test2'
        test_result = self.assertFalse(False)

if __name__ == "__main__":
    unittest.main()

Writing a re-usable (parametrized) unittest.TestCase method [duplicate]

Possible Duplicate:
How to generate dynamic (parametrized) unit tests in python?
I'm writing tests using the unittest package, and I want to avoid repeated code. I am going to carry out a number of tests which all require a very similar method, but with only one value different each time. A simplistic and useless example would be:
class ExampleTestCase(unittest.TestCase):

    def test_1(self):
        self.assertEqual(self.somevalue, 1)

    def test_2(self):
        self.assertEqual(self.somevalue, 2)

    def test_3(self):
        self.assertEqual(self.somevalue, 3)

    def test_4(self):
        self.assertEqual(self.somevalue, 4)
Is there a way to write the above example without repeating all the code each time, but instead writing a generic method, e.g.
def test_n(self, n):
    self.assertEqual(self.somevalue, n)
and telling unittest to try this test with different inputs?
Some of the tools available for doing parametrized tests in Python are:
Nose test generators (only for function tests, not TestCase classes; a short sketch follows after this list)
nose-parameterized by David Wolever (also for TestCase classes)
Unittest template by Boris Feld
Parametrized tests in py.test
parametrized-testcase by Austin Bingham
DDT (Data Driven Tests) by Carles Barrobés, for UnitTests
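For a taste of the first option, a nose test generator is just a module-level generator that yields a callable plus its arguments, one test per yield. A sketch (get_somevalue() stands in for the code under test and is assumed to exist):
def check_somevalue(expected):
    assert get_somevalue() == expected   # get_somevalue() is an assumed helper

def test_somevalue():
    # nose runs one test per yielded (callable, arg) tuple
    for n in [1, 2, 3, 4]:
        yield check_somevalue, n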
If you really want to have multiple unit tests, then you need multiple test methods. The only way to get that is through some sort of code generation. You can do that through a metaclass, or by tweaking the class after its definition, including (if you are using Python 2.6 or newer) through a class decorator.
Here's a solution which looks for the special 'multitest' and 'multitest_values' members and uses those to build the test methods on the fly. Not elegant, but it does roughly what you want:
import unittest
import inspect

class SomeValue(object):
    def __eq__(self, other):
        return other in [1, 3, 4]

class ExampleTestCase(unittest.TestCase):
    somevalue = SomeValue()

    multitest_values = [1, 2, 3, 4]
    def multitest(self, n):
        self.assertEqual(self.somevalue, n)

    multitest_gt_values = "ABCDEF"
    def multitest_gt(self, c):
        self.assertTrue(c > "B", c)

def add_test_cases(cls):
    values = {}
    functions = {}
    # Find all the 'multitest*' functions and
    # matching list of test values.
    for key, value in inspect.getmembers(cls):
        if key.startswith("multitest"):
            if key.endswith("_values"):
                values[key[:-7]] = value
            else:
                functions[key] = value
    # Put them together to make a list of new test functions.
    # One test function for each value
    for key in functions:
        if key in values:
            function = functions[key]
            for i, value in enumerate(values[key]):
                def test_function(self, function=function, value=value):
                    function(self, value)
                name = "test%s_%d" % (key[9:], i+1)
                test_function.__name__ = name
                setattr(cls, name, test_function)

add_test_cases(ExampleTestCase)

if __name__ == "__main__":
    unittest.main()
This is the output from when I run it
% python stackoverflow.py
.F..FF....
======================================================================
FAIL: test_2 (__main__.ExampleTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "stackoverflow.py", line 34, in test_function
function(self, value)
File "stackoverflow.py", line 13, in multitest
self.assertEqual(self.somevalue, n)
AssertionError: <__main__.SomeValue object at 0xd9870> != 2
======================================================================
FAIL: test_gt_1 (__main__.ExampleTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "stackoverflow.py", line 34, in test_function
function(self, value)
File "stackoverflow.py", line 17, in multitest_gt
self.assertTrue(c > "B", c)
AssertionError: A
======================================================================
FAIL: test_gt_2 (__main__.ExampleTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "stackoverflow.py", line 34, in test_function
function(self, value)
File "stackoverflow.py", line 17, in multitest_gt
self.assertTrue(c > "B", c)
AssertionError: B
----------------------------------------------------------------------
Ran 10 tests in 0.001s
FAILED (failures=3)
You can immediately see some of the problems which occur with code generation. Where does "test_gt_1" come from? I could change the name to the longer "test_multitest_gt_1" but then which test is 1? Better here would be to start from _0 instead of _1, and perhaps in your case you know the values can be used as a Python function name.
I do not like this approach. I've worked on code bases which autogenerated test methods (in one case using a metaclass) and found it was much harder to understand than it was useful. When a test failed it was hard to figure out the source of the failure case, and it was hard to stick in debugging code to probe the reason for the failure.
(Debugging failures in the example I wrote here isn't as hard as that specific metaclass approach I had to work with.)
I guess what you want is "parameterized tests".
I don't think the unittest module supports this (unfortunately),
but if I were adding this feature it would look something like this:
# Will run the test for all combinations of parameters
@RunTestWith(x=[0, 1, 2, 3], y=[-1, 0, 1])
def testMultiplication(self, x, y):
    self.assertEqual(multiplication.multiply(x, y), x*y)
With the existing unittest module, a simple decorator like this won't be able to "replicate" the test multiple times, but I think this is doable using a combination of a decorator and a metaclass: the metaclass would observe all 'test*' methods and replicate, under different auto-generated names, those that have the decorator applied.
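For what it's worth, here is a rough sketch of that idea using a class decorator rather than a metaclass. All names here (run_test_with, expand_parametrized, MultiplicationTest) are made up for this example, and the assertion is simplified so the snippet is self-contained:
import itertools
import unittest

def run_test_with(**param_lists):
    """Hypothetical decorator: record the parameter lists on the test method."""
    def mark(func):
        func._param_lists = param_lists
        return func
    return mark

def expand_parametrized(cls):
    """Hypothetical class decorator: replace each marked method with one
    generated test method per combination of parameter values."""
    for name, func in list(vars(cls).items()):
        params = getattr(func, '_param_lists', None)
        if not params:
            continue
        delattr(cls, name)
        keys = sorted(params)
        for i, combo in enumerate(itertools.product(*(params[k] for k in keys))):
            kwargs = dict(zip(keys, combo))
            def generated(self, _func=func, _kwargs=kwargs):
                _func(self, **_kwargs)
            generated.__name__ = '%s_%d' % (name, i)
            setattr(cls, generated.__name__, generated)
    return cls

@expand_parametrized
class MultiplicationTest(unittest.TestCase):

    @run_test_with(x=[0, 1, 2, 3], y=[-1, 0, 1])
    def test_multiplication(self, x, y):
        self.assertEqual(x * y, y * x)   # simplified stand-in for the real check

if __name__ == '__main__':
    unittest.main()
This generates one test method per combination (test_multiplication_0, test_multiplication_1, ...), which carries the same naming drawbacks discussed above.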
A more data-oriented approach might be clearer than the one used in Andrew Dalke's answer:
"""Parametrized unit test.
Builds a single TestCase class which tests if its
`somevalue` method is equal to the numbers 1 through 4.
This is accomplished by
creating a list (`cases`)
of dictionaries which contain test specifications
and then feeding the list to a function which creates a test case class.
When run, the output shows that three of the four cases fail,
as expected:
>>> import sys
>>> from unittest import TextTestRunner
>>> run_tests(TextTestRunner(stream=sys.stdout, verbosity=9))
... # doctest: +ELLIPSIS
Test if self.somevalue equals 4 ... FAIL
Test if self.somevalue equals 1 ... FAIL
Test if self.somevalue equals 3 ... FAIL
Test if self.somevalue equals 2 ... ok
<BLANKLINE>
======================================================================
FAIL: Test if self.somevalue equals 4
----------------------------------------------------------------------
Traceback (most recent call last):
...
AssertionError: 2 != 4
<BLANKLINE>
======================================================================
FAIL: Test if self.somevalue equals 1
----------------------------------------------------------------------
Traceback (most recent call last):
...
AssertionError: 2 != 1
<BLANKLINE>
======================================================================
FAIL: Test if self.somevalue equals 3
----------------------------------------------------------------------
Traceback (most recent call last):
...
AssertionError: 2 != 3
<BLANKLINE>
----------------------------------------------------------------------
Ran 4 tests in ...s
<BLANKLINE>
FAILED (failures=3)
"""
from unittest import TestCase, TestSuite, defaultTestLoader

cases = [{'name': "somevalue_equals_one",
          'doc': "Test if self.somevalue equals 1",
          'value': 1},
         {'name': "somevalue_equals_two",
          'doc': "Test if self.somevalue equals 2",
          'value': 2},
         {'name': "somevalue_equals_three",
          'doc': "Test if self.somevalue equals 3",
          'value': 3},
         {'name': "somevalue_equals_four",
          'doc': "Test if self.somevalue equals 4",
          'value': 4}]

class BaseTestCase(TestCase):
    def setUp(self):
        self.somevalue = 2

def test_n(self, n):
    self.assertEqual(self.somevalue, n)

def make_parametrized_testcase(class_name, base_classes, test_method, cases):
    def make_parametrized_test_method(name, value, doc=None):
        def method(self):
            return test_method(self, value)
        method.__name__ = "test_" + name
        method.__doc__ = doc
        return (method.__name__, method)
    test_methods = (make_parametrized_test_method(**case) for case in cases)
    class_dict = dict(test_methods)
    return type(class_name, base_classes, class_dict)

TestCase = make_parametrized_testcase('TestOneThroughFour',
                                      (BaseTestCase,),
                                      test_n,
                                      cases)

def make_test_suite():
    load = defaultTestLoader.loadTestsFromTestCase
    return TestSuite(load(TestCase))

def run_tests(runner):
    runner.run(make_test_suite())

if __name__ == '__main__':
    from unittest import TextTestRunner
    run_tests(TextTestRunner(verbosity=9))
I'm not sure what voodoo is involved in determining the order in which the tests are run, but the doctest passes consistently for me, at least.
For more complex situations it's possible to replace the values element of the cases dictionaries with a tuple containing a list of arguments and a dict of keyword arguments. Though at that point you're basically coding lisp in python.
Write a single test method that performs all of your tests and captures all of the results, write your own diagnostic messages to stderr, and fail the test if any of its subtests fail:
def test_with_multiple_parameters(self):
    failed = False
    for k in sorted(self.test_parameters.keys()):
        if not self.my_test(self.test_parameters[k]):
            print >> sys.stderr, "Test {0} failed.".format(k)
            failed = True
    self.assertFalse(failed)
Note that of course my_test()'s name can't begin with test.
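On Python 3.4 and later, the same aggregate-and-report idea is built into unittest as the subTest() context manager, which records every failing parameter without stopping the test method. A sketch (the parameters and the assertion are placeholders):
import unittest

class MultiParameterTest(unittest.TestCase):

    test_parameters = {'small': 1, 'large': 10}   # assumed example data

    def test_with_multiple_parameters(self):
        for k in sorted(self.test_parameters):
            with self.subTest(parameter=k):
                self.assertGreater(self.test_parameters[k], 0)   # placeholder check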
Perhaps something like:
def test_many(self):
    for n in range(0, 1000):
        self.assertEqual(self.somevalue, n)
