Python: How to print all AssertionErrors from unittest? [duplicate]

EDIT: switched to a better example, and clarified why this is a real problem.
I'd like to write unit tests in Python that continue executing when an assertion fails, so that I can see multiple failures in a single test. For example:
class Car(object):
    def __init__(self, make, model):
        self.make = make
        self.model = make  # Copy and paste error: should be model.
        self.has_seats = True
        self.wheel_count = 3  # Typo: should be 4.

class CarTest(unittest.TestCase):
    def test_init(self):
        make = "Ford"
        model = "Model T"
        car = Car(make=make, model=model)
        self.assertEqual(car.make, make)
        self.assertEqual(car.model, model)  # Failure!
        self.assertTrue(car.has_seats)
        self.assertEqual(car.wheel_count, 4)  # Failure!
Here, the purpose of the test is to ensure that Car's __init__ sets its fields correctly. I could break it up into four methods (and that's often a great idea), but in this case I think it's more readable to keep it as a single method that tests a single concept ("the object is initialized correctly").
If we assume that it's best here to not break up the method, then I have a new problem: I can't see all of the errors at once. When I fix the model error and re-run the test, then the wheel_count error appears. It would save me time to see both errors when I first run the test.
For comparison, Google's C++ unit testing framework distinguishes between non-fatal EXPECT_* assertions and fatal ASSERT_* assertions:
The assertions come in pairs that test the same thing but have different effects on the current function. ASSERT_* versions generate fatal failures when they fail, and abort the current function. EXPECT_* versions generate nonfatal failures, which don't abort the current function. Usually EXPECT_* are preferred, as they allow more than one failure to be reported in a test. However, you should use ASSERT_* if it doesn't make sense to continue when the assertion in question fails.
Is there a way to get EXPECT_*-like behavior in Python's unittest? If not in unittest, then is there another Python unit test framework that does support this behavior?
Incidentally, I was curious about how many real-life tests might benefit from non-fatal assertions, so I looked at some code examples (edited 2014-08-19 to use searchcode instead of Google Code Search, RIP). Out of 10 randomly selected results from the first page, all contained tests that made multiple independent assertions in the same test method. All would benefit from non-fatal assertions.

Another way to get non-fatal assertions is to capture each assertion exception and store the exceptions in a list. Then, as part of tearDown, assert that the list is empty.
import unittest

class Car(object):
    def __init__(self, make, model):
        self.make = make
        self.model = make  # Copy and paste error: should be model.
        self.has_seats = True
        self.wheel_count = 3  # Typo: should be 4.

class CarTest(unittest.TestCase):
    def setUp(self):
        self.verificationErrors = []

    def tearDown(self):
        self.assertEqual([], self.verificationErrors)

    def test_init(self):
        make = "Ford"
        model = "Model T"
        car = Car(make=make, model=model)
        try: self.assertEqual(car.make, make)
        except AssertionError as e: self.verificationErrors.append(str(e))
        try: self.assertEqual(car.model, model)  # Failure!
        except AssertionError as e: self.verificationErrors.append(str(e))
        try: self.assertTrue(car.has_seats)
        except AssertionError as e: self.verificationErrors.append(str(e))
        try: self.assertEqual(car.wheel_count, 4)  # Failure!
        except AssertionError as e: self.verificationErrors.append(str(e))

if __name__ == "__main__":
    unittest.main()

Since Python 3.4 you can also use subtests:
def test_init(self):
    make = "Ford"
    model = "Model T"
    car = Car(make=make, model=model)
    with self.subTest(msg='Car.make check'):
        self.assertEqual(car.make, make)
    with self.subTest(msg='Car.model check'):
        self.assertEqual(car.model, model)
    with self.subTest(msg='Car.has_seats check'):
        self.assertTrue(car.has_seats)
    with self.subTest(msg='Car.wheel_count check'):
        self.assertEqual(car.wheel_count, 4)
(The msg parameter makes it easier to determine which check failed.)
Output:
======================================================================
FAIL: test_init (__main__.CarTest) [Car.model check]
----------------------------------------------------------------------
Traceback (most recent call last):
File "test.py", line 23, in test_init
self.assertEqual(car.model, model)
AssertionError: 'Ford' != 'Model T'
- Ford
+ Model T
======================================================================
FAIL: test_init (__main__.CarTest) [Car.wheel_count check]
----------------------------------------------------------------------
Traceback (most recent call last):
File "test.py", line 27, in test_init
self.assertEqual(car.wheel_count, 4)
AssertionError: 3 != 4
----------------------------------------------------------------------
Ran 1 test in 0.001s
FAILED (failures=2)
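subTest also accepts arbitrary keyword parameters, which appear in the failure header, so it works well for data-driven checks: each failing iteration is reported separately. A minimal sketch (the sample data here is made up):

def test_wheel_counts(self):
    # Hypothetical data-driven check: each failing count gets its own report.
    for count in (3, 4):
        with self.subTest(wheel_count=count):
            self.assertEqual(count, 4)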

One option is to assert all the values at once, as a tuple.
For example:
class CarTest(unittest.TestCase):
    def test_init(self):
        make = "Ford"
        model = "Model T"
        car = Car(make=make, model=model)
        self.assertEqual(
            (car.make, car.model, car.has_seats, car.wheel_count),
            (make, model, True, 4))
The output from this test would be:
======================================================================
FAIL: test_init (test.CarTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\temp\py_mult_assert\test.py", line 17, in test_init
(make, model, True, 4))
AssertionError: Tuples differ: ('Ford', 'Ford', True, 3) != ('Ford', 'Model T', True, 4)
First differing element 1:
Ford
Model T
- ('Ford', 'Ford', True, 3)
? ^ - ^
+ ('Ford', 'Model T', True, 4)
? ^ ++++ ^
This shows that both the model and the wheel count are incorrect.

What you'll probably want to do is derive from unittest.TestCase, since that's the class that throws when an assertion fails. You will have to re-architect your TestCase not to throw (perhaps keeping a list of failures instead). Re-architecting can cause other issues that you would have to resolve; for example, you may end up needing to derive from TestSuite to support the changes made to your TestCase.
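A minimal sketch of that idea, assuming failures are collected per test and reported from tearDown (CollectingTestCase and expectEqual are made-up names):

import unittest

class CollectingTestCase(unittest.TestCase):
    def setUp(self):
        self.failures = []

    def expectEqual(self, first, second, msg=None):
        # Non-fatal variant of assertEqual: record the failure and keep going.
        try:
            self.assertEqual(first, second, msg)
        except self.failureException as e:
            self.failures.append(str(e))

    def tearDown(self):
        # Raise once, with every collected failure, so the test still fails.
        if self.failures:
            self.fail("\n".join(self.failures))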

It is considered an anti-pattern to have multiple asserts in a single unit test. A single unit test is expected to test only one thing. Perhaps you are testing too much. Consider splitting this test up into multiple tests. This way you can name each test properly.
Sometimes, however, it is okay to check multiple things at the same time, for instance when you are asserting properties of the same object: then you are in fact asserting whether that object is correct. A way to do this is to write a custom helper method that knows how to assert on that object. You can write that method so that it shows all failing properties, or shows the complete state of the expected object and the complete state of the actual object when an assert fails.
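For instance, a minimal sketch of such a helper, assuming the Car class from the question (assertCarEqual is a made-up name):

class CarTest(unittest.TestCase):
    def assertCarEqual(self, car, make, model, has_seats, wheel_count):
        # Compare the complete observable state in one assert, so every
        # mismatched property appears in a single failure message.
        actual = (car.make, car.model, car.has_seats, car.wheel_count)
        self.assertEqual(actual, (make, model, has_seats, wheel_count))

    def test_init(self):
        car = Car(make="Ford", model="Model T")
        self.assertCarEqual(car, "Ford", "Model T", True, 4)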

There is a soft assertion package on PyPI called softest that will handle your requirements. It works by collecting the failures, combining exception and stack trace data, and reporting it all as part of the usual unittest output.
For instance, this code:
import softest

class ExampleTest(softest.TestCase):
    def test_example(self):
        # be sure to pass the assert method object, not a call to it
        self.soft_assert(self.assertEqual, 'Worf', 'wharf', 'Klingon is not ship receptacle')
        # self.soft_assert(self.assertEqual('Worf', 'wharf', 'Klingon is not ship receptacle'))  # will not work as desired
        self.soft_assert(self.assertTrue, True)
        self.soft_assert(self.assertTrue, False)

        self.assert_all()

if __name__ == '__main__':
    softest.main()
...produces this console output:
======================================================================
FAIL: "test_example" (ExampleTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\...\softest_test.py", line 14, in test_example
self.assert_all()
File "C:\...\softest\case.py", line 138, in assert_all
self.fail(''.join(failure_output))
AssertionError: ++++ soft assert failure details follow below ++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The following 2 failures were found in "test_example" (ExampleTest):
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Failure 1 ("test_example" method)
+--------------------------------------------------------------------+
Traceback (most recent call last):
File "C:\...\softest_test.py", line 10, in test_example
self.soft_assert(self.assertEqual, 'Worf', 'wharf', 'Klingon is not ship receptacle')
File "C:\...\softest\case.py", line 84, in soft_assert
assert_method(*arguments, **keywords)
File "C:\...\Python\Python36-32\lib\unittest\case.py", line 829, in assertEqual
assertion_func(first, second, msg=msg)
File "C:\...\Python\Python36-32\lib\unittest\case.py", line 1203, in assertMultiLineEqual
self.fail(self._formatMessage(msg, standardMsg))
File "C:\...\Python\Python36-32\lib\unittest\case.py", line 670, in fail
raise self.failureException(msg)
AssertionError: 'Worf' != 'wharf'
- Worf
+ wharf
: Klingon is not ship receptacle
+--------------------------------------------------------------------+
Failure 2 ("test_example" method)
+--------------------------------------------------------------------+
Traceback (most recent call last):
File "C:\...\softest_test.py", line 12, in test_example
self.soft_assert(self.assertTrue, False)
File "C:\...\softest\case.py", line 84, in soft_assert
assert_method(*arguments, **keywords)
File "C:\...\Python\Python36-32\lib\unittest\case.py", line 682, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (failures=1)
NOTE: I created and maintain softest.

Do each assert in a separate method.
class MathTest(unittest.TestCase):
    def test_addition1(self):
        self.assertEqual(1 + 0, 1)

    def test_addition2(self):
        self.assertEqual(1 + 1, 3)

    def test_addition3(self):
        self.assertEqual(1 + (-1), 0)

    def test_addition4(self):
        self.assertEqual(-1 + (-1), -1)

expect is very useful in gtest. Here is the Python way of doing it (also published as a gist):
import sys
import unittest

class TestCase(unittest.TestCase):
    def run(self, result=None):
        if result is None:
            self.result = self.defaultTestResult()
        else:
            self.result = result
        return unittest.TestCase.run(self, result)

    def expect(self, val, msg=None):
        '''
        Like TestCase.assertTrue, but doesn't halt the test.
        '''
        try:
            self.assertTrue(val, msg)
        except AssertionError:
            self.result.addFailure(self, sys.exc_info())

    def expectEqual(self, first, second, msg=None):
        try:
            self.assertEqual(first, second, msg)
        except AssertionError:
            self.result.addFailure(self, sys.exc_info())

    expect_equal = expectEqual

    assert_equal = unittest.TestCase.assertEqual
    assert_raises = unittest.TestCase.assertRaises

test_main = unittest.main
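Usage then looks like plain unittest, but with non-fatal checks; a short hypothetical example, assuming the TestCase subclass above:

class MathTest(TestCase):
    def test_addition(self):
        self.expect(1 + 1 == 3, "bad sum")  # non-fatal: recorded, test keeps running
        self.expectEqual(1 + (-1), 0)       # still executes after the failure above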

I liked the approach by @Anthony-Batchelor of capturing the AssertionError exception. But here is a slight variation on that approach using decorators, along with a way to report the test cases as pass/fail.
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import unittest

class UTReporter(object):
    '''
    The UTReporter class keeps track of test cases
    that have been executed.
    '''
    def __init__(self):
        self.testcases = []
        print("init called")

    def add_testcase(self, testcase):
        self.testcases.append(testcase)

    def display_report(self):
        for tc in self.testcases:
            msg = "=============================" + "\n" + \
                  "Name: " + tc['name'] + "\n" + \
                  "Description: " + str(tc['description']) + "\n" + \
                  "Status: " + tc['status'] + "\n"
            print(msg)

reporter = UTReporter()

def assert_capture(*args, **kwargs):
    '''
    The decorator defines the override behavior.
    Unit test functions decorated with this decorator will ignore
    the unittest AssertionError. Instead they will log the test case
    to the UTReporter.
    '''
    def assert_decorator(func):
        def inner(*args, **kwargs):
            tc = {}
            tc['name'] = func.__name__
            tc['description'] = func.__doc__
            try:
                func(*args, **kwargs)
                tc['status'] = 'pass'
            except AssertionError:
                tc['status'] = 'fail'
            reporter.add_testcase(tc)
        return inner
    return assert_decorator

class DecorateUt(unittest.TestCase):
    @assert_capture()
    def test_basic(self):
        x = 5
        self.assertEqual(x, 4)

    @assert_capture()
    def test_basic_2(self):
        x = 4
        self.assertEqual(x, 4)

def main():
    suite = unittest.TestLoader().loadTestsFromTestCase(DecorateUt)
    unittest.TextTestRunner(verbosity=2).run(suite)
    reporter.display_report()

if __name__ == '__main__':
    main()
Output from console:
(awsenv)$ ./decorators.py
init called
test_basic (__main__.DecorateUt) ... ok
test_basic_2 (__main__.DecorateUt) ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK
=============================
Name: test_basic
Description: None
Status: fail
=============================
Name: test_basic_2
Description: None
Status: pass

I had a problem with the answer from @Anthony Batchelor because it would have forced me to use try...except inside my unit tests. Instead, I encapsulated the try...except logic in an override of the TestCase.assertEqual method. Here is the code:
import unittest
import traceback

class AssertionErrorData(object):
    def __init__(self, stacktrace, message):
        super(AssertionErrorData, self).__init__()
        self.stacktrace = stacktrace
        self.message = message

class MultipleAssertionFailures(unittest.TestCase):

    def __init__(self, *args, **kwargs):
        self.verificationErrors = []
        super(MultipleAssertionFailures, self).__init__(*args, **kwargs)

    def tearDown(self):
        super(MultipleAssertionFailures, self).tearDown()

        if self.verificationErrors:
            index = 0
            errors = []

            for error in self.verificationErrors:
                index += 1
                errors.append("%s\nAssertionError %s: %s" % (
                    error.stacktrace, index, error.message))

            self.fail('\n\n' + "\n".join(errors))
            self.verificationErrors.clear()

    def assertEqual(self, goal, results, msg=None):
        try:
            super(MultipleAssertionFailures, self).assertEqual(goal, results, msg)
        except unittest.TestCase.failureException as error:
            goodtraces = self._goodStackTraces()
            self.verificationErrors.append(
                AssertionErrorData("\n".join(goodtraces[:-2]), error))

    def _goodStackTraces(self):
        """
        Get only the relevant part of the stacktrace.
        """
        stop = False
        found = False
        goodtraces = []

        # stacktrace = traceback.format_exc()
        # stacktrace = traceback.format_stack()
        stacktrace = traceback.extract_stack()

        # https://stackoverflow.com/questions/54499367/how-to-correctly-override-testcase
        for stack in stacktrace:
            filename = stack.filename

            if found and not stop and \
                    not filename.find('lib') < filename.find('unittest'):
                stop = True

            if not found and filename.find('lib') < filename.find('unittest'):
                found = True

            if stop and found:
                stackline = '  File "%s", line %s, in %s\n    %s' % (
                    stack.filename, stack.lineno, stack.name, stack.line)
                goodtraces.append(stackline)

        return goodtraces

# class DummyTestCase(unittest.TestCase):
class DummyTestCase(MultipleAssertionFailures):

    def setUp(self):
        self.maxDiff = None
        super(DummyTestCase, self).setUp()

    def tearDown(self):
        super(DummyTestCase, self).tearDown()

    def test_function_name(self):
        self.assertEqual("var", "bar")
        self.assertEqual("1937", "511")

if __name__ == '__main__':
    unittest.main()
Result output:
F
======================================================================
FAIL: test_function_name (__main__.DummyTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\User\Downloads\test.py", line 77, in tearDown
super(DummyTestCase, self).tearDown()
File "D:\User\Downloads\test.py", line 29, in tearDown
self.fail( '\n\n' + "\n\n".join( errors ) )
AssertionError:
File "D:\User\Downloads\test.py", line 80, in test_function_name
self.assertEqual( "var", "bar" )
AssertionError 1: 'var' != 'bar'
- var
? ^
+ bar
? ^
:
File "D:\User\Downloads\test.py", line 81, in test_function_name
self.assertEqual( "1937", "511" )
AssertionError 2: '1937' != '511'
- 1937
+ 511
:
More alternative solutions for capturing the correct stacktrace can be found at How to correctly override TestCase.assertEqual(), producing the right stacktrace?

I don't think there is a way to do this with PyUnit and wouldn't want to see PyUnit extended in this way.
I prefer to stick to one assertion per test function (or more specifically asserting one concept per test) and would rewrite test_addition() as four separate test functions. This would give more useful information on failure, viz:
.FF.
======================================================================
FAIL: test_addition_with_two_negatives (__main__.MathTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_addition.py", line 10, in test_addition_with_two_negatives
self.assertEqual(-1 + (-1), -1)
AssertionError: -2 != -1
======================================================================
FAIL: test_addition_with_two_positives (__main__.MathTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_addition.py", line 6, in test_addition_with_two_positives
self.assertEqual(1 + 1, 3) # Failure!
AssertionError: 2 != 3
----------------------------------------------------------------------
Ran 4 tests in 0.000s
FAILED (failures=2)
If you decide that this approach isn't for you, you may find this answer helpful.
Update
It looks like you are testing two concepts with your updated question, and I would split these into two unit tests. The first is that the parameters are being stored on the creation of a new object. This would have two assertions, one for make and one for model. If the first fails, then that clearly needs to be fixed; whether the second passes or fails is irrelevant at this juncture.
The second concept is more questionable... You're testing whether some default values are initialised. Why? It would be more useful to test these values at the point that they are actually used (and if they are not used, then why are they there?).
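Here is a sketch of that split (the setUp plumbing is assumed, to match the output below):

class CarTest(unittest.TestCase):
    def setUp(self):
        self.make = "Ford"
        self.model = "Model T"
        self.car = Car(make=self.make, model=self.model)

    def test_creation_parameters(self):
        self.assertEqual(self.car.make, self.make)
        self.assertEqual(self.car.model, self.model)  # Failure!

    def test_creation_defaults(self):
        self.assertTrue(self.car.has_seats)
        self.assertEqual(self.car.wheel_count, 4)  # Failure!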
Both of these tests fail, and both should. When I am unit-testing, I am far more interested in failure than I am in success as that is where I need to concentrate.
FF
======================================================================
FAIL: test_creation_defaults (__main__.CarTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_car.py", line 25, in test_creation_defaults
self.assertEqual(self.car.wheel_count, 4) # Failure!
AssertionError: 3 != 4
======================================================================
FAIL: test_creation_parameters (__main__.CarTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_car.py", line 20, in test_creation_parameters
self.assertEqual(self.car.model, self.model) # Failure!
AssertionError: 'Ford' != 'Model T'
----------------------------------------------------------------------
Ran 2 tests in 0.000s
FAILED (failures=2)

I realize this question was asked literally years ago, but there are now (at least) two Python packages that allow you to do this.
One is softest: https://pypi.org/project/softest/
The other is Python-Delayed-Assert: https://github.com/pr4bh4sh/python-delayed-assert
I haven't used either, but they look pretty similar to me.

I think I found a solution that works. Using Selenium, I stored the text values of a set of elements in a list and looped through it until I found an item containing the text I needed. Using an if/else, I assigned a marker value to a dummy variable when the item was found (breaking out of the loop), then asserted on that value outside the loop.
elements = self.driver.find_elements(*element)
print(elements)

global y
for element in elements:
    print(element.text)
    t = element.text
    time_strip = combined_time[:-2]  # test-case specific code
    y = time_strip in t              # test-case specific code
    print(y)
    if y:
        global z
        z = "banana"
        break
    else:
        z = "apple"

if z == "banana":
    print(z)
    assert 2 == 2
else:
    print(z)
    assert 2 == 1

Related

Python unittest - Display all AssertionError within same test [duplicate]

EDIT: switched to a better example, and clarified why this is a real problem.
I'd like to write unit tests in Python that continue executing when an assertion fails, so that I can see multiple failures in a single test. For example:
class Car(object):
def __init__(self, make, model):
self.make = make
self.model = make # Copy and paste error: should be model.
self.has_seats = True
self.wheel_count = 3 # Typo: should be 4.
class CarTest(unittest.TestCase):
def test_init(self):
make = "Ford"
model = "Model T"
car = Car(make=make, model=model)
self.assertEqual(car.make, make)
self.assertEqual(car.model, model) # Failure!
self.assertTrue(car.has_seats)
self.assertEqual(car.wheel_count, 4) # Failure!
Here, the purpose of the test is to ensure that Car's __init__ sets its fields correctly. I could break it up into four methods (and that's often a great idea), but in this case I think it's more readable to keep it as a single method that tests a single concept ("the object is initialized correctly").
If we assume that it's best here to not break up the method, then I have a new problem: I can't see all of the errors at once. When I fix the model error and re-run the test, then the wheel_count error appears. It would save me time to see both errors when I first run the test.
For comparison, Google's C++ unit testing framework distinguishes between between non-fatal EXPECT_* assertions and fatal ASSERT_* assertions:
The assertions come in pairs that test the same thing but have different effects on the current function. ASSERT_* versions generate fatal failures when they fail, and abort the current function. EXPECT_* versions generate nonfatal failures, which don't abort the current function. Usually EXPECT_* are preferred, as they allow more than one failures to be reported in a test. However, you should use ASSERT_* if it doesn't make sense to continue when the assertion in question fails.
Is there a way to get EXPECT_*-like behavior in Python's unittest? If not in unittest, then is there another Python unit test framework that does support this behavior?
Incidentally, I was curious about how many real-life tests might benefit from non-fatal assertions, so I looked at some code examples (edited 2014-08-19 to use searchcode instead of Google Code Search, RIP). Out of 10 randomly selected results from the first page, all contained tests that made multiple independent assertions in the same test method. All would benefit from non-fatal assertions.
Another way to have non-fatal assertions is to capture the assertion exception and store the exceptions in a list. Then assert that that list is empty as part of the tearDown.
import unittest
class Car(object):
def __init__(self, make, model):
self.make = make
self.model = make # Copy and paste error: should be model.
self.has_seats = True
self.wheel_count = 3 # Typo: should be 4.
class CarTest(unittest.TestCase):
def setUp(self):
self.verificationErrors = []
def tearDown(self):
self.assertEqual([], self.verificationErrors)
def test_init(self):
make = "Ford"
model = "Model T"
car = Car(make=make, model=model)
try: self.assertEqual(car.make, make)
except AssertionError, e: self.verificationErrors.append(str(e))
try: self.assertEqual(car.model, model) # Failure!
except AssertionError, e: self.verificationErrors.append(str(e))
try: self.assertTrue(car.has_seats)
except AssertionError, e: self.verificationErrors.append(str(e))
try: self.assertEqual(car.wheel_count, 4) # Failure!
except AssertionError, e: self.verificationErrors.append(str(e))
if __name__ == "__main__":
unittest.main()
Since Python 3.4 you can also use subtests:
def test_init(self):
make = "Ford"
model = "Model T"
car = Car(make=make, model=model)
with self.subTest(msg='Car.make check'):
self.assertEqual(car.make, make)
with self.subTest(msg='Car.model check'):
self.assertEqual(car.model, model)
with self.subTest(msg='Car.has_seats check'):
self.assertTrue(car.has_seats)
with self.subTest(msg='Car.wheel_count check'):
self.assertEqual(car.wheel_count, 4)
(msg parameter is used to more easily determine which test failed.)
Output:
======================================================================
FAIL: test_init (__main__.CarTest) [Car.model check]
----------------------------------------------------------------------
Traceback (most recent call last):
File "test.py", line 23, in test_init
self.assertEqual(car.model, model)
AssertionError: 'Ford' != 'Model T'
- Ford
+ Model T
======================================================================
FAIL: test_init (__main__.CarTest) [Car.wheel_count check]
----------------------------------------------------------------------
Traceback (most recent call last):
File "test.py", line 27, in test_init
self.assertEqual(car.wheel_count, 4)
AssertionError: 3 != 4
----------------------------------------------------------------------
Ran 1 test in 0.001s
FAILED (failures=2)
One option is assert on all the values at once as a tuple.
For example:
class CarTest(unittest.TestCase):
def test_init(self):
make = "Ford"
model = "Model T"
car = Car(make=make, model=model)
self.assertEqual(
(car.make, car.model, car.has_seats, car.wheel_count),
(make, model, True, 4))
The output from this tests would be:
======================================================================
FAIL: test_init (test.CarTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\temp\py_mult_assert\test.py", line 17, in test_init
(make, model, True, 4))
AssertionError: Tuples differ: ('Ford', 'Ford', True, 3) != ('Ford', 'Model T', True, 4)
First differing element 1:
Ford
Model T
- ('Ford', 'Ford', True, 3)
? ^ - ^
+ ('Ford', 'Model T', True, 4)
? ^ ++++ ^
This shows that both the model and the wheel count are incorrect.
What you'll probably want to do is derive unittest.TestCase since that's the class that throws when an assertion fails. You will have to re-architect your TestCase to not throw (maybe keep a list of failures instead). Re-architecting stuff can cause other issues that you would have to resolve. For example you may end up needing to derive TestSuite to make changes in support of the changes made to your TestCase.
It is considered an anti-pattern to have multiple asserts in a single unit test. A single unit test is expected to test only one thing. Perhaps you are testing too much. Consider splitting this test up into multiple tests. This way you can name each test properly.
Sometimes however, it is okay to check multiple things at the same time. For instance when you are asserting properties of the same object. In that case you are in fact asserting whether that object is correct. A way to do this is to write a custom helper method that knows how to assert on that object. You can write that method in such a way that it shows all failing properties or for instance shows the complete state of the expected object and the complete state of the actual object when an assert fails.
There is a soft assertion package in PyPI called softest that will handle your requirements. It works by collecting the failures, combining exception and stack trace data, and reporting it all as part of the usual unittest output.
For instance, this code:
import softest
class ExampleTest(softest.TestCase):
def test_example(self):
# be sure to pass the assert method object, not a call to it
self.soft_assert(self.assertEqual, 'Worf', 'wharf', 'Klingon is not ship receptacle')
# self.soft_assert(self.assertEqual('Worf', 'wharf', 'Klingon is not ship receptacle')) # will not work as desired
self.soft_assert(self.assertTrue, True)
self.soft_assert(self.assertTrue, False)
self.assert_all()
if __name__ == '__main__':
softest.main()
...produces this console output:
======================================================================
FAIL: "test_example" (ExampleTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\...\softest_test.py", line 14, in test_example
self.assert_all()
File "C:\...\softest\case.py", line 138, in assert_all
self.fail(''.join(failure_output))
AssertionError: ++++ soft assert failure details follow below ++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The following 2 failures were found in "test_example" (ExampleTest):
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Failure 1 ("test_example" method)
+--------------------------------------------------------------------+
Traceback (most recent call last):
File "C:\...\softest_test.py", line 10, in test_example
self.soft_assert(self.assertEqual, 'Worf', 'wharf', 'Klingon is not ship receptacle')
File "C:\...\softest\case.py", line 84, in soft_assert
assert_method(*arguments, **keywords)
File "C:\...\Python\Python36-32\lib\unittest\case.py", line 829, in assertEqual
assertion_func(first, second, msg=msg)
File "C:\...\Python\Python36-32\lib\unittest\case.py", line 1203, in assertMultiLineEqual
self.fail(self._formatMessage(msg, standardMsg))
File "C:\...\Python\Python36-32\lib\unittest\case.py", line 670, in fail
raise self.failureException(msg)
AssertionError: 'Worf' != 'wharf'
- Worf
+ wharf
: Klingon is not ship receptacle
+--------------------------------------------------------------------+
Failure 2 ("test_example" method)
+--------------------------------------------------------------------+
Traceback (most recent call last):
File "C:\...\softest_test.py", line 12, in test_example
self.soft_assert(self.assertTrue, False)
File "C:\...\softest\case.py", line 84, in soft_assert
assert_method(*arguments, **keywords)
File "C:\...\Python\Python36-32\lib\unittest\case.py", line 682, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (failures=1)
NOTE: I created and maintain softest.
Do each assert in a separate method.
class MathTest(unittest.TestCase):
def test_addition1(self):
self.assertEqual(1 + 0, 1)
def test_addition2(self):
self.assertEqual(1 + 1, 3)
def test_addition3(self):
self.assertEqual(1 + (-1), 0)
def test_addition4(self):
self.assertEqaul(-1 + (-1), -1)
expect is very useful in gtest.
This is python way in gist, and code:
import sys
import unittest
class TestCase(unittest.TestCase):
def run(self, result=None):
if result is None:
self.result = self.defaultTestResult()
else:
self.result = result
return unittest.TestCase.run(self, result)
def expect(self, val, msg=None):
'''
Like TestCase.assert_, but doesn't halt the test.
'''
try:
self.assert_(val, msg)
except:
self.result.addFailure(self, sys.exc_info())
def expectEqual(self, first, second, msg=None):
try:
self.failUnlessEqual(first, second, msg)
except:
self.result.addFailure(self, sys.exc_info())
expect_equal = expectEqual
assert_equal = unittest.TestCase.assertEqual
assert_raises = unittest.TestCase.assertRaises
test_main = unittest.main
I liked the approach by #Anthony-Batchelor, to capture the AssertionError exception. But a slight variation to this approach using decorators and also a way to report the tests cases with pass/fail.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import unittest
class UTReporter(object):
'''
The UT Report class keeps track of tests cases
that have been executed.
'''
def __init__(self):
self.testcases = []
print "init called"
def add_testcase(self, testcase):
self.testcases.append(testcase)
def display_report(self):
for tc in self.testcases:
msg = "=============================" + "\n" + \
"Name: " + tc['name'] + "\n" + \
"Description: " + str(tc['description']) + "\n" + \
"Status: " + tc['status'] + "\n"
print msg
reporter = UTReporter()
def assert_capture(*args, **kwargs):
'''
The Decorator defines the override behavior.
unit test functions decorated with this decorator, will ignore
the Unittest AssertionError. Instead they will log the test case
to the UTReporter.
'''
def assert_decorator(func):
def inner(*args, **kwargs):
tc = {}
tc['name'] = func.__name__
tc['description'] = func.__doc__
try:
func(*args, **kwargs)
tc['status'] = 'pass'
except AssertionError:
tc['status'] = 'fail'
reporter.add_testcase(tc)
return inner
return assert_decorator
class DecorateUt(unittest.TestCase):
#assert_capture()
def test_basic(self):
x = 5
self.assertEqual(x, 4)
#assert_capture()
def test_basic_2(self):
x = 4
self.assertEqual(x, 4)
def main():
#unittest.main()
suite = unittest.TestLoader().loadTestsFromTestCase(DecorateUt)
unittest.TextTestRunner(verbosity=2).run(suite)
reporter.display_report()
if __name__ == '__main__':
main()
Output from console:
(awsenv)$ ./decorators.py
init called
test_basic (__main__.DecorateUt) ... ok
test_basic_2 (__main__.DecorateUt) ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK
=============================
Name: test_basic
Description: None
Status: fail
=============================
Name: test_basic_2
Description: None
Status: pass
I had a problem with the answer from #Anthony Batchelor because it would have forced me to use try...catch inside my unit tests. Instead, I encapsulated the try...catch logic in an override of the TestCase.assertEqual method. Here is the code:
import unittest
import traceback
class AssertionErrorData(object):
def __init__(self, stacktrace, message):
super(AssertionErrorData, self).__init__()
self.stacktrace = stacktrace
self.message = message
class MultipleAssertionFailures(unittest.TestCase):
def __init__(self, *args, **kwargs):
self.verificationErrors = []
super(MultipleAssertionFailures, self).__init__( *args, **kwargs )
def tearDown(self):
super(MultipleAssertionFailures, self).tearDown()
if self.verificationErrors:
index = 0
errors = []
for error in self.verificationErrors:
index += 1
errors.append( "%s\nAssertionError %s: %s" % (
error.stacktrace, index, error.message ) )
self.fail( '\n\n' + "\n".join( errors ) )
self.verificationErrors.clear()
def assertEqual(self, goal, results, msg=None):
try:
super( MultipleAssertionFailures, self ).assertEqual( goal, results, msg )
except unittest.TestCase.failureException as error:
goodtraces = self._goodStackTraces()
self.verificationErrors.append(
AssertionErrorData( "\n".join( goodtraces[:-2] ), error ) )
def _goodStackTraces(self):
"""
Get only the relevant part of stacktrace.
"""
stop = False
found = False
goodtraces = []
# stacktrace = traceback.format_exc()
# stacktrace = traceback.format_stack()
stacktrace = traceback.extract_stack()
# https://stackoverflow.com/questions/54499367/how-to-correctly-override-testcase
for stack in stacktrace:
filename = stack.filename
if found and not stop and \
not filename.find( 'lib' ) < filename.find( 'unittest' ):
stop = True
if not found and filename.find( 'lib' ) < filename.find( 'unittest' ):
found = True
if stop and found:
stackline = ' File "%s", line %s, in %s\n %s' % (
stack.filename, stack.lineno, stack.name, stack.line )
goodtraces.append( stackline )
return goodtraces
# class DummyTestCase(unittest.TestCase):
class DummyTestCase(MultipleAssertionFailures):
def setUp(self):
self.maxDiff = None
super(DummyTestCase, self).setUp()
def tearDown(self):
super(DummyTestCase, self).tearDown()
def test_function_name(self):
self.assertEqual( "var", "bar" )
self.assertEqual( "1937", "511" )
if __name__ == '__main__':
unittest.main()
Result output:
F
======================================================================
FAIL: test_function_name (__main__.DummyTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\User\Downloads\test.py", line 77, in tearDown
super(DummyTestCase, self).tearDown()
File "D:\User\Downloads\test.py", line 29, in tearDown
self.fail( '\n\n' + "\n\n".join( errors ) )
AssertionError:
File "D:\User\Downloads\test.py", line 80, in test_function_name
self.assertEqual( "var", "bar" )
AssertionError 1: 'var' != 'bar'
- var
? ^
+ bar
? ^
:
File "D:\User\Downloads\test.py", line 81, in test_function_name
self.assertEqual( "1937", "511" )
AssertionError 2: '1937' != '511'
- 1937
+ 511
:
More alternative solutions for the correct stacktrace capture could be posted on How to correctly override TestCase.assertEqual(), producing the right stacktrace?
I don't think there is a way to do this with PyUnit and wouldn't want to see PyUnit extended in this way.
I prefer to stick to one assertion per test function (or more specifically asserting one concept per test) and would rewrite test_addition() as four separate test functions. This would give more useful information on failure, viz:
.FF.
======================================================================
FAIL: test_addition_with_two_negatives (__main__.MathTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_addition.py", line 10, in test_addition_with_two_negatives
self.assertEqual(-1 + (-1), -1)
AssertionError: -2 != -1
======================================================================
FAIL: test_addition_with_two_positives (__main__.MathTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_addition.py", line 6, in test_addition_with_two_positives
self.assertEqual(1 + 1, 3) # Failure!
AssertionError: 2 != 3
----------------------------------------------------------------------
Ran 4 tests in 0.000s
FAILED (failures=2)
If you decide that this approach isn't for you, you may find this answer helpful.
Update
It looks like you are testing two concepts with your updated question and I would split these into two unit tests. The first being that the parameters are being stored on the creation of a new object. This would have two assertions, one for make and one for model. If the first fails, the that clearly needs to be fixed, whether the second passes or fails is irrelevant at this juncture.
The second concept is more questionable... You're testing whether some default values are initialised. Why? It would be more useful to test these values at the point that they are actually used (and if they are not used, then why are they there?).
Both of these tests fail, and both should. When I am unit-testing, I am far more interested in failure than I am in success as that is where I need to concentrate.
FF
======================================================================
FAIL: test_creation_defaults (__main__.CarTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_car.py", line 25, in test_creation_defaults
self.assertEqual(self.car.wheel_count, 4) # Failure!
AssertionError: 3 != 4
======================================================================
FAIL: test_creation_parameters (__main__.CarTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_car.py", line 20, in test_creation_parameters
self.assertEqual(self.car.model, self.model) # Failure!
AssertionError: 'Ford' != 'Model T'
----------------------------------------------------------------------
Ran 2 tests in 0.000s
FAILED (failures=2)
I realize this question was asked literally years ago, but there are now (at least) two Python packages that allow you to do this.
One is softest: https://pypi.org/project/softest/
The other is Python-Delayed-Assert: https://github.com/pr4bh4sh/python-delayed-assert
I haven't used either, but they look pretty similar to me.
I think I found a solution that works. Using selenium, I was able to store a list of text values into a list. Loop through the list until I found an item that contains that text I needed. Then using the if else statement, I used a 'break' statement when the item was found and I assigned a specific value to a dummy variable once the value was found. Then I asserted that value outside of the for-loop.
elements = self.driver.find_elements(*element)
print(elements)
global y
for element in elements:
print(element.text)
t = element.text
time_strip = combined_time[:-2] #test_case specific code
y = t.__contains__(time_strip) #test_case specific code
print(y)
if y == True:
global z
z = "banana"
break
else:
z = "apple"
if z == "banana":
print(z)
assert 2 == 2
else:
print(z)
assert 2 == 1

How to run multiple asserts in django unittest? [duplicate]

EDIT: switched to a better example, and clarified why this is a real problem.
I'd like to write unit tests in Python that continue executing when an assertion fails, so that I can see multiple failures in a single test. For example:
class Car(object):
def __init__(self, make, model):
self.make = make
self.model = make # Copy and paste error: should be model.
self.has_seats = True
self.wheel_count = 3 # Typo: should be 4.
class CarTest(unittest.TestCase):
def test_init(self):
make = "Ford"
model = "Model T"
car = Car(make=make, model=model)
self.assertEqual(car.make, make)
self.assertEqual(car.model, model) # Failure!
self.assertTrue(car.has_seats)
self.assertEqual(car.wheel_count, 4) # Failure!
Here, the purpose of the test is to ensure that Car's __init__ sets its fields correctly. I could break it up into four methods (and that's often a great idea), but in this case I think it's more readable to keep it as a single method that tests a single concept ("the object is initialized correctly").
If we assume that it's best here to not break up the method, then I have a new problem: I can't see all of the errors at once. When I fix the model error and re-run the test, then the wheel_count error appears. It would save me time to see both errors when I first run the test.
For comparison, Google's C++ unit testing framework distinguishes between between non-fatal EXPECT_* assertions and fatal ASSERT_* assertions:
The assertions come in pairs that test the same thing but have different effects on the current function. ASSERT_* versions generate fatal failures when they fail, and abort the current function. EXPECT_* versions generate nonfatal failures, which don't abort the current function. Usually EXPECT_* are preferred, as they allow more than one failures to be reported in a test. However, you should use ASSERT_* if it doesn't make sense to continue when the assertion in question fails.
Is there a way to get EXPECT_*-like behavior in Python's unittest? If not in unittest, then is there another Python unit test framework that does support this behavior?
Incidentally, I was curious about how many real-life tests might benefit from non-fatal assertions, so I looked at some code examples (edited 2014-08-19 to use searchcode instead of Google Code Search, RIP). Out of 10 randomly selected results from the first page, all contained tests that made multiple independent assertions in the same test method. All would benefit from non-fatal assertions.
Another way to have non-fatal assertions is to capture the assertion exception and store the exceptions in a list. Then assert that that list is empty as part of the tearDown.
import unittest
class Car(object):
def __init__(self, make, model):
self.make = make
self.model = make # Copy and paste error: should be model.
self.has_seats = True
self.wheel_count = 3 # Typo: should be 4.
class CarTest(unittest.TestCase):
def setUp(self):
self.verificationErrors = []
def tearDown(self):
self.assertEqual([], self.verificationErrors)
def test_init(self):
make = "Ford"
model = "Model T"
car = Car(make=make, model=model)
try: self.assertEqual(car.make, make)
except AssertionError, e: self.verificationErrors.append(str(e))
try: self.assertEqual(car.model, model) # Failure!
except AssertionError, e: self.verificationErrors.append(str(e))
try: self.assertTrue(car.has_seats)
except AssertionError, e: self.verificationErrors.append(str(e))
try: self.assertEqual(car.wheel_count, 4) # Failure!
except AssertionError, e: self.verificationErrors.append(str(e))
if __name__ == "__main__":
unittest.main()
Since Python 3.4 you can also use subtests:
def test_init(self):
make = "Ford"
model = "Model T"
car = Car(make=make, model=model)
with self.subTest(msg='Car.make check'):
self.assertEqual(car.make, make)
with self.subTest(msg='Car.model check'):
self.assertEqual(car.model, model)
with self.subTest(msg='Car.has_seats check'):
self.assertTrue(car.has_seats)
with self.subTest(msg='Car.wheel_count check'):
self.assertEqual(car.wheel_count, 4)
(msg parameter is used to more easily determine which test failed.)
Output:
======================================================================
FAIL: test_init (__main__.CarTest) [Car.model check]
----------------------------------------------------------------------
Traceback (most recent call last):
File "test.py", line 23, in test_init
self.assertEqual(car.model, model)
AssertionError: 'Ford' != 'Model T'
- Ford
+ Model T
======================================================================
FAIL: test_init (__main__.CarTest) [Car.wheel_count check]
----------------------------------------------------------------------
Traceback (most recent call last):
File "test.py", line 27, in test_init
self.assertEqual(car.wheel_count, 4)
AssertionError: 3 != 4
----------------------------------------------------------------------
Ran 1 test in 0.001s
FAILED (failures=2)
One option is assert on all the values at once as a tuple.
For example:
class CarTest(unittest.TestCase):
def test_init(self):
make = "Ford"
model = "Model T"
car = Car(make=make, model=model)
self.assertEqual(
(car.make, car.model, car.has_seats, car.wheel_count),
(make, model, True, 4))
The output from this tests would be:
======================================================================
FAIL: test_init (test.CarTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\temp\py_mult_assert\test.py", line 17, in test_init
(make, model, True, 4))
AssertionError: Tuples differ: ('Ford', 'Ford', True, 3) != ('Ford', 'Model T', True, 4)
First differing element 1:
Ford
Model T
- ('Ford', 'Ford', True, 3)
? ^ - ^
+ ('Ford', 'Model T', True, 4)
? ^ ++++ ^
This shows that both the model and the wheel count are incorrect.
What you'll probably want to do is derive unittest.TestCase since that's the class that throws when an assertion fails. You will have to re-architect your TestCase to not throw (maybe keep a list of failures instead). Re-architecting stuff can cause other issues that you would have to resolve. For example you may end up needing to derive TestSuite to make changes in support of the changes made to your TestCase.
It is considered an anti-pattern to have multiple asserts in a single unit test. A single unit test is expected to test only one thing. Perhaps you are testing too much. Consider splitting this test up into multiple tests. This way you can name each test properly.
Sometimes however, it is okay to check multiple things at the same time. For instance when you are asserting properties of the same object. In that case you are in fact asserting whether that object is correct. A way to do this is to write a custom helper method that knows how to assert on that object. You can write that method in such a way that it shows all failing properties or for instance shows the complete state of the expected object and the complete state of the actual object when an assert fails.
There is a soft assertion package in PyPI called softest that will handle your requirements. It works by collecting the failures, combining exception and stack trace data, and reporting it all as part of the usual unittest output.
For instance, this code:
import softest
class ExampleTest(softest.TestCase):
def test_example(self):
# be sure to pass the assert method object, not a call to it
self.soft_assert(self.assertEqual, 'Worf', 'wharf', 'Klingon is not ship receptacle')
# self.soft_assert(self.assertEqual('Worf', 'wharf', 'Klingon is not ship receptacle')) # will not work as desired
self.soft_assert(self.assertTrue, True)
self.soft_assert(self.assertTrue, False)
self.assert_all()
if __name__ == '__main__':
softest.main()
...produces this console output:
======================================================================
FAIL: "test_example" (ExampleTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\...\softest_test.py", line 14, in test_example
self.assert_all()
File "C:\...\softest\case.py", line 138, in assert_all
self.fail(''.join(failure_output))
AssertionError: ++++ soft assert failure details follow below ++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The following 2 failures were found in "test_example" (ExampleTest):
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Failure 1 ("test_example" method)
+--------------------------------------------------------------------+
Traceback (most recent call last):
File "C:\...\softest_test.py", line 10, in test_example
self.soft_assert(self.assertEqual, 'Worf', 'wharf', 'Klingon is not ship receptacle')
File "C:\...\softest\case.py", line 84, in soft_assert
assert_method(*arguments, **keywords)
File "C:\...\Python\Python36-32\lib\unittest\case.py", line 829, in assertEqual
assertion_func(first, second, msg=msg)
File "C:\...\Python\Python36-32\lib\unittest\case.py", line 1203, in assertMultiLineEqual
self.fail(self._formatMessage(msg, standardMsg))
File "C:\...\Python\Python36-32\lib\unittest\case.py", line 670, in fail
raise self.failureException(msg)
AssertionError: 'Worf' != 'wharf'
- Worf
+ wharf
: Klingon is not ship receptacle
+--------------------------------------------------------------------+
Failure 2 ("test_example" method)
+--------------------------------------------------------------------+
Traceback (most recent call last):
File "C:\...\softest_test.py", line 12, in test_example
self.soft_assert(self.assertTrue, False)
File "C:\...\softest\case.py", line 84, in soft_assert
assert_method(*arguments, **keywords)
File "C:\...\Python\Python36-32\lib\unittest\case.py", line 682, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (failures=1)
NOTE: I created and maintain softest.
Do each assert in a separate method.
class MathTest(unittest.TestCase):
def test_addition1(self):
self.assertEqual(1 + 0, 1)
def test_addition2(self):
self.assertEqual(1 + 1, 3)
def test_addition3(self):
self.assertEqual(1 + (-1), 0)
def test_addition4(self):
self.assertEqaul(-1 + (-1), -1)
expect is very useful in gtest.
This is python way in gist, and code:
import sys
import unittest
class TestCase(unittest.TestCase):
def run(self, result=None):
if result is None:
self.result = self.defaultTestResult()
else:
self.result = result
return unittest.TestCase.run(self, result)
def expect(self, val, msg=None):
'''
Like TestCase.assert_, but doesn't halt the test.
'''
try:
self.assert_(val, msg)
except:
self.result.addFailure(self, sys.exc_info())
def expectEqual(self, first, second, msg=None):
try:
self.failUnlessEqual(first, second, msg)
except:
self.result.addFailure(self, sys.exc_info())
expect_equal = expectEqual
assert_equal = unittest.TestCase.assertEqual
assert_raises = unittest.TestCase.assertRaises
test_main = unittest.main
I liked the approach by #Anthony-Batchelor, to capture the AssertionError exception. But a slight variation to this approach using decorators and also a way to report the tests cases with pass/fail.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import unittest
class UTReporter(object):
'''
The UT Report class keeps track of tests cases
that have been executed.
'''
def __init__(self):
self.testcases = []
print "init called"
def add_testcase(self, testcase):
self.testcases.append(testcase)
def display_report(self):
for tc in self.testcases:
msg = "=============================" + "\n" + \
"Name: " + tc['name'] + "\n" + \
"Description: " + str(tc['description']) + "\n" + \
"Status: " + tc['status'] + "\n"
print msg
reporter = UTReporter()
def assert_capture(*args, **kwargs):
'''
The Decorator defines the override behavior.
unit test functions decorated with this decorator, will ignore
the Unittest AssertionError. Instead they will log the test case
to the UTReporter.
'''
def assert_decorator(func):
def inner(*args, **kwargs):
tc = {}
tc['name'] = func.__name__
tc['description'] = func.__doc__
try:
func(*args, **kwargs)
tc['status'] = 'pass'
except AssertionError:
tc['status'] = 'fail'
reporter.add_testcase(tc)
return inner
return assert_decorator
class DecorateUt(unittest.TestCase):
#assert_capture()
def test_basic(self):
x = 5
self.assertEqual(x, 4)
#assert_capture()
def test_basic_2(self):
x = 4
self.assertEqual(x, 4)
def main():
#unittest.main()
suite = unittest.TestLoader().loadTestsFromTestCase(DecorateUt)
unittest.TextTestRunner(verbosity=2).run(suite)
reporter.display_report()
if __name__ == '__main__':
main()
Output from console:
(awsenv)$ ./decorators.py
init called
test_basic (__main__.DecorateUt) ... ok
test_basic_2 (__main__.DecorateUt) ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK
=============================
Name: test_basic
Description: None
Status: fail
=============================
Name: test_basic_2
Description: None
Status: pass
I had a problem with the answer from #Anthony Batchelor because it would have forced me to use try...catch inside my unit tests. Instead, I encapsulated the try...catch logic in an override of the TestCase.assertEqual method. Here is the code:
import unittest
import traceback
class AssertionErrorData(object):
def __init__(self, stacktrace, message):
super(AssertionErrorData, self).__init__()
self.stacktrace = stacktrace
self.message = message
class MultipleAssertionFailures(unittest.TestCase):
def __init__(self, *args, **kwargs):
self.verificationErrors = []
super(MultipleAssertionFailures, self).__init__( *args, **kwargs )
def tearDown(self):
super(MultipleAssertionFailures, self).tearDown()
if self.verificationErrors:
index = 0
errors = []
for error in self.verificationErrors:
index += 1
errors.append( "%s\nAssertionError %s: %s" % (
error.stacktrace, index, error.message ) )
self.fail( '\n\n' + "\n".join( errors ) )
self.verificationErrors.clear()
def assertEqual(self, goal, results, msg=None):
try:
super( MultipleAssertionFailures, self ).assertEqual( goal, results, msg )
except unittest.TestCase.failureException as error:
goodtraces = self._goodStackTraces()
self.verificationErrors.append(
AssertionErrorData( "\n".join( goodtraces[:-2] ), error ) )
def _goodStackTraces(self):
"""
Get only the relevant part of stacktrace.
"""
stop = False
found = False
goodtraces = []
# stacktrace = traceback.format_exc()
# stacktrace = traceback.format_stack()
stacktrace = traceback.extract_stack()
# https://stackoverflow.com/questions/54499367/how-to-correctly-override-testcase
for stack in stacktrace:
filename = stack.filename
if found and not stop and \
not filename.find( 'lib' ) < filename.find( 'unittest' ):
stop = True
if not found and filename.find( 'lib' ) < filename.find( 'unittest' ):
found = True
if stop and found:
stackline = ' File "%s", line %s, in %s\n %s' % (
stack.filename, stack.lineno, stack.name, stack.line )
goodtraces.append( stackline )
return goodtraces
# class DummyTestCase(unittest.TestCase):
class DummyTestCase(MultipleAssertionFailures):
def setUp(self):
self.maxDiff = None
super(DummyTestCase, self).setUp()
def tearDown(self):
super(DummyTestCase, self).tearDown()
def test_function_name(self):
self.assertEqual( "var", "bar" )
self.assertEqual( "1937", "511" )
if __name__ == '__main__':
unittest.main()
Result output:
F
======================================================================
FAIL: test_function_name (__main__.DummyTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\User\Downloads\test.py", line 77, in tearDown
super(DummyTestCase, self).tearDown()
File "D:\User\Downloads\test.py", line 29, in tearDown
self.fail( '\n\n' + "\n\n".join( errors ) )
AssertionError:
File "D:\User\Downloads\test.py", line 80, in test_function_name
self.assertEqual( "var", "bar" )
AssertionError 1: 'var' != 'bar'
- var
? ^
+ bar
? ^
:
File "D:\User\Downloads\test.py", line 81, in test_function_name
self.assertEqual( "1937", "511" )
AssertionError 2: '1937' != '511'
- 1937
+ 511
:
More alternative solutions for capturing the correct stacktrace can be found at How to correctly override TestCase.assertEqual(), producing the right stacktrace?
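If you do not need the trimmed stacktraces, the same idea reduces to a much smaller class. A minimal sketch (not the exact code above; the class name is mine):
import unittest

class SoftAssertCase(unittest.TestCase):
    """ Collects assertEqual failures and reports them all in tearDown. """

    def __init__(self, *args, **kwargs):
        super(SoftAssertCase, self).__init__(*args, **kwargs)
        self.verificationErrors = []

    def assertEqual(self, first, second, msg=None):
        try:
            super(SoftAssertCase, self).assertEqual(first, second, msg)
        except self.failureException as error:
            self.verificationErrors.append(str(error))

    def tearDown(self):
        super(SoftAssertCase, self).tearDown()
        if self.verificationErrors:
            # Swap the list out before fail(), which raises immediately.
            errors, self.verificationErrors = self.verificationErrors, []
            self.fail('\n\n'.join(errors))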
I don't think there is a way to do this with PyUnit and wouldn't want to see PyUnit extended in this way.
I prefer to stick to one assertion per test function (or more specifically asserting one concept per test) and would rewrite test_addition() as four separate test functions. This would give more useful information on failure, viz:
.FF.
======================================================================
FAIL: test_addition_with_two_negatives (__main__.MathTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_addition.py", line 10, in test_addition_with_two_negatives
self.assertEqual(-1 + (-1), -1)
AssertionError: -2 != -1
======================================================================
FAIL: test_addition_with_two_positives (__main__.MathTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_addition.py", line 6, in test_addition_with_two_positives
self.assertEqual(1 + 1, 3) # Failure!
AssertionError: 2 != 3
----------------------------------------------------------------------
Ran 4 tests in 0.000s
FAILED (failures=2)
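For reference, a sketch of what those four test functions could look like. The two failing assertions are taken from the output above; the two passing tests are my reconstruction:
import unittest

class MathTest(unittest.TestCase):

    def test_addition_with_two_positives(self):
        self.assertEqual(1 + 1, 3)  # Failure!

    def test_addition_with_two_negatives(self):
        self.assertEqual(-1 + (-1), -1)  # Failure!

    # The passing tests are assumptions, filled in to make four tests.
    def test_addition_with_positive_and_negative(self):
        self.assertEqual(1 + (-1), 0)

    def test_addition_with_negative_and_positive(self):
        self.assertEqual(-1 + 1, 0)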
If you decide that this approach isn't for you, you may find this answer helpful.
Update
It looks like you are testing two concepts with your updated question and I would split these into two unit tests. The first is that the parameters are stored on the creation of a new object. This would have two assertions, one for make and one for model. If the first fails, then that clearly needs to be fixed; whether the second passes or fails is irrelevant at this juncture.
The second concept is more questionable... You're testing whether some default values are initialised. Why? It would be more useful to test these values at the point that they are actually used (and if they are not used, then why are they there?).
Both of these tests fail, and both should. When I am unit-testing, I am far more interested in failure than I am in success as that is where I need to concentrate.
FF
======================================================================
FAIL: test_creation_defaults (__main__.CarTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_car.py", line 25, in test_creation_defaults
self.assertEqual(self.car.wheel_count, 4) # Failure!
AssertionError: 3 != 4
======================================================================
FAIL: test_creation_parameters (__main__.CarTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_car.py", line 20, in test_creation_parameters
self.assertEqual(self.car.model, self.model) # Failure!
AssertionError: 'Ford' != 'Model T'
----------------------------------------------------------------------
Ran 2 tests in 0.000s
FAILED (failures=2)
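Similarly, a sketch of the two-test split for CarTest, reconstructed from the failure output above (the setUp details are assumptions):
import unittest

class CarTest(unittest.TestCase):

    def setUp(self):
        # Assumes the Car class from the question.
        self.make = "Ford"
        self.model = "Model T"
        self.car = Car(make=self.make, model=self.model)

    def test_creation_parameters(self):
        self.assertEqual(self.car.make, self.make)
        self.assertEqual(self.car.model, self.model)  # Failure!

    def test_creation_defaults(self):
        self.assertTrue(self.car.has_seats)
        self.assertEqual(self.car.wheel_count, 4)  # Failure!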
I realize this question was asked literally years ago, but there are now (at least) two Python packages that allow you to do this.
One is softest: https://pypi.org/project/softest/
The other is Python-Delayed-Assert: https://github.com/pr4bh4sh/python-delayed-assert
I haven't used either, but they look pretty similar to me.
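For a flavor of the first one, softest's README shows a soft_assert/assert_all pattern roughly like this (sketched from my reading of its docs, so treat the exact method names as assumptions and double-check the package documentation):
import unittest
import softest  # pip install softest

class ExampleTest(softest.TestCase):

    def test_example(self):
        # Each soft_assert records the failure instead of aborting the test.
        self.soft_assert(self.assertEqual, "var", "bar")
        self.soft_assert(self.assertTrue, False)
        # assert_all() fails the test at the end, reporting every failure.
        self.assert_all()

if __name__ == '__main__':
    unittest.main()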
I think I found a solution that works. Using selenium, I stored the text values of the matched elements in a list and looped through that list until I found an item containing the text I needed. When the item was found, I assigned a specific value to a dummy variable and used a 'break' statement; otherwise I assigned a different value. Then I asserted on that dummy variable outside of the for-loop.
elements = self.driver.find_elements(*element)
print(elements)
global y
for element in elements:
    print(element.text)
    t = element.text
    time_strip = combined_time[:-2]  # test_case specific code
    y = time_strip in t              # test_case specific code
    print(y)
    if y:
        global z
        z = "banana"  # dummy value: the text was found
        break
    else:
        z = "apple"   # dummy value: not found so far

# Assert on the dummy variable outside the loop.
if z == "banana":
    print(z)
    assert 2 == 2
else:
    print(z)
    assert 2 == 1
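The same check can be written more directly. Here is a sketch, reusing self.driver, element, and combined_time from the snippet above:
# Collect the text of every matched element, then assert on the whole list.
texts = [el.text for el in self.driver.find_elements(*element)]
time_strip = combined_time[:-2]  # test-case specific, as above

# any() replaces the flag-and-break loop and the dummy variable.
assert any(time_strip in t for t in texts), \
    "%r not found in any of %r" % (time_strip, texts)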

How to print the 2 full objects instead of show diff on a python unit test error?

For example, if I run this test:
import unittest

class TestAssertEqual(unittest.TestCase):

    def testString(self):
        self.maxDiff = None
        a = u'looooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggloooooooo\nooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnngggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggg'
        b = u'looooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggglooooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnngggggggggggggggggggggggggggggggggggggggggggggggg\nggggggggggggggggggggglooooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnn\nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggglooooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggg'
        self.assertEqual(a, b)

if __name__ == '__main__':
    unittest.main()
I can barely understand anything when the multi-line diff is shown:
F
======================================================================
FAIL: testString (__main__.TestAssertEqual)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\User\Downloads\test.py", line 8, in testString
self.assertEqual(a, b)
AssertionError: u'looooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnn [truncated]... != u'looooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnn [truncated]...
+ looooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggglooooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnngggggggggggggggggggggggggggggggggggggggggggggggg
+ ggggggggggggggggggggglooooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnn
- looooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggloooooooo
? -----------
+ nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggglooooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggg? +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
- ooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnngggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggg
----------------------------------------------------------------------
Ran 1 test in 0.006s
FAILED (failures=1)
Is it possible for Python to only show the strings instead of doing a multi-line diff? For example:
F
======================================================================
FAIL: testString (__main__.TestAssertEqual)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\User\Downloads\test.py", line 8, in testString
self.assertEqual(a, b)
AssertionError:
looooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggloooooooo
ooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnngggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggg
!=
looooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggglooooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnngggggggggggggggggggggggggggggggggggggggggggggggg
ggggggggggggggggggggglooooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnn
nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggglooooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggg
----------------------------------------------------------------------
Ran 1 test in 0.006s
FAILED (failures=1)
While searching I found the question Comparison of multi-line strings in Python unit test, where it is explained that several types are registered to be shown in diff mode. These are the types:
self.addTypeEqualityFunc(dict, 'assertDictEqual')
self.addTypeEqualityFunc(list, 'assertListEqual')
self.addTypeEqualityFunc(tuple, 'assertTupleEqual')
self.addTypeEqualityFunc(set, 'assertSetEqual')
self.addTypeEqualityFunc(frozenset, 'assertSetEqual')
try:
    self.addTypeEqualityFunc(unicode, 'assertMultiLineEqual')
except NameError:
    # No unicode support in this build
    pass
As we can see, unicode is registered, which is why the strings were displayed as a diff in the minimal example just above. However, other types, such as str, are not registered and will not render in diff mode.
Can I unregister all these types so nothing is displayed in diff mode?
Could the output be displayed in this format:
AssertionError:
'a'
!=
'b'
----------------------------------------------------------------------
Ran 1 test in 0.006s
Instead of:
AssertionError: 'b' != 'a'
----------------------------------------------------------------------
Ran 1 test in 0.001s
Using the code from this answer to How to wrap correctly the unit testing diff with diffMode=0 as characters seems to produce a better result than the built-in Python unittest module's behavior:
looooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggg
+ looooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnngggggggggggggggggggggggggggggggggggggggggggggggg
+ gggggggggggggggggggg gl
+ o oooooooo
+ oooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnn
+ nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnngggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggloooooooo ooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn
- g ggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggg
Anyway, you can print the two full strings, one after the other, with the following:
import unittest

class TestRules(unittest.TestCase):
    ## Set the maximum size of the assertion error message when a unit test fails
    maxDiff = None

    def setUp(self):
        self.addTypeEqualityFunc(str, self.myAssertEquals)

    def myAssertEquals(self, expected, actual, msg=""):
        if expected != actual:
            self.fail( '\n%s\n!=\n%s' % ( expected, actual ) )

    def test_badWrapping(self):
        a = u'looooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggloooooooo\nooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnngggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggg'
        b = u'looooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggglooooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnngggggggggggggggggggggggggggggggggggggggggggggggg\nggggggggggggggggggggglooooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnn\nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggglooooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggg'
        self.myAssertEquals(
            a,
            b
        )

unittest.main(failfast=True)
Resulting in the following output as desired:
F
======================================================================
FAIL: test_badWrapping (__main__.TestRules)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\User\Downloads\test.py", line 23, in test_badWrapping
b
File "D:\User\Downloads\test.py", line 14, in myAssertEquals
self.fail( '\n%s\n!=\n%s' % ( expected, actual ) )
AssertionError:
looooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggloooooooo
ooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnngggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggg
!=
looooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggglooooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnngggggggggggggggggggggggggggggggggggggggggggggggg
ggggggggggggggggggggglooooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnn
nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggglooooooooooooooooooooooooooooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggg
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (failures=1)
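As for unregistering the built-in diff types wholesale: unittest stores these registrations in the private TestCase._type_equality_funcs mapping, so clearing it disables every type-specific diff handler. A sketch; note that this relies on a private attribute and may change between Python versions:
import unittest

class NoDiffTestCase(unittest.TestCase):

    def setUp(self):
        # _type_equality_funcs holds the addTypeEqualityFunc registrations;
        # clearing it makes assertEqual fall back to the plain
        # "first != second" message for every type, with no multi-line diff.
        self._type_equality_funcs.clear()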

Python: assertRaises error in unit test...exception not being caught

I have a unit test written to force an exception to be thrown. The exception is thrown, but my unit test statement doesn't catch it for some reason, and fails unexpectedly.
Here is the unit test:
def test900_001_ShouldRaiseExceptionDuplicateID(self):
    hist = projecthistory.ProjectHistory()
    myProject = project.Project(id = 42, locR = 10, locP = 15, locA = 30, eP = 200, eA = 210)
    hist.addProject(myProject)
    myProject2 = project.Project(id = 42, locR = 15, locP = 25, locA = 40, eP = 300, eA = 410)
    self.assertRaises(ValueError, projecthistory.ProjectHistory, hist.addProject(myProject2))
Here is the code that this pertains to:
def addProject(self, proj):
    duplicate = False
    checkId = proj.getId()
    # check to see if that id is already in the container; if so, raise ValueError
    # otherwise append project to container
    for project in self.theContainer:
        if (project.getId() == checkId):
            duplicate = True
            break
    if (duplicate == False):
        self.theContainer.append(proj)
    else:
        raise ValueError("ProjectHistory.addProject: Duplicate ID found. Project not added to repository.")
    return len(self.theContainer)
Basically, projects are added to a list called theContainer. However, if two IDs are the same, then the duplicate is not added. By forcing the unit test to add two projects with the same ID, an exception is raised.
Here is the traceback that I get:
Traceback (most recent call last):
File "C:\Users\blah\workspace\blahID\CA06\test\projecthistoryTest.py", line 46, in test900_001_ShouldRaiseExceptionDuplicateID
self.assertRaises(ValueError, projecthistory.ProjectHistory, hist.addProject(myProject2))
File "C:\Users\blah\workspace\blahID\CA06\prod\projecthistory.py", line 38, in addProject
raise ValueError("ProjectHistory.addProject: Duplicate ID found. Project not added to repository.")
ValueError: ProjectHistory.addProject: Duplicate ID found. Project not added to repository.
Could the problem be with the third parameter in assertRaises? (hist.addProject(myProject2))
Your suspicion is correct and the issue lies with the call to hist.addProject().
You wrote:
self.assertRaises(ValueError, projecthistory.ProjectHistory,
                  hist.addProject(myProject2))
There is a ValueError raised. But it is in
hist.addProject(myProject2)
The traceback tells you that. And so assertRaises is never actually called because the exception is raised before it gets called.
The way to think about it is that assertRaises can only catch exceptions if it actually manages to be called. If the act of preparing its arguments raises an exception, then assertRaises does not even run, and so cannot catch anything.
If you expect an exception in the call to addProject() just change your assertion:
self.assertRaises(ValueError, hist.addProject, myProject2)
Or you could postpone the call to hist.addProject() with a lambda:
self.assertRaises(ValueError,
                  lambda: projecthistory.ProjectHistory(hist.addProject(myProject2)))
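On Python 2.7+ the context-manager form sidesteps the problem entirely, because the call happens inside the with block rather than while building assertRaises' arguments:
with self.assertRaises(ValueError):
    hist.addProject(myProject2)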

Python:What's wrong in my unit testing code?

I have two files
"testable.py":
def joiner(x,y):
    return x+y
"test_testable.py":
import unittest
import testable

class TestTestable(unittest.TestCase):

    def setUp(self):
        self.seq = ['a','b','1']
        self.seq2 = ['b','c',1]

    def test_joiner(self):
        for each in self.seq:
            for eachy in self.seq2:
                self.assertRaises(TypeError,testable.joiner(each,eachy))

if __name__ == '__main__':
    unittest.main()
Now when I run the test I get:
ERROR: test_joiner (test_testable.TestTestable)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/rajat/collective_knowledge/test_testable.py", line 16, in test_joiner
self.assertRaises(TypeError,testable.joiner(each,eachy),(each,eachy))
File "/home/rajat/collective_knowledge/testable.py", line 11, in joiner
return x+y
TypeError: cannot concatenate 'str' and 'int' objects
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (errors=1)
What am I doing wrong?
You're misusing assertRaises; it should be passed the callable and its arguments separately:
self.assertRaises(TypeError, testable.joiner, each, eachy)
Or just use it as a context manager if you're using Python 2.7+ or unittest2:
with self.assertRaises(TypeError):
    testable.joiner(each,eachy)
EDIT:
You should also replace self.seq2, for example with self.seq2 = [1, 2, 3]. In
for each in self.seq:
    for eachy in self.seq2:
each could be 'a' and eachy could be 'b'. Concatenating 'a' and 'b' works fine, so no TypeError is raised for that pair and the (fixed) assertion fails. With seq2 as all ints, every pair mixes a str and an int, so the TypeError is always raised.
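Putting both fixes together, a corrected version of the test might look like this (a sketch combining the suggestions above):
import unittest
import testable

class TestTestable(unittest.TestCase):

    def setUp(self):
        self.seq = ['a', 'b', '1']
        self.seq2 = [1, 2, 3]  # all ints, so every str + int pair raises

    def test_joiner(self):
        for each in self.seq:
            for eachy in self.seq2:
                # Context-manager form: the call happens inside the block.
                with self.assertRaises(TypeError):
                    testable.joiner(each, eachy)

if __name__ == '__main__':
    unittest.main()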
