I wanted to call setUpClass and tearDownClass so that setup and teardown would be performed only once for each test. However, it keeps failing when I call tearDownClass. I only want to record one test result: PASS if both tests passed, or FAIL if both tests failed. If I use only setUp and tearDown, everything works fine:
Calling setUpClass and tearDownClass:
#!/usr/bin/python
import datetime
import itertools
import logging
import os
import sys
import time
import unittest

LOGFILE = 'logfile.txt'

class MyTest(unittest.TestCase):

    global testResult
    testResult = None

    @classmethod
    def setUpClass(self):
        ## test result for DB Entry:
        self.dbresult_dict = {
            'SCRIPT' : 'MyTest.py',
            'RESULT' : testResult,
        }

    def test1(self):
        expected_number = 10
        actual_number = 10
        self.assertEqual(expected_number, actual_number)

    def test2(self):
        expected = True
        actual = True
        self.assertEqual(expected, actual)

    def run(self, result=None):
        self.testResult = result
        unittest.TestCase.run(self, result)

    @classmethod
    def tearDownClass(self):
        ok = self.testResult.wasSuccessful()
        errors = self.testResult.errors
        failures = self.testResult.failures
        if ok:
            self.dbresult_dict['RESULT'] = 'Pass'
        else:
            logging.info(' %d errors and %d failures',
                         len(errors), len(failures))
            self.dbresult_dict['RESULT'] = 'Fail'

if __name__ == '__main__':
    logger = logging.getLogger()
    logger.addHandler(logging.FileHandler(LOGFILE, mode='a'))
    stderr_file = open(LOGFILE, 'a')
    runner = unittest.TextTestRunner(verbosity=2, stream=stderr_file, descriptions=True)
    itersuite = unittest.TestLoader().loadTestsFromTestCase(MyTest)
    runner.run(itersuite)
    sys.exit()
    unittest.main(module=itersuite, exit=True)
    stderr_file.close()
Error:
test1 (__main__.MyTest) ... ok
test2 (__main__.MyTest) ... ok
ERROR
===================================================================
ERROR: tearDownClass (__main__.MyTest)
-------------------------------------------------------------------
Traceback (most recent call last):
File "testTearDownClass.py", line 47, in tearDownClass
ok = self.testResult.wasSuccessful()
AttributeError: type object 'MyTest' has no attribute 'testResult'
----------------------------------------------------------------------
Ran 2 tests in 0.006s
FAILED (errors=1)
Like @Marcin already pointed out, you're using the unittest framework in a way it isn't intended to be used.
To see whether the tests are successful, you compare the actual values with the expected ones, as you already did: assertEqual(given, expected). unittest will then collect a summary of the failed ones; you don't have to do this manually.
If two tests need to succeed or fail together, they should be combined into ONE test, perhaps as an additional one if the individual tests also need to be checked on their own. This is not something you want to save and load afterwards. The tests themselves should be as stateless as possible.
When you say you want to run the setup and teardown 'once per test', do you mean once per test method or once per test run? This matters if you have more than one test method inside your class (see the sketch after the list):
setUp() - called before each test method
tearDown() - called after each test method
setUpClass() - called once per class (before the first test method of the class)
tearDownClass() - called once per class (after the last test method of the class)
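For instance, here is a minimal sketch (class name and print messages are illustrative, not from your code) that makes the call order visible when run:

import unittest

class LifecycleDemo(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        print('setUpClass: runs once, before the first test method')

    def setUp(self):
        print('setUp: runs before each test method')

    def test_a(self):
        pass

    def test_b(self):
        pass

    def tearDown(self):
        print('tearDown: runs after each test method')

    @classmethod
    def tearDownClass(cls):
        print('tearDownClass: runs once, after the last test method')

if __name__ == '__main__':
    unittest.main()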
Here's the official documentation
Here's a related answer
Change tearDownClass(self) to tearDownClass(cls) and setUpClass(self) to setUpClass(cls).
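A minimal sketch of how that fix can be combined with the original idea; it assumes (my addition, not from the answer) that the shared TestResult is stored on the class rather than on the instance, so the classmethod tearDownClass can reach it:

import unittest

class MyTest(unittest.TestCase):
    testResult = None  # class-level slot so tearDownClass can see it

    @classmethod
    def setUpClass(cls):
        cls.dbresult_dict = {'SCRIPT': 'MyTest.py', 'RESULT': None}

    def run(self, result=None):
        # store the shared TestResult on the class, not on the instance
        type(self).testResult = result
        return super().run(result)

    def test1(self):
        self.assertEqual(10, 10)

    def test2(self):
        self.assertTrue(True)

    @classmethod
    def tearDownClass(cls):
        # reflects all results collected so far in this run
        ok = cls.testResult.wasSuccessful()
        cls.dbresult_dict['RESULT'] = 'Pass' if ok else 'Fail'

if __name__ == '__main__':
    unittest.main()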
I am trying to mock the Python os module, but my mocking is not working.
The code in the file os_mock.py:
import os

class MyTestMock:
    def rm(self):
        # for some reason the file is always hardcoded
        file_path = "/tmp/file1"
        if os.path.exists(file_path):
            os.remove(file_path)
            print(file_path, 'removed successfully')
        else:
            print(file_path, 'Does not exist')
The code in the test case file test_os_mock.py:
import os
import unittest
from unittest.mock import patch

from os_mock import MyTestMock

class TestMyTestMock(unittest.TestCase):
    @patch('os.path')
    @patch('os.remove')
    def test_rm(self, mock_remove, mock_path):
        my_test_mock = MyTestMock()

        mock_path.exists.return_vallue = False
        my_test_mock.rm()
        self.assertFalse(mock_remove.called)

        mock_path.exists.return_vallue = True
        my_test_mock.rm()
        self.assertTrue(mock_remove.called)
I am getting the error below when I execute the test case:
F
======================================================================
FAIL: test_rm (__main__.TestMyTestMock)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/unittest/mock.py", line 1336, in patched
return func(*newargs, **newkeywargs)
File "/Users/vuser/code/MP-KT/mock/test_os_mock.py", line 15, in test_rm
self.assertFalse(mock_remove.called)
AssertionError: True is not false
----------------------------------------------------------------------
Ran 1 test in 0.008s
FAILED (failures=1)
I know I am doing something wrong while mocking, but I could not figure out what. I found a couple of Stack Overflow links and followed them, but no luck.
I have made changes only to your test file; the file os_mock.py remains unchanged.
Changing return_vallue to return_value is enough
If you change return_vallue to return_value, your test passes, so these changes (in the two places where the typo appears) are sufficient.
In particular, the changes are the following:
mock_path.exists.return_vallue = False --> mock_path.exists.return_value = False (return_vallue is not correct)
mock_path.exists.return_vallue = True --> mock_path.exists.return_value = True (return_vallue is not correct)
Improving the tests with assert_not_called() and assert_called_once()
The most important change is return_vallue --> return_value, but the unittest.mock package also provides the methods assert_not_called() and assert_called_once(), which can improve your tests.
For example, self.assertTrue(mock_remove.called) only ensures that the mocked method was called, whereas mock_remove.assert_called_once() checks that the method was called exactly one time.
So I recommend the following changes:
self.assertFalse(mock_remove.called) --> mock_remove.assert_not_called()
self.assertTrue(mock_remove.called) --> mock_remove.assert_called_once()
The new file test_os_mock.py
With the changes shown above, the file test_os_mock.py becomes:
import os
import unittest
from unittest.mock import patch

from os_mock import MyTestMock

class TestMyTestMock(unittest.TestCase):
    @patch('os.path')
    @patch('os.remove')
    def test_rm(self, mock_remove, mock_path):
        my_test_mock = MyTestMock()

        # return_vallue --> return_value
        mock_path.exists.return_value = False
        my_test_mock.rm()
        mock_remove.assert_not_called()
        #self.assertFalse(mock_remove.called)

        # return_vallue --> return_value
        mock_path.exists.return_value = True
        my_test_mock.rm()
        mock_remove.assert_called_once()
        #self.assertTrue(mock_remove.called)

if __name__ == "__main__":
    unittest.main()
assert_called_once_with()
In your test case, the method assert_called_once_with() is even better than assert_called_once(); with it your test becomes as follows:
@patch('os.path')
@patch('os.remove')
def test_rm(self, mock_remove, mock_path):
    my_test_mock = MyTestMock()

    mock_path.exists.return_value = False
    my_test_mock.rm()
    mock_remove.assert_not_called()

    mock_path.exists.return_value = True
    my_test_mock.rm()
    # here is the NEW CHANGE
    mock_remove.assert_called_once_with("/tmp/file1")
This link is very useful for understanding mock objects in Python.
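As a side note, here is a hedged alternative sketch (not from the answer above): you can patch only the functions the code under test actually calls, rather than the whole os.path module, which keeps the mocks narrower:

import unittest
from unittest.mock import patch

from os_mock import MyTestMock

class TestMyTestMockAlt(unittest.TestCase):
    @patch('os.remove')
    @patch('os.path.exists', return_value=False)
    def test_rm_missing_file(self, mock_exists, mock_remove):
        MyTestMock().rm()
        mock_remove.assert_not_called()

    @patch('os.remove')
    @patch('os.path.exists', return_value=True)
    def test_rm_existing_file(self, mock_exists, mock_remove):
        MyTestMock().rm()
        mock_remove.assert_called_once_with("/tmp/file1")

if __name__ == '__main__':
    unittest.main()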
I would like to get the test results from my unit tests and then log them. I'm having some trouble figuring out the best way to do it. Ideally I would like to get them in the tearDown method and log them there, so that each test logs its result as it finishes, but I can't seem to get it to work.
Here is some example code that you can run:
import unittest

class sample_tests(unittest.TestCase):
    def test_it(self):
        self.assertTrue(1==2)

    def tearDown(self):
        print("Get test results and log them here")
        print(unittest.TestResult())

if __name__=='__main__':
    #unittest.main()
    suite = unittest.TestSuite()
    suite.addTest(sample_tests("test_it"))
    runner = unittest.TextTestRunner()
    result = runner.run(suite)
    print(result.failures)
When you run this you will get the following output:
Get test results and log them here
<unittest.result.TestResult run=0 errors=0 failures=0>
F
======================================================================
FAIL: test_it (__main__.sample_tests)
----------------------------------------------------------------------
Traceback (most recent call last):
File ".\sample.py", line 6, in test_it
self.assertTrue(1==2)
AssertionError: False is not true
----------------------------------------------------------------------
Ran 1 test in 0.005s
FAILED (failures=1)
[(<__main__.sample_tests testMethod=test_it>, 'Traceback (most recent call last):\n File ".\\sample.py", line 6, in test_it\n self.assertTrue(1==2)\nAssertionError: False is not true\n')]
As you can see, the tearDown method is not giving the expected results, and I think it is because I'm not referencing the test runner, which holds the TestResult object.
EDIT
I've found a solution here:
Getting Python's unittest results in a tearDown() method
Here is the actual code that does what I wanted:
def tearDown(self):
    print("Get test results and log them here")
    if hasattr(self, '_outcome'):
        result = self.defaultTestResult()
        self._feedErrorsToResult(result, self._outcome.errors)
        print(result)
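Extending that slightly (the logger configuration and message format below are my own additions, and the approach relies on unittest internals that may change between Python versions), you can turn the reconstructed result into a per-test PASS/FAIL log line:

import logging
import unittest

logging.basicConfig(filename='results.log', level=logging.INFO)

class sample_tests(unittest.TestCase):
    def test_it(self):
        self.assertTrue(1==2)

    def tearDown(self):
        # rebuild a TestResult from the private _outcome attribute
        if hasattr(self, '_outcome'):
            result = self.defaultTestResult()
            self._feedErrorsToResult(result, self._outcome.errors)
            status = 'PASS' if result.wasSuccessful() else 'FAIL'
            logging.info('%s: %s', self.id(), status)

if __name__ == '__main__':
    unittest.main()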
You can stream the output to a file by passing a file object as the stream argument. For more detail, check the unittest.TextTestRunner class.
if __name__ == '__main__':
    suite = unittest.TestLoader().loadTestsFromTestCase(YourTestClass)
    with open('test_result.out', 'w') as f:
        unittest.TextTestRunner(stream=f, verbosity=2).run(suite)
This should work!
import xmlrunner

with open('test-reports/result.xml', 'wb') as output:
    unittest.main(testRunner=xmlrunner.XMLTestRunner(output=output),
                  failfast=False, buffer=False, catchbreak=False)
Alternative:
def tearDown(self):
    super(sample_tests, self).tearDown()
    with open('result.txt', 'w+') as output:
        test_failed = self._outcome.errors
        output.write(str(test_failed))
I have a tests file in one of my Pyramid projects. It has one suite with six tests in it:
...
from .scripts import populate_test_data

class FunctionalTests(unittest.TestCase):

    def setUp(self):
        settings = appconfig('config:testing.ini',
                             'main',
                             relative_to='../..')
        app = main({}, **settings)
        self.testapp = TestApp(app)
        self.config = testing.setUp()
        engine = engine_from_config(settings)
        DBSession.configure(bind=engine)
        populate_test_data(engine)

    def tearDown(self):
        DBSession.remove()
        testing.tearDown()

    def test_index(self):
        ...

    def test_login_form(self):
        ...

    def test_read_recipe(self):
        ...

    def test_tag(self):
        ...

    def test_dish(self):
        ...

    def test_dashboard_forbidden(self):
        ...
Now, when I run nosetests templates.py (where templates.py is the mentioned file) I get the following output:
......E
======================================================================
ERROR: templates.populate_test_data
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/yentsun/env/local/lib/python2.7/site-packages/nose-1.1.2-py2.7.egg/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/home/yentsun/env/local/lib/python2.7/site-packages/nose-1.1.2-py2.7.egg/nose/util.py", line 622, in newfunc
return func(*arg, **kw)
TypeError: populate_test_data() takes exactly 1 argument (0 given)
----------------------------------------------------------------------
Ran 7 tests in 1.985s
FAILED (errors=1)
When I run the tests with the test suite specified - nosetests templates.py:FunctionalTests - the output is, as expected, OK:
......
----------------------------------------------------------------------
Ran 6 tests in 1.980s
OK
Why do I have different output and why is an extra (7th) test run?
UPDATE. It's a bit frustrating, but when I removed the word test from the name populate_test_data (it became populate_dummy_data), everything worked fine.
The problem is solved for now, but maybe somebody knows what went wrong here - why was a function called from setUp collected as a test?
Finding and running tests
nose, by default, follows a few simple rules for test discovery.
If it looks like a test, it's a test. Names of directories, modules, classes and functions are compared against the testMatch regular expression, and those that match are considered tests. Any class that is a unittest.TestCase subclass is also collected, so long as it is inside of a module that looks like a test.
(from nose 1.3.0 documentation)
In nose's code, the regexp is defined as r'(?:^|[\b_\.%s-])[Tt]est' % os.sep, and if you inspect nose/selector.py, method Selector.matches(self, name), you'll see that the code uses re.search, which looks for a match anywhere in the string, not only at the beginning as re.match does.
A small test:
>>> import re
>>> import os
>>> testMatch = r'(?:^|[\b_\.%s-])[Tt]est' % os.sep
>>> re.match(testMatch, 'populate_test_data')
>>> re.search(testMatch, 'populate_test_data')
<_sre.SRE_Match object at 0x7f3512569238>
So populate_test_data indeed "looks like a test" by nose's standards.
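If renaming the helper is not desirable, a minimal alternative sketch (assuming the helper lives in scripts.py) is to mark it explicitly as not-a-test with nose.tools.nottest, so the collector skips it regardless of its name:

from nose.tools import nottest

@nottest
def populate_test_data(engine):
    # seeds the test database; not a test, despite the 'test' in its name
    ...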
I am trying to figure out how to write unit tests for functions that I have written in Python - here's the code written below:
def num_buses(n):
    """ (int) -> int

    Precondition: n >= 0

    Return the minimum number of buses required to transport n people.
    Each bus can hold 50 people.

    >>> num_buses(75)
    2
    """
    import math
    bus = int()
    if n >= 0:
        bus = int(math.ceil(n/50.0))
    return bus
I am attempting to write test code, but it is giving me a failing result - here's the code I started with:
import a1
import unittest

class TestNumBuses(unittest.TestCase):
    """ Test class for function a1.num_buses. """

    def test_numbuses_1(self):
        actual = num_buses(75)
        expected = 2
        self.assertEqual(actual, expected)

    # Add your test methods for a1.num_buses here.

if __name__ == '__main__':
    unittest.main(exit=False)
When I run the module by pressing F5 - this is what I get -
E
======================================================================
ERROR: test_numbuses_1 (__main__.TestNumBuses)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\1-blog-cacher\TestNumBuses.py", line 8, in test_numbuses_1
actual = num_buses(75)
NameError: global name 'num_buses' is not defined
----------------------------------------------------------------------
Ran 1 test in 0.050s
FAILED (errors=1)
So my test failed - although it should pass, since the number of passengers is 75 and each bus can hold a maximum of 50 people; anything beyond a full bus should round up to an extra bus.
Can anyone see how I can get the test case to work and where my test code went wrong?
In your unittest file you have to import num_buses.
That fixes your immediate problem: since num_buses is defined in a1, you either import it from a1 or call it as a1.num_buses - otherwise Python thinks num_buses is a global name that doesn't exist.
import unittest

from a1 import num_buses  # or: import a1 and call a1.num_buses(75)

class TestNumBuses(unittest.TestCase):
    """ Test class for function a1.num_buses. """

    def test_numbuses_1(self):
        actual = num_buses(75)
        expected = 2
        self.assertEqual(actual, expected)

    # Add your test methods for a1.num_buses here.

if __name__ == '__main__':
    unittest.main(exit=False)
Check this out: Cyber-Dojo Test - just press resume and then in test_untitled.py press the TEST button.
I've got test classes that inherit from unittest.TestCase. I load the classes multiple times like so:
tests = [TestClass1, TestClass2]
for test in tests:
    for var in array:
        # somehow indicate that this test should have the value of 'var'
        suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(test))
Thing is, I want to pass the value of 'var' to each test, but I cannot use class variables because they are shared between every instance of the class, and I don't have access to the code that actually instantiates the test objects. What is the best way of accomplishing this?
I think that even if you don't have access to the class that implements the test cases, you can subclass it and override the setUp method, as in the sketch below.
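A minimal sketch of that idea (the base class, helper name, and values are illustrative, not from the question):

import unittest

class TestClass1(unittest.TestCase):  # stands in for the class you can't modify
    def test_something(self):
        self.assertIsNotNone(self.var)

def with_var(base, var):
    """Build a subclass of `base` whose setUp injects `var` on each instance."""
    class Wrapped(base):
        def setUp(self):
            super().setUp()
            self.var = var
    Wrapped.__name__ = '%s_%s' % (base.__name__, var)
    Wrapped.__qualname__ = Wrapped.__name__
    return Wrapped

suite = unittest.TestSuite()
for var in ['a', 'b']:
    suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(with_var(TestClass1, var)))

if __name__ == '__main__':
    unittest.TextTestRunner(verbosity=2).run(suite)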
I think you're going about this the wrong way. Rather than doing what you are trying there, why don't you just do something like this in your class:
from my_tests.variables import my_array

class TestClass1(unittest.TestCase):

    def setUp(self):
        ...  # initializations

    def tearDown(self):
        ...  # clean up after

    def my_test_that_should_use_value_from_array(self):
        for value in my_array:
            test_stuff(value)
UPDATE:
Since you need to:
- feed some variable value to MyTestCase
- run MyTestCase using this value
- change the value
- use the updated value if MyTestCase is still running
Consider this:
- keep a values map in a file (.csv/.txt/.xml/etc.)
- read the values map from the file in setUp()
- find the value for your MyTestCase in the values map using the TestCase.id() method (as shown in the example below)
- use it in the test cases.
unittest has handy id() method, which returns test case name in filename.testclassname.methodname format.
So you can use it like this:
import unittest

my_variables_map = {
    'test_01': 'foo',
    'test_02': 'bar',
}

class MyTest(unittest.TestCase):
    def setUp(self):
        test_method_name = self.id()  # filename.testclassname.methodname
        test_method_name = test_method_name.split('.')[-1]  # method name
        self.variable_value = my_variables_map.get(test_method_name)
        self.error_message = 'No values found for "%s" method.' % test_method_name

    def test_01(self):
        self.assertTrue(self.variable_value is not None, self.error_message)

    def test_02(self):
        self.assertTrue(self.variable_value is not None, self.error_message)

    def test_03(self):
        self.assertTrue(self.variable_value is not None, self.error_message)

if __name__ == '__main__':
    unittest.main()
This gives you:
$ python /tmp/ut.py
..F
======================================================================
FAIL: test_03 (__main__.MyTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/ut.py", line 25, in test_03
self.assertTrue(self.variable_value is not None, self.error_message)
AssertionError: No values found for "test_03" method.
----------------------------------------------------------------------
Ran 3 tests in 0.000s
FAILED (failures=1)
$
I found the Data-Driven Tests (DDT - not the pesticide) package helpful for this.
http://ddt.readthedocs.org/en/latest/example.html
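For illustration, here is a minimal sketch using ddt (the class name and values are made up; the package is installed with pip install ddt):

import unittest
from ddt import ddt, data

@ddt
class MyParamTest(unittest.TestCase):
    @data(1, 2, 3)
    def test_is_positive(self, value):
        # ddt generates a separately named test method for each value
        self.assertGreater(value, 0)

if __name__ == '__main__':
    unittest.main()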