I have a JSON parser library (ijson) with a test suite using unittest. The library actually has several parsing implementations — "backends" — in the form of modules with an identical API. I want to automatically run the whole test suite once for each available backend. My goals are:
I want to keep all tests in one place, as they are backend-agnostic.
I want the name of the currently used backend to be visible in some fashion when a test fails.
I want to be able to run a single TestCase or a single test, as unittest normally allows.
So what's the best way to organize the test suite for this? Write a custom test runner? Let TestCases load backends themselves? Imperatively generate separate TestCase classes for each backend?
By the way, I'm not married to the unittest library in particular, and I'm open to trying another one if it solves the problem. But unittest is preferable, since I already have the test code in place.
One common way is to group all your tests together in one class that has an abstract method for creating an instance of the backend (useful if a test needs to create multiple instances), or that expects setUp to create the instance.
You can then create subclasses that create the different backends as needed.
If you are using a test loader that automatically detects TestCase subclasses, you'll probably need to make one change: don't make the common base class a subclass of TestCase; instead, treat it as a mixin, and make the backend classes subclass both TestCase and the mixin.
For example:
import unittest

class BackendTests:
    def make_backend(self):
        raise NotImplementedError

    def test_one(self):
        backend = self.make_backend()
        # perform a test on the backend

class FooBackendTests(unittest.TestCase, BackendTests):
    def make_backend(self):
        # Create an instance of the "foo" backend:
        return foo_backend

class BarBackendTests(unittest.TestCase, BackendTests):
    def make_backend(self):
        # Create an instance of the "bar" backend:
        return bar_backend
When building a test suite from the above, you will have independent test cases FooBackendTests.test_one and BarBackendTests.test_one that test the same feature on the two backends.
I took James Henstridge's idea of a mixin class holding all the tests, but the actual test cases are generated imperatively, since a backend may fail to import, in which case we don't want to test it:
import unittest
from importlib import import_module

class BackendTests(object):
    def test_something(self):
        # using self.backend
        ...

# Generating real TestCase classes for each importable backend
for name in ['backend1', 'backend2', 'backend3']:
    try:
        classname = '%sTest' % name.capitalize()
        locals()[classname] = type(
            classname,
            (unittest.TestCase, BackendTests),
            {'backend': import_module('backends.%s' % name)},
        )
    except ImportError:
        pass
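Note that assigning through locals() only works reliably here because the loop runs at module level, where locals() is the same dictionary as globals(); inside a function, writes to locals() are not guaranteed to take effect.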
I have a requirement to implement a test suite for multiple functions, and I am trying to figure out best practices to leverage existing pytest design patterns.
There are 2-3 common test cases for all the functions.
Each function requires a different pre-setup condition.
My current design:
/utils
    logic.py
/tests
    Test_Regression.py
    Sedan/
        Test_Sedan.py
    SUV/
        Test_SUV.py
    Hatchback/
        Test_Hatchback.py
/config
    Configuration.py
Test_Regression.py: this class holds the common test cases.
Test_SUV.py: this class inherits the Test_Regression test cases and has SUV-specific test cases.
utils/: this folder stores the program logic.
Is it good design practice for a test suite to use class inheritance?
import pytest

class Test_Regression:
    @pytest.mark.parametrize("x", utils.logic_func())
    @pytest.mark.testengine
    def test_engine(self, x):
        # validates logic
        assert x == 0

    @pytest.mark.parametrize("y", utils.logic_func())
    @pytest.mark.testheadlight
    def test_headlight(self, y):
        # validates logic
        assert y == 0

class Test_SUV(Test_Regression):
    def get_engine_values(self):
        # calls program logic
        return x
    ...
Or is there a better way to structure these test cases?
Precondition functions can be annotated with @pytest.fixture and then used as parameters to the test methods instead of calling utility functions directly. You can define the scope (function, class, module, package, or session) of these fixture functions. More details about fixtures: https://docs.pytest.org/en/6.2.x/fixture.html
Pytest over unittest: one reason is that you can avoid the implicit class requirement and keep your tests simple and less verbose. If you add classes for pytest, you lose this benefit. IMHO, you can keep your common tests in a regression module and avoid the Regression class and test class inheritance, because that is going to be hard to maintain in the long run.
Your tests should be independent from each other. If there is common functionality that you want to share or inherit, you can do that via fixtures and keep them in a module called conftest.py; all the functions in conftest.py are then available to all modules in the package and its sub-packages. More details about conftest
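For illustration, a minimal sketch of that layout; the fixture name, its values, and the two-file split are placeholders, not code from the question:

# conftest.py -- fixtures placed here are visible to every test module
# in the package and its sub-packages
import pytest

@pytest.fixture(scope="module")
def engine_values():
    # precondition data, computed once per module and injected into
    # any test that names this fixture as a parameter
    return [0, 0, 0]

# test_suv.py
def test_engine(engine_values):
    # the fixture result arrives as an ordinary argument
    assert all(v == 0 for v in engine_values)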
I was studying unittest by following the examples here.
In the following code, def test_add() is supposed to be wrapped in the class testClass(), but out of curiosity I didn't encapsulate it.
# class testClass(unittest.TestCase):
def test_add(self):
    result = cal_fun.add_fuc(5, 10)
    self.assertEqual(result, 15)

if __name__ == '__main__':
    unittest.main()
The result came out in VS Code as:
----------------------------------------------------------------------
Ran 0 tests in 0.000s

OK
Why was no test run? Why must the def test_add() be wrapped in a class?
Here's an expansion on my initial comment.
The unittest module has a Test Discovery feature/function. When you run it on a directory, it looks for pre-defined structures, definitions, and patterns. Namely:
In order to be compatible with test discovery, all of the test files must be modules or packages (including namespace packages) importable from the top-level directory of the project (this means that their filenames must be valid identifiers).
The basic building blocks of unit testing are test cases — single scenarios that must be set up and checked for correctness. In unittest, test cases are represented by unittest.TestCase instances. To make your own test cases you must write subclasses of TestCase or use FunctionTestCase.
The relevant answer to your question is that test functions must be wrapped in a TestCase. Not simply wrapped in just any "class testClass()" as you said: the class must specifically inherit from unittest.TestCase. In addition, the assertEqual method is only available as part of a TestCase, because it is a method of the TestCase class. If you somehow got that code of yours to run, self.assertEqual would raise an error; you would have to use plain assert statements.
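For reference, here is the snippet rewrapped in a TestCase subclass, keeping the asker's cal_fun.add_fuc call as-is:

import unittest
import cal_fun  # the asker's module

class TestClass(unittest.TestCase):
    def test_add(self):
        result = cal_fun.add_fuc(5, 10)
        self.assertEqual(result, 15)

if __name__ == '__main__':
    unittest.main()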
To go into more detail, you will have to read the section on unittest's load_tests Protocol and specifically the loadTestsFromModule(module, pattern=None) method:
Return a suite of all test cases contained in the given module. This method searches module for classes derived from TestCase and creates an instance of the class for each test method defined for the class.
Finally, it's not just about wrapping your test functions in a unittest.TestCase class, your tests must follow some pre-defined patterns:
By default, unittest looks for files matching the pattern test*.py. This is done by the discover(start_dir, pattern='test*.py', top_level_dir=None) method of unittest's TestLoader class, which is "used to create test suites from classes and modules".
By default, unittest looks for methods that start with test (ex. test_function). This is done by the testMethodPrefix attribute of unittest's TestLoader class, which is a "string giving the prefix of method names which will be interpreted as test methods. The default value is 'test'."
If you want some customized behavior, you'll have to override the load_tests protocol.
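A rough sketch of that hook, reusing the TestClass from the snippet above (the suite contents are just an example):

import unittest

def load_tests(loader, standard_tests, pattern):
    # module-level hook: unittest calls this instead of its default
    # loading logic for this module and uses whatever suite you return
    suite = unittest.TestSuite()
    suite.addTests(loader.loadTestsFromTestCase(TestClass))
    return suite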
I have my test class as follows:
class MyTestCase(django.test.TestCase):
    def setUp(self):
        # set up stuff common to ALL the tests
        ...

    @my_test_decorator('arg1')
    @my_test_decorator('arg2')
    def test_something_1(self):
        # run test
        ...

    def test_something_2(self):
        # run test
        ...

    @my_test_decorator('arg1')
    @my_test_decorator('arg2')
    def test_something_3(self):
        # run test
        ...

    ...

    def test_something_N(self):
        # run test
        ...
Now, @my_test_decorator is a decorator I made that sets up some changes to the test environment at runtime and undoes them after the test finishes. But I need to do this for a specific set of test cases only, not ALL of them, and I would like to keep the setup common to all the tests and, for the specific tests, maybe do something like this:
def setUp(self):
    # set up stuff common to ALL the tests
    tests_to_decorate = ['test_something_1', 'test_something_3']
    decorator_args = ['arg1', 'arg2']
    if self._testMethodName in tests_to_decorate:
        method = getattr(self, self._testMethodName)
        for arg in decorator_args:
            method = my_test_decorator(arg)(method)
        setattr(self, self._testMethodName, method)
I mean, without repeating the decorators all over the file. But it seems that the test runner retrieves the set of tests to run even before instantiating the test class, so doing this in the __init__ or setUp methods is of no use.
It would be nice to have a way to accomplish this without:
having to write my own test runner
needing to split the tests in two or more TestCase subclasses
repeating setUp in different classes
creating a class that hosts the setUp method and have the TestCase subclasses inherit from such class
Is this even possible?
Thanks!! :)
I am unit testing mercurial integration and have a test class which currently creates a repository with a file and a clone of that repository in its setUp method and removes them in its tearDown method.
As you can probably imagine, this gets expensive very quickly, especially if I have to do it for every test individually.
So what I would like to do is create the folders and initialize them for mercurial on loading the class, so each and every unittest in the TestCase class can use these repositories. Then when all the tests are run, I'd like to remove them. The only thing my setUp and tearDown methods then have to take care of is that the two repositories are in the same state between each test.
Basically what I'm looking for is a Python equivalent of JUnit's @BeforeClass and @AfterClass annotations.
I've now done it by subclassing TestSuite, since the standard loader wraps all the test methods in instances of the TestCase in which they're defined and puts them together in a TestSuite. I have the TestSuite call the before() and after() methods of the first TestCase. This of course means that you can't initialize any values on your TestCase object, but you probably want to do that in setUp anyway.
The TestSuite looks like this:
class BeforeAfterSuite(unittest.TestSuite):
    def run(self, result):
        if len(self._tests) < 1:
            return unittest.TestSuite.run(self, result)
        first_test = self._tests[0]
        if "before" in dir(first_test):
            first_test.before()
        result = unittest.TestSuite.run(self, result)
        if "after" in dir(first_test):
            first_test.after()
        return result
For some slightly more fine-grained control I also created a custom TestLoader which makes sure the BeforeAfterSuite is only used to wrap test-method TestCase objects. It looks like this:
class BeforeAfterLoader(unittest.TestLoader):
    def loadTestsFromTestCase(self, testCaseClass):
        self.suiteClass = BeforeAfterSuite
        suite = unittest.TestLoader.loadTestsFromTestCase(self, testCaseClass)
        self.suiteClass = unittest.TestLoader.suiteClass
        return suite
Probably missing here is a try/except block around the before and after calls; a failure in either could otherwise fail all the test cases in the suite.
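One way to wire it up, assuming the tests live in the current module:

if __name__ == '__main__':
    # route all loading through the custom loader so each TestCase's
    # tests end up wrapped in a BeforeAfterSuite
    unittest.main(testLoader=BeforeAfterLoader())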
From the Python unittest documentation:
setUpClass():
A class method called before tests in an individual class run. setUpClass is called with the class as the only argument and must be decorated as a classmethod():

@classmethod
def setUpClass(cls):
    ...
New in version 2.7.
tearDownClass():
A class method called after tests in an individual class have run. tearDownClass is called with the class as the only argument and must be decorated as a classmethod():

@classmethod
def tearDownClass(cls):
    ...
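Applied to the repository scenario above, a minimal sketch (the class name and repository handling are illustrative only, not the asker's code):

import shutil
import tempfile
import unittest

class MercurialTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # expensive one-time work: create the repository and its clone
        cls.repo_dir = tempfile.mkdtemp()
        # ... hg init / hg clone here ...

    @classmethod
    def tearDownClass(cls):
        # one-time cleanup after the last test of the class has run
        shutil.rmtree(cls.repo_dir)

    def setUp(self):
        # cheap per-test work: put both repositories back into the
        # state the tests expect
        pass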
I am testing classes that parse XML and create DB objects (for a Django app).
There is a separate parser/creator class for each different XML type that we read (they all create essentially the same objects). Each parser class has the same superclass, so they all have the same interface.
How do I define one set of tests, provide a list of the parser classes, and have the set of tests run against each parser class? The parser class would define a filename prefix so that it reads the proper input file and the desired result file.
I want all the tests to be run (it shouldn't stop when one breaks), and when one breaks it should report the parser class name.
With nose, you can define test generators: write the test logic once, then write a generator that yields one test invocation for each parser class.
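A sketch of that approach; the parser classes and the assertion are placeholders, not the asker's real parsers:

class FooXMLParser: ...  # stand-ins for the real parser classes
class BarXMLParser: ...

PARSER_CLASSES = [FooXMLParser, BarXMLParser]

def test_all_parsers():
    # nose runs each yielded (callable, args...) tuple as a separate
    # test, so a failure report names the parser class being checked
    for parser_class in PARSER_CLASSES:
        yield check_parser, parser_class

def check_parser(parser_class):
    parser = parser_class()
    # exercise the parser and assert on the resulting objects, e.g.:
    assert parser is not None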
If you are using unittest, which has the advantage of being supported by django and installed on most systems, you can do something like:
class TestBase(unittest.TestCase):
    testing_class = None

    def setUp(self):
        self.testObject = self.testing_class(foo, bar)
and then to run the tests:
for cls in [class1, class2, class3]:
    testclass = type('Test' + cls.__name__, (TestBase,), {'testing_class': cls})
    suite = unittest.TestLoader().loadTestsFromTestCase(testclass)
    unittest.TextTestRunner(verbosity=2).run(suite)
I haven't tested this code but I've done stuff like this before.