I am testing classes that parse XML and create DB objects (for a Django app).
There is a separate parser/creator class for each XML type that we read (they all create essentially the same objects). Each parser class shares the same superclass, so they all have the same interface.
How do I define one set of tests, provide a list of the parser classes, and have the set of tests run once for each parser class? Each parser class defines a filename prefix so the tests read the proper input file and the expected result file.
I want all the tests to be run (it shouldn't stop when one breaks), and when one breaks it should report the parser class name.
With nose, you can use test generators: define the test logic once, then write a generator that yields one test for each parser class.
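For example, here is a minimal sketch of that idea (the parser classes, the file_prefix attribute, and the check_parser/load_expected helpers are placeholders for your own code):

def check_parser(parser_cls):
    # Hypothetical helper: parse the input file named after the class's
    # prefix and compare against the expected result file.
    parser = parser_cls()
    result = parser.parse(parser_cls.file_prefix + '_input.xml')
    assert result == load_expected(parser_cls.file_prefix + '_expected.xml')

def test_parsers():
    # nose runs each yielded (callable, argument) pair as a separate test,
    # so one failing parser doesn't stop the rest, and the report names
    # the argument (i.e. the parser class).
    for parser_cls in [Parser1, Parser2, Parser3]:
        yield check_parser, parser_cls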
If you are using unittest, which has the advantage of being supported by django and installed on most systems, you can do something like:
class TestBase(unittest.TestCase):
    testing_class = None

    def setUp(self):
        self.testObject = self.testing_class(foo, bar)
and then to run the tests:
for cls in [class1, class2, class3]:
    testclass = type('Test' + cls.__name__, (TestBase,), {'testing_class': cls})
    suite = unittest.TestLoader().loadTestsFromTestCase(testclass)
    unittest.TextTestRunner(verbosity=2).run(suite)
I haven't tested this code but I've done stuff like this before.
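If you would rather let the normal test runner (plain unittest discovery or Django's test runner) pick up the generated classes instead of running them by hand, one untested variation is to register them in the module namespace; the class name then shows up in failure reports:

for cls in [class1, class2, class3]:
    testclass = type('Test' + cls.__name__, (TestBase,), {'testing_class': cls})
    # Register the generated class under its own name so test discovery sees it.
    globals()[testclass.__name__] = testclass

if __name__ == '__main__':
    # Note: TestBase itself is also collected; guard its setUp (e.g. skip when
    # testing_class is None) so it doesn't error out.
    unittest.main()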
Related
I was studying unittest by following the examples here.
In the following code, def test_add() is supposed to be wrapped in class testClass(), but out of curiosity I didn't encapsulate it.
# class testClass(unittest.TestCase):
def test_add(self):
    result = cal_fun.add_fuc(5, 10)
    self.assertEqual(result, 15)

if __name__ == '__main__':
    unittest.main()
The result came out in VS Code as:
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
Why was no test run? Why must the def test_add() be wrapped in a class?
Here's an expansion on my initial comment.
The unittest module has a Test Discovery feature/function. When you run it on a directory, it looks for pre-defined structures, definitions, and patterns. Namely:
In order to be compatible with test discovery, all of the test files must be modules or packages (including namespace packages) importable from the top-level directory of the project (this means that their filenames must be valid identifiers).
The basic building blocks of unit testing are test cases — single scenarios that must be set up and checked for correctness. In unittest, test cases are represented by unittest.TestCase instances. To make your own test cases you must write subclasses of TestCase or use FunctionTestCase.
The relevant answer to your question is that test functions must be wrapped in a TestCase: not simply in some "class testClass()" as you said, but in a class that specifically inherits from unittest.TestCase. In addition, assertEqual is only available as part of a TestCase, because it's a method of the TestCase class. If you somehow got that code of yours to run, self.assertEqual would result in an error; you would have to use plain assert statements.
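For example, the smallest change that would make the snippet above run is to put the function back inside a TestCase subclass (the class name TestCalc is arbitrary; only the unittest.TestCase base and the test prefix matter):

import unittest
import cal_fun  # the module under test from the question

class TestCalc(unittest.TestCase):
    def test_add(self):
        result = cal_fun.add_fuc(5, 10)
        self.assertEqual(result, 15)  # assertEqual is inherited from TestCase

if __name__ == '__main__':
    unittest.main()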
To go into more detail, you will have to read the section on unittest's load_tests Protocol and specifically the loadTestsFromModule(module, pattern=None) method:
Return a suite of all test cases contained in the given module. This method searches module for classes derived from TestCase and creates an instance of the class for each test method defined for the class.
Finally, it's not just about wrapping your test functions in a unittest.TestCase class; your tests must also follow some pre-defined patterns:
By default, unittest looks for files matching the pattern test*.py. This is done by the discover(start_dir, pattern='test*.py', top_level_dir=None) method of unittest's TestLoader class, which is "used to create test suites from classes and modules".
By default, unittest looks for methods that start with test (ex. test_function). This is done by the testMethodPrefix attribute of unittest's TestLoader class, which is a "string giving the prefix of method names which will be interpreted as test methods. The default value is 'test'."
If you want some customized behavior, you'll have to override the load_tests protocol.
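As a rough illustration (SomeExtraTests is just a placeholder class name), a module-level load_tests hook has this shape:

import unittest

def load_tests(loader, standard_tests, pattern):
    # unittest calls this hook when loading the module; whatever suite is
    # returned here is what actually gets run.
    suite = unittest.TestSuite()
    suite.addTests(standard_tests)  # keep the normally discovered tests
    suite.addTests(loader.loadTestsFromTestCase(SomeExtraTests))  # add custom ones
    return suite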
I'm currently trying to rework how we do unit testing of our code. For this we decided to go with an intermediate format of serialized test descriptions in XML. The general structure consists of a base class (derived from unittest.TestCase) and classes per configuration (previously a bunch of Python files in their own folders) that inherit from this base class.
We've changed the way we call and discover the unit tests, but I'm having a problem with unittest's setUpClass method.
Previously, this method was overridden in the base class and needed to be called with a number of variables distinct to each test case, which we did from the derived classes' setUpClass methods via a call like super(...).setUpClass(var1, var2, ...).
Now that we're trying to get rid of the Python files that hold this configuration in a rather verbose and hard-to-track way, I've run into a problem.
Example of an xml config file:
<TestConfig>
  <Cases>
    <TestCase>
      <Name>Run Time</Name>
      <GUID>1234-5678</GUID>
      <Comparison>LessThan</Comparison>
      <Value>22</Value>
    </TestCase>
    <TestCase>
      <Name>Memory Usage</Name>
      <GUID>5678-1234</GUID>
      <Comparison>Equal</Comparison>
      <Value>150</Value>
    </TestCase>
  </Cases>
</TestConfig>
My current implementation of the Base Class:
class MyUnitTestBase(unittest.TestCase):
    @classmethod
    def setUpClass(cls, var1, var2, ...):
        # some internal setup stuff using var1 etc.
Now I'm searching for all the config files in my folder structure and using that list to try to set up a test suite of all the tests I want to run.
def parseTestSuite(cfg_files):
    all_suites = unittest.TestSuite()
    for cfg_file in cfg_files:
        suite = unittest.TestSuite()

        class MyUnitTestDerived(MyUnitTestBase):
            pass

        try:
            tree = eTree.parse(cfg_file)
            root = tree.getroot()
            for section in root[0]:
                for item in section:
                    test_item = TestItem()          # parsing helper
                    test_item.parse_from_xml(item)  # my parsing of the settings
                    test_name = 'test_%s' % test_item.name.replace(" ", "_")
                    setattr(MyUnitTestDerived, test_name,
                            test_generator(test_item.comparison, test_item.exp_value, test_item.comp_value))
            suite.addTests(unittest.TestLoader().loadTestsFromTestCase(MyUnitTestDerived))
        except:
            raise
        all_suites.addTests(suite)
    return all_suites
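For completeness, a stripped-down version of how the combined suite then gets run (the glob pattern is just a placeholder for our real config discovery):

import glob

if __name__ == '__main__':
    cfg_files = glob.glob('**/*_testconfig.xml', recursive=True)  # placeholder pattern
    unittest.TextTestRunner(verbosity=2).run(parseTestSuite(cfg_files))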
and to help with the test method generation:
def test_generator(comparison, exp_value, comp_value):
    if comparison == "LessThan":
        def test(self):
            self.assertLessEqual(exp_value, comp_value)
    elif comparison == "Equal":
        def test(self):
            self.assertEqual(exp_value, comp_value)
    # ... further comparison types elided ...
    return test
The detection seems to work, as do the parsing and the filling of the suite with the correct tests.
What I'm struggling with, however, is that I need to call the setUpClass method with variables, and here I cannot. I haven't figured out where I would add this, or how I could make it work again with the old base class implementation.
One thing I've tried (and I'm sure I'm going to python-hell for this) is replacing the pass with:
@classmethod
def setUpClass(cls, var1=var1_from_conf, var2=var2_from_conf, ...):
    return super(MyUnitTestDerived, cls).setUpClass(var1=var1_from_conf, var2=var2_from_conf, ...)
in order to force the default setup to use the variables from the config that are already known at that point in time, but this setUpClass never even seems to be called (I tried with a debug print).
I'd appreciate any sort of help you can give me. I know it is quite a convoluted problem, but I've tried to break it down into as simple a case as I could.
I am trying to create test classes that aren't unittest based.
This method, under this class:
class ClassUnderTestTests:
    def test_something(self):
cannot be detected and run when you call py.test from the command line or when you run the test in PyCharm (it's in its own module).
This same method outside of a class:
def test_something(self):
can be detected and run.
I'd like to group my tests under classes and unless I'm missing something I'm following the py.test spec to do that.
Environment: Windows 7, PyCharm with py.test set as the test runner.
By convention, it searches for:
Test-prefixed test classes (without an __init__ method)
eg.
# content of test_class.py
class TestClass:
    def test_one(self):
        x = "this"
        assert 'h' in x

    def test_two(self):
        x = "hello"
        assert hasattr(x, 'check')

    # this works too
    @staticmethod
    def test_three():
        pass

    # this doesn't work
    # @classmethod
    # def test_three(cls):
    #     pass
See the docs:
Group multiple tests in a class
Conventions for Python test discovery
The accepted answer is not incorrect, but it is incomplete. Also, the link it contains to the documentation no longer works, nor does the updated link in a comment on that answer.
The current documentation can now be found here. The relevant bits of that doc are:
...
In those directories, search for test_*.py or *_test.py files, imported by their test package name.
From those files, collect test items:
...
test prefixed test functions or methods inside Test prefixed test classes (without an __init__ method)
The key bit that is missing in the accepted answer is that not only must the class name start with Test and not have an __init__ method, but also, the name of the file containing the class MUST be of one of the forms test_*.py or *_test.py.
Where I got tripped up here, and I assume many others will too, is that I generally name a Python source file containing only a class definition to directly mirror the name of the class. So if my class is named MyClass, I normally put its code in a file named MyClass.py. I then put test code for my class in a file named TestMyClass.py. This won't work with PyTest. To let PyTest do its thing with my test class, I need to name the file test_MyClass.py or MyClass_test.py. I chose the latter form, so now I generally add a '_test' suffix to the file names I'd otherwise choose for files that PyTest needs to parse looking for tests.
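To make that concrete, here is an illustrative layout (MyClass and the test names are made up):

# File: MyClass_test.py  (matches the *_test.py pattern PyTest discovers)
from MyClass import MyClass  # assumes the class itself lives in MyClass.py

class TestMyClass:           # Test-prefixed class, no __init__ method
    def test_construction(self):
        assert MyClass() is not None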
I have a JSON parser library (ijson) with a test suite using unittest. The library actually has several parsing implementations, "backends", in the form of modules with an identical API. I want to automatically run the test suite several times, once for each available backend. My goals are:
I want to keep all tests in one place as they are backend agnostic.
I want the name of the currently used backend to be visible in some fashion when a test fails.
I want to be able to run a single TestCase or a single test, as unittest normally allows.
So what's the best way to organize the test suite for this? Write a custom test runner? Let TestCases load backends themselves? Imperatively generate separate TestCase classes for each backend?
By the way, I'm not married to the unittest library in particular and I'm open to trying another one if it solves the problem. But unittest is preferable since I already have the test code in place.
One common way is to group all your tests together in one class with an abstract method that creates an instance of the backend (if you need to create multiple instances in a test), or expects setUp to create an instance of the backend.
You can then create subclasses that create the different backends as needed.
If you are using a test loader that automatically detects TestCase subclasses, you'll probably need to make one change: don't make the common base class a subclass of TestCase; instead, treat it as a mixin and make the backend classes subclass both TestCase and the mixin.
For example:
class BackendTests:
    def make_backend(self):
        raise NotImplementedError

    def test_one(self):
        backend = self.make_backend()
        # perform a test on the backend

class FooBackendTests(unittest.TestCase, BackendTests):
    def make_backend(self):
        # Create an instance of the "foo" backend:
        return foo_backend

class BarBackendTests(unittest.TestCase, BackendTests):
    def make_backend(self):
        # Create an instance of the "bar" backend:
        return bar_backend
When building a test suite from the above, you will have independent test cases FooBackendTests.test_one and BarBackendTests.test_one that test the same feature on the two backends.
I took James Henstridge's idea of a mixin class holding all the tests, but the actual test cases are then generated imperatively, as backends may fail on import, in which case we don't want to test them:
import unittest
from importlib import import_module

class BackendTests(object):
    def test_something(self):
        # using self.backend
        ...

# Generating real TestCase classes for each importable backend
for name in ['backend1', 'backend2', 'backend3']:
    try:
        classname = '%sTest' % name.capitalize()
        locals()[classname] = type(
            classname,
            (unittest.TestCase, BackendTests),
            {'backend': import_module('backends.%s' % name)},
        )
    except ImportError:
        pass
I am unit testing mercurial integration and have a test class which currently creates a repository with a file and a clone of that repository in its setUp method and removes them in its tearDown method.
As you can probably imagine, this gets quite expensive very quickly, especially if I have to do this for every test individually.
So what I would like to do is create the folders and initialize them for mercurial on loading the class, so each and every unittest in the TestCase class can use these repositories. Then when all the tests are run, I'd like to remove them. The only thing my setUp and tearDown methods then have to take care of is that the two repositories are in the same state between each test.
Basically what I'm looking for is a Python equivalent of JUnit's @BeforeClass and @AfterClass annotations.
I've now done it by subclassing the TestSuite class, since the standard loader wraps each test method in an instance of the TestCase in which it's defined and puts them together in a TestSuite. I have the TestSuite call the before() and after() methods of the first TestCase. This of course means that you can't initialize any values on your TestCase object, but you probably want to do that in your setUp anyway.
The TestSuite looks like this:
class BeforeAfterSuite(unittest.TestSuite):
    def run(self, result):
        if len(self._tests) < 1:
            return unittest.TestSuite.run(self, result)
        first_test = self._tests[0]
        if "before" in dir(first_test):
            first_test.before()
        result = unittest.TestSuite.run(self, result)
        if "after" in dir(first_test):
            first_test.after()
        return result
For some slightly more fine-grained control I also created a custom TestLoader, which makes sure the BeforeAfterSuite is only used to wrap the test-method TestCase objects. It looks like this:
class BeforeAfterLoader(unittest.TestLoader):
    def loadTestsFromTestCase(self, testCaseClass):
        self.suiteClass = BeforeAfterSuite
        suite = unittest.TestLoader.loadTestsFromTestCase(self, testCaseClass)
        self.suiteClass = unittest.TestLoader.suiteClass
        return suite
Probably missing here is a try/except block around before() and after(), which could fail all the test cases in the suite, or something along those lines.
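A possible variant (an untested sketch) that at least guarantees after() always runs would wrap the body in try/finally:

class BeforeAfterSuite(unittest.TestSuite):
    def run(self, result):
        if len(self._tests) < 1:
            return unittest.TestSuite.run(self, result)
        first_test = self._tests[0]
        try:
            if "before" in dir(first_test):
                first_test.before()
            result = unittest.TestSuite.run(self, result)
        finally:
            # always attempt cleanup, even if before() or the tests blew up
            if "after" in dir(first_test):
                first_test.after()
        return result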
From the Python unittest documentation:
setUpClass():
A class method called before tests in an individual class run. setUpClass is called with the class as the only argument and must be decorated as a classmethod():
@classmethod
def setUpClass(cls):
    ...
New in version 2.7.
tearDownClass():
A class method called after tests in an individual class have run. tearDownClass is called with the class as the only argument and must be decorated as a classmethod():
@classmethod
def tearDownClass(cls):
    ...