I want to use Nose for an over-the-wire integration test suite. However, the order of execution of some of these tests is important.
To that end, I tossed together a quick plugin that lets me decorate a test with the order I want it executed in: https://gist.github.com/Redsz/5736166
class Foo(unittest.TestCase):
    @step(number=1)
    def test_foo(self):
        pass

    @step(number=2)
    def test_boo(self):
        pass
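For reference, a minimal sketch of what such a step decorator might look like (the actual implementation lives in the linked gist):

def step(number):
    """Attach an execution-order number to a decorated test method."""
    def decorator(func):
        func.number = number  # the plugin's loader reads this attribute back
        return func
    return decorator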
From reviewing the built-in plugins, I thought I could simply override loadTestsFromTestCase and order the tests by the decorated 'step number':
def loadTestsFromTestCase(self, cls):
    """
    Return tests in this test case class, ordered by the step definitions.
    """
    l = loader.TestLoader()
    tmp = l.loadTestsFromTestCase(cls)
    test_order = []
    for test in tmp._tests:
        # fall back to the method name so undecorated tests still sort stably
        order = test.test._testMethodName
        func = getattr(cls, test.test._testMethodName)
        if hasattr(func, 'number'):
            order = getattr(func, 'number')
        test_order.append((test, order))
    test_order.sort(key=lambda tup: tup[1])
    tmp._tests = (t[0] for t in test_order)
    return tmp
This method returns the tests in the order I want, yet when nose executes them they do not run in that order.
Perhaps I need to move this ordering logic somewhere else?
UPDATE: As per the comment I made, the plugin is actually working as expected. I was mistaken to trust the PyCharm test reporter; the tests are running as expected. Rather than removing the question, I figured I would leave it up.
From the documentation:
[...] nose runs functional tests in the order in which they appear in the module file. TestCase-derived tests and other test classes are run in alphabetical order.
So a simple solution might be to rename the tests in your test case:
class Foo(unittest.TestCase):
    def test_01_foo(self):
        pass

    def test_02_boo(self):
        pass
I found a solution for this using the pytest ordering plugin provided here.
Try py.test YourModuleName.py -vv on the command line and the tests will run in the order in which they appear in your module (first test_foo and then test_boo).
I did the same thing and it works fine for me.
Note: you need to install the PyTest package and import it.
Background
We have a big suite of reusable test cases which we run in different environments. For our suite with test case ...
@pytest.mark.system_a
@pytest.mark.system_b
# ...
@pytest.mark.system_z
def test_one_thing_for_current_system():
    assert stuff()
... we execute pytest -m system_a on system A, pytest -m system_b on system B, and so on.
Goal
We want to parametrize multiple test cases for one system only, but we neither want to copy-paste those test cases nor generate the parametrization dynamically based on the command-line argument to pytest -m.
Our Attempt
Copy And Mark
Instead of copy-pasting the test case, we assign the existing function object to a variable. For ...
class TestBase:
    @pytest.mark.system_a
    def test_reusable_thing(self):
        assert stuff()

class TestReuse:
    test_to_be_altered = pytest.mark.system_z(TestBase.test_reusable_thing)
... pytest --collect-only shows two test cases
TestBase.test_reusable_thing
TestReuse.test_to_be_altered
However, pytest.mark on one of the cases also affects the other one. Therefore, both cases are marked as system_a and system_z.
Using copy.deepcopy(TestBase.test_reusable_thing) and changing the copy's __name__ before adding the mark does not help.
Copy And Parametrize
The above example is only for illustration, as it does not actually alter the test case. For our use case we tried something like ...
class TestBase:
    @pytest.fixture
    def thing(self):
        return 1

    @pytest.mark.system_a
    # ...
    @pytest.mark.system_y
    def test_reusable_thing(self, thing, lots, of, other, fixtures):
        # lots of code
        assert stuff() == thing

copy_of_test = copy.deepcopy(TestBase.test_reusable_thing)
copy_of_test.__name__ = "test_altered"

class TestReuse:
    test_altered = pytest.mark.system_z(
        pytest.mark.parametrize("thing", [1, 2, 3])(copy_of_test)
    )
Because of the aforementioned problem, this parametrizes test_reusable_thing for all systems, while we only wanted to parametrize the copy for system_z.
Question
How can we parametrize test_reusable_thing for system_z ...
without changing the implementation of test_reusable_thing,
and without changing the implementation of fixture thing,
and without copy-pasting the implementation of test_reusable_thing,
and without manually creating a wrapper function def test_altered for which we would have to copy-paste the requested fixtures only to pass them on to TestBase().test_reusable_thing(thing, lots, of, other, fixtures).
Somehow pytest has to link the copy to the original. If we know how (e.g. based on a variable like __name__) we could break the link.
You can defer the parametrization to the pytest_generate_tests hookimpl. You can use it to add your own logic for implicitly populating test parameters, e.g.
def pytest_generate_tests(metafunc):
    # test has `my_arg` in its parameters
    if 'my_arg' in metafunc.fixturenames:
        marker_for_system_z = metafunc.definition.get_closest_marker('system_z')
        # test was marked with `@pytest.mark.system_z`
        if marker_for_system_z is not None:
            values_for_system_z = some_data.get('z')
            metafunc.parametrize('my_arg', values_for_system_z)
A demo example to pass the marker name to test_reusable_thing via a system arg:
import pytest

def pytest_generate_tests(metafunc):
    if 'system' in metafunc.fixturenames:
        args = [marker.name for marker in metafunc.definition.iter_markers()]
        metafunc.parametrize('system', args)

class Tests:
    @pytest.fixture
    def thing(self):
        return 1

    @pytest.mark.system_a
    @pytest.mark.system_b
    @pytest.mark.system_c
    @pytest.mark.system_z
    def test_reusable_thing(self, thing, system):
        assert system.startswith('system_')
Running this will yield four tests in total:
test_spam.py::Tests::test_reusable_thing[system_z] PASSED
test_spam.py::Tests::test_reusable_thing[system_c] PASSED
test_spam.py::Tests::test_reusable_thing[system_b] PASSED
test_spam.py::Tests::test_reusable_thing[system_a] PASSED
In a scenario of Nose tests in Python:
class test_class(unittest.TestCase):
    def test_a(self):
        print 'test1'

    def test_b(self):
        print 'test2'

    def test_c(self):
        print 'test3'
When I execute nosetests, the sequence of execution keeps changing.
Could someone please advise on how to provide/define a sequence for this execution?
Thanks so much in advance.
-Revant
That is by design. Nose makes no guarantees about the order the tests will run in, and the order may change from run to run. You can provide a setup method that will run before all tests in that class, and a teardown method that will run after all tests in that class.
The idea is to test each thing in a system in isolation, and not have results from previous tests affect other tests. If you need the tests to be ordered, then it's a sign that you need to refactor your tests.
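For example, a minimal sketch of class-level setup and teardown in the unittest style that nose supports (create_session is a hypothetical helper):

import unittest

class TestWorkflow(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # runs once before all tests in this class
        cls.session = create_session()  # hypothetical helper

    @classmethod
    def tearDownClass(cls):
        # runs once after all tests in this class
        cls.session.close()

    def test_independent_thing(self):
        self.assertIsNotNone(self.session)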
I'm writing a test system using py.test, and I'm looking for a way to make particular tests execute depending on the results of other tests.
For example, we have standard test class:
import pytest

class Test_Smoke:
    def test_A(self):
        pass

    def test_B(self):
        pass

    def test_C(self):
        pass
test_C() should be executed only if test_A() and test_B() passed; otherwise it should be skipped.
I need a way to do something like this at the test or test-class level (e.g. Test_Perfo executes only if all of Test_Smoke passed), and I'm unable to find a solution using standard methods (like @pytest.mark.skipif).
Is it possible at all with pytest?
You might want to have a look at pytest-dependency. It is a plugin that allows you to skip some tests if some other test had failed.
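A minimal sketch of how that might look with pytest-dependency installed, applied to the example above:

import pytest

@pytest.mark.dependency()
def test_A():
    pass

@pytest.mark.dependency()
def test_B():
    pass

@pytest.mark.dependency(depends=["test_A", "test_B"])
def test_C():
    # skipped automatically if test_A or test_B failed
    pass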
Consider also pytest-depends:
def test_build_exists():
    assert os.path.exists(BUILD_PATH)

@pytest.mark.depends(on=['test_build_exists'])
def test_build_version():
    # this will be skipped if `test_build_exists` fails
    ...
How can I make it so unit tests in Python (using unittest) are run in the order in which they are specified in the file?
You can change the default sorting behavior by setting a custom comparison function. In unittest.py you can find the class variable unittest.TestLoader.sortTestMethodsUsing, which is set to the built-in function cmp by default (on Python 2).
For example, you can reverse the execution order of your tests by doing this:
import unittest
unittest.TestLoader.sortTestMethodsUsing = lambda _, x, y: cmp(y, x)
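Note that cmp was removed in Python 3. There the loader wraps sortTestMethodsUsing with functools.cmp_to_key, so a hand-rolled comparison function works; a sketch of the same reversal:

import unittest

# (x < y) - (x > y) returns -1/0/1 like cmp(y, x) did, so this
# reverses the alphabetical order on Python 3
unittest.TestLoader.sortTestMethodsUsing = lambda _, x, y: (x < y) - (x > y)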
Clever Naming.
class Test01_Run_Me_First(unittest.TestCase):
    def test010_do_this(self):
        self.assertTrue(True)

    def test020_do_that(self):
        ...  # etc.
Is one way to force a specific order.
As said above, tests in test cases should normally be able to run in any (i.e. random) order.
However, if you do want to order the tests in the test case, apparently it is not trivial.
Tests (method names) are retrieved from test cases using dir(MyTest), which returns a sorted list of members. You can use a clever (?) hack to order methods by their line numbers. This will work for one test case:
if __name__ == "__main__":
    loader = unittest.TestLoader()
    ln = lambda f: getattr(MyTestCase, f).im_func.func_code.co_firstlineno
    lncmp = lambda a, b: cmp(ln(a), ln(b))
    loader.sortTestMethodsUsing = lncmp
    unittest.main(testLoader=loader, verbosity=2)
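That snippet is Python 2 specific (im_func, func_code, cmp). On Python 3 the same hack might look like this:

if __name__ == "__main__":
    loader = unittest.TestLoader()
    # methods are plain functions in Python 3, and func_code became __code__
    ln = lambda f: getattr(MyTestCase, f).__code__.co_firstlineno
    loader.sortTestMethodsUsing = lambda a, b: ln(a) - ln(b)
    unittest.main(testLoader=loader, verbosity=2)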
There are also test runners which do that by themselves – I think py.test does it.
Use the proboscis library, as I mentioned already (please see the short description there).
The default order is alphabetical. You can prefix each test name with test_<int> to specify the order of execution.
Alternatively, you can set unittest.TestLoader.sortTestMethodsUsing = None to eliminate the sorting entirely.
Check out the unittest documentation for more details.
I found a solution for this using the pytest ordering plugin provided here.
Try py.test YourModuleName.py -vv on the command line and the tests will run in the order in which they appear in your module.
I did the same thing and it works fine for me.
Note: you need to install the PyTest package and import it.
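Assuming the plugin in question is pytest-ordering, a minimal sketch of the explicit order markers it provides:

import pytest

@pytest.mark.run(order=2)
def test_runs_second():
    pass

@pytest.mark.run(order=1)
def test_runs_first():
    # runs first even though it appears second in the file
    pass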
A hacky way (run this file in PyCharm or another unit test runner):
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
import unittest

def make_suite():
    class Test(unittest.TestCase):
        def test_32(self):
            print "32"

        def test_23(self):
            print "23"

    suite = unittest.TestSuite()
    suite.addTest(Test('test_32'))
    suite.addTest(Test('test_23'))
    return suite

suite = make_suite()

class T(unittest.TestCase):
    counter = 0

    def __call__(self, *args, **kwargs):
        # delegate to the suite's tests in insertion order
        res = suite._tests[T.counter](*args, **kwargs)
        T.counter += 1
        return res

# expose each suite test as a member of T so the runner discovers it
for t in suite._tests:
    name = "{}${}".format(t._testMethodName, t.__class__.__name__)
    setattr(T, name, t)
I'm extending the Python 2.7 unittest framework to do some functional testing. One of the things I would like to do is stop all the tests from running from inside a test, and from inside a setUpClass() method. Sometimes if a test fails, the program is so broken it is no longer of any use to keep testing, so I want to stop the tests from running.
I noticed that a TestResult has a shouldStop attribute, and a stop() method, but I'm not sure how to get access to that inside of a test.
Does anyone have any ideas? Is there a better way?
In case you are interested, here is a simple example of how you could decide yourself to exit a test suite cleanly with py.test:
# content of test_module.py
import pytest

counter = 0

def setup_function(func):
    global counter
    counter += 1
    if counter >= 3:
        pytest.exit("decided to stop the test run")

def test_one():
    pass

def test_two():
    pass

def test_three():
    pass
and if you run this you get:
$ pytest test_module.py
============== test session starts =================
platform linux2 -- Python 2.6.5 -- pytest-1.4.0a1
test path 1: test_module.py
test_module.py ..
!!!! Exit: decided to stop the test run !!!!!!!!!!!!
============= 2 passed in 0.08 seconds =============
You can also put the py.test.exit() call inside a test or into a project-specific plugin.
Sidenote: py.test natively supports py.test --maxfail=NUM to implement stopping after NUM failures.
Sidenote2: py.test has only limited support for running tests in the traditional unittest.TestCase style.
Here's another answer I came up with after a while:
First, I added a new exception:
class StopTests(Exception):
    """
    Raise this exception in a test to stop the test run.
    """
    pass
then I added a new assert to my child test class:
def assertStopTestsIfFalse(self, statement, reason=''):
    try:
        assert statement
    except AssertionError:
        # signal the overridden run() to record the failure and halt the run
        raise StopTests(reason)
and last I overrode the run function to include this right below the testMethod() call:
except StopTests:
    result.addFailure(self, sys.exc_info())
    result.stop()
I like this better since any test now has the ability to stop all the tests, and there is no cpython-specific code.
Currently, you can only stop the tests at the suite level. Once you are in a TestCase, the stop() method for the TestResult is not used when iterating through the tests.
Somewhat related to your question, if you are using python 2.7, you can use the -f/--failfast flag when calling your test with python -m unittest. This will stop the test at the first failure.
See 25.3.2.1. failfast, catch and buffer command line options
You can also consider using Nose to run your tests and use the -x, --stop flag to stop the test early.
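If you drive the tests from code rather than the command line, unittest.main() accepts the same option programmatically; a minimal sketch:

import unittest

if __name__ == "__main__":
    # equivalent to python -m unittest -f: abort the run on the first failure
    unittest.main(failfast=True)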
In the test loop of unittest.TestSuite, there is a break condition at the start:
class TestSuite(BaseTestSuite):
    def run(self, result, debug=False):
        topLevel = False
        if getattr(result, '_testRunEntered', False) is False:
            result._testRunEntered = topLevel = True

        for test in self:
            if result.shouldStop:
                break
So I am using a custom test suite like this:
class CustomTestSuite(unittest.TestSuite):
    """ This variant registers the test result object with all ScriptedTests,
    so that a failed Login test can abort the test suite by setting
    result.shouldStop to True.
    """
    def run(self, result, debug=False):
        for test in self._tests:
            test.result = result
        return super(CustomTestSuite, self).run(result, debug)
with a custom test result class like this:
class CustomTestResult(TextTestResult):
    def __init__(self, stream, descriptions, verbosity):
        super(CustomTestResult, self).__init__(stream, descriptions, verbosity)
        self.verbosity = verbosity
        self.shouldStop = False
and my test classes are like:
class ScriptedTest(unittest.TestCase):
    def __init__(self, environment, test_cfg, module, test):
        super(ScriptedTest, self).__init__()
        self.result = None
Under certain conditions, I then abort the test suite; for example, the test suite starts with a login, and if that fails, I do not have to try the rest:
try:
    test_case.execute_script(test_command_list)
except AssertionError as e:
    if test_case.module == 'session' and test_case.test == 'Login':
        test_case.result.shouldStop = True
        raise TestFatal('Login failed, aborting test.')
    else:
        raise  # re-raise the original AssertionError
Then I use the test suite in the following way:
suite = CustomTestSuite()
self.add_tests(suite)
result = unittest.TextTestRunner(verbosity=self.environment.verbosity,
                                 stream=UnitTestLoggerStream(self.logger),
                                 resultclass=CustomTestResult).run(suite)
I'm not sure if there is a better way to do it, but it behaves correctly for my tests.
Though you won't get the usual test reports of the tests run so far, a very easy way to stop the test run from within a TestCase method is simply to raise KeyboardInterrupt inside the method.
You can see how only KeyboardInterrupt is allowed to bubble up inside unittest's test runner by looking at CPython's code here inside testPartExecutor().
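A minimal sketch, where something_is_broken is a hypothetical check of your own:

import unittest

class AbortingTest(unittest.TestCase):
    def test_critical(self):
        if something_is_broken():  # hypothetical check
            # KeyboardInterrupt is the one exception the unittest runner
            # lets bubble up, so raising it aborts the whole run
            raise KeyboardInterrupt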
The OP was about Python 2.7. Skip ahead a decade: for Python 3.1 and above, the way to skip tests in Python unittest has had an upgrade, but the documentation could use some clarification (IMHO).
The documentation covers the following:
Skip all tests after the first failure: use failfast (only useful if you really don't want to continue any further tests at all, including in other unrelated TestCase classes)
Skip all tests in a TestCase class: decorate the class with @unittest.skip(), etc.
Skip a single method within a TestCase: decorate the method with @unittest.skip(), etc.
Conditionally skip a method or a class: decorate with @unittest.skipIf() or @unittest.skipUnless(), etc.
Conditionally skip a method, but not until something within that method runs: use self.skipTest() inside the method (this will skip that method, and ONLY that method, not subsequent methods)
The documentation does not cover the following (as of this writing):
Skip all tests within a TestCase class if a condition is met inside the setUpClass method: the solution from this post is raise unittest.SkipTest("skip all tests in this class"); a short sketch appears after the example below (there may be another way, but I'm unaware of one)
Skip all subsequent test methods in a TestCase class after a condition is met in one of the first tests, but still continue to test other unrelated TestCase classes. For this, I propose the following solution...
This solution assumes that you encounter the "bad state" in the middle of a test method, and that it could only be noticed in a test method (i.e., it is not something that could have been determined in the setUpClass method, for whatever reason). Indeed the setUpClass method is the best place to decide whether to proceed when the initial conditions aren't right, but sometimes (as I've encountered) you just don't know until you run some test method. This solution also assumes that test methods run in alphabetical order and that the subsequent test methods you don't want to run after hitting the "bad" state follow alphabetically.
import unittest

class SkipMethodsConditionally(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # this class variable maintains whether or not test methods should continue
        cls.should_continue = True
        # this class variable represents the state of your system;
        # replace it with a function of your own
        cls.some_bad_condition = False

    def setUp(self) -> None:
        """setUp runs before every single test method in this class"""
        if not self.__class__.should_continue:
            self.skipTest("no reason to go on.")

    def test_1_fail(self):
        # Do some work here. Let's assume you encounter a "bad state" that could
        # only be noticed in this first test method (i.e., it's not something
        # that can be placed in the setUpClass method, for whatever reason).
        self.__class__.some_bad_condition = True
        if self.__class__.some_bad_condition:
            self.__class__.should_continue = False
        self.assertTrue(False, "this test should fail, rendering the rest of the tests irrelevant")

    def test_2_pass(self):
        self.assertFalse(self.__class__.some_bad_condition,
                         "this test would pass normally if run, but should be skipped, because it would fail")
The above test will yield the following output:
test_1_fail (__main__.SkipMethodsConditionally) ... FAIL
test_2_pass (__main__.SkipMethodsConditionally) ... skipped 'no reason to go on.'
----------------------------------------------------------------------
Ran 2 tests in 0.001s
FAILED (failures=1, skipped=1)
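For completeness, the setUpClass variant from the bullet list above might look like this (environment_is_ready is a hypothetical check):

import unittest

class SkipWholeClass(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        if not environment_is_ready():  # hypothetical check
            # skips every test method in this class
            raise unittest.SkipTest("skip all tests in this class")

    def test_something(self):
        pass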
I looked at the TestCase class and decided to subclass it. The class just overrides run(). I copied the method and, starting at line 318 in the original class, added this:
# this is CPython specific; Jython and IronPython may do this differently
if testMethod.func_code.co_argcount == 2:
    testMethod(result)
else:
    testMethod()
It has some CPython specific code in there to tell if the test method can accept another parameter, but since I'm using CPython everywhere, this isn't an issue for me.
Use:
if condition:
    return 'pass'