Background
We have a large suite of reusable test cases that we run in different environments. For a suite with a test case like ...
@pytest.mark.system_a
@pytest.mark.system_b
# ...
@pytest.mark.system_z
def test_one_thing_for_current_system():
    assert stuff()
... we execute pytest -m system_a on system A, pytest -m system_b on system B, and so on.
Goal
We want to parametrize multiple test cases for one system only, but we neither want to copy-paste those test cases nor generate the parametrization dynamically based on the command-line argument pytest -m.
Our Attempt
Copy And Mark
Instead of copy-pasting the test case, we assign the existing function object to a variable. For ...
class TestBase:
    @pytest.mark.system_a
    def test_reusable_thing(self):
        assert stuff()

class TestReuse:
    test_to_be_altered = pytest.mark.system_z(TestBase.test_reusable_thing)
... pytest --collect-only shows two test cases
TestBase.test_reusable_thing
TestReuse.test_to_be_altered
However, applying pytest.mark to one of the cases also affects the other one. Therefore, both cases end up marked with system_a and system_z.
Using copy.deepcopy(TestBase.test_reusable_thing) and changing __name__ of the copy before adding the mark does not help.
Copy And Parametrize
The above example is only for illustration, as it does not actually alter the test case. For our use case we tried something like ...
class TestBase:
    @pytest.fixture
    def thing(self):
        return 1

    @pytest.mark.system_a
    # ...
    @pytest.mark.system_y
    def test_reusable_thing(self, thing, lots, of, other, fixtures):
        # lots of code
        assert stuff() == thing

copy_of_test = copy.deepcopy(TestBase.test_reusable_thing)
copy_of_test.__name__ = "test_altered"

class TestReuse:
    test_altered = pytest.mark.system_z(
        pytest.mark.parametrize("thing", [1, 2, 3])(copy_of_test)
    )
Because of the aforementioned problem, this parametrizes test_reusable_thing for all systems, while we only wanted to parametrize the copy for system_z.
Question
How can we parametrize test_reusable_thing for system_z ...
without changing the implementation of test_reusable_thing,
and without changing the implementation of fixture thing,
and without copy-pasting the implementation of test_reusable_thing,
and without manually creating a wrapper function def test_altered for which we would have to copy-paste the requested fixtures only to pass them on to TestBase().test_reusable_thing(thing, lots, of, other, fixtures).
Somehow pytest links the copy to the original. If we knew how (e.g. via an attribute like __name__), we could break that link.
You can defer the parametrization to a pytest_generate_tests hook implementation. You can use it to add your own logic for implicitly populating test parameters, e.g.
def pytest_generate_tests(metafunc):
    # test has `my_arg` in its parameters
    if 'my_arg' in metafunc.fixturenames:
        marker_for_system_z = metafunc.definition.get_closest_marker('system_z')
        # test was marked with `@pytest.mark.system_z`
        if marker_for_system_z is not None:
            values_for_system_z = some_data.get('z')
            metafunc.parametrize('my_arg', values_for_system_z)
A demo example that passes the marker name to test_reusable_thing via a system argument:
import pytest

def pytest_generate_tests(metafunc):
    if 'system' in metafunc.fixturenames:
        args = [marker.name for marker in metafunc.definition.iter_markers()]
        metafunc.parametrize('system', args)

class Tests:
    @pytest.fixture
    def thing(self):
        return 1

    @pytest.mark.system_a
    @pytest.mark.system_b
    @pytest.mark.system_c
    @pytest.mark.system_z
    def test_reusable_thing(self, thing, system):
        assert system.startswith('system_')
Running this will yield four tests in total:
test_spam.py::Tests::test_reusable_thing[system_z] PASSED
test_spam.py::Tests::test_reusable_thing[system_c] PASSED
test_spam.py::Tests::test_reusable_thing[system_b] PASSED
test_spam.py::Tests::test_reusable_thing[system_a] PASSED
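Applied to the question's goal, the same hook can parametrize the existing thing argument only when the system_z marker is present. A minimal sketch combining the two snippets above (the values [1, 2, 3] and the marker name come from the question; whether this fully isolates the parametrization to system_z runs depends on how the markers are applied to your tests):

# conftest.py
def pytest_generate_tests(metafunc):
    if 'thing' in metafunc.fixturenames:
        if metafunc.definition.get_closest_marker('system_z') is not None:
            # direct parametrization takes precedence over the `thing` fixture,
            # so other systems still get the plain fixture value
            metafunc.parametrize('thing', [1, 2, 3])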
I want to have a specific setup/tear down fixture for one of the test modules. Obviously, I want it to run the setup code once before all the tests in the module, and once after all tests are done.
So, I've come up with this:
import pytest

@pytest.fixture(scope="module")
def setup_and_teardown():
    print("Start")
    yield
    print("End")

def test_checking():
    print("Checking")
    assert True
This does not work that way. It will only work if I provide setup_and_teardown as an argument to the first test in the module.
Is this the way it's supposed to work? Isn't it supposed to be run automatically if I mark it as a module level fixture?
Module-scoped fixtures behave the same as fixtures of any other scope - they are only used if they are explicitly requested by a test, applied via @pytest.mark.usefixtures, or have autouse=True set:
@pytest.fixture(scope="module", autouse=True)
def setup_and_teardown():
    print("setup")
    yield
    print("teardown")
For module- and session-scoped fixtures that do the setup/teardown as in your example, this is the most commonly used option.
For fixtures that yield an object (for example an expensive resource that shall only be allocated once) that is accessed in the test, this does not make sense, because the fixture has to be requested by the test to be accessible. Also, it may not be needed in all tests.
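A minimal sketch of that second pattern, assuming a hypothetical Connection class standing in for the expensive resource:

import pytest

class Connection:
    # hypothetical expensive resource, e.g. a database connection
    def close(self):
        pass

@pytest.fixture(scope="module")
def connection():
    conn = Connection()  # allocated once per module
    yield conn           # handed to every test that requests it
    conn.close()         # torn down after the last test in the module

def test_uses_connection(connection):
    assert connection is not None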
I have some common code which is used in all my pytest tests. I want to keep the complete code in one file and load it at run time. Is this possible using Python?
Eg:
def teardown_method(self, method):
    print "This is tear down method"

def setup_method(self, method):
    print "This is setup method"
Apart from simply using import, for the example you showed (setup/teardown), you'd usually use a pytest fixture instead, and put them in a conftest.py to share them between tests.
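A minimal sketch of that idea, assuming the print calls from the question are the entire setup/teardown (Python 3 syntax):

# conftest.py -- shared automatically with all test modules in this directory
import pytest

@pytest.fixture(autouse=True)
def setup_and_teardown():
    print("This is setup method")      # runs before every test
    yield
    print("This is tear down method")  # runs after every test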
I want to use Nose for an over-the-wire integration test suite. However, the order of execution of some of these tests is important.
That said, I thought I would toss together a quick plugin to decorate a test with the order I want it executed: https://gist.github.com/Redsz/5736166
class Foo(unittest.TestCase):
    @step(number=1)
    def test_foo(self):
        pass

    @step(number=2)
    def test_boo(self):
        pass
From reviewing the built-in plugins, I thought I could simply override loadTestsFromTestCase and order the tests by the decorated 'step number':
def loadTestsFromTestCase(self, cls):
    """
    Return tests in this test case class. Ordered by the step definitions.
    """
    l = loader.TestLoader()
    tmp = l.loadTestsFromTestCase(cls)
    test_order = []
    for test in tmp._tests:
        order = test.test._testMethodName
        func = getattr(cls, test.test._testMethodName)
        if hasattr(func, 'number'):
            order = getattr(func, 'number')
        test_order.append((test, order))
    test_order.sort(key=lambda tup: tup[1])
    tmp._tests = (t[0] for t in test_order)
    return tmp
This method returns the tests in the order I desire; however, when the tests are executed by nose, they are not run in this order.
Perhaps I need to move this concept of ordering to a different location?
UPDATE: As per the comment I made, the plugin is actually working as expected. I was mistaken to trust the PyCharm test reporter; the tests are running as expected. Rather than removing the question, I figured I would leave it up.
From the documentation:
[...] nose runs functional tests in the order in which they appear in the module file. TestCase-derived tests and other test classes are run in alphabetical order.
So a simple solution might be to rename the tests in your test case:
class Foo(unittest.TestCase):
    def test_01_foo(self):
        pass

    def test_02_boo(self):
        pass
I found a solution for this using the pytest ordering plugin provided here.
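A minimal sketch, assuming the linked plugin is pytest-ordering, which provides a run(order=...) marker (the test names follow the question):

import pytest

@pytest.mark.run(order=1)
def test_foo():
    pass

@pytest.mark.run(order=2)
def test_boo():
    pass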
Try py.test YourModuleName.py -vv on the CLI and the tests will run in the order in which they appear in your module (first test_foo and then test_boo).
I did the same thing and it works fine for me.
Note: You need to install the pytest package and import it.
I'm extending the Python 2.7 unittest framework to do some functional testing. One of the things I would like to do is to stop all the tests from running, both from inside a test and from inside a setUpClass() method. Sometimes if a test fails, the program is so broken that it is no longer of any use to keep testing, so I want to stop the tests from running.
I noticed that a TestResult has a shouldStop attribute, and a stop() method, but I'm not sure how to get access to that inside of a test.
Does anyone have any ideas? Is there a better way?
In case you are interested, here is a simple example of how you could decide yourself to exit a test suite cleanly with py.test:
# content of test_module.py
import pytest

counter = 0

def setup_function(func):
    global counter
    counter += 1
    if counter >= 3:
        pytest.exit("decided to stop the test run")

def test_one():
    pass

def test_two():
    pass

def test_three():
    pass
and if you run this you get:
$ pytest test_module.py
============== test session starts =================
platform linux2 -- Python 2.6.5 -- pytest-1.4.0a1
test path 1: test_module.py
test_module.py ..
!!!! Exit: decided to stop the test run !!!!!!!!!!!!
============= 2 passed in 0.08 seconds =============
You can also put the py.test.exit() call inside a test or into a project-specific plugin.
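For example, from inside a test body it might look like this (a minimal sketch; the login check is a stand-in):

import pytest

def test_login():
    logged_in = False  # stand-in for a real login check
    if not logged_in:
        pytest.exit("login failed, aborting the whole test run")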
Sidenote: py.test natively supports py.test --maxfail=NUM to implement stopping after NUM failures.
Sidenote2: py.test has only limited support for running tests in the traditional unittest.TestCase style.
Here's another answer I came up with after a while:
First, I added a new exception:
class StopTests(Exception):
    """
    Raise this exception in a test to stop the test run.
    """
    pass
then I added a new assert to my child test class:
def assertStopTestsIfFalse(self, statement, reason=''):
    try:
        assert statement
    except AssertionError:
        # escalate the failed assertion so the overridden run() can stop the run
        raise StopTests(reason)
and lastly I overrode the run function to include this right below the testMethod() call:
except StopTests:
    result.addFailure(self, sys.exc_info())
    result.stop()
I like this better since any test now has the ability to stop all the tests, and there is no cpython-specific code.
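A condensed sketch of how that run() override might fit together (the bookkeeping of the real unittest.TestCase.run is heavily simplified; the names follow the snippets above):

import sys
import unittest

class StoppableTestCase(unittest.TestCase):
    def run(self, result=None):
        result = result or self.defaultTestResult()
        result.startTest(self)
        testMethod = getattr(self, self._testMethodName)
        try:
            self.setUp()
            testMethod()
            self.tearDown()
            result.addSuccess(self)
        except StopTests:
            # record the failure and tell the runner to stop everything
            result.addFailure(self, sys.exc_info())
            result.stop()
        except AssertionError:
            result.addFailure(self, sys.exc_info())
        except Exception:
            result.addError(self, sys.exc_info())
        finally:
            result.stopTest(self)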
Currently, you can only stop the tests at the suite level. Once you are in a TestCase, the stop() method for the TestResult is not used when iterating through the tests.
Somewhat related to your question, if you are using python 2.7, you can use the -f/--failfast flag when calling your test with python -m unittest. This will stop the test at the first failure.
See 25.3.2.1. failfast, catch and buffer command line options
You can also consider using Nose to run your tests and use the -x, --stop flag to stop the test early.
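For the unittest failfast option mentioned above, either of these works (a minimal sketch):

# from the command line:
#   python -m unittest -f test_module
# or from code:
import unittest

if __name__ == '__main__':
    unittest.main(failfast=True)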
In the test loop of unittest.TestSuite, there is a break condition at the start:
class TestSuite(BaseTestSuite):
    def run(self, result, debug=False):
        topLevel = False
        if getattr(result, '_testRunEntered', False) is False:
            result._testRunEntered = topLevel = True

        for test in self:
            if result.shouldStop:
                break
So I am using a custom test suite like this:
class CustomTestSuite(unittest.TestSuite):
    """ This variant registers the test result object with all ScriptedTests,
    so that a failed Login test can abort the test suite by setting
    result.shouldStop to True.
    """
    def run(self, result, debug=False):
        for test in self._tests:
            test.result = result
        return super(CustomTestSuite, self).run(result, debug)
with a custom test result class like this:
class CustomTestResult(TextTestResult):
    def __init__(self, stream, descriptions, verbosity):
        super(CustomTestResult, self).__init__(stream, descriptions, verbosity)
        self.verbosity = verbosity
        self.shouldStop = False
and my test classes are like:
class ScriptedTest(unittest.TestCase):
    def __init__(self, environment, test_cfg, module, test):
        super(ScriptedTest, self).__init__()
        self.result = None
Under certain conditions, I then abort the test suite; for example, the test suite starts with a login, and if that fails, I do not have to try the rest:
try:
    test_case.execute_script(test_command_list)
except AssertionError as e:
    if test_case.module == 'session' and test_case.test == 'Login':
        test_case.result.shouldStop = True
        raise TestFatal('Login failed, aborting test.')
    else:
        raise
Then I use the test suite in the following way:
suite = CustomTestSuite()
self.add_tests(suite)
result = unittest.TextTestRunner(
    verbosity=self.environment.verbosity,
    stream=UnitTestLoggerStream(self.logger),
    resultclass=CustomTestResult,
).run(suite)
I'm not sure if there is a better way to do it, but it behaves correctly for my tests.
Though you won't get the usual test reports of the tests run so far, a very easy way to stop the test run from within a TestCase method is simply to raise KeyboardInterrupt inside the method.
You can see how only KeyboardInterrupt is allowed to bubble up inside unittest's test runner by looking at CPython's code here inside testPartExecutor().
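A minimal sketch of that approach (the environment check is a stand-in):

import unittest

class AbortingTests(unittest.TestCase):
    def test_environment(self):
        environment_ok = False  # stand-in for a real check
        if not environment_ok:
            # the runner lets KeyboardInterrupt propagate, aborting the whole run
            raise KeyboardInterrupt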
The OP was about Python 2.7. Skip ahead a decade, and for Python 3.1 and above, the way to skip tests in Python's unittest has had an upgrade, but the documentation could use some clarification (IMHO):
The documentation covers the following:
Skip All tests after first failure: use failfast (only useful if you really don't want to continue any further tests at all, including in other unrelated TestCase classes)
Skip All tests in a TestCase class: decorate class with @unittest.skip(), etc.
Skip a single method within a TestCase: decorate method with @unittest.skip(), etc.
Conditionally skip a method or a class: decorate with @unittest.skipIf() or @unittest.skipUnless() etc.
Conditionally skip a method, but not until something within that method runs: use self.skipTest() inside the method (this will skip that method, and ONLY that method, not subsequent methods)
The documentation does not cover the following (as of this writing):
Skip all tests within a TestCase class if a condition is met inside the setUpClass method: raise unittest.SkipTest("skip all tests in this class"), the solution from this post (there may be another way, but I'm unaware of it); a sketch of this variant appears after the example output below
Skip all subsequent test methods in a TestCase class after a condition is met in one of the first tests, but still continue to test other unrelated TestCase classes. For this, I propose the following solution...
This solution assumes that you encounter the "bad state" in the middle of a test method, and that it could only be noticed in a test method (i.e., it is not something that could have been determined in the setUpClass method, for whatever reason). Indeed, the setUpClass method is the best place to decide whether to proceed when the initial conditions aren't right, but sometimes (as I've encountered) you just don't know until you run some test method. This solution also assumes that test methods run in alphabetical order and that the subsequent test methods you don't want to run after encountering the "bad" state follow alphabetically.
import unittest

class SkipMethodsConditionally(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # this class variable maintains whether or not test methods should continue
        cls.should_continue = True
        # this class variable represents the state of your system; replace with a function of your own
        cls.some_bad_condition = False

    def setUp(self) -> None:
        """setUp runs before every single test method in this class"""
        if not self.__class__.should_continue:
            self.skipTest("no reason to go on.")

    def test_1_fail(self):
        # Do some work here. Let's assume you encounter a "bad state" that could
        # only be noticed in this first test method (i.e., it's not something that
        # can be placed in the setUpClass method, for whatever reason).
        self.__class__.some_bad_condition = True

        if self.__class__.some_bad_condition:
            self.__class__.should_continue = False

        self.assertTrue(False, "this test should fail, rendering the rest of the tests irrelevant")

    def test_2_pass(self):
        self.assertFalse(self.__class__.some_bad_condition,
                         "this test would pass normally if run, but should be skipped, because it would fail")
The above test will yield the following output:
test_1_fail (__main__.SkipMethodsConditionally) ... FAIL
test_2_pass (__main__.SkipMethodsConditionally) ... skipped 'no reason to go on.'
----------------------------------------------------------------------
Ran 2 tests in 0.001s
FAILED (failures=1, skipped=1)
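For completeness, the setUpClass variant mentioned in the list above might look like this (a minimal sketch; condition_is_bad() is a hypothetical check):

import unittest

def condition_is_bad():
    # hypothetical check of the system state
    return True

class SkipWholeClass(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        if condition_is_bad():
            # marks every test in this class as skipped
            raise unittest.SkipTest("skip all tests in this class")

    def test_something(self):
        self.assertTrue(True)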
I looked at the TestCase class and decided to subclass it. The class just overrides run(). I copied the method and, starting at line 318 in the original class, added this:
# this is CPython specific; Jython and IronPython may do this differently
if testMethod.func_code.co_argcount == 2:
    testMethod(result)
else:
    testMethod()
It has some CPython specific code in there to tell if the test method can accept another parameter, but since I'm using CPython everywhere, this isn't an issue for me.
Use:

if condition:
    return 'pass'