Running a test suite (an arbitrary collection of tests) with py.test

I'm using py.test to build a functional test framework, so I need to be able to specify the exact tests to be run. I understand the beauty of dynamic test collection, but I want to be able to run my test environment health checks first and then run my regression tests; that categorization does not preclude tests in these sets from being used for other purposes.
The test suites will be tied to Jenkins build projects. I'm using OS X, Python 2.7.3, and py.test 2.3.4.
So I have a test case like the following:
# sample_unittest.py
import unittest, pytest

class TestClass(unittest.TestCase):
    def setUp(self):
        self.testdata = ['apple', 'pear', 'berry']

    def test_first(self):
        assert 'apple' in self.testdata

    def test_second(self):
        assert 'pear' in self.testdata

    def tearDown(self):
        self.testdata = []

def suite():
    suite = unittest.TestSuite()
    suite.addTest(TestClass('test_first'))
    return suite

if __name__ == '__main__':
    unittest.TextTestRunner(verbosity=2).run(suite())
And I have a test suite like this:
# suite_regression.py
import unittest, pytest
import functionaltests.sample_unittest as sample_unittest

# set up the imported tests
suite_sample_unittest = sample_unittest.suite()

# create this test suite
suite = unittest.TestSuite()
suite.addTest(suite_sample_unittest)

# run the suite
unittest.TextTestRunner(verbosity=2).run(suite)
If I run the following from the command line against the suite, test_first runs (but I don't get the additional information that py.test would provide):
python functionaltests/suite_regression.py -v
If I run the following against the suite, 0 tests are collected:
py.test functionaltests/suite_regression.py
If I run the following against the testcase, test_first and test_second run:
py.test functionaltests/sample_unittest.py -v
I don't see how running py.test with keyword expressions will help organize tests into suites, and placing test cases into a folder structure and running py.test with folder options won't let me organize tests by functional area.
So my questions:
Is there a py.test mechanism for specifying arbitrary groupings of tests in a re-usable format?
Is there a way to use unittest.TestSuite from py.test?
EDIT:
So I tried out py.test markers, which let me flag test functions and test methods with an arbitrary label and then filter on that label at run time.
# conftest.py
import pytest

# reusable custom markers (names match the markers applied in the test case below)
healthcheck = pytest.mark.healthcheck
regression = pytest.mark.regression
And my updated test case:
# sample_unittest.py
import unittest, pytest

class TestClass(unittest.TestCase):
    def setUp(self):
        self.testdata = ['apple', 'pear', 'berry']

    @pytest.mark.healthcheck
    @pytest.mark.regression
    def test_first(self):
        assert 'apple' in self.testdata

    @pytest.mark.regression
    def test_second(self):
        assert 'pear' in self.testdata

    def tearDown(self):
        self.testdata = []

def suite():
    suite = unittest.TestSuite()
    suite.addTest(TestClass('test_first'))
    return suite

if __name__ == '__main__':
    unittest.TextTestRunner(verbosity=2).run(suite())
So running the following command collects and runs test_first:
py.test functionaltests/sample_unittest.py -v -m healthcheck
And this collects and runs test_first and test_second:
py.test functionaltests/sample_unittest.py -v -m regression
So back to my questions: markers are a partial solution, but I still don't have a way to control the execution of the collected, marked tests.

No need to use markers in this case: setting @pytest.mark.incremental on your py.test test class will force the execution order to follow the declaration order:
# sequential.py
import pytest

@pytest.mark.incremental
class TestSequential:
    def test_first(self):
        print('first')

    def test_second(self):
        print('second')

    def test_third(self):
        print('third')
Now running it with
pytest -s -v sequential.py
produces the following output:
=========== test session starts ===========
collected 3 items
sequential.py::TestSequential::test_first first
PASSED
sequential.py::TestSequential::test_second second
PASSED
sequential.py::TestSequential::test_third third
PASSED
=========== 3 passed in 0.01 seconds ===========

I guess it's a bit late now but I just finished up an interactive selection plugin with docs here:
https://github.com/tgoodlet/pytest-interactive
I actually use the pytest_collection_modifyitems hook that Holger mentions in his answer below.
It allows you to choose a selection of tests just after the collection phase using IPython. Ordering the tests is pretty easy using slices, subscripts, or tab-completion if that's what you're after. Note that it's an interactive tool meant for use during development and not so much for automated regression runs.
For persistent ordering using marks, I've used pytest-ordering, which is quite useful, especially if you have baseline prerequisite tests in a long regression suite.
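For illustration, here is a minimal sketch of that style of marking; it assumes the pytest-ordering plugin is installed and uses its run(order=...) marker (the test names are made up):

# test_ordered.py - sketch assuming the pytest-ordering plugin is installed
import pytest

@pytest.mark.run(order=1)
def test_environment_health():
    # baseline prerequisite check runs first
    assert True

@pytest.mark.run(order=2)
def test_regression_case():
    # ordinary regression test runs after the health check
    assert True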

There is currently no direct way to control the order of test execution. FWIW, there is a plugin hook, pytest_collection_modifyitems, which you can use to implement something yourself. See https://github.com/klrmn/pytest-random/blob/master/random_plugin.py for a plugin that uses it to implement randomization.
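As an illustration only, a hedged sketch of what such a hook could look like in conftest.py, reordering collected items so that tests carrying the healthcheck marker from the question run before everything else (the sorting rule is an assumption, not part of any plugin):

# conftest.py - illustrative sketch, not the randomization plugin linked above
def pytest_collection_modifyitems(items):
    # stable sort: items marked 'healthcheck' move to the front, relative order otherwise kept
    items.sort(key=lambda item: 0 if 'healthcheck' in item.keywords else 1)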

I know this is old, but this library seems to allow exactly what the OP was looking for; it may help someone in the future.
https://pytest-ordering.readthedocs.io/en/develop/

Cleaner way to do pytest fixture parameterization based on command-line switch?

I've technically already solved the problem I was working on, but I can't help but feel like my solution is ugly:
I've got a pytest suite that I can run in two modes: Local Mode (for developing tests; everything just runs on my dev box through Chrome), and Seriousface Regression Testing Mode (for CI; the suite gets run on a zillion browsers and OSes). I've got a command-line flag to toggle between the two modes, --test-local. If it's there, I run in local mode. If it's not there, I run in seriousface mode. Here's how I do it:
# contents of conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption("--test-local", action="store_true", default=False,
                     help="run locally instead of in seriousface mode")

def pytest_generate_tests(metafunc):
    if "dummy" in metafunc.fixturenames:
        if metafunc.config.getoption("--test-local"):
            driverParams = [(True, None)]
        else:
            driverParams = [(False, "seriousface setting 1"), (False, "seriousface setting 2")]
        metafunc.parametrize("dummy", driverParams)

@pytest.fixture(scope="function")
def driver(dummy):
    _driver = makeDriverStuff(dummy[0], dummy[1])
    yield _driver
    _driver.cleanup()

@pytest.fixture
def dummy():
    pass
The problem is that the dummy fixture is hideous. I've tried having pytest_generate_tests parametrize the driver fixture directly, but that ends up replacing the fixture rather than just feeding parameters into it, so cleanup() never gets called when the test finishes. Using the dummy lets me replace it with my parameter tuple, which then gets passed into driver().
But, to reiterate, what I have does work; it just feels like a janky hack.
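For comparison, a hedged sketch of the indirect-parametrization route, which feeds each parameter tuple into the real fixture through request.param instead of going through a dummy (pytest_addoption stays as in the question; makeDriverStuff is assumed from the question):

# conftest.py - sketch using indirect parametrization instead of a dummy fixture
import pytest

def pytest_generate_tests(metafunc):
    if "driver" in metafunc.fixturenames:
        if metafunc.config.getoption("--test-local"):
            params = [(True, None)]
        else:
            params = [(False, "seriousface setting 1"), (False, "seriousface setting 2")]
        # indirect=True hands each tuple to the 'driver' fixture as request.param
        metafunc.parametrize("driver", params, indirect=True)

@pytest.fixture
def driver(request):
    _driver = makeDriverStuff(*request.param)  # makeDriverStuff assumed from the question
    yield _driver
    _driver.cleanup()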
You can try a different approach: instead of dynamically selecting the parameter set for the test, declare ALL parameter sets on it, but deselect the irrelevant ones at launch time.
# r.py
import pytest

real = pytest.mark.real
mock = pytest.mark.mock

@pytest.mark.parametrize('a, b', [
    real((True, 'serious 1')),
    real((True, 'serious 2')),
    mock((False, 'fancy mock')),
    (None, 'always unmarked!'),
])
def test_me(a, b):
    print([a, b])
Then run it as follows:
pytest -ra -v -s r.py -m real # strictly marked sets
pytest -ra -v -s r.py -m mock # strictly marked sets
pytest -ra -v -s r.py -m 'not real' # incl. non-marked sets
pytest -ra -v -s r.py -m 'not mock' # incl. non-marked sets
Also, skipif marks can be applied to individual parameter sets (skipping is different from deselecting); they are natively supported by pytest, but the syntax is quite ugly. See the Skip/xfail with parametrize section of the pytest docs for more.
The official manual also covers exactly the case in your question in Custom marker and command line option to control test runs. However, that approach is not as elegant as -m deselection, and is better suited to complex run-time conditions than to the a priori known structure of the tests.
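For reference, a hedged sketch of what a skipif-marked parameter set can look like; pytest.param(..., marks=...) assumes a reasonably recent pytest, and the platform condition is only an example:

# sketch: skipping (not deselecting) one parameter set
import sys
import pytest

@pytest.mark.parametrize('a, b', [
    (None, 'always run'),
    pytest.param(False, 'fancy mock',
                 marks=pytest.mark.skipif(sys.platform == 'win32',
                                          reason='mock setup not available on Windows')),
])
def test_me(a, b):
    print([a, b])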

Patching the same module for every test

If I have multiple tests that patch the same module, is there a way to not have to patch it in every test (namely, factor it out)?
def test_1(mocker):
    mocker.patch.object(module, 'method')
    # run test

def test_2(mocker):
    mocker.patch.object(module, 'method')
    # run test

def test_3(mocker):
    mocker.patch.object(module, 'method')
    # run test
Yes, there is: take a look at autouse fixtures in the official pytest docs.
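For example, a minimal sketch of an autouse fixture that applies the patch to every test in the module; it assumes pytest-mock's mocker fixture and the module/method names from the question:

import pytest

@pytest.fixture(autouse=True)
def patch_method(mocker):
    # runs for every test in this module, so each test sees module.method patched
    # ('module' is assumed to be imported at the top of the test file, as in the question)
    mocker.patch.object(module, 'method')

def test_1():
    ...  # 'method' is already patched here, no per-test boilerplate needed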

How to run only the test cases added in a test suite and not all the test cases available in the class?

I have written a Test Suite.
myTestsuite.py
import unittest
from myTestCase2 import MyTestCase2
from prime_num_validation import Prime_Num_Validation

def my_test_suite():
    suite = unittest.TestSuite()
    # add only the test_greaterCheck2 test case from the MyTestCase2 class
    suite.addTest(MyTestCase2('test_greaterCheck2'))
    # add only the test_prime_check test case from the Prime_Num_Validation class
    suite.addTest(Prime_Num_Validation('test_prime_check'))
    return suite

if __name__ == '__main__':
    runner = unittest.TextTestRunner()
    runner.run(my_test_suite())
Now when I run this from the command line with python -m unittest -v myTestsuite, it runs all the test cases from the MyTestCase2 class, which actually has 3 test cases, although we added only one of the 3 to our suite.
How can we avoid invoking all the test cases and execute only those that are present in the suite?
When I run this using the PyCharm editor, it again executes all the test cases from MyTestCase2.
You can put a marker on top of your unit test, such as skipif or xfail, that will skip the test.
For example, the code below skips test_function3():
import sys
import pytest

def test_function1():
    pass

def test_function2():
    pass

@pytest.mark.skipif(sys.version_info < (3, 3), reason="requires python3.3")
def test_function3():
    pass
For reference, please go through the py.test skipif documentation; you will find more information there.
The link I provided above also has an example of using the xfail marker.
You can use the xfail marker or create your own custom marker. xfail indicates that you expect a test to fail.
You can run xfail tests using the following command.
pytest --runxfail
As with skipif, you can also mark your expectation of a failure on a particular platform:
Example
import pytest

xfail = pytest.mark.xfail

@xfail
def test_func1():
    assert 0

@xfail(run=False)
def test_func2():
    assert 0
I also assumed that running python -m unittest -v myTestsuite would execute just the test cases in the defined test suite. Calling the myTestsuite.py test module directly (i.e., not passing the module as a parameter to the unittest library module), however, should produce the result you are after. Try running the following:
python myTestsuite.py
NOTE: You will need to pass verbosity=2 to the TextTestRunner constructor instead of using "-v" on the command line (or modify myTestsuite.py to take a "-v" parameter and pass it through to TextTestRunner).
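For instance, a small sketch of that change, touching only the runner lines in myTestsuite.py (verbosity=2 matches what -v gives you on the command line):

# myTestsuite.py - only the runner changes
if __name__ == '__main__':
    runner = unittest.TextTestRunner(verbosity=2)
    runner.run(my_test_suite())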

Avoid setUpClass to run every time for nose cherry picked tests

This is my test class, in mymodule.foo:
from unittest import TestCase

class SomeTestClass(TestCase):
    @classmethod
    def setUpClass(cls):
        ...  # Do the setup for my tests

    def test_Something(self):
        ...  # Test something

    def test_AnotherThing(self):
        ...  # Test another thing

    def test_DifferentStuff(self):
        ...  # Test different stuff
I'm running the tests from Python with the following lines:
tests_to_run = ['mymodule.foo:test_AnotherThing', 'mymodule.foo:test_DifferentStuff']
result = nose.run(defaultTest=tests_to_run)
(This is obviously a bit more complicated and there's some logic to pick what tests I want to run)
Nose will run just the selected tests, as expected, but the setUpClass will run once for every test in tests_to_run. Is there any way to avoid this?
What I'm trying to achieve is to be able to run some dynamic set of tests while using nose in a Python script (not from the command line)
As @jonrsharpe mentioned, setUpModule is what I was after: it runs just once for the whole module where my tests reside.
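A minimal sketch of that module-level fixture, assuming the one-time setup can be moved out of the class (names taken from the question):

# mymodule/foo.py - sketch: one-time setup at module level
def setUpModule():
    # runs once for the whole module, however many of its tests are cherry-picked
    ...  # do the setup formerly in SomeTestClass.setUpClass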

Running single test function in Nose NOT associated with unittest subclass

nose discovers tests beginning with test_, as well as subclasses of unittest.TestCase.
If one wishes to run a single TestCase test, e.g.:
# file tests.py
import unittest

class T(unittest.TestCase):
    def test_something(self):
        1/0
This can be done on the command line with:
nosetests tests:T.test_something
I sometimes prefer to write a simple function and skip all the unittest boilerplate:
def test_something_else():
    assert False
In that case, the test will still be run by nose when it runs all my tests. But how can I tell nose to run only that test, using the (Unix) command line?
That would be:
nosetests tests:test_something_else
An additional tip is to use attributes:
from nose.plugins.attrib import attr

@attr('now')
def test_something_else():
    pass
To run all tests tagged with the attribute, execute:
nosetests -a now
Conversely, to avoid running those tests:
nosetests -a '!now'
(The ! is quoted so the shell does not interpret it.)
