pytest-ordering: order modules, not test cases within a module - python

I have a list of test cases in different modules (just showing a glimpse here; the actual count is around 10).
I want pytest to execute these modules in order: first module-a.py, then module-b.py, and so on.
I am already aware of how to order the test cases within a module, and that works perfectly fine.
Our overall application is a kind of pipeline where module-a's output is consumed by module-b, and so on.
We want to test it end to end in a defined order of modules.
module-a.py
-----------
import pytest

@pytest.mark.run(order=2)
def test_three():
    assert True

@pytest.mark.run(order=1)
def test_four():
    assert True

module-b.py
-----------
import pytest

@pytest.mark.run(order=2)
def test_two():
    assert True

@pytest.mark.run(order=1)
def test_one():
    assert True
From the above code, I want
module-a: test_four, test_three
module-b: test_one, test_two
to be executed in that order.
Currently we are running each of the modules separately with --cov-append, something like the commands below, and generating the final coverage at the end.
pytest module-a.py
pytest --cov-append module-b.py
Can anyone please suggest a better approach or method?
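Passing all the modules to a single pytest invocation (pytest module-a.py module-b.py ... --cov) should already run them in the given argument order within one session, which removes the need for --cov-append. If the order has to be enforced independently of the command line, one possible direction (just a sketch, not something from the original setup; how it interacts with pytest-ordering's own reordering is not verified here) is a conftest.py hook, with MODULE_ORDER assumed to mirror the module names from the question:

# conftest.py -- sketch: run whole modules in a fixed order within one pytest session
MODULE_ORDER = ["module-a.py", "module-b.py"]  # extend with the remaining modules

def pytest_collection_modifyitems(session, config, items):
    def module_rank(item):
        name = item.fspath.basename
        # modules not listed explicitly sort after the ordered ones
        return MODULE_ORDER.index(name) if name in MODULE_ORDER else len(MODULE_ORDER)
    # sort() is stable, so the original collection order inside each module is preserved
    items.sort(key=module_rank)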

Related

Show full plan in Pytest

I'm looking for a way in Pytest to show the full test and fixture plan instead of just listing the test cases via --collect-only.
This is the best I can get now:
TestClass1
TestCase1
TestCase2
TestClass2
TestCase3
TestCase4
This is what I'm looking for (should match the execution order):
Fixture1_Setup_ModuleScope
Fixture2_Setup_ClassScope
TestClass1
Fixture3_Setup_FunctionScope
TestCase1
Fixture3_Teardown_FunctionScope
TestCase2
Fixture2_Teardown_ClassScope
TestClass2
TestCase3
TestCase4
Fixture1_Teardown_ModuleScope
I looked around for such a Pytest plugin and none seems to provide this, not even by parsing the results, let alone something that could be generated without running the tests. I understand that it's not needed for Pytest testing, but it's something I've learned to like in one of our older internal test frameworks, if only for validating my intention against reality.
Am I missing some obvious solution here? How could I achieve this?
Have you tried pytest --setup-plan? Its help text says it will "show what fixtures and tests would be executed but don't execute anything":
pytest --setup-plan
# ...
# assert_test.py
# assert_test.py::TestTest::test_test
# click_test.py
# click_test.py::test_echo_token
# fixture_test.py
# SETUP F env['dev']
# SETUP F folder['dev_data']
# fixture_test.py::test_are_folders_exist[dev-dev_data] (fixtures used: env, folder)
# TEARDOWN F folder['dev_data']
# TEARDOWN F env['dev']
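As a rough sketch of what that looks like with scoped fixtures (the names below are made up to mirror the plan in the question), the letter after SETUP/TEARDOWN reflects the fixture scope, F being function scope in the example above:

# plan_demo.py -- sketch: fixtures at module, class and function scope,
# to be inspected with `pytest plan_demo.py --setup-plan`
import pytest

@pytest.fixture(scope="module")
def fixture1():
    yield

@pytest.fixture(scope="class")
def fixture2():
    yield

@pytest.fixture
def fixture3():
    yield

class TestClass1:
    def test_case1(self, fixture1, fixture2, fixture3):
        pass

    def test_case2(self, fixture1, fixture2):
        pass

class TestClass2:
    def test_case3(self, fixture1):
        pass

    def test_case4(self, fixture1):
        pass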

Session scope with pytest-dependency

Referring to the sample code copied from pytest-dependency (with slight changes: the "tests" folder was removed), I am expecting "test_e" and "test_g" to pass; however, both are skipped. Kindly advise if I have done anything silly that stops the session scope from working properly.
Note:
pytest-dependency 0.5.1 is used.
Both modules are stored directly in the current working directory.
test_mod_01.py
import pytest

@pytest.mark.dependency()
def test_a():
    pass

@pytest.mark.dependency()
@pytest.mark.xfail(reason="deliberate fail")
def test_b():
    assert False

@pytest.mark.dependency(depends=["test_a"])
def test_c():
    pass

class TestClass(object):
    @pytest.mark.dependency()
    def test_b(self):
        pass
test_mod_02.py
import pytest

@pytest.mark.dependency()
@pytest.mark.xfail(reason="deliberate fail")
def test_a():
    assert False

@pytest.mark.dependency(
    depends=["./test_mod_01.py::test_a", "./test_mod_01.py::test_c"],
    scope='session'
)
def test_e():
    pass

@pytest.mark.dependency(
    depends=["./test_mod_01.py::test_b", "./test_mod_02.py::test_e"],
    scope='session'
)
def test_f():
    pass

@pytest.mark.dependency(
    depends=["./test_mod_01.py::TestClass::test_b"],
    scope='session'
)
def test_g():
    pass
Unexpected output
=========================================================== test session starts ===========================================================
...
collected 4 items
test_mod_02.py xsss    [100%]
====================================================== 3 skipped, 1 xfailed in 0.38s ======================================================
Expected output
=========================================================== test session starts ===========================================================
...
collected 4 items
test_mod_02.py x.s.    [100%]
====================================================== 2 passed, 1 skipped, 1 xfailed in 0.38s ======================================================
The first problem is that pytest-dependency uses the full test node IDs when used in session scope. That means you have to match that string exactly, and it never contains relative path components like the "./" in your case.
Instead of using "./test_mod_01.py::test_c", you have to use something like "tests/test_mod_01.py::test_c" or "test_mod_01.py::test_c", depending on where your test root is.
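For example, assuming the test root is the directory containing both modules, the marker on test_e would look something like this:

@pytest.mark.dependency(
    depends=["test_mod_01.py::test_a", "test_mod_01.py::test_c"],
    scope='session'
)
def test_e():
    pass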
The second problem is that pytest-dependency only works if the tests that other tests depend on are run earlier in the same test session; in your case, both the test_mod_01 and test_mod_02 modules have to be part of the same session. Test dependencies are looked up at runtime in the list of tests that have already been run.
Note that this also means that you cannot make tests in test_mod_01 depend on tests in test_mod_02 if you run the tests in the default order. You have to ensure that the tests run in the correct order, either by adapting the names accordingly or by using an ordering plugin like pytest-order, which has an option (--order-dependencies) to order the tests for you in such a case.
Disclaimer: I'm the maintainer of pytest-order.

Is there a way for pytest to check if a log entry was made at Error level or higher?

Python 3.8.0, pytest 5.3.2, logging 0.5.1.2.
My code has an input loop, and to prevent the program crashing entirely, I catch any exceptions that get thrown, log them as critical, reset the program state, and keep going. That means that a test that causes such an exception won't outright fail, so long as the output is still what is expected. This might happen if the error was a side effect of the test code but didn't affect the main tested logic. I would still like to know that the test is exposing an error-causing bug however.
Most of the Googling I have done shows results on how to display logs within pytest, which I am doing, but I can't find out if there is a way to expose the logs within the test, such that I can fail any test with a log at Error or Critical level.
Edit: This is a minimal example of a failing attempt:
test.py:
import subject
import logging
import pytest

@pytest.fixture(autouse=True)
def no_log_errors(caplog):
    yield  # Run in teardown
    print(caplog.records)
    # caplog.set_level(logging.INFO)
    errors = [record for record in caplog.records if record.levelno >= logging.ERROR]
    assert not errors

def test_main():
    subject.main()
    # assert False
subject.py:
import logging

logger = logging.Logger('s')

def main():
    logger.critical("log critical")
Running python3 -m pytest test.py passes with no errors.
Uncommenting the assert statement fails the test without errors, and prints [] to stdout, and log critical to stderr.
Edit 2:
I found why this fails. From the documentation on caplog:
The caplog.records attribute contains records from the current stage only, so inside the setup phase it contains only setup logs, same with the call and teardown phases
However, right underneath is what I should have found the first time:
To access logs from other stages, use the caplog.get_records(when) method. As an example, if you want to make sure that tests which use a certain fixture never log any warnings, you can inspect the records for the setup and call stages during teardown like so:
@pytest.fixture
def window(caplog):
    window = create_window()
    yield window
    for when in ("setup", "call"):
        messages = [
            x.message for x in caplog.get_records(when) if x.levelno == logging.WARNING
        ]
        if messages:
            pytest.fail(
                "warning messages encountered during testing: {}".format(messages)
            )
However this still doesn't make a difference, and print(caplog.get_records("call")) still returns []
You can build something like this using the caplog fixture. Here's some sample code from the docs which does some assertions based on the levels:
def test_baz(caplog):
    func_under_test()
    for record in caplog.records:
        assert record.levelname != "CRITICAL"
    assert "wally" not in caplog.text
Since the records are the standard logging record types, you can use whatever you need there.
Here's one way you might do this more automatically, using an autouse fixture:
import logging

import pytest

@pytest.fixture(autouse=True)
def no_logs_gte_error(caplog):
    yield
    errors = [record for record in caplog.get_records('call') if record.levelno >= logging.ERROR]
    assert not errors
(disclaimer: I'm a core dev on pytest)
You can use the unittest.mock module (even if using pytest) and monkey-patch whatever function / method you use for logging. Then in your test, you can have some assert that fails if, say, logging.error was called.
That'd be a short-term solution. But it might also be the case that your design could benefit from more separation, so that you can easily test your application without an overzealous try ... except block catching and suppressing just about everything.
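A minimal sketch of that idea, reusing the subject module from the question (note that subject.py creates a logging.Logger directly instead of calling logging.getLogger, so its records most likely never propagate to the root logger that caplog listens on, which would explain the empty caplog.records):

# test_subject_mocked.py -- sketch: fail if the subject logger's error/critical methods are called
from unittest import mock

import subject

def test_main_logs_no_errors():
    with mock.patch.object(subject.logger, "error") as mock_error, \
         mock.patch.object(subject.logger, "critical") as mock_critical:
        subject.main()
    mock_error.assert_not_called()
    mock_critical.assert_not_called()  # fails here, because main() logs a critical message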

Separate test cases per input files?

Most test frameworks assume that "1 test = 1 Python method/function",
and consider a test as passed when the function executes without
raising assertions.
I'm testing a compiler-like program (a program that reads *.foo
files and process their contents), for which I want to execute the same test on many input (*.foo) files. IOW, my test looks like:
class Test(unittest.TestCase):
    def one_file(self, filename):
        # do the actual test

    def list_testcases(self):
        # essentially os.listdir('tests/') and filter *.foo files.

    def test_all(self):
        for f in self.list_testcases():
            self.one_file(f)
My current code uses unittest from Python's standard library, i.e. one_file uses self.assert...(...) statements to check whether the test passes.
This works, in the sense that I do get a program which succeeds/fails when my code is OK/buggy, but I'm losing a lot of the advantages of the testing framework:
I don't get relevant reporting like "X failures out of Y tests", nor the list of passed/failed tests. (I'm planning to use such a system not only to test my own development but also to grade students' code as a teacher, so reporting is important to me.)
I don't get test independence. The second test runs in the environment left by the first, and so on. The first failure stops the test suite: test cases coming after a failure are not run at all.
I get the feeling that I'm abusing my test framework: there's only one test function, so unittest's automatic test discovery sounds like overkill, for example. The same code could (should?) be written in plain Python with a basic assert.
An obvious alternative is to change my code to something like
class Test(unittest.TestCase):
    def one_file(self, filename):
        # do the actual test

    def test_file1(self):
        self.one_file("first-testcase.foo")

    def test_file2(self):
        self.one_file("second-testcase.foo")
Then I get all the advantages of unittest back, but:
It's a lot more code to write.
It's easy to "forget" a test case, i.e. create a test file in tests/ and forget to add it to the Python test.
I can imagine a solution where I would generate one method per testcase dynamically (along the lines of setattr(self, 'test_file' + str(n), ...)), to generate the code for the second solution without having to write it by hand. But that sounds really overkill for a use-case which doesn't seem so complex.
How could I get the best of both, i.e. automatic test case discovery (list tests/*.foo files), test independence, and proper reporting?
If you can use pytest as your test runner, then this is actually pretty straightforward using the parametrize decorator:
import pytest, glob

all_files = glob.glob('some/path/*.foo')

@pytest.mark.parametrize('filename', all_files)
def test_one_file(filename):
    # do the actual test
This will also automatically name the tests in a useful way, so that you can see which files have failed:
$ py.test
================================== test session starts ===================================
platform darwin -- Python 3.6.1, pytest-3.1.3, py-1.4.34, pluggy-0.4.0
[...]
======================================== FAILURES ========================================
_____________________________ test_one_file[some/path/a.foo] _____________________________

filename = 'some/path/a.foo'

    @pytest.mark.parametrize('filename', all_files)
    def test_one_file(filename):
>       assert False
E       assert False

test_it.py:7: AssertionError
_____________________________ test_one_file[some/path/b.foo] _____________________________

filename = 'some/path/b.foo'

    @pytest.mark.parametrize('filename', all_files)
    def test_one_file(filename):
[...]
Here is a solution, although it might be considered not very beautiful... The idea is to dynamically create new functions, add them to the test class, and use the function names as arguments (e.g., filenames):
# imports
import inspect
import unittest

# test class
class Test(unittest.TestCase):
    # example test case
    def test_default(self):
        print('test_default')
        self.assertEqual(2, 2)

# set string for creating new function
func_string = """def test(cls):
    # get function name and use it to pass information
    filename = inspect.stack()[0][3]
    # print function name for demonstration purposes
    print(filename)
    # dummy test for demonstration purposes
    cls.assertEqual(type(filename), str)"""

# add new test for each item in list
for f in ['test_bla', 'test_blu', 'test_bli']:
    # set name of new function
    name = func_string.replace('test', f)
    # create new function
    exec(name)
    # add new function to test class
    setattr(Test, f, eval(f))

if __name__ == "__main__":
    unittest.main()
This correctly runs all four tests and returns:
> test_bla
> test_bli
> test_blu
> test_default
> Ran 4 tests in 0.040s
> OK
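The exec/eval step can also be avoided with the setattr idea already hinted at in the question, by building each test method in a closure. A minimal sketch, where the tests/*.foo glob pattern and the one_file body are assumptions:

import glob
import unittest

class Test(unittest.TestCase):
    def one_file(self, filename):
        # do the actual test (placeholder assertion)
        self.assertTrue(filename.endswith('.foo'))

def _make_test(filename):
    # the closure captures the filename for one generated test method
    def test(self):
        self.one_file(filename)
    return test

# one test method per discovered input file, so each file is reported separately
for path in glob.glob('tests/*.foo'):
    name = 'test_' + path.replace('/', '_').replace('\\', '_').replace('.', '_')
    setattr(Test, name, _make_test(path))

if __name__ == '__main__':
    unittest.main()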

running a test suite (an arbitrary collection of tests) with py.test

I'm using py.test to build a functional test framework, so I need to be able to specify the exact tests to be run. I understand the beauty of dynamic test collection, but I want to be able to run my test environment health checks first and my regression tests after; that categorization does not preclude tests in these sets being used for other purposes.
The test suites will be tied to Jenkins build projects. I'm using OS X, Python 2.7.3, and py.test 2.3.4.
So I have a test case like the following:
# sample_unittest.py
import unittest, pytest

class TestClass(unittest.TestCase):
    def setUp(self):
        self.testdata = ['apple', 'pear', 'berry']

    def test_first(self):
        assert 'apple' in self.testdata

    def test_second(self):
        assert 'pear' in self.testdata

    def tearDown(self):
        self.testdata = []

def suite():
    suite = unittest.TestSuite()
    suite.addTest(TestClass('test_first'))
    return suite

if __name__ == '__main__':
    unittest.TextTestRunner(verbosity=2).run(suite())
And I have a test suite like this:
# suite_regression.py
import unittest, pytest
import functionaltests.sample_unittest as sample_unittest
# set up the imported tests
suite_sample_unittest = sample_unittest.suite()
# create this test suite
suite = unittest.TestSuite()
suite.addTest(suite_sample_unittest)
# run the suite
unittest.TextTestRunner(verbosity=2).run(suite)
If I run the following from the command line against the suite, test_first runs (but I don't get the additional information that py.test would provide):
python functionaltests/suite_regression.py -v
If I run the following against the suite, 0 tests are collected:
py.test functionaltests/suite_regression.py
If I run the following against the testcase, test_first and test_second run:
py.test functionaltests/sample_unittest.py -v
I don't see how doing py.test with keywords will help organize tests into suites. Placing testcases into a folder structure and running py.test with folder options won't let me organize tests by functional area.
So my questions:
Is there a py.test mechanism for specifying arbitrary groupings of tests in a re-usable format?
Is there a way to use unittest.TestSuite from py.test?
EDIT:
So I tried out py.test markers, which let me flag test functions and test methods with an arbitrary label and then filter for that label at run time.
# conftest.py
import pytest
# set up custom markers
regression = pytest.mark.NAME
health = pytest.mark.NAME
And my updated test case:
# sample_unittest.py
import unittest, pytest

class TestClass(unittest.TestCase):
    def setUp(self):
        self.testdata = ['apple', 'pear', 'berry']

    @pytest.mark.healthcheck
    @pytest.mark.regression
    def test_first(self):
        assert 'apple' in self.testdata

    @pytest.mark.regression
    def test_second(self):
        assert 'pear' in self.testdata

    def tearDown(self):
        self.testdata = []

def suite():
    suite = unittest.TestSuite()
    suite.addTest(TestClass('test_first'))
    return suite

if __name__ == '__main__':
    unittest.TextTestRunner(verbosity=2).run(suite())
So running the following command collects and runs test_first:
py.test functionaltests/sample_unittest.py -v -m healthcheck
And this collects and runs test_first and test_second:
py.test functionaltests/sample_unittest.py -v -m regression
So back to my questions: markers are a partial solution, but I still don't have a way to control the execution order of the collected marked tests.
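As a side note (not part of the original setup), recent pytest versions expect custom markers such as healthcheck and regression to be registered so they do not trigger unknown-marker warnings; a minimal sketch of doing that from conftest.py:

# conftest.py -- sketch: register the custom markers used above
def pytest_configure(config):
    config.addinivalue_line("markers", "healthcheck: environment health check tests")
    config.addinivalue_line("markers", "regression: regression tests")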
No need to use markers in this case: setting the @pytest.mark.incremental marker on your py.test test class will force the execution order to be the declaration order:
# sequential.py
import pytest

@pytest.mark.incremental
class TestSequential:
    def test_first(self):
        print('first')

    def test_second(self):
        print('second')

    def test_third(self):
        print('third')
Now running it with
pytest -s -v sequential.py
produces the following output:
=========== test session starts ===========
collected 3 items
sequential.py::TestSequential::test_first first
PASSED
sequential.py::TestSequential::test_second second
PASSED
sequential.py::TestSequential::test_third third
PASSED
=========== 3 passed in 0.01 seconds ===========
I guess it's a bit late now but I just finished up an interactive selection plugin with docs here:
https://github.com/tgoodlet/pytest-interactive
I actually use the hook Holger mentioned above.
It allows you to choose a selection of tests just after the collection phase using IPython. Ordering the tests is pretty easy using slices, subscripts, or tab-completion if that's what you're after. Note that it's an interactive tool meant for use during development and not so much for automated regression runs.
For persistent ordering using marks I've used pytest-ordering which is actually quite useful especially if you have baseline prerequisite tests in a long regression suite.
There is currently no direct way to control the order of test execution. FWIW, there is a plugin hook pytest_collection_modifyitems which you can use to implement something. See https://github.com/klrmn/pytest-random/blob/master/random_plugin.py for a plugin that uses it to implement randomization.
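As a sketch of what such a hook could look like for the markers above (the hook name and marker API are real; the prioritization itself is just an assumption about what "health checks first" might mean, and very old pytest versions spell the marker lookup differently):

# conftest.py -- sketch: run tests marked "healthcheck" before the rest of the suite
def pytest_collection_modifyitems(session, config, items):
    def priority(item):
        # 0 = healthcheck tests first, 1 = everything else; sort() is stable,
        # so the original order is kept within each group
        return 0 if item.get_closest_marker("healthcheck") else 1
    items.sort(key=priority)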
I know this is old, but this library seems to allow exactly what the OP was looking for; it may help someone in the future.
https://pytest-ordering.readthedocs.io/en/develop/
