Session scope with pytest-dependency - python

Referring to the sample code copied from pytest-dependency (with a slight change: the "tests" folder was removed), I am expecting "test_e" and "test_g" to pass; however, both are skipped. Kindly advise if I have done anything silly that is stopping the session scope from working properly.
Note:
pytest-dependency 0.5.1 is used.
Both modules are stored directly under the current working directory.
test_mod_01.py
import pytest

@pytest.mark.dependency()
def test_a():
    pass

@pytest.mark.dependency()
@pytest.mark.xfail(reason="deliberate fail")
def test_b():
    assert False

@pytest.mark.dependency(depends=["test_a"])
def test_c():
    pass

class TestClass(object):
    @pytest.mark.dependency()
    def test_b(self):
        pass
test_mod_02.py
import pytest

@pytest.mark.dependency()
@pytest.mark.xfail(reason="deliberate fail")
def test_a():
    assert False

@pytest.mark.dependency(
    depends=["./test_mod_01.py::test_a", "./test_mod_01.py::test_c"],
    scope='session'
)
def test_e():
    pass

@pytest.mark.dependency(
    depends=["./test_mod_01.py::test_b", "./test_mod_02.py::test_e"],
    scope='session'
)
def test_f():
    pass

@pytest.mark.dependency(
    depends=["./test_mod_01.py::TestClass::test_b"],
    scope='session'
)
def test_g():
    pass
Unexpected output
=========================================================== test session starts ===========================================================
...
collected 4 items
test_mod_02.py xsss
[100%]
====================================================== 3 skipped, 1 xfailed in 0.38s ======================================================
Expected output
=========================================================== test session starts ===========================================================
...
collected 4 items
test_mod_02.py x.s.
[100%]
====================================================== 2 passed, 1 skipped, 1 xfailed in 0.38s ======================================================

The first problem is that pytest-dependency uses the full test node IDs when used in session scope. That means you have to match that string exactly, and it never contains relative path components like the "./" in your case.
Instead of using "./test_mod_01.py::test_c", you have to use something like "tests/test_mod_01.py::test_c" or "test_mod_01.py::test_c", depending on where your test root is.
The second problem is that pytest-dependency only works if the tests that other tests depend on are run earlier in the same test session, e.g. in your case both the test_mod_01 and test_mod_02 modules have to be part of the same test session. Test dependencies are looked up at runtime in the list of tests that have already been run.
Note that this also means you cannot make tests in test_mod_01 depend on tests in test_mod_02 if you run the tests in the default order. You have to ensure that the tests run in the correct order, either by adapting the names accordingly or by using an ordering plugin like pytest-order, which has an option (--order-dependencies) to reorder the tests if needed in such a case.
Disclaimer: I'm the maintainer of pytest-order.
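For reference, here is a sketch of how the markers in test_mod_02.py could look, assuming both modules sit directly under the pytest rootdir and are collected in the same session (node IDs without the "./" prefix):
# test_mod_02.py (sketch)
import pytest

@pytest.mark.dependency(
    depends=["test_mod_01.py::test_a", "test_mod_01.py::test_c"],
    scope='session'
)
def test_e():
    pass

@pytest.mark.dependency(
    depends=["test_mod_01.py::TestClass::test_b"],
    scope='session'
)
def test_g():
    pass
Running both modules in one invocation, e.g. pytest test_mod_01.py test_mod_02.py, ensures the dependencies of test_mod_02 have already been executed when it runs.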

Related

Where does pytest.skip('Output string') get printed?

I have the following code in a Python module called test_me.py:
import pytest

@pytest.fixture()
def test_me():
    if condition:
        pytest.skip('Test Message')

def test_func(test_me):
    assert ...
The output looks like:
tests/folder/test_me.py::test_me SKIPPED
Question: Where does 'Test Message' get printed or output? I can't see or find it anywhere.
According to the Pytest documentation, you can use the -rs flag to show it.
$ pytest -rs
======================== test session starts ========================
platform darwin -- Python 3.7.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: ...
collected 1 item
test_sample.py s [100%]
====================== short test summary info ======================
SKIPPED [1] test_sample.py:5: Test Message
======================== 1 skipped in 0.02s =========================
import pytest

@pytest.fixture()
def test_me():
    pytest.skip('Test Message')

def test_1(test_me):
    pass
Not sure if this is platform-specific, or if it works with the OP's configuration, since the OP didn't provide any specific info.
I don't believe there is any built-in way of printing these messages during the test runs. An alternative is to create your own skip function that does this:
import pytest

def skip(message):
    print(message)  # you can add some additional info around the message
    pytest.skip()

@pytest.fixture()
def test_me():
    if condition:
        skip('Test Message')
You could probably turn this custom function into a decorator too.
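A rough sketch of that decorator variant, assuming a hypothetical skip_if helper (the name and signature are illustrative, not part of pytest):
import functools
import pytest

def skip_if(condition, message):
    """Decorator factory: print the message and skip the test when condition holds."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if condition:
                print(message)  # make the reason visible in captured output
                pytest.skip(message)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@skip_if(True, 'Test Message')
def test_me():
    pass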
A cursory look at the pytest source shows that the message gets wrapped up as the message of the exception that is raised off the back of calling skip() itself. Its behavior, while not explicitly documented, is defined in outcomes.py:
def skip(msg: str = "", *, allow_module_level: bool = False) -> "NoReturn":
    """Skip an executing test with the given message.

    This function should be called only during testing (setup, call or teardown) or
    during collection by using the ``allow_module_level`` flag. This function can
    be called in doctests as well.

    :param bool allow_module_level:
        Allows this function to be called at module level, skipping the rest
        of the module. Defaults to False.

    .. note::
        It is better to use the :ref:`pytest.mark.skipif ref` marker when
        possible to declare a test to be skipped under certain conditions
        like mismatching platforms or dependencies.
        Similarly, use the ``# doctest: +SKIP`` directive (see `doctest.SKIP
        <https://docs.python.org/3/library/doctest.html#doctest.SKIP>`_)
        to skip a doctest statically.
    """
    __tracebackhide__ = True
    raise Skipped(msg=msg, allow_module_level=allow_module_level)
Eventually, this exception is bubbled up through several layers and finally raised as a BaseException. As such, you should be able to access the message itself by trapping the associated exception and reading its exception message (relevant SO thread).
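For illustration, here is a sketch of trapping that exception directly; it relies on the internal _pytest.outcomes module, so treat it as an implementation detail rather than public API:
import pytest
from _pytest.outcomes import Skipped  # internal, may move between versions

try:
    pytest.skip('Test Message')
except Skipped as exc:
    print(exc.msg)  # -> 'Test Message'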

pytest-ordering: order modules, not test cases within a module

I have a list of test cases in different modules (just showing a glimpse here; the actual count is around 10).
I want pytest to execute these modules in order: first module-a.py, next module-b.py, and so on.
However, I am only aware of ordering the test cases within a module, which is working perfectly fine.
Our overall application is a kind of pipeline where module-a's output is consumed by module-b, and so on.
We want to test it completely end-to-end in a defined order of modules.
module-a.py
-----------
import pytest

@pytest.mark.run(order=2)
def test_three():
    assert True

@pytest.mark.run(order=1)
def test_four():
    assert True

module-b.py
-----------
import pytest

@pytest.mark.run(order=2)
def test_two():
    assert True

@pytest.mark.run(order=1)
def test_one():
    assert True
From the above code, I want
module-a: test_four, test_three
module-b: test_one, test_two
to be executed in that order.
Currently we are running each of the modules with --cov-append, something like below, and generating the final coverage:
pytest module-a.py
pytest --cov-append module-b.py
Can anyone please suggest a better approach or method?
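One possible approach, sketched below, is to reorder the collected items in a conftest.py hook so that modules run in a predefined order within a single pytest session; the MODULE_ORDER list is a hypothetical, hand-maintained assumption, and since the sort is stable the order of tests inside each module is left untouched:
# conftest.py (sketch)
MODULE_ORDER = ["module-a.py", "module-b.py"]

def pytest_collection_modifyitems(items):
    def module_rank(item):
        # nodeid looks like "path/to/module-a.py::test_three"
        module = item.nodeid.split("::", 1)[0].rsplit("/", 1)[-1]
        return MODULE_ORDER.index(module) if module in MODULE_ORDER else len(MODULE_ORDER)
    # stable sort: tests keep their relative order inside each module
    items.sort(key=module_rank)
A single run such as pytest --cov module-a.py module-b.py would then produce one combined coverage report without --cov-append; how this interacts with pytest-ordering's own reordering hook is worth verifying with --collect-only.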

Pytest - how to skip tests unless you declare an option/flag?

I have some unit tests, but I'm looking for a way to tag some specific unit tests to have them skipped unless you declare an option when you call the tests.
Example:
If I call pytest test_reports.py, I'd want a couple specific unit tests to not be run.
But if I call pytest -<something> test_reports, then I want all my tests to be run.
I looked into the @pytest.mark.skipif(condition) tag but couldn't quite figure it out, so not sure if I'm on the right track or not. Any guidance here would be great!
The pytest documentation offers a nice example on how to skip tests marked "slow" by default and only run them with a --runslow option:
# conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--runslow", action="store_true", default=False, help="run slow tests"
    )

def pytest_configure(config):
    config.addinivalue_line("markers", "slow: mark test as slow to run")

def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        # --runslow given in cli: do not skip slow tests
        return
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
We can now mark our tests in the following way:
# test_module.py
from time import sleep
import pytest

def test_func_fast():
    sleep(0.1)

@pytest.mark.slow
def test_func_slow():
    sleep(10)
The test test_func_fast is always executed (calling e.g. pytest). The "slow" function test_func_slow, however, will only be executed when calling pytest --runslow.
We are using markers with addoption in conftest.py
testcase:
@pytest.mark.no_cmd
def test_skip_if_no_command_line():
    assert True
conftest.py:
import pytest

def pytest_addoption(parser):
    parser.addoption("--no_cmd", action="store_true",
                     help="run the tests only in case of that command line (marked with marker @no_cmd)")

def pytest_runtest_setup(item):
    if 'no_cmd' in item.keywords and not item.config.getoption("--no_cmd"):
        pytest.skip("need --no_cmd option to run this test")
pytest call:
py.test test_the_marker
-> test will be skipped
py.test test_the_marker --no_cmd
-> test will run
There are two ways to do that:
The first method is to tag the functions with the @pytest.mark decorator and run/skip the tagged functions alone using the -m option.
@pytest.mark.anytag
def test_calc_add():
    assert True

@pytest.mark.anytag
def test_calc_multiply():
    assert True

def test_calc_divide():
    assert True
Running the script as py.test -m anytag test_script.py will run only the first two functions.
Alternatively run as py.test -m "not anytag" test_script.py will run only the third function and skip the first two functions.
Here 'anytag' is the name of the tag. It can be anything!
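On recent pytest versions, registering a custom marker like this avoids the unknown-marker warning; a minimal sketch using the same hook shown elsewhere in this thread:
# conftest.py (sketch)
def pytest_configure(config):
    config.addinivalue_line("markers", "anytag: arbitrary tag used to select tests with -m")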
The second way is to run the functions with a common substring in their name, using the -k option.
def test_calc_add():
    assert True

def test_calc_multiply():
    assert True

def test_divide():
    assert True
Running the script as py.test -k calc test_script.py will run the two "calc" functions and skip the last one.
Note that 'calc' is the common substring present in both function names; any other function having 'calc' in its name, like 'calculate', will also be run.
Following the approach suggested in the pytest docs, and thus the answer of @Manu_CJ, is certainly the way to go here.
I'd simply like to show how this can be adapted to easily define multiple options:
The canonical example given by the pytest docs highlights well how to add a single marker through a command line option. However, adapting it to add multiple markers might not be straightforward, as the three hooks pytest_addoption, pytest_configure and pytest_collection_modifyitems all need to be invoked to allow adding a single marker through a command line option.
This is one way you can adapt the canonical example, if you have several markers, like 'flag1', 'flag2', etc., that you want to be able to add via command line option:
# content of conftest.py
import pytest

# Create a dict of markers.
# The key is used as option, so --{key} will run all tests marked with key.
# The value must be a dict that specifies:
# 1. 'help': the command line help text
# 2. 'marker-descr': a description of the marker
# 3. 'skip-reason': displayed reason whenever a test with this marker is skipped.
optional_markers = {
    "flag1": {"help": "<Command line help text for flag1...>",
              "marker-descr": "<Description of the marker...>",
              "skip-reason": "Test only runs with the --{} option."},
    "flag2": {"help": "<Command line help text for flag2...>",
              "marker-descr": "<Description of the marker...>",
              "skip-reason": "Test only runs with the --{} option."},
    # add further markers here
}

def pytest_addoption(parser):
    for marker, info in optional_markers.items():
        parser.addoption("--{}".format(marker), action="store_true",
                         default=False, help=info['help'])

def pytest_configure(config):
    for marker, info in optional_markers.items():
        config.addinivalue_line("markers",
                                "{}: {}".format(marker, info['marker-descr']))

def pytest_collection_modifyitems(config, items):
    for marker, info in optional_markers.items():
        if not config.getoption("--{}".format(marker)):
            skip_test = pytest.mark.skip(
                reason=info['skip-reason'].format(marker)
            )
            for item in items:
                if marker in item.keywords:
                    item.add_marker(skip_test)
Now you can use the markers defined in optional_markers in your test modules:
# content of test_module.py
import pytest

@pytest.mark.flag1
def test_some_func():
    pass

@pytest.mark.flag2
def test_other_func():
    pass
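With this conftest.py in place, the invocations would look like this:
pytest                   # flag1 and flag2 tests are reported as skipped
pytest --flag1           # runs unmarked tests and those marked with flag1
pytest --flag1 --flag2   # runs everything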
If the use-case prohibits modifying either conftest.py and/or pytest.ini, here's how to use environment variables to directly take advantage of the skipif marker.
test_reports.py contents:
import os
import pytest

@pytest.mark.skipif(
    not os.environ.get("MY_SPECIAL_FLAG"),
    reason="MY_SPECIAL_FLAG not set in environment"
)
def test_skip_if_no_cli_tag():
    assert True

def test_always_run():
    assert True
In Windows:
> pytest -v test_reports.py --no-header
================== test session starts ===================
collected 2 items
test_reports.py::test_skip_if_no_cli_tag SKIPPED [ 50%]
test_reports.py::test_always_run PASSED [100%]
============== 1 passed, 1 skipped in 0.01s ==============
> cmd /c "set MY_SPECIAL_FLAG=1&pytest -v test_reports.py --no-header"
================== test session starts ===================
collected 2 items
test_reports.py::test_skip_if_no_cli_tag PASSED [ 50%]
test_reports.py::test_always_run PASSED [100%]
=================== 2 passed in 0.01s ====================
In Linux or other *NIX-like systems:
$ pytest -v test_reports.py --no-header
================= test session starts =================
collected 2 items
test_reports.py::test_skip_if_no_cli_tag SKIPPED [ 50%]
test_reports.py::test_always_run PASSED [100%]
============ 1 passed, 1 skipped in 0.00s =============
$ MY_SPECIAL_FLAG=1 pytest -v test_reports.py --no-header
================= test session starts =================
collected 2 items
test_reports.py::test_skip_if_no_cli_tag PASSED [ 50%]
test_reports.py::test_always_run PASSED [100%]
================== 2 passed in 0.00s ==================
MY_SPECIAL_FLAG can be whatever you wish based on your specific use-case and of course --no-header is just being used for this example.
Enjoy.

Separate test cases per input file?

Most test frameworks assume that "1 test = 1 Python method/function", and consider a test as passed when the function executes without raising an assertion error.
I'm testing a compiler-like program (a program that reads *.foo files and processes their contents), for which I want to execute the same test on many input (*.foo) files. IOW, my test looks like:
class Test(unittest.TestCase):
    def one_file(self, filename):
        # do the actual test

    def list_testcases(self):
        # essentially os.listdir('tests/') and filter *.foo files.

    def test_all(self):
        for f in self.list_testcases():
            self.one_file(f)
My current code uses unittest from Python's standard library, i.e. one_file uses self.assert...(...) statements to check whether the test passes.
This works, in the sense that I do get a program which succeeds/fails when my code is OK/buggy, but I'm losing a lot of the advantages of the testing framework:
I don't get relevant reporting like "X failures out of Y tests", nor the list of passed/failed tests. (I'm planning to use such a system not only to test my own development but also to grade students' code as a teacher, so reporting is important to me.)
I don't get test independence. The second test runs in the environment left by the first, and so on. The first failure stops the test suite: test cases coming after a failure are not run at all.
I get the feeling that I'm abusing my test framework: there's only one test function, so the automatic test discovery of unittest sounds like overkill, for example. The same code could (should?) be written in plain Python with a basic assert.
An obvious alternative is to change my code to something like
class Test(unittest.TestCase):
    def one_file(self, filename):
        # do the actual test

    def test_file1(self):
        self.one_file("first-testcase.foo")

    def test_file2(self):
        self.one_file("second-testcase.foo")
Then I get all the advantages of unittest back, but:
It's a lot more code to write.
It's easy to "forget" a testcase, i.e. create a test file in
tests/ and forget to add it to the Python test.
I can imagine a solution where I would generate one method per testcase dynamically (along the lines of setattr(self, 'test_file' + str(n), ...)), to generate the code for the second solution without having to write it by hand. But that sounds like overkill for a use case which doesn't seem so complex.
How could I get the best of both, i.e.
automatic testcase discovery (list tests/*.foo files), test
independence and proper reporting?
If you can use pytest as your test runner, then this is actually pretty straightforward using the parametrize decorator:
import pytest, glob

all_files = glob.glob('some/path/*.foo')

@pytest.mark.parametrize('filename', all_files)
def test_one_file(filename):
    # do the actual test
This will also automatically name the tests in a useful way, so that you can see which files have failed:
$ py.test
================================== test session starts ===================================
platform darwin -- Python 3.6.1, pytest-3.1.3, py-1.4.34, pluggy-0.4.0
[...]
======================================== FAILURES ========================================
_____________________________ test_one_file[some/path/a.foo] _____________________________

filename = 'some/path/a.foo'

    @pytest.mark.parametrize('filename', all_files)
    def test_one_file(filename):
>       assert False
E       assert False

test_it.py:7: AssertionError
_____________________________ test_one_file[some/path/b.foo] _____________________________

filename = 'some/path/b.foo'

    @pytest.mark.parametrize('filename', all_files)
    def test_one_file(filename):
[...]
Here is a solution, although it might be considered not very beautiful... The idea is to dynamically create new functions, add them to the test class, and use the function names as arguments (e.g., filenames):
# imports
import inspect
import unittest

# test class
class Test(unittest.TestCase):
    # example test case
    def test_default(self):
        print('test_default')
        self.assertEqual(2, 2)

# set string for creating a new function
func_string = """def test(cls):
    # get function name and use it to pass information
    filename = inspect.stack()[0][3]
    # print function name for demonstration purposes
    print(filename)
    # dummy test for demonstration purposes
    cls.assertEqual(type(filename), str)"""

# add a new test for each item in the list
for f in ['test_bla', 'test_blu', 'test_bli']:
    # set the name of the new function
    name = func_string.replace('test', f)
    # create the new function
    exec(name)
    # add the new function to the test class
    setattr(Test, f, eval(f))

if __name__ == "__main__":
    unittest.main()
This correctly runs all four tests and returns:
> test_bla
> test_bli
> test_blu
> test_default
> Ran 4 tests in 0.040s
> OK
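For comparison, here is a sketch of the same idea without exec/eval, using a closure factory and setattr; the make_test helper and the file names are purely illustrative:
import unittest

class Test(unittest.TestCase):
    pass

def make_test(filename):
    # build a test method that closes over `filename`
    def test(self):
        self.assertTrue(filename.endswith('.foo'))
    return test

for i, fname in enumerate(['first-testcase.foo', 'second-testcase.foo']):
    setattr(Test, 'test_file_{}'.format(i), make_test(fname))

if __name__ == "__main__":
    unittest.main()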

py.test 2.3.5 does not run finalizer after fixture failure

I was trying py.test for its claimed better support (compared to unittest) for module and session fixtures, but I stumbled on what is, at least for me, bizarre behavior.
Consider the following code (don't tell me it's dumb, I know, it's just a quick and dirty hack to replicate the behavior); I'm running Python 2.7.5 x86 on Windows 7.
import os
import shutil
import pytest

test_work_dir = 'test-work-dir'
tmp = os.environ['tmp']
count = 0

@pytest.fixture(scope='module')
def work_path(request):
    global count
    count += 1
    print('test: ' + str(count))
    test_work_path = os.path.join(tmp, test_work_dir)

    def cleanup():
        print('cleanup: ' + str(count))
        if os.path.isdir(test_work_path):
            shutil.rmtree(test_work_path)

    request.addfinalizer(cleanup)
    os.makedirs(test_work_path)
    return test_work_path

def test_1(work_path):
    assert os.path.isdir(work_path)

def test_2(work_path):
    assert os.path.isdir(work_path)

def test_3(work_path):
    assert os.path.isdir(work_path)

if __name__ == "__main__":
    pytest.main(['-s', '-v', __file__])
If test_work_dir does not exist, then I obtain the expected behavior:
platform win32 -- Python 2.7.5 -- pytest-2.3.5 -- C:\Programs\Python\27-envs\common\Scripts\python.exe
collecting ... collected 4 items
py_test.py: [doctest] PASSED
py_test.py:34: test_1 test: 1
cleanup: 1
PASSED
py_test.py:38: test_2 PASSED
py_test.py:42: test_3 PASSEDcleanup: 1
The fixture is called once for the module and the cleanup is called once at the end of the tests.
Then, if test_work_dir already exists, I would expect something similar to unittest: the fixture is called once, it fails with OSError, the tests that need it are not run, cleanup is called once, and world peace is established again.
But... here's what I see:
py_test.py: [doctest] PASSED
py_test.py:34: test_1 test: 1
ERROR
py_test.py:38: test_2 test: 2
ERROR
py_test.py:42: test_3 test: 3
ERROR
Despite the failure of the fixture, all the tests are run, the fixture that is supposed to be scope='module' is called once for each test, and the finalizer is never called!
I know that exceptions in fixtures are not good policy, but the real fixtures are complex and I'd rather avoid filling them with try blocks if I can count on the execution of each finalizer set up to the point of failure. I don't want to go hunting for test artifacts after a failure.
Moreover, trying to run the tests when not all of the fixtures they need are in place makes no sense and can make them at best erratic.
Is this the intended behavior of py.test in case of a failure in a fixture?
Thanks, Gabriele
Three issues here:
You should register the finalizer after you have performed the action that you want undone. So first call makedirs() and then register the finalizer (see the sketch below). That's a general issue with fixtures, because teardown code usually only makes sense if something was successfully created.
pytest-2.3.5 has a bug in that it will not call finalizers if the fixture function fails. I've just fixed it and you can install the 2.4.0.dev7 (or higher) version with pip install -i http://pypi.testrun.org -U pytest. It ensures the fixture finalizers are called even if the fixture function partially fails. It's actually a bit surprising this hasn't popped up earlier, but I guess people, including me, usually just go ahead and fix the fixtures instead of diving into what's happening specifically. So thanks for posting here!
If a module-scoped fixture function fails, the next test needing that fixture will still trigger execution of the fixture function again, as it might have been an intermittent failure. It stands to reason that pytest should memorize the failure for the given scope and not retry execution. If you think so, please open an issue, linking to this stackoverflow discussion.
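Applied to the example above, the first point amounts to reordering the fixture roughly like this (a sketch reusing the names from the question):
@pytest.fixture(scope='module')
def work_path(request):
    test_work_path = os.path.join(tmp, test_work_dir)
    # perform the action first ...
    os.makedirs(test_work_path)

    # ... and only register the finalizer afterwards, so it runs only if the directory was created
    def cleanup():
        if os.path.isdir(test_work_path):
            shutil.rmtree(test_work_path)
    request.addfinalizer(cleanup)

    return test_work_path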
thanks, holger
