What is the best way to skip every remaining test if a specific test fails? Here test_002_wips_online.py failed, and there is no point in running further:
tests/test_001_springboot_monitor.py::TestClass::test_service_monitor[TEST12] PASSED [ 2%]
tests/test_002_wips_online.py::TestClass::test01_online[TEST12] FAILED [ 4%]
tests/test_003_idpro_test_api.py::TestClass::test01_testapi_present[TEST12] PASSED [ 6%]
I'd like to skip all remaining tests and still write the test report.
Should I write to a status file and add a function that checks it?
@pytest.mark.skipif(setup_failed(), reason="requirements failed")
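If you do go the status-file route, a minimal sketch of what that setup_failed() helper could look like (the sentinel file name is my own invention). One caveat: a skipif condition written this way is evaluated at import/collection time, so the flag file must exist before the remaining test modules are collected:

import os

STATUS_FILE = "setup_failed.flag"  # hypothetical sentinel file written by the critical test

def setup_failed():
    # Later tests are skipped if the critical test left this file behind.
    return os.path.exists(STATUS_FILE)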
You should really look at the pytest-dependency plugin: https://pytest-dependency.readthedocs.io/en/latest/usage.html
import pytest

@pytest.mark.dependency()
def test_b():
    pass

@pytest.mark.dependency(depends=["test_b"])
def test_d():
    pass
In this example, test_d won't be executed if test_b fails.
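One caveat worth noting for the multi-file layout in the question: per the pytest-dependency docs, dependencies are resolved within the current module by default, so tests in other files need session scope, roughly like this (the names are illustrative):

# test_a.py
import pytest

@pytest.mark.dependency(name="b")
def test_b():
    pass

# test_c.py
import pytest

# session scope lets the dependency reach across modules
@pytest.mark.dependency(depends=["b"], scope="session")
def test_d():
    pass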
From the docs: http://pytest.org/en/latest/usage.html#stopping-after-the-first-or-n-failures
pytest -x # stop after first failure
pytest --maxfail=2 # stop after two failures
I found that calling pytest.exit("error message") if any of my critical predefined tests fail is the most convenient. pytest will still finish all post-run jobs like the HTML report and screenshots, and your error message will be printed at the end:
!!!!!!!!!!! _pytest.outcomes.Exit: Critical Error: wips frontend in test1 not running !!!!!!!!!!!
===================== 1 failed, 8 passed in 92.72 seconds ======================
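For context, a sketch of how such a critical check might call pytest.exit (wips_frontend_running() is a hypothetical helper standing in for the real check):

import pytest

def test01_online():
    # hypothetical availability check standing in for the real one
    if not wips_frontend_running():
        pytest.exit("Critical Error: wips frontend in test1 not running")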
I'm using the pytest framework to run tests that interface with a set of test instruments. I'd like to have a set of initial tests in a single file that will check the connection and configuration of these instruments. If any of these tests fail, I'd like to abort all future tests.
In the remaining tests and test files, I'd like pytest to continue if there are any test failures so that I get a complete report of all failed tests.
For example, the test run may look something like this:
pytest test_setup.py test_series1.py test_series2.py
In this example, I'd like pytest to exit if any tests in test_setup.py fail.
I would like to invoke the test session in a single call so that my session-based fixtures only get called once. For example, I have a fixture that connects to a power supply and configures it for the tests.
Is there a way to tell pytest to exit on any test failure, but only in a specific file? If I use the -x option, it will not continue with subsequent tests.
Ideally, I'd prefer something like a decorator that tells pytest to exit if there is a failure. However, I have not seen anything like this.
Is this possible or am I thinking about this the wrong way?
Update
Based on the answer from pL3b, my final solution looks like this:
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    if 'critical' in [mark.name for mark in item.own_markers]:
        result = outcome.get_result()
        if result.when == "call" and result.failed:
            print('FAILED')
            pytest.exit('Exiting pytest due to critical test failure', 1)
I needed to inspect the report result in order to check whether the test actually failed; otherwise this would exit on every call.
Additionally, I needed to register my custom marker. I chose to also put this in the conftest.py file, like so:
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "critical: mark test as critical"
    )
You may use the following hook in your conftest.py to solve your problem:
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    yield
    if 'critical' in [mark.name for mark in item.own_markers]:
        pytest.exit('Exiting pytest')
Then just add the @pytest.mark.critical decorator to the desired tests/classes in test_setup.py.
Learn more about pytest hooks in the pytest documentation so you can fine-tune the desired output and so on.
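Applied to the instrument checks in test_setup.py, that could look like this (the test body and the power_supply fixture are assumptions based on the question):

# test_setup.py
import pytest

@pytest.mark.critical
def test_power_supply_connection(power_supply):
    # If this fails, the hook above aborts the whole session.
    assert power_supply.is_connected()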
I have multiple test functions in a file. Example:
def testA():
    change_user_permission_to_allow()
    assert action == success
    change_user_permission_to_deny()

def testB():
    assert action == fail

# Multiple other tests...
By default, the user is denied the action. When I run testB individually, it passes. When I run the test file as a whole:
pytest testfile.py
testB fails. When I debug, the user permission is allowed, so it seems testA is interfering with testB. Is there a way to tell pytest to run the tests one after another?
You can read pytest-ordering: run your tests in order.
With pytest-ordering, you can change the default ordering as follows:
import pytest

@pytest.mark.order2
def test_foo():
    assert True

@pytest.mark.order1
def test_bar():
    assert True
$ py.test test_foo.py -vv
============================= test session starts ==============================
platform darwin -- Python 2.7.5 -- py-1.4.20 -- pytest-2.5.2 -- env/bin/python
plugins: ordering
collected 2 items
test_foo.py:7: test_bar PASSED
test_foo.py:3: test_foo PASSED
=========================== 2 passed in 0.01 seconds ===========================
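As a side note: ordering alone won't restore the permission when testA's assert fails, because change_user_permission_to_deny() is never reached. A fixture with teardown (a sketch reusing the question's helper names) resets the state regardless of the test outcome:

import pytest

@pytest.fixture
def allowed_user():
    change_user_permission_to_allow()
    yield
    # Teardown runs even if the test body raises.
    change_user_permission_to_deny()

def testA(allowed_user):
    assert action == success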
I was looking for this also:
--sw, --stepwise      exit on test failure and continue from last failing
                      test next time
--sw-skip, --stepwise-skip
                      ignore the first failing test but stop on the next
                      failing test; implicitly enables --stepwise
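Typical usage is two successive runs:

pytest --sw   # first run: stops at the first failure and remembers it
pytest --sw   # second run: resumes from the last failing test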
I have the following code in a Python module called test_me.py:
import pytest

@pytest.fixture()
def test_me():
    if condition:
        pytest.skip('Test Message')

def test_func(test_me):
    assert ...
The output looks like:
tests/folder/test_me.py::test_me SKIPPED
Question: Where does 'Test Message' get printed or output? I can't see or find it anywhere.
According to the Pytest documentation, you can use the -rs flag to show it.
$ pytest -rs
======================== test session starts ========================
platform darwin -- Python 3.7.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: ...
collected 1 item
test_sample.py s [100%]
====================== short test summary info ======================
SKIPPED [1] test_sample.py:5: Test Message
======================== 1 skipped in 0.02s =========================
import pytest

@pytest.fixture()
def test_me():
    pytest.skip('Test Message')

def test_1(test_me):
    pass
Not sure if this is platform-specific, or if it works with the OP's configuration, since the OP didn't provide any specific info.
I don't believe there is any built-in way of printing these messages during the test runs. An alternative is to create your own skip function that does this:
import pytest

def skip(message):
    print(message)  # you can add some additional info around the message
    pytest.skip()

@pytest.fixture()
def test_me():
    if condition:
        skip('Test Message')
You could probably turn this custom function into a decorator too.
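For illustration, one possible shape for that decorator (the skip_with_message name and structure are my own):

import functools
import pytest

def skip_with_message(condition, message):
    # Hypothetical decorator: print the message, then skip, when condition holds.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if condition:
                print(message)
                pytest.skip(message)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@skip_with_message(condition=True, message='Test Message')
def test_func():
    assert True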
A cursory look at the pytest code shows that the message becomes the message of the exception raised off the back of calling skip() itself. Its behavior, while not explicitly documented, is defined in outcomes.py:
def skip(msg: str = "", *, allow_module_level: bool = False) -> "NoReturn":
    """Skip an executing test with the given message.

    This function should be called only during testing (setup, call or teardown) or
    during collection by using the ``allow_module_level`` flag. This function can
    be called in doctests as well.

    :param bool allow_module_level:
        Allows this function to be called at module level, skipping the rest
        of the module. Defaults to False.

    .. note::
        It is better to use the :ref:`pytest.mark.skipif ref` marker when
        possible to declare a test to be skipped under certain conditions
        like mismatching platforms or dependencies.
        Similarly, use the ``# doctest: +SKIP`` directive (see `doctest.SKIP
        <https://docs.python.org/3/library/doctest.html#doctest.SKIP>`_)
        to skip a doctest statically.
    """
    __tracebackhide__ = True
    raise Skipped(msg=msg, allow_module_level=allow_module_level)
Eventually, this exception bubbles up through several layers and is finally raised as a BaseException. As such, you should be able to access the message itself by trapping the associated exception and reading its exception message (see the relevant SO thread).
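For illustration, the exception can be trapped like any other; note that Skipped lives in the private _pytest.outcomes module, so this relies on pytest internals:

import pytest
from _pytest.outcomes import Skipped  # private module; may change between versions

try:
    pytest.skip("Test Message")
except Skipped as exc:
    print(str(exc))  # prints: Test Message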
I have some unit tests, but I'm looking for a way to tag some specific unit tests to have them skipped unless you declare an option when you call the tests.
Example:
If I call pytest test_reports.py, I'd want a couple specific unit tests to not be run.
But if I call pytest -<something> test_reports, then I want all my tests to be run.
I looked into the @pytest.mark.skipif(condition) marker but couldn't quite figure it out, so I'm not sure if I'm on the right track. Any guidance here would be great!
The pytest documentation offers a nice example on how to skip tests marked "slow" by default and only run them with a --runslow option:
# conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--runslow", action="store_true", default=False, help="run slow tests"
    )

def pytest_configure(config):
    config.addinivalue_line("markers", "slow: mark test as slow to run")

def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        # --runslow given in cli: do not skip slow tests
        return
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
We can now mark our tests in the following way:
# test_module.py
from time import sleep

import pytest

def test_func_fast():
    sleep(0.1)

@pytest.mark.slow
def test_func_slow():
    sleep(10)
The test test_func_fast is always executed (calling e.g. pytest). The "slow" function test_func_slow, however, will only be executed when calling pytest --runslow.
We are using markers with addoption in conftest.py.
Test case:
@pytest.mark.no_cmd
def test_skip_if_no_command_line():
    assert True
conftest.py:

import pytest

def pytest_addoption(parser):
    parser.addoption("--no_cmd", action="store_true",
                     help="run the tests only in case of that command line (marked with marker @pytest.mark.no_cmd)")

def pytest_runtest_setup(item):
    if 'no_cmd' in item.keywords and not item.config.getoption("--no_cmd"):
        pytest.skip("need --no_cmd option to run this test")
pytest call:
py.test test_the_marker
-> test will be skipped
py.test test_the_marker --no_cmd
-> test will run
There are two ways to do that:
The first method is to tag the functions with the @pytest.mark decorator and run or skip the tagged functions alone using the -m option.
import pytest

@pytest.mark.anytag
def test_calc_add():
    assert True

@pytest.mark.anytag
def test_calc_multiply():
    assert True

def test_calc_divide():
    assert True
Running the script as py.test -m anytag test_script.py will run only the first two functions.
Alternatively, running as py.test -m "not anytag" test_script.py will run only the third function and skip the first two.
Here 'anytag' is the name of the tag. It can be anything!
The second way is to run the functions with a common substring in their name using the -k option.
def test_calc_add():
    assert True

def test_calc_multiply():
    assert True

def test_divide():
    assert True
Running the script as py.test -k calc test_script.py will run the first two functions and skip the last one.
Note that 'calc' is the common substring present in both function names; any other function having 'calc' in its name, like 'calculate', will also be run.
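If that over-matching is a problem, -k also accepts boolean expressions, so the match can be narrowed:

py.test -k "calc and not multiply" test_script.py   # runs only test_calc_add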
Following the approach suggested in the pytest docs, and thus the answer of @Manu_CJ, is certainly the way to go here.
I'd simply like to show how this can be adapted to easily define multiple options:
The canonical example given by the pytest docs highlights well how to add a single marker through command line options. However, adapting it to add multiple markers might not be straightforward, as the three hooks pytest_addoption, pytest_configure and pytest_collection_modifyitems all need to be invoked to allow adding even a single marker through a command line option.
This is one way you can adapt the canonical example, if you have several markers, like 'flag1', 'flag2', etc., that you want to be able to add via command line option:
# content of conftest.py
import pytest

# Create a dict of markers.
# The key is used as option, so --{key} will run all tests marked with key.
# The value must be a dict that specifies:
# 1. 'help': the command line help text
# 2. 'marker-descr': a description of the marker
# 3. 'skip-reason': displayed reason whenever a test with this marker is skipped.
optional_markers = {
    "flag1": {"help": "<Command line help text for flag1...>",
              "marker-descr": "<Description of the marker...>",
              "skip-reason": "Test only runs with the --{} option."},
    "flag2": {"help": "<Command line help text for flag2...>",
              "marker-descr": "<Description of the marker...>",
              "skip-reason": "Test only runs with the --{} option."},
    # add further markers here
}

def pytest_addoption(parser):
    for marker, info in optional_markers.items():
        parser.addoption("--{}".format(marker), action="store_true",
                         default=False, help=info['help'])

def pytest_configure(config):
    for marker, info in optional_markers.items():
        config.addinivalue_line("markers",
                                "{}: {}".format(marker, info['marker-descr']))

def pytest_collection_modifyitems(config, items):
    for marker, info in optional_markers.items():
        if not config.getoption("--{}".format(marker)):
            skip_test = pytest.mark.skip(
                reason=info['skip-reason'].format(marker)
            )
            for item in items:
                if marker in item.keywords:
                    item.add_marker(skip_test)
Now you can use the markers defined in optional_markers in your test modules:
# content of test_module.py
import pytest

@pytest.mark.flag1
def test_some_func():
    pass

@pytest.mark.flag2
def test_other_func():
    pass
If the use-case prohibits modifying either conftest.py and/or pytest.ini, here's how to use environment variables to directly take advantage of the skipif marker.
test_reports.py contents:
import os

import pytest

@pytest.mark.skipif(
    not os.environ.get("MY_SPECIAL_FLAG"),
    reason="MY_SPECIAL_FLAG not set in environment"
)
def test_skip_if_no_cli_tag():
    assert True

def test_always_run():
    assert True
On Windows:
> pytest -v test_reports.py --no-header
================== test session starts ===================
collected 2 items
test_reports.py::test_skip_if_no_cli_tag SKIPPED [ 50%]
test_reports.py::test_always_run PASSED [100%]
============== 1 passed, 1 skipped in 0.01s ==============
> cmd /c "set MY_SPECIAL_FLAG=1&pytest -v test_reports.py --no-header"
================== test session starts ===================
collected 2 items
test_reports.py::test_skip_if_no_cli_tag PASSED [ 50%]
test_reports.py::test_always_run PASSED [100%]
=================== 2 passed in 0.01s ====================
On Linux or other *NIX-like systems:
$ pytest -v test_reports.py --no-header
================= test session starts =================
collected 2 items
test_reports.py::test_skip_if_no_cli_tag SKIPPED [ 50%]
test_reports.py::test_always_run PASSED [100%]
============ 1 passed, 1 skipped in 0.00s =============
$ MY_SPECIAL_FLAG=1 pytest -v test_reports.py --no-header
================= test session starts =================
collected 2 items
test_reports.py::test_skip_if_no_cli_tag PASSED [ 50%]
test_reports.py::test_always_run PASSED [100%]
================== 2 passed in 0.00s ==================
MY_SPECIAL_FLAG can be whatever you wish based on your specific use-case and of course --no-header is just being used for this example.
Enjoy.
I have a minor issue using py.test for my unit tests.
I use py.test to run my tests and output a junitxml report of the tests.
This XML report is imported into Jenkins and generates nice statistics.
When I use a test class which derives from unittest.TestCase,
I skip expected failures using:
@unittest.skip("Bug 1234 : This does not work")
This message also shows up in Jenkins when selecting this test.
When I don't use a unittest.TestCase class, e.g. to use py.test parametrize functionality,
I skip expected failures using:
@pytest.mark.xfail(reason="Bug 1234 : This does not work", run=False)
But then this reason is not actually displayed in Jenkins; instead it will say:
Skip Message
expected test failure
How can I fix this?
I solved it using this line as the first line of the test:
pytest.skip("Bug 1234: This does not work")
I'd rather have used one of the pytest decorators, but this'll do.
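In context, that looks something like this (the parametrize arguments and test body are assumptions):

import pytest

@pytest.mark.parametrize("value", [1, 2, 3])
def test_feature(value):
    pytest.skip("Bug 1234: This does not work")
    assert value > 0  # never reached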
I had a similar problem except I had a different Jenkins message and could not tell which test was skipped.
It turns out that if the only test in the module is a skipped test, then Jenkins would not show the test in the test result list (using either the decorator or jr-be's solution). You could see that there was a skipped test in the total results, but could not tell which test, or which module, the skipped test was in.
To solve this (OK, hack-solve this), I went back to using the decorator on my test and added a dummy test (so I have one test that runs and one test that gets skipped):
@pytest.mark.skip(reason='SONIC-3218')
def test_segments_create_delete(self, api):
    logging.info('TestCreateDeleteSegments.test_segments_create_delete')

def test_dummy(self, api):
    '''
    Dummy test to see if suite will display in jenkins if one
    test is run and 1 is skipped (instead of having only skipped tests)
    '''
    logging.info('TestCreateDeleteSegments.test_dummy')
For me that works since I would rather have 1 extra dummy test and be able to find my skipped tests.