pytest exit on failure only for a specific file - python

I'm using the pytest framework to run tests that interface with a set of test instruments. I'd like to have a set of initial tests in a single file that will check the connection and configuration of these instruments. If any of these tests fail, I'd like to abort all future tests.
In the remaining tests and test files, I'd like pytest to continue if there are any test failures so that I get a complete report of all failed tests.
For example, the test run may look something like this:
pytest test_setup.py test_series1.py test_series2.py
In this example, I'd like pytest to exit if any tests in test_setup.py fail.
I would like to invoke the test session in a single call so that session based fixtures that I have only get called once. For example, I have a fixture that will connect to a power supply and configure it for the tests.
Is there a way to tell pytest to exit on any test failure, but only for a specific file? If I use the -x option, it stops at the first failure anywhere and won't continue with the subsequent tests.
Ideally, I'd prefer something like a decorator that tells pytest to exit if there is a failure. However, I have not seen anything like this.
Is this possible or am I thinking about this the wrong way?
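For reference, the session-scoped instrument fixture mentioned above looks roughly like this (the PowerSupply driver, its methods, and the VISA address are placeholders, not my actual code):

# conftest.py
import pytest

@pytest.fixture(scope="session")
def power_supply():
    # hypothetical instrument driver; connect and configure once per test session
    supply = PowerSupply.connect("GPIB0::5::INSTR")
    supply.configure(voltage=3.3, current_limit=0.5)
    yield supply
    supply.disconnect()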
Update
Based on the answer from pL3b, my final solution looks like this:
# conftest.py
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    if 'critical' in [mark.name for mark in item.own_markers]:
        result = outcome.get_result()
        if result.when == "call" and result.failed:
            print('FAILED')
            pytest.exit('Exiting pytest due to critical test failure', 1)
I needed to inspect the result of the call in order to check whether the test actually failed; otherwise this would exit on every call.
Additionally, I needed to register my custom marker. I chose to also put this in the conftest.py file, like so:
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "critical: mark test as critical"
    )
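With both pieces in conftest.py, the tests in test_setup.py only need the marker. A minimal sketch (the power_supply fixture and the is_connected() check are illustrative, not from my actual suite):

# test_setup.py
import pytest

@pytest.mark.critical
def test_power_supply_connected(power_supply):
    # if this assertion fails, the hook above calls pytest.exit()
    assert power_supply.is_connected()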

You may use the following hook in your conftest.py to solve your problem:
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    yield
    if 'critical' in [mark.name for mark in item.own_markers]:
        pytest.exit('Exiting pytest')
Then just add the @pytest.mark.critical decorator to the desired tests/classes in test_setup.py.
Learn more about pytest hooks in the pytest documentation so you can define the desired output and so on.

Related

Run every pytest test function simultaneously with another function

I want every pytest test to run simultaneously with a function that records video. I want to somehow make pytest do something like this:
test_function1
test_function2
test_function3
When pytest calls each function for execution, it would couple it with record_video(), like test_function{n} + record_video().
In the end it would return a report on how the test was executed and an .mp4 file with a recording.
How can I achieve that? I guess it should be possible somehow.
Fixtures are a perfect use case for this.
See the following hypothetical example --
import pytest

@pytest.fixture
def record_video():
    record.start()
    yield
    record.stop()

@pytest.fixture(scope="session")
def generate_report():
    yield
    report.make()
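If the recording should wrap every test without listing the fixture in each test signature, one option is to make it autouse in conftest.py. A sketch under the same assumptions as above (the record object is a placeholder, and naming the file after the test via request.node.name is just one choice):

# conftest.py
import pytest

@pytest.fixture(autouse=True)
def record_video(request):
    # hypothetical recorder API; one .mp4 per test, named after the test
    record.start(filename=f"{request.node.name}.mp4")
    yield
    record.stop()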

Show full plan in Pytest

I'm looking for a way in Pytest to show the full test and fixture plan instead of just listing the test cases via --collect-only.
This is the best I can get now:
TestClass1
TestCase1
TestCase2
TestClass2
TestCase3
TestCase4
This is what I'm looking for (should match the execution order):
Fixture1_Setup_ModuleScope
Fixture2_Setup_ClassScope
TestClass1
Fixture3_Setup_FunctionScope
TestCase1
Fixture3_Teardown_FunctionScope
TestCase2
Fixture2_Teardown_ClassScope
TestClass2
TestCase3
TestCase4
Fixture1_Teardown_ModuleScope
I looked around for such a Pytest plugin and none seems to provide this, not even as parsing of the results, let alone something that could be generated without running the tests. I understand that it's not needed for Pytest testing, but it's something I've learned to like in one of our older internal test frameworks, if only for validating my intention against reality.
Am I missing some obvious solution here? How could I achieve this?
Have you tried pytest --setup-plan?
It shows what fixtures and tests would be executed, but doesn't execute anything.
pytest --setup-plan
# ...
# assert_test.py
# assert_test.py::TestTest::test_test
# click_test.py
# click_test.py::test_echo_token
# fixture_test.py
# SETUP F env['dev']
# SETUP F folder['dev_data']
# fixture_test.py::test_are_folders_exist[dev-dev_data] (fixtures used: env, folder)
# TEARDOWN F folder['dev_data']
# TEARDOWN F env['dev']

In pytest, how to skip all tests that use specified fixture?

Suppose I have a fixture fixture1 and I want to be able to skip all tests that use this fixture from pytest commandline.
I found an answer to this question, but I decided to do it a different way.
I am marking all tests that use fixture1 with a using_fixture1 marker by defining the following in my conftest.py:
import pytest

def pytest_collection_modifyitems(items):
    for item in items:
        try:
            fixtures = item.fixturenames
            if 'fixture1' in fixtures:
                item.add_marker(pytest.mark.using_fixture1)
        except:
            pass
And then, when running pytest I use: py.test tests -m "not using_fixture1".
So the real question should be: Are there any better ways to do it and are there any drawbacks of my solution?
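One variation on the same idea is to skip during collection only when a command-line flag is given, so the suite behaves normally by default. This is just a sketch; the --skip-fixture1 option name is made up:

# conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption("--skip-fixture1", action="store_true",
                     help="skip all tests that use fixture1")

def pytest_collection_modifyitems(config, items):
    if not config.getoption("--skip-fixture1"):
        return
    skip_marker = pytest.mark.skip(reason="uses fixture1")
    for item in items:
        if "fixture1" in getattr(item, "fixturenames", []):
            item.add_marker(skip_marker)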

Skipping a py.test fixture from command line

Given that this is my test code:
# conftest.py
import pytest

@pytest.fixture(scope='function')
def fixA(request):
    pass

@pytest.fixture(scope='function')
def fixB(request):
    pass

# test_my.py
import pytest

pytestmark = pytest.mark.usefixtures("fixA")

def test_something():
    pass
I want to be able to use fixB() instead of fixA(). Using fixB() is easy enough; I could add a pytest.ini like this:
[pytest]
usefixtures=fixB
but I couldn't figure out how to disable fixA from the command line or from the configuration.
Is my use case so far-fetched?
(The actual reason: I want to keep fixA working in our CI system, but for day-to-day work I need people to be able to disable it on their desks.)
I've found a way (after fighting it in the debugger; the py.test documentation isn't that clear about how fixtures are selected).
I added this to the conftest.py:
# conftest.py
def pytest_runtest_setup(item):
    for fixture in item.config.getini('ignorefixtures'):
        del item._fixtureinfo.name2fixturedefs[fixture]
        item._fixtureinfo.names_closure.remove(fixture)

def pytest_addoption(parser):
    parser.addini('ignorefixtures', 'fixtures to ignore', 'linelist')
and then I could put this in pytest.ini:
[pytest]
usefixtures=fixB
ignorefixtures=fixA
It would be nice to have this kind of thing available on the command line as well...
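For the command-line part, a possible extension of the same approach would be an --ignore-fixture option alongside the ini value. The option name is made up, and this still relies on the private _fixtureinfo attributes, so treat it as a sketch rather than a supported API:

# conftest.py
def pytest_addoption(parser):
    parser.addini('ignorefixtures', 'fixtures to ignore', 'linelist')
    parser.addoption('--ignore-fixture', action='append', default=[],
                     help='fixture name to ignore for this run')

def pytest_runtest_setup(item):
    # combine fixtures listed in pytest.ini with any given on the command line
    ignored = list(item.config.getini('ignorefixtures'))
    ignored += item.config.getoption('--ignore-fixture')
    for fixture in ignored:
        if fixture in item._fixtureinfo.name2fixturedefs:
            del item._fixtureinfo.name2fixturedefs[fixture]
            item._fixtureinfo.names_closure.remove(fixture)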

Py.test skip messages don't show in jenkins

I have a minor issue using py.test for my unit tests.
I use py.test to run my tests and output a junitxml report of the tests.
This XML report is imported into Jenkins and generates nice statistics.
When I use a test class which derives from unittest.TestCase,
I skip expected failures using:
@unittest.skip("Bug 1234 : This does not work")
This message also shows up in Jenkins when selecting this test.
When I don't use a unittest.TestCase class, e.g. to use py.test parametrize functionality,
I skip expected failures using:
@pytest.mark.xfail(reason="Bug 1234 : This does not work", run=False)
But then this reason is not actually displayed in Jenkins; instead it will say:
Skip Message
expected test failure
How can I fix this?
I solved it using this line as the first line of the test:
pytest.skip("Bug 1234: This does not work")
I'd rather have used one of the pytest decorators, but this'll do.
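A minimal sketch of that workaround with parametrize (the test name and parameters are illustrative); the skip reason is then carried into the junitxml report:

import pytest

@pytest.mark.parametrize("value", [1, 2, 3])
def test_known_failure(value):
    pytest.skip("Bug 1234 : This does not work")
    # the rest of the test body never runs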
I had a similar problem, except I had a different Jenkins message and could not tell which test was skipped.
It turns out that if the only test in the module is a skipped test, then Jenkins would not show the test in the test result list (using either the decorator or jr-be's solution). You could see that there was a skipped test in the total results, but could not tell which test or which module the skipped test was in.
To solve this (ok hack solve), I went back to using the decorator on my test and added a dummy test (so have 1 test that runs and 1 test that gets skipped):
@pytest.mark.skip('SONIC-3218')
def test_segments_create_delete(self, api):
    logging.info('TestCreateDeleteSegments.test_segments_create_delete')

def test_dummy(self, api):
    '''
    Dummy test to see if the suite will display in Jenkins if one
    test is run and 1 is skipped (instead of having only skipped tests)
    '''
    logging.info('TestCreateDeleteSegments.test_dummy')
For me that works since I would rather have 1 extra dummy test and be able to find my skipped tests.
