Show full plan in Pytest - python

I'm looking for a way in Pytest to show the full test and fixture plan instead of just listing the test cases via --collect-only.
This is the best I can get now:
TestClass1
    TestCase1
    TestCase2
TestClass2
    TestCase3
    TestCase4
This is what I'm looking for (should match the execution order):
Fixture1_Setup_ModuleScope
    Fixture2_Setup_ClassScope
    TestClass1
        Fixture3_Setup_FunctionScope
        TestCase1
        Fixture3_Teardown_FunctionScope
        TestCase2
    Fixture2_Teardown_ClassScope
    TestClass2
        TestCase3
        TestCase4
Fixture1_Teardown_ModuleScope
I looked around for a Pytest plugin that does this and none seems to provide it, not even by parsing the results, let alone generating the plan without running the tests. I understand it isn't needed for Pytest testing, but it's something I've come to like in one of our older internal test frameworks, if only for validating my intention against reality.
Am I missing some obvious solution here? How could I achieve this?

Have you tried pytest --setup-plan? It shows which fixtures and tests would be executed, but doesn't execute anything.
pytest --setup-plan
# ...
# assert_test.py
# assert_test.py::TestTest::test_test
# click_test.py
# click_test.py::test_echo_token
# fixture_test.py
# SETUP F env['dev']
# SETUP F folder['dev_data']
# fixture_test.py::test_are_folders_exist[dev-dev_data] (fixtures used: env, folder)
# TEARDOWN F folder['dev_data']
# TEARDOWN F env['dev']
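For comparison, here is a minimal sketch of a test module with module-, class-, and function-scoped fixtures (all names are illustrative, not taken from the question) that produces a plan close to the one described above:
# content of test_plan_demo.py -- illustrative sketch
import pytest

@pytest.fixture(scope="module")
def fixture1():
    yield  # module-scoped setup/teardown

@pytest.fixture(scope="class")
def fixture2():
    yield  # class-scoped setup/teardown

@pytest.fixture
def fixture3():
    yield  # function-scoped setup/teardown

class TestClass1:
    def test_case1(self, fixture1, fixture2, fixture3):
        pass

    def test_case2(self, fixture1, fixture2):
        pass

class TestClass2:
    def test_case3(self, fixture1):
        pass

    def test_case4(self, fixture1):
        pass
Running pytest --setup-plan test_plan_demo.py then prints SETUP/TEARDOWN lines for each fixture (with its scope letter) around the tests that request it, without executing any test bodies.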

Related

pytest exit on failure only for a specific file

I'm using the pytest framework to run tests that interface with a set of test instruments. I'd like to have a set of initial tests in a single file that will check the connection and configuration of these instruments. If any of these tests fail, I'd like to abort all future tests.
In the remaining tests and test files, I'd like pytest to continue if there are any test failures so that I get a complete report of all failed tests.
For example, the test run may look something like this:
pytest test_setup.py test_series1.py test_series2.py
In this example, I'd like pytest to exit if any tests in test_setup.py fail.
I would like to invoke the test session in a single call so that session based fixtures that I have only get called once. For example, I have a fixture that will connect to a power supply and configure it for the tests.
Is there a way to tell pytest to exit on any test only in a specific file? If I use the -x option it will not continue in subsequent tests.
Ideally, I'd prefer something like a decorator that tells pytest to exit if there is a failure. However, I have not seen anything like this.
Is this possible or am I thinking about this the wrong way?
Update
Based on the answer from pL3b, my final solution looks like this:
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    if 'critical' in [mark.name for mark in item.own_markers]:
        result = outcome.get_result()
        if result.when == "call" and result.failed:
            print('FAILED')
            pytest.exit('Exiting pytest due to critical test failure', 1)
I needed to inspect the result of the report in order to check whether the test actually failed; otherwise this would exit on every call.
Additionally, I needed to register my custom marker. I chose to also put this in the conftest.py file, like so:
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "critical: mark test as critical"
    )
You may use the following hook in your conftest.py to solve your problem:
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    yield
    if 'critical' in [mark.name for mark in item.own_markers]:
        pytest.exit('Exiting pytest')
Then just add the @pytest.mark.critical decorator to the desired tests/classes in test_setup.py.
Learn more about pytest hooks in the pytest documentation so you can define the desired output and so on.
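For completeness, a sketch of how the marker could be applied in test_setup.py (the test name and the instruments_are_connected helper are hypothetical):
# content of test_setup.py -- illustrative sketch
import pytest

@pytest.mark.critical
def test_instrument_connection():
    # if this assertion fails, the hook above calls pytest.exit() and aborts the session
    assert instruments_are_connected()  # hypothetical helper for the instrument check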

Using different parameters for different tests in Pytest

I am working with embedded firmware testing using Python 3.9 and Pytest. We work with multiple devices, and different tests run on different devices. It would be very nice to be able to reuse test fixtures for each device; however, I am running into difficulty parametrizing the test fixtures.
Currently I have something like this:
@pytest.fixture(scope="function", params=["device1", "device2"])
def connect(request):
    jlink.connect(request.param)

@pytest.mark.device1
def test_device1(connect):
    # test code

@pytest.mark.device2
def test_device2(connect):
    # test code
The behavior I would like is that param "device1" is used for test_device1, and param "device2" is used for test_device2. But the default Pytest behavior is to use all params for all tests, and I am struggling to find a way around this. Is there a way to specify which params to use for certain markers?
I should also mention, I am an embedded C developer and have been learning Python as I've worked on this project, so my general Python/Pytest knowledge may be a bit lacking.
EDIT: I think I found a workaround, but I'm not super happy with it. I have separated the tests for each device into different folders, and each folder has a device_cfg.json file. The test fixture opens the cfg file to know which device to connect to.
EDIT2: This doesn't even work because of Pytest scoping...
@pytest.mark.parametrize: parametrizing test functions
The builtin pytest.mark.parametrize decorator enables parametrization of arguments for a test function. Here is a typical example of a test function that implements checking that a certain input leads to an expected output:
# content of test_expectation.py
import pytest

@pytest.mark.parametrize("test_input,expected", [("3+5", 8), ("2+4", 6), ("6*9", 42)])
def test_eval(test_input, expected):
    assert eval(test_input) == expected
https://docs.pytest.org/en/7.1.x/how-to/parametrize.html
Hope it'll help you
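As a sketch of how parametrization could be bent toward the question (one device per test), indirect=True passes each parametrize value to the fixture through request.param; jlink and the device names are assumed from the question:
import pytest

@pytest.fixture
def connect(request):
    # request.param is supplied by @pytest.mark.parametrize(..., indirect=True)
    jlink.connect(request.param)  # jlink is assumed from the question's environment

@pytest.mark.parametrize("connect", ["device1"], indirect=True)
def test_device1(connect):
    ...  # test code for device1

@pytest.mark.parametrize("connect", ["device2"], indirect=True)
def test_device2(connect):
    ...  # test code for device2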

Test case Code coverage using Coverage.py and Pytest

I'm trying to get the executed code out of each Pytest test case (in order to build a debugger). This is possible with the Coverage module APIs; I use the following code:
cov.start()
assert nrm.mySqrt(x) == 3  # this is a test case using Pytest
cov.stop()
cov.json_report()
cov.erase()  # after this line I repeat the steps above
But this is not practical, and if I use the coverage run -m pytest fileName command, it will not give me a report for each test case. I'm trying to use the plugin to handle a special case: whenever an executed line includes an assert, I want to get a JSON report and then erase the data. Will I be able to do that, or should I try to find a way to embed the code above in each test case?
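One way to embed that pattern in every test without editing each test body is an autouse fixture in conftest.py. This is only a sketch, assuming coverage.py 5.0+ for json_report and a made-up per-test output file name:
# conftest.py -- sketch only
import coverage
import pytest

@pytest.fixture(autouse=True)
def per_test_coverage(request):
    cov = coverage.Coverage()
    cov.start()
    yield  # the test runs here
    cov.stop()
    # one JSON report per test; the naming scheme is hypothetical
    cov.json_report(outfile=f"coverage_{request.node.name}.json")
    cov.erase()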

Flask Testing - why does coverage exclude import statements and decorators?

My tests clearly execute each function, and there are no unused imports either. Yet, according to the coverage report, 62% of the code was never executed in the following file:
Can someone please point out what I might be doing wrong?
Here's how I initialise the test suite and the coverage:
cov = coverage(branch=True, omit=['website/*', 'run_test_suite.py'])
cov.start()
try:
    unittest.main(argv=[sys.argv[0]])
except:
    pass
cov.stop()
cov.save()
print "\n\nCoverage Report:\n"
cov.report()
print "HTML version: " + os.path.join(BASEDIR, "tmp/coverage/index.html")
cov.html_report(directory='tmp/coverage')
cov.erase()
This is the third question in the coverage.py FAQ:
Q: Why do the bodies of functions (or classes) show as executed, but the def lines do not?
This happens because coverage is started after the functions are defined. The definition lines are executed without coverage measurement, then coverage is started, then the function is called. This means the body is measured, but the definition of the function itself is not.
To fix this, start coverage earlier. If you use the command line to run your program with coverage, then your entire program will be monitored. If you are using the API, you need to call coverage.start() before importing the modules that define your functions.
The simplest thing to do is run your tests under coverage:
$ coverage run -m unittest discover
Your custom test script isn't doing much beyond what the coverage command line would do; it will be simpler to just use the command line.
For excluding the import statements, you can add the following lines to .coveragerc:
[report]
exclude_lines =
    # Ignore imports
    from
    import
But when I tried to add '@' for decorators, the source code within the scope of the decorators was also excluded, so the coverage rate was wrong. There may be other ways to exclude decorators.

Py.test skip messages don't show in jenkins

I have a minor issue using py.test for my unit tests.
I use py.test to run my tests and output a junitxml report of the tests.
This xml report is imported in jenkins and generates nice statistics.
When I use a test class which derives from unittest.TestCase,
I skip expected failures using:
@unittest.skip("Bug 1234 : This does not work")
This message also shows up in jenkins when selecting this test.
When I don't use a unittest.TestCase class, e.g. to use py.test parametrize functionality,
I skip expected failures using:
@pytest.mark.xfail(reason="Bug 1234 : This does not work", run=False)
But then this reason is not actually displayed in Jenkins; instead it will say:
Skip Message
expected test failure
How can I fix this?
I solved it using this line as the first line of the test:
pytest.skip("Bug 1234: This does not work")
I'd rather have used one of the pytest decorators, but this'll do.
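As a sketch, that approach looks like this in a parametrized test (the test name, parameters, and do_something helper are illustrative):
import pytest

@pytest.mark.parametrize("value", [1, 2, 3])
def test_feature(value):
    pytest.skip("Bug 1234: This does not work")  # this reason ends up in the junitxml skip message
    assert do_something(value)  # hypothetical code that would normally be tested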
I had a similar problem, except I had a different Jenkins message and could not tell which test was skipped.
It turns out that if the only test in the module is a skipped test, then Jenkins would not show the test in the test result list (using either the decorator or jr-be's solution). You could see that there was a skipped test in the total results, but could not tell which test or which module the skipped test was in.
To solve this (OK, hack around it), I went back to using the decorator on my test and added a dummy test (so I have one test that runs and one test that gets skipped):
@pytest.mark.skip('SONIC-3218')
def test_segments_create_delete(self, api):
    logging.info('TestCreateDeleteSegments.test_segments_create_delete')

def test_dummy(self, api):
    '''
    Dummy test to see if suite will display in jenkins if one
    test is run and 1 is skipped (instead of having only skipped tests)
    '''
    logging.info('TestCreateDeleteSegments.test_dummy')
For me that works since I would rather have 1 extra dummy test and be able to find my skipped tests.
