Mark test as passed from inside a fixture in pytest - python

With pytest.skip or pytest.xfail, I can mark a test as skipped or xfailed from inside the fixture. There is no pytest.pass, though. How can I mark it as passed?
import pytest

@pytest.fixture
def fixture():
    # pytest.skip()
    pytest.xfail()

def test(fixture):
    assert False

Unfortunately, I don't know of a way to pass tests from within a fixture, but you can call pytest.skip in your test with a message such as "pass"; the hook in conftest.py below checks for this "pass" message and marks the test as passed:
conftest.py

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(call):
    outcome = yield
    if outcome.get_result().outcome == 'skipped' and call.excinfo.value.msg == 'pass':
        outcome.get_result().outcome = 'passed'

test_skip.py

# -*- coding: utf-8 -*-
import pytest

def test_skip_pass():
    pytest.skip('pass')

def test_skip_pass_2():
    pytest.skip('skip')

result:
collecting ... collected 2 items
test_skip.py::test_skip_pass PASSED [ 50%]
test_skip.py::test_skip_pass_2 SKIPPED [100%]
Skipped: skip
======================== 1 passed, 1 skipped in 0.04s =========================

Related

Using mock.patch + parametrize in a Pytest Class Function

I have been working with FastAPI and have some async methods to generate an auth token.
Writing the unit tests, I'm getting the following error:
TypeError: test_get_auth_token() missing 2 required positional arguments: 'test_input' and 'expected_result'
My unit test looks like:
class TestGenerateAuthToken(IsolatedAsyncioTestCase):
    """
    """

    @pytest.mark.parametrize(
        "test_input,expected_result",
        [("user", "user_token"), ("admin", "admin_token")],
    )
    @mock.patch("myaauth.get_token", new_callable=AsyncMock)
    async def test_get_auth_token(self, get_token_mock, test_input, expected_result):
        """
        Test get_auth_header
        """

        def mock_generate_user_token(_type):
            return f"{_type}_token"

        get_token_mock.side_effect = mock_generate_user_token

        assert await myaauth.get_token(test_input) == expected_result
I know it would be as simple as removing the parametrize, but I want to know whether combining them is possible.
It is not related to mock.
The reason is that pytest.mark.parametrize is not compatible with unittest.IsolatedAsyncioTestCase.
Instead, you could try a pytest plugin such as pytest-asyncio to let pytest run the coroutine test function.
from unittest import mock
from unittest.mock import AsyncMock

import pytest

import myaauth

class TestGenerateAuthToken:
    @pytest.mark.parametrize(
        "test_input,expected_result",
        [("user", "user_token"), ("admin", "admin_token")],
    )
    @pytest.mark.asyncio
    @mock.patch("myaauth.get_token", new_callable=AsyncMock)
    async def test_get_auth_token(self, get_token_mock, test_input, expected_result):
        """
        Test get_auth_header
        """

        def mock_generate_user_token(_type):
            return f"{_type}_token"

        get_token_mock.side_effect = mock_generate_user_token

        assert await myaauth.get_token(test_input) == expected_result

-> % python -m pytest pytest_coroutine.py
=================================================================================================== test session starts ===================================================================================================
platform darwin -- Python 3.10.8, pytest-7.2.0, pluggy-1.0.0
rootdir: /Users/james/PycharmProjects/stackoverflow
plugins: asyncio-0.20.2
asyncio: mode=strict
collected 2 items
pytest_coroutine.py .. [100%]
==================================================================================================== 2 passed in 0.03s ====================================================================================================
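Since the rewritten class no longer inherits from IsolatedAsyncioTestCase, it only serves as a grouping container; the same test can also be written as a plain module-level coroutine function. A sketch under the same assumptions about the asker's myaauth module (pytest-asyncio installed, strict mode):

# pytest_coroutine.py -- sketch: the class is optional once pytest-asyncio drives the test
from unittest import mock
from unittest.mock import AsyncMock

import pytest

import myaauth

@pytest.mark.parametrize(
    "test_input,expected_result",
    [("user", "user_token"), ("admin", "admin_token")],
)
@pytest.mark.asyncio
@mock.patch("myaauth.get_token", new_callable=AsyncMock)
async def test_get_auth_token(get_token_mock, test_input, expected_result):
    # the patched myaauth.get_token echoes back "<type>_token", as in the answer above
    get_token_mock.side_effect = lambda _type: f"{_type}_token"
    assert await myaauth.get_token(test_input) == expected_result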

How do I run a fixture only when the test fails?

I have the following example:
conftest.py:
@pytest.fixture
def my_fixture_1(main_device):
    yield
    if FAILED:
        -- code lines --
    else:
        pass

main.py:

def my_test(my_fixture_1):
    main_device = ...
    -- code lines --
    assert 0
    -- code lines --
    assert 1
When assert 0 fails, for example, the test should fail and the code in my_fixture_1 should execute. If the test passes, the fixture code must not execute. I tried using hookimpl but didn't find a solution; the fixture code is always executing, even if the test passes.
Note that main_device is the device connected where my test is running.
You could use request as an argument to your fixture. From that, you can check the status of the corresponding test, i.e. whether it has failed or not. In case it failed, you can execute the code you want to run on failure. In code, that reads as:
@pytest.fixture
def my_fixture_1(request):
    yield
    if request.session.testsfailed:
        print("Only print if failed")
Of course, the fixture will always run but the branch will only be executed if the corresponding test failed.
In Simon Hawe's answer, request.session.testsfailed denotes the number of test failures in that particular test run.
Here is an alternative solution that I can think of.
import os

import pytest

@pytest.fixture(scope="module")
def main_device():
    return None

@pytest.fixture(scope='function', autouse=True)
def my_fixture_1(main_device):
    yield
    if os.environ["test_result"] == "failed":
        print("+++++++++ Test Failed ++++++++")
    elif os.environ["test_result"] == "passed":
        print("+++++++++ Test Passed ++++++++")
    elif os.environ["test_result"] == "skipped":
        print("+++++++++ Test Skipped ++++++++")

def pytest_runtest_logreport(report):
    if report.when == 'call':
        os.environ["test_result"] = report.outcome
You can do your implementation directly in the pytest_runtest_logreport hook itself, but the drawback is that you won't get access to any fixtures other than the report.
So, if you need main_device, you have to go with a custom fixture like the one shown above.
Using @pytest.fixture(scope='function', autouse=True) makes the fixture run automatically for every test case, so you don't have to pass it to each test function as an argument.
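A per-test variant is also possible without environment variables, along the lines of the "making test result information available in fixtures" example in the pytest documentation: stash each test's call-phase report on its item from a pytest_runtest_makereport hookwrapper and read it back in the fixture via request.node. A sketch (main_device omitted for brevity):

# conftest.py -- sketch: record each test's own call-phase report on its item
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == "call":
        item.rep_call = report  # stash the call-phase report for later inspection

@pytest.fixture(scope="function", autouse=True)
def my_fixture_1(request):
    yield
    # the teardown branch now depends only on this specific test's result
    rep = getattr(request.node, "rep_call", None)
    if rep is not None and rep.failed:
        print("+++++++++ Test Failed ++++++++")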

pytest.config.getoption alternative is failing

My setup is such that pytest test.py executes nothing, while pytest --first-test test.py executes the target function test_skip.
In order to determine whether a certain test should be conducted or not, this is what I have been using:
skip_first = pytest.mark.skipif(
    not (
        pytest.config.getoption("--first-test")
        or os.environ.get('FULL_AUTH_TEST')
    ), reason="Skipping live"
)

@skip_first
def test_skip():
    assert_something
Now that pytest.config.getoption is being deprecated, I am trying to update my code. This is what I have written:
@pytest.fixture
def skip_first(request):
    def _skip_first():
        return pytest.mark.skipif(
            not (
                request.config.getoption("--first-test")
                or os.environ.get('FULL_AUTH_TEST')),
            reason="Skipping"
        )
    return _skip_first()

# And, to call:
def test_skip(skip_first):
    assert 1==2
However, whether I run pytest test.py or pytest --first-test test.py, test_skip always executes. But skip_first seems to be working fine - inserting a print statement shows skip_first = MarkDecorator(mark=Mark(name='skipif', args=(False,), kwargs={'reason': 'Skipping first'})) when --first-test is given, and args=(True,) when it is not. (The same thing was observed with the first setup.)
Am I missing something?? I even tried returning the function _skip_first instead of its output in def skip_first, but it made no difference.
When using a test class, the manual indicates we need to use @pytest.mark.usefixtures("fixturename"), but that proved to be of no use either (with classes).
Ideas? This is my system: platform linux -- Python 3.6.7, pytest-4.0.2, py-1.7.0, pluggy-0.8.0
In order to cause a SKIP from a fixture, you must actually trigger the skip by calling pytest.skip() (which raises the skip exception); returning a skipif marker from a fixture has no effect. Here's an example using your code above:
import os

import pytest

@pytest.fixture
def skip_first(request):
    if (
            request.config.getoption("--first-test")
            or os.environ.get('FULL_AUTH_TEST')
    ):
        raise pytest.skip('Skipping!')

# And, to call:
def test_skip(skip_first):
    assert 1==2
If you want, you can almost replace your original code by doing:
@pytest.fixture
def skip_first_fixture(request): ...

skip_first = pytest.mark.usefixtures('skip_first_fixture')

@skip_first
def test_skip(): ...
Here's the execution showing this working:
$ pytest t.py -q
F [100%]
=================================== FAILURES ===================================
__________________________________ test_skip ___________________________________
skip_first = None

    def test_skip(skip_first):
>       assert 1==2
E       assert 1 == 2
E         -1
E         +2

t.py:16: AssertionError
1 failed in 0.03 seconds
$ pytest t.py --first-test -q
s [100%]
1 skipped in 0.01 seconds
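Note that for request.config.getoption("--first-test") to work at all, the custom option has to be registered; the posts above don't show that part, so here is a minimal sketch of the pytest_addoption hook one would put in conftest.py:

# conftest.py -- sketch: register the custom --first-test command line option
def pytest_addoption(parser):
    parser.addoption(
        "--first-test",
        action="store_true",
        default=False,
        help="toggle the tests gated by the skip_first fixture",
    )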

How can I access the overall test result of a pytest test run during runtime?

Dependent on the overall test result of a pytest test run, I would like to execute a conditional tear down. This means the access to the overall test result must happen after all tests have been executed, but before the test runner exits. How can I achieve this?
I could not find a suitable pytest hook to access the overall test result yet.
You don't need one; just collect the test results yourself. This is the blueprint I usually use when in need of accessing the test results in batch:
# conftest.py
import pytest

def pytest_sessionstart(session):
    session.results = dict()

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    result = outcome.get_result()
    if result.when == 'call':
        item.session.results[item] = result
Now all test results are stored in the session.results dict; example usage:
# conftest.py (continued)

def pytest_sessionfinish(session, exitstatus):
    print()
    print('run status code:', exitstatus)
    passed_amount = sum(1 for result in session.results.values() if result.passed)
    failed_amount = sum(1 for result in session.results.values() if result.failed)
    print(f'there are {passed_amount} passed and {failed_amount} failed tests')
Running the tests will yield:
$ pytest -sv
================================== test session starts ====================================
platform darwin -- Python 3.6.4, pytest-3.7.1, py-1.5.3, pluggy-0.7.1 -- /Users/hoefling/.virtualenvs/stackoverflow/bin/python3.6
cachedir: .pytest_cache
rootdir: /Users/hoefling/projects/private/stackoverflow/so-51711988, inifile:
collected 3 items
test_spam.py::test_spam PASSED
test_spam.py::test_eggs PASSED
test_spam.py::test_fail FAILED
run status code: 1
there are 2 passed and 1 failed tests
======================================== FAILURES =========================================
_______________________________________ test_fail _________________________________________
    def test_fail():
>       assert False
E       assert False
test_spam.py:10: AssertionError
=========================== 1 failed, 2 passed in 0.05 seconds ============================
In case the overall pytest exit code (exitstatus) is sufficient info (info about # passed, # failed, etc. not required) use the following:
# conftest.py

def pytest_sessionfinish(session, exitstatus):
    print()
    print('run status code:', exitstatus)
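Applied to the original question, the conditional tear down can then live in the same hook. A sketch, where do_extra_cleanup() is a hypothetical stand-in for whatever teardown should only run after an unsuccessful run:

# conftest.py -- sketch: only run extra teardown when the run was not fully successful
def pytest_sessionfinish(session, exitstatus):
    if exitstatus != 0:      # any non-zero status means failures (or other problems) occurred
        do_extra_cleanup()   # hypothetical teardown for the failure case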
Accessing the error
You can access the error details from the call.excinfo object:
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    result = outcome.get_result()
    if result.outcome == "failed":
        exception = call.excinfo.value
        exception_class = call.excinfo.type
        exception_class_name = call.excinfo.typename
        exception_type_and_message_formatted = call.excinfo.exconly()
        exception_traceback = call.excinfo.traceback
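As a usage sketch (not part of the original answer), the formatted messages could, for example, be collected per test and summarized at the end of the session:

# conftest.py -- sketch: collect formatted errors and summarize them at session end
import pytest

errors = {}

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    result = outcome.get_result()
    if result.when == "call" and result.outcome == "failed":
        errors[item.nodeid] = call.excinfo.exconly()  # e.g. "AssertionError: assert False"

def pytest_sessionfinish(session, exitstatus):
    for nodeid, message in errors.items():
        print(f"{nodeid}: {message}")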

PyTest : Allow Failure Rate

I am currently working on a project where we are running a large suite of parameterized tests (>1M). The tests are randomly generated use cases, and in this large test space it is expected that in each run certain edge cases will fail, ~1-2%. Is there an implementation for pytest where you can pass a failure-rate argument, or otherwise handle this behavior?
I guess what you want is to modify the exit status of the pytest command; the pytest_sessionfinish hook can do this.
Consider you have the following tests:
def test_spam():
    assert 0

def test_ham():
    pass

def test_eggs():
    pass
and a hook in conftest.py:
import pytest
import _pytest.main

ACCEPTABLE_FAILURE_RATE = 50

@pytest.hookimpl()
def pytest_sessionfinish(session, exitstatus):
    if exitstatus != _pytest.main.EXIT_TESTSFAILED:
        return
    failure_rate = (100.0 * session.testsfailed) / session.testscollected
    if failure_rate <= ACCEPTABLE_FAILURE_RATE:
        session.exitstatus = 0
then invoke pytest:
$ pytest --tb=no -q tests.py
F.. [100%]
1 failed, 2 passed in 0.06 seconds
here the failure rate is 1 / 3 == 33.3%, below 50%:
$ echo $?
0
You can see that the exit status of pytest is 0.
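Since the question asks for a failure-rate argument, the threshold could also come from a command line option instead of a module-level constant. A sketch (the option name --acceptable-failure-rate is made up here; _pytest.main.EXIT_TESTSFAILED is kept from the answer above, newer pytest versions expose it as pytest.ExitCode.TESTS_FAILED):

# conftest.py -- sketch: make the acceptable failure rate configurable per run
import _pytest.main
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--acceptable-failure-rate",
        type=float,
        default=0.0,
        help="treat the run as successful if at most this percentage of tests failed",
    )

@pytest.hookimpl()
def pytest_sessionfinish(session, exitstatus):
    if exitstatus != _pytest.main.EXIT_TESTSFAILED:
        return
    acceptable = session.config.getoption("--acceptable-failure-rate")
    failure_rate = (100.0 * session.testsfailed) / session.testscollected
    if failure_rate <= acceptable:
        session.exitstatus = 0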
