When using @pytest.mark.parametrize('arg', param), is there a way to find out whether the last item in param is being run? The reason I'm asking is that I want to run a cleanup function, unique to that test, that should only run after the very last iteration of param.
param = [(1, 'James'), (2, 'John')]

@pytest.mark.parametrize('id, user', param)
def test_myfunc(id, user):
    # Do something
    print(user)

    # Run this only after the last param, which would be after (2, 'John')
    print('All done!')
I can add a conditional that checks the value of param (a sketch of that is shown below), but I was just wondering whether pytest has a built-in way to do this.
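For reference, that manual conditional might look something like this (a sketch based only on the example above; it works, but it couples the test body to the parameter list):

import pytest

param = [(1, 'James'), (2, 'John')]

@pytest.mark.parametrize('id, user', param)
def test_myfunc(id, user):
    print(user)
    # manual check: only the final parametrized run matches the last tuple in param
    if (id, user) == param[-1]:
        print('All done!')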
You'll need to perform this logic within a pytest hook, specifically the pytest_runtest_teardown hook.
Assuming your test looks like the following,
import pytest

param = [(1, 'James'), (2, 'John')]

@pytest.mark.parametrize('id, user', param)
def test_myfunc(id, user):
    print(f"Iteration number {id}")
In the root of your test folder, create a conftest.py file and place the following,
func_of_interest = "test_myfunc"

def pytest_runtest_teardown(item, nextitem):
    curr_name = item.function.__qualname__
    # check to see if it is the function we want
    if curr_name == func_of_interest:
        # check to see if there are any more functions after this one
        if nextitem is not None:
            next_name = nextitem.function.__qualname__
        else:
            next_name = "random_name"
        # check to see if the next item is a different function
        if curr_name != next_name:
            print("\nPerform some teardown once")
Then when we run it, it produces the following output,
===================================== test session starts ======================================
platform darwin -- Python 3.9.1, pytest-6.2.2, py-1.10.0, pluggy-0.13.1
cachedir: .pytest_cache
rootdir: ***
collected 2 items
test_grab.py::test_myfunc[1-James] Iteration number 1
PASSED
test_grab.py::test_myfunc[2-John] Iteration number 2
PASSED
Perform some teardown once
As we can see, the teardown logic was called exactly once, after the final iteration of the test call.
Could you please explain why a fixture with scope "function" (which is supposed to run anew for each test) runs once for the whole test class?
@pytest.fixture(scope="function")
def application_with_assigned_task_id(api, db, application_with_tasks_id, set_user_with_st_types):
    with allure.step("look up the task by the application id"):
        task = api.grpc.express_shipping.list_tasks(
            page_size=100, page_number=1, eams_id=application_with_tasks_id
        )['tasks'][0]
        task_id = task["id"]
    with allure.step("assign the task to the user"):
        db.rezon.upd_user_warehouse(set_user_with_st_types, 172873)
        db.rezon.storetask_upd_user(task_id, set_user_with_st_types)
    with allure.step("verify the task assignment"):
        res = api.grpc.express_shipping.get_task(id=int(task_id))
        assert res["storekeeper"] == db.rezon.get_rezon_user_full_name_by_id(set_user_with_st_types)
    return application_with_tasks_id
The application_with_tasks_id fixture has function scope as well; set_user_with_st_types has session scope (which I do not wish to change). What can I do?
I tried setting the scope explicitly, even though I thought it was normal for the fixture to run anew for each test by default.
Setting the scope did not work.
This simple example clearly shows that the fixture is executed for each test function and nothing more (the output file will contain two lines; the expected content is shown after the example). If that is not the case for you, please follow hoefling's comment and create a reproducible example (your code is both incomplete and contains too many irrelevant things).
import pytest

@pytest.fixture  # (scope="function") is the default
def setup():
    f = open("/tmp/log", "a")
    print("One setup", file=f)
    f.close()

def test_one(setup):
    assert True

def test_two(setup):
    assert True
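Assuming /tmp/log starts out empty, after running both tests it will contain one line per test, i.e.:

One setup
One setup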
I have the following example:
conftest.py:
@pytest.fixture
def my_fixture_1(main_device):
    yield
    if FAILED:  # pseudocode: only when the test that used this fixture failed
        ...  # -- code lines --
    else:
        pass
main.py:
def my_test(my_fixture_1):
    main_device = ...
    # -- code lines --
    assert 0
    # -- code lines --
    assert 1
When assert 0 fails, for example, the test should fail and the code in my_fixture_1 should execute. If the test passes, that code must not execute. I tried using hookimpl but didn't find a solution; the fixture is always executing, even if the test passes.
Note that main_device is the connected device on which my test is running.
You could use request as an argument to your fixture. From that, you can check the status of the corresponding tests, i.e. whether they have failed or not. In case of failure, you can execute the code you want to run on failure. In code, that reads as:
@pytest.fixture
def my_fixture_1(request):
    yield
    if request.session.testsfailed:
        print("Only print if failed")
Of course, the fixture will always run but the branch will only be executed if the corresponding test failed.
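A per-test variant of this idea (not from the answer above) can attach each test's report to the test item, following the report-attaching hook pattern from pytest's documentation; note that the rep_call attribute name below is chosen by this sketch, not provided by pytest:

# conftest.py
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    rep = outcome.get_result()
    # attach each phase's report to the test item, e.g. item.rep_call
    setattr(item, "rep_" + rep.when, rep)

@pytest.fixture
def my_fixture_1(request):
    yield
    # react only to a failure of *this* test, not of the whole session
    if hasattr(request.node, "rep_call") and request.node.rep_call.failed:
        print("Only print if this test failed")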
In Simon Hawe's answer, request.session.testsfailed denotes the number of test failures in that particular test run as a whole, not just for the current test.
Here is an alternative solution that I can think of.
import os

import pytest

@pytest.fixture(scope="module")
def main_device():
    return None

@pytest.fixture(scope='function', autouse=True)
def my_fixture_1(main_device):
    yield
    if os.environ["test_result"] == "failed":
        print("+++++++++ Test Failed ++++++++")
    elif os.environ["test_result"] == "passed":
        print("+++++++++ Test Passed ++++++++")
    elif os.environ["test_result"] == "skipped":
        print("+++++++++ Test Skipped ++++++++")

def pytest_runtest_logreport(report):
    if report.when == 'call':
        os.environ["test_result"] = report.outcome
You can do your implementations directly in the pytest_runtest_logreport hook itself, but the drawback is that you won't get access to any fixtures other than the report (a minimal sketch of that variant follows below).
So, if you need main_device, you have to go with a custom fixture as shown above.
Use @pytest.fixture(scope='function', autouse=True), which automatically runs the fixture for every test case, so you don't have to request it in every test function as an argument.
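For reference, a minimal sketch of the direct-hook variant mentioned above (only the report is available here, no fixtures such as main_device):

# conftest.py
def pytest_runtest_logreport(report):
    if report.when == 'call' and report.outcome == 'failed':
        print(f"+++++++++ {report.nodeid} Failed ++++++++")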
My setup is such that pytest test.py executes nothing, while pytest --first-test test.py executes the target function test_skip.
In order to determine whether a certain test should be conducted or not, this is what I have been using:
skip_first = pytest.mark.skipif(
    not (
        pytest.config.getoption("--first-test")
        or os.environ.get('FULL_AUTH_TEST')
    ), reason="Skipping live"
)

@skip_first
def test_skip():
    assert_something
Now that pytest.config.getoption is being deprecated, I am trying to update my code. This is what I have written:
@pytest.fixture
def skip_first(request):
    def _skip_first():
        return pytest.mark.skipif(
            not (
                request.config.getoption("--first-test")
                or os.environ.get('FULL_AUTH_TEST')),
            reason="Skipping"
        )
    return _skip_first()

# And, to call:
def test_skip(skip_first):
    assert 1==2
However, whether I run pytest test.py or pytest --first-test test.py, test_skip always executes. The skip_first fixture itself seems to be working fine: inserting a print statement shows skip_first = MarkDecorator(mark=Mark(name='skipif', args=(False,), kwargs={'reason': 'Skipping first'})) when --first-test is given, and args=(True,) when it is not. (The same thing was observed when using the first setup.)
Am I missing something? I even tried returning the function _skip_first instead of its output in skip_first, but it made no difference.
When using a test class, the manual indicates we need to use @pytest.mark.usefixtures("fixturename"), but that proved to be of no use either (with classes).
Ideas? This is my system: platform linux -- Python 3.6.7, pytest-4.0.2, py-1.7.0, pluggy-0.8.0
In order to cause a SKIP from a fixture, you must raise pytest.skip. Here's an example using your code above:
import os
import pytest

@pytest.fixture
def skip_first(request):
    if (
        request.config.getoption("--first-test")
        or os.environ.get('FULL_AUTH_TEST')
    ):
        raise pytest.skip('Skipping!')

# And, to call:
def test_skip(skip_first):
    assert 1==2
If you want, you can almost replace your original code by doing:
@pytest.fixture
def skip_first_fixture(request): ...

skip_first = pytest.mark.usefixtures('skip_first_fixture')

@skip_first
def test_skip(): ...
Here's the execution showing this working:
$ pytest t.py -q
F [100%]
=================================== FAILURES ===================================
__________________________________ test_skip ___________________________________
skip_first = None
def test_skip(skip_first):
> assert 1==2
E assert 1 == 2
E -1
E +2
t.py:16: AssertionError
1 failed in 0.03 seconds
$ pytest t.py --first-test -q
s [100%]
1 skipped in 0.01 seconds
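For completeness: the custom --first-test flag has to be registered for either of these runs to work. The question implies an equivalent already exists; a minimal sketch would be:

# conftest.py
def pytest_addoption(parser):
    parser.addoption(
        "--first-test", action="store_true", default=False,
        help="run tests that are skipped by default"
    )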
Depending on the overall test result of a pytest run, I would like to execute a conditional teardown. This means the overall test result must be accessed after all tests have been executed, but before the test runner exits. How can I achieve this?
I could not find a suitable pytest hook to access the overall test result.
You don't need one; just collect the test results yourself. This is the blueprint I usually use when in need of accessing the test results in batch:
# conftest.py
import pytest

def pytest_sessionstart(session):
    session.results = dict()

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    result = outcome.get_result()
    if result.when == 'call':
        item.session.results[item] = result
Now all test results are stored under session.results dict; example usage:
# conftest.py (continued)
def pytest_sessionfinish(session, exitstatus):
    print()
    print('run status code:', exitstatus)
    passed_amount = sum(1 for result in session.results.values() if result.passed)
    failed_amount = sum(1 for result in session.results.values() if result.failed)
    print(f'there are {passed_amount} passed and {failed_amount} failed tests')
Running the tests will yield:
$ pytest -sv
================================== test session starts ====================================
platform darwin -- Python 3.6.4, pytest-3.7.1, py-1.5.3, pluggy-0.7.1 -- /Users/hoefling/.virtualenvs/stackoverflow/bin/python3.6
cachedir: .pytest_cache
rootdir: /Users/hoefling/projects/private/stackoverflow/so-51711988, inifile:
collected 3 items
test_spam.py::test_spam PASSED
test_spam.py::test_eggs PASSED
test_spam.py::test_fail FAILED
run status code: 1
there are 2 passed and 1 failed tests
======================================== FAILURES =========================================
_______________________________________ test_fail _________________________________________
def test_fail():
> assert False
E assert False
test_spam.py:10: AssertionError
=========================== 1 failed, 2 passed in 0.05 seconds ============================
In case the overall pytest exit code (exitstatus) is sufficient info (the number of passed and failed tests is not required), use the following:
# conftest.py
def pytest_sessionfinish(session, exitstatus):
    print()
    print('run status code:', exitstatus)
Accessing the error
You can access the error details from the call.excinfo object:
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    result = outcome.get_result()
    if result.outcome == "failed":
        exception = call.excinfo.value
        exception_class = call.excinfo.type
        exception_class_name = call.excinfo.typename
        exception_type_and_message_formatted = call.excinfo.exconly()
        exception_traceback = call.excinfo.traceback
When a test fails, there's an output indicating the context of the test, e.g.
=================================== FAILURES ===================================
______________________________ Test.test_sum_even ______________________________
numbers = [2, 4, 6]
    @staticmethod
    def test_sum_even(numbers):
        assert sum(numbers) % 2 == 0
>       assert False
E       assert False
test_preprocessing.py:52: AssertionError
What if I want the same thing for passed tests as well, so that I can quickly check that the parameters passed to the tests are correct?
I tried command-line options like --full-trace, -l, --tb=long, and -rpP, but none of them works.
Any idea?
Executing pytest with the --verbose flag will cause it to list the fully qualified name of every test as it executes, e.g.,:
tests/dsl/test_ancestor.py::TestAncestor::test_finds_ancestor_nodes
tests/dsl/test_and.py::TestAnd::test_aliased_as_ampersand
tests/dsl/test_and.py::TestAnd::test_finds_all_nodes_in_both_expressions
tests/dsl/test_anywhere.py::TestAnywhere::test_finds_all_nodes_when_no_arguments_given_regardless_of_the_context
tests/dsl/test_anywhere.py::TestAnywhere::test_finds_multiple_kinds_of_nodes_regardless_of_the_context
tests/dsl/test_anywhere.py::TestAnywhere::test_finds_nodes_regardless_of_the_context
tests/dsl/test_axis.py::TestAxis::test_finds_nodes_given_the_xpath_axis
tests/dsl/test_axis.py::TestAxis::test_finds_nodes_given_the_xpath_axis_without_a_specific_tag
If you are just asking for standard output from passed test cases, then you need to pass the -s option to pytest to prevent capturing of standard output. More info about standard output suppression is available in the pytest docs.
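For example, with a parametrized test in a hypothetical test_sum.py, running pytest -v -s shows the parameter values in each test id and any print() output, even for passing tests:

import pytest

@pytest.mark.parametrize('numbers', [[2, 4, 6], [10, 4]])
def test_sum_even(numbers):
    print('checking', numbers)  # shown because of -s, even though the test passes
    assert sum(numbers) % 2 == 0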
pytest doesn't have this functionality. What it does is show you the error from the exception when an assertion fails.
A workaround is to explicitly include the information you want to see from the passing tests by using Python's logging module, and then use the caplog fixture from pytest.
For example, one version of a func.py could be:
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger('my-even-logger')

def is_even(numbers):
    res = sum(numbers) % 2 == 0
    if res is True:
        log.warning('Sum is Even')
    else:
        log.warning('Sum is Odd')
    # ... do stuff ...
and then a test_func.py:
import logging

import pytest

from func import is_even

@pytest.fixture
def my_list():
    numbers = [2, 4, 6]
    return numbers

def test_is_even(caplog, my_list):
    with caplog.at_level(logging.DEBUG, logger='my-even-logger'):
        is_even(my_list)
    assert 'Even' in caplog.text
If you run pytest -s test_func.py, then since the test passes, the logger shows you the following message:
test_func.py WARNING:my-even-logger:Sum is Even