How do I run a fixture only when the test fails? - python

I have the following example:
conftest.py:
@pytest.fixture
def my_fixture_1(main_device):
    yield
    if FAILED:
        -- code lines --
    else:
        pass
main.py:
def my_test(my_fixture_1):
    main_device = ...
    -- code lines --
    assert 0
    -- code lines --
    assert 1
When assert 0 fails, for example, the test should fail and my_fixture_1 should run its failure code. If the test passes, the fixture must not run it. I tried using hookimpl but didn't find a solution; the fixture always executes even if the test passes.
Note that main_device is the connected device that my test runs on.

You could use request as an argument to your fixture. From that, you can check the status of the corresponding test, i.e. whether it has failed or not. In case it failed, you can execute the code you want to run on failure. In code, that reads as:
@pytest.fixture
def my_fixture_1(request):
    yield
    if request.session.testsfailed:
        print("Only print if failed")
Of course, the fixture will always run but the branch will only be executed if the corresponding test failed.
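Note that request.session.testsfailed counts failures across the whole session, so the branch also runs after a passing test if an earlier test already failed. If you only care about the test that the fixture is attached to, a sketch along the lines of the pytest docs' "making test result information available in fixtures" recipe (the same hookwrapper used in a later answer on this page) could look like this:
# conftest.py -- a minimal sketch, assuming you can add this hookwrapper
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # attach the report of each phase ("setup", "call", "teardown") to the item
    outcome = yield
    rep = outcome.get_result()
    setattr(item, "rep_" + rep.when, rep)

@pytest.fixture
def my_fixture_1(request):
    yield
    # rep_call was attached by the hook above and describes this test only
    rep_call = getattr(request.node, "rep_call", None)
    if rep_call is not None and rep_call.failed:
        print("Only print if this particular test failed")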

In Simon Hawe's answer, request.session.testsfailed denotes the number of test failures in that particular test run.
Here is an alternative solution that I can think of.
# conftest.py
import os

import pytest

@pytest.fixture(scope="module")
def main_device():
    return None

@pytest.fixture(scope="function", autouse=True)
def my_fixture_1(main_device):
    yield
    if os.environ["test_result"] == "failed":
        print("+++++++++ Test Failed ++++++++")
    elif os.environ["test_result"] == "passed":
        print("+++++++++ Test Passed ++++++++")
    elif os.environ["test_result"] == "skipped":
        print("+++++++++ Test Skipped ++++++++")

def pytest_runtest_logreport(report):
    if report.when == "call":
        os.environ["test_result"] = report.outcome
You can do your implementation directly in the pytest_runtest_logreport hook itself, but the drawback is that you won't have access to any fixtures there, only to the report.
So, if you need main_device, you have to go with a custom fixture as shown above.
Using @pytest.fixture(scope="function", autouse=True) makes the fixture run automatically for every test case, so you don't have to pass main_device to every test function as an argument.
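For illustration, a hypothetical test module (file and test names are made up) that relies on the autouse fixture above could look like this; neither test has to request my_fixture_1 explicitly:
# test_example.py -- hypothetical names, just to show the autouse behaviour
def test_passes(main_device):
    assert 1 + 1 == 2   # my_fixture_1's teardown prints "Test Passed"

def test_fails(main_device):
    assert False        # my_fixture_1's teardown prints "Test Failed"
The "call" report is logged before fixture teardown runs, so the environment variable already holds this test's outcome when my_fixture_1 checks it.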

Related

pytest fixture runs for class instead of each tests separately

Could you please explain why a fixture with function scope (which is supposed to run anew for each test) runs only once for the whole test class?
@pytest.fixture(scope="function")
def application_with_assigned_task_id(api, db, application_with_tasks_id, set_user_with_st_types):
    with allure.step("look up the task by application id"):
        task = api.grpc.express_shipping.list_tasks(
            page_size=100, page_number=1, eams_id=application_with_tasks_id)['tasks'][0]
        task_id = task["id"]
    with allure.step("assign the task to the user"):
        db.rezon.upd_user_warehouse(set_user_with_st_types, 172873)
        db.rezon.storetask_upd_user(task_id, set_user_with_st_types)
    with allure.step("verify the task assignment"):
        res = api.grpc.express_shipping.get_task(id=int(task_id))
        assert res["storekeeper"] == db.rezon.get_rezon_user_full_name_by_id(set_user_with_st_types)
    return application_with_tasks_id
The application_with_tasks_id fixture has function scope as well; set_user_with_st_types has session scope (which I do not wish to change). What can I do?
I tried setting the scope explicitly, even though I thought it was normal for the fixture to run anew for each test by default.
Setting the scope did not help.
This simple example clearly shows that the fixture is executed for each test function and nothing else (the output file will contain two lines). If that is not the case for you, please follow hoefling's comment and create a reproducible example (your code is both incomplete and contains too many irrelevant things).
import pytest

@pytest.fixture  # (scope="function") is the default
def setup():
    f = open("/tmp/log", "a")
    print("One setup", file=f)
    f.close()

def test_one(setup):
    assert True

def test_two(setup):
    assert True
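As a sanity check, running pytest with --setup-show prints the SETUP and TEARDOWN of every fixture around each test, which makes it easy to see whether application_with_assigned_task_id is really created once per test or whether one of its broader-scoped dependencies is caching the value you observe.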

How to patch or mock subsequent API calls in python unit testing?

I have the following function:
def test_dir_creation():
    if not os.path.exists(root_dir_path):
        raise Exception("Root dir doesn't exist")
    if not os.path.exists(log_dir_path):
        raise Exception("Log dir doesn't exist")
    if not os.path.exists(subscription_handle_dir_path):
        raise Exception("subscription_handle_dir_path dir doesn't exist")
I want to test all three conditions. For that, my test case is as follows:
@mock.patch("os.path.exists", return_value=False)
def test_args_parser_when_root_dir_dont_exists(*mocks):
    with pytest.raises(Exception) as excinfo:
        args_parser()
    expected_message = "Root dir doesn't exist"
    assert expected_message in str(excinfo)
This test case works, but I want to test the other two conditions as well. How do I do that?
How can I patch something like os.path.exists(log_dir_path) == False?
You're mocking the call for the entire function in your example. If you want to have the mock change behavior, I would recommend using a context manager.
import os.path
from unittest import mock

def do_test():
    with mock.patch('os.path.exists', return_value='hello'):
        assert os.path.exists() == 'hello'
    with mock.patch('os.path.exists', return_value=-5):
        assert os.path.exists() == -5
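If you would rather keep a single patch whose result changes per call, mock's side_effect also works: it accepts either an iterable of successive return values or a callable that receives the same arguments as the patched function. A sketch, assuming the function and paths live in a module you can import (mymodule is a hypothetical name):
from unittest import mock

import pytest

import mymodule  # hypothetical module containing test_dir_creation and log_dir_path

# Successive return values: the first os.path.exists call (root dir) returns True,
# the second (log dir) returns False, so the "Log dir" exception is raised.
@mock.patch("os.path.exists", side_effect=[True, False])
def test_log_dir_missing(mock_exists):
    with pytest.raises(Exception, match="Log dir"):
        mymodule.test_dir_creation()

# Or decide per argument: every path "exists" except the log dir.
@mock.patch("os.path.exists", side_effect=lambda path: path != mymodule.log_dir_path)
def test_only_log_dir_missing(mock_exists):
    with pytest.raises(Exception, match="Log dir"):
        mymodule.test_dir_creation()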

How to have a teardown function in module scope based on running test result

I want to clean up some files after all tests pass. If they fail, keep them for debug. I read https://docs.pytest.org/en/latest/example/simple.html#making-test-result-information-available-in-fixtures so I have the following in my conftest.py:
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # execute all other hooks to obtain the report object
    outcome = yield
    rep = outcome.get_result()
    # set a report attribute for each phase of a call, which can
    # be "setup", "call", "teardown"
    setattr(item, "rep_" + rep.when, rep)

@pytest.fixture(scope="module", autouse=True)
def teardown(request):
    yield
    # request.node is an "item" because we use the default
    # "function" scope
    if request.node.rep_setup.failed:
        print("setting up a test failed!", request.node.nodeid)
    elif request.node.rep_setup.passed:
        pass  # clean up my files
However, I get the error:
AttributeError: 'Module' object has no attribute 'rep_setup'
The only difference from the doc example is that my teardown has scope="module". I have to do this because I want to clean up files only after all tests pass; some files are used by all tests. If I use the default "function" scope, it cleans up after each test case rather than after the whole module. How can I fix this?
Update: before I added the hook, I already had the module-level teardown and it worked fine, i.e. it cleaned up all files after all tests had run; the only problem was that it cleaned up regardless of whether the tests passed or failed.
If you are in module scope, request.node represents the module, not a single test. If you just want to check for failed tests, you can check the session:
@pytest.fixture(scope="module", autouse=True)
def teardown(request):
    yield
    if request.session.testsfailed > 0:
        print(f"{request.session.testsfailed} test(s) failed!")
    else:
        pass  # clean up my files
I'm not sure whether any information about setup failures is available in the request at this point, in case you are only interested in those.
If so, you could implement a function-scoped fixture that sets a flag in case of a setup failure, and use that, something like:
SETUP_FAILED = False

@pytest.fixture(autouse=True)
def teardown_test(request):
    yield
    if request.node.rep_setup.failed:
        global SETUP_FAILED
        SETUP_FAILED = True

@pytest.fixture(scope="module", autouse=True)
def teardown_module():
    global SETUP_FAILED
    SETUP_FAILED = False
    yield
    if SETUP_FAILED:
        print("At least one test setup failed!")
    else:
        pass  # clean up my files
This is not nice, and maybe someone knows a better solution, but it will work.
You could also collect information about the tests where the setup failed if needed.
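Since the hookwrapper from the question already attaches rep_setup/rep_call to every item, another option (a rough sketch, not part of the original answer) is to inspect those attributes from the module-scoped fixture via request.session.items:
import pytest

@pytest.fixture(scope="module", autouse=True)
def teardown(request):
    yield
    # rep_setup / rep_call were attached by the pytest_runtest_makereport
    # hookwrapper from the question; the items of this module have all run by now
    items = [it for it in request.session.items if it.module is request.module]
    failed = [
        it for it in items
        if (getattr(it, "rep_setup", None) and it.rep_setup.failed)
        or (getattr(it, "rep_call", None) and it.rep_call.failed)
    ]
    if failed:
        print("keeping files, failed tests:", [it.nodeid for it in failed])
    else:
        pass  # clean up my files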

Show exhaustive information for passed tests in pytest

When a test fails, there's an output indicating the context of the test, e.g.
=================================== FAILURES ===================================
______________________________ Test.test_sum_even ______________________________

numbers = [2, 4, 6]

    @staticmethod
    def test_sum_even(numbers):
        assert sum(numbers) % 2 == 0
>       assert False
E       assert False

test_preprocessing.py:52: AssertionError
What if I want the same thing for passed tests as well, so that I can quickly check that the parameters passed to the tests are correct?
I tried command-line options like --full-trace, -l, --tb=long, and -rpP, but none of them works.
Any idea?
Executing pytest with the --verbose flag will cause it to list the fully qualified name of every test as it executes, e.g.,:
tests/dsl/test_ancestor.py::TestAncestor::test_finds_ancestor_nodes
tests/dsl/test_and.py::TestAnd::test_aliased_as_ampersand
tests/dsl/test_and.py::TestAnd::test_finds_all_nodes_in_both_expressions
tests/dsl/test_anywhere.py::TestAnywhere::test_finds_all_nodes_when_no_arguments_given_regardless_of_the_context
tests/dsl/test_anywhere.py::TestAnywhere::test_finds_multiple_kinds_of_nodes_regardless_of_the_context
tests/dsl/test_anywhere.py::TestAnywhere::test_finds_nodes_regardless_of_the_context
tests/dsl/test_axis.py::TestAxis::test_finds_nodes_given_the_xpath_axis
tests/dsl/test_axis.py::TestAxis::test_finds_nodes_given_the_xpath_axis_without_a_specific_tag
If you are just asking for standard output from passed test cases, then you need to pass the -s option to pytest to prevent capturing of standard output. More info about standard output suppression is available in the pytest docs.
pytest doesn't have this functionality. What it does is show you the error from the exception when an assertion fails.
A workaround is to explicitly log the information you want to see from the passing tests using Python's logging module, and then use the caplog fixture from pytest.
For example, one version of func.py could be:
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger('my-even-logger')

def is_even(numbers):
    res = sum(numbers) % 2 == 0
    if res is True:
        log.warning('Sum is Even')
    else:
        log.warning('Sum is Odd')
    # ... do stuff ...
and then a test_func.py:
import logging

import pytest

from func import is_even

@pytest.fixture
def my_list():
    numbers = [2, 4, 6]
    return numbers

def test_is_even(caplog, my_list):
    with caplog.at_level(logging.DEBUG, logger='my-even-logger'):
        is_even(my_list)
    assert 'Even' in caplog.text
If you run pytest -s test_func.py, then since the test passes, the logger shows you the following message:
test_func.py WARNING:my-even-logger:Sum is Even

Pytest Testrail Module - Post Test Results for Test Runs

I am trying to use the pytest testrail module and started with this demo script:
import pytest
from pytest_testrail.plugin import testrail

@testrail('C165')
def test_run():
    print("T165:pass")
It does create a test run but does not post any results to the corresponding test cases.
Try adding an assertion as that is what the pytest hook is looking for:
import pytest
from pytest_testrail.plugin import testrail

@testrail('C165')
def test_run():
    assert False
Here is the add_result function. The pytest-testrail plugin executes it when your test (test_run) has finished.
Notice the status parameter: the plugin needs your test to produce a result from an assertion (e.g. assert False is a good example).
In your case, just printing a string does not let TestRail know the status of the test.
def add_result(self, test_ids, status, comment='', duration=0):
    """
    Add a new result to results dict to be submitted at the end.

    :param list test_ids: list of test_ids.
    :param int status: status code of test (pass or fail).
    :param comment: None or a failure representation.
    :param duration: Time it took to run just the test.
    """
