When a test is xfailed, the printed reason reports the test file, test class, and test case, while a skipped test reports only the test file and the line where skip is called.
Here is a test example:
#!/usr/bin/env pytest
import pytest
@pytest.mark.xfail(reason="Reason of failure")
def test_1():
    pytest.fail("This will fail here")

@pytest.mark.skip(reason="Reason of skipping")
def test_2():
    pytest.fail("This will fail here")
This is the actual result:
pytest test_file.py -rsx
============================= test session starts =============================
platform linux -- Python 3.5.2, pytest-4.4.1, py-1.7.0, pluggy-0.9.0
rootdir: /home/ashot/questions
collected 2 items
test_file.py xs [100%]
=========================== short test summary info ===========================
SKIPPED [1] test_file.py:9: Reason of skipping
XFAIL test_file.py::test_1
Reason of failure
==================== 1 skipped, 1 xfailed in 0.05 seconds =====================
But I would expect to get something like:
pytest test_file.py -rsx
============================= test session starts =============================
platform linux -- Python 3.5.2, pytest-4.4.1, py-1.7.0, pluggy-0.9.0
rootdir: /home/ashot/questions
collected 2 items
test_file.py xs [100%]
=========================== short test summary info ===========================
XFAIL test_file.py::test_1: Reason of failure
SKIPPED test_file.py::test_2: Reason of skipping
==================== 1 skipped, 1 xfailed in 0.05 seconds =====================
You have two possible ways to achieve this. The quick and dirty way: just redefine _pytest.skipping.show_xfailed in your test_file.py:
import _pytest.skipping

def custom_show_xfailed(terminalreporter, lines):
    xfailed = terminalreporter.stats.get("xfailed")
    if xfailed:
        for rep in xfailed:
            pos = terminalreporter.config.cwd_relative_nodeid(rep.nodeid)
            reason = rep.wasxfail
            s = "XFAIL %s" % (pos,)
            if reason:
                s += ": " + str(reason)
            lines.append(s)

# show_xfailed_bkp = _pytest.skipping.show_xfailed
_pytest.skipping.show_xfailed = custom_show_xfailed

# ... your tests
The (not so) clean way: create a conftest.py file in the same directory as your test_file.py, and add a hook:
import pytest
import _pytest.skipping

def custom_show_xfailed(terminalreporter, lines):
    xfailed = terminalreporter.stats.get("xfailed")
    if xfailed:
        for rep in xfailed:
            pos = terminalreporter.config.cwd_relative_nodeid(rep.nodeid)
            reason = rep.wasxfail
            s = "XFAIL %s" % (pos,)
            if reason:
                s += ": " + str(reason)
            lines.append(s)

@pytest.hookimpl(tryfirst=True)
def pytest_terminal_summary(terminalreporter):
    tr = terminalreporter
    if not tr.reportchars:
        return
    lines = []
    for char in tr.reportchars:
        if char == "x":
            custom_show_xfailed(terminalreporter, lines)
        elif char == "X":
            _pytest.skipping.show_xpassed(terminalreporter, lines)
        elif char in "fF":
            _pytest.skipping.show_simple(terminalreporter, lines, 'failed', "FAIL %s")
        elif char in "sS":
            _pytest.skipping.show_skipped(terminalreporter, lines)
        elif char == "E":
            _pytest.skipping.show_simple(terminalreporter, lines, 'error', "ERROR %s")
        elif char == 'p':
            _pytest.skipping.show_simple(terminalreporter, lines, 'passed', "PASSED %s")
    if lines:
        tr._tw.sep("=", "short test summary info")
        for line in lines:
            tr._tw.line(line)
    tr.reportchars = []  # to avoid further output
The second method is overkill, because you have to redefine the whole pytest_terminal_summary.
Thanks to this answer I've found the following solution that works perfectly for me.
I've created conftest.py file in the root of my test suite with the following content:
import _pytest.skipping as s

def show_xfailed(tr, lines):
    for rep in tr.stats.get("xfailed", []):
        pos = tr.config.cwd_relative_nodeid(rep.nodeid)
        reason = rep.wasxfail
        line = "XFAIL\t%s" % pos
        if reason:
            line += ": " + str(reason)
        lines.append(line)

# map the "x" summary report character to the new formatter
s.REPORTCHAR_ACTIONS["x"] = show_xfailed

def show_skipped(tr, lines):
    for rep in tr.stats.get("skipped", []):
        pos = tr.config.cwd_relative_nodeid(rep.nodeid)
        reason = rep.longrepr[-1]
        if reason.startswith("Skipped: "):
            reason = reason[len("Skipped: "):]
        verbose_word = s._get_report_str(tr.config, report=rep)
        lines.append("%s\t%s: %s" % (verbose_word, pos, reason))

s.REPORTCHAR_ACTIONS["s"] = show_skipped
s.REPORTCHAR_ACTIONS["S"] = show_skipped
And now I'm getting the following output:
./test_file.py -rsx
============================= test session starts =============================
platform linux -- Python 3.5.2, pytest-4.4.1, py-1.7.0, pluggy-0.9.0
rootdir: /home/ashot/questions
collected 2 items
test_file.py xs [100%]
=========================== short test summary info ===========================
SKIPPED test_file.py::test_2: Reason of skipping
XFAIL test_file.py::test_1: Reason of failure
==================== 1 skipped, 1 xfailed in 0.05 seconds =====================
I'm using pytest to run tests on my python scripts.
In one case I need to compare text output of a script which contains both spaces and tabs.
By default, pytest doesn't seem to represent whitespace and tabs with any special characters, which makes comparing such output very difficult.
Every time I have to copy the error into my editor to find the difference.
Is it possible to make pytest more distinctly represent invisible characters?
Here is what I mean:
The first line contains 4 spaces, the second line has a tab, and the last line is empty.
You can use the following hook in your conftest.py file to define custom assertion failure output:
TAB_REPLACEMENT = "[tab]"
SPACE_REPLACEMENT = "[space]"
def pytest_assertrepr_compare(config, op, left, right):
left = left.replace(" ", SPACE_REPLACEMENT).replace("\t", TAB_REPLACEMENT)
right = right.replace(" ", SPACE_REPLACEMENT).replace("\t", TAB_REPLACEMENT)
return [f"{left} {op} {right} failed!"]
Test example:
def test_1():
    assert "    " == "\t"  # four spaces on the left, a tab character on the right
Output:
=== test session starts ===
platform win32 -- Python 3.10.4, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
collected 1 item
test.py F [100%]
=== FAILURES ===
___ test_1 ___
def test_1():
> assert " " == " "
E assert [space][space][space][space] == [tab] failed!
test.py:2: AssertionError
=== short test summary info ===
FAILED test.py::test_1 - assert [space][space][space][space] == [tab] failed!
=== 1 failed in 0.20s ===
So you can replace [tab] and [space] with any symbol you want.
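For example, a variant (with hypothetical marker characters) that swaps the bracketed words at the top of conftest.py for single visible glyphs:
SPACE_REPLACEMENT = "·"   # middle dot marks each space
TAB_REPLACEMENT = "→"     # arrow marks each tab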
I am trying to capture the return value of a PyTest. I am running these tests programmatically, and I want to return relevant information when the test fails.
I thought I could perhaps return the value of kernel as follows such that I can print that information later when listing failed tests:
def test_eval(test_input, expected):
kernel = os.system("uname -r")
assert eval(test_input) == expected, kernel
This doesn't work. When I later loop through the TestReports which are generated, there is no way to access any return information. The only information available in the TestReport is the name of the test and a True/False.
For example one of the test reports looks as follows:
<TestReport 'test_simulation.py::test_host_has_correct_kernel_version[simulation-host]' when='call' outcome='failed'>
Is there a way to return a value after the assert fails, back to the TestReport? I have tried doing this with PyTest plugins but have been unsuccessful.
Here is the code I am using to run the tests programmatically. You can see where I am trying to access the return value.
import pytest
from util import bcolors

class Plugin:
    def __init__(self):
        self.passed_tests = set()
        self.skipped_tests = set()
        self.failed_tests = set()
        self.unknown_tests = set()

    def pytest_runtest_logreport(self, report):
        print(report)
        if report.passed:
            self.passed_tests.add(report)
        elif report.skipped:
            self.skipped_tests.add(report)
        elif report.failed:
            self.failed_tests.add(report)
        else:
            self.unknown_tests.add(report)

if __name__ == "__main__":
    plugin = Plugin()
    pytest.main(["-s", "-p", "no:terminal"], plugins=[plugin])
    for passed in plugin.passed_tests:
        result = passed.nodeid
        print(bcolors.OKGREEN + "[OK]\t" + bcolors.ENDC + result)
    for skipped in plugin.skipped_tests:
        result = skipped.nodeid
        print(bcolors.OKBLUE + "[SKIPPED]\t" + bcolors.ENDC + result)
    for failed in plugin.failed_tests:
        result = failed.nodeid
        print(bcolors.FAIL + "[FAIL]\t" + bcolors.ENDC + result)
    for unknown in plugin.unknown_tests:
        result = unknown.nodeid
        print(bcolors.FAIL + "[FAIL]\t" + bcolors.ENDC + result)
The goal is to be able to print out "extra context information" when printing the FAILED tests, so that there is information immediately available to help debug why the test is failing.
You can extract failure details from the raised AssertionError in a custom pytest_exception_interact hookimpl. Example:
# conftest.py

def pytest_exception_interact(node, call, report):
    # the assertion message should be parsed here,
    # because pytest rewrites assert statements in bytecode
    message = call.excinfo.value.args[0]
    lines = message.split()
    kernel = lines[0]
    report.sections.append((
        'Kernels reported in assert failures:',
        f'{report.nodeid} reported {kernel}'
    ))
Running a test module
import subprocess

def test_bacon():
    assert True

def test_eggs():
    kernel = subprocess.run(
        ["uname", "-r"],
        stdout=subprocess.PIPE,
        text=True
    ).stdout
    assert 0 == 1, kernel
yields:
test_spam.py::test_bacon PASSED [ 50%]
test_spam.py::test_eggs FAILED [100%]
=================================== FAILURES ===================================
__________________________________ test_eggs ___________________________________
def test_eggs():
kernel = subprocess.run(
["uname", "-r"],
stdout=subprocess.PIPE,
text=True
).stdout
> assert 0 == 1, kernel
E AssertionError: 5.5.15-200.fc31.x86_64
E
E assert 0 == 1
E +0
E -1
test_spam.py:12: AssertionError
--------------------- Kernels reported in assert failures: ---------------------
test_spam.py::test_eggs reported 5.5.15-200.fc31.x86_64
=========================== short test summary info ============================
FAILED test_spam.py::test_eggs - AssertionError: 5.5.15-200.fc31.x86_64
========================= 1 failed, 1 passed in 0.05s ==========================
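If you run the tests programmatically with the Plugin class from the question, the section added by the hook travels with the TestReport, so the failure loop can read it back. A minimal sketch, assuming the conftest.py hook above is in place:
for failed in plugin.failed_tests:
    print(bcolors.FAIL + "[FAIL]\t" + bcolors.ENDC + failed.nodeid)
    # report.sections holds (title, content) tuples appended by pytest_exception_interact
    for title, content in failed.sections:
        print("\t" + title + " " + content)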
I run pytest with option -q.
Unfortunately this prints out a lot of dots. Example:
...................................................................................s...............s...................................ssssss..................................................................................................................................s..............s.........................s..............................................................................................................F....s.s............s.....................s...........................................................................................................................
=================================== FAILURES ===================================
_____________________ TestFoo.test_bar _____________________
Traceback (most recent call last):
(cut)
Is there a way to avoid this long list of dots and "s" characters?
Update
There is a valid answer, but somehow it is too long for me. I use this workaround now: I added this to the script which calls pytest: pytest -q | perl -pe 's/^[.sxFE]{20,}$//g'
The verbosity options can't turn off the printing of test outcomes. However, pytest can be customized in many ways, and outcome printing is one of them: to change it, override the pytest_report_teststatus hook.
turn off short letters
Create a file conftest.py with the following content:
import pytest

def pytest_report_teststatus(report):
    category, short, verbose = '', '', ''
    if hasattr(report, 'wasxfail'):
        if report.skipped:
            category = 'xfailed'
            verbose = 'xfail'
        elif report.passed:
            category = 'xpassed'
            verbose = ('XPASS', {'yellow': True})
        return (category, short, verbose)
    elif report.when in ('setup', 'teardown'):
        if report.failed:
            category = 'error'
            verbose = 'ERROR'
        elif report.skipped:
            category = 'skipped'
            verbose = 'SKIPPED'
        return (category, short, verbose)
    category = report.outcome
    verbose = category.upper()
    return (category, short, verbose)
Now running the tests will not print any short outcome letters (.sxFE). The code is a bit verbose, but handles all the standard outcomes defined in the framework.
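If you want to see the effect, here is a small test module (hypothetical test names) that exercises all of these branches against the conftest.py above:
import pytest

def test_passes():
    assert True

def test_fails():
    assert False

@pytest.mark.skip(reason="not relevant here")
def test_skipped():
    pass

@pytest.mark.xfail(reason="known bug")
def test_xfails():
    assert False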
turn off verbose outcomes
When running in verbose mode, pytest prints the outcome along with the test case name:
$ pytest -sv
=================================== test session starts ===================================
...
test_spam.py::test_spam PASSED
test_spam.py::test_eggs FAILED
test_spam.py::test_bacon SKIPPED
test_spam.py::test_foo xfail
...
If you remove the lines setting verbose from the above hook impl (leaving it set to an empty string), pytest will also stop printing outcomes in verbose mode:
import pytest

def pytest_report_teststatus(report):
    category, short, verbose = '', '', ''
    if hasattr(report, 'wasxfail'):
        if report.skipped:
            category = 'xfailed'
        elif report.passed:
            category = 'xpassed'
        return (category, short, verbose)
    elif report.when in ('setup', 'teardown'):
        if report.failed:
            category = 'error'
        elif report.skipped:
            category = 'skipped'
        return (category, short, verbose)
    category = report.outcome
    return (category, short, verbose)
$ pytest -sv
=================================== test session starts ===================================
...
test_spam.py::test_spam
test_spam.py::test_eggs
test_spam.py::test_bacon
test_spam.py::test_foo
...
introducing custom reporting mode via command line switch
The below example will turn off printing both short and verbose outcomes when --silent flag is passed from command line:
import pytest

def pytest_addoption(parser):
    parser.addoption('--silent', action='store_true', default=False)

def pytest_report_teststatus(report, config):
    category, short, verbose = '', '', ''
    if not config.getoption('--silent'):
        return None
    if hasattr(report, 'wasxfail'):
        if report.skipped:
            category = 'xfailed'
        elif report.passed:
            category = 'xpassed'
        return (category, short, verbose)
    elif report.when in ('setup', 'teardown'):
        if report.failed:
            category = 'error'
        elif report.skipped:
            category = 'skipped'
        return (category, short, verbose)
    category = report.outcome
    return (category, short, verbose)
Using my plugin pytest-custom-report, you can suppress those dots by passing an empty string as the report marker for passed tests:
pip install pytest-custom-report
pytest -q --report-passed=""
I want to get a list of all tests (e.g. in the form of a py.test TestReport) at the end of all tests.
I know that pytest_runtest_makereport does something similar, but only for a single test. Instead, I want to implement a hook or something in conftest.py to process the whole list of tests before the py.test application terminates.
Is there a way to do this?
Here is an example which can help you. Structure of files:
/example
    __init__.py          # empty file
    /test_pack_1
        __init__.py      # empty file
        conftest.py      # pytest hooks
        test_my.py       # a few tests for demonstration
There are 2 tests in test_my.py:
def test_one():
    assert 1 == 1
    print('1==1')

def test_two():
    assert 1 == 2
    print('1!=2')
Example of conftest.py:
import pytest
from _pytest.runner import TestReport
from _pytest.terminal import TerminalReporter

@pytest.hookimpl(hookwrapper=True)
def pytest_terminal_summary(terminalreporter):  # type: (TerminalReporter) -> generator
    yield
    # you can do anything here - I just print report info
    print('*' * 8 + 'HERE CUSTOM LOGIC' + '*' * 8)
    for failed in terminalreporter.stats.get('failed', []):  # type: TestReport
        print('failed! node_id:%s, duration: %s, details: %s' % (failed.nodeid,
                                                                 failed.duration,
                                                                 str(failed.longrepr)))
    for passed in terminalreporter.stats.get('passed', []):  # type: TestReport
        print('passed! node_id:%s, duration: %s, details: %s' % (passed.nodeid,
                                                                 passed.duration,
                                                                 str(passed.longrepr)))
The documentation says that pytest_terminal_summary also takes an exitstatus argument.
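A minimal sketch of the same hookwrapper accepting that argument (hook implementations may declare any subset of a hook's parameters):
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_terminal_summary(terminalreporter, exitstatus):
    yield
    # exitstatus is the overall session exit code (0 when all tests passed)
    print('session finished with exit status %s' % exitstatus)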
Run tests without any additional options: py.test ./example. Example of output:
example/test_pack_1/test_my.py .F
********HERE CUSTOM LOGIC********
failed! node_id:test_pack_1/test_my.py::test_two, duration: 0.000385999679565, details: def test_two():
> assert 1 == 2
E assert 1 == 2
example/test_pack_1/test_my.py:7: AssertionError
passed! node_id:test_pack_1/test_my.py::test_one, duration: 0.00019907951355, details: None
=================================== FAILURES ===================================
___________________________________ test_two ___________________________________
def test_two():
> assert 1 == 2
E assert 1 == 2
example/test_pack_1/test_my.py:7: AssertionError
====================== 1 failed, 1 passed in 0.01 seconds ======================
Hope this helps.
Note! Make sure that .pyc files were removed before running the tests.
I am relatively new to pytest hooks and plugins, and I am unable to figure out how to get my pytest run to give me a test execution summary with the reason for each failure.
Consider the code:
class Foo:
    def __init__(self, val):
        self.val = val

def test_compare12():
    f1 = Foo(1)
    f2 = Foo(2)
    assert f1 == f2, "F2 does not match F1"

def test_compare34():
    f3 = Foo(3)
    f4 = Foo(4)
    assert f3 == f4, "F4 does not match F3"
When I run the script with pytest's -v option, it gives me the following result on the console:
========================= test session starts =================================
platform darwin -- Python 2.7.5 -- py-1.4.26 -- pytest-2.7.0 -- /Users/nehau/src/QA/bin/python
rootdir: /Users/nehau/src/QA/test, inifile:
plugins: capturelog
collected 2 items
test_foocompare.py::test_compare12 FAILED
test_foocompare.py::test_compare34 FAILED
================================ FAILURES ===============================
_______________________________ test_compare12 _________________________
def test_compare12():
f1 = Foo(1)
f2 = Foo(2)
> assert f1 == f2, "F2 does not match F1"
E AssertionError: F2 does not match F1
E assert <test.test_foocompare.Foo instance at 0x107640368> == <test.test_foocompare.Foo instance at 0x107640488>
test_foocompare.py:11: AssertionError
_____________________________ test_compare34 ______________________________
def test_compare34():
f3 = Foo(3)
f4 = Foo(4)
> assert f3 == f4, "F4 does not match F3"
E AssertionError: F4 does not match F3
E assert <test.test_foocompare.Foo instance at 0x107640248> == <test.test_foocompare.Foo instance at 0x10761fe60>
test_foocompare.py:16: AssertionError
=============================== 2 failed in 0.01 seconds ==========================
I am running close to 2000 test cases, so it would be really helpful if I could have pytest display output in the following format:
test_foocompare.py::test_compare12 FAILED AssertionError: F2 does not match F1
test_foocompare.py::test_compare34 FAILED AssertionError: F4 does not match F3
I have looked at the pytest_runtest_makereport hook but can't seem to get it working. Does anyone have any other ideas?
Thanks
Try the --tb flag:
pytest --tb=line
This gives one line of output per test.
See the docs.
Also try pytest -v --tb=no to show all pass/fail results.
Try the -rA option of pytest. It will provide a summary at the end of the log that shows everything (failures, skips, errors, and passes).
See https://docs.pytest.org/en/latest/usage.html#detailed-summary-report