How to suppress the py.test one-character test result output? [duplicate] - python

I run pytest with the -q option.
Unfortunately, this still prints out a lot of dots. Example:
...................................................................................s...............s...................................ssssss..................................................................................................................................s..............s.........................s..............................................................................................................F....s.s............s.....................s...........................................................................................................................
=================================== FAILURES ===================================
_____________________ TestFoo.test_bar _____________________
Traceback (most recent call last):
(cut)
Is there a way to avoid this long list of dots and "s" characters?
Update
There is a valid answer, but it is somehow too long for me. For now I use this workaround in the script that calls pytest: pytest -q | perl -pe 's/^[.sxFE]{20,}$//g'
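A roughly equivalent filter without perl (an alternative sketch I'm adding, not part of the original post) drops those lines entirely instead of blanking them:
pytest -q | grep -Ev '^[.sxFE]{20,}$'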

The verbosity options can't turn off the test outcome printing. However, pytest can be customized in many ways, and the outcome printing is one of them: to change it, override the pytest_report_teststatus hook.
turn off short letters
Create a file conftest.py with the following content:
import pytest


def pytest_report_teststatus(report):
    category, short, verbose = '', '', ''
    if hasattr(report, 'wasxfail'):
        if report.skipped:
            category = 'xfailed'
            verbose = 'xfail'
        elif report.passed:
            category = 'xpassed'
            verbose = ('XPASS', {'yellow': True})
        return (category, short, verbose)
    elif report.when in ('setup', 'teardown'):
        if report.failed:
            category = 'error'
            verbose = 'ERROR'
        elif report.skipped:
            category = 'skipped'
            verbose = 'SKIPPED'
        return (category, short, verbose)
    category = report.outcome
    verbose = category.upper()
    return (category, short, verbose)
Now running the tests will not print any short outcome letters (.sxFE). The code is a bit verbose, but handles all the standard outcomes defined in the framework.
turn off verbose outcomes
When running in verbose mode, pytest prints the outcome along with the test case name:
$ pytest -sv
=================================== test session starts ===================================
...
test_spam.py::test_spam PASSED
test_spam.py::test_eggs FAILED
test_spam.py::test_bacon SKIPPED
test_spam.py::test_foo xfail
...
If you remove the lines setting verbose in the above hook implementation (leaving it set to an empty string), pytest will also stop printing outcomes in verbose mode:
import pytest


def pytest_report_teststatus(report):
    category, short, verbose = '', '', ''
    if hasattr(report, 'wasxfail'):
        if report.skipped:
            category = 'xfailed'
        elif report.passed:
            category = 'xpassed'
        return (category, short, verbose)
    elif report.when in ('setup', 'teardown'):
        if report.failed:
            category = 'error'
        elif report.skipped:
            category = 'skipped'
        return (category, short, verbose)
    category = report.outcome
    return (category, short, verbose)
$ pytest -sv
=================================== test session starts ===================================
...
test_spam.py::test_spam
test_spam.py::test_eggs
test_spam.py::test_bacon
test_spam.py::test_foo
...
introducing custom reporting mode via command line switch
The example below turns off printing of both the short and the verbose outcomes when the --silent flag is passed on the command line:
import pytest


def pytest_addoption(parser):
    parser.addoption('--silent', action='store_true', default=False)


def pytest_report_teststatus(report, config):
    category, short, verbose = '', '', ''
    if not config.getoption('--silent'):
        # without --silent, fall back to the default reporting
        return None
    if hasattr(report, 'wasxfail'):
        if report.skipped:
            category = 'xfailed'
        elif report.passed:
            category = 'xpassed'
        return (category, short, verbose)
    elif report.when in ('setup', 'teardown'):
        if report.failed:
            category = 'error'
        elif report.skipped:
            category = 'skipped'
        return (category, short, verbose)
    category = report.outcome
    return (category, short, verbose)
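With that conftest.py in place, an invocation like the following (illustrative) should print nothing per test and leave only the failure details and the final summary line:
pytest -q --silent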

Using my plugin pytest-custom-report you can easily suppress those dots by using an empty-string marker for the report.
pip install pytest-custom-report
pytest -q --report-passed=""
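If you want that to be the default for every run, one option (an assumption about your workflow, not something the plugin requires) is to put the flags into addopts in your pytest configuration, e.g. in pytest.ini:
[pytest]
addopts = -q --report-passed=""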

Related

How to save screenshots including test results, etc. when the test is finished in pytest

I want to save a screenshot when a test fails.
However, the screenshot name must include information used in the test. Example:
-> Currently my code is able to pass the city used in the test to the screenshot helper.
However, it does not act on success or failure.
How can I also have the helper receive a variable and act on pass/fail?
(The file name I want: {city}_{id}_{pass/fail}.png,
e.g. paris_testaccount01_pass.png)
test_example.py
class TestExample(BaseTest):
    def test_example(self, capture_screenshot):
        city = "paris"
        id = "testaccount01"
        # ~~ Skip test content ~~
        capture_screenshot(self.driver, city)
BaseTest.py
import pytest


@pytest.mark.usefixtures("get_browser")
class BaseTest:
    pass
conftest.py
import pytest


@pytest.fixture()
def capture_screenshot():
    def _capture_screenshot(get_browser, name):
        driver = get_browser
        driver.get_screenshot_as_file(f'../Screenshots/{name}.png')
    return _capture_screenshot
If you use pytest-html for reporting and want to take a screenshot when a test fails, you can use the following code in your conftest.py:
import pytest


@pytest.hookimpl(hookwrapper=True)  # modern form of @pytest.mark.hookwrapper
def pytest_runtest_makereport(item, call):
    pytest_html = item.config.pluginmanager.getplugin('html')
    outcome = yield
    report = outcome.get_result()
    extra = getattr(report, 'extra', [])
    if report.when == 'call':
        xfail_state = hasattr(report, 'wasxfail')
        if (report.skipped and xfail_state) or (report.failed and not xfail_state):
            mydriver = item.funcargs['driver']
            screenshot = mydriver.get_screenshot_as_base64()
            extra.append(pytest_html.extras.image(screenshot, ''))
        report.extra = extra
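If you also want the plain file-on-disk naming from the question ({city}_{id}_{pass/fail}.png) rather than a pytest-html attachment, one common pattern is to stash the call-phase report on the test item in pytest_runtest_makereport and read it from a yield fixture after the test. A minimal sketch for conftest.py; the get_browser fixture, the helper signature, and the ../Screenshots folder are assumptions carried over from the question:
import pytest


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    # expose each phase's report to fixtures as item.rep_setup / rep_call / rep_teardown
    setattr(item, "rep_" + report.when, report)


@pytest.fixture()
def capture_screenshot(request, get_browser):
    names = {}

    def _capture_screenshot(city, user_id):
        # the test registers the values it wants in the file name
        names.update(city=city, user_id=user_id)

    yield _capture_screenshot

    if names:
        # rep_call was attached by the hook above
        result = "pass" if request.node.rep_call.passed else "fail"
        get_browser.get_screenshot_as_file(
            f"../Screenshots/{names['city']}_{names['user_id']}_{result}.png"
        )
A test would then call capture_screenshot(city, id) at any point, and the screenshot is written during teardown with the pass/fail result baked into the name. If you also use the pytest-html hook above, merge the two bodies into a single pytest_runtest_makereport, since a conftest module can only define the hook once.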

How do I get all the arguments specified on the command line in a list?

I'm creating a function in my conftest.py:
def pytest_addoption(parser):
    parser.addoption('--type', action='append', default=[])
    parser.addoption('--input', default='')
    parser.addoption('--output', default='')
.......
There are many such options. How can I get all of their command-line values at once, for example in a list (the format is not important)?
How can I implement this so that I don't need a separate line to read each parameter? Right now it looks like this:
type = request.config.getoption("type")
input = request.config.getoption("input")
output = request.config.getoption("output")
.........
request.config has an args attribute, but it contains only the paths of the test files to be run.
One option is the following code:
import sys

args = [arg[2:] for arg in sys.argv if '--' in arg[:2] and '--t' not in arg]
for arg in args:
    val_arg = request.config.getoption(arg)
But you can't get the default values this way. And the method itself is not particularly good.
If you need to get all passed options:
The request.config object has an invocation_params attribute which stores:
The parameters with which pytest was invoked.
For example, if pytest was called like this:
$ pytest tests/test_1.py tests/test_2.py --maxfail=1 --cache-clear --custom-opt2=222 --custom-opt4=444 -rP
where
tests/*.py are the positional file_or_dir arguments
--maxfail=1, --cache-clear, -rP are built-in pytest options
--custom-opt2=222, --custom-opt4=444 are custom options added using the pytest_addoption hook
then reading invocation_params.args:
from pytest import FixtureRequest


def test_passed_options(request: FixtureRequest):
    print(request.config.invocation_params.args)
would give a tuple of:
('tests/test_1.py', 'tests/test_2.py', '--maxfail=1', '--cache-clear', '--custom-opt2=222', '--custom-opt4=444', '-rP')
To get just the options and their values (for non-flag-like options), you can filter out the positional args:
from itertools import filterfalse

from pytest import FixtureRequest


def test_passed_options(request: FixtureRequest):
    all_args = request.config.invocation_params.args
    all_opts = filterfalse(lambda arg: not arg.startswith("-"), all_args)
    for opt in all_opts:
        print(opt)
--maxfail=1
--cache-clear
--custom-opt2=222
--custom-opt4=444
-rP
Now it's just a matter of splitting on = to get option names and their values:
from itertools import filterfalse

from pytest import FixtureRequest


def test_passed_options(request: FixtureRequest):
    all_args = request.config.invocation_params.args
    all_opts = {}
    for opt in filterfalse(lambda arg: not arg.startswith("-"), all_args):
        opt_name, _, opt_val = opt.partition("=")
        all_opts[opt_name] = opt_val or True
    print(all_opts)
{'--cache-clear': True,
'--custom-opt2': '222',
'--custom-opt4': '444',
'--maxfail': '1',
'-rP': True}
If you need to get all available options and their default values:
The request.config object has an option attribute which stores all the available command line options along with their default values. It's essentially an argparse.Namespace object, so you can just apply vars() on it to get a dict of all the available options.
from pytest import FixtureRequest


def test_available_options(request: FixtureRequest):
    all_opts = vars(request.config.option)
    print(all_opts)
{'assertmode': 'rewrite',
'basetemp': None,
'cacheclear': True,
'cacheshow': None,
'capture': 'fd',
...
'custom_opt1': 'default_opt1_val',
'custom_opt2': '222',
'custom_opt3': 'default_opt3_val',
'custom_opt4': '444',
'debug': None,
...
'file_or_dir': ['tests/test_1.py', 'tests/test_2.py'],
...
'verbose': 0,
'version': 0,
'xmlpath': None}
Note that this is actually tricky to use because it includes both the built-in pytest options and your custom options. But it does include the default values of all the options, if that's what you're after.
If it's possible to prefix your custom options with an identifier
def pytest_addoption(parser):
    parser.addoption("--custom-opt1", action="store", default="default_opt1_val")
    parser.addoption("--custom-opt2", action="store", default="default_opt2_val")
    parser.addoption("--custom-opt3", action="store", default="default_opt3_val")
    parser.addoption("--custom-opt4", action="store", default="default_opt4_val")
Then, you can filter only your custom options from the Namespace object:
$ pytest tests/test_1.py tests/test_2.py --maxfail=1 --cache-clear --custom-opt2=222 --custom-opt4=444 -rP
from itertools import filterfalse

from pytest import FixtureRequest


def test_available_options(request: FixtureRequest):
    all_opts = vars(request.config.option)
    my_opts = {}
    for opt in filterfalse(lambda opt: not opt.startswith("custom"), all_opts.keys()):
        my_opts[opt] = all_opts[opt]
    print(my_opts)
{'custom_opt1': 'default_opt1_val',
'custom_opt2': '222',
'custom_opt3': 'default_opt3_val',
'custom_opt4': '444'}
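If all you really need are your own options (defaults included) without writing a getoption line per parameter, a simpler route is to keep the option names in one place and build the dict in a fixture. A minimal sketch for conftest.py, reusing the option names from the question:
import pytest

CUSTOM_OPTS = ("type", "input", "output")


def pytest_addoption(parser):
    parser.addoption("--type", action="append", default=[])
    parser.addoption("--input", default="")
    parser.addoption("--output", default="")


@pytest.fixture()
def custom_opts(request):
    # one dict with every custom option, defaults included
    return {name: request.config.getoption(name) for name in CUSTOM_OPTS}
A test that takes custom_opts as an argument then gets all values at once, e.g. custom_opts['input'].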

Capture assertion message in Pytest

I am trying to capture the return value of a pytest test. I am running these tests programmatically, and I want to return relevant information when a test fails.
I thought I could perhaps return the value of kernel as follows, so that I can print that information later when listing failed tests:
import os


def test_eval(test_input, expected):
    kernel = os.system("uname -r")
    assert eval(test_input) == expected, kernel
This doesn't work. When I later loop through the TestReports that are generated, there is no way to access any return information. The only information available in the TestReport is the name of the test and a True/False outcome.
For example one of the test reports looks as follows:
<TestReport 'test_simulation.py::test_host_has_correct_kernel_version[simulation-host]' when='call' outcome='failed'>
Is there a way to return a value after the assert fails, back to the TestReport? I have tried doing this with PyTest plugins but have been unsuccessful.
Here is the code I am using to run the tests programmatically. You can see where I am trying to access the return value.
import pytest

from util import bcolors


class Plugin:
    def __init__(self):
        self.passed_tests = set()
        self.skipped_tests = set()
        self.failed_tests = set()
        self.unknown_tests = set()

    def pytest_runtest_logreport(self, report):
        print(report)
        if report.passed:
            self.passed_tests.add(report)
        elif report.skipped:
            self.skipped_tests.add(report)
        elif report.failed:
            self.failed_tests.add(report)
        else:
            self.unknown_tests.add(report)


if __name__ == "__main__":
    plugin = Plugin()
    pytest.main(["-s", "-p", "no:terminal"], plugins=[plugin])
    for passed in plugin.passed_tests:
        result = passed.nodeid
        print(bcolors.OKGREEN + "[OK]\t" + bcolors.ENDC + result)
    for skipped in plugin.skipped_tests:
        result = skipped.nodeid
        print(bcolors.OKBLUE + "[SKIPPED]\t" + bcolors.ENDC + result)
    for failed in plugin.failed_tests:
        result = failed.nodeid
        print(bcolors.FAIL + "[FAIL]\t" + bcolors.ENDC + result)
    for unknown in plugin.unknown_tests:
        result = unknown.nodeid
        print(bcolors.FAIL + "[FAIL]\t" + bcolors.ENDC + result)
The goal is to be able to print out "extra context information" when printing the FAILED tests, so that there is information immediately available to help debug why the test is failing.
You can extract failure details from the raised AssertionError in the custom pytest_exception_interact hookimpl. Example:
# conftest.py
def pytest_exception_interact(node, call, report):
    # the assertion message has to be parsed out here,
    # because pytest rewrites assert statements in bytecode
    message = call.excinfo.value.args[0]
    lines = message.split()
    kernel = lines[0]
    report.sections.append((
        'Kernels reported in assert failures:',
        f'{report.nodeid} reported {kernel}'
    ))
Running a test module
import subprocess


def test_bacon():
    assert True


def test_eggs():
    kernel = subprocess.run(
        ["uname", "-r"],
        stdout=subprocess.PIPE,
        text=True,
    ).stdout
    assert 0 == 1, kernel
yields:
test_spam.py::test_bacon PASSED [ 50%]
test_spam.py::test_eggs FAILED [100%]
=================================== FAILURES ===================================
__________________________________ test_eggs ___________________________________
def test_eggs():
kernel = subprocess.run(
["uname", "-r"],
stdout=subprocess.PIPE,
text=True
).stdout
> assert 0 == 1, kernel
E AssertionError: 5.5.15-200.fc31.x86_64
E
E assert 0 == 1
E +0
E -1
test_spam.py:12: AssertionError
--------------------- Kernels reported in assert failures: ---------------------
test_spam.py::test_eggs reported 5.5.15-200.fc31.x86_64
=========================== short test summary info ============================
FAILED test_spam.py::test_eggs - AssertionError: 5.5.15-200.fc31.x86_64
========================= 1 failed, 1 passed in 0.05s ==========================
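To surface that extra context in the programmatic runner from the question, note that the section added by the hook travels with the report as report.sections, a list of (title, content) tuples. A small sketch of a helper (the function name is mine, and bcolors is dropped for brevity) that could replace the failed-tests loop in the question's __main__ block:
def print_failed_with_context(failed_reports):
    """Print failed node ids plus any sections hooks attached to the report."""
    for failed in failed_reports:
        print("[FAIL]\t" + failed.nodeid)
        # (title, content) pairs appended e.g. by pytest_exception_interact
        for title, content in failed.sections:
            print("\t" + title + " " + content)
Call it as print_failed_with_context(plugin.failed_tests) after pytest.main() returns.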

pytest-4.x.x: How to report SKIPPED tests like XFAILED?

When a test is xfailed, the printed reason includes the test file, test class, and test case, while a skipped test reports only the test file and the line where skip was called.
Here is a test example:
#!/usr/bin/env pytest
import pytest


@pytest.mark.xfail(reason="Reason of failure")
def test_1():
    pytest.fail("This will fail here")


@pytest.mark.skip(reason="Reason of skipping")
def test_2():
    pytest.fail("This will fail here")
This is the actual result:
pytest test_file.py -rsx
============================= test session starts =============================
platform linux -- Python 3.5.2, pytest-4.4.1, py-1.7.0, pluggy-0.9.0
rootdir: /home/ashot/questions
collected 2 items
test_file.py xs [100%]
=========================== short test summary info ===========================
SKIPPED [1] test_file.py:9: Reason of skipping
XFAIL test_file.py::test_1
Reason of failure
==================== 1 skipped, 1 xfailed in 0.05 seconds =====================
But I would expect to get something like:
pytest test_file.py -rsx
============================= test session starts =============================
platform linux -- Python 3.5.2, pytest-4.4.1, py-1.7.0, pluggy-0.9.0
rootdir: /home/ashot/questions
collected 2 items
test_file.py xs [100%]
=========================== short test summary info ===========================
XFAIL test_file.py::test_1: Reason of failure
SKIPPED test_file.py::test_2: Reason of skipping
==================== 1 skipped, 1 xfailed in 0.05 seconds =====================
You have two possible ways to achieve this. The quick and dirty way: just redefine _pytest.skipping.show_xfailed in your test_file.py:
import _pytest.skipping


def custom_show_xfailed(terminalreporter, lines):
    xfailed = terminalreporter.stats.get("xfailed")
    if xfailed:
        for rep in xfailed:
            pos = terminalreporter.config.cwd_relative_nodeid(rep.nodeid)
            reason = rep.wasxfail
            s = "XFAIL %s" % (pos,)
            if reason:
                s += ": " + str(reason)
            lines.append(s)


# show_xfailed_bkp = _pytest.skipping.show_xfailed
_pytest.skipping.show_xfailed = custom_show_xfailed

... your tests
The (not so) clean way: create a conftest.py file in the same directory as your test_file.py, and add a hook:
import pytest
import _pytest.skipping


def custom_show_xfailed(terminalreporter, lines):
    xfailed = terminalreporter.stats.get("xfailed")
    if xfailed:
        for rep in xfailed:
            pos = terminalreporter.config.cwd_relative_nodeid(rep.nodeid)
            reason = rep.wasxfail
            s = "XFAIL %s" % (pos,)
            if reason:
                s += ": " + str(reason)
            lines.append(s)


@pytest.hookimpl(tryfirst=True)
def pytest_terminal_summary(terminalreporter):
    tr = terminalreporter
    if not tr.reportchars:
        return
    lines = []
    for char in tr.reportchars:
        if char == "x":
            custom_show_xfailed(terminalreporter, lines)
        elif char == "X":
            _pytest.skipping.show_xpassed(terminalreporter, lines)
        elif char in "fF":
            _pytest.skipping.show_simple(terminalreporter, lines, 'failed', "FAIL %s")
        elif char in "sS":
            _pytest.skipping.show_skipped(terminalreporter, lines)
        elif char == "E":
            _pytest.skipping.show_simple(terminalreporter, lines, 'error', "ERROR %s")
        elif char == 'p':
            _pytest.skipping.show_simple(terminalreporter, lines, 'passed', "PASSED %s")
    if lines:
        tr._tw.sep("=", "short test summary info")
        for line in lines:
            tr._tw.line(line)
    tr.reportchars = []  # to avoid further output
The second method is overkill, because you have to redefine the whole pytest_terminal_summary.
Thanks to this answer I've found the following solution that works perfectly for me.
I've created a conftest.py file in the root of my test suite with the following content:
import _pytest.skipping as s


def show_xfailed(tr, lines):
    for rep in tr.stats.get("xfailed", []):
        pos = tr.config.cwd_relative_nodeid(rep.nodeid)
        reason = rep.wasxfail
        line = "XFAIL\t%s" % pos
        if reason:
            line += ": " + str(reason)
        lines.append(line)


s.REPORTCHAR_ACTIONS["x"] = show_xfailed


def show_skipped(tr, lines):
    for rep in tr.stats.get("skipped", []):
        pos = tr.config.cwd_relative_nodeid(rep.nodeid)
        reason = rep.longrepr[-1]
        if reason.startswith("Skipped: "):
            reason = reason[9:]
        verbose_word = s._get_report_str(tr.config, report=rep)
        lines.append("%s\t%s: %s" % (verbose_word, pos, reason))


s.REPORTCHAR_ACTIONS["s"] = show_skipped
s.REPORTCHAR_ACTIONS["S"] = show_skipped
And now I'm getting the following output:
./test_file.py -rsx
============================= test session starts =============================
platform linux -- Python 3.5.2, pytest-4.4.1, py-1.7.0, pluggy-0.9.0
rootdir: /home/ashot/questions
collected 2 items
test_file.py xs [100%]
=========================== short test summary info ===========================
SKIPPED test_file.py::test_2: Reason of skipping
XFAIL test_file.py::test_1: Reason of failure
==================== 1 skipped, 1 xfailed in 0.05 seconds =====================

Pytest-html Customization

I am using the pytest-html plugin for report generation for my test cases. I want to add a line item if the script fails. So here's my code:
import pytest


@pytest.hookimpl(hookwrapper=True)  # modern form of @pytest.mark.hookwrapper
def pytest_runtest_makereport(item, call):
    pytest_html = item.config.pluginmanager.getplugin('html')
    outcome = yield
    report = outcome.get_result()
    extra = getattr(report, 'extra', [])
    if report.when == 'call':
        # always add url to report
        extra.append(pytest_html.extras.url('http://www.example.com/'))
        xfail = hasattr(report, 'wasxfail')
        if (report.skipped and xfail) or (report.failed and not xfail):
            # only add additional html on failure
            extra.append(pytest_html.extras.html('<div>Additional HTML</div>'))
        report.extra = extra


def test_sayHello():
    assert False, "I mean for this to fail"
    print("hello")


def test_sayBye():
    print("Bye")
I am running the script using:
pytest --html=report.html
I can see the report getting generated, but it doesn't have a line item with "Additional HTML".
Also, is there a way I can add such line items to the report from within my tests?
I'd really appreciate any help.
This should work:
import pytest


@pytest.hookimpl(hookwrapper=True)  # modern form of @pytest.mark.hookwrapper
def pytest_runtest_makereport(item, call):
    pytest_html = item.config.pluginmanager.getplugin('html')
    outcome = yield
    report = outcome.get_result()
    extra = getattr(report, 'extra', [])
    if report.when == 'call':
        extra.append(pytest_html.extras.html('<p>some html</p>'))
        report.extra = extra
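Two notes. First, the hook must live in conftest.py (or an installed plugin); if it is defined in the test file as pasted in the question, pytest will never call it, which would explain why "Additional HTML" never shows up. Second, for adding line items from within a test, pytest-html also exposes a fixture for per-test extras; the sketch below assumes pytest-html 4.x, where the fixture is named extras (3.x releases named it extra, so check the docs for your installed version):
from pytest_html import extras as html_extras


def test_with_inline_line_item(extras):
    # appended entries show up in this test's row in report.html
    extras.append(html_extras.html('<div>step 1 finished</div>'))
    assert True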
