I am relatively new to pytest hooks and plugins, and I am unable to figure out how to get pytest to give me a test execution summary that includes the reason for each failure.
Consider the code:
class Foo:
    def __init__(self, val):
        self.val = val


def test_compare12():
    f1 = Foo(1)
    f2 = Foo(2)
    assert f1 == f2, "F2 does not match F1"


def test_compare34():
    f3 = Foo(3)
    f4 = Foo(4)
    assert f3 == f4, "F4 does not match F3"
When I run the pytest script with the -v option, it gives me the following result on the console:
========================= test session starts =================================
platform darwin -- Python 2.7.5 -- py-1.4.26 -- pytest-2.7.0 -- /Users/nehau/src/QA/bin/python
rootdir: /Users/nehau/src/QA/test, inifile:
plugins: capturelog
collected 2 items

test_foocompare.py::test_compare12 FAILED
test_foocompare.py::test_compare34 FAILED

================================ FAILURES ===============================
_______________________________ test_compare12 _________________________

    def test_compare12():
        f1 = Foo(1)
        f2 = Foo(2)
>       assert f1 == f2, "F2 does not match F1"
E       AssertionError: F2 does not match F1
E       assert <test.test_foocompare.Foo instance at 0x107640368> == <test.test_foocompare.Foo instance at 0x107640488>

test_foocompare.py:11: AssertionError
_____________________________ test_compare34 ______________________________

    def test_compare34():
        f3 = Foo(3)
        f4 = Foo(4)
>       assert f3 == f4, "F4 does not match F3"
E       AssertionError: F4 does not match F3
E       assert <test.test_foocompare.Foo instance at 0x107640248> == <test.test_foocompare.Foo instance at 0x10761fe60>

test_foocompare.py:16: AssertionError
=============================== 2 failed in 0.01 seconds ==========================
I am running close to 2000 test cases, so it would be really helpful if I could have pytest display output in the following format:
test_foocompare.py::test_compare12 FAILED AssertionError: F2 does not match F1
test_foocompare.py::test_compare34 FAILED AssertionError: F4 does not match F3
I have looked at the pytest_runtest_makereport hook but can't seem to get it working. Does anyone have any other ideas?
Thanks
Try the --tb flag:
pytest --tb=line
This gives one line of output per failure.
See the docs.
Also try pytest -v --tb=no to show all pass/fail results.
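If you are driving pytest from a script rather than the shell, the same options can be passed to pytest.main; a minimal sketch:

import pytest

# equivalent to running "pytest -v --tb=line" from the command line
exit_code = pytest.main(["-v", "--tb=line"])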
Try the -rA option of pytest. It adds a summary at the end of the run that shows all outcomes (failures, skips, errors, and passes).
See https://docs.pytest.org/en/latest/usage.html#detailed-summary-report
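For completeness, here is a minimal conftest.py sketch in the spirit of the pytest_runtest_makereport hook mentioned in the question (my own sketch, not taken from either answer): it collects one "nodeid FAILED <message>" line per failing test and prints them in the terminal summary. It assumes a reasonably recent pytest where failed call reports expose longrepr.reprcrash.

# conftest.py -- sketch: one-line failure summary built from the crash message
import pytest

_failures = []


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == "call" and report.failed:
        crash = getattr(report.longrepr, "reprcrash", None)
        message = crash.message.splitlines()[0] if crash else str(report.longrepr)
        _failures.append("%s FAILED %s" % (report.nodeid, message))


def pytest_terminal_summary(terminalreporter):
    if _failures:
        terminalreporter.section("failure reasons")
        for line in _failures:
            terminalreporter.write_line(line)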
I am trying to write a test case and I want to mock the data object returned from MongoClient(); below is the code.
numbers.py
from pymongo import MongoClient

def get_count():
    client_int = MongoClient('abc.xyz.com', port=27010)
    return client_int
test_numbers.py
from unittest.mock import patch
import numbers

@patch('pymongo.MongoClient')
def test_get_count(mocked_object):
    mocked_object.return_value = [{'1': 'data'}]
    assert numbers.get_count() == [{'1': 'data'}]  # here I get AssertionError: MongoClient != [{'1': 'data'}]
How can I make this work? What went wrong?
First of all, you should rename your module. You can't use numbers because it conflicts with the Python built-in numbers library.
You didn't patch the target correctly. You should patch MongoClient in the my_numbers.py module. For more info, see where-to-patch.
E.g.
my_numbers.py:
from pymongo import MongoClient


def get_count():
    client_int = MongoClient('abc.xyz.com', port=27010)
    return client_int
test_my_numbers.py:
import unittest
from unittest.mock import patch

import my_numbers


class TestNumbers(unittest.TestCase):
    @patch('my_numbers.MongoClient')
    def test_get_count(self, mocked_object):
        mocked_object.return_value = [{'1': 'data'}]
        assert my_numbers.get_count() == [{'1': 'data'}]
        # verifies MongoClient was called with the expected arguments
        mocked_object.assert_called_once_with('abc.xyz.com', port=27010)


if __name__ == '__main__':
    unittest.main()
unit test result:
⚡ coverage run /Users/dulin/workspace/github.com/mrdulin/python-codelab/src/stackoverflow/66852436/test_my_numbers.py && coverage report -m --include='./src/**'
.
----------------------------------------------------------------------
Ran 1 test in 0.001s
OK
Name Stmts Miss Cover Missing
-----------------------------------------------------------------------------
src/stackoverflow/66852436/my_numbers.py 4 0 100%
src/stackoverflow/66852436/test_my_numbers.py 11 0 100%
-----------------------------------------------------------------------------
TOTAL 15 0 100%
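One small aside about the call assertion in the test above (my note, not part of the original answer): Mock auto-creates attributes, so a misspelled assertion method is silently accepted and never fails, whereas assert_called_once_with actually checks the recorded call:

from unittest.mock import Mock

m = Mock()
m('abc.xyz.com', port=27010)

m.called_once_with_value('wrong-host')                # no error: just another auto-created mock call
m.assert_called_once_with('abc.xyz.com', port=27010)  # passes; raises AssertionError on a mismatch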
I am trying to capture the return value of a pytest test. I am running these tests programmatically, and I want to return relevant information when a test fails.
I thought I could perhaps return the value of kernel as follows such that I can print that information later when listing failed tests:
def test_eval(test_input, expected):
    kernel = os.system("uname -r")
    assert eval(test_input) == expected, kernel
This doesn't work. When I later loop through the TestReports which are generated, there is no way to access any return information. The only information available in the TestReport is the name of the test and a True/False.
For example one of the test reports looks as follows:
<TestReport 'test_simulation.py::test_host_has_correct_kernel_version[simulation-host]' when='call' outcome='failed'>
Is there a way to return a value after the assert fails, back to the TestReport? I have tried doing this with PyTest plugins but have been unsuccessful.
Here is the code I am using to run the tests programmatically. You can see where I am trying to access the return value.
import pytest

from util import bcolors


class Plugin:
    def __init__(self):
        self.passed_tests = set()
        self.skipped_tests = set()
        self.failed_tests = set()
        self.unknown_tests = set()

    def pytest_runtest_logreport(self, report):
        print(report)
        if report.passed:
            self.passed_tests.add(report)
        elif report.skipped:
            self.skipped_tests.add(report)
        elif report.failed:
            self.failed_tests.add(report)
        else:
            self.unknown_tests.add(report)


if __name__ == "__main__":
    plugin = Plugin()
    pytest.main(["-s", "-p", "no:terminal"], plugins=[plugin])

    for passed in plugin.passed_tests:
        result = passed.nodeid
        print(bcolors.OKGREEN + "[OK]\t" + bcolors.ENDC + result)
    for skipped in plugin.skipped_tests:
        result = skipped.nodeid
        print(bcolors.OKBLUE + "[SKIPPED]\t" + bcolors.ENDC + result)
    for failed in plugin.failed_tests:
        result = failed.nodeid
        print(bcolors.FAIL + "[FAIL]\t" + bcolors.ENDC + result)
    for unknown in plugin.unknown_tests:
        result = unknown.nodeid
        print(bcolors.FAIL + "[FAIL]\t" + bcolors.ENDC + result)
The goal is to be able to print out "extra context information" when printing the FAILED tests, so that there is information immediately available to help debug why the test is failing.
You can extract failure details from the raised AssertionError in a custom pytest_exception_interact hookimpl. Example:
# conftest.py

def pytest_exception_interact(node, call, report):
    # the assertion message should be parsed here,
    # because pytest rewrites assert statements in bytecode
    message = call.excinfo.value.args[0]
    lines = message.split()
    kernel = lines[0]
    report.sections.append((
        'Kernels reported in assert failures:',
        f'{report.nodeid} reported {kernel}'
    ))
Running a test module
import subprocess


def test_bacon():
    assert True


def test_eggs():
    kernel = subprocess.run(
        ["uname", "-r"],
        stdout=subprocess.PIPE,
        text=True
    ).stdout
    assert 0 == 1, kernel
yields:
test_spam.py::test_bacon PASSED [ 50%]
test_spam.py::test_eggs FAILED [100%]
=================================== FAILURES ===================================
__________________________________ test_eggs ___________________________________
    def test_eggs():
        kernel = subprocess.run(
            ["uname", "-r"],
            stdout=subprocess.PIPE,
            text=True
        ).stdout
>       assert 0 == 1, kernel
E       AssertionError: 5.5.15-200.fc31.x86_64
E
E       assert 0 == 1
E         +0
E         -1
test_spam.py:12: AssertionError
--------------------- Kernels reported in assert failures: ---------------------
test_spam.py::test_eggs reported 5.5.15-200.fc31.x86_64
=========================== short test summary info ============================
FAILED test_spam.py::test_eggs - AssertionError: 5.5.15-200.fc31.x86_64
========================= 1 failed, 1 passed in 0.05s ==========================
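An alternative sketch (my own, not part of the answer above), assuming a reasonably recent pytest: the built-in record_property fixture attaches key/value pairs to the report as report.user_properties, which the Plugin class from the question can read when printing failed tests.

# test module -- stash the kernel on the report via record_property
import subprocess


def test_kernel_version(record_property):
    kernel = subprocess.run(
        ["uname", "-r"], stdout=subprocess.PIPE, text=True
    ).stdout.strip()
    record_property("kernel", kernel)  # stored as ("kernel", value) in report.user_properties
    assert False, "forced failure so the report lands in failed_tests"

The loop over plugin.failed_tests can then pull the value back out, for example:

for failed in plugin.failed_tests:
    extra = dict(failed.user_properties)  # e.g. {"kernel": "5.5.15-200.fc31.x86_64"}
    print("[FAIL]\t%s  kernel=%s" % (failed.nodeid, extra.get("kernel", "?")))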
When a test is xfailed, the printed reason includes the test file, test class, and test case, while a skipped test reports only the test file and the line where skip was called.
Here is a test example:
#!/usr/bin/env pytest
import pytest


@pytest.mark.xfail(reason="Reason of failure")
def test_1():
    pytest.fail("This will fail here")


@pytest.mark.skip(reason="Reason of skipping")
def test_2():
    pytest.fail("This will fail here")
This is the actual result:
pytest test_file.py -rsx
============================= test session starts =============================
platform linux -- Python 3.5.2, pytest-4.4.1, py-1.7.0, pluggy-0.9.0
rootdir: /home/ashot/questions
collected 2 items
test_file.py xs [100%]
=========================== short test summary info ===========================
SKIPPED [1] test_file.py:9: Reason of skipping
XFAIL test_file.py::test_1
Reason of failure
==================== 1 skipped, 1 xfailed in 0.05 seconds =====================
But I would expect to get something like:
pytest test_file.py -rsx
============================= test session starts =============================
platform linux -- Python 3.5.2, pytest-4.4.1, py-1.7.0, pluggy-0.9.0
rootdir: /home/ashot/questions
collected 2 items
test_file.py xs [100%]
=========================== short test summary info ===========================
XFAIL test_file.py::test_1: Reason of failure
SKIPPED test_file.py::test_2: Reason of skipping
==================== 1 skipped, 1 xfailed in 0.05 seconds =====================
You have two possible ways to achieve this. The quick and dirty way: just redefine _pytest.skipping.show_xfailed in your test_file.py:
import _pytest


def custom_show_xfailed(terminalreporter, lines):
    xfailed = terminalreporter.stats.get("xfailed")
    if xfailed:
        for rep in xfailed:
            pos = terminalreporter.config.cwd_relative_nodeid(rep.nodeid)
            reason = rep.wasxfail
            s = "XFAIL %s" % (pos,)
            if reason:
                s += ": " + str(reason)
            lines.append(s)

# show_xfailed_bkp = _pytest.skipping.show_xfailed
_pytest.skipping.show_xfailed = custom_show_xfailed

# ... your tests
The (not so) clean way: create a conftest.py file in the same directory as your test_file.py, and add a hook:
import pytest
import _pytest


def custom_show_xfailed(terminalreporter, lines):
    xfailed = terminalreporter.stats.get("xfailed")
    if xfailed:
        for rep in xfailed:
            pos = terminalreporter.config.cwd_relative_nodeid(rep.nodeid)
            reason = rep.wasxfail
            s = "XFAIL %s" % (pos,)
            if reason:
                s += ": " + str(reason)
            lines.append(s)


@pytest.hookimpl(tryfirst=True)
def pytest_terminal_summary(terminalreporter):
    tr = terminalreporter
    if not tr.reportchars:
        return
    lines = []
    for char in tr.reportchars:
        if char == "x":
            custom_show_xfailed(terminalreporter, lines)
        elif char == "X":
            _pytest.skipping.show_xpassed(terminalreporter, lines)
        elif char in "fF":
            _pytest.skipping.show_simple(terminalreporter, lines, 'failed', "FAIL %s")
        elif char in "sS":
            _pytest.skipping.show_skipped(terminalreporter, lines)
        elif char == "E":
            _pytest.skipping.show_simple(terminalreporter, lines, 'error', "ERROR %s")
        elif char == 'p':
            _pytest.skipping.show_simple(terminalreporter, lines, 'passed', "PASSED %s")
    if lines:
        tr._tw.sep("=", "short test summary info")
        for line in lines:
            tr._tw.line(line)
    tr.reportchars = []  # to avoid further output
The second method is overkill, because you have to redefine the whole pytest_terminal_summary.
Thanks to this answer I've found the following solution that works perfectly for me.
I've created conftest.py file in the root of my test suite with the following content:
import _pytest.skipping as s


def show_xfailed(tr, lines):
    for rep in tr.stats.get("xfailed", []):
        pos = tr.config.cwd_relative_nodeid(rep.nodeid)
        reason = rep.wasxfail
        line = "XFAIL\t%s" % pos
        if reason:
            line += ": " + str(reason)
        lines.append(line)

s.REPORTCHAR_ACTIONS["x"] = show_xfailed


def show_skipped(tr, lines):
    for rep in tr.stats.get("skipped", []):
        pos = tr.config.cwd_relative_nodeid(rep.nodeid)
        reason = rep.longrepr[-1]
        if reason.startswith("Skipped: "):
            reason = reason[9:]
        verbose_word = s._get_report_str(tr.config, report=rep)
        lines.append("%s\t%s: %s" % (verbose_word, pos, reason))

s.REPORTCHAR_ACTIONS["s"] = show_skipped
s.REPORTCHAR_ACTIONS["S"] = show_skipped
And now I'm getting the following output:
./test_file.py -rsx
============================= test session starts =============================
platform linux -- Python 3.5.2, pytest-4.4.1, py-1.7.0, pluggy-0.9.0
rootdir: /home/ashot/questions
collected 2 items
test_file.py xs [100%]
=========================== short test summary info ===========================
SKIPPED test_file.py::test_2: Reason of skipping
XFAIL test_file.py::test_1: Reason of failure
==================== 1 skipped, 1 xfailed in 0.05 seconds =====================
I want to get a list of all tests (e.g. in the form of a py.test TestReport) at the end of all tests.
I know that pytest_runtest_makereport does something similar, but only for a single test. I want to implement a hook or something in conftest.py to process the whole list of tests before the py.test application terminates.
Is there a way to do this?
Here is an example which can help you. Structure of files:
/example:
    __init__.py        # empty file
    /test_pack_1
        __init__.py    # empty file
        conftest.py    # pytest hooks
        test_my.py     # a few tests for demonstration
There are 2 tests in test_my.py:
def test_one():
    assert 1 == 1
    print('1==1')


def test_two():
    assert 1 == 2
    print('1!=2')
Example of conftest.py:
import pytest
from _pytest.runner import TestReport
from _pytest.terminal import TerminalReporter


@pytest.hookimpl(hookwrapper=True)
def pytest_terminal_summary(terminalreporter):  # type: (TerminalReporter) -> generator
    yield
    # you can do anything here - I just print report info
    print('*' * 8 + 'HERE CUSTOM LOGIC' + '*' * 8)
    for failed in terminalreporter.stats.get('failed', []):  # type: TestReport
        print('failed! node_id:%s, duration: %s, details: %s' % (failed.nodeid,
                                                                 failed.duration,
                                                                 str(failed.longrepr)))
    for passed in terminalreporter.stats.get('passed', []):  # type: TestReport
        print('passed! node_id:%s, duration: %s, details: %s' % (passed.nodeid,
                                                                 passed.duration,
                                                                 str(passed.longrepr)))
The documentation says that pytest_terminal_summary also receives an exitstatus argument.
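For reference, a minimal sketch of the hook with that extra argument (exitstatus is part of the hook signature; newer pytest releases also pass a config parameter, which I leave out here):

import pytest


@pytest.hookimpl(hookwrapper=True)
def pytest_terminal_summary(terminalreporter, exitstatus):
    yield
    terminalreporter.write_line("session finished with exit status %s" % exitstatus)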
Run tests without any additional options: py.test ./example. Example of output:
example/test_pack_1/test_my.py .F
********HERE CUSTOM LOGIC********
failed! node_id:test_pack_1/test_my.py::test_two, duration: 0.000385999679565, details: def test_two():
>       assert 1 == 2
E       assert 1 == 2

example/test_pack_1/test_my.py:7: AssertionError
passed! node_id:test_pack_1/test_my.py::test_one, duration: 0.00019907951355, details: None
=================================== FAILURES ===================================
___________________________________ test_two ___________________________________

    def test_two():
>       assert 1 == 2
E       assert 1 == 2

example/test_pack_1/test_my.py:7: AssertionError
====================== 1 failed, 1 passed in 0.01 seconds ======================
Hope this helps.
Note! Make sure that .pyc files were removed before running the tests.
I have a set of nose tests which I use to test a piece of hardware. For example, the test below is concerned with testing the alarm for each mode on the system:
import target

modes = ("start", "stop", "restart", "stage1", "stage2")
max_alarm_time = 10


# generate tests for testing each mode
def test_generator():
    for m in modes:
        yield check_alarm, m, max_alarm_time


# test alarm for a mode
def check_alarm(m, max_alarm_time):
    target.set_mode(m)
    assert target.alarm() < max_alarm_time
Most of my tests have this appearance where I am testing a particular function for all modes on the system.
I now wish to use the same set of tests to test a new piece of hardware which has two extra modes:
modes = ("start","stop","restart","stage1","stage2","stage3","stage4")
Of course, I want my tests to still work for the old hardware as well. When running automated tests I will need to hardcode, for the test environment, which hardware I am connected to.
I believe the best way to do this is to create a parameters.py module as follows:
def init(hardware):
    global max_alarm_time
    global modes
    max_alarm_time = 10
    if hardware == "old":
        modes = ("start", "stop", "restart", "stage1", "stage2")
    elif hardware == "new":
        modes = ("start", "stop", "restart", "stage1", "stage2", "stage3", "stage4")
with test_alarms.py now looking like this instead:
import target
import parameters


# generate tests for testing each mode
def test_generator():
    for m in parameters.modes:
        yield check_alarm, m, parameters.max_alarm_time


# test alarm for a mode
def check_alarm(m, max_alarm_time):
    target.set_mode(m)
    assert target.alarm() < max_alarm_time
Then in my main I have the following:
import nose
import parameters
parameters.init("new")
nose.main()
Is this a valid approach in your opinion?
An alternative way to solve a similar problem is to abuse the @attr decorator from the attribute plugin in the following way:
from nose.plugins.attrib import attr

max_alarm_time = 10


# generate tests for testing each mode
@attr(hardware='old')
@attr(modes=("start", "stop", "restart", "stage1", "stage2"))
def test_generator_old():
    for m in test_generator_old.__dict__['modes']:
        yield check_alarm, m, max_alarm_time


@attr(hardware='new')
@attr(modes=("start", "stop", "restart", "stage1", "stage2", "stage3", "stage4"))
def test_generator_new():
    for m in test_generator_new.__dict__['modes']:
        yield check_alarm, m, max_alarm_time


# test alarm for a mode
def check_alarm(m, max_alarm_time):
    print "mode=", m
You can immediately switch between 'old' and 'new', like this:
$ nosetests modes_test.py -a hardware=new -v
modes_test.test_generator_new('start', 10) ... ok
modes_test.test_generator_new('stop', 10) ... ok
modes_test.test_generator_new('restart', 10) ... ok
modes_test.test_generator_new('stage1', 10) ... ok
modes_test.test_generator_new('stage2', 10) ... ok
modes_test.test_generator_new('stage3', 10) ... ok
modes_test.test_generator_new('stage4', 10) ... ok
----------------------------------------------------------------------
Ran 7 tests in 0.020s
OK
And the old one:
$ nosetests modes_test.py -a hardware=old -v
modes_test.test_generator_old('start', 10) ... ok
modes_test.test_generator_old('stop', 10) ... ok
modes_test.test_generator_old('restart', 10) ... ok
modes_test.test_generator_old('stage1', 10) ... ok
modes_test.test_generator_old('stage2', 10) ... ok
----------------------------------------------------------------------
Ran 5 tests in 0.015s
OK
Also, although I have not played with it that much, nose-testconfig could help you to do the same trick.
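For what it's worth, here is a rough, hypothetical sketch of the nose-testconfig route, assuming its usual API of a testconfig.config dict populated from the command line (treat the exact --tc option syntax as an assumption):

# modes_test.py -- hypothetical sketch using nose-testconfig
import target
from testconfig import config  # assumption: provided by the nose-testconfig plugin

max_alarm_time = 10

MODES = {
    "old": ("start", "stop", "restart", "stage1", "stage2"),
    "new": ("start", "stop", "restart", "stage1", "stage2", "stage3", "stage4"),
}


def test_generator():
    # e.g. nosetests modes_test.py --tc=hardware:new  (assumed option syntax)
    hardware = config.get("hardware", "old")
    for m in MODES[hardware]:
        yield check_alarm, m, max_alarm_time


def check_alarm(m, max_alarm_time):
    target.set_mode(m)
    assert target.alarm() < max_alarm_time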