I've been searching on this one for a while and was surprised not to find much. I'm currently working away with pytest and looking to improve the level of detail reported for passed tests.
The aim is to report the individual tests that passed alongside the failures, with the same level of detail. Using the example from the pytest site for a failed test:
$ pytest
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items
test_sample.py F
======= FAILURES ========
_______ test_answer ________
def test_answer():
> assert func(3) == 5
E assert 4 == 5
E + where 4 = func(3)
test_sample.py:5: AssertionError
======= 1 failed in 0.12 seconds ========
I'm looking for a way for the passed tests to be reported in a similar manner, possibly with custom text?
If not, a way to add custom text to the end report would suffice.
Is this possible, or am I trying something here that's not correct?
Cheers,
R.
py.test -s shows the stdout of successful tests.
This is not like the failure output in the example above, because on a successful pass no asserts fire.
So you would see just whatever your test writes to stdout on a successful pass.
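As a rough sketch (the file, function, and print text here are made up for illustration), a passing test that prints something will show that output inline when capturing is disabled:

# test_sample.py
def func(x):
    return x + 1

def test_answer():
    result = func(3)
    print("func(3) returned", result)  # printed to stdout; visible when run with -s
    assert result == 4

Running pytest -s test_sample.py then shows the print output interleaved with the usual pass/fail summary.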
Related
I have multiple test functions in a file. Example:
def testA():
    change_user_permission_to_allow()
    assert action == success
    change_user_permission_to_deny()

def testB():
    assert action == fail

# Multiple other tests...
By default, the user is denied the action. When I run testB individually, it passes. But when I run the test file as a whole:
pytest testfile.py
testB fails. When I debug, the user permission is allowed, so it seems testA is causing the issue in testB. Is there a way to tell pytest to run the tests one after another?
You can read the pytest-ordering docs: run your tests in order.
With pytest-ordering, you can change the default ordering as follows:
import pytest

@pytest.mark.order2
def test_foo():
    assert True

@pytest.mark.order1
def test_bar():
    assert True
$ py.test test_foo.py -vv
============================= test session starts ==============================
platform darwin -- Python 2.7.5 -- py-1.4.20 -- pytest-2.5.2 -- env/bin/python
plugins: ordering
collected 2 items
test_foo.py:7: test_bar PASSED
test_foo.py:3: test_foo PASSED
=========================== 2 passed in 0.01 seconds ===========================
I was looking for this also. The stepwise options help here:
--sw, --stepwise            exit on test failure and continue from last failing
                            test next time
--sw-skip, --stepwise-skip  ignore the first failing test but stop on the next
                            failing test.
                            implicitly enables --stepwise.
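For example (the file name here is just a placeholder), you would run:

$ pytest --sw testfile.py

fix the failure it stops on, and rerun the same command; pytest then continues from the last failing test instead of starting over.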
Currently in the company where I'm working, we have a framework to run tests. We want to integrate pytest to be able to write tests in the pytest way, but we need the old framework for all the things it's doing in the background.
The issue I'm facing is regarding assertions. Currently we have a bunch of assertion functions. All of them use a private method to write both to python logging and to a json file. I would like to get rid of them and use only "assert".
What I have done until now is monkeypatch _pytest.assertion.rewrite.py with a custom module I created, where I changed the visit_Assert method and added this piece of code after line 873:
if isinstance(assert_.test, ast.Compare):
    test_value = BINOP_MAP[assert_.test.ops[0].__class__]
    test_type = "Comparison"
elif isinstance(assert_.test, ast.Call):
    test_value = str(assert_.test.func.id)
    test_type = "FunctionCall"
And then I call the same private method I mentioned above to save the results.
As you can guess, I don't think this is the best way to do it: is there a better way?
I tried the different hooks, but could not find the information I need (what comparison the assert is doing), especially because pytest is very informative when tests fail (which makes sense), but not so rich in information when tests pass.
It depends a bit on which version of Pytest you're using, since the hooks are under pretty active development. But in any relatively recent version, you could implement the hook pytest_assertrepr_compare, which is called to report custom error messages on asserts that fail. This method can be defined in conftest.py, and pytest will happily use that definition.
A method like this:
def pytest_assertrepr_compare(config, op, left, right):
    print("Call legacy method here")
    return None
Would instruct pytest that no custom error messages are required (that's the return None part), but it would allow you to call arbitrary code on assert failures.
As an example, running pytest on a dummy test file, foo.py, with contents:
def test_foo():
    assert 0 == 1, "No bueno"
Should give the following output on your terminal:
================================================= test session starts ==================================================
platform darwin -- Python 3.9.0, pytest-6.1.2, py-1.9.0, pluggy-0.13.1 -- /usr/local/opt/python@3.9/bin/python3.9
cachedir: .pytest_cache
rootdir: /Users/bnaecker/tmp
plugins: cov-2.10.1
collected 1 item
foo.py::test_foo FAILED [100%]
======================================================= FAILURES =======================================================
_______________________________________________________ test_foo _______________________________________________________
def test_foo():
> assert 0 == 1, "No bueno"
E AssertionError: No bueno
E assert 0 == 1
E +0
E -1
foo.py:6: AssertionError
------------------------------------------------- Captured stdout call -------------------------------------------------
Call legacy method here
=============================================== short test summary info ================================================
FAILED foo.py::test_foo - AssertionError: No bueno
================================================== 1 failed in 0.10s ===================================================
The captured stdout is a stand-in for calling your custom logging function. Also, note I'm using pytest-6.1.2, and it's not clear when this hook was introduced. Other similar hooks were introduced in 5.0, so it's plausible that anything >= 6.0 would be fine, but YMMV.
Rereading your question, it occurs to me that you might be asking more specifically about how to call your custom method when an assertion passes, rather than when it fails. In that case, the experimental hook pytest_assertion_pass may be what you're looking for. The setup is the same: just implement that method in your conftest.py instead.
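A minimal sketch of that variant, assuming a pytest version recent enough to ship the hook (the print is a stand-in for your legacy logging method):

# conftest.py
def pytest_assertion_pass(item, lineno, orig, expl):
    # orig is the original assertion source, expl is pytest's explanation of it
    print(f"PASSED {item.nodeid}:{lineno}: {orig} -> {expl}")  # call your legacy logger here instead

The hook is experimental and has to be switched on explicitly in the ini file:

# pytest.ini
[pytest]
enable_assertion_pass_hook = true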
I'm more familiar with the Google Test framework and the primary behavior pair it supports, ASSERT_* vs. EXPECT_*, which are the fatal and non-fatal assert modes.
From the documentation:
The assertions come in pairs that test the same thing but have
different effects on the current function. ASSERT_* versions generate
fatal failures when they fail, and abort the current function.
EXPECT_* versions generate nonfatal failures, which don't abort the
current function. Usually EXPECT_* are preferred, as they allow more
than one failures to be reported in a test. However, you should use
ASSERT_* if it doesn't make sense to continue when the assertion in
question fails.
Question: does pytest also have a non-fatal assert flavor or mode I can enable?
It's nice to allow a full range of tests to maximally execute to get the richest failure history rather than abort at the first failure and potentially hide subsequent failures that have to be discovered piecewise by running multiple instances of the test application.
I use pytest-assume for non-fatal assertions. It does the job pretty well.
Installation
As usual,
$ pip install pytest-assume
Usage example
import pytest


def test_spam():
    pytest.assume(True)
    pytest.assume(False)

    a, b = True, False
    pytest.assume(a == b)

    pytest.assume(1 == 0)
    pytest.assume(1 < 0)
    pytest.assume('')
    pytest.assume([])
    pytest.assume({})
If you feel writing pytest.assume is a bit too much, just alias the import:
from pytest import assume as expect

def test_spam():
    expect(True)
    ...
Running the above test yields:
$ pytest -v
============================= test session starts ==============================
platform linux -- Python 3.6.5, pytest-3.6.0, py-1.5.3, pluggy-0.6.0 -- /data/gentoo64-prefix/u0_a82/projects/stackoverflow/so-50630845
cachedir: .pytest_cache
rootdir: /data/gentoo64-prefix/u0_a82/projects/stackoverflow/so-50630845, inifile:
plugins: assume-1.2
collecting ... collected 1 item
test_spam.py::test_spam FAILED [100%]
=================================== FAILURES ===================================
__________________________________ test_spam ___________________________________
test_spam.py:6: AssumptionFailure
pytest.assume(False)
test_spam.py:9: AssumptionFailure
pytest.assume(a == b)
test_spam.py:11: AssumptionFailure
pytest.assume(1 == 0)
test_spam.py:12: AssumptionFailure
pytest.assume(1 < 0)
test_spam.py:13: AssumptionFailure
pytest.assume('')
test_spam.py:14: AssumptionFailure
pytest.assume([])
test_spam.py:15: AssumptionFailure
pytest.assume({})
------------------------------------------------------------
Failed Assumptions: 7
=========================== 1 failed in 0.18 seconds ===========================
No, there is no feature like that in pytest. The most popular approach is to use regular assert statements, which fail the test immediately if the expression is falsy.
It's nice to allow a full range of tests to maximally execute to get the richest failure history rather than abort at the first failure and potentially hide subsequent failures that have to be discovered piecewise by running multiple instances of the test application.
Opinions differ on whether this is nice or not. In the open source Python community, at least, the popular approach is: every potential "subsequent failure that is discovered piecewise" would be written in its own separate test. More tests, smaller tests, that (ideally) only assert on one thing.
You could easily recreate the EXPECT_* thing by appending to a list of errors and then asserting the list is empty at the end of the test, but there is no support directly in pytest for such a feature.
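A minimal sketch of that pattern (the checks and messages here are placeholders):

def test_expect_style():
    errors = []

    # non-fatal checks: record a message instead of failing immediately
    if 2 + 2 != 4:
        errors.append("2 + 2 should be 4")
    if len("abc") != 3:
        errors.append("'abc' should have length 3")

    # one fatal assert at the end reports everything collected above
    assert not errors, "\n".join(errors)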
I'm starting a new project and trying to follow strict Test-Driven Development. I have a basic setup in place and working and am using pytest to run tests.
Tests are discovered and run correctly. They fail when they should and pass when they should. But in pytest's results, the number of tests performed is zero. This isn't a big deal, but I would like the visual feedback that confirms the test file is being run.
Failing:
============================= test session starts =============================
...
collected 0 items / 1 errors
=================================== ERRORS ====================================
___________ ERROR collecting tests/functional_tests/test_package.py ___________
...
=========================== 1 error in 0.05 seconds ===========================
Passing:
============================= test session starts =============================
...
collected 0 items
======================== no tests ran in 0.03 seconds =========================
For the record, my first functional test is just importing the package.
# Can we import the package?
import packagename
assert packagename is not None
The slightly redundant assert was my attempt at getting pytest to count this as "a test", since I know it rewrites assert to be more informative.
The test is run correctly, but the test session doesn't count this as being a test. I don't much care how it counts the tests (the whole file is one, each assert is one, whatever), but I would like it to do so!
Put it in a function.
Okay, after playing around some more, I may have solved my own problem. This, for instance, works (the whole function gets counted as one test).
def test_import():
    # Can we import the package?
    import packagename
I'll leave the question open to see if anyone has a better answer.
Here is a simple test file:
# test_single.py
def test_addition():
    "Two plus two is still four"
    assert 2 + 2 == 4

def test_addition2():
    "One plus one is still two"
    assert 1 + 1 == 2
The default output from py.test looks like:
$ py.test test_single.py -v
[...]
test_single.py::test_addition PASSED
test_single.py::test_addition2 PASSED
I would like to have
Two plus two is still four PASSED
One plus one is still two PASSED
i.e. use the docstrings as descriptions for the tests.
I tried to use a customization in a conftest.py file:
import pytest

@pytest.mark.tryfirst
def pytest_runtest_makereport(item, call, __multicall__):
    # execute all other hooks to obtain the report object
    rep = __multicall__.execute()
    if rep.when == "call":
        extra = item._obj.__doc__.strip()
        rep.nodeid = extra
    return rep
that is close, but it repeats the filename on every line:
$ py.test test_single.py
======================================================================================== test session starts =========================================================================================
platform darwin -- Python 2.7.7 -- py-1.4.26 -- pytest-2.6.4
plugins: greendots, osxnotify, pycharm
collected 2 items
test_single.py
And two plus two is still four .
test_single.py
And one plus one is still two .
====================================================================================== 2 passed in 0.11 seconds ======================================================================================
How can I avoid the lines with test_single.py in the output, or maybe print it only once?
Looking into the source of py.test and some of its plugins did not help.
I am aware of the pytest-spec plugin, but that uses the function's name as a description. I don't want to write def test_two_plus_two_is_four().
To expand on my comment to @michael-wan's answer: to achieve something similar to the pytest-spec plugin, put this into conftest.py:
def pytest_itemcollected(item):
    par = item.parent.obj
    node = item.obj
    pref = par.__doc__.strip() if par.__doc__ else par.__class__.__name__
    suf = node.__doc__.strip() if node.__doc__ else node.__name__
    if pref or suf:
        item._nodeid = ' '.join((pref, suf))
and the pytest output of
class TestSomething:
    """Something"""

    def test_ok(self):
        """should be ok"""
        pass
will use the joined docstrings, Something should be ok, as the reported test id.
If you omit the docstrings, the class/function names will be used.
I was missing Ruby's RSpec in Python. So, based on the pytest-testdox plugin, I have written a similar one that takes docstrings as the report message. You can check it out: pytest-pspec.
For a plugin that (I think) does what you want out of the box, check out pytest-testdox.
It provides a friendly formatted list of each test function name, with test_ stripped and underscores replaced with spaces, so that the test names are readable. It also breaks up the sections by test file.
@Matthias Berth, you can try to use pytest_itemcollected:
def pytest_itemcollected(item):
    """ we just collected a test item. """
    item.setNodeid('' if item._obj.__doc__ is None else item._obj.__doc__.strip())
and modify pydir/Lib/site-packages/pytest-2.9.1-py2.7.egg/_pytest/unittest.py, adding the following function to the TestCaseFunction class:
def setNodeid(self, value):
    self._nodeid = value
and the result will be :
platform win32 -- Python 2.7.10, pytest-2.9.1, py-1.4.31, pluggy-0.3.1 -- D:\Python27\python.exe
cachedir: .cache
rootdir: E:\workspace\satp2\atest\testcase\Search\grp_sp, inifile:
plugins: html-1.8.0, pep8-1.0.6
collecting 0 itemsNone
collected 2 items
Two plus two is still four <- sut_amap3.py PASSED
One plus one is still two <- sut_amap3.py PASSED
By the way, when you are using pytest-html, you can use the pytest_runtest_makereport function you wrote and it will generate the report with the names you customized.
Hope this helps.
I wanted to do the same, but in a simpler way: preferably without an external plugin that does more than needed, and without changing the nodeid, since that could break other things.
I came up with the following solution:
test_one.py
import logging
logger = logging.getLogger(__name__)
def test_one():
    """ The First test does something """
    logger.info("One")

def test_two():
    """ Now this Second test tests other things """
    logger.info("Two")

def test_third():
    """ Third test is basically checking crazy stuff """
    logger.info("Three")
conftest.py
import pytest
import inspect


@pytest.mark.trylast
def pytest_configure(config):
    terminal_reporter = config.pluginmanager.getplugin('terminalreporter')
    config.pluginmanager.register(TestDescriptionPlugin(terminal_reporter), 'testdescription')


class TestDescriptionPlugin:

    def __init__(self, terminal_reporter):
        self.terminal_reporter = terminal_reporter
        self.desc = None

    def pytest_runtest_protocol(self, item):
        self.desc = inspect.getdoc(item.obj)

    @pytest.hookimpl(hookwrapper=True, tryfirst=True)
    def pytest_runtest_logstart(self, nodeid, location):
        if self.terminal_reporter.verbosity == 0:
            yield
        else:
            self.terminal_reporter.write('\n')
            yield
            if self.desc:
                self.terminal_reporter.write(f'\n{self.desc} ')
Running with --verbose
============================= test session starts =============================
platform win32 -- Python 3.8.2, pytest-5.4.1.dev62+g2d9dac95e, py-1.8.1, pluggy-0.13.1 -- C:\Users\Victor\PycharmProjects\pytest\venv\Scripts\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\Victor\PycharmProjects\pytest, inifile: tox.ini
collecting ... collected 3 items
test_one.py::test_one
The First test does something PASSED [ 33%]
test_one.py::test_two
Now this Second test tests other things PASSED [ 66%]
test_one.py::test_third
Third test is basically checking crazy stuff PASSED [100%]
============================== 3 passed in 0.07s ==============================
Running with --log-cli-level=INFO
============================= test session starts =============================
platform win32 -- Python 3.8.2, pytest-5.4.1.dev62+g2d9dac95e, py-1.8.1, pluggy-0.13.1
rootdir: C:\Users\Victor\PycharmProjects\pytest, inifile: tox.ini
collected 3 items
test_one.py::test_one
The First test does something
-------------------------------- live log call --------------------------------
INFO test_one:test_one.py:7 One
PASSED [ 33%]
test_one.py::test_two
Now this Second test tests other things
-------------------------------- live log call --------------------------------
INFO test_one:test_one.py:11 Two
PASSED [ 66%]
test_one.py::test_third
Third test is basically checking crazy stuff
-------------------------------- live log call --------------------------------
INFO test_one:test_one.py:15 Three
PASSED [100%]
============================== 3 passed in 0.07s ==============================
The plugin in conftest.py is probably simple enough for anyone to customize according to their own needs.