Use docstrings to list tests in py.test - python

Here is a simple test file:
# test_single.py

def test_addition():
    "Two plus two is still four"
    assert 2 + 2 == 4

def test_addition2():
    "One plus one is still two"
    assert 1 + 1 == 2
The default output in py.test is like
$ py.test test_single.py -v
[...]
test_single.py::test_addition PASSED
test_single.py::test_addition2 PASSED
I would like to have
Two plus two is still four PASSED
One plus one is still two PASSED
i.e. use the docstrings as descriptions for the tests.
I tried to use a customization in a conftest.py file:
import pytest

@pytest.mark.tryfirst
def pytest_runtest_makereport(item, call, __multicall__):
    # execute all other hooks to obtain the report object
    rep = __multicall__.execute()
    if rep.when == "call":
        extra = item._obj.__doc__.strip()
        rep.nodeid = extra
    return rep
that is close, but it repeats the filename on every line:
$ py.test test_single.py
======================================================================================== test session starts =========================================================================================
platform darwin -- Python 2.7.7 -- py-1.4.26 -- pytest-2.6.4
plugins: greendots, osxnotify, pycharm
collected 2 items
test_single.py
And two plus two is still four .
test_single.py
And one plus one is still two .
====================================================================================== 2 passed in 0.11 seconds ======================================================================================
How can I avoid the lines with test_single.py in the output, or maybe print it only once?
Looking into the source of py.test and some of its plugins did not help.
I am aware of the pytest-spec plugin, but that uses the function's name as a description. I don't want to write def test_two_plus_two_is_four().

To expand on my comment to @michael-wan's answer: to achieve something similar to the spec plugin, put this into conftest.py:
def pytest_itemcollected(item):
    par = item.parent.obj
    node = item.obj
    pref = par.__doc__.strip() if par.__doc__ else par.__class__.__name__
    suf = node.__doc__.strip() if node.__doc__ else node.__name__
    if pref or suf:
        item._nodeid = ' '.join((pref, suf))
and the pytest output of

class TestSomething:
    """Something"""

    def test_ok(self):
        """should be ok"""
        pass
will then show the joined docstrings, e.g. Something should be ok, instead of the usual node ids. If you omit docstrings, the class/function names will be used instead.
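If your test files also contain plain module-level functions (not only classes), item.parent.obj is then the module object, so the class-name fallback above would print "module" as the prefix. A minimal sketch of a variant that drops the prefix in that case, assuming all collected items are regular test functions or methods:

# conftest.py - hedged variant of the hook above, untested:
# skip the parent prefix when the parent is a module rather than a class.
import types

def pytest_itemcollected(item):
    node = item.obj
    suf = node.__doc__.strip() if node.__doc__ else node.__name__
    par = getattr(item.parent, "obj", None)
    if par is not None and not isinstance(par, types.ModuleType):
        pref = par.__doc__.strip() if par.__doc__ else par.__class__.__name__
        item._nodeid = " ".join((pref, suf))
    else:
        item._nodeid = suf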

I was missing Ruby's rspec in Python. So, based on the plugin pytest-testdox, I have written a similar one that uses docstrings as the report message. You can check it out: pytest-pspec.

For a plugin that (I think) does what you want out of the box, check out pytest-testdox.
It provides a friendly formatted list of each test function name, with test_ stripped and underscores replaced with spaces, so that the test names are readable. It also breaks up the sections by test file.

@Matthias Berth, you can try to use pytest_itemcollected:
def pytest_itemcollected(item):
    """ we just collected a test item. """
    item.setNodeid('' if item._obj.__doc__ is None else item._obj.__doc__.strip())
and modify pydir/Lib/site-packages/pytest-2.9.1-py2.7.egg/_pytest/unittest.py, adding the following method to the TestCaseFunction class:

def setNodeid(self, value):
    self._nodeid = value
and the result will be :
platform win32 -- Python 2.7.10, pytest-2.9.1, py-1.4.31, pluggy-0.3.1 -- D:\Python27\python.exe
cachedir: .cache
rootdir: E:\workspace\satp2\atest\testcase\Search\grp_sp, inifile:
plugins: html-1.8.0, pep8-1.0.6
collecting 0 itemsNone
collected 2 items
Two plus two is still four <- sut_amap3.py PASSED
One plus one is still two <- sut_amap3.py PASSED
By the way, when you are using pytest-html, you can use the pytest_runtest_makereport function you wrote and it will generate the report with the names you customized.
Hope this helps.
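As a side note, the pytest_runtest_makereport customization from the original question relies on the old __multicall__ argument, which recent pytest versions no longer support. A hedged sketch of the same idea written as a hookwrapper (the modified report should also be what pytest-html picks up):

# conftest.py - hookwrapper form of the questioner's makereport customization.
# Sketch only: rewriting nodeid may confuse other plugins that key on it.
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield                   # let pytest build the report first
    rep = outcome.get_result()
    doc = getattr(getattr(item, "obj", None), "__doc__", None)
    if rep.when == "call" and doc:
        rep.nodeid = doc.strip()      # show the docstring instead of file::name
    # no return value needed: the report object is modified in place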

I wanted to do the same, but in a simpler way: preferably without an external plugin doing more than needed, and without changing the nodeid, as that could break other things.
I came up with the following solution:
test_one.py

import logging

logger = logging.getLogger(__name__)

def test_one():
    """ The First test does something """
    logger.info("One")

def test_two():
    """ Now this Second test tests other things """
    logger.info("Two")

def test_third():
    """ Third test is basically checking crazy stuff """
    logger.info("Three")
conftest.py

import inspect

import pytest

@pytest.mark.trylast
def pytest_configure(config):
    terminal_reporter = config.pluginmanager.getplugin('terminalreporter')
    config.pluginmanager.register(TestDescriptionPlugin(terminal_reporter), 'testdescription')

class TestDescriptionPlugin:

    def __init__(self, terminal_reporter):
        self.terminal_reporter = terminal_reporter
        self.desc = None

    def pytest_runtest_protocol(self, item):
        self.desc = inspect.getdoc(item.obj)

    @pytest.hookimpl(hookwrapper=True, tryfirst=True)
    def pytest_runtest_logstart(self, nodeid, location):
        if self.terminal_reporter.verbosity == 0:
            yield
        else:
            self.terminal_reporter.write('\n')
            yield
            if self.desc:
                self.terminal_reporter.write(f'\n{self.desc} ')
Running with --verbose
============================= test session starts =============================
platform win32 -- Python 3.8.2, pytest-5.4.1.dev62+g2d9dac95e, py-1.8.1, pluggy-0.13.1 -- C:\Users\Victor\PycharmProjects\pytest\venv\Scripts\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\Victor\PycharmProjects\pytest, inifile: tox.ini
collecting ... collected 3 items
test_one.py::test_one
The First test does something PASSED [ 33%]
test_one.py::test_two
Now this Second test tests other things PASSED [ 66%]
test_one.py::test_third
Third test is basically checking crazy stuff PASSED [100%]
============================== 3 passed in 0.07s ==============================
Running with --log-cli-level=INFO
============================= test session starts =============================
platform win32 -- Python 3.8.2, pytest-5.4.1.dev62+g2d9dac95e, py-1.8.1, pluggy-0.13.1
rootdir: C:\Users\Victor\PycharmProjects\pytest, inifile: tox.ini
collected 3 items
test_one.py::test_one
The First test does something
-------------------------------- live log call --------------------------------
INFO test_one:test_one.py:7 One
PASSED [ 33%]
test_one.py::test_two
Now this Second test tests other things
-------------------------------- live log call --------------------------------
INFO test_one:test_one.py:11 Two
PASSED [ 66%]
test_one.py::test_third
Third test is basically checking crazy stuff
-------------------------------- live log call --------------------------------
INFO test_one:test_one.py:15 Three
PASSED [100%]
============================== 3 passed in 0.07s ==============================
The plugin in conftest.py is probably simple enough for anyone to customize according to their own needs.
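For instance, if you also want a readable line for tests that have no docstring, one possible (untested) tweak is to fall back to a prettified test name; this would replace the pytest_runtest_protocol method of the TestDescriptionPlugin class above:

    # Assumption: item is a regular Function item with a .name attribute.
    def pytest_runtest_protocol(self, item):
        self.desc = inspect.getdoc(item.obj)
        if not self.desc:
            # e.g. "test_some_feature_works" -> "some feature works"
            self.desc = item.name.replace("test_", "", 1).replace("_", " ")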

Related

Pytest a function that prints to stdout

def f(n):
    nuw = n.casefold()
    for i in ["a","e","i","o","u"]:
        nuw = nuw.replace(i,"")
    print(nuw)

if __name__ == '__main__':
    ask = input("Word? ")
    f(ask)
Here is the solution according to the docs:

def f(n):
    nuw = n.casefold()
    for i in ["a","e","i","o","u"]:
        nuw = nuw.replace(i,"")
    print(nuw)

def test_my_func_f(capsys):  # or use "capfd" for fd-level
    f("oren")
    captured = capsys.readouterr()
    assert captured.out == "rn\n"
When you run it, it goes smoothly:
$ pytest --capture=sys main.py
================================================== test session starts ===================================================
platform linux -- Python 3.10.4, pytest-7.1.2, pluggy-1.0.0
rootdir: /home/oren/Downloads/LLL
collected 1 item
main.py . [100%]
=================================================== 1 passed in 0.01s ====================================================
While it's totally possible to test what's actually sent to stdout, it's not the recommended way.
Instead, design your functions so that they return their results, i.e. so that they have no side effects, as another commenter wrote. The advantage is that such "pure functions" are extremely easy to test: they simply give you an output, and it should be the same output for the same input every time.
And then, finally, to actually produce output from your program, do the IO at the top level, e.g. in main() or some other top-level function/file. And sure, you can test this as well, if it's important. But I find that testing the lower-level functions (which is trivially easy to do) gives me enough confidence that my code works.
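To make that advice concrete, here is a hedged sketch of the same vowel-stripping example rewritten as a pure function (the name strip_vowels is just an illustrative rename):

def strip_vowels(n):                          # pure: returns instead of printing
    nuw = n.casefold()
    for i in ["a", "e", "i", "o", "u"]:
        nuw = nuw.replace(i, "")
    return nuw

def test_strip_vowels():                      # no capsys needed any more
    assert strip_vowels("oren") == "rn"

if __name__ == "__main__":
    print(strip_vowels(input("Word? ")))      # IO stays at the top level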

Session scope with pytest-dependency

Referring to the sample code copied from pytest-dependency (with slight changes: the "tests" folder was removed), I am expecting "test_e" and "test_g" to pass; however, both are skipped. Kindly advise if I have done anything silly that stops the session scope from working properly.
Note:
pytest-dependency 0.5.1 is used.
Both modules are stored relative to the current working directory respectively.
test_mod_01.py

import pytest

@pytest.mark.dependency()
def test_a():
    pass

@pytest.mark.dependency()
@pytest.mark.xfail(reason="deliberate fail")
def test_b():
    assert False

@pytest.mark.dependency(depends=["test_a"])
def test_c():
    pass

class TestClass(object):
    @pytest.mark.dependency()
    def test_b(self):
        pass
test_mod_02.py

import pytest

@pytest.mark.dependency()
@pytest.mark.xfail(reason="deliberate fail")
def test_a():
    assert False

@pytest.mark.dependency(
    depends=["./test_mod_01.py::test_a", "./test_mod_01.py::test_c"],
    scope='session'
)
def test_e():
    pass

@pytest.mark.dependency(
    depends=["./test_mod_01.py::test_b", "./test_mod_02.py::test_e"],
    scope='session'
)
def test_f():
    pass

@pytest.mark.dependency(
    depends=["./test_mod_01.py::TestClass::test_b"],
    scope='session'
)
def test_g():
    pass
Unexpected output
=========================================================== test session starts ===========================================================
...
collected 4 items
test_mod_02.py xsss
[100%]
====================================================== 3 skipped, 1 xfailed in 0.38s ======================================================
Expected output
=========================================================== test session starts ===========================================================
...
collected 4 items
test_mod_02.py x.s.
[100%]
====================================================== 2 passed, 1 skipped, 1 xfailed in 0.38s ======================================================
The first problem is that pytest-dependency uses the full test node names if used in session scope. That means that you have to exactly match that string, which never contains relative paths like "." in your case.
Instead of using "./test_mod_01.py::test_c", you have to use something like "tests/test_mod_01.py::test_c", or "test_mod_01.py::test_c", depending where your test root is.
The second problem is that pytest-dependency only works if the tests that other tests depend on have run earlier in the same test session; e.g. in your case both the test_mod_01 and test_mod_02 modules have to be in the same test session. The test dependencies are looked up at runtime in the list of tests that have already been run.
Note that this also means that you cannot make tests in test_mod_01 depend on tests in test_mod_02, if you run the tests in the default order. You have to ensure that the tests are run in the correct order either by adapting the names accordingly, or by using some ordering plugin like pytest-order, which has an option (--order-dependencies) to order the tests if needed in such a case.
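Putting both points together, here is a sketch of how test_mod_02.py could reference the first module, assuming both files live directly under the rootdir and are collected in the same run, e.g. pytest test_mod_01.py test_mod_02.py (test_f is left out since it depends on the deliberately failing test_b):

# test_mod_02.py - dependency names without the "./" prefix
import pytest

@pytest.mark.dependency(
    depends=["test_mod_01.py::test_a", "test_mod_01.py::test_c"],
    scope='session'
)
def test_e():
    pass

@pytest.mark.dependency(
    depends=["test_mod_01.py::TestClass::test_b"],
    scope='session'
)
def test_g():
    pass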
Disclaimer: I'm the maintainer of pytest-order.

Assertion improvements

Currently in the company where I'm working, we have a framework to run tests. We want to integrate pytest to be able to write tests in the pytest way, but we need the old framework for all the things it's doing in the background.
The issue I'm facing is regarding assertions. Currently we have a bunch of assertion functions. All of them use a private method to write both to python logging and to a json file. I would like to get rid of them and use only "assert".
What I have done so far is to monkeypatch _pytest.assertion.rewrite.py with a custom module I created, where I changed the visit_Assert method and added this piece of code after line 873:
if isinstance(assert_.test, ast.Compare):
    test_value = BINOP_MAP[assert_.test.ops[0].__class__]
    test_type = "Comparison"
elif isinstance(assert_.test, ast.Call):
    test_value = str(assert_.test.func.id)
    test_type = "FunctionCall"
And then I call the same private method I mentioned above to save the results.
As you could guess I don't think it's the best way to do that: is there a better way?
I tried with the different hooks, but could not find the information I need (what is the comparison the assert is doing), especially because pytest is very good when the tests fail (it makes sense), but not so rich in information when the tests pass.
It depends a bit on which version of Pytest you're using, since the hooks are under pretty active development. But in any relatively recent version, you could implement the hook pytest_assertrepr_compare, which is called to report custom error messages on asserts that fail. This method can be defined in conftest.py, and pytest will happily use that definition.
A method like this:
def pytest_assertrepr_compare(config, op, left, right):
    print("Call legacy method here")
    return None
Would instruct pytest that no custom error messages are required (that's the return None part), but it would allow you to call arbitrary code on assert failures.
As an example, running pytest on a dummy test file, test_foo.py with contents:
def test_foo():
    assert 0 == 1, "No bueno"
Should give the following output on your terminal:
================================================= test session starts ==================================================
platform darwin -- Python 3.9.0, pytest-6.1.2, py-1.9.0, pluggy-0.13.1 -- /usr/local/opt/python@3.9/bin/python3.9
cachedir: .pytest_cache
rootdir: /Users/bnaecker/tmp
plugins: cov-2.10.1
collected 1 item
foo.py::test_foo FAILED [100%]
======================================================= FAILURES =======================================================
_______________________________________________________ test_foo _______________________________________________________
    def test_foo():
>       assert 0 == 1, "No bueno"
E       AssertionError: No bueno
E       assert 0 == 1
E         +0
E         -1

foo.py:6: AssertionError
------------------------------------------------- Captured stdout call -------------------------------------------------
Call legacy method here
=============================================== short test summary info ================================================
FAILED foo.py::test_foo - AssertionError: No bueno
================================================== 1 failed in 0.10s ===================================================
The captured stdout is a stand-in for calling your custom logging function. Also, note I'm using pytest-6.1.2, and it's not clear when this hook was included. Other similar hooks were introduced in 5.0, so it's plausible that anything >= 6.0 would be fine, but YMMV.
Rereading your question, it occurs that you might be more specifically asking about how to call your custom method when an assertion passes, rather than when it fails. In that case, the experimental method pytest_assertion_pass may be what you're looking for. The setup is the same, just implement that method instead in your conftest.py.
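For completeness, a minimal sketch of that variant; note that, per the pytest docs, this experimental hook only fires when enable_assertion_pass_hook = true is set under [pytest] in your ini file:

# conftest.py
# Requires in pytest.ini (or equivalent ini section):
#   [pytest]
#   enable_assertion_pass_hook = true
def pytest_assertion_pass(item, lineno, orig, expl):
    # orig is the assertion's source text, expl the rewritten explanation.
    # Call the legacy logging/JSON method here instead of printing.
    print(f"PASSED assert at {item.nodeid}:{lineno}: {orig} -> {expl}")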

Pytest - how to skip tests unless you declare an option/flag?

I have some unit tests, but I'm looking for a way to tag some specific unit tests to have them skipped unless you declare an option when you call the tests.
Example:
If I call pytest test_reports.py, I'd want a couple specific unit tests to not be run.
But if I call pytest -<something> test_reports, then I want all my tests to be run.
I looked into the @pytest.mark.skipif(condition) marker but couldn't quite figure it out, so I'm not sure if I'm on the right track or not. Any guidance here would be great!
The pytest documentation offers a nice example on how to skip tests marked "slow" by default and only run them with a --runslow option:
# conftest.py

import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--runslow", action="store_true", default=False, help="run slow tests"
    )

def pytest_configure(config):
    config.addinivalue_line("markers", "slow: mark test as slow to run")

def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        # --runslow given in cli: do not skip slow tests
        return
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
We can now mark our tests in the following way:
# test_module.py

from time import sleep

import pytest

def test_func_fast():
    sleep(0.1)

@pytest.mark.slow
def test_func_slow():
    sleep(10)
The test test_func_fast is always executed (calling e.g. pytest). The "slow" function test_func_slow, however, will only be executed when calling pytest --runslow.
We are using markers with addoption in conftest.py
test case:

@pytest.mark.no_cmd
def test_skip_if_no_command_line():
    assert True

conftest.py:

def pytest_addoption(parser):
    parser.addoption("--no_cmd", action="store_true",
                     help="run the tests only in case of that command line (marked with marker @no_cmd)")

def pytest_runtest_setup(item):
    if 'no_cmd' in item.keywords and not item.config.getoption("--no_cmd"):
        pytest.skip("need --no_cmd option to run this test")
pytest call:
py.test test_the_marker
-> test will be skipped
py.test test_the_marker --no_cmd
-> test will run
There are two ways to do that:
The first method is to tag the functions with a @pytest.mark decorator and run/skip the tagged functions alone using the -m option.
@pytest.mark.anytag
def test_calc_add():
    assert True

@pytest.mark.anytag
def test_calc_multiply():
    assert True

def test_calc_divide():
    assert True
Running the script as py.test -m anytag test_script.py will run only the first two functions.
Alternatively run as py.test -m "not anytag" test_script.py will run only the third function and skip the first two functions.
Here 'anytag' is the name of the tag; it can be anything!
The second way is to run the functions with a common substring in their name, using the -k option.
def test_calc_add():
    assert True

def test_calc_multiply():
    assert True

def test_divide():
    assert True
Running the script as py.test -k calc test_script.py will run the first two functions and skip the last one.
Note that 'calc' is the common substring present in both function names; any other function having 'calc' in its name (like 'calculate') will also be run.
Following the approach suggested in the pytest docs, and thus the answer of @Manu_CJ, is certainly the way to go here.
I'd simply like to show how this can be adapted to easily define multiple options:
The canonical example given by the pytest docs highlights well how to add a single marker through command line options. However, adapting it to add multiple markers might not be straightforward, as the three hooks pytest_addoption, pytest_configure and pytest_collection_modifyitems all need to be invoked to allow adding a single marker through a command line option.
This is one way you can adapt the canonical example, if you have several markers, like 'flag1', 'flag2', etc., that you want to be able to add via command line option:
# content of conftest.py
import pytest

# Create a dict of markers.
# The key is used as option, so --{key} will run all tests marked with key.
# The value must be a dict that specifies:
# 1. 'help': the command line help text
# 2. 'marker-descr': a description of the marker
# 3. 'skip-reason': displayed reason whenever a test with this marker is skipped.
optional_markers = {
    "flag1": {"help": "<Command line help text for flag1...>",
              "marker-descr": "<Description of the marker...>",
              "skip-reason": "Test only runs with the --{} option."},
    "flag2": {"help": "<Command line help text for flag2...>",
              "marker-descr": "<Description of the marker...>",
              "skip-reason": "Test only runs with the --{} option."},
    # add further markers here
}

def pytest_addoption(parser):
    for marker, info in optional_markers.items():
        parser.addoption("--{}".format(marker), action="store_true",
                         default=False, help=info['help'])

def pytest_configure(config):
    for marker, info in optional_markers.items():
        config.addinivalue_line("markers",
                                "{}: {}".format(marker, info['marker-descr']))

def pytest_collection_modifyitems(config, items):
    for marker, info in optional_markers.items():
        if not config.getoption("--{}".format(marker)):
            skip_test = pytest.mark.skip(
                reason=info['skip-reason'].format(marker)
            )
            for item in items:
                if marker in item.keywords:
                    item.add_marker(skip_test)
Now you can use the markers defined in optional_markers in your test modules:
# content of test_module.py
import pytest

@pytest.mark.flag1
def test_some_func():
    pass

@pytest.mark.flag2
def test_other_func():
    pass
If the use-case prohibits modifying either conftest.py and/or pytest.ini, here's how to use environment variables to directly take advantage of the skipif marker.
test_reports.py contents:
import os

import pytest

@pytest.mark.skipif(
    not os.environ.get("MY_SPECIAL_FLAG"),
    reason="MY_SPECIAL_FLAG not set in environment"
)
def test_skip_if_no_cli_tag():
    assert True

def test_always_run():
    assert True
In Windows:
> pytest -v test_reports.py --no-header
================== test session starts ===================
collected 2 items
test_reports.py::test_skip_if_no_cli_tag SKIPPED [ 50%]
test_reports.py::test_always_run PASSED [100%]
============== 1 passed, 1 skipped in 0.01s ==============
> cmd /c "set MY_SPECIAL_FLAG=1&pytest -v test_reports.py --no-header"
================== test session starts ===================
collected 2 items
test_reports.py::test_skip_if_no_cli_tag PASSED [ 50%]
test_reports.py::test_always_run PASSED [100%]
=================== 2 passed in 0.01s ====================
In Linux or other *NIX-like systems:
$ pytest -v test_reports.py --no-header
================= test session starts =================
collected 2 items
test_reports.py::test_skip_if_no_cli_tag SKIPPED [ 50%]
test_reports.py::test_always_run PASSED [100%]
============ 1 passed, 1 skipped in 0.00s =============
$ MY_SPECIAL_FLAG=1 pytest -v test_reports.py --no-header
================= test session starts =================
collected 2 items
test_reports.py::test_skip_if_no_cli_tag PASSED [ 50%]
test_reports.py::test_always_run PASSED [100%]
================== 2 passed in 0.00s ==================
MY_SPECIAL_FLAG can be whatever you wish based on your specific use-case and of course --no-header is just being used for this example.
Enjoy.

Improving pytest end report to show individual passed tests?

Been searching on this one for a while and been surprised to not find much. I'm currently working away with pytest and looking to improve the detail on passed tests.
The aim here is to report the individual tests that passed alongside the failures with the same level of detail. Using the example from the site for a failed test:
$ pytest
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items
test_sample.py F
======= FAILURES ========
_______ test_answer ________
    def test_answer():
>       assert func(3) == 5
E       assert 4 == 5
E        +  where 4 = func(3)

test_sample.py:5: AssertionError
======= 1 failed in 0.12 seconds ========
I'm looking for a way for the passed tests to be reported in a similar manner, possibly with custom text.
If not, a way to add custom text to the end report would suffice.
Is this possible or am I trying something here that's not correct?
Cheers,
R.
py.test -s shows the stdout of successful tests.
This is not like the failure output in the example above: a successful pass does not have any asserts firing, so there is no assertion detail to show.
You would simply see whatever your test writes to stdout when it passes.
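For example, with a passing test like the sketch below, a plain pytest run only shows a dot, while pytest -s also prints the message (the -rP report flag is another option if you prefer to keep capturing enabled and see the captured output of passed tests in the summary):

# test_show_pass.py - illustrative only
def test_passes_and_prints():
    value = 2 + 2
    print(f"computed value: {value}")   # visible with -s, or in the -rP summary
    assert value == 4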
