Pytest a function that prints to stdout

def f(n):
    nuw = n.casefold()
    for i in ["a", "e", "i", "o", "u"]:
        nuw = nuw.replace(i, "")
    print(nuw)

if __name__ == '__main__':
    ask = input("Word? ")
    f(ask)

Here is the solution according to the docs:
def f(n):
    nuw = n.casefold()
    for i in ["a", "e", "i", "o", "u"]:
        nuw = nuw.replace(i, "")
    print(nuw)

def test_my_func_f(capsys):  # or use "capfd" for fd-level capturing
    f("oren")
    captured = capsys.readouterr()
    assert captured.out == "rn\n"
When you run it, everything goes smoothly:
$ pytest --capture=sys main.py
================================================== test session starts ===================================================
platform linux -- Python 3.10.4, pytest-7.1.2, pluggy-1.0.0
rootdir: /home/oren/Downloads/LLL
collected 1 item
main.py . [100%]
=================================================== 1 passed in 0.01s ====================================================
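As an aside, capfd (mentioned in the comment above) captures at the OS file-descriptor level rather than at sys.stdout, so it also sees output written by subprocesses and C extensions. A minimal sketch of the difference (the test name here is mine):
import subprocess
import sys

def test_fd_level_capture(capfd):
    # the child process writes straight to fd 1; capsys would miss this
    subprocess.run([sys.executable, "-c", "print('rn')"])
    out, err = capfd.readouterr()
    assert out == "rn\n"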

While it's totally possible to test what's actually sent to stdout, it's not the recommended way.
Instead, design your functions so that they return their results, i.e. so that they have no side effects, as another commenter wrote. The advantage is that functions like these ("pure functions") are extremely easy to test: you give them input and check their output, and it is the same output for the same input every time.
And then, finally, to actually produce output from your program, do the I/O at the top level, e.g. in main() or some other top-level function/file. And sure, you can test this as well, if it's important. But I find that testing the lower-level functions (which is trivially easy to do) gives me enough confidence that my code works.
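For example, a minimal sketch of that refactor (the function and test names here are mine):
def strip_vowels(word):
    # pure function: same output for the same input, no printing
    result = word.casefold()
    for vowel in "aeiou":
        result = result.replace(vowel, "")
    return result

def test_strip_vowels():
    assert strip_vowels("oren") == "rn"

def main():
    # all the I/O lives at the top level
    print(strip_vowels(input("Word? ")))

if __name__ == '__main__':
    main()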


Assertion improvements

Currently in the company where I'm working, we have a framework to run tests. We want to integrate pytest to be able to write tests in the pytest way, but we need the old framework for all the things it's doing in the background.
The issue I'm facing concerns assertions. Currently we have a bunch of assertion functions, all of which use a private method to write both to Python logging and to a JSON file. I would like to get rid of them and use only "assert".
What I've done so far is to monkeypatch _pytest.assertion.rewrite with a custom module I created, where I changed the visit_Assert method and added this piece of code after line 873:
if isinstance(assert_.test, ast.Compare):
    test_value = BINOP_MAP[assert_.test.ops[0].__class__]
    test_type = "Comparison"
elif isinstance(assert_.test, ast.Call):
    test_value = str(assert_.test.func.id)
    test_type = "FunctionCall"
And then I call the same private method I mentioned above to save the results.
As you can guess, I don't think this is the best way to do it: is there a better way?
I tried the different hooks, but could not find the information I need (namely, which comparison the assert is doing), especially because pytest is very informative when tests fail (which makes sense), but not so rich in information when tests pass.
It depends a bit on which version of Pytest you're using, since the hooks are under pretty active development. But in any relatively recent version, you could implement the hook pytest_assertrepr_compare, which is called to report custom error messages on asserts that fail. This method can be defined in conftest.py, and pytest will happily use that definition.
A method like this:
def pytest_assertrepr_compare(config, op, left, right):
    print("Call legacy method here")
    return None
Would instruct pytest that no custom error messages are required (that's the return None part), but it would allow you to call arbitrary code on assert failures.
As an example, running pytest on a dummy test file, foo.py, with the contents:
def test_foo():
    assert 0 == 1, "No bueno"
Should give the following output on your terminal:
================================================= test session starts ==================================================
platform darwin -- Python 3.9.0, pytest-6.1.2, py-1.9.0, pluggy-0.13.1 -- /usr/local/opt/python@3.9/bin/python3.9
cachedir: .pytest_cache
rootdir: /Users/bnaecker/tmp
plugins: cov-2.10.1
collected 1 item
foo.py::test_foo FAILED [100%]
======================================================= FAILURES =======================================================
_______________________________________________________ test_foo _______________________________________________________
    def test_foo():
>       assert 0 == 1, "No bueno"
E       AssertionError: No bueno
E       assert 0 == 1
E         +0
E         -1
foo.py:6: AssertionError
------------------------------------------------- Captured stdout call -------------------------------------------------
Call legacy method here
=============================================== short test summary info ================================================
FAILED foo.py::test_foo - AssertionError: No bueno
================================================== 1 failed in 0.10s ===================================================
The captured stdout is a stand-in for calling your custom logging function. Also, note I'm using pytest-6.1.2, and it's not clear when this hook was introduced. Other similar hooks were introduced in 5.0, so it's plausible that anything >= 6.0 would be fine, but YMMV.
Rereading your question, it occurs to me that you might more specifically be asking how to call your custom method when an assertion passes, rather than when it fails. In that case, the experimental hook pytest_assertion_pass may be what you're looking for. The setup is the same; just implement that hook instead in your conftest.py.
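A minimal sketch of that variant, assuming a pytest version that ships the hook (it is experimental and must also be enabled in your ini file with enable_assertion_pass_hook = true):
# conftest.py
def pytest_assertion_pass(item, lineno, orig, expl):
    # orig is the source text of the assert, expl the rewritten explanation;
    # call the legacy logging method here instead of printing
    print(f"{item.nodeid}:{lineno}: passed: {orig}")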

Where does pytest.skip('Output string') get printed?

I have the following code in a python module called test_me.py:
@pytest.fixture()
def test_me():
    if condition:
        pytest.skip('Test Message')

def test_func(test_me):
    assert ...
The output looks like:
tests/folder/test_me.py::test_me SKIPPED
Question: Where does 'Test Message' get printed or output? I can't see or find it anywhere.
According to the Pytest documentation, you can use the -rs flag to show it.
$ pytest -rs
======================== test session starts ========================
platform darwin -- Python 3.7.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: ...
collected 1 item
test_sample.py s [100%]
====================== short test summary info ======================
SKIPPED [1] test_sample.py:5: Test Message
======================== 1 skipped in 0.02s =========================
import pytest

@pytest.fixture()
def test_me():
    pytest.skip('Test Message')

def test_1(test_me):
    pass
Not sure if this is platform-specific, or whether it works with the OP's configuration, since the OP didn't provide any specific info.
I don't believe there is any built-in way of printing these messages during the test runs. An alternative is to create your own skip function that does this:
def skip(message):
    print(message)  # you can add some additional info around the message
    pytest.skip()

@pytest.fixture()
def test_me():
    if condition:
        skip('Test Message')
You could probably turn this custom function into a decorator too.
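For example, a sketch of that decorator idea, assuming your pytest version exposes the skip exception class as pytest.skip.Exception (recent versions do):
import functools
import pytest

def announce_skips(func):
    # print the skip reason, then let the exception propagate as usual
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except pytest.skip.Exception as exc:
            print(f"skipped: {exc}")
            raise
    return wrapper

@pytest.fixture()
@announce_skips
def test_me():
    pytest.skip('Test Message')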
A cursory look at the pytest source shows that the message becomes the message of the exception raised by calling skip() itself. Its behavior, while not explicitly documented, is defined in outcomes.py:
def skip(msg: str = "", *, allow_module_level: bool = False) -> "NoReturn":
    """Skip an executing test with the given message.

    This function should be called only during testing (setup, call or teardown) or
    during collection by using the ``allow_module_level`` flag. This function can
    be called in doctests as well.

    :param bool allow_module_level:
        Allows this function to be called at module level, skipping the rest
        of the module. Defaults to False.

    .. note::
        It is better to use the :ref:`pytest.mark.skipif ref` marker when
        possible to declare a test to be skipped under certain conditions
        like mismatching platforms or dependencies.
        Similarly, use the ``# doctest: +SKIP`` directive (see `doctest.SKIP
        <https://docs.python.org/3/library/doctest.html#doctest.SKIP>`_)
        to skip a doctest statically.
    """
    __tracebackhide__ = True
    raise Skipped(msg=msg, allow_module_level=allow_module_level)
Eventually, this exception is bubbled up through several layers; Skipped is ultimately a BaseException subclass. As such, you should be able to access the message itself by trapping the associated exception and reading its message (see the relevant SO thread).
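For instance, a quick sketch that traps the exception and reads its message, assuming pytest.skip.Exception points at the Skipped class shown above:
import pytest

def test_read_skip_message():
    # trap the exception raised by pytest.skip() and read its message
    with pytest.raises(pytest.skip.Exception) as excinfo:
        pytest.skip('Test Message')
    assert str(excinfo.value) == 'Test Message'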

Separate test cases per input file?

Most test frameworks assume that "1 test = 1 Python method/function", and consider a test passed when the function executes without raising an assertion error.
I'm testing a compiler-like program (a program that reads *.foo files and processes their contents), for which I want to execute the same test on many input (*.foo) files. In other words, my test looks like:
class Test(unittest.TestCase):
    def one_file(self, filename):
        ...  # do the actual test

    def list_testcases(self):
        ...  # essentially os.listdir('tests/') and filter *.foo files

    def test_all(self):
        for f in self.list_testcases():
            self.one_file(f)
My current code uses unittest from Python's standard library, i.e. one_file uses self.assert...(...) statements to check whether the test passes.
This works, in the sense that I do get a program which succeeds/fails when my code is OK/buggy, but I'm losing a lot of the advantages of the testing framework:
I don't get relevant reporting like "X failures out of Y tests", nor the list of passed/failed tests. (I'm planning to use such a system not only to test my own development but also to grade students' code as a teacher, so reporting is important to me.)
I don't get test independence. The second test runs in the environment left by the first, and so on. The first failure stops the test suite: test cases coming after a failure are not run at all.
I get the feeling that I'm abusing my test framework: there's only one test function, so unittest's automatic test discovery sounds like overkill, for example. The same code could (should?) be written in plain Python with a basic assert.
An obvious alternative is to change my code to something like
class Test(unittest.TestCase):
    def one_file(self, filename):
        ...  # do the actual test

    def test_file1(self):
        self.one_file("first-testcase.foo")

    def test_file2(self):
        self.one_file("second-testcase.foo")
Then I get all the advantages of unittest back, but:
It's a lot more code to write.
It's easy to "forget" a testcase, i.e. create a test file in
tests/ and forget to add it to the Python test.
I can imagine a solution where I would generate one method per test case dynamically (along the lines of setattr(self, 'test_file' + str(n), ...)), to produce the code of the second solution without having to write it by hand. But that sounds like overkill for a use case which doesn't seem so complex.
How could I get the best of both, i.e.
automatic testcase discovery (list tests/*.foo files), test
independence and proper reporting?
If you can use pytest as your test runner, then this is actually pretty straightforward using the parametrize decorator:
import pytest, glob

all_files = glob.glob('some/path/*.foo')

@pytest.mark.parametrize('filename', all_files)
def test_one_file(filename):
    ...  # do the actual test
This will also automatically name the tests in a useful way, so that you can see which files have failed:
$ py.test
================================== test session starts ===================================
platform darwin -- Python 3.6.1, pytest-3.1.3, py-1.4.34, pluggy-0.4.0
[...]
======================================== FAILURES ========================================
_____________________________ test_one_file[some/path/a.foo] _____________________________
filename = 'some/path/a.foo'

    @pytest.mark.parametrize('filename', all_files)
    def test_one_file(filename):
>       assert False
E       assert False

test_it.py:7: AssertionError
_____________________________ test_one_file[some/path/b.foo] _____________________________
filename = 'some/path/b.foo'

    @pytest.mark.parametrize('filename', all_files)
    def test_one_file(filename):
[...]
Here is a solution, although it might be considered not very beautiful... The idea is to dynamically create new functions, add them to the test class, and use the function names to carry the information to pass (e.g., filenames):
# imports
import inspect
import unittest

# test class
class Test(unittest.TestCase):
    # example test case
    def test_default(self):
        print('test_default')
        self.assertEqual(2, 2)

# template string for creating new functions
func_string = """def test(cls):
    # get the function name and use it to pass information
    filename = inspect.stack()[0][3]
    # print the function name for demonstration purposes
    print(filename)
    # dummy check for demonstration purposes
    cls.assertEqual(type(filename), str)"""

# add a new test for each item in the list
for f in ['test_bla', 'test_blu', 'test_bli']:
    # set the name of the new function
    name = func_string.replace('test', f)
    # create the new function
    exec(name)
    # add the new function to the test class
    setattr(Test, f, eval(f))

if __name__ == "__main__":
    unittest.main()
This correctly runs all four tests and returns:
test_bla
test_bli
test_blu
test_default
Ran 4 tests in 0.040s
OK
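A closure-based variant of the same idea avoids exec/eval entirely: build each test in a factory function so that the filename is captured correctly, then attach it to the class with setattr. A minimal sketch (the helper names are mine):
import glob
import unittest

class Test(unittest.TestCase):
    def one_file(self, filename):
        self.assertTrue(filename.endswith('.foo'))  # the actual check goes here

def _make_test(path):
    # the factory captures `path`, so each generated test sees its own file
    def test(self):
        self.one_file(path)
    return test

for i, path in enumerate(sorted(glob.glob('tests/*.foo'))):
    setattr(Test, 'test_file_%d' % i, _make_test(path))

if __name__ == "__main__":
    unittest.main()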

Use docstrings to list tests in py.test

Here is a simple test file:
# test_single.py
def test_addition():
    "Two plus two is still four"
    assert 2 + 2 == 4

def test_addition2():
    "One plus one is still two"
    assert 1 + 1 == 2
The default output of py.test looks like:
$ py.test test_single.py -v
[...]
test_single.py::test_addition PASSED
test_single.py::test_addition2 PASSED
I would like to have
Two plus two is still four PASSED
One plus one is still two PASSED
i.e. use the docstrings as descriptions for the tests.
I tried to use a customization in a conftest.py file:
import pytest

@pytest.mark.tryfirst
def pytest_runtest_makereport(item, call, __multicall__):
    # execute all other hooks to obtain the report object
    rep = __multicall__.execute()
    if rep.when == "call":
        extra = item._obj.__doc__.strip()
        rep.nodeid = extra
    return rep
that is close, but it repeats the filename on every line:
$ py.test test_single.py
======================================================================================== test session starts =========================================================================================
platform darwin -- Python 2.7.7 -- py-1.4.26 -- pytest-2.6.4
plugins: greendots, osxnotify, pycharm
collected 2 items
test_single.py
And two plus two is still four .
test_single.py
And one plus one is still two .
====================================================================================== 2 passed in 0.11 seconds ======================================================================================
How can I avoid the lines with test_single.py in the output, or maybe print it only once?
Looking into the source of py.test and some of its plugins did not help.
I am aware of the pytest-spec plugin, but that uses the function's name as a description. I don't want to write def test_two_plus_two_is_four().
To expand on my comment on @michael-wan's answer: to achieve something similar to the spec plugin, put the following into conftest.py:
def pytest_itemcollected(item):
    par = item.parent.obj
    node = item.obj
    pref = par.__doc__.strip() if par.__doc__ else par.__class__.__name__
    suf = node.__doc__.strip() if node.__doc__ else node.__name__
    if pref or suf:
        item._nodeid = ' '.join((pref, suf))
and the pytest output of
class TestSomething:
    """Something"""
    def test_ok(self):
        """should be ok"""
        pass
will show "Something should be ok" instead of the default node id.
If you omit docstrings, the class/function names will be used.
I was missing Ruby's rspec in Python. So, based on the plugin pytest-testdox, I have written a similar one which takes docstrings as the report message. You can check it out: pytest-pspec.
For a plugin that (I think) does what you want out of the box, check out pytest-testdox.
It provides a friendly formatted list of each test function name, with test_ stripped and underscores replaced with spaces, so that the test names are readable. It also breaks the sections up by test file.
@Matthias Berth, you can try using pytest_itemcollected:
def pytest_itemcollected(item):
    """ we just collected a test item. """
    item.setNodeid('' if item._obj.__doc__ is None else item._obj.__doc__.strip())
and modify pydir/Lib/site-packages/pytest-2.9.1-py2.7.egg/_pytest/unittest.py, adding the following method to the TestCaseFunction class:
def setNodeid(self, value):
    self._nodeid = value
and the result will be :
platform win32 -- Python 2.7.10, pytest-2.9.1, py-1.4.31, pluggy-0.3.1 -- D:\Python27\python.exe
cachedir: .cache
rootdir: E:\workspace\satp2\atest\testcase\Search\grp_sp, inifile:
plugins: html-1.8.0, pep8-1.0.6
collecting 0 itemsNone
collected 2 items
Two plus two is still four <- sut_amap3.py PASSED
One plus one is still two <- sut_amap3.py PASSED
By the way, when you are using pytest-html, you can use the pytest_runtest_makereport function you wrote, and it will generate the report with the names you customized.
Hope this helps.
I wanted to do the same, but in a simpler way: preferably without an external plugin doing more than needed, and also without changing the nodeid, as that could break other things.
I came up with the following solution:
test_one.py
import logging

logger = logging.getLogger(__name__)

def test_one():
    """ The First test does something """
    logger.info("One")

def test_two():
    """ Now this Second test tests other things """
    logger.info("Two")

def test_third():
    """ Third test is basically checking crazy stuff """
    logger.info("Three")
conftest.py
import pytest
import inspect

@pytest.mark.trylast
def pytest_configure(config):
    terminal_reporter = config.pluginmanager.getplugin('terminalreporter')
    config.pluginmanager.register(TestDescriptionPlugin(terminal_reporter), 'testdescription')

class TestDescriptionPlugin:
    def __init__(self, terminal_reporter):
        self.terminal_reporter = terminal_reporter
        self.desc = None

    def pytest_runtest_protocol(self, item):
        self.desc = inspect.getdoc(item.obj)

    @pytest.hookimpl(hookwrapper=True, tryfirst=True)
    def pytest_runtest_logstart(self, nodeid, location):
        if self.terminal_reporter.verbosity == 0:
            yield
        else:
            self.terminal_reporter.write('\n')
            yield
            if self.desc:
                self.terminal_reporter.write(f'\n{self.desc} ')
Running with --verbose
============================= test session starts =============================
platform win32 -- Python 3.8.2, pytest-5.4.1.dev62+g2d9dac95e, py-1.8.1, pluggy-0.13.1 -- C:\Users\Victor\PycharmProjects\pytest\venv\Scripts\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\Victor\PycharmProjects\pytest, inifile: tox.ini
collecting ... collected 3 items
test_one.py::test_one
The First test does something PASSED [ 33%]
test_one.py::test_two
Now this Second test tests other things PASSED [ 66%]
test_one.py::test_third
Third test is basically checking crazy stuff PASSED [100%]
============================== 3 passed in 0.07s ==============================
Running with --log-cli-level=INFO
============================= test session starts =============================
platform win32 -- Python 3.8.2, pytest-5.4.1.dev62+g2d9dac95e, py-1.8.1, pluggy-0.13.1
rootdir: C:\Users\Victor\PycharmProjects\pytest, inifile: tox.ini
collected 3 items
test_one.py::test_one
The First test does something
-------------------------------- live log call --------------------------------
INFO test_one:test_one.py:7 One
PASSED [ 33%]
test_one.py::test_two
Now this Second test tests other things
-------------------------------- live log call --------------------------------
INFO test_one:test_one.py:11 Two
PASSED [ 66%]
test_one.py::test_third
Third test is basically checking crazy stuff
-------------------------------- live log call --------------------------------
INFO test_one:test_one.py:15 Three
PASSED [100%]
============================== 3 passed in 0.07s ==============================
The plugin in conftest.py is probably simple enough for anyone to customize according to their own needs.

py.test 2.3.5 does not run finalizer after fixture failure

I was trying py.test for its claimed better support than unittest for module and session fixtures, but I stumbled on behavior that is, at least to me, bizarre.
Consider the following code (don't tell me it's dumb, I know it; it's just a quick and dirty hack to replicate the behavior). I'm running Python 2.7.5 x86 on Windows 7.
import os
import shutil
import pytest

test_work_dir = 'test-work-dir'
tmp = os.environ['tmp']
count = 0

@pytest.fixture(scope='module')
def work_path(request):
    global count
    count += 1
    print('test: ' + str(count))
    test_work_path = os.path.join(tmp, test_work_dir)

    def cleanup():
        print('cleanup: ' + str(count))
        if os.path.isdir(test_work_path):
            shutil.rmtree(test_work_path)

    request.addfinalizer(cleanup)
    os.makedirs(test_work_path)
    return test_work_path

def test_1(work_path):
    assert os.path.isdir(work_path)

def test_2(work_path):
    assert os.path.isdir(work_path)

def test_3(work_path):
    assert os.path.isdir(work_path)

if __name__ == "__main__":
    pytest.main(['-s', '-v', __file__])
If test_work_dir does not exist, then I obtain the expected behavior:
platform win32 -- Python 2.7.5 -- pytest-2.3.5 -- C:\Programs\Python\27-envs\common\Scripts\python.exe
collecting ... collected 4 items
py_test.py: [doctest] PASSED
py_test.py:34: test_1 test: 1
cleanup: 1
PASSED
py_test.py:38: test_2 PASSED
py_test.py:42: test_3 PASSEDcleanup: 1
The fixture is called once for the module and cleanup is called once at the end of the tests.
Then, if test_work_dir exists, I would expect something similar to unittest: the fixture is called once, it fails with OSError, tests that need it are not run, cleanup is called once, and world peace is established again.
But... here's what I see:
py_test.py: [doctest] PASSED
py_test.py:34: test_1 test: 1
ERROR
py_test.py:38: test_2 test: 2
ERROR
py_test.py:42: test_3 test: 3
ERROR
Despite the failure of the fixture, all the tests are run, the fixture that is supposed to be scope='module' is called once for each test, and the finalizer is never called!
I know that exceptions in fixtures are not good policy, but the real fixtures are complex and I'd rather avoid filling them with try blocks if I can count on the execution of every finalizer registered up to the point of failure. I don't want to go hunting for test artifacts after a failure.
Moreover, trying to run tests when not all of the fixtures they need are in place makes no sense and can at best make them erratic.
Is this the intended behavior of py.test in case of a failure in a fixture?
Thanks, Gabriele
Three issues here:
1. You should register the finalizer after you have performed the action that you want undone. So first call makedirs() and then register the finalizer (see the sketch after this list). That's a general issue with fixtures, because teardown code can usually only run if something was successfully created.
2. pytest-2.3.5 has a bug in that it will not call finalizers if the fixture function fails. I've just fixed it, and you can install the 2.4.0.dev7 (or higher) version with pip install -i http://pypi.testrun.org -U pytest. It ensures the fixture finalizers are called even if the fixture function partially fails. It's actually a bit surprising this hasn't popped up earlier, but I guess people, including me, usually just go ahead and fix the fixtures instead of diving into what's happening specifically. So thanks for posting here!
3. If a module-scoped fixture function fails, the next test needing that fixture will still trigger execution of the fixture function again, as it might have been an intermittent failure. It stands to reason that pytest should memorize the failure for the given scope and not retry execution. If you think so, please open an issue, linking to this stackoverflow discussion.
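For point 1, a sketch of the reordered fixture, reusing the names from the question's code:
@pytest.fixture(scope='module')
def work_path(request):
    test_work_path = os.path.join(tmp, test_work_dir)
    os.makedirs(test_work_path)  # may raise; no finalizer registered yet

    def cleanup():
        shutil.rmtree(test_work_path)

    # registered only after the directory actually exists
    request.addfinalizer(cleanup)
    return test_work_path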
thanks, holger
