Assertion improvements - python

At the company where I'm working, we have a framework to run tests. We want to integrate pytest so we can write tests the pytest way, but we still need the old framework for everything it does in the background.
The issue I'm facing is with assertions. We currently have a bunch of assertion functions, all of which use a private method to write both to Python logging and to a JSON file. I would like to get rid of them and use only "assert".
What I've done so far is monkeypatch _pytest.assertion.rewrite with a custom module I created, where I changed the visit_Assert method and added this piece of code after line 873:
if isinstance(assert_.test, ast.Compare):
    test_value = BINOP_MAP[assert_.test.ops[0].__class__]
    test_type = "Comparison"
elif isinstance(assert_.test, ast.Call):
    test_value = str(assert_.test.func.id)
    test_type = "FunctionCall"
And then I call the same private method I mentioned above to save the results.
As you can guess, I don't think this is the best way to do it: is there a better way?
I tried the various hooks, but could not find the information I need (what comparison the assert is doing), especially because pytest is very informative when tests fail (which makes sense), but not so rich in information when tests pass.

It depends a bit on which version of Pytest you're using, since the hooks are under pretty active development. But in any relatively recent version, you could implement the hook pytest_assertrepr_compare, which is called to report custom error messages on asserts that fail. This method can be defined in conftest.py, and pytest will happily use that definition.
A method like this:
def pytest_assertrepr_compare(config, op, left, right):
    print("Call legacy method here")
    return None
Would instruct pytest that no custom error messages are required (that's the return None part), but it would allow you to call arbitrary code on assert failures.
As an example, running pytest on a dummy test file, test_foo.py with contents:
def test_foo():
    assert 0 == 1, "No bueno"
Should give the following output on your terminal:
================================================= test session starts ==================================================
platform darwin -- Python 3.9.0, pytest-6.1.2, py-1.9.0, pluggy-0.13.1 -- /usr/local/opt/python@3.9/bin/python3.9
cachedir: .pytest_cache
rootdir: /Users/bnaecker/tmp
plugins: cov-2.10.1
collected 1 item
foo.py::test_foo FAILED [100%]
======================================================= FAILURES =======================================================
_______________________________________________________ test_foo _______________________________________________________
    def test_foo():
>       assert 0 == 1, "No bueno"
E       AssertionError: No bueno
E       assert 0 == 1
E         +0
E         -1

foo.py:6: AssertionError
------------------------------------------------- Captured stdout call -------------------------------------------------
Call legacy method here
=============================================== short test summary info ================================================
FAILED foo.py::test_foo - AssertionError: No bueno
================================================== 1 failed in 0.10s ===================================================
The captured stdout is a stand-in for calling your custom logging function. Also, note I'm using pytest-6.1.2, and it's not clear when this hook was introduced. Other similar hooks were introduced in 5.0, so it's plausible that anything >=6.0 would be fine, but YMMV.
Rereading your question, it occurs to me that you might more specifically be asking how to call your custom method when an assertion passes, rather than when it fails. In that case, the experimental hook pytest_assertion_pass may be what you're looking for. The setup is the same: just implement that hook instead in your conftest.py.
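For completeness, a minimal conftest.py sketch of that route could look something like the following. The write_to_legacy_log helper is a hypothetical stand-in for the private logging method your framework already has, and note that pytest only calls this hook if enable_assertion_pass_hook = true is set in your ini file (you may also need to clear stale .pyc files once):

# conftest.py -- sketch only, assuming enable_assertion_pass_hook = true
# is set under [pytest] in pytest.ini.

def write_to_legacy_log(record):
    # Hypothetical stand-in for the private method that writes to
    # Python logging and the JSON file in the existing framework.
    print(record)

def pytest_assertion_pass(item, lineno, orig, expl):
    # Called for every assertion that passes: orig is the original source
    # of the assert statement, expl is pytest's explanation string.
    write_to_legacy_log({
        "test": item.nodeid,
        "line": lineno,
        "assertion": orig,
        "explanation": expl,
    })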

Related

Pytest a function that prints to stdout

def f(n):
    nuw = n.casefold()
    for i in ["a", "e", "i", "o", "u"]:
        nuw = nuw.replace(i, "")
    print(nuw)

if __name__ == '__main__':
    ask = input("Word? ")
    f(ask)
Here is the solution according to the docs:
def f(n):
    nuw = n.casefold()
    for i in ["a", "e", "i", "o", "u"]:
        nuw = nuw.replace(i, "")
    print(nuw)

def test_my_func_f(capsys):  # or use "capfd" for fd-level
    f("oren")
    captured = capsys.readouterr()
    assert captured.out == "rn\n"
When you run it, it goes smoothly:
$ pytest --capture=sys main.py
================================================== test session starts ===================================================
platform linux -- Python 3.10.4, pytest-7.1.2, pluggy-1.0.0
rootdir: /home/oren/Downloads/LLL
collected 1 item
main.py . [100%]
=================================================== 1 passed in 0.01s ====================================================
While it's totally possible to test what's actually sent to stdout, it's not the recommended way.
Instead, design your functions so that they return their results, i.e. so they have no side effects, as another commenter wrote. The advantage is that functions like these ("pure functions") are extremely easy to test: they simply give you output, and it should be the same output for the same input every time.
Then, finally, to actually produce output from your program, do the IO at the top level, e.g. in main() or some other top-level function/file. And sure, you can test this as well, if it's important. But I find that testing the lower-level functions (which is extremely easy to do) gives me enough confidence that my code works.
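To make that concrete, here is a small sketch of that refactor. strip_vowels is just a renamed, return-based version of the f above (the name is made up for illustration):

def strip_vowels(n):
    nuw = n.casefold()
    for i in ["a", "e", "i", "o", "u"]:
        nuw = nuw.replace(i, "")
    return nuw  # return the result instead of printing it

def test_strip_vowels():
    # No capsys needed: assert directly on the return value.
    assert strip_vowels("oren") == "rn"

if __name__ == '__main__':
    ask = input("Word? ")
    print(strip_vowels(ask))  # the IO happens only at the top level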

Where does pytest.skip('Output string') get printed?

I have the following code in a Python module called test_me.py:
import pytest

@pytest.fixture()
def test_me():
    if condition:
        pytest.skip('Test Message')

def test_func(test_me):
    assert ...
The output looks like:
tests/folder/test_me.py::test_me SKIPPED
Question: Where does 'Test Message' get printed or output? I can't see or find it anywhere.
According to the Pytest documentation, you can use the -rs flag to show it.
$ pytest -rs
======================== test session starts ========================
platform darwin -- Python 3.7.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: ...
collected 1 item
test_sample.py s [100%]
====================== short test summary info ======================
SKIPPED [1] test_sample.py:5: Test Message
======================== 1 skipped in 0.02s =========================
import pytest

@pytest.fixture()
def test_me():
    pytest.skip('Test Message')

def test_1(test_me):
    pass
Not sure if this is platform-specific, or if it works with the OP's configuration, since the OP didn't provide any specific info.
I don't believe there is any built-in way of printing these messages during the test runs. An alternative is to create your own skip function that does this:
import pytest

def skip(message):
    print(message)  # you can add some additional info around the message
    pytest.skip()

@pytest.fixture()
def test_me():
    if condition:
        skip('Test Message')
You could probably turn this custom function into a decorator too.
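A rough sketch of what that decorator could look like (skip_if is a made-up name; adjust to taste):

import functools
import pytest

def skip_if(condition, message):
    # Skip the wrapped test when `condition` is truthy, printing the
    # message first so it shows up in captured stdout.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if condition:
                print(message)
                pytest.skip(message)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@skip_if(condition=True, message='Test Message')
def test_something():
    assert 1 + 1 == 2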
A cursory look at the pytest source shows that the message gets wrapped up as the message of the exception that is raised by calling skip() itself. Its behavior, while not explicitly documented, is defined in outcomes.py:
def skip(msg: str = "", *, allow_module_level: bool = False) -> "NoReturn":
    """Skip an executing test with the given message.

    This function should be called only during testing (setup, call or teardown) or
    during collection by using the ``allow_module_level`` flag. This function can
    be called in doctests as well.

    :param bool allow_module_level:
        Allows this function to be called at module level, skipping the rest
        of the module. Defaults to False.

    .. note::
        It is better to use the :ref:`pytest.mark.skipif ref` marker when
        possible to declare a test to be skipped under certain conditions
        like mismatching platforms or dependencies.
        Similarly, use the ``# doctest: +SKIP`` directive (see `doctest.SKIP
        <https://docs.python.org/3/library/doctest.html#doctest.SKIP>`_)
        to skip a doctest statically.
    """
    __tracebackhide__ = True
    raise Skipped(msg=msg, allow_module_level=allow_module_level)
Eventually, this exception is bubbled up through several layers and finally raised as a BaseException. As such, you should be able to access the message itself by trapping the associated exception and reading its message (relevant SO thread).
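If you want the reason printed while the tests run, another option is a small hook wrapper in conftest.py. This is only a sketch: it assumes the skip reason ends up in report.longrepr the way current pytest formats it, which is not a guaranteed API:

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.skipped:
        # For skips, longrepr is typically a (path, lineno, reason) tuple.
        longrepr = report.longrepr
        reason = longrepr[-1] if isinstance(longrepr, tuple) else longrepr
        print('{} skipped: {}'.format(item.nodeid, reason))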

Does pytest have anything like google test's non-fatal EXPECT_* behavior?

I'm more familiar with the Google Test framework and know about the two assertion flavors it supports, ASSERT_* vs EXPECT_*, which are the fatal and non-fatal assert modes.
From the documentation:
The assertions come in pairs that test the same thing but have different effects on the current function. ASSERT_* versions generate fatal failures when they fail, and abort the current function. EXPECT_* versions generate nonfatal failures, which don't abort the current function. Usually EXPECT_* are preferred, as they allow more than one failure to be reported in a test. However, you should use ASSERT_* if it doesn't make sense to continue when the assertion in question fails.
Question: does pytest also have a non fatal assert flavor or mode I can enable?
It's nice to allow a full range of tests to maximally execute to get the richest failure history rather than abort at the first failure and potentially hide subsequent failures that have to be discovered piecewise by running multiple instances of the test application.
I use pytest-assume for non-fatal assertions. It does the job pretty well.
Installation
As usual,
$ pip install pytest-assume
Usage example
import pytest


def test_spam():
    pytest.assume(True)
    pytest.assume(False)

    a, b = True, False
    pytest.assume(a == b)

    pytest.assume(1 == 0)
    pytest.assume(1 < 0)
    pytest.assume('')
    pytest.assume([])
    pytest.assume({})
If you feel writing pytest.assume is a bit too much, just alias it. Note that assume is an attribute the plugin adds to the pytest module, not a submodule, so import pytest.assume as expect is not valid; use a from-import instead:
from pytest import assume as expect

def test_spam():
    expect(True)
    ...
Running the above test yields:
$ pytest -v
============================= test session starts ==============================
platform linux -- Python 3.6.5, pytest-3.6.0, py-1.5.3, pluggy-0.6.0 -- /data/gentoo64-prefix/u0_a82/projects/stackoverflow/so-50630845
cachedir: .pytest_cache
rootdir: /data/gentoo64-prefix/u0_a82/projects/stackoverflow/so-50630845, inifile:
plugins: assume-1.2
collecting ... collected 1 item
test_spam.py::test_spam FAILED [100%]
=================================== FAILURES ===================================
__________________________________ test_spam ___________________________________
test_spam.py:6: AssumptionFailure
pytest.assume(False)
test_spam.py:9: AssumptionFailure
pytest.assume(a == b)
test_spam.py:11: AssumptionFailure
pytest.assume(1 == 0)
test_spam.py:12: AssumptionFailure
pytest.assume(1 < 0)
test_spam.py:13: AssumptionFailure
pytest.assume('')
test_spam.py:14: AssumptionFailure
pytest.assume([])
test_spam.py:15: AssumptionFailure
pytest.assume({})
------------------------------------------------------------
Failed Assumptions: 7
=========================== 1 failed in 0.18 seconds ===========================
No, there is no feature like that in pytest. The most popular approach is to use regular assert statements, which fail the test immediately if the expression is falsey.
It's nice to allow a full range of tests to maximally execute to get the richest failure history rather than abort at the first failure and potentially hide subsequent failures that have to be discovered piecewise by running multiple instances of the test application.
Opinions differ on whether this is nice or not. In the open source Python community, at least, the popular approach is: every potential "subsequent failure that is discovered piecewise" would be written in its own separate test. More tests, smaller tests, that (ideally) only assert on one thing.
You could easily recreate the EXPECT_* thing by appending to a list of errors and then asserting the list is empty at the end of the test, but there is no support directly in pytest for such a feature.
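For what it's worth, a sketch of that pattern with plain pytest, no plugin involved:

def test_expect_style():
    errors = []

    # Record a failure instead of aborting, then assert once at the end.
    if not (1 == 0):
        errors.append("1 == 0 failed")
    if not (1 < 0):
        errors.append("1 < 0 failed")

    # A single assert reports every collected failure at once.
    assert not errors, "\n".join(errors)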

Make pytest include functional tests in its count

I'm starting a new project and trying to follow strict Test-Driven Development. I have a basic setup in place and working and am using pytest to run tests.
Tests are discovered and run correctly. They fail when they should and pass when they should. But in pytest's results, the number of tests performed is zero. This isn't a big deal, but I would like the visual feedback that confirms the test file is being run.
Failing:
============================= test session starts =============================
...
collected 0 items / 1 errors
=================================== ERRORS ====================================
___________ ERROR collecting tests/functional_tests/test_package.py ___________
...
=========================== 1 error in 0.05 seconds ===========================
Passing:
============================= test session starts =============================
...
collected 0 items
======================== no tests ran in 0.03 seconds =========================
For the record, my first functional test is just importing the package.
# Can we import the package?
import packagename
assert packagename is not None
The slightly redundant assert was my attempt at getting pytest to count this as "a test", since I know it rewrites assert to be more informative.
The test is run correctly, but the test session doesn't count this as being a test. I don't much care how it counts the tests (the whole file is one, each assert is one, whatever), but I would like it to do so!
Put it in a function.
Okay, after playing around some more, I may have solved my own problem. This, for instance, works (the whole function gets counted as one test).
def test_import():
    # Can we import the package?
    import packagename
I'll leave the question open to see if anyone has a better answer.

py.test 2.3.5 does not run finalizer after fixture failure

I was trying py.test for its claimed better support than unittest for module and session fixtures, but I stumbled on a behavior that is, at least for me, bizarre.
Consider the following code (don't tell me it's dumb, I know it is; it's just a quick and dirty hack to replicate the behavior). I'm running Python 2.7.5 x86 on Windows 7.
import os
import shutil
import pytest

test_work_dir = 'test-work-dir'
tmp = os.environ['tmp']
count = 0

@pytest.fixture(scope='module')
def work_path(request):
    global count
    count += 1
    print('test: ' + str(count))
    test_work_path = os.path.join(tmp, test_work_dir)

    def cleanup():
        print('cleanup: ' + str(count))
        if os.path.isdir(test_work_path):
            shutil.rmtree(test_work_path)

    request.addfinalizer(cleanup)
    os.makedirs(test_work_path)
    return test_work_path

def test_1(work_path):
    assert os.path.isdir(work_path)

def test_2(work_path):
    assert os.path.isdir(work_path)

def test_3(work_path):
    assert os.path.isdir(work_path)

if __name__ == "__main__":
    pytest.main(['-s', '-v', __file__])
If test_work_dir does not exist, then I obtain the expected behavior:
platform win32 -- Python 2.7.5 -- pytest-2.3.5 -- C:\Programs\Python\27-envs\common\Scripts\python.exe
collecting ... collected 4 items
py_test.py: [doctest] PASSED
py_test.py:34: test_1 test: 1
cleanup: 1
PASSED
py_test.py:38: test_2 PASSED
py_test.py:42: test_3 PASSEDcleanup: 1
The fixture is called once for the module and cleanup is called once at the end of the tests.
Then, if test_work_dir already exists, I would expect something similar to unittest: the fixture is called once, it fails with OSError, the tests that need it are not run, cleanup is called once, and world peace is established again.
But... here's what I see:
py_test.py: [doctest] PASSED
py_test.py:34: test_1 test: 1
ERROR
py_test.py:38: test_2 test: 2
ERROR
py_test.py:42: test_3 test: 3
ERROR
Despite the failure of the fixture, all the tests are run, the fixture that is supposed to be scope='module' is called once for each test, and the finalizer is never called!
I know that exceptions in fixtures are not good policy, but the real fixtures are complex and I'd rather avoid filling them with try blocks if I can count on the execution of every finalizer registered up to the point of failure. I don't want to go hunting for test artifacts after a failure.
Moreover, trying to run tests when not all of the fixtures they need are in place makes no sense and makes them erratic at best.
Is this the intended behavior of py.test in case of failure in a fixture?
Thanks, Gabriele
Three issues here:
1. You should register the finalizer after you have performed the action that you want undone: first call makedirs(), then register the finalizer (see the sketch after this list). That's a general issue with fixtures, because teardown code usually only makes sense if something was successfully created.
2. pytest-2.3.5 has a bug in that it will not call finalizers if the fixture function fails. I've just fixed it and you can install the 2.4.0.dev7 (or higher) version with pip install -i http://pypi.testrun.org -U pytest. It ensures the fixture finalizers are called even if the fixture function partially fails. Actually, it's a bit surprising this hasn't popped up earlier, but I guess people, including me, usually just go ahead and fix the fixtures instead of diving into what's happening specifically. So thanks for posting here!
3. If a module-scoped fixture function fails, the next test needing that fixture will still trigger execution of the fixture function again, as it might have been an intermittent failure. It stands to reason that pytest should memorize the failure for the given scope and not retry execution. If you think so, please open an issue, linking to this stackoverflow discussion.
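To illustrate the first point, the work_path fixture from the question could be reordered like this (same module-level names as in the question; sketch only):

import os
import shutil
import pytest

test_work_dir = 'test-work-dir'
tmp = os.environ['tmp']

@pytest.fixture(scope='module')
def work_path(request):
    test_work_path = os.path.join(tmp, test_work_dir)
    os.makedirs(test_work_path)       # perform the action first...

    def cleanup():
        if os.path.isdir(test_work_path):
            shutil.rmtree(test_work_path)

    request.addfinalizer(cleanup)     # ...then register its teardown
    return test_work_path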
thanks, holger
