Make pytest include functional tests in its count - python

I'm starting a new project and trying to follow strict Test-Driven Development. I have a basic setup in place and working and am using pytest to run tests.
Tests are discovered and run correctly. They fail when they should and pass when they should. But in pytest's results, the number of tests performed is zero. This isn't a big deal, but I would like the visual feedback that confirms the test file is being run.
Failing:
============================= test session starts =============================
...
collected 0 items / 1 errors
=================================== ERRORS ====================================
___________ ERROR collecting tests/functional_tests/test_package.py ___________
...
=========================== 1 error in 0.05 seconds ===========================
Passing:
============================= test session starts =============================
...
collected 0 items
======================== no tests ran in 0.03 seconds =========================
For the record, my first functional test is just importing the package.
# Can we import the package?
import packagename
assert packagename is not None
The slightly redundant assert was my attempt at getting pytest to count this as "a test", since I know it rewrites assert to be more informative.
The test is run correctly, but the test session doesn't count this as being a test. I don't much care how it counts the tests (the whole file is one, each assert is one, whatever), but I would like it to do so!

Put it in a function.
Okay, after playing around some more, I may have solved my own problem. This, for instance, works (the whole function gets counted as one test).
def test_import():
    # Can we import the package?
    import packagename
I'll leave the question open to see if anyone has a better answer.
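For what it's worth, pytest only counts what it collects, and by default it collects functions named test_* (and methods on Test* classes) in files named test_*.py; module-level statements run as a side effect of importing the file during collection, but they are never counted as tests. A minimal sketch of the functional test file along these lines (the second test is hypothetical, just to show that each function is counted as one item):
# tests/functional_tests/test_package.py

def test_import():
    # Can we import the package?
    import packagename
    assert packagename is not None

def test_has_version():
    # Hypothetical follow-up check; adapt to whatever packagename actually exposes.
    import packagename
    assert hasattr(packagename, "__version__")
With two such functions, pytest reports the file as two collected items, and each one passes or fails independently.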

Assertion improvements

At the company where I'm working we currently have a framework to run tests. We want to integrate pytest so we can write tests the pytest way, but we need the old framework for everything it does in the background.
The issue I'm facing concerns assertions. We currently have a bunch of assertion functions. All of them use a private method to write both to Python logging and to a JSON file. I would like to get rid of them and use only "assert".
What I have done so far is monkeypatch _pytest.assertion.rewrite with a custom module I created, where I changed the visit_Assert method and added this piece of code after line 873:
if isinstance(assert_.test, ast.Compare):
    test_value = BINOP_MAP[assert_.test.ops[0].__class__]
    test_type = "Comparison"
elif isinstance(assert_.test, ast.Call):
    test_value = str(assert_.test.func.id)
    test_type = "FunctionCall"
And then I call the same private method I mentioned above to save the results.
As you can guess, I don't think this is the best way to do it: is there a better way?
I tried the different hooks, but could not find the information I need (what comparison the assert is doing), especially because pytest is very informative when tests fail (which makes sense), but not so rich in information when tests pass.
It depends a bit on which version of pytest you're using, since the hooks are under pretty active development. But in any relatively recent version, you could implement the hook pytest_assertrepr_compare, which is called to report custom error messages on asserts that fail. This hook can be defined in conftest.py, and pytest will happily use that definition.
A method like this:
def pytest_assertrepr_compare(config, op, left, right):
    print("Call legacy method here")
    return None
Would instruct pytest that no custom error messages are required (that's the return None part), but it would allow you to call arbitrary code on assert failures.
As an example, running pytest on a dummy test file, test_foo.py with contents:
def test_foo():
    assert 0 == 1, "No bueno"
Should give the following output on your terminal:
================================================= test session starts ==================================================
platform darwin -- Python 3.9.0, pytest-6.1.2, py-1.9.0, pluggy-0.13.1 -- /usr/local/opt/python@3.9/bin/python3.9
cachedir: .pytest_cache
rootdir: /Users/bnaecker/tmp
plugins: cov-2.10.1
collected 1 item
foo.py::test_foo FAILED [100%]
======================================================= FAILURES =======================================================
_______________________________________________________ test_foo _______________________________________________________
def test_foo():
> assert 0 == 1, "No bueno"
E AssertionError: No bueno
E assert 0 == 1
E +0
E -1
foo.py:6: AssertionError
------------------------------------------------- Captured stdout call -------------------------------------------------
Call legacy method here
=============================================== short test summary info ================================================
FAILED foo.py::test_foo - AssertionError: No bueno
================================================== 1 failed in 0.10s ===================================================
The captured stdout is a stand-in for calling your custom logging function. Also, note that I'm using pytest-6.1.2, and it's not clear when this hook was introduced. Other similar hooks were introduced in 5.0, so it's plausible that anything >=6.0 would be fine, but YMMV.
Rereading your question, it occurs to me that you might more specifically be asking how to call your custom method when an assertion passes, rather than when it fails. In that case, the experimental hook pytest_assertion_pass may be what you're looking for. The setup is the same: implement that hook in your conftest.py instead.
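A minimal sketch of that setup, assuming a recent pytest: pytest_assertion_pass is experimental and disabled by default, so it also needs the enable_assertion_pass_hook ini option; the print call below is just a placeholder for your framework's private logging method.
# pytest.ini
[pytest]
enable_assertion_pass_hook = true
And in conftest.py:
def pytest_assertion_pass(item, lineno, orig, expl):
    # Called for every assert that passes. `orig` is the source text of the
    # assert statement, `expl` is pytest's rewritten explanation of it.
    print(f"PASS {item.nodeid}:{lineno}: {orig} -> {expl}")  # swap in your legacy logger here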

Does pytest have anything like google test's non-fatal EXPECT_* behavior?

I'm more familiar with the Google Test framework and know about the primary pair of behaviors it supports, ASSERT_* vs EXPECT_*, which are the fatal and non-fatal assert modes.
From the documentation:
The assertions come in pairs that test the same thing but have different effects on the current function. ASSERT_* versions generate fatal failures when they fail, and abort the current function. EXPECT_* versions generate nonfatal failures, which don't abort the current function. Usually EXPECT_* are preferred, as they allow more than one failure to be reported in a test. However, you should use ASSERT_* if it doesn't make sense to continue when the assertion in question fails.
Question: does pytest also have a non-fatal assert flavor or mode I can enable?
It's nice to allow a full range of tests to maximally execute to get the richest failure history rather than abort at the first failure and potentially hide subsequent failures that have to be discovered piecewise by running multiple instances of the test application.
I use pytest-assume for non-fatal assertions. It does the job pretty well.
Installation
As usual,
$ pip install pytest-assume
Usage example
import pytest

def test_spam():
    pytest.assume(True)
    pytest.assume(False)
    a, b = True, False
    pytest.assume(a == b)
    pytest.assume(1 == 0)
    pytest.assume(1 < 0)
    pytest.assume('')
    pytest.assume([])
    pytest.assume({})
If you feel writing pytest.assume is a bit too much, just alias the import (it has to be a from pytest import, since assume is an attribute the plugin adds to the pytest module, not a submodule):
from pytest import assume as expect

def test_spam():
    expect(True)
    ...
Running the above test yields:
$ pytest -v
============================= test session starts ==============================
platform linux -- Python 3.6.5, pytest-3.6.0, py-1.5.3, pluggy-0.6.0 -- /data/gentoo64-prefix/u0_a82/projects/stackoverflow/so-50630845
cachedir: .pytest_cache
rootdir: /data/gentoo64-prefix/u0_a82/projects/stackoverflow/so-50630845, inifile:
plugins: assume-1.2
collecting ... collected 1 item
test_spam.py::test_spam FAILED [100%]
=================================== FAILURES ===================================
__________________________________ test_spam ___________________________________
test_spam.py:6: AssumptionFailure
pytest.assume(False)
test_spam.py:9: AssumptionFailure
pytest.assume(a == b)
test_spam.py:11: AssumptionFailure
pytest.assume(1 == 0)
test_spam.py:12: AssumptionFailure
pytest.assume(1 < 0)
test_spam.py:13: AssumptionFailure
pytest.assume('')
test_spam.py:14: AssumptionFailure
pytest.assume([])
test_spam.py:15: AssumptionFailure
pytest.assume({})
------------------------------------------------------------
Failed Assumptions: 7
=========================== 1 failed in 0.18 seconds ===========================
No, there is no feature like that in pytest. The most popular approach is to use regular assert statements, which fail the test immediately if the expression is falsey.
It's nice to allow a full range of tests to maximally execute to get the richest failure history rather than abort at the first failure and potentially hide subsequent failures that have to be discovered piecewise by running multiple instances of the test application.
Opinions differ on whether this is nice or not. In the open-source Python community, at least, the popular approach is that every potential "subsequent failure that is discovered piecewise" would be written as its own separate test: more, smaller tests that (ideally) each assert only one thing.
You could easily recreate the EXPECT_* thing by appending to a list of errors and then asserting the list is empty at the end of the test, but there is no support directly in pytest for such a feature.
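A minimal sketch of that hand-rolled EXPECT_*-style pattern, with made-up checks, might look like this:
def test_expect_style():
    errors = []

    # Non-fatal checks: record each failure and keep going.
    if not 1 + 1 == 2:
        errors.append("1 + 1 should equal 2")
    if "py" not in "pytest":
        errors.append("'py' should be a substring of 'pytest'")

    # A single fatal assert at the end reports every recorded failure at once.
    assert not errors, "\n".join(errors)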

Py.test skip messages don't show in jenkins

I have a minor issue using py.test for my unit tests.
I use py.test to run my tests and output a junitxml report of the tests.
This XML report is imported into Jenkins and generates nice statistics.
When I use a test class that derives from unittest.TestCase,
I skip expected failures using:
@unittest.skip("Bug 1234 : This does not work")
This message also shows up in Jenkins when selecting this test.
When I don't use a unittest.TestCase class, e.g. to use py.test parametrize functionality,
I skip expected failures using:
@pytest.mark.xfail(reason="Bug 1234 : This does not work", run=False)
But then this reason is not actually displayed in Jenkins; instead it will say:
Skip Message
expected test failure
How can I fix this?
I solved it using this line as the first line of the test:
pytest.skip("Bug 1234: This does not work")
I'd rather have used one of the pytest decorators, but this'll do.
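In context, the workaround looks roughly like this (the parametrization and the function under test are made up for illustration); calling pytest.skip() inside the test body skips it at runtime, and the reason string is what ends up in the junitxml report that Jenkins reads:
import pytest

@pytest.mark.parametrize("value", [1, 2, 3])
def test_feature(value):
    pytest.skip("Bug 1234 : This does not work")
    # Never reached; some_function is a placeholder for the code under test.
    assert some_function(value)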
I had a similar problem, except I had a different Jenkins message and could not tell which test was skipped.
It turns out that if the only test in the module is a skipped test, then Jenkins would not show the test in the test result list (using either decorator or jr-be's solution). You could see that there was a skipped test in the total results, but could not tell which test, or which module, the skipped test was in.
To solve this (OK, hack around it), I went back to using the decorator on my test and added a dummy test (so I have 1 test that runs and 1 test that gets skipped):
@pytest.mark.skip('SONIC-3218')
def test_segments_create_delete(self, api):
    logging.info('TestCreateDeleteSegments.test_segments_create_delete')

def test_dummy(self, api):
    '''
    Dummy test to see if suite will display in jenkins if one
    test is run and 1 is skipped (instead of having only skipped tests)
    '''
    logging.info('TestCreateDeleteSegments.test_dummy')
For me that works since I would rather have 1 extra dummy test and be able to find my skipped tests.

py.test 2.3.5 does not run finalizer after fixture failure

I was trying py.test for its claimed better support than unittest for module and session fixtures, but I stumbled on behavior that is, at least for me, bizarre.
Consider the following code (don't tell me it's dumb, I know, it's just a quick and dirty hack to replicate the behavior). I'm running Python 2.7.5 x86 on Windows 7.
import os
import shutil
import pytest

test_work_dir = 'test-work-dir'
tmp = os.environ['tmp']
count = 0

@pytest.fixture(scope='module')
def work_path(request):
    global count
    count += 1
    print('test: ' + str(count))
    test_work_path = os.path.join(tmp, test_work_dir)

    def cleanup():
        print('cleanup: ' + str(count))
        if os.path.isdir(test_work_path):
            shutil.rmtree(test_work_path)

    request.addfinalizer(cleanup)
    os.makedirs(test_work_path)
    return test_work_path

def test_1(work_path):
    assert os.path.isdir(work_path)

def test_2(work_path):
    assert os.path.isdir(work_path)

def test_3(work_path):
    assert os.path.isdir(work_path)

if __name__ == "__main__":
    pytest.main(['-s', '-v', __file__])
If test_work_dir does not exist, then I obtain the expected behavior:
platform win32 -- Python 2.7.5 -- pytest-2.3.5 -- C:\Programs\Python\27-envs\common\Scripts\python.exe
collecting ... collected 4 items
py_test.py: [doctest] PASSED
py_test.py:34: test_1 test: 1
cleanup: 1
PASSED
py_test.py:38: test_2 PASSED
py_test.py:42: test_3 PASSEDcleanup: 1
The fixture is called once for the module and cleanup is called once at the end of the tests.
Then, if test_work_dir already exists, I would expect something similar to unittest: the fixture is called once, it fails with OSError, the tests that need it are not run, cleanup is called once, and world peace is established again.
But... here's what I see:
py_test.py: [doctest] PASSED
py_test.py:34: test_1 test: 1
ERROR
py_test.py:38: test_2 test: 2
ERROR
py_test.py:42: test_3 test: 3
ERROR
Despite the failure of the fixture, all the tests are run, the fixture that is supposed to be scope='module' is called once for each test, and the finalizer is never called!
I know that exceptions in fixtures are not good policy, but the real fixtures are complex and I'd rather avoid filling them with try blocks if I can count on the execution of every finalizer registered up to the point of failure. I don't want to go hunting for test artifacts after a failure.
Moreover, trying to run the tests when not all of the fixtures they need are in place makes no sense and can make them erratic at best.
Is this the intended behavior of py.test in case of failure in a fixture?
Thanks, Gabriele
Three issues here:
1. You should register the finalizer after you have performed the action you want undone: first call makedirs(), then register the finalizer (see the sketch after this answer). That's a general issue with fixtures, because teardown code can usually only run if something was successfully created.
2. pytest-2.3.5 has a bug in that it will not call finalizers if the fixture function fails. I've just fixed it and you can install the 2.4.0.dev7 (or higher) version with pip install -i http://pypi.testrun.org -U pytest. It ensures the fixture finalizers are called even if the fixture function partially fails. It's actually a bit surprising this hasn't popped up earlier, but I guess people, including me, usually just go ahead and fix the fixtures instead of diving into what's happening specifically. So thanks for posting here!
3. If a module-scoped fixture function fails, the next test needing that fixture will still trigger execution of the fixture function again, as it might have been an intermittent failure. It stands to reason that pytest should memorize the failure for the given scope and not retry execution. If you think so, please open an issue, linking to this Stack Overflow discussion.
thanks, holger
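To illustrate the first point, here is the fixture from the question reordered so that makedirs() runs before the finalizer is registered; that way the finalizer only exists once there is actually something to clean up:
@pytest.fixture(scope='module')
def work_path(request):
    test_work_path = os.path.join(tmp, test_work_dir)
    os.makedirs(test_work_path)  # may raise OSError if the directory already exists

    def cleanup():
        if os.path.isdir(test_work_path):
            shutil.rmtree(test_work_path)

    # Registered only after the directory was successfully created,
    # so the finalizer never runs for a setup that did not happen.
    request.addfinalizer(cleanup)
    return test_work_path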

In py.test, when I explicitly skip a test that is marked as xfail, how can I get it reported as 'skipped' instead of 'xfailed'?

I have a py.test test function marked as xfail:
@pytest.mark.xfail
def test_that_fails():
    assert 1 == 2
In my pytest_runtest_setup() hook, I skip this test explicitly:
def pytest_runtest_setup(item):
    pytest.skip('Skipping this test')
When I run py.test, it reports that the test xfailed:
tests.py x
========================== 1 xfailed in 1.69 seconds ===========================
How can I get py.test to report this test as skipped?
It seems like I am asking, "How can I remove the xfail marking from this test in my pytest_runtest_setup() hook?"
Thanks.
Thanks for the bug report :)
I agree that expecting py.test to report this as a "skipped" test rather than xfailed makes sense. After all, the xfail outcome (xfail/xpass) cannot be determined if the test is skipped. I just fixed this issue; you can install the fix by typing:
pip install -i http://pypi.testrun.org -U pytest
# using easy_install is fine as well
This should get you 2.2.5.dev4 (check with "py.test --version") and fix your issue. It will be part of the next PyPI release.
