I have the following code in a python module called test_me.py:
import pytest

@pytest.fixture()
def test_me():
    if condition:
        pytest.skip('Test Message')

def test_func(test_me):
    assert ...
The output looks like:
tests/folder/test_me.py::test_me SKIPPED
Question: Where does 'Test Message' get printed or output? I can't see or find it anywhere.
According to the Pytest documentation, you can use the -rs flag to show it.
$ pytest -rs
======================== test session starts ========================
platform darwin -- Python 3.7.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: ...
collected 1 item
test_sample.py s [100%]
====================== short test summary info ======================
SKIPPED [1] test_sample.py:5: Test Message
======================== 1 skipped in 0.02s =========================
import pytest

@pytest.fixture()
def test_me():
    pytest.skip('Test Message')

def test_1(test_me):
    pass
Not sure if this is platform-specific, or whether it works with the OP's configuration, since the OP didn't provide any specific details.
I don't believe there is any built-in way of printing these messages during the test runs. An alternative is to create your own skip function that does this:
def skip(message):
    print(message)  # you can add some additional info around the message
    pytest.skip()

@pytest.fixture()
def test_me():
    if condition:
        skip('Test Message')
You could probably turn this custom function into a decorator too.
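For instance, a rough sketch of that decorator idea (skip_if and its arguments are made-up names for illustration, not anything pytest provides):
import functools
import pytest

def skip_if(condition, message):
    """Skip the decorated test or fixture when condition is truthy, logging the message first."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if condition:
                print(message)  # or hand the message to your logging setup
                pytest.skip(message)
            return func(*args, **kwargs)
        return wrapper
    return decorator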
A cursory look at the pytest code shows that the message gets wrapped up as the message of the exception raised off the back of calling skip() itself. Its behavior, while not explicitly documented, is defined in outcomes.py:
def skip(msg: str = "", *, allow_module_level: bool = False) -> "NoReturn":
    """Skip an executing test with the given message.

    This function should be called only during testing (setup, call or teardown) or
    during collection by using the ``allow_module_level`` flag. This function can
    be called in doctests as well.

    :param bool allow_module_level:
        Allows this function to be called at module level, skipping the rest
        of the module. Defaults to False.

    .. note::
        It is better to use the :ref:`pytest.mark.skipif ref` marker when
        possible to declare a test to be skipped under certain conditions
        like mismatching platforms or dependencies.
        Similarly, use the ``# doctest: +SKIP`` directive (see `doctest.SKIP
        <https://docs.python.org/3/library/doctest.html#doctest.SKIP>`_)
        to skip a doctest statically.
    """
    __tracebackhide__ = True
    raise Skipped(msg=msg, allow_module_level=allow_module_level)
Eventually, this exception bubbles up through several layers and is ultimately raised as a BaseException. As such, you should be able to access the message itself by trapping the associated exception and reading its message (relevant SO thread).
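For instance, a minimal sketch of trapping it (Skipped lives in the internal _pytest.outcomes module, so this import path is an implementation detail and may move between versions):
import pytest
from _pytest.outcomes import Skipped  # internal; may change between pytest versions

def run_and_capture_skip_message(func):
    """Call func() and return the message passed to pytest.skip(), if any."""
    try:
        func()
    except Skipped as exc:
        return str(exc)
    return None

# e.g. run_and_capture_skip_message(lambda: pytest.skip('Test Message')) -> 'Test Message'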
Referring to the sample code copied from pytest-dependency (with a slight change: the "tests" folder was removed), I expect "test_e" and "test_g" to pass; however, both are skipped. Kindly advise if I have done anything silly that stops the session scope from working properly.
Note:
pytest-dependency 0.5.1 is used.
Both modules are stored directly in the current working directory.
test_mod_01.py
import pytest

@pytest.mark.dependency()
def test_a():
    pass

@pytest.mark.dependency()
@pytest.mark.xfail(reason="deliberate fail")
def test_b():
    assert False

@pytest.mark.dependency(depends=["test_a"])
def test_c():
    pass

class TestClass(object):
    @pytest.mark.dependency()
    def test_b(self):
        pass
test_mod_02.py
import pytest

@pytest.mark.dependency()
@pytest.mark.xfail(reason="deliberate fail")
def test_a():
    assert False

@pytest.mark.dependency(
    depends=["./test_mod_01.py::test_a", "./test_mod_01.py::test_c"],
    scope='session'
)
def test_e():
    pass

@pytest.mark.dependency(
    depends=["./test_mod_01.py::test_b", "./test_mod_02.py::test_e"],
    scope='session'
)
def test_f():
    pass

@pytest.mark.dependency(
    depends=["./test_mod_01.py::TestClass::test_b"],
    scope='session'
)
def test_g():
    pass
Unexpected output
=========================================================== test session starts ===========================================================
...
collected 4 items
test_mod_02.py xsss [100%]
====================================================== 3 skipped, 1 xfailed in 0.38s ======================================================
Expected output
=========================================================== test session starts ===========================================================
...
collected 4 items
test_mod_02.py x.s. [100%]
====================================================== 2 passed, 1 skipped, 1 xfailed in 0.38s ======================================================
The first problem is that pytest-dependency uses the full test node IDs when used in session scope. That means you have to match that string exactly, and it never contains relative path components like the "./" in your case.
Instead of using "./test_mod_01.py::test_c", you have to use something like "tests/test_mod_01.py::test_c", or "test_mod_01.py::test_c", depending where your test root is.
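For example, assuming both modules sit directly under the test root (as in the question), the markers for test_e and test_g would look something like this:
@pytest.mark.dependency(
    depends=["test_mod_01.py::test_a", "test_mod_01.py::test_c"],
    scope='session'
)
def test_e():
    pass

@pytest.mark.dependency(
    depends=["test_mod_01.py::TestClass::test_b"],
    scope='session'
)
def test_g():
    pass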
The second problem is that pytest-dependency only works if the tests that other tests depend on are run earlier in the same test session; in your case, both the test_mod_01 and test_mod_02 modules have to be part of the same test session. The test dependencies are looked up at runtime in the list of tests that have already been run.
Note that this also means that you cannot make tests in test_mod_01 depend on tests in test_mod_02, if you run the tests in the default order. You have to ensure that the tests are run in the correct order either by adapting the names accordingly, or by using some ordering plugin like pytest-order, which has an option (--order-dependencies) to order the tests if needed in such a case.
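For example (assuming both files live in the current directory), you could run both modules in one session, optionally letting pytest-order reorder them by their declared dependencies:
pytest test_mod_01.py test_mod_02.py                        # both modules in the same session
pytest --order-dependencies test_mod_01.py test_mod_02.py   # with pytest-order reordering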
Disclaimer: I'm the maintainer of pytest-order.
Currently in the company where I'm working, we have a framework to run tests. We want to integrate pytest to be able to write tests in the pytest way, but we need the old framework for all the things it's doing in the background.
The issue I'm facing is regarding assertions. Currently we have a bunch of assertion functions. All of them use a private method to write both to Python logging and to a JSON file. I would like to get rid of them and use only "assert".
What I have done so far is to monkeypatch _pytest.assertion.rewrite with a custom module I created, where I changed the visit_Assert method and added this piece of code after line 873:
if isinstance(assert_.test, ast.Compare):
    test_value = BINOP_MAP[assert_.test.ops[0].__class__]
    test_type = "Comparison"
elif isinstance(assert_.test, ast.Call):
    test_value = str(assert_.test.func.id)
    test_type = "FunctionCall"
And then I call the same private method I mentioned above to save the results.
As you can guess, I don't think this is the best way to do it: is there a better way?
I tried the different hooks, but could not find the information I need (what comparison the assert is doing), especially because pytest is very rich in information when tests fail (which makes sense), but not so much when tests pass.
It depends a bit on which version of Pytest you're using, since the hooks are under pretty active development. But in any relatively recent version, you could implement the hook pytest_assertrepr_compare, which is called to report custom error messages on asserts that fail. This method can be defined in conftest.py, and pytest will happily use that definition.
A method like this:
def pytest_assertrepr_compare(config, op, left, right):
    print("Call legacy method here")
    return None
Would instruct pytest that no custom error messages are required (that's the return None part), but it would allow you to call arbitrary code on assert failures.
As an example, running pytest on a dummy test file, test_foo.py with contents:
def test_foo():
    assert 0 == 1, "No bueno"
Should give the following output on your terminal:
================================================= test session starts ==================================================
platform darwin -- Python 3.9.0, pytest-6.1.2, py-1.9.0, pluggy-0.13.1 -- /usr/local/opt/python@3.9/bin/python3.9
cachedir: .pytest_cache
rootdir: /Users/bnaecker/tmp
plugins: cov-2.10.1
collected 1 item
foo.py::test_foo FAILED [100%]
======================================================= FAILURES =======================================================
_______________________________________________________ test_foo _______________________________________________________
    def test_foo():
>       assert 0 == 1, "No bueno"
E       AssertionError: No bueno
E       assert 0 == 1
E         +0
E         -1

foo.py:6: AssertionError
------------------------------------------------- Captured stdout call -------------------------------------------------
Call legacy method here
=============================================== short test summary info ================================================
FAILED foo.py::test_foo - AssertionError: No bueno
================================================== 1 failed in 0.10s ===================================================
The captured stdout is a stand-in for calling your custom logging function. Also, note I'm using pytest-6.1.2, and it's not clear when this hook was included. Other similar hooks were introduced in 5.0, so it's plausible that anything in the >=6.0 would be fine, but YMMV.
Rereading your question, it occurs to me that you might be more specifically asking about how to call your custom method when an assertion passes, rather than when it fails. In that case, the experimental hook pytest_assertion_pass may be what you're looking for. The setup is the same; just implement that method in your conftest.py instead.
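A minimal sketch of that variant (with the caveat that, at least in the pytest 6.x line, the hook only fires when enable_assertion_pass_hook = true is set under [pytest] in your ini file):
# conftest.py
def pytest_assertion_pass(item, lineno, orig, expl):
    # Called for each passing assert once enable_assertion_pass_hook is enabled;
    # orig is the original assert source, expl is the explanation pytest built.
    print("PASSED {} at line {}: assert {}".format(item.nodeid, lineno, orig))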
Python 3.8.0, pytest 5.3.2, logging 0.5.1.2.
My code has an input loop, and to prevent the program crashing entirely, I catch any exceptions that get thrown, log them as critical, reset the program state, and keep going. That means that a test that causes such an exception won't outright fail, so long as the output is still what is expected. This might happen if the error was a side effect of the test code but didn't affect the main tested logic. I would still like to know that the test is exposing an error-causing bug however.
Most of the Googling I have done shows results on how to display logs within pytest, which I am doing, but I can't find out if there is a way to expose the logs within the test, such that I can fail any test with a log at Error or Critical level.
Edit: This is a minimal example of a failing attempt:
test.py:
import subject
import logging
import pytest

@pytest.fixture(autouse=True)
def no_log_errors(caplog):
    yield  # Run in teardown
    print(caplog.records)
    # caplog.set_level(logging.INFO)
    errors = [record for record in caplog.records if record.levelno >= logging.ERROR]
    assert not errors

def test_main():
    subject.main()
    # assert False
subject.py:
import logging

logger = logging.Logger('s')

def main():
    logger.critical("log critical")
Running python3 -m pytest test.py passes with no errors.
Uncommenting the assert statement fails the test without errors, and prints [] to stdout, and log critical to stderr.
Edit 2:
I found why this fails. From the documentation on caplog:
The caplog.records attribute contains records from the current stage only, so inside the setup phase it contains only setup logs, same with the call and teardown phases
However, right underneath is what I should have found the first time:
To access logs from other stages, use the caplog.get_records(when) method. As an example, if you want to make sure that tests which use a certain fixture never log any warnings, you can inspect the records for the setup and call stages during teardown like so:
@pytest.fixture
def window(caplog):
    window = create_window()
    yield window
    for when in ("setup", "call"):
        messages = [
            x.message for x in caplog.get_records(when) if x.levelno == logging.WARNING
        ]
        if messages:
            pytest.fail(
                "warning messages encountered during testing: {}".format(messages)
            )
However this still doesn't make a difference, and print(caplog.get_records("call")) still returns []
You can build something like this using the caplog fixture
here's some sample code from the docs which does some assertions based on the levels:
def test_baz(caplog):
    func_under_test()
    for record in caplog.records:
        assert record.levelname != "CRITICAL"
    assert "wally" not in caplog.text
since the records are the standard logging record types, you can use whatever you need there
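For instance, a small sketch of poking at the standard LogRecord attributes captured by caplog (the logger name and message here are made up):
import logging

def test_inspect_records(caplog):
    logging.getLogger("demo").warning("something looks off")
    for record in caplog.records:
        # standard logging.LogRecord attributes are all available here
        print(record.name, record.levelno, record.levelname, record.getMessage())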
here's one way you might do this ~more automatically using an autouse fixture:
@pytest.fixture(autouse=True)
def no_logs_gte_error(caplog):
    yield
    errors = [record for record in caplog.get_records('call') if record.levelno >= logging.ERROR]
    assert not errors
(disclaimer: I'm a core dev on pytest)
You can use the unittest.mock module (even if using pytest) and monkey-patch whatever function / method you use for logging. Then in your test, you can have some assert that fails if, say, logging.error was called.
That'd be a short-term solution. But it might also be the case that your design could benefit from more separation, so that you can easily test your application without a zealous try ... except block catching / suppressing just about everything.
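A rough sketch of the monkey-patching idea (run_job here is just a stand-in for your own code under test, and the example patches the module-level logging.error, so adjust the patch target to whatever your code actually calls):
import logging
from unittest import mock

def run_job():
    pass  # stand-in for the real code under test

def test_run_job_logs_no_errors():
    with mock.patch("logging.error") as mocked_error:
        run_job()
    # fails if the code under test called logging.error during the run
    mocked_error.assert_not_called()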
What is the best way to skip every remaining test if a specific test fails? Here test_002_wips_online.py failed, and there is no point in running further:
tests/test_001_springboot_monitor.py::TestClass::test_service_monitor[TEST12] PASSED [ 2%]
tests/test_002_wips_online.py::TestClass::test01_online[TEST12] FAILED [ 4%]
tests/test_003_idpro_test_api.py::TestClass::test01_testapi_present[TEST12] PASSED [ 6%]
I would like to skip all remaining tests and still write the test report.
Should I write to a status file and add a function that checks it?
@pytest.mark.skipif(setup_failed(), reason="requirements failed")
pytest-skipif-reference
You should really look at pytest-dependency plugin: https://pytest-dependency.readthedocs.io/en/latest/usage.html
import pytest

@pytest.mark.dependency()
def test_b():
    pass

@pytest.mark.dependency(depends=["test_b"])
def test_d():
    pass
In this example, test_d won't be executed if test_b fails.
from the docs: http://pytest.org/en/latest/usage.html#stopping-after-the-first-or-n-failures
pytest -x # stop after first failure
pytest --maxfail=2 # stop after two failures
I found that calling pytest.exit("error message") if any of my critical predefined tests fail is the most convenient. pytest will finish all post-run jobs like the HTML report and screenshots, and your error message will be printed at the end:
!!!!!!!!!!! _pytest.outcomes.Exit: Critical Error: wips frontend in test1 not running !!!!!!!!!!!
===================== 1 failed, 8 passed in 92.72 seconds ======================
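For example (the health check below is a placeholder; the message matches the one shown above):
import pytest

def wips_frontend_running():
    # placeholder for the real health check
    return False

def test_002_wips_online():
    if not wips_frontend_running():
        pytest.exit("Critical Error: wips frontend in test1 not running")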
I wanted to put the results of pytest asserts into a log.
First I tried this solution:
def logged_assert(self, testval, msg=None):
    if not testval:
        if msg is None:
            try:
                assert testval
            except AssertionError as e:
                self.logger.exception(e)
                raise e
        self.logger.error(msg)
        assert testval, msg
It works fine, but I need to pass my own msg for every assert instead of relying on the built-in one. The problem is that testval is evaluated when it is passed into the function, so the error message is just
AssertionError: False
I found an excellent way to solve the problem at http://code.activestate.com/recipes/577074-logging-asserts/ (in the first comment).
I wrote this function in my logger wrapper module:
def logged_excepthook(er_type, value, trace):
    print('HOOK!')
    if isinstance(er_type, AssertionError):
        current = sys.modules[sys._getframe(1).f_globals['__name__']]
        if 'logger' in sys.modules[current]:
            sys.__excepthook__(er_type, value, trace)
            sys.modules[current].error(exc_info=(er_type, value, trace))
        else:
            sys.__excepthook__(er_type, value, trace)
    else:
        sys.__excepthook__(er_type, value, trace)
and then
sys.excepthook = logged_excepthook
In the test module, where I have asserts, the output of
import sys
print(sys.excepthook, sys.__excepthook__, logged_excepthook)
is
<function logged_excepthook at 0x02D672B8> <built-in function excepthook> <function logged_excepthook at 0x02D672B8>
But there is no 'HOOK!' message in my output, and also no ERROR message in my log files. Everything works as if the built-in sys.excepthook were still in place.
I looked through the pytest sources, but sys.excepthook isn't changed there.
But if I interrupt my code execution with Ctrl-C, I do get the 'HOOK!' message in stdout.
The main question is why the built-in sys.excepthook is called instead of my custom function, and how I can fix that.
But it is also interesting to me whether another way to log assert errors exists.
I am using Python 3.2 (32-bit) on 64-bit Windows 8.1.
excepthook is only triggered if there is an unhandled exception, i.e. the one that normally terminates your program. Any exceptions in a test are handled by the test framework.
See Asserting with the assert statement - pytest documentation on how the feature is intended to be used. A custom message is specified the standard way: assert condition, failure_message.
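For example (a generic illustration, not taken from the docs page):
def compute_answer():
    return 41  # stand-in for the code under test

def test_answer():
    result = compute_answer()
    assert result == 42, "expected 42, got {}".format(result)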
If you're not satisfied with the way pytest handles asserts, you need to either
use a wrapper, or
hook the assert statement.
pytest uses an assert hook as well. Its logic is located in Lib\site-packages\_pytest\assertion (a stock plugin). It's probably enough to wrap/replace a few functions in there. To avoid patching the code base, you may be able to make do with your own plugin: patch the assertion plugin at runtime, or
disable it and reuse its functionality yourself instead.
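As a side note (my own suggestion, not part of the answer above): if you only want pytest's rewriting machinery out of the way so that your own wrapper or hook sees plain AssertionErrors, the --assert=plain option disables assertion rewriting altogether:
pytest --assert=plain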