When talking about pytest, we know two things:
When a test passes, no output is given by default.
Sometimes assertion failures can have very cryptic messages.
I took a course that solved this by using print to clarify the desired outputs and running the tests with pytest -v -s. I think it is a great solution.
Another developer in my company thinks that test code should be as free of "side effects" as possible (and considers prints a side effect). He suggests outputting to a file instead, which I think is not good practice. (To me, that is an undesirable side effect too.)
So I would like to hear about this from other developers.
How do you solve the two points given at the beginning, and do you use prints in your tests?
As someone already pointed out you can provide your own assert message:
def test_something():
    i = 2
    assert i == 1, "i should be equal to one"
There should be really no difference between using assert messages and prints, except that the assert message appears directly in the pytest failure report, while prints end up in the captured stdout section. In this case, 0-9 would be printed in the pytest report:
def test_something():
    i = 2
    for i in range(10):
        print(i)
    assert i == 1
Logging everything to a file would definitely make working with pytest harder, and would be a pain to debug if your tests fail in CI.
If you need descriptive messages, I would prefer assert messages, and perhaps prints for debug information.
Using print() in your test is not a good solution; you need to see the data in the CLI or on a pipeline.
For assertions, you can provide custom messages that are shown on failure, or when raising an exception.
Here is a basic tutorial for this:
https://docs.pytest.org/en/7.1.x/how-to/assert.html
For general test steps, the best way to get the info is logging,
at different levels:
import logging

logger = logging.getLogger(__name__)
logger.info('what info you want to share')
logger.error('what info you want to share')
logger.debug('what info you want to share')
For more info you can check this:
https://docs.python.org/3/howto/logging.html
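To make the logging advice concrete, here is a minimal sketch (the function and messages are made up): log from the code under test via a per-module logger, and let pytest display the records.

```python
import logging

logger = logging.getLogger(__name__)  # per-module logger, the usual pattern

def fetch_answer():
    logger.debug("starting computation")
    result = 6 * 7
    logger.info("computed result=%s", result)  # lazy %-formatting
    return result
```

Running pytest with --log-cli-level=INFO then shows the info record for every test that calls fetch_answer(), without any print calls, and the caplog fixture can assert on the records.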
I'm using the flake8-pytest-style plugin and it flags a certain test as violating PT012. This is about having too much logic in the raises() statement.
The code in question is this:
def test_bad_python_version(capsys) -> None:
    import platform

    from quendor.__main__ import main

    with pytest.raises(SystemExit) as pytest_wrapped_e, mock.patch.object(
        platform,
        "python_version",
    ) as v_info:
        v_info.return_value = "3.5"
        main()

        terminal_text = capsys.readouterr()
        expect(terminal_text.err).to(contain("Quendor requires Python 3.7"))

    expect(pytest_wrapped_e.type).to(equal(SystemExit))
    expect(pytest_wrapped_e.value.code).to(equal(1))
Basically this is testing the following code:
def main() -> int:
    if platform.python_version() < "3.7":
        sys.stderr.write("\nQuendor requires Python 3.7 or later.\n")
        sys.stderr.write(f"Your current version is {platform.python_version()}\n\n")
        sys.exit(1)
What I do is just pass in a version of Python that is less than the required and make sure the error appears as expected. The test itself works perfectly fine. (I realize it can be questionable as to whether this should be a unit test at all since it's really testing more of an aspect of Python than my own code.)
Clearly the lint check is suggesting that my test is a little messy and I can certainly understand that. But it's not clear from the above referenced page what I'm supposed to do about it.
I do realize I could just disable the quality check for this particular test but I'm trying to craft as good of Python code as I can, particularly around tests. And I'm at a loss as to how to refactor this code to meet the criteria.
I know I can create some other test helper function and then have that function called from the raises block. But that strikes me as being less clear overall since now you have to look in two places in order to see what the test is doing.
The lint error is a very good one! In fact, in your case, because the lint is not followed, you have two lines of unreachable code (!) (the two capsys-related lines), because main() always raises.
The lint is suggesting that you have only one line in a raises() block -- the naive refactor from your existing code is:
with mock.patch.object(
    platform,
    "python_version",
    return_value="3.5",
):
    with pytest.raises(SystemExit) as pytest_wrapped_e:
        main()

terminal_text = capsys.readouterr()
expect(terminal_text.err).to(contain("Quendor requires Python 3.7"))
expect(pytest_wrapped_e.type).to(equal(SystemExit))
expect(pytest_wrapped_e.value.code).to(equal(1))
As an aside, you should never use platform.python_version() for version comparisons, as it produces incorrect results for Python 3.10 -- more on that, and a linter for it, here
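The 3.10 problem comes from comparing version strings lexicographically; a quick illustration (the version_too_old helper is a made-up example, using sys.version_info-style tuples):

```python
# Lexicographic string comparison sorts "3.10" before "3.7":
assert "3.10" < "3.7"  # True, even though 3.10 is the newer release

# Comparing tuples of ints, as sys.version_info does, is safe:
def version_too_old(version_info, minimum=(3, 7)):
    return tuple(version_info[:2]) < minimum

assert version_too_old((3, 5))
assert not version_too_old((3, 10))  # tuple comparison gets 3.10 right
```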
Python 3.8.0, pytest 5.3.2, logging 0.5.1.2.
My code has an input loop, and to prevent the program crashing entirely, I catch any exceptions that get thrown, log them as critical, reset the program state, and keep going. That means that a test that causes such an exception won't outright fail, so long as the output is still what is expected. This might happen if the error was a side effect of the test code but didn't affect the main tested logic. I would still like to know that the test is exposing an error-causing bug however.
Most of the Googling I have done shows results on how to display logs within pytest, which I am doing, but I can't find out if there is a way to expose the logs within the test, such that I can fail any test with a log at Error or Critical level.
Edit: This is a minimal example of a failing attempt:
test.py:
import subject
import logging
import pytest
@pytest.fixture(autouse=True)
def no_log_errors(caplog):
    yield  # run in teardown
    print(caplog.records)
    # caplog.set_level(logging.INFO)
    errors = [record for record in caplog.records if record.levelno >= logging.ERROR]
    assert not errors
def test_main():
    subject.main()
    # assert False
subject.py:
import logging

logger = logging.Logger('s')

def main():
    logger.critical("log critical")
Running python3 -m pytest test.py passes with no errors.
Uncommenting the assert statement fails the test without errors, and prints [] to stdout, and log critical to stderr.
Edit 2:
I found why this fails. From the documentation on caplog:
The caplog.records attribute contains records from the current stage only, so inside the setup phase it contains only setup logs, same with the call and teardown phases
However, right underneath is what I should have found the first time:
To access logs from other stages, use the caplog.get_records(when) method. As an example, if you want to make sure that tests which use a certain fixture never log any warnings, you can inspect the records for the setup and call stages during teardown like so:
@pytest.fixture
def window(caplog):
    window = create_window()
    yield window
    for when in ("setup", "call"):
        messages = [
            x.message for x in caplog.get_records(when) if x.levelno == logging.WARNING
        ]
        if messages:
            pytest.fail(
                "warning messages encountered during testing: {}".format(messages)
            )
However this still doesn't make a difference, and print(caplog.get_records("call")) still returns []
You can build something like this using the caplog fixture
here's some sample code from the docs which does some assertions based on the levels:
def test_baz(caplog):
    func_under_test()
    for record in caplog.records:
        assert record.levelname != "CRITICAL"
    assert "wally" not in caplog.text
since the records are the standard logging record types, you can use whatever you need there
here's one way you might do this ~more automatically using an autouse fixture:
@pytest.fixture(autouse=True)
def no_logs_gte_error(caplog):
    yield
    errors = [record for record in caplog.get_records('call') if record.levelno >= logging.ERROR]
    assert not errors
(disclaimer: I'm a core dev on pytest)
You can use the unittest.mock module (even if using pytest) and monkey-patch whatever function / method you use for logging. Then in your test, you can have some assert that fails if, say, logging.error was called.
That'd be a short term solution. But it might also be the case that your design could benefit from more separation, so that you can easily test your application without a zealous try ... except block catching / suppressing just about everything.
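A sketch of that monkey-patching idea (the process function is a made-up stand-in for the code under test):

```python
import logging
from unittest import mock

def process(value):
    # Stand-in for the real application code, which logs instead of raising:
    if value < 0:
        logging.error("negative value: %s", value)
        return 0
    return value * 2

def test_no_errors_logged():
    with mock.patch.object(logging, "error") as mock_error:
        result = process(21)
    mock_error.assert_not_called()  # fails the test if logging.error fired
    assert result == 42
```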
I generally like the pytest warnings capture hook, as I can use it to force my test suite to not have any warnings triggered. However, I have one test that requires the warnings to print to stderr correctly to work.
How can I disable the warnings capture for just the one test?
For instance, something like
def test_warning():
    mystderr = StringIO()
    sys.stderr = mystderr
    warnings.warn('warning')
    assert 'UserWarning: warning' in mystderr.getvalue()
(I know I can use capsys, I just want to show the basic idea)
Thanks to the narrowing down in this discussion, I think the question might better be titled "In pytest, how to capture warnings and their standard error output in a single test?". Given that suggested rewording, I think the answer is "it can't, you need a separate test".
If there were no standard error capture requirement, you should be able to use the @pytest.mark.filterwarnings marker for this.
@pytest.mark.filterwarnings("ignore")
def test_one():
    assert api_v1() == 1
From:
https://docs.pytest.org/en/latest/warnings.html#pytest-mark-filterwarnings
@wim points out in a comment this will not capture the warning, though, and the answer he lays out captures and asserts on the warnings in a standard way.
If there were stderr output but not Python warnings thrown, capsys would be the technique, as you say
https://docs.pytest.org/en/latest/capture.html
I don't think it's meaningful to do both in a pytest test, because of the nature of the pytest implementation.
As previously noted, pytest redirects stderr etc. to an internal recorder. Secondly, it defines its own warnings handler:
https://github.com/pytest-dev/pytest/blob/master/src/_pytest/warnings.py#L59
It is similar in idea to the answer to this question:
https://stackoverflow.com/a/5645133/5729872
I had a little poke around with redefining warnings.showwarning(), which worked fine from vanilla python, but pytest deliberately reinitializes that as well.
# won't work in pytest, only straight python
def func(x):
    warnings.warn('wwarn')
    print(warnings.showwarning.__doc__)
    # print('ewarn', file=sys.stderr)
    return x + 1

sworig = warnings.showwarning

def showwarning_wrapper(message, category, filename, lineno, file=None, line=None):
    """Local override for showwarning()"""
    print('swwrapper({})'.format(file))
    sworig(message, category, filename, lineno, file, line)

warnings.showwarning = showwarning_wrapper
# won't work in pytest, only straight python
You could probably put a warnings handler in your test case that re-outputs them to stderr ... but that doesn't prove much about the code under test, at that point.
It is your system at the end of the day. If, after consideration of the point made by @wim that testing stderr as such may not prove much, you decide you still need it, I suggest separating the testing of the Python warning object (Python caller layer) and the contents of stderr (calling shell layer). The first test would look at Python warning objects only. The new second test case would call the library under test as a script, through popen() or similar, and assert on the resulting standard error and output.
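That second, shell-layer test might look like this (a minimal sketch; the inline script is a stand-in for the real library): run the code in a child Python process and assert on its stderr, outside pytest's capture and warning hooks.

```python
import subprocess
import sys

# Stand-in for the library under test: a tiny script that emits a warning.
script = (
    "import warnings\n"
    "warnings.warn('wwarn')\n"
)

def test_warning_reaches_stderr():
    proc = subprocess.run(
        [sys.executable, "-c", script],
        capture_output=True,
        text=True,
    )
    # The child's warnings machinery writes to its own stderr,
    # which pytest in the parent process never intercepts.
    assert "UserWarning: wwarn" in proc.stderr
```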
I'll encourage you to think about this problem in a different way.
When you want to assert that some of your code triggers warnings, you should be using a pytest.warns context for that. Check the warning message by using the match keyword, and avoid the extra complications of trying to capture it from stderr.
import re
import warnings

import pytest

def test_warning():
    expected_warning_message = "my warning"
    match = re.escape(expected_warning_message)
    with pytest.warns(UserWarning, match=match):
        warnings.warn("my warning", UserWarning)
That should be the edge of your testing responsibility. It is not your responsibility to test that the warnings module itself prints some output to stderr, because that behavior is coming from standard library code and it should be tested by Python itself.
I am using Pycharm to run my pytest unit tests. I am testing a REST API, so I often have to validate blocks of JSON. When a test fails, I'll see something like this:
FAILED
test_document_api.py:0 (test_create_documents)
{'items': [{'i...ages': 1, ...} != {'items': [{'...ages': 1, ...}
Expected :{'items': [{'...ages': 1, ...}
Actual :{'items': [{'i...ages': 1, ...}
<Click to see difference>
When I click on the "Click to see difference" link, most of the difference is converted to points of ellipses.
This is useless, since it doesn't show me what is different. I get this behavior for any difference larger than a single string or number.
I assume Pycharm and/or pytest tries to elide uninformative parts of differences for large outputs. However, it's being too aggressive here and eliding everything.
How do I get Pycharm and/or pytest to show me the entire difference?
I've tried adding -vvv to pytest's Additional Arguments, but that has no effect.
Since the original post I verified that I see the same behavior when I run unit tests from the command line. So this is an issue with pytest and not Pycharm.
After looking at the answers I've got so far I guess what I'm really asking is "in pytest is it possible to set maxDiff=None without changing the source code of your tests?" The impression I've gotten from reading about pytest is that the -vv switch is what controls this setting, but this does not appear to be the case.
If you look closely into the PyCharm sources, you'll find that, out of the whole pytest output, PyCharm parses a single line for displaying the data in the Click to see difference dialog. This is the AssertionError: <message> line:
def test_spam():
>       assert v1 == v2
E       AssertionError: assert {'foo': 'bar'} == {'foo': 'baz'}
E       Differing items:
E       {'foo': 'bar'} != {'foo': 'baz'}
E       Use -v to get the full diff
If you want to see the full diff line without truncation, you need to customize this line in the output. For a single test, this can be done by adding a custom message to the assert statement:
def test_eggs():
    assert a == b, '{0} != {1}'.format(a, b)
If you want to apply this behaviour to all tests, define a custom pytest_assertrepr_compare hook in the conftest.py file:
# conftest.py

def pytest_assertrepr_compare(config, op, left, right):
    if op in ('==', '!='):
        return ['{0} {1} {2}'.format(left, op, right)]
The comparison will still be truncated when it is too long; to show the complete line, you still need to increase the verbosity with the -vv flag. With that, the equality comparison in the AssertionError line is no longer stripped, and the full diff is displayed in the Click to see difference dialog, highlighting the differing parts.
Given that pytest integrates with unittest, as a workaround you may be able to set the test up as a unittest and then set maxDiff = None on the class, or self.maxDiff = None per specific test.
https://docs.pytest.org/en/latest/index.html
Can run unittest (including trial) and nose test suites out of the box;
These may be helpful as well...
https://stackoverflow.com/a/21615720/9530790
https://stackoverflow.com/a/23617918/9530790
Had a look in the pytest code base and maybe you can try some of these out:
1) Set verbosity level in the test execution:
./app_main --pytest --verbose test-suite/
2) Add an environment variable for "CI" or "BUILD_NUMBER". In the link to the truncate file you can see that these env variables are used to determine whether or not the truncation block is run:
import os
os.environ["BUILD_NUMBER"] = '1'
os.environ["CI"] = 'CI_BUILD'
3) Attempt to set DEFAULT_MAX_LINES and DEFAULT_MAX_CHARS on the truncate module (not recommended, since it is a private module):
from _pytest.assertion import truncate
truncate.DEFAULT_MAX_CHARS = 1000
truncate.DEFAULT_MAX_LINES = 1000
According to the code, the -vv option should work, so it's strange that it isn't for you:
Current default behaviour is to truncate assertion explanations at
~8 terminal lines, unless running in "-vv" mode or running on CI.
The pytest truncation file, which is what I'm basing my answers on: pytest/truncate.py
Hope something here helps you!
I have some getattr calls in my assertions, and nothing shows after the AssertionError. I added -lv in the Additional Arguments field to show the local variables.
I was running into something similar and created a function that returns a string with a nice diff of the two dicts. In pytest style test this looks like:
assert expected == actual, build_diff_string(expected, actual)
And in unittest style
self.assertEqual(expected, actual, build_diff_string(expected, actual))
The downside is that you have to modify all the tests that have this issue, but it's a Keep It Simple, Stupid solution.
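build_diff_string here is the poster's own helper; one hypothetical way to write it is with difflib over pretty-printed values:

```python
import difflib
from pprint import pformat

def build_diff_string(expected, actual):
    """Hypothetical helper: line-by-line diff of two pretty-printed values."""
    expected_lines = pformat(expected).splitlines()
    actual_lines = pformat(actual).splitlines()
    diff = difflib.unified_diff(
        expected_lines, actual_lines,
        fromfile="expected", tofile="actual", lineterm="",
    )
    return "\n".join(diff)
```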
In pycharm you can just put -vv in the run configuration Additional Arguments field and this should solve the issue.
Or at least, it worked on my machine...
I have a minor issue using py.test for my unit tests.
I use py.test to run my tests and output a junitxml report of the tests.
This xml report is imported in jenkins and generates nice statistics.
When I use a test class which derives from unittest.TestCase,
I skip expected failures using:
@unittest.skip("Bug 1234 : This does not work")
This message also shows up in jenkins when selecting this test.
When I don't use a unittest.TestCase class, e.g. to use py.test parametrize functionality,
I skip expected failures using:
@pytest.mark.xfail(reason="Bug 1234 : This does not work", run=False)
But then this reason is not actually displayed in jenkins, instead it will say:
Skip Message
expected test failure
How can I fix this?
I solved it using this line as the first line of the test:
pytest.skip("Bug 1234: This does not work")
I'd rather have used one of the pytest decorators, but this'll do.
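For reference, the workaround above looks like this in a full test (a minimal sketch; the test body and bug number are placeholders):

```python
import pytest

def test_create_documents():
    # Calling pytest.skip() inside the test body puts this reason into
    # the junitxml skip message, unlike xfail(..., run=False).
    pytest.skip("Bug 1234 : This does not work")
    assert False  # never reached
```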
I had a similar problem except I had a different Jenkins message and could not tell which test was skipped.
It turns out that if the only test in the module is a skipped test, then Jenkins would not show the test in the test result list (using either the decorator or jr-be's solution). You could see that there was a skipped test in the total results, but you could not tell which test, or which module, the skipped test was in.
To solve this (ok hack solve), I went back to using the decorator on my test and added a dummy test (so have 1 test that runs and 1 test that gets skipped):
@pytest.mark.skip('SONIC-3218')
def test_segments_create_delete(self, api):
    logging.info('TestCreateDeleteSegments.test_segments_create_delete')

def test_dummy(self, api):
    '''
    Dummy test to see if suite will display in jenkins if one
    test is run and 1 is skipped (instead of having only skipped tests)
    '''
    logging.info('TestCreateDeleteSegments.test_dummy')
For me that works since I would rather have 1 extra dummy test and be able to find my skipped tests.