Pytest Logging messages appear twice - python

My testing framework has the structure below:
Master_test_Class.py ---> holds generic test cases to be run for the smoke and regression test suites
Test_Smoke1.py and Test_Reg1.py ---> child classes that inherit from Master_test_Class.py
I have logging enabled at INFO level in pytest.ini:
[pytest]
log_cli = 1
log_cli_level = INFO
Below is my code in conftest.py:
def pytest_generate_tests(metafunc):
    .....
    logging.info("This is generated during the test collection !!!")
When I run either of the test files, the logs are printed twice: once in the format specified in pytest.ini, and again in red.
pytest -s Test_Reg1.py
I am at a loss as to why the logging output is printed twice.

It's probably because you have a logging handler that sends the logs to standard output. The solution would be either to run pytest without the -s argument (this assumes the captured logs have all the information you need) or to remove the logging handler that is using standard output.
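For example, if something at import time attached an extra StreamHandler, you could strip it off (a sketch, assuming the duplicate handler is a plain StreamHandler on the root logger):

import logging
import sys

# Remove any plain StreamHandler from the root logger so that
# pytest's own log_cli handler is the only thing printing records.
root = logging.getLogger()
for handler in list(root.handlers):
    if isinstance(handler, logging.StreamHandler) and handler.stream in (sys.stdout, sys.stderr):
        root.removeHandler(handler)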

If you want to see only the output of the logging module, use --log-cli-level=INFO as an argument to the pytest run. You are seeing the message twice because of the -s switch. To test whether it's the same log record or a different one, add a timestamp to the log message.
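One way to add the timestamp is the log_cli_format ini option (a sketch extending the pytest.ini from the question; %(asctime)s is the standard logging format field for the timestamp):

[pytest]
log_cli = 1
log_cli_level = INFO
log_cli_format = %(asctime)s %(levelname)s %(message)s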

Related

pytest exit on failure only for a specific file

I'm using the pytest framework to run tests that interface with a set of test instruments. I'd like to have a set of initial tests in a single file that will check the connection and configuration of these instruments. If any of these tests fail, I'd like to abort all future tests.
In the remaining tests and test files, I'd like pytest to continue if there are any test failures so that I get a complete report of all failed tests.
For example, the test run may look something like this:
pytest test_setup.py test_series1.py test_series2.py
In this example, I'd like pytest to exit if any tests in test_setup.py fail.
I would like to invoke the test session in a single call so that session based fixtures that I have only get called once. For example, I have a fixture that will connect to a power supply and configure it for the tests.
Is there a way to tell pytest to exit on any test failure only in a specific file? If I use the -x option, pytest will stop at the first failure anywhere and not continue with subsequent tests.
Ideally, I'd prefer something like a decorator that tells pytest to exit if there is a failure. However, I have not seen anything like this.
Is this possible or am I thinking about this the wrong way?
Update
Based on the answer from pL3b, my final solution looks like this:
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    if 'critical' in [mark.name for mark in item.own_markers]:
        result = outcome.get_result()
        if result.when == "call" and result.failed:
            print('FAILED')
            pytest.exit('Exiting pytest due to critical test failure', 1)
I needed to inspect the report result in order to check whether the test actually failed; otherwise this would exit on every call.
Additionally, I needed to register my custom marker. I chose to also put this in the conftest.py file, like so:
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "critical: mark test as critical"
    )
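Equivalently (an alternative way to register the marker, not what I ended up using), the marker can be declared in pytest.ini instead of conftest.py:

[pytest]
markers =
    critical: mark test as critical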
You may use the following hook in your conftest.py to solve your problem:
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    yield
    if 'critical' in [mark.name for mark in item.own_markers]:
        pytest.exit('Exiting pytest')
Then just add the @pytest.mark.critical decorator to the desired tests/classes in test_setup.py.
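A hypothetical test_setup.py using the marker might look like this (instruments_reachable is a made-up stand-in for whatever connection check you actually run):

# test_setup.py
import pytest

def instruments_reachable():
    # stand-in for a real instrument connection/configuration check
    return True

@pytest.mark.critical
def test_instruments_reachable():
    # if this fails, the hook in conftest.py aborts the whole session
    assert instruments_reachable()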
Learn more about pytest hooks here so you can define desired output and so on.

How can I see the current pytest configuration?

pytest gathers configuration settings from various files and command-line options, but there doesn't appear to be a command-line option to show pytest's current settings, and I can't work out how to easily interrogate pytest for this info. pytest --help (or pytest -h) shows what the available options are but doesn't show their current values.
I've tried running pytest with PYTEST_DEBUG set (e.g. $ env PYTEST_DEBUG=1 pytest --setup-only): This generates a huge quantity of debug info but it doesn't appear to include the configuration settings, or at least not in a digestible format.
I see that there is a Config object, but I can't work out how to interrogate it to write a small program that outputs its contents (I think that needs higher-level pytest-fu than I possess). I think this object may be the right place to look, assuming there isn't a command-line option to display the current settings that I've missed.
After looking through the pytest docs, there doesn't appear to be a direct way to enumerate all the config options automatically. Here are some ways to get at the config values, though, so hopefully these are helpful.
If you have a specific unit test and know the particular values, there is an example in the pytest docs that shows how to check whether those values are set.
Also, the config docs describe the config file search and precedence.
Configuration settings come from several places for pytest, including cmd line options, ini files, and env variables.
The arguments to pytestconfig are the parts of Config and are described here in the documentation.
import pytest
import os

def test_answer(pytestconfig):
    if pytestconfig.getoption("verbose") > 0:
        print("verbose")
    print(pytestconfig.inipath)
    print(pytestconfig.invocation_params.args)
    print(os.getenv('PYTEST_ADDOPTS', None))
    print(pytestconfig.getini('addopts'))
    assert 0  # to see what was printed
Now I run this in my directory with pytest test_sample.py (no command-line arguments). The contents of test_sample.py are given above, and the contents of the pytest.ini file are:
[pytest]
addopts = -vq
With PYTEST_ADDOPTS unset, I see:
---------------------------------------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------------------------------------
/Users/rlucas/Desktop/pytest.ini
('test_sample.py',)
None
['-vq']
================================================================================================== short test summary info ===================================================================================================
FAILED test_sample.py::test_answer - assert 0
Using a different invocation, pytest test_sample.py --verbose, you'll see:
---------------------------------------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------------------------------------
verbose
/Users/rlucas/Desktop/pytest.ini
('test_sample.py', '--verbose')
None
['-vq']
================================================================================================== short test summary info ===================================================================================================
Here I'm abbreviating the output somewhat to the relevant info (e.g. I'm not showing the test failure details). If you don't have direct access to the filesystem where the unit tests are running, you can always read the file found at pytestconfig.inipath and print its contents to stdout.
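For example, a throwaway test like this (a sketch; pytestconfig.inipath is a pathlib.Path, or None when no ini file was found) dumps the ini file into the captured output:

def test_show_ini(pytestconfig):
    # print the contents of the ini file pytest actually used
    if pytestconfig.inipath is not None:
        print(pytestconfig.inipath.read_text())
    assert 0  # fail on purpose so the captured stdout is displayed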
Inspired by @Lucas Roberts' answer, I've taken a closer look at pytest's pytestconfig fixture, which has various interesting attributes, particularly option (also known_args_namespace, with similar contents). So if I understand pytestconfig correctly, a rough-and-ready solution to print the currently-in-effect options would be:
# test_sample.py
import pytest

def test_answer(pytestconfig):
    for item_name in dir(pytestconfig.option):
        if not item_name.startswith('_'):
            print(f'{item_name}: {getattr(pytestconfig.option, item_name)}')
    assert 0  # to see what was printed
then run this as Lucas suggests with pytest (here with -vv, i.e. verbosity 2, for example):
$ pytest -vv test_sample.py
...
verbose: 2
version: False
xmlpath: None
==================== 1 failed in 0.10s ===================
$

Is there a way for pytest to check if a log entry was made at Error level or higher?

Python 3.8.0, pytest 5.3.2, logging 0.5.1.2.
My code has an input loop, and to prevent the program crashing entirely, I catch any exceptions that get thrown, log them as critical, reset the program state, and keep going. That means that a test that causes such an exception won't outright fail, so long as the output is still what is expected. This might happen if the error was a side effect of the test code but didn't affect the main tested logic. I would still like to know that the test is exposing an error-causing bug however.
Most of the Googling I have done shows results on how to display logs within pytest, which I am doing, but I can't find out if there is a way to expose the logs within the test, such that I can fail any test with a log at Error or Critical level.
Edit: This is a minimal example of a failing attempt:
test.py:
import subject
import logging
import pytest
@pytest.fixture(autouse=True)
def no_log_errors(caplog):
    yield  # run in teardown
    print(caplog.records)
    # caplog.set_level(logging.INFO)
    errors = [record for record in caplog.records if record.levelno >= logging.ERROR]
    assert not errors

def test_main():
    subject.main()
    # assert False
subject.py:
import logging
logger = logging.Logger('s')

def main():
    logger.critical("log critical")
Running python3 -m pytest test.py passes with no errors.
Uncommenting the assert statement fails the test without errors, prints [] to stdout, and prints log critical to stderr.
Edit 2:
I found why this fails. From the documentation on caplog:
The caplog.records attribute contains records from the current stage only, so inside the setup phase it contains only setup logs, same with the call and teardown phases
However, right underneath is what I should have found the first time:
To access logs from other stages, use the caplog.get_records(when) method. As an example, if you want to make sure that tests which use a certain fixture never log any warnings, you can inspect the records for the setup and call stages during teardown like so:
@pytest.fixture
def window(caplog):
    window = create_window()
    yield window
    for when in ("setup", "call"):
        messages = [
            x.message for x in caplog.get_records(when) if x.levelno == logging.WARNING
        ]
        if messages:
            pytest.fail(
                "warning messages encountered during testing: {}".format(messages)
            )
However this still doesn't make a difference, and print(caplog.get_records("call")) still returns []
You can build something like this using the caplog fixture.
Here's some sample code from the docs which makes assertions based on the levels:
def test_baz(caplog):
    func_under_test()
    for record in caplog.records:
        assert record.levelname != "CRITICAL"
    assert "wally" not in caplog.text
Since the records are the standard logging record types, you can use whatever you need there.
Here's one way you might do this ~more automatically, using an autouse fixture:
@pytest.fixture(autouse=True)
def no_logs_gte_error(caplog):
    yield
    errors = [record for record in caplog.get_records('call') if record.levelno >= logging.ERROR]
    assert not errors
(disclaimer: I'm a core dev on pytest)
You can use the unittest.mock module (even if you're using pytest) and monkey-patch whatever function / method you use for logging. Then in your test, you can have an assert that fails if, say, logging.error was called.
That'd be a short-term solution. But it might also be the case that your design could benefit from more separation, so that you can easily test your application without a zealous try ... except block catching / suppressing just about everything.
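A minimal sketch of the first suggestion, reusing the subject module from the question and assuming its logger is exposed as the module attribute subject.logger:

from unittest import mock

import subject

def test_main_logs_nothing_critical():
    # patch the critical() method on the module's logger;
    # the test fails if main() ever calls it
    with mock.patch.object(subject.logger, 'critical') as mock_critical:
        subject.main()
    mock_critical.assert_not_called()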

Py.test skip messages don't show in jenkins

I have a minor issue using py.test for my unit tests.
I use py.test to run my tests and output a junitxml report of the tests.
This xml report is imported in jenkins and generates nice statistics.
When I use a test class that derives from unittest.TestCase,
I skip expected failures using:
@unittest.skip("Bug 1234 : This does not work")
This message also shows up in Jenkins when selecting this test.
When I don't use a unittest.TestCase class, e.g. to use py.test's parametrize functionality,
I skip expected failures using:
@pytest.mark.xfail(reason="Bug 1234 : This does not work", run=False)
But then this reason is not actually displayed in Jenkins; instead it will say:
Skip Message
expected test failure
How can I fix this?
I solved it by using this line as the first line of the test:
pytest.skip("Bug 1234: This does not work")
I'd rather have used one of the pytest decorators, but this'll do.
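For reference, the in-test call looks like this (a trivial sketch):

import pytest

def test_known_bug():
    pytest.skip("Bug 1234 : This does not work")
    # nothing below this line runs; the skip reason shows up in the report
    assert False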
I had a similar problem, except I had a different Jenkins message and could not tell which test was skipped.
It turns out that if the only test in the module is a skipped test, then Jenkins would not show the test in the test result list (using either the decorator or jr-be's solution). You could see that there was a skipped test in the total results, but could not tell which test or which module the skipped test was in.
To solve this (OK, hack-solve this), I went back to using the decorator on my test and added a dummy test (so I have one test that runs and one test that gets skipped):
@pytest.mark.skip(reason='SONIC-3218')
def test_segments_create_delete(self, api):
    logging.info('TestCreateDeleteSegments.test_segments_create_delete')

def test_dummy(self, api):
    '''
    Dummy test to see if the suite will display in Jenkins if one
    test is run and one is skipped (instead of having only skipped tests)
    '''
    logging.info('TestCreateDeleteSegments.test_dummy')
For me that works since I would rather have 1 extra dummy test and be able to find my skipped tests.

How can I see normal print output created during pytest run?

Sometimes I want to just insert some print statements in my code, and see what gets printed out when I exercise it. My usual way to "exercise" it is with existing pytest tests. But when I run these, I don't seem able to see any standard output (at least from within PyCharm, my IDE).
Is there a simple way to see standard output during a pytest run?
The -s switch disables per-test capturing, so output is printed as it is produced; without it, captured output is only shown for failing tests.
-s is equivalent to --capture=no.
pytest captures the stdout from individual tests and displays it only under certain conditions, along with the summary of the tests it prints by default.
Extra summary info can be shown using the '-r' option:
pytest -rP
shows the captured output of passed tests.
pytest -rx
shows extra summary info for xfailed tests; the captured output of failed tests is shown by default.
The formatting of the output is prettier with -r than with -s.
When running the test, use the -s option. All print statements in exampletest.py will then be printed to the console when the test is run:
py.test exampletest.py -s
In an upvoted comment to the accepted answer, Joe asks:
Is there any way to print to the console AND capture the output so that it shows in the junit report?
In UNIX, this is commonly referred to as teeing. Ideally, teeing rather than capturing would be the py.test default. Non-ideally, neither py.test nor any existing third-party py.test plugin (...that I know of, anyway) supports teeing – despite Python trivially supporting teeing out-of-the-box.
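To illustrate that last claim, here is a minimal hand-rolled tee in pure Python, independent of py.test (a sketch; the Tee class is made up for this example):

import io
import sys

class Tee:
    """File-like object that duplicates writes to several streams."""
    def __init__(self, *streams):
        self.streams = streams
    def write(self, data):
        for stream in self.streams:
            stream.write(data)
    def flush(self):
        for stream in self.streams:
            stream.flush()

buffer = io.StringIO()
sys.stdout = Tee(sys.__stdout__, buffer)  # echo to the terminal AND capture
print("hello")
sys.stdout = sys.__stdout__               # restore the real stdout
assert buffer.getvalue() == "hello\n"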
Monkey-patching py.test to do anything unsupported is non-trivial. Why? Because:
Most py.test functionality is locked behind a private _pytest package not intended to be externally imported. Attempting to do so without knowing what you're doing typically results in the public pytest package raising obscure exceptions at runtime. Thanks a lot, py.test. Really robust architecture you got there.
Even when you do figure out how to monkey-patch the private _pytest API in a safe manner, you have to do so before running the public pytest package run by the external py.test command. You cannot do this in a plugin (e.g., a top-level conftest module in your test suite). By the time py.test lazily gets around to dynamically importing your plugin, any py.test class you wanted to monkey-patch has long since been instantiated – and you do not have access to that instance. This implies that, if you want your monkey-patch to be meaningfully applied, you can no longer safely run the external py.test command. Instead, you have to wrap the running of that command with a custom setuptools test command that (in order):
Monkey-patches the private _pytest API.
Calls the public pytest.main() function to run the py.test command.
This answer monkey-patches py.test's -s and --capture=no options to capture stderr but not stdout. By default, these options capture neither stderr nor stdout. This isn't quite teeing, of course. But every great journey begins with a tedious prequel everyone forgets in five years.
Why do this? I shall now tell you. My py.test-driven test suite contains slow functional tests. Displaying the stdout of these tests is helpful and reassuring, preventing leycec from reaching for killall -9 py.test when yet another long-running functional test fails to do anything for weeks on end. Displaying the stderr of these tests, however, prevents py.test from reporting exception tracebacks on test failures. Which is completely unhelpful. Hence, we coerce py.test to capture stderr but not stdout.
Before we get to it, this answer assumes you already have a custom setuptools test command invoking py.test. If you don't, see the Manual Integration subsection of py.test's well-written Good Practices page.
Do not install pytest-runner, a third-party setuptools plugin providing a custom setuptools test command also invoking py.test. If pytest-runner is already installed, you'll probably need to uninstall that pip3 package and then adopt the manual approach linked to above.
Assuming you followed the instructions in Manual Integration highlighted above, your codebase should now contain a PyTest.run_tests() method. Modify this method to resemble:
class PyTest(TestCommand):
    .
    .
    .
    def run_tests(self):
        # Import the public "pytest" package *BEFORE* the private "_pytest"
        # package. While importation order is typically ignorable, imports can
        # technically have side effects. Tragicomically, that is the case here.
        # Importing the public "pytest" package establishes runtime
        # configuration required by submodules of the private "_pytest" package.
        # The former *MUST* always be imported before the latter. Failing to do
        # so raises obtuse exceptions at runtime... which is bad.
        import pytest
        from _pytest.capture import CaptureManager, FDCapture, MultiCapture

        # If the private method to be monkey-patched no longer exists, py.test
        # is either broken or unsupported. In either case, raise an exception.
        if not hasattr(CaptureManager, '_getcapture'):
            from distutils.errors import DistutilsClassError
            raise DistutilsClassError(
                'Class "pytest.capture.CaptureManager" method _getcapture() '
                'not found. The current version of py.test is either '
                'broken (unlikely) or unsupported (likely).'
            )

        # Old method to be monkey-patched.
        _getcapture_old = CaptureManager._getcapture

        # New method applying this monkey-patch. Note the use of:
        #
        # * "out=False", *NOT* capturing stdout.
        # * "err=True", capturing stderr.
        def _getcapture_new(self, method):
            if method == "no":
                return MultiCapture(
                    out=False, err=True, in_=False, Capture=FDCapture)
            else:
                return _getcapture_old(self, method)

        # Replace the old method with the new.
        CaptureManager._getcapture = _getcapture_new

        # Run py.test with all passed arguments.
        errno = pytest.main(self.pytest_args)
        sys.exit(errno)
To enable this monkey-patch, run py.test as follows:
python setup.py test -a "-s"
Stderr but not stdout will now be captured. Nifty!
Extending the above monkey-patch to tee stdout and stderr is left as an exercise to the reader with a barrel-full of free time.
According to the pytest documentation, version 3 of pytest can temporarily disable capture within a test:
def test_disabling_capturing(capsys):
    print('this output is captured')
    with capsys.disabled():
        print('output not captured, going directly to sys.stdout')
    print('this output is also captured')
pytest --capture=tee-sys was recently added (v5.4.0). You can capture as well as see the output on stdout/err.
Try pytest -s -v test_login.py for more info in the console.
-v is short for --verbose
-s means 'disable all capturing'
You can also enable live logging by setting the following in pytest.ini or tox.ini in your project root:
[pytest]
log_cli = True
Or specify it directly on the CLI:
pytest -o log_cli=True
pytest test_name.py -v -s
Simple!
I would suggest using the -h option; it documents quite a few interesting options. But for this particular case, -s (shorthand for --capture=no) is enough:
pytest <test_file.py> -s
If you are using logging, you need to turn on logging output explicitly in addition to -s for generic stdout. Based on Logging within pytest tests, I am using:
pytest --log-cli-level=DEBUG -s my_directory/
If you are using the PyCharm IDE, you can run an individual test or all tests using the Run toolbar. The Run tool window displays the output generated by your application, and you can see all the print statements there as part of the test output.
If anyone wants to run tests from code with output:
import pytest

if __name__ == '__main__':
    pytest.main(['--capture=no'])
The capsys, capsysbinary, capfd, and capfdbinary fixtures allow access to stdout/stderr output created during test execution. Here is an example test function that performs some output-related checks:
import sys

def test_print_something_even_if_the_test_pass(capsys):
    text_to_be_printed = "Print me when the test pass."
    print(text_to_be_printed)
    p_t = capsys.readouterr()
    sys.stdout.write(p_t.out)
    # the two lines above will print the text even if the test passes
Here is the result:
test_print_something_even_if_the_test_pass PASSED [100%]Print me when the test pass.
