py.test logging messages and test results/assertions into a single file - python

I am starting to work with py.test at the moment for a new project. We are provisioning Linux servers and I need to write a script to check their setup and configuration. I thought py.test would be a good way to implement these tests, and so far it has been working quite well.
The problem I face right now is that I need a log file at the end of these tests showing some log messages for each test and the result of the test. For the log messages I use the logging module:
logging.basicConfig(filename='config_check.log', level=logging.INFO)
pytest.main()
logging.info('all done')
As an example test I have this:
def test_taintedKernel():
    logging.info('checking for tainted kernel')
    output = runcmd('cat /proc/sys/kernel/tainted')
    assert output == '0', 'tainted kernel found'
So in my logfile I would like output like this:
INFO:root:checking for tainted kernel
ERROR:root:tainted kernel found
INFO:root:next test
INFO:root:successful
INFO:root:all done
But I cannot get the test results into the logfile; instead I get the standard report on stdout after the tests:
======================================= test session starts =======================================
platform linux2 -- Python 2.6.8 -- py-1.4.22 -- pytest-2.6.0
collected 14 items
test_basicLinux.py .............F
============================================ FAILURES =============================================
_______________________________________ test_taintedKernel ________________________________________
    def test_taintedKernel():
        logging.info('checking for tainted kernel')
        output = runcmd('cat /proc/sys/kernel/tainted')
>       assert output == '0', 'tainted kernel found'
E       AssertionError: tainted kernel found

test_basicLinux.py:107: AssertionError
=============================== 1 failed, 13 passed in 6.07 seconds ===============================
This may be quite confusing for the users of my script. I tried to get into logging and pytest_capturelog, since they are mentioned here quite often, but I am surely doing something wrong because I just don't get it. Maybe it's just a lack of understanding of how this really works. I hope you can give me some hints on this. Please let me know if anything is missing here.
Thanks in advance for your help,
Stephan

pytest's job is to capture output and present it to the operator. So, rather than trying to get pytest to do the logging the way you want it, you can build the logging into your tests.
Python's assert statement just takes a truth value and a message. So, instead of using a bare assert in your tests, you can write a small function that logs the message if the value is falsy (the same condition that makes the assert fail) and then performs the assert. That way you get the logging you want, plus the assert-driven behavior that produces the console output.
Here's a small test file using such a function:
# test_foo.py
import logging
def logAssert(test, msg):
    if not test:
        logging.error(msg)
    assert test, msg

def test_foo():
    logging.info("testing foo")
    logAssert('foo' == 'foo', "foo is not foo")

def test_foobar():
    logging.info("testing foobar")
    logAssert('foobar' == 'foo', "foobar is not foo")
Here's the test runner, very similar to yours:
# runtests.py
import logging
import pytest
logging.basicConfig(filename='config_check.log', level=logging.INFO)
logging.info('start')
pytest.main()
logging.info('done')
Here's the output:
# python runtests.py
==== test session starts ========================
platform linux2 -- Python 2.6.6 -- py-1.4.22 -- pytest-2.6.0
collected 2 items
test_foo.py .F
========== FAILURES ============================
________ test_foobar __________________________
    def test_foobar():
        logging.info("testing foobar")
>       logAssert( 'foobar' == 'foo', "foobar is not foo")

test_foo.py:14:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

test = False, msg = 'foobar is not foo'

    def logAssert(test, msg):
        if not test:
            logging.error(msg)
>           assert test, msg
E           AssertionError: foobar is not foo

test_foo.py:6: AssertionError
==== 1 failed, 1 passed in 0.02 seconds =======
And here's the log that gets written:
# cat config_check.log
INFO:root:start
INFO:root:testing foo
INFO:root:testing foobar
ERROR:root:foobar is not foo
INFO:root:done

Since version 3.3, pytest supports live logging to the terminal and to a file. Example test module:
import logging

def test_taintedKernel():
    logging.info('checking for tainted kernel')
    # read the flag directly; os.system() would only return cat's exit status,
    # not the file contents
    with open('/proc/sys/kernel/tainted') as f:
        output = f.read().strip()
    assert output == '0', 'tainted kernel found'
Logging to a file can be configured in pytest.ini:
[pytest]
log_file = my.log
log_file_level = DEBUG
log_file_format = %(asctime)s [%(levelname)8s] %(message)s (%(filename)s:%(lineno)s)
log_file_date_format = %Y-%m-%d %H:%M:%S
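If you also want the records streamed live to the terminal while the suite runs, the companion log_cli options cover that (a minimal sketch; option names as in the pytest live-logs docs):

[pytest]
log_cli = true
log_cli_level = INFO
log_cli_format = %(asctime)s [%(levelname)8s] %(message)s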
Running the test yields the usual report:
$ pytest
======================================================= test session starts ========================================================
...
collected 1 item
test_spam.py . [100%]
===================================================== 1 passed in 0.01 seconds =====================================================
Now check the written log file:
$ cat my.log
2019-07-12 23:51:41 [ INFO] checking for tainted kernel (test_spam.py:6)
For more examples of emitting live logs to both terminal and log file, check out my answer to Logging within py.test tests.
Reference: Live Logs section in pytest docs.


Unable to run unittest test suite (using TestCase subclasses) in PyCharm, whereas it runs in Python console

I am following this answer to generate multiple test cases programmatically using the unittest approach.
Here's the code:
import unittest
import my_code

# Test cases (list of input/output pairs, not explicitly shown here)
known_values = [
    {'input': {}, 'output': {}},
    {'input': {}, 'output': {}}
]

# Subclass TestCase
class KnownGood(unittest.TestCase):
    def __init__(self, input_params, output):
        super(KnownGood, self).__init__()
        self.input_params = input_params
        self.output = output

    def runTest(self):
        self.assertEqual(
            my_code.my_func(self.input_params['a'], self.input_params['b']),
            self.output
        )

# Test suite
def suite():
    global known_values
    suite = unittest.TestSuite()
    suite.addTests(KnownGood(input_params=k['input'], output=k['output']) for k in known_values)
    return suite

if __name__ == '__main__':
    unittest.TextTestRunner().run(suite())
If I open a Python console in PyCharm and run the above code chunk (running unittest.TextTestRunner() without the if condition), the tests run successfully.
..
----------------------------------------------------------------------
Ran 2 tests in 0.002s
OK
<unittest.runner.TextTestResult run=2 errors=0 failures=0>
If I run the test by clicking on the green run button for the if __name__ block in PyCharm, I get the following error:
TypeError: __init__() missing 1 required positional argument: 'output'
Process finished with exit code 1
Empty suite
Python version: 3.7
Project structure: (- denotes folder and . denotes file)
- project_folder
    - tests
        . test_my_code.py
    . my_code.py
The problem is that, by default, PyCharm runs unittest or pytest (whichever you have configured as the test runner) on the module if it identifies it as containing tests, ignoring the part in if __name__ == '__main__'.
That basically means that it executes unittest.main() instead of your customized version of running the tests.
The only solution I know to get the correct run configuration is to manually add it:
select Edit configurations... in the configuration list
add a new config using +
select Python as type
fill in the Script path by your test path (or use the browse button)
Maybe someone knows a more convenient way to force PyCharm to use "Run" instead of "Run Test"...
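Alternatively, a hedged workaround that avoids PyCharm configuration entirely: unittest's standard load_tests protocol lets the loader pick up the programmatic suite instead of trying to instantiate KnownGood itself, so a unittest-based runner should work too. A minimal sketch, appended to test_my_code.py:

# unittest calls this hook while loading the module and uses the
# returned suite instead of collecting KnownGood directly
def load_tests(loader, standard_tests, pattern):
    return suite()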

pytest.config.getoption alternative is failing

My setup is such that pytest test.py executes nothing, while pytest --first-test test.py executes the target function test_skip.
In order to determine whether a certain test should be run or not, this is what I have been using:
skip_first = pytest.mark.skipif(
    not (
        pytest.config.getoption("--first-test")
        or os.environ.get('FULL_AUTH_TEST')
    ), reason="Skipping live"
)

@skip_first
def test_skip():
    assert_something
Now that pytest.config.getoption is being deprecated, I am trying to update my code. This is what I have written:
@pytest.fixture
def skip_first(request):
    def _skip_first():
        return pytest.mark.skipif(
            not (
                request.config.getoption("--first-test")
                or os.environ.get('FULL_AUTH_TEST')),
            reason="Skipping"
        )
    return _skip_first()

# And, to call:
def test_skip(skip_first):
    assert 1 == 2
However, whether I run pytest test.py or pytest --first-test test.py, test_skip always executes. The skip_first fixture itself seems to be working fine: inserting a print statement shows skip_first = MarkDecorator(mark=Mark(name='skipif', args=(False,), kwargs={'reason': 'Skipping first'})) when --first-test is given, and args=(True,) when it is not. (The same was observed with the first setup.)
Am I missing something? I even tried to return the function _skip_first instead of its output in skip_first, but that made no difference.
When using a test class, the manual indicates we need to use @pytest.mark.usefixtures("fixturename"), but that proved to be of no use either (with classes).
Ideas? This is my system: platform linux -- Python 3.6.7, pytest-4.0.2, py-1.7.0, pluggy-0.8.0
In order to cause a SKIP from a fixture, you must raise pytest.skip. Here's an example using your code above:
import os
import pytest


@pytest.fixture
def skip_first(request):
    if (
        request.config.getoption("--first-test")
        or os.environ.get('FULL_AUTH_TEST')
    ):
        raise pytest.skip('Skipping!')


# And, to call:
def test_skip(skip_first):
    assert 1 == 2
If you want, you can almost replace your original code by doing:
@pytest.fixture
def skip_first_fixture(request): ...

skip_first = pytest.mark.usefixtures('skip_first_fixture')

@skip_first
def test_skip(): ...
Here's the execution showing this working:
$ pytest t.py -q
F [100%]
=================================== FAILURES ===================================
__________________________________ test_skip ___________________________________
skip_first = None

    def test_skip(skip_first):
>       assert 1 == 2
E       assert 1 == 2
E         -1
E         +2

t.py:16: AssertionError
1 failed in 0.03 seconds
$ pytest t.py --first-test -q
s [100%]
1 skipped in 0.01 seconds

Show exhaustive information for passed tests in pytest

When a test fails, there's output indicating the context of the test, e.g.:
=================================== FAILURES ===================================
______________________________ Test.test_sum_even ______________________________
numbers = [2, 4, 6]

    @staticmethod
    def test_sum_even(numbers):
        assert sum(numbers) % 2 == 0
>       assert False
E       assert False

test_preprocessing.py:52: AssertionError
What if I want the same thing for passed tests as well, so that I can quickly check that the parameters passed to the tests are correct?
I tried command line options like --full-trace, -l, --tb=long, and -rpP, but none of them works.
Any ideas?
Executing pytest with the --verbose flag will cause it to list the fully qualified name of every test as it executes, e.g.:
tests/dsl/test_ancestor.py::TestAncestor::test_finds_ancestor_nodes
tests/dsl/test_and.py::TestAnd::test_aliased_as_ampersand
tests/dsl/test_and.py::TestAnd::test_finds_all_nodes_in_both_expressions
tests/dsl/test_anywhere.py::TestAnywhere::test_finds_all_nodes_when_no_arguments_given_regardless_of_the_context
tests/dsl/test_anywhere.py::TestAnywhere::test_finds_multiple_kinds_of_nodes_regardless_of_the_context
tests/dsl/test_anywhere.py::TestAnywhere::test_finds_nodes_regardless_of_the_context
tests/dsl/test_axis.py::TestAxis::test_finds_nodes_given_the_xpath_axis
tests/dsl/test_axis.py::TestAxis::test_finds_nodes_given_the_xpath_axis_without_a_specific_tag
If you are just asking for standard output from passed test cases, then you need to pass the -s option to pytest to prevent capturing of standard output. More info about standard output suppression is available in the pytest docs.
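For example (a minimal invocation sketch, reusing the test module name from the traceback above):

$ pytest -s -v test_preprocessing.py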
pytest doesn't have this functionality; it shows you the error from the exception only when an assertion fails.
A workaround is to explicitly log the information you want to see from the passing tests using Python's logging module, and then use the caplog fixture from pytest.
For example, one version of func.py could be:
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger('my-even-logger')

def is_even(numbers):
    res = sum(numbers) % 2 == 0
    if res is True:
        log.warning('Sum is Even')
    else:
        log.warning('Sum is Odd')
    # ... do stuff ...
and then a test_func.py:
import logging
import pytest
from func import is_even

@pytest.fixture
def my_list():
    numbers = [2, 4, 6]
    return numbers

def test_is_even(caplog, my_list):
    with caplog.at_level(logging.DEBUG, logger='my-even-logger'):
        is_even(my_list)
    assert 'Even' in caplog.text
If you run pytest -s test_func.py, then, since the test passes, the logger shows you the following message:
test_func.py WARNING:my-even-logger:Sum is Even

How to save pytest's results/logs to a file?

I am having trouble trying to save -all- of the results shown from pytest to a file (txt, log, doesn't matter). In the test example below, I would like to capture what is shown in console into a text/log file of some sort:
import pytest
import os

def test_func1():
    assert True


def test_func2():
    assert 0 == 1


if __name__ == '__main__':
    pytest.main(args=['-sv', os.path.abspath(__file__)])
Console output I'd like to save to a text file:
test-mbp:hi_world ua$ python test_out.py
================================================= test session starts =================================================
platform darwin -- Python 2.7.6 -- py-1.4.28 -- pytest-2.7.1 -- /usr/bin/python
rootdir: /Users/tester/PycharmProjects/hi_world, inifile:
plugins: capturelog
collected 2 items
test_out.py::test_func1 PASSED
test_out.py::test_func2 FAILED
====================================================== FAILURES =======================================================
_____________________________________________________ test_func2 ______________________________________________________
    def test_func2():
>       assert 0 == 1
E       assert 0 == 1

test_out.py:9: AssertionError
========================================= 1 failed, 1 passed in 0.01 seconds ==========================================
test-mbp:hi_world ua$
It appears that all of your test output goes to stdout, so you simply need to “redirect” your Python invocation's output to a file:
python test_out.py >myoutput.log
You can also “tee” the output to multiple places. E.g., you might want to log to the file yet also see the output on your console. The above example then becomes:
python test_out.py | tee myoutput.log
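If a structured result file would do instead of the raw console text, pytest's built-in JUnit XML report is another option (a standard flag; results.xml is an arbitrary file name):

pytest --junitxml=results.xml test_out.py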
I derived this from the pastebin plugin, as suggested by Bruno Oliveira:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Pytest plugin that saves failure or test session information to a file passed
as a command line argument to pytest.
It puts in a file exactly what pytest writes to stdout.

To use it:

Put this file in the root of tests/, edit your conftest.py and insert at the
top of the file:

    pytest_plugins = 'pytest_session_to_file'

Then you can launch your tests with the new option --session_to_file= like this:

    py.test --session_to_file=FILENAME

Or:

    py.test -p pytest_session_to_file --session_to_file=FILENAME

Inspired by _pytest.pastebin
Ref: https://github.com/pytest-dev/pytest/blob/master/_pytest/pastebin.py

Version : 0.1
Date : 30 sept. 2015 11:25
Copyright (C) 2015 Richard Vézina <ml.richard.vezinar # gmail.com>
Licence : Public Domain
"""
import pytest
import sys
import tempfile


def pytest_addoption(parser):
    group = parser.getgroup("terminal reporting")
    group._addoption('--session_to_file', action='store', metavar='path',
                     default='pytest_session.txt',
                     help="Save to file the pytest session information")


@pytest.hookimpl(trylast=True)
def pytest_configure(config):
    tr = config.pluginmanager.getplugin('terminalreporter')
    # if no terminal reporter plugin is present, nothing we can do here;
    # this can happen when this function executes in a slave node
    # when using pytest-xdist, for example
    if tr is not None:
        config._pytestsessionfile = tempfile.TemporaryFile('w+')
        oldwrite = tr._tw.write

        def tee_write(s, **kwargs):
            oldwrite(s, **kwargs)
            config._pytestsessionfile.write(str(s))

        tr._tw.write = tee_write


def pytest_unconfigure(config):
    if hasattr(config, '_pytestsessionfile'):
        # get terminal contents and delete file
        config._pytestsessionfile.seek(0)
        sessionlog = config._pytestsessionfile.read()
        config._pytestsessionfile.close()
        del config._pytestsessionfile
        # undo our patching in the terminal reporter
        tr = config.pluginmanager.getplugin('terminalreporter')
        del tr._tw.__dict__['write']
        # write summary
        create_new_file(config=config, contents=sessionlog)


def create_new_file(config, contents):
    """
    Creates a new file with the pytest session contents.
    :contents: session contents
    """
    path = config.option.session_to_file
    with open(path, 'w') as f:
        f.writelines(contents)


def pytest_terminal_summary(terminalreporter):
    import _pytest.config
    tr = terminalreporter
    if 'failed' in tr.stats:
        for rep in terminalreporter.stats.get('failed'):
            try:
                msg = rep.longrepr.reprtraceback.reprentries[-1].reprfileloc
            except AttributeError:
                msg = tr._getfailureheadline(rep)
            tw = _pytest.config.create_terminal_writer(terminalreporter.config,
                                                       stringio=True)
            rep.toterminal(tw)
            s = tw.stringio.getvalue()
            assert len(s)
            create_new_file(config=terminalreporter.config, contents=s)
The pastebin internal plugin does exactly that, but sends the output directly to bpaste.net. You can look at the plugin implementation to understand how to reuse it for your needs.
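For reference, the built-in plugin is driven by a command line flag, so you can try it without writing any code (flags as documented for the pastebin plugin):

pytest --pastebin=failed   # paste only failure information
pytest --pastebin=all      # paste the full session log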
Here is a fixture that lets you do this. I used pytest's Cache feature to build a fixture that can be shared across multiple test files, including distributed tests (xdist), in order to collect and print test results.
conftest.py:
from _pytest.cacheprovider import Cache
from collections import defaultdict
import pytest


@pytest.hookimpl(tryfirst=True)
def pytest_configure(config):
    config.cache = Cache(config)
    config.cache.set('record_s', defaultdict(list))


@pytest.fixture(autouse=True)
def record(request):
    cache = request.config.cache
    record_s = cache.get('record_s', {})
    testname = request.node.name
    # Tried to avoid the initialization, but it throws errors.
    record_s[testname] = []
    yield record_s[testname]
    cache.set('record_s', record_s)


@pytest.hookimpl(trylast=True)
def pytest_unconfigure(config):
    print("====================================================================\n")
    print("\t\tTerminal Test Report Summary: \n")
    print("====================================================================\n")
    r_cache = config.cache.get('record_s', {})
    print(str(r_cache))
Use:
def test_foo(record):
    record.append(('PASS', "reason", {"some": "other_stuff"}))
Output:
====================================================================
Terminal Test Report Summary:
====================================================================
{u'test_foo': [[u'PASS',u'reason', { u'some': u'other_stuff' } ]]}

Writing a pytest function for checking the output on console (stdout)

This link gives a description of how to use pytest to capture console output.
I tried it with the following simple code, but I get an error:
import sys
import pytest

def f(name):
    print "hello " + name

def test_add(capsys):
    f("Tom")
    out, err = capsys.readouterr()
    assert out == "hello Tom"

test_add(sys.stdout)
Output:
python test_pytest.py
hello Tom
Traceback (most recent call last):
File "test_pytest.py", line 12, in <module>
test_add(sys.stdout)
File "test_pytest.py", line 8, in test_add
out,err=capsys.readouterr()
AttributeError: 'file' object has no attribute 'readouterr'
What is wrong and what fix is needed? Thank you.
EDIT:
As per the comment, I changed to capfd, but I still get the same error:
import sys
import pytest

def f(name):
    print "hello " + name

def test_add(capfd):
    f("Tom")
    out, err = capfd.readouterr()
    assert out == "hello Tom"

test_add(sys.stdout)
Use the capfd fixture.
Example:
def test_foo(capfd):
    foo()  # writes "Hello World!" to stdout
    out, err = capfd.readouterr()
    assert out == "Hello World!"
See: http://pytest.org/en/latest/fixture.html for more details
And see: py.test --fixtures for a list of builtin fixtures.
Your example has a few problems. Here is a corrected version:
def f(name):
    print "hello {}".format(name)

def test_f(capfd):
    f("Tom")
    out, err = capfd.readouterr()
    assert out == "hello Tom\n"
Note:
Do not use sys.stdout -- Use the capfd fixture as-is as provided by pytest.
Run the test with: py.test foo.py
Test Run Output:
$ py.test foo.py
====================================================================== test session starts ======================================================================
platform linux2 -- Python 2.7.5 -- pytest-2.4.2
plugins: flakes, cache, pep8, cov
collected 1 items
foo.py .
=================================================================== 1 passed in 0.01 seconds ====================================================================
Also Note:
You do not need to run your Test Function(s) in your test modules. py.test (The CLI tool and Test Runner) does this for you.
py.test does mainly three things:
Collect your tests
Run your tests
Display statistics and possibly errors
By default, py.test looks for test_*.py test modules and test_* test functions inside them (this is configurable).
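For instance, those discovery patterns can be overridden in an ini file (a sketch using the standard python_files/python_functions options; the check_* naming scheme is just an example):

[pytest]
python_files = check_*.py
python_functions = check_*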
The problem is the explicit call of your test function at the very end of your first code snippet:
test_add(sys.stdout)
You should not do this; it is pytest's job to call your test functions.
When it does, it will recognize the name capsys (or capfd, for that matter)
and automatically provide a suitable pytest-internal object for you as a call argument.
(The example given in the pytest documentation is quite complete as it is.)
That object will provide the required readouterr() function.
sys.stdout does not have that function, which is why your program fails.
