How to set nosetests to only log errors?

I have a hundred or so unit tests I'm running with nose. When I change something in my models obviously I get fails, with some errors mixed in. Is there an easy way to tell nose to only log the errors? Then I don't have to go through pages of fails to look for one error log.

nose provides tools for testing exceptions (like unittest does). Try this example (and read about the other tools at Nose Testing Tools):
from nose.tools import raises

l = []
d = dict()

@raises(Exception)
def test_Exception1():
    '''this test should pass'''
    l.pop()

@raises(KeyError)
def test_Exception2():
    '''this test should pass'''
    d[1]

An alternative is to redirect stderr to stdout and filter with grep (adjust the number of trailing context lines, 15 in this example, to your liking):
nosetests tests.py 2>&1 | grep "ERROR" -A 15
Another alternative is to use --pdb-errors to stop on every error and open the debugger.
It's not what you asked, but it's what I ended up using.
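For completeness, nose.tools also re-exports unittest's assertions under PEP 8 names, so the same checks can be written without the decorator. A minimal sketch:
from nose.tools import assert_raises

def test_Exception3():
    '''pop from an empty list raises IndexError'''
    assert_raises(IndexError, [].pop)

def test_Exception4():
    '''a missing key raises KeyError'''
    assert_raises(KeyError, lambda: dict()[1])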

To print or not to print in pytest

When talking about pytest we know two things:
When a test passes, no output is given in principle.
Sometimes assertion failures can have very cryptic messages.
I took a course that solved this by using print to clarify the desired outputs and calling pytest as pytest -v -s. I think it is a great solution.
Another developer in my company thinks that test code should be as free of "side effects" as possible (and considers prints a side effect). He suggests outputting to a file instead, which I think is not good practice. (That is an undesirable side effect too.)
So I would like to hear about this from other developers.
How do you solve the two points given at the beginning, and do you use prints in your tests?
As someone already pointed out, you can provide your own assert message:
def test_something():
    i = 2
    assert i == 1, "i should be equal to one"
There should really be no difference between using assert messages and prints, except that an assert message appears directly in the pytest failure report, while print output shows up in the captured-stdout section. In the following case, 0-9 would be printed in the pytest report:
def test_something():
    i = 2
    for i in range(10):
        print(i)
    assert i == 1
Logging everything to a file would definitely make working with pytest harder, and would be a pain to debug if your tests fail in CI.
If you need descriptive messages I would prefer using assert messages and, maybe, prints for debug information.
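For what it's worth, pytest's assertion rewriting already prints the compared values on failure, which often removes the need for a custom message. A minimal sketch:
def test_dicts_match():
    expected = {'a': 1, 'b': 2}
    actual = {'a': 1, 'b': 3}
    # On failure, pytest's report shows a diff of the two dicts automatically.
    assert actual == expected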
Using print() in your tests is not a good solution when you need to see the data in the CLI or on a pipeline.
For assertions, you can provide custom failure messages, or assert that an exception is raised.
Here is a basic tutorial for this:
https://docs.pytest.org/en/7.1.x/how-to/assert.html
For general test steps, the best way to surface information is logging, at the appropriate level:
import logging

logger = logging.getLogger(__name__)

logger.info('what info you want to share')
logger.error('what info you want to share')
logger.debug('what info you want to share')
For more info you can check this:
https://docs.python.org/3/howto/logging.html
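To make those records visible, or assert on them, within pytest itself, the built-in caplog fixture can be used. A minimal sketch:
import logging

logger = logging.getLogger(__name__)

def test_logging_is_captured(caplog):
    # caplog records log calls made during the test.
    with caplog.at_level(logging.INFO):
        logger.info('what info you want to share')
    assert 'what info you want to share' in caplog.text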

Pytest cov does not generate any report

I'm trying to run py.test with coverage for my program, but I keep getting the warning: Coverage.py warning: No data was collected.
even though there are still untested functions in the code (in my example, the function diff). Below is the example code on which I tested the command py.test --cov=testcov.py. I'm using Python 2.7.9.
def suma(x, y):
    z = x + y
    return z

def diff(x, y):
    return x - y

if __name__ == "__main__":
    a = suma(2, 3)
    b = diff(7, 5)
    print a
    print b

## ------------------------TESTS-----------------------------

import pytest

def testSuma():
    assert suma(2, 3) == 5
Can someone explain to me what I am doing wrong?
You haven't said what all your files are named, so I'm not sure of the precise answer. But the argument to --cov should be a module name, not a file name. So instead of py.test --cov=testcov.py, you want py.test --cov=testcov.
What worked well for me is:
py.test mytests/test_mytest.py --cov='.'
Specifying the path, '.' in this case, removes unwanted files from the coverage report.
py.test looks for functions that start with test_. You should rename your test functions accordingly. To apply coverage you execute py.test --cov. If you want a nice HTML report that also shows you which lines are not covered you can use py.test --cov --cov-report html.
By default py.test looks for files matching test_*.py. You can customize it with pytest.ini
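For example, a minimal pytest.ini overriding the discovery patterns might look like this (the patterns are illustrative):
[pytest]
python_files = check_*.py
python_classes = Check*
python_functions = check_*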
Btw, according to the Python style guide (PEP 8) it should be test_suma, but that has no impact on py.test.

How to list available tests with python?

How to just list all discovered tests?
I found this command:
python3.4 -m unittest discover -s .
But it's not exactly what I want, because the above command executes the tests. Let's say we have a project with a lot of tests, where execution time is a few minutes. This forces me to wait until the tests are finished.
What I want is something like this (the above command's output)
test_choice (test.TestSequenceFunctions) ... ok
test_sample (test.TestSequenceFunctions) ... ok
test_shuffle (test.TestSequenceFunctions) ... ok
or even better, something more like this (after editing the above):
test.TestSequenceFunctions.test_choice
test.TestSequenceFunctions.test_sample
test.TestSequenceFunctions.test_shuffle
but without execution, only printing the test "paths" for copy&paste purposes.
The command-line discover command is implemented using unittest.TestLoader. Here's a somewhat elegant solution:
import unittest

def print_suite(suite):
    if hasattr(suite, '__iter__'):
        for x in suite:
            print_suite(x)
    else:
        print(suite)

print_suite(unittest.defaultTestLoader.discover('.'))
Running example:
In [5]: print_suite(unittest.defaultTestLoader.discover('.'))
test_accounts (tests.TestAccounts)
test_counters (tests.TestAccounts)
# More of this ...
test_full (tests.TestImages)
This works because TestLoader.discover returns TestSuite objects, which implement the __iter__ method and are therefore iterable.
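If you want the dotted copy&paste form from the question, a small variant of the same recursion can print each test's id() instead. A sketch, using the same discover call:
import unittest

def print_test_ids(suite):
    if hasattr(suite, '__iter__'):
        for x in suite:
            print_test_ids(x)
    else:
        # TestCase.id() returns e.g. "test.TestSequenceFunctions.test_choice"
        print(suite.id())

print_test_ids(unittest.defaultTestLoader.discover('.'))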
You could do something like:
from your_tests import TestSequenceFunctions

print('\n'.join(name for name in dir(TestSequenceFunctions) if name.startswith('test_')))
Note that dir() returns attribute names as strings, so they can be filtered with startswith directly.
I'm not sure if there is an exposed method for this via unittest.main.
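There is, however, a loader method that lists the test method names of a single TestCase class. A minimal sketch (your_tests is hypothetical, as above):
import unittest
from your_tests import TestSequenceFunctions

loader = unittest.TestLoader()
for name in loader.getTestCaseNames(TestSequenceFunctions):
    print(name)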

How can I see normal print output created during pytest run?

Sometimes I want to just insert some print statements in my code, and see what gets printed out when I exercise it. My usual way to "exercise" it is with existing pytest tests. But when I run these, I don't seem able to see any standard output (at least from within PyCharm, my IDE).
Is there a simple way to see standard output during a pytest run?
The -s switch disables all output capturing (by default, pytest shows captured output only for failing tests).
-s is equivalent to --capture=no.
pytest captures the stdout from individual tests and displays it only under certain conditions, along with the summary of the tests it prints by default.
Extra summary info can be shown using the '-r' option:
pytest -rP
shows the captured output of passed tests.
The captured output of failed tests is shown by default in the failures section.
The formatting of the output is prettier with -r than with -s.
When running the test, use the -s option. All print statements in exampletest.py will then get printed on the console when the test is run:
py.test exampletest.py -s
In an upvoted comment to the accepted answer, Joe asks:
Is there any way to print to the console AND capture the output so that it shows in the junit report?
In UNIX, this is commonly referred to as teeing. Ideally, teeing rather than capturing would be the py.test default. Non-ideally, neither py.test nor any existing third-party py.test plugin (...that I know of, anyway) supports teeing – despite Python trivially supporting teeing out-of-the-box.
Monkey-patching py.test to do anything unsupported is non-trivial. Why? Because:
Most py.test functionality is locked behind a private _pytest package not intended to be externally imported. Attempting to do so without knowing what you're doing typically results in the public pytest package raising obscure exceptions at runtime. Thanks a lot, py.test. Really robust architecture you got there.
Even when you do figure out how to monkey-patch the private _pytest API in a safe manner, you have to do so before running the public pytest package run by the external py.test command. You cannot do this in a plugin (e.g., a top-level conftest module in your test suite). By the time py.test lazily gets around to dynamically importing your plugin, any py.test class you wanted to monkey-patch has long since been instantiated – and you do not have access to that instance. This implies that, if you want your monkey-patch to be meaningfully applied, you can no longer safely run the external py.test command. Instead, you have to wrap the running of that command with a custom setuptools test command that (in order):
Monkey-patches the private _pytest API.
Calls the public pytest.main() function to run the py.test command.
This answer monkey-patches py.test's -s and --capture=no options to capture stderr but not stdout. By default, these options capture neither stderr nor stdout. This isn't quite teeing, of course. But every great journey begins with a tedious prequel everyone forgets in five years.
Why do this? I shall now tell you. My py.test-driven test suite contains slow functional tests. Displaying the stdout of these tests is helpful and reassuring, preventing leycec from reaching for killall -9 py.test when yet another long-running functional test fails to do anything for weeks on end. Displaying the stderr of these tests, however, prevents py.test from reporting exception tracebacks on test failures. Which is completely unhelpful. Hence, we coerce py.test to capture stderr but not stdout.
Before we get to it, this answer assumes you already have a custom setuptools test command invoking py.test. If you don't, see the Manual Integration subsection of py.test's well-written Good Practices page.
Do not install pytest-runner, a third-party setuptools plugin providing a custom setuptools test command also invoking py.test. If pytest-runner is already installed, you'll probably need to uninstall that pip3 package and then adopt the manual approach linked to above.
Assuming you followed the instructions in Manual Integration highlighted above, your codebase should now contain a PyTest.run_tests() method. Modify this method to resemble:
class PyTest(TestCommand):
    .
    .
    .
    def run_tests(self):
        # Import the public "pytest" package *BEFORE* the private "_pytest"
        # package. While importation order is typically ignorable, imports can
        # technically have side effects. Tragicomically, that is the case here.
        # Importing the public "pytest" package establishes runtime
        # configuration required by submodules of the private "_pytest" package.
        # The former *MUST* always be imported before the latter. Failing to do
        # so raises obtuse exceptions at runtime... which is bad.
        import pytest
        from _pytest.capture import CaptureManager, FDCapture, MultiCapture

        # If the private method to be monkey-patched no longer exists, py.test
        # is either broken or unsupported. In either case, raise an exception.
        if not hasattr(CaptureManager, '_getcapture'):
            from distutils.errors import DistutilsClassError
            raise DistutilsClassError(
                'Class "pytest.capture.CaptureManager" method _getcapture() '
                'not found. The current version of py.test is either '
                'broken (unlikely) or unsupported (likely).'
            )

        # Old method to be monkey-patched.
        _getcapture_old = CaptureManager._getcapture

        # New method applying this monkey-patch. Note the use of:
        #
        # * "out=False", *NOT* capturing stdout.
        # * "err=True", capturing stderr.
        def _getcapture_new(self, method):
            if method == "no":
                return MultiCapture(
                    out=False, err=True, in_=False, Capture=FDCapture)
            else:
                return _getcapture_old(self, method)

        # Replace the old method with the new method.
        CaptureManager._getcapture = _getcapture_new

        # Run py.test with all passed arguments.
        errno = pytest.main(self.pytest_args)
        sys.exit(errno)
To enable this monkey-patch, run py.test as follows:
python setup.py test -a "-s"
Stderr but not stdout will now be captured. Nifty!
Extending the above monkey-patch to tee stdout and stderr is left as an exercise to the reader with a barrel-full of free time.
According to the pytest documentation, version 3 of pytest can temporarily disable capturing within a test:
def test_disabling_capturing(capsys):
    print('this output is captured')
    with capsys.disabled():
        print('output not captured, going directly to sys.stdout')
    print('this output is also captured')
pytest --capture=tee-sys was recently added (v5.4.0). You can capture as well as see the output on stdout/err.
Try pytest -s -v test_login.py for more info in the console.
-v is short for --verbose
-s means 'disable all capturing'
You can also enable live-logging by setting the following in pytest.ini or tox.ini in your project root.
[pytest]
log_cli = True
Or specify it directly on the CLI:
pytest -o log_cli=True
pytest test_name.py -v -s
Simple!
I would suggest using the -h flag; there are quite a few interesting options in there.
But for this particular case, -s (a shortcut for --capture=no) is enough:
pytest <test_file.py> -s
If you are using logging, you need to explicitly turn on log output in addition to passing -s for generic stdout. Based on Logging within pytest tests, I am using:
pytest --log-cli-level=DEBUG -s my_directory/
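Equivalently, this can be made persistent in pytest.ini instead of passing the flag each time. A sketch using pytest's standard logging options:
[pytest]
log_cli = True
log_cli_level = DEBUG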
If you are using PyCharm IDE, then you can run that individual test or all tests using Run toolbar. The Run tool window displays output generated by your application and you can see all the print statements in there as part of test output.
If anyone wants to run tests from code with output:
import pytest

if __name__ == '__main__':
    pytest.main(['--capture=no'])
The capsys, capsysbinary, capfd, and capfdbinary fixtures allow access to stdout/stderr output created during test execution. Here is an example test function that performs some output-related checks:
import sys

def test_print_something_even_if_the_test_pass(capsys):
    text_to_be_printed = "Print me when the test pass."
    print(text_to_be_printed)
    p_t = capsys.readouterr()
    sys.stdout.write(p_t.out)
    # the two rows above will print the text even if the test passes.
Here is the result:
test_print_something_even_if_the_test_pass PASSED [100%]Print me when the test pass.

Turn some print off in python unittest

I'm using unittest and it prints ".", "E" or "F" for "ok", "error" and "fail" after each test it runs. How do I switch that off? I'm using Python 2.7 and these prints come from the built-in runner class.
It sounds very tough to override the classes because it's all nested.
edit:
I only want to turn off the characters ., E and F because they get interleaved with other log output from my tests.
The output of unittest is written to the standard error stream, which you can pipe somewhere else. On a *nix box this would be possible like this:
python -m unittest some_module 2> /dev/null
On windows, this should look like this (thanks Karl Knechtel):
python -m unittest some_module 2> NUL
If you run the tests from Python, you can simply replace the stderr stream like this:
import sys, os
sys.stderr = open(os.devnull, 'w')
... # do your testing here
sys.stderr = sys.__stderr__ # if you still need the stderr stream
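As a fuller sketch of the same idea (the module name some_module is hypothetical):
import os
import sys
import unittest

if __name__ == '__main__':
    sys.stderr = open(os.devnull, 'w')  # silence the . / E / F progress stream
    try:
        unittest.main(module='some_module', exit=False)
    finally:
        sys.stderr = sys.__stderr__  # restore stderr afterwards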
Since you just want to turn off the updates for the ., F, E symbols, you could also create your own TestResult class by overriding the default one. In my case (Python 2.6) this would look like this:
import unittest

class MyTestResult(unittest._TextTestResult):
    # Call the plain TestResult methods directly, skipping the
    # _TextTestResult overrides that print the characters.
    def addSuccess(self, test):
        unittest.TestResult.addSuccess(self, test)

    def addError(self, test, err):
        unittest.TestResult.addError(self, test, err)

    def addFailure(self, test, err):
        unittest.TestResult.addFailure(self, test, err)
This effectively turns off the printing of the characters while maintaining the default functionality.
Now we also need a new TestRunner class and override the _makeResult method:
class MyTestRunner(unittest.TextTestRunner):
    def _makeResult(self):
        return MyTestResult(self.stream, self.descriptions, self.verbosity)
With this runner you can now enjoy log-free testing.
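A hypothetical usage sketch, discovering tests in the current directory and running them with the quiet runner:
# Discover tests and run them with the runner defined above.
suite = unittest.defaultTestLoader.discover('.')
MyTestRunner().run(suite)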
Just a note: this is not possible from the command line, unfortunately.
A bit of a late response, but someone may find it useful.
You can turn ., E and F off by setting the verbosity level to 0:
testRunner = unittest.TextTestRunner( verbosity = 0 )
You will still have the final result and possible errors/exceptions at the end of tests in the stderr.
Tested in Python 2.4 and 2.7.
Depending on the unittest framework you're using (standard, nose, ...), you have multiple ways to decrease the verbosity:
python -m unittest -h
...
-q, --quiet Minimal output
...
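For example, with the standard runner (some_module is a hypothetical module name):
python -m unittest -q some_module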
