How can I see normal print output created during pytest run? - python

Sometimes I want to just insert some print statements in my code, and see what gets printed out when I exercise it. My usual way to "exercise" it is with existing pytest tests. But when I run these, I don't seem able to see any standard output (at least from within PyCharm, my IDE).
Is there a simple way to see standard output during a pytest run?

The -s switch disables per-test capturing entirely; without it, captured output is shown only when a test fails.
-s is equivalent to --capture=no.

pytest captures the stdout of individual tests and displays it only under certain conditions, along with the summary of the tests it prints by default.
Extra summary info can be shown using the '-r' option:
pytest -rP
shows the captured output of passed tests.
pytest -rx
shows extra summary info for xfailed (expected-failure) tests; the captured output of failed tests is already shown by default.
The formatting of the output is prettier with -r than with -s.
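For example, a minimal sketch (the file name and test are made up for illustration): a passing test whose print output only shows up in the summary when -rP is added.
# test_report_example.py -- hypothetical example file
def test_prints_and_passes():
    # Captured by default; "pytest -rP test_report_example.py" shows this
    # line in the summary section for passed tests, while a plain
    # "pytest test_report_example.py" hides it.
    print("hello from a passing test")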

When running the test, use the -s option. All print statements in exampletest.py will then get printed to the console when the test is run.
py.test exampletest.py -s
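For example, a hypothetical exampletest.py could look like this; with -s the print appears live on the console, without it the output stays hidden unless the test fails.
# exampletest.py -- hypothetical contents
def test_addition():
    print("about to check 2 + 2")  # visible on the console only with -s
    assert 2 + 2 == 4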

In an upvoted comment to the accepted answer, Joe asks:
Is there any way to print to the console AND capture the output so that it shows in the junit report?
In UNIX, this is commonly referred to as teeing. Ideally, teeing rather than capturing would be the py.test default. Non-ideally, neither py.test nor any existing third-party py.test plugin (...that I know of, anyway) supports teeing – despite Python trivially supporting teeing out-of-the-box.
Monkey-patching py.test to do anything unsupported is non-trivial. Why? Because:
Most py.test functionality is locked behind a private _pytest package not intended to be externally imported. Attempting to do so without knowing what you're doing typically results in the public pytest package raising obscure exceptions at runtime. Thanks a lot, py.test. Really robust architecture you got there.
Even when you do figure out how to monkey-patch the private _pytest API in a safe manner, you have to do so before the public pytest package is run by the external py.test command. You cannot do this in a plugin (e.g., a top-level conftest module in your test suite). By the time py.test lazily gets around to dynamically importing your plugin, any py.test class you wanted to monkey-patch has long since been instantiated – and you do not have access to that instance. This implies that, if you want your monkey-patch to be meaningfully applied, you can no longer safely run the external py.test command. Instead, you have to wrap the running of that command with a custom setuptools test command that (in order):
Monkey-patches the private _pytest API.
Calls the public pytest.main() function to run the py.test command.
This answer monkey-patches py.test's -s and --capture=no options to capture stderr but not stdout. By default, these options capture neither stderr nor stdout. This isn't quite teeing, of course. But every great journey begins with a tedious prequel everyone forgets in five years.
Why do this? I shall now tell you. My py.test-driven test suite contains slow functional tests. Displaying the stdout of these tests is helpful and reassuring, preventing leycec from reaching for killall -9 py.test when yet another long-running functional test fails to do anything for weeks on end. Displaying the stderr of these tests, however, prevents py.test from reporting exception tracebacks on test failures. Which is completely unhelpful. Hence, we coerce py.test to capture stderr but not stdout.
Before we get to it, this answer assumes you already have a custom setuptools test command invoking py.test. If you don't, see the Manual Integration subsection of py.test's well-written Good Practices page.
Do not install pytest-runner, a third-party setuptools plugin providing a custom setuptools test command also invoking py.test. If pytest-runner is already installed, you'll probably need to uninstall that pip3 package and then adopt the manual approach linked to above.
Assuming you followed the instructions in Manual Integration highlighted above, your codebase should now contain a PyTest.run_tests() method. Modify this method to resemble:
class PyTest(TestCommand):
    .
    .
    .
    def run_tests(self):
        # Import the public "pytest" package *BEFORE* the private "_pytest"
        # package. While importation order is typically ignorable, imports can
        # technically have side effects. Tragicomically, that is the case here.
        # Importing the public "pytest" package establishes runtime
        # configuration required by submodules of the private "_pytest" package.
        # The former *MUST* always be imported before the latter. Failing to do
        # so raises obtuse exceptions at runtime... which is bad.
        import sys
        import pytest
        from _pytest.capture import CaptureManager, FDCapture, MultiCapture

        # If the private method to be monkey-patched no longer exists, py.test
        # is either broken or unsupported. In either case, raise an exception.
        if not hasattr(CaptureManager, '_getcapture'):
            from distutils.errors import DistutilsClassError
            raise DistutilsClassError(
                'Class "pytest.capture.CaptureManager" method _getcapture() '
                'not found. The current version of py.test is either '
                'broken (unlikely) or unsupported (likely).'
            )

        # Old method to be monkey-patched.
        _getcapture_old = CaptureManager._getcapture

        # New method applying this monkey-patch. Note the use of:
        #
        # * "out=False", *NOT* capturing stdout.
        # * "err=True", capturing stderr.
        def _getcapture_new(self, method):
            if method == "no":
                return MultiCapture(
                    out=False, err=True, in_=False, Capture=FDCapture)
            else:
                return _getcapture_old(self, method)

        # Replace the old method with the new method.
        CaptureManager._getcapture = _getcapture_new

        # Run py.test with all passed arguments.
        errno = pytest.main(self.pytest_args)
        sys.exit(errno)
To enable this monkey-patch, run py.test as follows:
python setup.py test -a "-s"
Stderr but not stdout will now be captured. Nifty!
Extending the above monkey-patch to tee stdout and stderr is left as an exercise to the reader with a barrel-full of free time.

According to the pytest documentation, pytest 3 and later can temporarily disable capture within a test:
def test_disabling_capturing(capsys):
    print('this output is captured')
    with capsys.disabled():
        print('output not captured, going directly to sys.stdout')
    print('this output is also captured')

pytest --capture=tee-sys was added in pytest 5.4.0. It lets you capture the output and still see it on stdout/stderr.
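As a rough sketch (hypothetical file name), the same kind of print-heavy test run with --capture=tee-sys shows its output live and still keeps it for the report:
# test_tee_example.py -- hypothetical
def test_tee():
    # With "pytest --capture=tee-sys test_tee_example.py" this line is echoed
    # to the terminal while the test runs *and* is still captured, so it also
    # appears in the summary sections (e.g. with -rP).
    print("teed to the terminal and to the captured report")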

Try pytest -s -v test_login.py to get more info in the console.
-v is short for --verbose
-s means 'disable all capturing'

You can also enable live-logging by setting the following in pytest.ini or tox.ini in your project root.
[pytest]
log_cli = True
Or specify it directly on the CLI:
pytest -o log_cli=True

pytest test_name.py -v -s
Simple!

I would suggest using the -h option; it lists quite a few interesting options.
But for this particular case, -s (the shortcut for --capture=no) is enough:
pytest <test_file.py> -s

If you are using logging, you need to explicitly enable log output in addition to -s for generic stdout. Based on Logging within pytest tests, I am using:
pytest --log-cli-level=DEBUG -s my_directory/
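For example, a small sketch (the module and logger names are made up): a test that logs at DEBUG level, whose messages only appear on the console when live logging is enabled as above.
# test_logging_example.py -- hypothetical
import logging

logger = logging.getLogger(__name__)

def test_logs_something():
    # Shown on the console with "pytest --log-cli-level=DEBUG -s ...",
    # or with log_cli = True / log_cli_level = DEBUG in pytest.ini;
    # hidden otherwise.
    logger.debug("debugging detail")
    logger.info("progress message")
    assert True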

If you are using the PyCharm IDE, you can run an individual test or all tests using the Run toolbar. The Run tool window displays the output generated by your application, and you can see all the print statements there as part of the test output.

If anyone wants to run tests from code with output:
import pytest

if __name__ == '__main__':
    pytest.main(['--capture=no'])

The capsys, capsysbinary, capfd, and capfdbinary fixtures allow access to stdout/stderr output created during test execution. Here is an example test function that performs some output-related checks:
import sys

def test_print_something_even_if_the_test_pass(capsys):
    text_to_be_printed = "Print me when the test pass."
    print(text_to_be_printed)
    p_t = capsys.readouterr()
    sys.stdout.write(p_t.out)
    # The two lines above will print the text even if the test passes.
Here is the result:
test_print_something_even_if_the_test_pass PASSED [100%]Print me when the test pass.

Related

How to configure pytest to raise/trigger BytesWarning?

Maybe I am going about this the wrong way, because my search turned up nothing useful.
Adding the -b (-bb) option when calling the python interpreter will warn (raise) whenever an implicit bytes to string or bytes to int conversion takes place:
Issue a warning when comparing bytes or bytearray with str or bytes with int. Issue an error when the option is given twice (-bb).
I would like to write a unit test around this using pytest. I.e., I'd like to do
# foo.py
import pytest

def test_foo():
    with pytest.raises(BytesWarning):
        print(b"This is a bytes string.")
When calling the above as pytest foo.py the test will fail (no BytesWarning raised). When I call the above test as python -bb -m pytest foo.py it will pass, because BytesWarning is raised as an exception. So far so good.
What I can't work out (nor do I seem to be able to find anything useful on the internet), is if/how it is possible to configure pytest to do this automatically so that I can run pytest --some_arg foo.py and it will do the intended thing. Is this possible?
When you execute pytest foo.py, your shell looks for the pytest program. You can find out which one will be executed with the command which pytest. For me, it's /home/stack_overflow/venv/bin/pytest, which looks like this:
#!/home/stack_overflow/venv/bin/python
# -*- coding: utf-8 -*-
import re
import sys
from pytest import console_main

if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(console_main())
It just calls console_main() from the pytest library.
If you add a print(sys.argv), you can see how it was called. For me, it is ['/home/stack_overflow/venv/bin/pytest', 'so70782647.py']. It matches the path in the first line of the script, which is called a shebang. The shebang tells the system how the program should be invoked; here, it says to run the file using the python from my venv.
If I modify the line:
#!/home/stack_overflow/venv/bin/python -bb
#                                      ^^^
now your test passes.
It may be a solution, although not very elegant.
You may notice that, even now, the -bb does not appear when printing sys.argv. The reason is explained in the doc you linked yourself:
An interface option terminates the list of options consumed by the interpreter, all consecutive arguments will end up in sys.argv [...]
So it is not possible to check if it was activated using sys.argv.
I found a question about how to retrieve them from the interpreter's internal state, in case you are interested in checking it as a pre-condition for your test: Retrieve the command line arguments of the Python interpreter. Although checking sys.flags.bytes_warning is simpler in our case ({0: None, 1: '-b', 2: '-bb'}).
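For instance, a small sketch of that pre-condition (my own suggestion, not part of the original answer): skip the test unless the interpreter was started with -bb.
import sys
import pytest

@pytest.mark.skipif(sys.flags.bytes_warning < 2,
                    reason="requires the interpreter to be run with -bb")
def test_foo():
    with pytest.raises(BytesWarning):
        print(b"This is a bytes string.")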
Continuing with your question: how do you run pytest with the -bb interpreter option?
You already have a solution : python -bb -m pytest foo.py.
If you prefer, it is possible to create a file pytest whose content is just python -bb -m pytest "$@" (don't forget to make it executable). Run it with ./pytest foo.py. Or make it an alias.
You can't tell pytest which Python interpreter options you want, because pytest would already be running in the interpreter itself, which would have already handled its own options.
As far as I know, these options are really not easy to change. I guess if you could write into the PyConfig C struct, it would have the desired effect. For example, see the function bytes_richcompare, which does:
if (_Py_GetConfig()->bytes_warning && (op == Py_EQ || op == Py_NE)) {
    if (PyUnicode_Check(a) || PyUnicode_Check(b)) {
        if (PyErr_WarnEx(PyExc_BytesWarning,
                         "Comparison between bytes and string", 1))
            return NULL;
    }
then you could activate it from within your test, as in:
def test_foo():
    if sys.flags.bytes_warning < 2:
        # change PyConfig.bytes_warning
        assert sys.flags.bytes_warning == 2
    with pytest.raises(BytesWarning):
        print(b"This is a bytes string.")
    # change back the PyConfig.bytes_warning
But I think how to do that should be another question.
As a workaround, you can use pytest.warns like so:
def test_foo():
    with pytest.warns(BytesWarning):
        print(b"This is a bytes string.")
and it only requires the -b option (although -bb works too).
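If you prefer the pytest.raises style, one possible variant (my own assumption, not from the original answer) is to escalate the warning to an error with pytest's filterwarnings marker; the interpreter still has to emit BytesWarning in the first place, so -b or -bb remains necessary.
import pytest

@pytest.mark.filterwarnings("error::BytesWarning")
def test_foo_as_error():
    # Only meaningful when Python was started with -b or -bb; without those
    # flags the interpreter never emits BytesWarning at all.
    with pytest.raises(BytesWarning):
        print(b"This is a bytes string.")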

Cleaner way to do pytest fixture parameterization based on command-line switch?

I've technically already solved the problem I was working on, but I can't help but feel like my solution is ugly:
I've got a pytest suite that I can run in two modes: Local Mode (for developing tests; everything just runs on my dev box through Chrome), and Seriousface Regression Testing Mode (for CI; the suite gets run on a zillion browsers and OSes). I've got a command-line flag to toggle between the two modes, --test-local. If it's there, I run in local mode. If it's not there, I run in seriousface mode. Here's how I do it:
# contents of conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption("--test-local", action="store_true", default=False,
                     help="run locally instead of in seriousface mode")

def pytest_generate_tests(metafunc):
    if "dummy" in metafunc.fixturenames:
        if metafunc.config.getoption("--test-local"):
            driverParams = [(True, None)]
        else:
            driverParams = [(False, "seriousface setting 1"),
                            (False, "seriousface setting 2")]
        metafunc.parametrize("dummy", driverParams)

@pytest.fixture(scope="function")
def driver(dummy):
    _driver = makeDriverStuff(dummy[0], dummy[1])
    yield _driver
    _driver.cleanup()

@pytest.fixture
def dummy():
    pass
The problem is, that dummy fixture is hideous. I've tried having pytest_generate_tests parametrize the driver fixture directly, but it ends up replacing the fixture rather than just feeding stuff into it, so cleanup() never gets called when the test finishes. Using the dummy lets me replace the dummy with my parameter tuple, so that it gets passed into driver().
But, to reiterate, what I have does work, it just feels like a janky hack.
You can try a different approach: instead of dynamically selecting the parameter set for the test, declare ALL parameter sets on it, but deselect the irrelevant ones at launch time.
# r.py
import pytest

real = pytest.mark.real
mock = pytest.mark.mock

@pytest.mark.parametrize('a, b', [
    real((True, 'serious 1')),
    real((True, 'serious 2')),
    mock((False, 'fancy mock')),
    (None, 'always unmarked!'),
])
def test_me(a, b):
    print([a, b])
Then run it as follows:
pytest -ra -v -s r.py -m real # strictly marked sets
pytest -ra -v -s r.py -m mock # strictly marked sets
pytest -ra -v -s r.py -m 'not real' # incl. non-marked sets
pytest -ra -v -s r.py -m 'not mock' # incl. non-marked sets
Also, skipif marks can be used for the selected parameter sets (different from deselecting). They are natively supported by pytest, but the syntax is quite ugly. See more in pytest's Skip/xfail with parametrize section.
The official manual also covers exactly the same case as in your question in Custom marker and command line option to control test runs. However, it is not as elegant as -m test deselection, and is more suitable for complex run-time conditions than for the a priori known structure of the tests.
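Note that marking parameter sets by calling the mark, as in real((True, 'serious 1')), was removed in pytest 4; on current pytest versions the same idea would presumably be written with pytest.param, roughly like this:
# r.py -- the same parameter sets, pytest 4+ style (sketch)
import pytest

@pytest.mark.parametrize('a, b', [
    pytest.param(True, 'serious 1', marks=pytest.mark.real),
    pytest.param(True, 'serious 2', marks=pytest.mark.real),
    pytest.param(False, 'fancy mock', marks=pytest.mark.mock),
    (None, 'always unmarked!'),
])
def test_me(a, b):
    print([a, b])
Registering the real and mock markers in pytest.ini (under markers =) avoids unknown-mark warnings on recent pytest versions.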

Why is my pytest selenium function skipped? [duplicate]

I am using skipIf() from unittest for skipping tests under certain conditions.
@unittest.skipIf(condition, "this is why I skipped them!")
How do I tell py.test to display skipping conditions?
I know that for unittest I need to enable verbose mode (-v), but the same parameter added to py.test increases the verbosity yet still does not display the skip reasons.
When you run py.test, you can pass -rsx to report skipped tests.
From py.test --help:
-r chars show extra test summary info as specified by chars
(f)ailed, (E)error, (s)skipped, (x)failed, (X)passed.
Also see this part of the documentation about skipping: http://doc.pytest.org/en/latest/skipping.html
Short answer:
pytest -rs
This will show extra information of skipped tests.
Detailed answer:
To complement @ToddWilson's answer, the following chars have been added: p and P (2.9.0), a (4.1.0) and A (4.5.0). The detailed information about skipped and xfailed tests is not shown by default in order to avoid cluttering the output. You can use the -r flag along with the following chars:
(f)ailed
(E)rror
(s)kipped
(x)failed
(X)passed
(p)assed
(P)assed with output
(a)ll except passed (p/P)
(A)ll.
Warnings are enabled by default, and the default value is fE.
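As a quick sketch (hypothetical test file), a skipped test whose reason only shows up in the short summary when -rs is passed:
# test_skip_example.py -- hypothetical
import pytest

@pytest.mark.skip(reason="this is why I skipped it")
def test_not_ready():
    assert False

# "pytest -rs test_skip_example.py" lists the test in the short summary
# together with the reason string; a plain "pytest" run only shows "1 skipped".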

Can't get Nose to honor the attributes I set on my tests

I'm using Django 1.7 with django-nose 1.4 and nose 1.3.6.
According to the documentation, I should be able to select tests to be run by using attributes. I have a test set like this:
from nose.plugins.attrib import attr
from django_webtest import TransactionWebTest

@attr(isolation="menu")
class MenuTestCase(TransactionWebTest):
    def test_home(self):
        pass
When I try to run my tests with:
./manage.py test -a isolation
Nose eliminates all tests from the run. In other words, it does not run any test. Note that when I do not use -a, all the tests run fine. I've also tried:
-a=isolation
-a isolation=menu
-a=isolation=menu
-a '!isolation'
The last one should select almost all of my test suite, since the isolation attribute is used only on one class, but it does not select anything! I'm starting to think I just don't understand how the whole attributes system works.
It is unclear to me what causes the problem. It probably has to do with how Django passes the command line arguments to django-nose, which then passes them to nose. At any rate, using the long form of the command line arguments solves the problem:
$ ./manage.py test --attr=isolation
and similarly:
--attr=isolation=menu
--attr='!isolation' (with the single quotes to prevent the shell from interpreting !)
--eval-attr=isolation
--eval-attr='isolation=="menu"' (the single quotes prevent the shell from removing the double quotes)
etc...

Flask Testing - why does coverage exclude import statements and decorators?

My tests clearly execute each function, and there are no unused imports either. Yet, according to the coverage report, 62% of the code was never executed in the following file:
Can someone please point out what I might be doing wrong?
Here's how I initialise the test suite and the coverage:
cov = coverage(branch=True, omit=['website/*', 'run_test_suite.py'])
cov.start()
try:
    unittest.main(argv=[sys.argv[0]])
except:
    pass
cov.stop()
cov.save()
print "\n\nCoverage Report:\n"
cov.report()
print "HTML version: " + os.path.join(BASEDIR, "tmp/coverage/index.html")
cov.html_report(directory='tmp/coverage')
cov.erase()
This is the third question in the coverage.py FAQ:
Q: Why do the bodies of functions (or classes) show as executed, but
the def lines do not?
This happens because coverage is started after the functions are
defined. The definition lines are executed without coverage
measurement, then coverage is started, then the function is called.
This means the body is measured, but the definition of the function
itself is not.
To fix this, start coverage earlier. If you use the command line to
run your program with coverage, then your entire program will be
monitored. If you are using the API, you need to call coverage.start()
before importing the modules that define your functions.
The simplest thing to do is run your tests under coverage:
$ coverage run -m unittest discover
Your custom test script isn't doing much beyond what the coverage command line would do; it will be simpler just to use the command line.
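For example, assuming the standard coverage.py command line, the equivalent of the custom script above is roughly:
$ coverage run -m unittest discover
$ coverage report
$ coverage html   # writes an HTML report to htmlcov/index.html by default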
To exclude the import statements, you can add the following lines to .coveragerc:
[report]
exclude_lines =
    # Ignore imports
    from
    import
but when I tried to add '@' for decorators, the source code within the scope of the decorators was excluded as well, so the coverage rate was wrong.
There may be some other ways to exclude decorators.
