I generally like pytest's warnings capture hook, as I can use it to force my test suite not to trigger any warnings. However, I have one test that requires warnings to be printed to stderr in order to work.
How can I disable the warnings capture for just the one test?
For instance, something like
def test_warning():
    mystderr = StringIO()
    sys.stderr = mystderr
    warnings.warn('warning')
    assert 'UserWarning: warning' in mystderr.getvalue()
(I know I can use capsys, I just want to show the basic idea)
Thanks to the narrowing down in this discussion, I think the question might be better titled "In pytest, how can I capture warnings and their standard-error output in a single test?". Given that rewording, I think the answer is "you can't; you need a separate test".
If there were no standard-error capture requirement, you should be able to use the @pytest.mark.filterwarnings marker for this:
@pytest.mark.filterwarnings("ignore")
def test_one():
    assert api_v1() == 1
From:
https://docs.pytest.org/en/latest/warnings.html#pytest-mark-filterwarnings
@wim points out in a comment that this will not capture the warning, though; the answer he lays out captures and asserts on the warnings in the standard way.
If there were stderr output but no Python warnings raised, capsys would be the technique, as you say:
https://docs.pytest.org/en/latest/capture.html
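For completeness, here is a minimal sketch of that capsys approach, assuming a hypothetical helper that writes directly to stderr (no Python warning involved):
import sys

def emit_error_text():
    # Hypothetical code under test that writes straight to stderr.
    print("something went wrong", file=sys.stderr)

def test_stderr_output(capsys):
    emit_error_text()
    captured = capsys.readouterr()
    assert "something went wrong" in captured.err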
I don't think it's meaningful to do both in a single pytest test, because of the nature of the pytest implementation.
As previously noted, pytest redirects stderr etc. to an internal recorder. Second, it installs its own warnings handler:
https://github.com/pytest-dev/pytest/blob/master/src/_pytest/warnings.py#L59
It is similar in idea to the answer to this question:
https://stackoverflow.com/a/5645133/5729872
I had a little poke around with redefining warnings.showwarning(), which worked fine from vanilla Python, but pytest deliberately reinitializes that as well.
# won't work in pytest, only straight python -->
def func(x):
    warnings.warn('wwarn')
    print(warnings.showwarning.__doc__)
    # print('ewarn', file=sys.stderr)
    return x + 1

sworig = warnings.showwarning

def showwarning_wrapper(message, category, filename, lineno, file=None, line=None):
    """Local override for showwarning()"""
    print('swwrapper({})'.format(file))
    sworig(message, category, filename, lineno, file, line)

warnings.showwarning = showwarning_wrapper
# <-- won't work in pytest, only straight python
You could probably put a warnings handler in your test case that re-outputs to stderr ... but that doesn't prove much about the code under test at that point.
It is your system at the end of the day. If, after considering @wim's point that testing stderr as such may not prove much, you decide you still need it, I suggest separating the test of the Python warning object (the Python caller layer) from the test of the contents of stderr (the calling-shell layer). The first test would look at Python warning objects only. The new second test case would call the library under test as a script, through popen() or similar, and assert on the resulting standard error and output.
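A rough sketch of that split might look like the following; the inline -c snippet is only a stand-in for invoking your real library as a script, so adjust it to your project:
import subprocess
import sys
import warnings

import pytest

def test_warning_object():
    # Python caller layer: assert on the warning itself.
    with pytest.warns(UserWarning):
        warnings.warn("warning")

def test_warning_reaches_stderr():
    # Calling-shell layer: run the code in a child process and inspect its real stderr.
    result = subprocess.run(
        [sys.executable, "-c", "import warnings; warnings.warn('warning')"],
        capture_output=True,
        text=True,
    )
    assert "UserWarning: warning" in result.stderr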
I'll encourage you to think about this problem in a different way.
When you want to assert that some of your code triggers warnings, you should be using a pytest.warns context for that. Check the warning message by using the match keyword, and avoid the extra complications of trying to capture it from stderr.
import re
import warnings
import pytest
def test_warning():
    expected_warning_message = "my warning"
    match = re.escape(expected_warning_message)
    with pytest.warns(UserWarning, match=match):
        warnings.warn("my warning", UserWarning)
That should be the edge of your testing responsibility. It is not your responsibility to test that the warnings module itself prints some output to stderr, because that behavior comes from standard-library code and should be tested by Python itself.
When talking about pytest, we know two things:
When a test passes, no output is shown, in principle.
Sometimes assertion failures can have very cryptic messages.
I took a course that solved this by using print to clarify desired outputs and calling pytest as pytest -v -s. I think it is a great solution.
Another developer in my company thinks that test code should be as free of "side effects" as possible (and considers prints a side effect). He suggests outputting to a file, which I don't think is good practice (to me, that is an undesirable side effect).
So I would like to hear about this from other developers.
How do you address the two points given at the beginning, and do you use prints in your tests?
As someone already pointed out, you can provide your own assert message:
def test_something():
    i = 2
    assert i == 1, "i should be equal to one"
There is really no difference between using assert messages and prints, except that only the assert message would be visible in the pytest report, not all the stdout calls. In the following case, 0-9 would also be printed in the pytest report:
def test_something():
    i = 2
    for i in range(10):
        print(i)
    assert i == 1
Logging everything to a file would definitely make working with pytest harder, and would be a pain to debug if your tests fail in CI.
If you need descriptive messages I would prefer using assert messages and, maybe, prints for debug information.
Using print() in your test is not a good solution; you need to see the data in the CLI or in a pipeline.
For assertions, you can supply custom messages for the pass or fail case, or even when raising an exception.
Here is a basic tutorial for this:
https://docs.pytest.org/en/7.1.x/how-to/assert.html
For general test steps, the best way to get the info out is logging, at different levels:
import logging as logger
logger.info('what info you want to share')
logger.error('what info you want to share')
logger.debug('what info you want to share')
For more info, you can check this:
https://docs.python.org/3/howto/logging.html
Python 3.8.0, pytest 5.3.2, logging 0.5.1.2.
My code has an input loop, and to prevent the program crashing entirely, I catch any exceptions that get thrown, log them as critical, reset the program state, and keep going. That means that a test that causes such an exception won't outright fail, so long as the output is still what is expected. This might happen if the error was a side effect of the test code but didn't affect the main tested logic. I would still like to know that the test is exposing an error-causing bug however.
Most of the Googling I have done shows results on how to display logs within pytest, which I am doing, but I can't find out if there is a way to expose the logs within the test, such that I can fail any test with a log at ERROR or CRITICAL level.
Edit: This is a minimal example of a failing attempt:
test.py:
import subject
import logging
import pytest
@pytest.fixture(autouse=True)
def no_log_errors(caplog):
    yield  # Run in teardown
    print(caplog.records)
    # caplog.set_level(logging.INFO)
    errors = [record for record in caplog.records if record.levelno >= logging.ERROR]
    assert not errors

def test_main():
    subject.main()
    # assert False
subject.py:
import logging

logger = logging.Logger('s')

def main():
    logger.critical("log critical")
Running python3 -m pytest test.py passes with no errors.
Uncommenting the assert statement fails the test without errors, and prints [] to stdout, and log critical to stderr.
Edit 2:
I found why this fails. From the documentation on caplog:
The caplog.records attribute contains records from the current stage only, so inside the setup phase it contains only setup logs, same with the call and teardown phases
However, right underneath is what I should have found the first time:
To access logs from other stages, use the caplog.get_records(when) method. As an example, if you want to make sure that tests which use a certain fixture never log any warnings, you can inspect the records for the setup and call stages during teardown like so:
@pytest.fixture
def window(caplog):
    window = create_window()
    yield window
    for when in ("setup", "call"):
        messages = [
            x.message for x in caplog.get_records(when) if x.levelno == logging.WARNING
        ]
        if messages:
            pytest.fail(
                "warning messages encountered during testing: {}".format(messages)
            )
However, this still doesn't make a difference, and print(caplog.get_records("call")) still prints [].
You can build something like this using the caplog fixture. Here's some sample code from the docs which does some assertions based on the levels:
def test_baz(caplog):
    func_under_test()
    for record in caplog.records:
        assert record.levelname != "CRITICAL"
    assert "wally" not in caplog.text
Since the records are the standard logging record types, you can use whatever you need there.
Here's one way you might do this more automatically, using an autouse fixture:
@pytest.fixture(autouse=True)
def no_logs_gte_error(caplog):
    yield
    errors = [record for record in caplog.get_records('call') if record.levelno >= logging.ERROR]
    assert not errors
(disclaimer: I'm a core dev on pytest)
You can use the unittest.mock module (even if you're using pytest) and monkey-patch whatever function or method you use for logging. Then, in your test, you can add an assert that fails if, say, logging.error was called.
That'd be a short-term solution. But it might also be the case that your design could benefit from more separation, so that you can easily test your application without a zealous try ... except block catching / suppressing just about everything.
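As a rough sketch of that idea, using the subject module from the question and patching the logger object that code actually uses (rather than assuming it calls logging.error directly):
from unittest import mock

import subject

def test_main_does_not_log_critical():
    # Patch the logger that subject.main() actually calls.
    with mock.patch.object(subject.logger, "critical") as mock_critical:
        subject.main()
    # Fails if a critical log was emitted during the call.
    mock_critical.assert_not_called()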
Is it possible to change the capturing behavior in pytest for just one test — i.e., within the test script?
I have a bunch of tests that I use with pytest. There are several useful quantities that I like to print during some tests, so I use the -s flag to show them in the pytest output. But I also test for warnings, which also get printed, and look ugly and distracting. I've tried using the warnings.simplefilter as usual to just not show the warnings, but that doesn't seem to do anything. (Maybe pytest hacks it???) Anyway, I'd like some way to quiet the warnings but still check that they are raised, while also being able to see the captured output from my other print statements. Is there any way to do this — e.g., by change the capture for just one test function?
With pytest 3.x there is an easy way to temporarily disable capturing (see the section about capsys.disabled()).
There's also the pytest-warnings plugin which shows the warning in a dedicated report section.
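A minimal sketch of the capsys.disabled() approach from those docs (the printed quantity is just a placeholder):
def test_show_quantities(capsys):
    useful_quantity = 42  # placeholder for a real computed value
    with capsys.disabled():
        # Output inside this block bypasses capture and goes to the real stdout,
        # even without the -s flag.
        print("useful quantity:", useful_quantity)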
I've done it by manually redirecting stderr:
import os
import sys
import warnings
import pytest
def test():
    stderr = sys.stderr
    sys.stderr = open(os.devnull, 'w')
    with pytest.warns(UserWarning):
        warnings.warn("Warning!", UserWarning)
    sys.stderr = stderr
For good measure, I could similarly redirect stdout to devnull, if other print statements are not wanted.
Sometimes I want to just insert some print statements in my code, and see what gets printed out when I exercise it. My usual way to "exercise" it is with existing pytest tests. But when I run these, I don't seem able to see any standard output (at least from within PyCharm, my IDE).
Is there a simple way to see standard output during a pytest run?
By default, captured output is shown only when a test fails; the -s switch disables per-test capturing entirely, so output is always shown.
-s is equivalent to --capture=no.
pytest captures the stdout from individual tests and displays it only under certain conditions, along with the summary of the tests that it prints by default.
Extra summary info can be shown using the '-r' option:
pytest -rP
shows the captured output of passed tests.
The captured output of failed tests is shown by default. (pytest -rx would add summary info for xfailed tests, not failed ones.)
The formatting of the output is prettier with -r than with -s.
When running the test, use the -s option. All print statements in exampletest.py will then get printed to the console when the test is run.
py.test exampletest.py -s
In an upvoted comment to the accepted answer, Joe asks:
Is there any way to print to the console AND capture the output so that it shows in the junit report?
In UNIX, this is commonly referred to as teeing. Ideally, teeing rather than capturing would be the py.test default. Non-ideally, neither py.test nor any existing third-party py.test plugin (...that I know of, anyway) supports teeing – despite Python trivially supporting teeing out-of-the-box.
Monkey-patching py.test to do anything unsupported is non-trivial. Why? Because:
Most py.test functionality is locked behind a private _pytest package not intended to be externally imported. Attempting to do so without knowing what you're doing typically results in the public pytest package raising obscure exceptions at runtime. Thanks a lot, py.test. Really robust architecture you've got there.
Even when you do figure out how to monkey-patch the private _pytest API in a safe manner, you have to do so before running the public pytest package run by the external py.test command. You cannot do this in a plugin (e.g., a top-level conftest module in your test suite). By the time py.test lazily gets around to dynamically importing your plugin, any py.test class you wanted to monkey-patch has long since been instantiated – and you do not have access to that instance. This implies that, if you want your monkey-patch to be meaningfully applied, you can no longer safely run the external py.test command. Instead, you have to wrap the running of that command with a custom setuptools test command that (in order):
Monkey-patches the private _pytest API.
Calls the public pytest.main() function to run the py.test command.
This answer monkey-patches py.test's -s and --capture=no options to capture stderr but not stdout. By default, these options capture neither stderr nor stdout. This isn't quite teeing, of course. But every great journey begins with a tedious prequel everyone forgets in five years.
Why do this? I shall now tell you. My py.test-driven test suite contains slow functional tests. Displaying the stdout of these tests is helpful and reassuring, preventing leycec from reaching for killall -9 py.test when yet another long-running functional test fails to do anything for weeks on end. Displaying the stderr of these tests, however, prevents py.test from reporting exception tracebacks on test failures. Which is completely unhelpful. Hence, we coerce py.test to capture stderr but not stdout.
Before we get to it, this answer assumes you already have a custom setuptools test command invoking py.test. If you don't, see the Manual Integration subsection of py.test's well-written Good Practices page.
Do not install pytest-runner, a third-party setuptools plugin providing a custom setuptools test command also invoking py.test. If pytest-runner is already installed, you'll probably need to uninstall that pip3 package and then adopt the manual approach linked to above.
Assuming you followed the instructions in Manual Integration highlighted above, your codebase should now contain a PyTest.run_tests() method. Modify this method to resemble:
class PyTest(TestCommand):
    .
    .
    .
    def run_tests(self):
        # Import the public "pytest" package *BEFORE* the private "_pytest"
        # package. While importation order is typically ignorable, imports can
        # technically have side effects. Tragicomically, that is the case here.
        # Importing the public "pytest" package establishes runtime
        # configuration required by submodules of the private "_pytest" package.
        # The former *MUST* always be imported before the latter. Failing to do
        # so raises obtuse exceptions at runtime... which is bad.
        import pytest
        from _pytest.capture import CaptureManager, FDCapture, MultiCapture

        # If the private method to be monkey-patched no longer exists, py.test
        # is either broken or unsupported. In either case, raise an exception.
        if not hasattr(CaptureManager, '_getcapture'):
            from distutils.errors import DistutilsClassError
            raise DistutilsClassError(
                'Class "pytest.capture.CaptureManager" method _getcapture() '
                'not found. The current version of py.test is either '
                'broken (unlikely) or unsupported (likely).'
            )

        # Old method to be monkey-patched.
        _getcapture_old = CaptureManager._getcapture

        # New method applying this monkey-patch. Note the use of:
        #
        # * "out=False", *NOT* capturing stdout.
        # * "err=True", capturing stderr.
        def _getcapture_new(self, method):
            if method == "no":
                return MultiCapture(
                    out=False, err=True, in_=False, Capture=FDCapture)
            else:
                return _getcapture_old(self, method)

        # Replace the old with the new method.
        CaptureManager._getcapture = _getcapture_new

        # Run py.test with all passed arguments.
        errno = pytest.main(self.pytest_args)
        sys.exit(errno)
To enable this monkey-patch, run py.test as follows:
python setup.py test -a "-s"
Stderr but not stdout will now be captured. Nifty!
Extending the above monkey-patch to tee stdout and stderr is left as an exercise to the reader with a barrel-full of free time.
According to pytest documentation, version 3 of pytest can temporary disable capture in a test:
def test_disabling_capturing(capsys):
    print('this output is captured')
    with capsys.disabled():
        print('output not captured, going directly to sys.stdout')
    print('this output is also captured')
pytest --capture=tee-sys was recently added (v5.4.0). You can capture as well as see the output on stdout/err.
Try pytest -s -v test_login.py for more info in console.
-v is short for --verbose
-s means 'disable all capturing'
You can also enable live-logging by setting the following in pytest.ini or tox.ini in your project root.
[pytest]
log_cli = True
Or specify it directly on cli
pytest -o log_cli=True
pytest test_name.py -v -s
Simple!
I would suggest using the -h option; it lists quite a few interesting options.
For this particular case, -s (the shortcut for --capture=no) is enough:
pytest <test_file.py> -s
If you are using logging, you need to specify to turn on logging output in addition to -s for generic stdout. Based on Logging within pytest tests, I am using:
pytest --log-cli-level=DEBUG -s my_directory/
If you are using the PyCharm IDE, you can run an individual test or all tests using the Run toolbar. The Run tool window displays the output generated by your application, and you can see all the print statements there as part of the test output.
If anyone wants to run tests from code with output:
import pytest

if __name__ == '__main__':
    pytest.main(['--capture=no'])
The capsys, capsysbinary, capfd, and capfdbinary fixtures allow access to stdout/stderr output created during test execution. Here is an example test function that performs some output-related checks:
def test_print_something_even_if_the_test_pass(self, capsys):
    text_to_be_printed = "Print me when the test pass."
    print(text_to_be_printed)
    p_t = capsys.readouterr()
    sys.stdout.write(p_t.out)
    # the two rows above will print the text even if the test pass.
Here is the result:
test_print_something_even_if_the_test_pass PASSED [100%]Print me when the test pass.
Today I was thinking about a Python project I wrote about a year back where I used logging pretty extensively. I remember having to comment out a lot of logging calls in inner-loop-like scenarios (the 90% code) because of the overhead (hotshot indicated it was one of my biggest bottlenecks).
I wonder now if there's some canonical way to programmatically strip out logging calls in Python applications without commenting and uncommenting all the time. I'd think you could use inspection/recompilation or bytecode manipulation to do something like this and target only the code objects that are causing bottlenecks. This way, you could add a manipulator as a post-compilation step and use a centralized configuration file, like so:
[Leave ERROR and above]
my_module.SomeClass.method_with_lots_of_warn_calls
[Leave WARN and above]
my_module.SomeOtherClass.method_with_lots_of_info_calls
[Leave INFO and above]
my_module.SomeWeirdClass.method_with_lots_of_debug_calls
Of course, you'd want to use it sparingly and probably with per-function granularity -- only for code objects that have shown logging to be a bottleneck. Anybody know of anything like this?
Note: There are a few things that make this more difficult to do in a performant manner because of dynamic typing and late binding. For example, any calls to a method named debug may have to be wrapped with an if not isinstance(log, Logger). In any case, I'm assuming all of the minor details can be overcome, either by a gentleman's agreement or some run-time checking. :-)
What about using logging.disable?
I've also found I had to use logging.isEnabledFor if the logging message is expensive to create.
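A small sketch of both ideas; expensive_repr() here is only a stand-in for whatever costly formatting you want to avoid:
import logging

logger = logging.getLogger(__name__)

# Globally drop everything at DEBUG level and below.
logging.disable(logging.DEBUG)

def expensive_repr(record):
    # Stand-in for a costly formatting step.
    return repr(record)

def process(record):
    # Only build the expensive message if a DEBUG record would actually be handled.
    if logger.isEnabledFor(logging.DEBUG):
        logger.debug("processing %s", expensive_repr(record))
    return record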
Use pypreprocessor, which can also be found on PyPI (the Python Package Index) and can be fetched using pip.
Here's a basic usage example:
from pypreprocessor import pypreprocessor
pypreprocessor.parse()
#define nologging
#ifdef nologging
...logging code you'd usually comment out manually...
#endif
Essentially, the preprocessor comments out code the way you were doing it manually before. It just does it on the fly conditionally depending on what you define.
You can also remove all of the preprocessor directives and commented-out code from the postprocessed code by adding 'pypreprocessor.removeMeta = True' between the import and parse() statements.
The bytecode output (.pyc) file will contain the optimized output.
Side note: pypreprocessor is compatible with both Python 2.x and Python 3.
Disclaimer: I'm the author of pypreprocessor.
I've also seen assert used in this fashion.
assert logging.warn('disable me with the -O option') is None
(I'm guessing that warn always returns None; if not, you'll get an AssertionError.)
But really that's just a funny way of doing this:
if __debug__: logging.warn('disable me with the -O option')
When you run a script with that line in it with the -O option, the line will be removed from the optimized .pyo code. If, instead, you had your own variable, like in the following, you will have a conditional that is always executed (no matter what value the variable is), although a conditional should execute quicker than a function call:
my_debug = True
...
if my_debug: logging.warn('disable me by setting my_debug = False')
So if my understanding of __debug__ is correct, it seems like a nice way to get rid of unnecessary logging calls. The flip side is that it also disables all of your asserts, so it is a problem if you need the asserts.
As an imperfect shortcut, how about mocking out logging in specific modules using something like MiniMock?
For example, if my_module.py was:
import logging
class C(object):
    def __init__(self, *args, **kw):
        logging.info("Instantiating")
You would replace your use of my_module with:
from minimock import Mock
import my_module
my_module.logging = Mock('logging')
c = my_module.C()
You'd only have to do this once, before the initial import of the module.
Getting the level specific behaviour would be simple enough by mocking specific methods, or having logging.getLogger return a mock object with some methods impotent and others delegating to the real logging module.
In practice, you'd probably want to replace MiniMock with something simpler and faster; at the very least something which doesn't print usage to stdout! Of course, this doesn't handle the problem of module A importing logging from module B (and hence A also importing the log granularity of B)...
This will never be as fast as not running the log statements at all, but should be much faster than going all the way into the depths of the logging module only to discover this record shouldn't be logged after all.
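One possible shape for that "simpler and faster" replacement, sketched as a hand-rolled stub rather than MiniMock; only warning and error delegate to the real logging module, and my_module in the usage note is hypothetical:
import logging

class SelectiveLogging(object):
    """Stub: warning/error delegate to the real logging module,
    everything else (debug, info, getLogger, ...) becomes a cheap no-op."""

    def __init__(self, real=logging):
        self._real = real

    def warning(self, *args, **kw):
        self._real.warning(*args, **kw)

    def error(self, *args, **kw):
        self._real.error(*args, **kw)

    def __getattr__(self, name):
        # Any other attribute lookup returns a do-nothing callable.
        return lambda *args, **kw: None

# Usage mirrors the MiniMock example above (my_module is hypothetical):
#   import my_module
#   my_module.logging = SelectiveLogging()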
You could try something like this:
# Create something that accepts anything
class Fake(object):
    def __getattr__(self, key):
        return self
    def __call__(self, *args, **kwargs):
        return True

# Replace the logging module
import sys
sys.modules["logging"] = Fake()
It essentially replaces (or initially fills in) the space for the logging module with an instance of Fake, which simply accepts anything. You must run the above code (just once!) before the logging module is used anywhere. Here is a test:
import logging
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(levelname)-8s %(message)s',
                    datefmt='%a, %d %b %Y %H:%M:%S',
                    filename='/temp/myapp.log',
                    filemode='w')
logging.debug('A debug message')
logging.info('Some information')
logging.warning('A shot across the bows')
With the above, nothing at all was logged, as was to be expected.
I'd use some fancy logging decorator, or a bunch of them:
def doLogging(logThreshold):
    def logFunction(aFunc):
        def innerFunc(*args, **kwargs):
            if LOGLEVEL >= logThreshold:
                print ">>Called %s at %s" % (aFunc.__name__, time.strftime("%H:%M:%S"))
                print ">>Parameters: ", args, kwargs if kwargs else ""
            try:
                return aFunc(*args, **kwargs)
            finally:
                print ">>%s took %s" % (aFunc.__name__, time.strftime("%H:%M:%S"))
        return innerFunc
    return logFunction
All you need is to declare a LOGLEVEL constant in each module (or just declare it globally and import it in all modules), and then you can use it like this:
@doLogging(2.5)
def myPreciousFunction(one, two, three=4):
    print "I'm doing some fancy computations :-)"
    return
And if LOGLEVEL is no less than 2.5 you'll get output like this:
>>Called myPreciousFunction at 18:49:13
>>Parameters: (1, 2)
I'm doing some fancy computations :-)
>>myPreciousFunction took 18:49:13
As you can see, some work is needed for better handling of kwargs, so the default values will be printed if they are present, but that's another question.
You should probably use some logger module instead of raw print statements, but I wanted to focus on the decorator idea and avoid making code too long.
Anyway, with such a decorator you get function-level logging, arbitrarily many log levels, and easy application to new functions; to disable logging you only need to set LOGLEVEL. And you can define different output streams/files for each function if you wish. You can write doLogging as:
def doLogging(logThreshold, outStream=sys.stdout):
    .....
    print >>outStream, ">>Called %s at %s" etc.
And utilize log files defined on a per-function basis.
This is an issue in my project as well--logging ends up on profiler reports pretty consistently.
I've used the _ast module before in a fork of PyFlakes (http://github.com/kevinw/pyflakes) ... and it is definitely possible to do what you suggest in your question--to inspect and inject guards before calls to logging methods (with your acknowledged caveat that you'd have to do some runtime type checking). See http://pyside.blogspot.com/2008/03/ast-compilation-from-python.html for a simple example.
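For illustration, here is a toy sketch of that idea using the modern ast module rather than _ast directly; the attribute check is deliberately naive (it drops any foo.debug(...) statement), which is exactly where the runtime type-checking caveat comes in:
import ast

class StripDebugCalls(ast.NodeTransformer):
    """Remove bare statements of the form <anything>.debug(...)."""

    def visit_Expr(self, node):
        call = node.value
        if (isinstance(call, ast.Call)
                and isinstance(call.func, ast.Attribute)
                and call.func.attr == "debug"):
            return None  # drop the statement entirely
        return node

source = """
def work(log, x):
    log.debug("expensive %r", x)
    return x + 1
"""

tree = StripDebugCalls().visit(ast.parse(source))
ast.fix_missing_locations(tree)
namespace = {}
exec(compile(tree, "<stripped>", "exec"), namespace)
print(namespace["work"](None, 1))  # prints 2; the debug call is gone, so log=None is fine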
Edit: I just noticed MetaPython on my planetpython.org feed--the example use case is removing log statements at import time.
Maybe the best solution would be for someone to reimplement logging as a C module, but I wouldn't be the first to jump at such an...opportunity :p
:-) We used to call that a preprocessor, and although C's preprocessor had some of those capabilities, the "king of the hill" was the preprocessor for the IBM mainframe PL/I. It provided extensive language support in the preprocessor (full assignments, conditionals, looping, etc.), and it was possible to write "programs that wrote programs" using just the PL/I PP.
I wrote many applications with full-blown sophisticated program and data tracing (we didn't have a decent debugger for a back-end process at that time) for use in development and testing which then, when compiled with the appropriate "runtime flag" simply stripped all the tracing code out cleanly without any performance impact.
I think the decorator idea is a good one. You can write a decorator to wrap the functions that need logging. Then, for runtime distribution, the decorator is turned into a "no-op" which eliminates the debugging statements.
Jon R
I am doing a project currently that uses extensive logging for testing logic and execution times for a data analysis API using the Pandas library.
I found this thread with a similar concern - e.g., what is the overhead of the logging.debug statements even if the logging.basicConfig level is set to level=logging.WARNING?
I have resorted to writing the following script to comment out or uncomment the debug logging prior to deployment:
import os
import fileinput

comment = True

# exclude files or directories matching string
fil_dir_exclude = ["__", "_archive", ".pyc"]

if comment:
    ## Variables to comment
    source_str = 'logging.debug'
    replace_str = '#logging.debug'
else:
    ## Variables to uncomment
    source_str = '#logging.debug'
    replace_str = 'logging.debug'

# walk through directories
for root, dirs, files in os.walk('root/directory'):
    # where files exist
    if files:
        # for each file
        for file_single in files:
            # build full file name
            file_name = os.path.join(root, file_single)
            # exclude files with matching string
            if not any(exclude_str in file_name for exclude_str in fil_dir_exclude):
                # replace string in line
                for line in fileinput.input(file_name, inplace=True):
                    print "%s" % (line.replace(source_str, replace_str)),
This script walks the file tree recursively, excludes files based on a list of criteria, and performs an in-place replace, based on an answer found here: Search and replace a line in a file in Python
I like the if __debug__ solution, except that putting it in front of every call is a bit distracting and ugly. I had this same problem and overcame it by writing a script which automatically parses your source files and replaces logging statements with pass statements (and commented-out copies of the logging statements). It can also undo this conversion.
I use it when I deploy new code to a production environment when there are lots of logging statements which I don't need in a production setting and they are affecting performance.
You can find the script here: http://dound.com/2010/02/python-logging-performance/
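For readers who don't want to follow the link, a much-simplified sketch of the idea looks something like this (the real script handles multi-line calls and undoing the conversion; this regex-only version does not):
import re
import sys

# Match whole-line logging calls, capturing the indentation and the call itself.
LOG_CALL = re.compile(r"^(\s*)(logging\.(?:debug|info|warning)\(.*\))\s*$")

def neuter_logging(path):
    with open(path) as f:
        lines = f.readlines()
    with open(path, "w") as out:
        for line in lines:
            match = LOG_CALL.match(line)
            if match:
                indent, call = match.group(1), match.group(2)
                # Replace the call with "pass" and keep a commented-out copy.
                out.write("{}pass  # {}\n".format(indent, call))
            else:
                out.write(line)

if __name__ == "__main__":
    for path in sys.argv[1:]:
        neuter_logging(path)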