We have recently switched to py.test for Python testing (which is fantastic, btw). However, I'm trying to figure out how to control the log output (i.e. from the built-in Python logging module). We have pytest-capturelog installed and this works as expected: when we want to see logs, we can pass the --nologcapture option.
However, how do you control the logging level (e.g. info, debug, etc.), and also filter the logging (if you're only interested in a specific module)? Are there existing plugins for py.test to achieve this, or do we need to roll our own?
Thanks,
Jonny
Installing and using the pytest-capturelog plugin could satisfy most of your pytest/logging needs. If something is missing you should be able to implement it relatively easily.
As Holger said, you can use pytest-capturelog:

import logging

def test_foo(caplog):
    caplog.setLevel(logging.INFO)
    pass  # exercise the code under test here
If you don't want to use pytest-capturelog, you can use a stdout StreamHandler in your logging config, so pytest will capture the log output. Here is an example basicConfig:

import logging
import sys

logging.basicConfig(level=logging.DEBUG, stream=sys.stdout)
A bit of a late contribution, but I can recommend pytest-logging for a simple drop-in log-capture solution. After pip install pytest-logging you can control the verbosity of your logs (displayed on screen) with
$ py.test -s -v tests/your_test.py
$ py.test -s -vv tests/your_test.py
$ py.test -s -vvvv tests/your_test.py
and so on. NB: the -s flag is important; without it, py.test will filter out all the sys.stderr information.
Pytest now has native support for logging control via the caplog fixture; no need for plugins.
You can specify the logging level for a particular logger or by default for the root logger:
import logging
import pytest

def test_bar(caplog):
    caplog.set_level(logging.CRITICAL, logger='root.baz')
Pytest also captures log output in caplog.records, so you can assert on logged levels and messages. For further information, see the official pytest documentation on logging and on the caplog fixture.
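Under the hood, caplog essentially attaches a handler that stores the emitted LogRecord objects on the fixture. The same idea can be sketched with only the stdlib (the logger name and message here are made up for illustration):

```python
import logging

class RecordCollector(logging.Handler):
    """Collects emitted LogRecords, roughly what pytest's caplog does internally."""
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(record)

collector = RecordCollector()
logger = logging.getLogger("caplog_demo")  # arbitrary demo logger
logger.setLevel(logging.INFO)
logger.addHandler(collector)

logger.warning("disk almost full")

# each stored record carries the level and the formatted message
assert collector.records[0].levelname == "WARNING"
assert collector.records[0].getMessage() == "disk almost full"
```

With the real fixture you get the same data via caplog.records and caplog.text, without writing any handler yourself.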
A bit of an even later contribution: you can try pytest-logger. The novelty of this plugin is logging to the filesystem: pytest provides a nodeid for each test item, which can be used to organize the test session's log directory (with the help of pytest's tmpdir facility and its test-case begin/end hooks).
You can configure multiple handlers (with levels) for the terminal and the filesystem separately, and provide your own command-line options for filtering loggers/levels to make it work for your specific test environment - e.g. by default you can log everything to the filesystem and a small fraction to the terminal, which can be changed on a per-session basis with the --log option if needed. The plugin does nothing by default if the user defines no hooks.
Related
I'm using VS Code + pytest, while the test cases are written with unittest, for example:

import os
import unittest

class MyTest(unittest.TestCase):
    def testEnvVar(self):
        account = os.getenv("ACCOUNT")
It fails to read ACCOUNT. However, os.getenv("ACCOUNT") gets the right result if it's executed with python test.py directly:
# test.py
import os
print(os.getenv("ACCOUNT"))
This confirms the "ACCOUNT" environment variable is already set.
If executed with pytest tests/test.py, the environment variable cannot be read either, so it's caused by pytest. I know pytest does some tricks (for example, it captures all stdout/stderr output), but I don't know what exactly it does to environment variables. The same goes for tox (in tox, you have to set passenv = * so the test env inherits all environment variables from the shell where tox runs).
I mean, I totally understand this is a trick that any test-related tool could pull; I just don't know how to disable it in pytest. So please don't suggest that I have forgotten to set the variable, or that my code is wrong, etc.
I know how to hack around this, by using mock or adding .env vars in the VS Code launch file. However, that would definitely expose the account/password in some files/settings. Since the account and password are secrets, I don't want to expose them in any files.
I guess this is a very common requirement and pytest should have already honored it. So why does this still happen?
The easiest way is to use https://github.com/MobileDynasty/pytest-env. But I don't think you should test environment variables in your unit tests.
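If you do go the pytest-env route, it adds an env option to the ini file; a minimal pytest.ini sketch (the value is a placeholder, and given the secrecy concern above you would still want to avoid committing real credentials to this file):

```ini
# pytest.ini
[pytest]
env =
    ACCOUNT=dummy-account
```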
What I'm trying to do seems really simple but I can't find a way to do it. I'm trying to use the module Pipreqs in a script, but I have to use subprocess.call() because Pipreqs doesn't have a way to use it in a script. Pipreqs uses logging to log info and I don't want that. Pipreqs also doesn't have a quiet mode. I tried to use logger.setLevel(logging.WARNING) but since I'm calling it through subprocess.call() it still prints info. I've also tried importing Pipreqs and setting the logging level to warning and that also doesn't work. Is there any way to disable this output? My code right now is the following:
import subprocess
import logging
import pipreqs
logger = logging.getLogger("pipreqs")
logger.setLevel(logging.WARNING)
subprocess.call(["pipreqs", "--force", "/path/to/dir"])
You won't have access to the logger for an external process. The subprocess module does have flags for disabling output though.
subprocess.call(
    ["pipreqs", "--force", "/path/to/dir"],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
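To see the effect without pipreqs installed, the same flags work for any child process; here a throwaway Python child stands in for the pipreqs CLI:

```python
import subprocess
import sys

# the child writes to both stdout and stderr, like a chatty CLI would
returncode = subprocess.call(
    [sys.executable, "-c", "import sys; print('out'); print('err', file=sys.stderr)"],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)

assert returncode == 0  # the child ran, but none of its output reached our console
```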
I thought the -d option was for activating debug mode, which prints all logging.debug() messages. But apparently that doesn't happen. The documentation simply says:
Turn on parser debugging output (for wizards only, depending on compilation options).
See also PYTHONDEBUG.
Which doesn't explain anything in my eyes. Can somebody give a more verbose explanation, and verify that there is really no default CPython argument that activates debug logging?
From Parser/parser.c:
#ifdef Py_DEBUG
extern int Py_DebugFlag;
#define D(x) if (!Py_DebugFlag); else x
#else
#define D(x)
#endif
This D macro is used with printf() to print debugging messages when the debug flag is supplied and the interpreter is compiled in debug mode. The debugging messages are intended for the developers of Python, i.e. the people who work on Python itself (not to be confused with Python programmers, who are people who use Python). I've gone through the Python manual page, and none of the options activates logging's debug mode. However, one can use the -i flag in conjunction with -c to achieve the same effect:
python -i -c "import logging;logging.basicConfig(level=logging.DEBUG)"
The -d option enables the python parser debugging flags. Unless you're hacking the Python interpreter and changing how it parses Python code, you're unlikely to ever need that option.
The logging infrastructure is a standard library module, not a builtin feature of the interpreter. It doesn't make much sense to have an interpreter flag that changes such a localized feature of a module.
Also, consider how the logging level depends on which loggers and handlers you're using. You can set different levels for different loggers and handlers, for different parts of your application. For instance, you may want all DEBUG lines from everyone to appear on the console, while INFO and above from a library go to a common file, and WARNING and ERROR go to specific files for easier monitoring. You can set a global handler for DEBUG that logs to the console, and other handlers that log the different levels to separate files.
We started writing our functional and unit test cases in Python using the nose framework. We started learning Python while writing these tests. Since there are a lot of dependencies between our test classes/functions, we decided to use the proboscis framework on top of nose to control the order of execution.
We have quite a few print statements in our tests, and proboscis seems to be ignoring them! The tests run in the expected order and all get exercised, but our print output never reaches the console. Any idea what we are missing here?
BTW, we stopped deriving our classes from unittest.TestCase once we moved to proboscis, and decorated all classes and their member functions with @test.
Note: According to the Proboscis documentation, "unused arguments get passed along to Nose or the unittest module", so the following should apply to Proboscis by replacing nosetests with python run_tests.py.
As @Wooble mentioned in his comment, by default nose captures stdout and only displays it for failed tests. You can override this behaviour with the nosetests -s or --nocapture switch:
$ nosetests --nocapture
Like @Wooble also mentions in his comment, I recommend using the logging module instead of print. Then you only need to pass nosetests the -l DEBUG or --debug=DEBUG switch, where DEBUG is replaced by a comma-separated list of the names of the loggers you want to display, to enable displaying the logging output from your modules:
$ nosetests --debug=your-logger-name
I'm currently trying to see if a nose plugin is enabled from within my test harness. The specific thing I'm trying to do is propagate the enable status of the coverage module to subprocess executions. Essentially when --with-coverage is used, I want to execute the subprocesses under the coverage tool directly (or propagate a flag down).
Can this be done?
This is one of those cases where you need to work around nose. See "Measuring subprocesses" in the coverage.py documentation for ways to ensure that subprocesses automatically invoke coverage.
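Concretely, the approach described there is to start coverage automatically in every Python process: set the COVERAGE_PROCESS_START environment variable to the path of your coverage config file before launching the tests, and drop a one-liner onto the interpreter's startup path. A sketch, assuming coverage.py is installed in the subprocess's environment:

```python
# sitecustomize.py (or a .pth file) somewhere on the subprocess's sys.path
import coverage
coverage.process_startup()
```

With that in place, any subprocess your tests spawn will begin measuring itself, regardless of whether nose told it to.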