How can I see the current pytest configuration? - python

pytest gathers configuration settings from various files and command-line options, but there doesn't appear to be a command-line option to show pytest's current settings, and I can't work out how to easily interrogate pytest for this info. pytest --help (or pytest -h) shows what the available options are but doesn't show their current values.
I've tried running pytest with PYTEST_DEBUG set (e.g. $ env PYTEST_DEBUG=1 pytest --setup-only): This generates a huge quantity of debug info but it doesn't appear to include the configuration settings, or at least not in a digestible format.
I see that there is a Config object but I can't work out how to interrogate it to write a small program to output its content (I think it needs higher-level pytest-fu than I possess); I think this object may be the right place to look, assuming there isn't a command-line option to display the current settings that I've missed.

After looking through the pytest docs, I don't see a direct way to enumerate all the config options automatically. Here are some ways to get at the config values, though, which will hopefully be helpful.
If you have a specific unit test and know which values you care about, there is an example in the pytest docs that shows how to check whether those values are set.
Also, the config docs describe the config file search and precedence.
Configuration settings come from several places for pytest, including cmd line options, ini files, and env variables.
The pytestconfig fixture is the Config object, and its attributes are described here in the documentation.
import pytest
import os

def test_answer(pytestconfig):
    if pytestconfig.getoption("verbose") > 0:
        print("verbose")
    print(pytestconfig.inipath)
    print(pytestconfig.invocation_params.args)
    print(os.getenv('PYTEST_ADDOPTS', None))
    print(pytestconfig.getini('addopts'))
    assert 0  # to see what was printed
Now I run this in my directory with pytest test_sample.py (no command-line arguments); the contents of test_sample.py are given above.
The contents of the pytest.ini file are
[pytest]
addopts = -vq
and PYTEST_ADDOPTS is not set, I see:
---------------------------------------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------------------------------------
/Users/rlucas/Desktop/pytest.ini
('test_sample.py',)
None
['-vq']
================================================================================================== short test summary info ===================================================================================================
FAILED test_sample.py::test_answer - assert 0
Using a different invocation, pytest test_sample.py --verbose, you'll see:
---------------------------------------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------------------------------------
verbose
/Users/rlucas/Desktop/pytest.ini
('test_sample.py', '--verbose')
None
['-vq']
================================================================================================== short test summary info ===================================================================================================
Here I'm abbreviating the output somewhat to the relevant info (e.g. I'm not showing the test failure info). If you don't have direct access to the filesystem where the unit tests are running you can always read in the file found in pytestconfig.inipath and print the contents to stdout.
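For example, a minimal sketch of that last suggestion (assuming pytest 6.1 or newer, where pytestconfig.inipath is a pathlib.Path, or None when no ini file was found):

# test_show_ini.py -- sketch: dump the ini file pytest actually picked up
def test_show_ini(pytestconfig):
    # inipath is None when pytest found no ini file for this run
    if pytestconfig.inipath is not None:
        print(pytestconfig.inipath.read_text())
    assert 0  # fail on purpose so the captured stdout is shown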

Inspired by Lucas Roberts' answer, I've taken a closer look at pytest's pytestconfig fixture, which has various interesting attributes, particularly option (there is also known_args_namespace with similar contents). So, if I understand pytestconfig correctly, a rough-and-ready solution to print the currently-in-effect options would be,
# test_sample.py
import pytest
import os

def test_answer(pytestconfig):
    for item_name in dir(pytestconfig.option):
        if not item_name.startswith('_'):
            print(f'{item_name}: {getattr(pytestconfig.option, item_name)}')
    assert 0  # to see what was printed
then run this as Lucas suggests with pytest (here with -vv for verbosity 2 for example),
$ pytest -vv test_sample.py
...
verbose: 2
version: False
xmlpath: None
==================== 1 failed in 0.10s ===================
$


Pytest Logging messages appear twice

My testing framework has the structure below:
Master_test_Class.py ---> holds the generic test cases to be run for the smoke and regression test suites
Test_Smoke1.py and Test_Reg1.py ---> child classes that inherit from Master_test_Class.py
I have logging enabled in pytest.ini for INFO
[pytest]
log_cli = 1
log_cli_level = INFO
Below is my code in conftest.py
def pytest_generate_tests(metafunc):
    .....
    logging.info("This is generated during the test collection !!!")
When I run either of the test files, the logs are printed twice: once in the format specified in pytest.ini and again in red.
pytest -s Test_Reg1.py
I am lost as to why the logging info is getting printed twice.
It's probably because you have a logging handler that sends the logs to standard output. The solution would be to either run pytest without the -s argument (this assumes the logs contain all the information you need) or remove the logging handler that writes to standard output.
If you want to see only the output of the logging module, pass --log-cli-level=INFO to the pytest run. You are seeing it twice because of the -s switch. To check whether it's the same log record or a different one, add a timestamp to the log message.
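If the duplicates do come from a handler attached to the root logger, one way to confirm and fix it is to drop console handlers in conftest.py. This is only a sketch under that assumption, not a general fix:

# conftest.py -- sketch: remove handlers that write straight to the console,
# so only pytest's log_cli handler (configured in pytest.ini) emits records.
# Assumes the duplicate lines come from a StreamHandler on the root logger.
import logging

def pytest_configure(config):
    root = logging.getLogger()
    for handler in list(root.handlers):
        if isinstance(handler, logging.StreamHandler):
            root.removeHandler(handler)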

How do I get PyCharm to show entire error diffs from pytest?

I am using Pycharm to run my pytest unit tests. I am testing a REST API, so I often have to validate blocks of JSON. When a test fails, I'll see something like this:
FAILED
test_document_api.py:0 (test_create_documents)
{'items': [{'i...ages': 1, ...} != {'items': [{'...ages': 1, ...}
Expected :{'items': [{'...ages': 1, ...}
Actual :{'items': [{'i...ages': 1, ...}
<Click to see difference>
When I click on the "Click to see difference" link, most of the difference is replaced with ellipses.
This is useless since it doesn't show me what is different. I get this behavior for any difference larger than a single string or number.
I assume PyCharm and/or pytest tries to elide the uninformative parts of large diffs. However, it's being too aggressive here and eliding everything.
How do I get Pycharm and/or pytest to show me the entire difference?
I've tried adding -vvv to pytest's Additional Arguments, but that has no effect.
Since the original post I verified that I see the same behavior when I run unit tests from the command line. So this is an issue with pytest and not Pycharm.
After looking at the answers I've got so far I guess what I'm really asking is "in pytest is it possible to set maxDiff=None without changing the source code of your tests?" The impression I've gotten from reading about pytest is that the -vv switch is what controls this setting, but this does not appear to be the case.
If you look closely into the PyCharm sources, you'll see that from the whole pytest output, PyCharm uses a single line to parse the data displayed in the Click to see difference dialog. This is the AssertionError: <message> line:
    def test_spam():
>       assert v1 == v2
E       AssertionError: assert {'foo': 'bar'} == {'foo': 'baz'}
E         Differing items:
E         {'foo': 'bar'} != {'foo': 'baz'}
E         Use -v to get the full diff
If you want to see the full diff line without truncation, you need to customize this line in the output. For a single test, this can be done by adding a custom message to the assert statement:
def test_eggs():
    assert a == b, '{0} != {1}'.format(a, b)
If you want to apply this behaviour to all tests, define a custom pytest_assertrepr_compare hook in the conftest.py file:
# conftest.py
def pytest_assertrepr_compare(config, op, left, right):
    if op in ('==', '!='):
        return ['{0} {1} {2}'.format(left, op, right)]
The value comparison can still be truncated when it is too long, so to see the complete line you may also need to increase verbosity with the -vv flag. With that in place, the full comparison appears in the AssertionError line and the complete diff is displayed in the Click to see difference dialog, with the differing parts highlighted.
Since pytest integrates with unittest, as a workaround you may be able to write the test as a unittest and set Test.maxDiff = None on the class, or self.maxDiff = None in each specific test (a sketch follows the links below).
https://docs.pytest.org/en/latest/index.html
Can run unittest (including trial) and nose test suites out of the box;
These may be helpful as well...
https://stackoverflow.com/a/21615720/9530790
https://stackoverflow.com/a/23617918/9530790
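For illustration, a unittest-style version of such a test with maxDiff disabled might look like the sketch below; the class name and data are made up, and pytest collects TestCase subclasses out of the box:

# Sketch: a unittest-style test so unittest's maxDiff setting applies.
# The names and dictionaries here are purely illustrative.
import unittest

class TestDocumentApi(unittest.TestCase):
    maxDiff = None  # never truncate assertEqual diffs

    def test_create_documents(self):
        expected = {'items': [{'id': 1, 'name': 'a'}], 'pages': 1}
        actual = {'items': [{'id': 2, 'name': 'a'}], 'pages': 1}
        self.assertEqual(expected, actual)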
Had a look in the pytest code base and maybe you can try some of these out:
1) Set verbosity level in the test execution:
./app_main --pytest --verbose test-suite/
2) Add an environment variable for "CI" or "BUILD_NUMBER". In the link to the truncate file below you can see that these env variables are used to determine whether or not the truncation block is run.
import os
os.environ["BUILD_NUMBER"] = '1'
os.environ["CI"] = 'CI_BUILD'
3) Attempt to set DEFAULT_MAX_LINES and DEFAULT_MAX_CHARS on the truncate module (Not recommending this since it uses a private module):
from _pytest.assertion import truncate
truncate.DEFAULT_MAX_CHARS = 1000
truncate.DEFAULT_MAX_LINES = 1000
According to the code, the -vv option should work, so it's strange that it doesn't for you:
Current default behaviour is to truncate assertion explanations at ~8 terminal lines, unless running in "-vv" mode or running on CI.
The pytest truncation file, which is what I'm basing my answers on: pytest/truncate.py
Hope something here helps you!
I had some getattr calls in an assertion and nothing was ever shown after the AssertionError. I added -lv in the Additional Arguments field to show the local variables.
I was running into something similar and created a function that returns a string with a nice diff of the two dicts. In a pytest-style test this looks like:
assert expected == actual, build_diff_string(expected, actual)
And in unittest style:
self.assertEqual(expected, actual, build_diff_string(expected, actual))
The downside is that you have to modify all the tests that have this issue, but it's a Keep It Simple and Stupid solution.
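build_diff_string is not shown in that answer; a minimal sketch of such a helper, built on difflib and pprint, might look like this:

# Sketch of a hypothetical build_diff_string helper (not from the original answer):
# pretty-print both values and return a unified diff of the two renderings.
import difflib
import pprint

def build_diff_string(expected, actual):
    expected_lines = pprint.pformat(expected).splitlines()
    actual_lines = pprint.pformat(actual).splitlines()
    diff = difflib.unified_diff(expected_lines, actual_lines,
                                fromfile='expected', tofile='actual',
                                lineterm='')
    return '\n'.join(diff)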
In PyCharm you can just put -vv in the run configuration's Additional Arguments field, and this should solve the issue.
Or at least, it worked on my machine...

Why is my pytest selenium function skipped? [duplicate]

I am using skipIf() from unittest for skipping tests in certain conditions.
@unittest.skipIf(condition, "this is why I skipped them!")
How do I tell py.test to display skipping conditions?
I know that for unittest I need to enable verbose mode (-v), but the same parameter added to py.test increases the verbosity yet still does not display the skip reasons.
When you run py.test, you can pass -rsx to report skipped tests.
From py.test --help:
-r chars show extra test summary info as specified by chars
(f)ailed, (E)error, (s)skipped, (x)failed, (X)passed.
Also see this part of the documentation about skipping: http://doc.pytest.org/en/latest/skipping.html
Short answer:
pytest -rs
This will show extra information of skipped tests.
Detailed answer:
To complement ToddWilson's answer, the following chars have been added since: p and P (2.9.0), a (4.1.0) and A (4.5.0). The detailed information about skipped and xfailed tests is not shown by default in order to avoid cluttering the output. You can use the -r flag along with the following chars:
(f)ailed
(E)rror
(s)kipped
(x)failed
(X)passed
(p)assed
(P)assed with output
(a)ll except passed (p/P)
(A)ll.
Warnings are enabled by default, and the default value of -r is fE.
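For example, a minimal conditionally-skipped test whose reason shows up in the pytest -rs summary; the condition and names here are only illustrative:

# Sketch: run with `pytest -rs` to see the skip reason in the summary.
import sys
import pytest

@pytest.mark.skipif(sys.platform == "win32",
                    reason="this feature is not available on Windows")
def test_unix_only_feature():
    assert True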

Pytest cov does not generate any report

I'm trying to run py.test with coverage for my program, but I keep getting this in the output:
testFile.txt s
Coverage.py warning: No data was collected.
even though the code still contains untested functions (in my example, the function diff). Below is the code on which I tested the command py.test --cov=testcov.py. I'm using Python 2.7.9.
def suma(x,y):
    z = x + y
    return z

def diff(x,y):
    return x-y

if __name__ == "__main__":
    a = suma(2,3)
    b = diff(7,5)
    print a
    print b

## ------------------------TESTS-----------------------------
import pytest

def testSuma():
    assert suma(2,3) == 5
Can someone explain to me what I am doing wrong?
You haven't said what all your files are named, so I'm not sure of the precise answer. But the argument to --cov should be a module name, not a file name. So instead of py.test --cov=testcov.py, you want py.test --cov=testcov.
What worked well for me is:
py.test mytests/test_mytest.py --cov='.'
Specifying the path, '.' in this case, removes unwanted files from the coverage report.
py.test looks for functions that start with test_. You should rename your test functions accordingly. To apply coverage you execute py.test --cov. If you want a nice HTML report that also shows you which lines are not covered you can use py.test --cov --cov-report html.
By default, py.test looks for files matching test_*.py; you can customize this with pytest.ini.
By the way, according to the Python style guide (PEP 8) the function should be named test_suma, but that has no impact on py.test.
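For illustration, a separate test module that follows the default collection rules might look like the sketch below; it assumes the functions above live in a module named testcov:

# test_testcov.py -- sketch: the file name matches test_*.py and the functions
# start with test_, so pytest collects them without extra configuration.
from testcov import suma, diff  # assumes the code under test is in testcov.py

def test_suma():
    assert suma(2, 3) == 5

def test_diff():
    assert diff(7, 5) == 2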

How can I see normal print output created during pytest run?

Sometimes I want to just insert some print statements in my code, and see what gets printed out when I exercise it. My usual way to "exercise" it is with existing pytest tests. But when I run these, I don't seem able to see any standard output (at least from within PyCharm, my IDE).
Is there a simple way to see standard output during a pytest run?
The -s switch disables per-test capturing (without it, captured output is shown only for failing tests).
-s is equivalent to --capture=no.
pytest captures the stdout from individual tests and displays them only on certain conditions, along with the summary of the tests it prints by default.
Extra summary info can be shown using the '-r' option:
pytest -rP
shows the captured output of passed tests.
pytest -rx
shows the summary for xfailed tests; the captured output of failed tests is shown by default in the failures section.
The formatting of the output is prettier with -r than with -s.
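For example, a passing test like this sketch prints nothing in the default output, but its captured stdout appears in the summary when run with pytest -rP:

# Sketch: the print below is captured; `pytest -rP` shows it in the summary
# even though the test passes.
def test_passes_with_output():
    print("helpful debugging detail")
    assert True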
When running the test, use the -s option. All print statements in exampletest.py will get printed on the console when the test is run.
py.test exampletest.py -s
In an upvoted comment to the accepted answer, Joe asks:
Is there any way to print to the console AND capture the output so that it shows in the junit report?
In UNIX, this is commonly referred to as teeing. Ideally, teeing rather than capturing would be the py.test default. Non-ideally, neither py.test nor any existing third-party py.test plugin (...that I know of, anyway) supports teeing – despite Python trivially supporting teeing out-of-the-box.
Monkey-patching py.test to do anything unsupported is non-trivial. Why? Because:
Most py.test functionality is locked behind a private _pytest package not intended to be externally imported. Attempting to do so without knowing what you're doing typically results in the public pytest package raising obscure exceptions at runtime. Thanks a lot, py.test. Really robust architecture you got there.
Even when you do figure out how to monkey-patch the private _pytest API in a safe manner, you have to do so before running the public pytest package run by the external py.test command. You cannot do this in a plugin (e.g., a top-level conftest module in your test suite). By the time py.test lazily gets around to dynamically importing your plugin, any py.test class you wanted to monkey-patch has long since been instantiated – and you do not have access to that instance. This implies that, if you want your monkey-patch to be meaningfully applied, you can no longer safely run the external py.test command. Instead, you have to wrap the running of that command with a custom setuptools test command that (in order):
Monkey-patches the private _pytest API.
Calls the public pytest.main() function to run the py.test command.
This answer monkey-patches py.test's -s and --capture=no options to capture stderr but not stdout. By default, these options capture neither stderr nor stdout. This isn't quite teeing, of course. But every great journey begins with a tedious prequel everyone forgets in five years.
Why do this? I shall now tell you. My py.test-driven test suite contains slow functional tests. Displaying the stdout of these tests is helpful and reassuring, preventing leycec from reaching for killall -9 py.test when yet another long-running functional test fails to do anything for weeks on end. Displaying the stderr of these tests, however, prevents py.test from reporting exception tracebacks on test failures. Which is completely unhelpful. Hence, we coerce py.test to capture stderr but not stdout.
Before we get to it, this answer assumes you already have a custom setuptools test command invoking py.test. If you don't, see the Manual Integration subsection of py.test's well-written Good Practices page.
Do not install pytest-runner, a third-party setuptools plugin providing a custom setuptools test command also invoking py.test. If pytest-runner is already installed, you'll probably need to uninstall that pip3 package and then adopt the manual approach linked to above.
Assuming you followed the instructions in Manual Integration highlighted above, your codebase should now contain a PyTest.run_tests() method. Modify this method to resemble:
class PyTest(TestCommand):
    .
    .
    .
    def run_tests(self):
        # Import the public "pytest" package *BEFORE* the private "_pytest"
        # package. While importation order is typically ignorable, imports can
        # technically have side effects. Tragicomically, that is the case here.
        # Importing the public "pytest" package establishes runtime
        # configuration required by submodules of the private "_pytest" package.
        # The former *MUST* always be imported before the latter. Failing to do
        # so raises obtuse exceptions at runtime... which is bad.
        import pytest
        from _pytest.capture import CaptureManager, FDCapture, MultiCapture

        # If the private method to be monkey-patched no longer exists, py.test
        # is either broken or unsupported. In either case, raise an exception.
        if not hasattr(CaptureManager, '_getcapture'):
            from distutils.errors import DistutilsClassError
            raise DistutilsClassError(
                'Class "pytest.capture.CaptureManager" method _getcapture() '
                'not found. The current version of py.test is either '
                'broken (unlikely) or unsupported (likely).'
            )

        # Old method to be monkey-patched.
        _getcapture_old = CaptureManager._getcapture

        # New method applying this monkey-patch. Note the use of:
        #
        # * "out=False", *NOT* capturing stdout.
        # * "err=True", capturing stderr.
        def _getcapture_new(self, method):
            if method == "no":
                return MultiCapture(
                    out=False, err=True, in_=False, Capture=FDCapture)
            else:
                return _getcapture_old(self, method)

        # Replace the old with the new method.
        CaptureManager._getcapture = _getcapture_new

        # Run py.test with all passed arguments.
        errno = pytest.main(self.pytest_args)
        sys.exit(errno)
To enable this monkey-patch, run py.test as follows:
python setup.py test -a "-s"
Stderr but not stdout will now be captured. Nifty!
Extending the above monkey-patch to tee stdout and stderr is left as an exercise to the reader with a barrel-full of free time.
According to the pytest documentation, version 3 of pytest can temporarily disable capture within a test:
def test_disabling_capturing(capsys):
    print('this output is captured')
    with capsys.disabled():
        print('output not captured, going directly to sys.stdout')
    print('this output is also captured')
pytest --capture=tee-sys was recently added (v5.4.0). You can capture as well as see the output on stdout/err.
Try pytest -s -v test_login.py for more info in the console.
-v is short for --verbose
-s means 'disable all capturing'
You can also enable live-logging by setting the following in pytest.ini or tox.ini in your project root.
[pytest]
log_cli = True
Or specify it directly on cli
pytest -o log_cli=True
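As an illustration, with live logging enabled a test like this sketch emits its log records straight to the terminal while it runs; the logger name and message are made up:

# Sketch: with log_cli = True (or -o log_cli=True), these records are shown
# live on the console as the test executes. You may also need
# log_cli_level = INFO in the ini file to see INFO-level records.
import logging

logger = logging.getLogger(__name__)

def test_logs_live():
    logger.info("this line appears in the terminal while the test runs")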
pytest test_name.py -v -s
Simple!
I would suggest using the -h option; it lists quite a few interesting options. But for this particular case, -s (a shortcut for --capture=no) is enough:
pytest <test_file.py> -s
If you are using logging, you need to specify to turn on logging output in addition to -s for generic stdout. Based on Logging within pytest tests, I am using:
pytest --log-cli-level=DEBUG -s my_directory/
If you are using PyCharm IDE, then you can run that individual test or all tests using Run toolbar. The Run tool window displays output generated by your application and you can see all the print statements in there as part of test output.
If anyone wants to run tests from code with output:
import pytest

if __name__ == '__main__':
    pytest.main(['--capture=no'])
The capsys, capsysbinary, capfd, and capfdbinary fixtures allow access to stdout/stderr output created during test execution. Here is an example test function that performs some output-related checks:
import sys

def test_print_something_even_if_the_test_pass(capsys):
    text_to_be_printed = "Print me when the test pass."
    print(text_to_be_printed)
    p_t = capsys.readouterr()
    sys.stdout.write(p_t.out)
    # the two rows above will print the text even if the test passes.
Here is the result:
test_print_something_even_if_the_test_pass PASSED [100%]Print me when the test pass.
