pytest.config.getoption alternative is failing - python

My setup is such that pytest test.py executes nothing, while pytest --first-test test.py executes the target function test_skip.
In order to determine whether a certain test should be conducted or not, this is what I have been using:
skip_first = pytest.mark.skipif(
    not (
        pytest.config.getoption("--first-test")
        or os.environ.get('FULL_AUTH_TEST')
    ), reason="Skipping live"
)

@skip_first
def test_skip():
    assert_something
Now that pytest.config.getoption is being deprecated, I am trying to update my code. This is what I have written:
@pytest.fixture
def skip_first(request):
    def _skip_first():
        return pytest.mark.skipif(
            not (
                request.config.getoption("--first-test")
                or os.environ.get('FULL_AUTH_TEST')),
            reason="Skipping"
        )
    return _skip_first()

# And, to call:
def test_skip(skip_first):
    assert 1 == 2
However, whether I run pytest test.py or pytest --first-test test.py, test_skip always executes. The skip_first value itself seems to be working fine - inserting a print statement shows skip_first = MarkDecorator(mark=Mark(name='skipif', args=(False,), kwargs={'reason': 'Skipping first'})) when --first-test is given, and args=(True,) when it is not. (The same thing was observed with the first setup.)
Am I missing something? I even tried to return the function _skip_first instead of its output in skip_first, but it made no difference.
When using a test class, the manual indicates we need to use @pytest.mark.usefixtures("fixturename"), but that proved to be of no use either (with classes).
Ideas? This is my system: platform linux -- Python 3.6.7, pytest-4.0.2, py-1.7.0, pluggy-0.8.0

In order to cause a SKIP from a fixture, you must raise pytest.skip. Here's an example using your code above:
import os
import pytest


@pytest.fixture
def skip_first(request):
    if (
        request.config.getoption("--first-test")
        or os.environ.get('FULL_AUTH_TEST')
    ):
        raise pytest.skip('Skipping!')


# And, to call:
def test_skip(skip_first):
    assert 1 == 2
If you want, you can almost replace your original code by doing:
@pytest.fixture
def skip_first_fixture(request): ...

skip_first = pytest.mark.usefixtures('skip_first_fixture')


@skip_first
def test_skip(): ...
Here's the execution showing this working:
$ pytest t.py -q
F [100%]
=================================== FAILURES ===================================
__________________________________ test_skip ___________________________________
skip_first = None

    def test_skip(skip_first):
>       assert 1==2
E       assert 1 == 2
E         -1
E         +2

t.py:16: AssertionError
1 failed in 0.03 seconds
$ pytest t.py --first-test -q
s [100%]
1 skipped in 0.01 seconds
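Note that both the question and the answer assume the --first-test command-line option has already been registered; that part is not shown above. A minimal sketch of what the registration in conftest.py might look like (the option name comes from the question, the help text is made up):
# conftest.py
def pytest_addoption(parser):
    parser.addoption(
        "--first-test",
        action="store_true",
        default=False,
        help="enable the tests guarded by the skip_first fixture",
    )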


How do I run a fixture only when the test fails?

I have the following example:
conftest.py:
@pytest.fixture
def my_fixture_1(main_device):
    yield
    if FAILED:
        -- code lines --
    else:
        pass
main.py:
def my_test(my_fixture_1):
    main_device = ...
    -- code lines --
    assert 0
    -- code lines --
    assert 1
When assert 0 fails, for example, the test should fail and the teardown part of my_fixture_1 should execute. If the test passes, that code must not execute. I tried using hookimpl but didn't find a solution; the fixture always executes, even if the test passes.
Note that main_device is the connected device on which my test is running.
You could use request as an argument to your fixture. From that, you can check the status of the corresponding test, i.e. whether it has failed or not. In case it failed, you can execute the code you want run on failure. In code, that reads as:
@pytest.fixture
def my_fixture_1(request):
    yield
    if request.session.testsfailed:
        print("Only print if failed")
Of course, the fixture will always run but the branch will only be executed if the corresponding test failed.
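Note that request.session.testsfailed is a session-wide counter, so the branch above also fires when an earlier test in the same run failed. If you need a strictly per-test check, the pytest documentation describes attaching each phase's report to the test item from a hookwrapper; a minimal sketch of that approach (the rep_ attribute prefix is just a naming convention):
# conftest.py
import pytest


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # attach the setup/call/teardown report to the test item
    outcome = yield
    rep = outcome.get_result()
    setattr(item, "rep_" + rep.when, rep)


@pytest.fixture
def my_fixture_1(request):
    yield
    # rep_call exists once the test body has run; .failed is per-test
    if hasattr(request.node, "rep_call") and request.node.rep_call.failed:
        print("Only print if this test failed")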
In Simon Hawe's answer, request.session.testsfailed denotes the number of test failures in that particular test run.
Here is an alternative solution that I can think of.
import os

import pytest


@pytest.fixture(scope="module")
def main_device():
    return None


@pytest.fixture(scope='function', autouse=True)
def my_fixture_1(main_device):
    yield
    if os.environ["test_result"] == "failed":
        print("+++++++++ Test Failed ++++++++")
    elif os.environ["test_result"] == "passed":
        print("+++++++++ Test Passed ++++++++")
    elif os.environ["test_result"] == "skipped":
        print("+++++++++ Test Skipped ++++++++")


def pytest_runtest_logreport(report):
    if report.when == 'call':
        os.environ["test_result"] = report.outcome
You can put your implementation directly in the pytest_runtest_logreport hook itself, but the drawback is that you won't get access to any fixtures there - only the report.
So, if you need main_device, you have to go with a custom fixture as shown above.
Use @pytest.fixture(scope='function', autouse=True), which runs the fixture automatically for every test case; you don't have to request it in every test function as an argument.
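For illustration, a hypothetical test module that the autouse fixture above would wrap around (the file and test names are made up):
# main.py
def test_passes(main_device):
    assert 1 == 1   # teardown prints "+++++++++ Test Passed ++++++++"


def test_fails(main_device):
    assert 1 == 2   # teardown prints "+++++++++ Test Failed ++++++++"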

How to execute Python functions using Hypothesis' composite strategy?

I am trying to execute a function decorated with Hypothesis' @strategy.composite decorator.
I know I can test functions using the @given decorator, such as -
from hypothesis import given
from hypothesis import strategies as st
@given(st.integers(min_value=1))
def test_bar(x):
    assert x > 0
with pytest, using pytest <filename.py>.
But in the case of a function with the @strategy.composite decorator like -
from hypothesis import strategies as st
from hypothesis import given
import pytest
@st.composite
def test_foo(draw):
    arg1 = draw(st.integers(min_value=1))
    arg2 = draw(st.lists(st.integers(), min_size=arg1, max_size=arg1))
    print(arg1, " ", arg2)
    assert(len(arg2) == arg1)
I am unable to execute the tests in a similar way.
When using pytest I am unable to execute the tests (and executing the file with python does nothing) -
[reik@reik-msi tests]$ pytest testhypo.py
==================================== test session starts ====================================
platform linux -- Python 3.8.3, pytest-5.4.3, py-1.8.1, pluggy-0.13.1
rootdir: /home/reik/tests
plugins: hypothesis-5.16.0, lazy-fixture-0.6.3
collected 1 item
testhypo.py F [100%]
========================================= FAILURES ==========================================
_________________________________________ test_foo __________________________________________
item = <Function test_foo>

    @pytest.hookimpl(hookwrapper=True)
    def pytest_runtest_call(item):
        if not hasattr(item, "obj"):
            yield
        elif not is_hypothesis_test(item.obj):
            # If @given was not applied, check whether other hypothesis
            # decorators were applied, and raise an error if they were.
            if getattr(item.obj, "is_hypothesis_strategy_function", False):
>               raise InvalidArgument(
                    "%s is a function that returns a Hypothesis strategy, but pytest "
                    "has collected it as a test function. This is useless as the "
                    "function body will never be executed. To define a test "
                    "function, use @given instead of @composite." % (item.nodeid,)
                )
E               hypothesis.errors.InvalidArgument: testhypo.py::test_foo is a function that returns a Hypothesis strategy, but pytest has collected it as a test function. This is useless as the function body will never be executed. To define a test function, use @given instead of @composite.

/usr/lib/python3.8/site-packages/hypothesis/extra/pytestplugin.py:132: InvalidArgument
================================== short test summary info ==================================
FAILED testhypo.py::test_foo - hypothesis.errors.InvalidArgument: testhypo.py::test_foo is...
===================================== 1 failed in 0.06s =====================================
I tried adding the function call test_foo() but I got the same error.
Then I tried adding @given above the function and got a different error -
========================================== ERRORS ===========================================
________________________________ ERROR at setup of test_foo _________________________________
file /usr/lib/python3.8/site-packages/hypothesis/core.py, line 903
def run_test_as_given(test):
E fixture 'test' not found
> available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory
> use 'pytest --fixtures [testpath]' for help on them.
/usr/lib/python3.8/site-packages/hypothesis/core.py:903
================================== short test summary info ==================================
ERROR testhypo.py::test_foo
If I do @given() instead - note the extra parentheses - I get another error -
========================================= FAILURES ==========================================
_________________________________________ test_foo __________________________________________
arguments = (), kwargs = {}

    def wrapped_test(*arguments, **kwargs):
>       raise InvalidArgument(message)
E       hypothesis.errors.InvalidArgument: given must be called with at least one argument
================================== short test summary info ==================================
FAILED testhypo.py::test_foo - hypothesis.errors.InvalidArgument: given must be called wit...
I tried wrapping the code inside another function -
from hypothesis import strategies as st
from hypothesis import given
import pytest
def demo():
    @st.composite
    def test_foo(draw):
        arg1 = draw(st.integers(min_value=1))
        arg2 = draw(st.lists(st.integers(), min_size=arg1, max_size=arg1))
        print(arg1, " ", arg2)
        assert(len(arg2) == arg1)
but that did not work either -
[reik@reik-msi tests]$ python testhypo.py
[reik@reik-msi tests]$ pytest testhypo.py
==================================== test session starts ====================================
platform linux -- Python 3.8.3, pytest-5.4.3, py-1.8.1, pluggy-0.13.1
rootdir: /home/reik/tests
plugins: hypothesis-5.16.0, lazy-fixture-0.6.3
collected 0 items
=================================== no tests ran in 0.00s ===================================
(I also tried putting the demo function call at the end of the file, but that did not change the testing behaviour in any way)
The Hypothesis quick start guide says that calling the function would work, but it clearly does not. (To be fair, the documentation does not specify how to run tests with @composite.)
How do I test functions that are decorated with @strategy.composite? I do not have to use pytest - in fact I would prefer not to use it, but it seemed the easiest way to test the functions (which were decorated with @given), so I decided to go that route.
@st.composite is a helper function for defining custom strategies, not tests.
What you're trying to do can be accomplished by using @given and st.data():
@given(st.data())
def test_foo(data):
    x = data.draw(st.integers())
    ...
https://hillelwayne.com/post/property-testing-complex-inputs/ gives a good overview of how these techniques are used.
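For instance, a sketch of how the question's test_foo could be rewritten with this pattern (the max_value bound is an arbitrary addition to keep the generated lists small):
from hypothesis import given
from hypothesis import strategies as st


@given(st.data())
def test_foo(data):
    # mirror the composite version: draw a size, then a list of exactly that size
    arg1 = data.draw(st.integers(min_value=1, max_value=50))
    arg2 = data.draw(st.lists(st.integers(), min_size=arg1, max_size=arg1))
    assert len(arg2) == arg1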
I had the same issue. Here is a minimal and complete working example of using @composite.
from dataclasses import dataclass

from hypothesis import given
from hypothesis import strategies as st


@dataclass
class Container():
    id: int
    value: str


@st.composite
def generate_containers(draw):
    _id = draw(st.integers())
    _value = draw(st.characters())
    return Container(id=_id, value=_value)


@given(generate_containers())
def test_container(container):
    assert isinstance(container, Container)
    assert isinstance(container.id, int)
    assert isinstance(container.value, str)

Show exhaustive information for passed tests in pytest

When a test fails, there's an output indicating the context of the test, e.g.
=================================== FAILURES ===================================
______________________________ Test.test_sum_even ______________________________

numbers = [2, 4, 6]

    @staticmethod
    def test_sum_even(numbers):
        assert sum(numbers) % 2 == 0
>       assert False
E       assert False

test_preprocessing.py:52: AssertionError
What if I want the same thing for passed tests as well, so that I can quickly check that the parameters being passed to the tests are correct?
I tried command-line options like --full-trace, -l, --tb=long, and -rpP, but none of them works.
Any idea?
Executing pytest with the --verbose flag will cause it to list the fully qualified name of every test as it executes, e.g.,:
tests/dsl/test_ancestor.py::TestAncestor::test_finds_ancestor_nodes
tests/dsl/test_and.py::TestAnd::test_aliased_as_ampersand
tests/dsl/test_and.py::TestAnd::test_finds_all_nodes_in_both_expressions
tests/dsl/test_anywhere.py::TestAnywhere::test_finds_all_nodes_when_no_arguments_given_regardless_of_the_context
tests/dsl/test_anywhere.py::TestAnywhere::test_finds_multiple_kinds_of_nodes_regardless_of_the_context
tests/dsl/test_anywhere.py::TestAnywhere::test_finds_nodes_regardless_of_the_context
tests/dsl/test_axis.py::TestAxis::test_finds_nodes_given_the_xpath_axis
tests/dsl/test_axis.py::TestAxis::test_finds_nodes_given_the_xpath_axis_without_a_specific_tag
If you are just asking for standard output from passed test cases, then you need to pass the -s option to pytest to prevent capturing of standard output. More info about standard output suppression is available in the pytest docs.
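If the values come from pytest.mark.parametrize, the verbose listing already includes them in each test id, which is often enough for a quick sanity check. A small sketch with made-up values:
import pytest


@pytest.mark.parametrize("n", [2, 4, 6])
def test_is_even(n):
    assert n % 2 == 0

Running pytest -v then lists items such as test_preprocessing.py::test_is_even[2] PASSED, one per parameter value.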
pytest doesn't have this functionality. What it does is show you the error from the exception when an assertion fails.
A workaround is to explicitly log the information you want to see from the passing tests using Python's logging module, and then use the caplog fixture from pytest.
For example one version of a func.py could be:
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger('my-even-logger')


def is_even(numbers):
    res = sum(numbers) % 2 == 0
    if res is True:
        log.warning('Sum is Even')
    else:
        log.warning('Sum is Odd')
    # ... do stuff ...
and then a test_func.py:
import logging

import pytest

from func import is_even


@pytest.fixture
def my_list():
    numbers = [2, 4, 6]
    return numbers


def test_is_even(caplog, my_list):
    with caplog.at_level(logging.DEBUG, logger='my-even-logger'):
        is_even(my_list)
    assert 'Even' in caplog.text
If you run pytest -s test_func.py, then, since the test passes, the logger shows you the following message:
test_func.py WARNING:my-even-logger:Sum is Even

PyTest : Allow Failure Rate

I am currently working on a project where we are running a large suite of parameterized tests (>1M). The tests are randomly generated use cases, and in this large test space it is expected that certain edge cases (~1-2%) will fail in each run. Is there an implementation for pytest where you can pass a failure-rate argument, or otherwise handle this behavior?
I guess what you want is to modify the exit status of the pytest command; the pytest_sessionfinish hook can do this (the example below also relies on the internal _pytest.main constants).
consider you have following tests:
def test_spam():
    assert 0


def test_ham():
    pass


def test_eggs():
    pass
and a hook in conftest.py:
import pytest, _pytest

ACCEPTABLE_FAILURE_RATE = 50


@pytest.hookimpl()
def pytest_sessionfinish(session, exitstatus):
    if exitstatus != _pytest.main.EXIT_TESTSFAILED:
        return
    failure_rate = (100.0 * session.testsfailed) / session.testscollected
    if failure_rate <= ACCEPTABLE_FAILURE_RATE:
        session.exitstatus = 0
then invoke pytest:
$ pytest --tb=no -q tests.py
F.. [100%]
1 failed, 2 passed in 0.06 seconds
here the failure rate is 1 / 3 == 33.3%, below 50%:
$ echo $?
0
You can see that the exit status of pytest is 0.
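On newer pytest versions (5.x and later), the module-level EXIT_* constants were replaced by the pytest.ExitCode enum, so a rough equivalent of the hook above - same idea, just a different spelling of the exit codes - might look like this:
# conftest.py
import pytest

ACCEPTABLE_FAILURE_RATE = 50


@pytest.hookimpl()
def pytest_sessionfinish(session, exitstatus):
    if exitstatus != pytest.ExitCode.TESTS_FAILED:
        return
    failure_rate = (100.0 * session.testsfailed) / session.testscollected
    if failure_rate <= ACCEPTABLE_FAILURE_RATE:
        session.exitstatus = pytest.ExitCode.OK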

Writing a pytest function for checking the output on console (stdout)

This link gives a description of how to use pytest to capture console output.
I tried it on the following simple code, but I get an error:
import sys
import pytest


def f(name):
    print "hello " + name


def test_add(capsys):
    f("Tom")
    out, err = capsys.readouterr()
    assert out == "hello Tom"


test_add(sys.stdout)
Output:
python test_pytest.py
hello Tom
Traceback (most recent call last):
  File "test_pytest.py", line 12, in <module>
    test_add(sys.stdout)
  File "test_pytest.py", line 8, in test_add
    out,err=capsys.readouterr()
AttributeError: 'file' object has no attribute 'readouterr'
What is wrong and what fix is needed? Thank you.
EDIT:
As per the comment, I changed to capfd, but I still get the same error:
import sys
import pytest


def f(name):
    print "hello " + name


def test_add(capfd):
    f("Tom")
    out, err = capfd.readouterr()
    assert out == "hello Tom"


test_add(sys.stdout)
Use the capfd fixture.
Example:
def test_foo(capfd):
    foo()  # Writes "Hello World!" to stdout
    out, err = capfd.readouterr()
    assert out == "Hello World!"
See: http://pytest.org/en/latest/fixture.html for more details
And see: py.test --fixtures for a list of builtin fixtures.
Your example has a few problems. Here is a corrected version:
def f(name):
    print "hello {}".format(name)


def test_f(capfd):
    f("Tom")
    out, err = capfd.readouterr()
    assert out == "hello Tom\n"
Note:
Do not use sys.stdout -- Use the capfd fixture as-is as provided by pytest.
Run the test with: py.test foo.py
Test Run Output:
$ py.test foo.py
====================================================================== test session starts ======================================================================
platform linux2 -- Python 2.7.5 -- pytest-2.4.2
plugins: flakes, cache, pep8, cov
collected 1 items
foo.py .
=================================================================== 1 passed in 0.01 seconds ====================================================================
Also Note:
You do not need to run your Test Function(s) in your test modules. py.test (The CLI tool and Test Runner) does this for you.
py.test does mainly three things:
Collect your tests
Run your tests
Display statistics and possibly errors
By default, py.test looks for test_foo.py-style test modules and test_foo()-style test functions within them (this is configurable, iirc).
The problem is with your explicit call of your test function at the very end of your first code snippet block:
test_add(sys.stdout)
You should not do this; it is pytest's job to call your test functions.
When it does, it will recognize the name capsys (or capfd, for that matter)
and automatically provide a suitable pytest-internal object for you as a call argument.
(The example given in the pytest documentation is quite complete as it is.)
That object will provide the required readouterr() function.
sys.stdout does not have that function, which is why your program fails.
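For completeness, here is a rough Python 3 version of the corrected test (the question's code uses Python 2 print statements); capsys is used here, which captures at the Python level rather than the file-descriptor level, but capfd would work the same way:
def f(name):
    print("hello " + name)


def test_f(capsys):
    f("Tom")
    out, err = capsys.readouterr()
    assert out == "hello Tom\n"  # print() appends a newline

Run it with pytest, without calling test_f yourself.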
