Using @pytest.mark.incremental and metafunc.parametrize in a pytest test class - python

The purpose of @pytest.mark.incremental is that if one test fails, the tests after it are marked as expected to fail.
However, when I use it in conjunction with parametrization, I get undesired behavior.
For example, in the case of this fake code:
# conftest.py:
import pytest

def pytest_generate_tests(metafunc):
    metafunc.parametrize("input", [True, False, None, False, True])

def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords:
        if call.excinfo is not None:
            parent = item.parent
            parent._previousfailed = item

def pytest_runtest_setup(item):
    if "incremental" in item.keywords:
        previousfailed = getattr(item.parent, "_previousfailed", None)
        if previousfailed is not None:
            pytest.xfail("previous test failed (%s)" % previousfailed.name)
# test.py:
import pytest

@pytest.mark.incremental
class TestClass:
    def test_input(self, input):
        assert input is not None

    def test_correct(self, input):
        assert input == True
I'd expect the test class to run
test_input on True,
followed by test_correct on True,
followed by test_input on False,
followed by test_correct on False,
followed by test_input on None,
followed by (xfailed) test_correct on None, etc etc.
Instead, what happens is that the test class
runs test_input on True,
then runs test_input on False,
then runs test_input on None,
then marks everything from that point onwards as xfailed (including the test_corrects).
What I assume is happening is that parametrization takes priority over proceeding through the functions in a class. The question is whether it is possible to override this behaviour or work around it somehow, as the current situation makes marking a class as incremental completely useless to me.
(is the only way to handle this to copy-paste the code for the class over and over, each time with different parameters? The thought is repulsive to me)

The solution to this is described in https://docs.pytest.org/en/latest/example/parametrize.html under the header "A quick port of 'testscenarios'".
The code listed there is below. The hook in conftest.py looks for a scenarios variable in the test class; when it finds one, it iterates over each scenario, expecting an id string with which to label the test and a dictionary of argname: argvalue pairs.
# content of conftest.py
def pytest_generate_tests(metafunc):
    idlist = []
    argvalues = []
    for scenario in metafunc.cls.scenarios:
        idlist.append(scenario[0])
        items = scenario[1].items()
        argnames = [x[0] for x in items]
        argvalues.append([x[1] for x in items])
    metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class")
# content of test_scenarios.py
scenario1 = ('basic', {'attribute': 'value'})
scenario2 = ('advanced', {'attribute': 'value2'})

class TestSampleWithScenarios(object):
    scenarios = [scenario1, scenario2]

    def test_demo1(self, attribute):
        assert isinstance(attribute, str)

    def test_demo2(self, attribute):
        assert isinstance(attribute, str)
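Collected this way, the scenario ids become part of the test ids; the collection output looks roughly like this (sketch based on the pytest docs' example, exact ordering may vary by version):
$ pytest test_scenarios.py --collect-only -q
test_scenarios.py::TestSampleWithScenarios::test_demo1[basic]
test_scenarios.py::TestSampleWithScenarios::test_demo2[basic]
test_scenarios.py::TestSampleWithScenarios::test_demo1[advanced]
test_scenarios.py::TestSampleWithScenarios::test_demo2[advanced]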
You can also modify pytest_generate_tests to accept different input datatypes. For example, if you have a list that you would usually pass to
@pytest.mark.parametrize("varname", varval_list)
you can reuse that same list in the following way:
# content of conftest.py
def pytest_generate_tests(metafunc):
    idlist = []
    argvalues = []
    argnames = metafunc.cls.scenario_keys
    for idx, scenario in enumerate(metafunc.cls.scenario_parameters):
        idlist.append(str(idx))
        argvalues.append([scenario])
    metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class")

# content of test_scenarios.py
varval_list = ['a', 'b', 'c', 'd']  # placeholder values

class TestSampleWithScenarios(object):
    scenario_parameters = varval_list
    scenario_keys = ['varname']

    def test_demo1(self, varname):
        assert isinstance(varname, str)

    def test_demo2(self, varname):
        assert isinstance(varname, str)
The id will be an autogenerated number (you can change that to something you specify). As written, this implementation won't handle multiple parametrization variables, so you would have to pack those into a single list, or extend pytest_generate_tests to handle them for you, as sketched below.
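A hedged sketch of such an extension, assuming each row of scenario_parameters is a tuple with one value per name in scenario_keys (the names and layout here are illustrative, not from the answer above):
# content of conftest.py (sketch)
def pytest_generate_tests(metafunc):
    if metafunc.cls is None or not hasattr(metafunc.cls, "scenario_parameters"):
        return  # leave non-scenario tests alone
    argnames = metafunc.cls.scenario_keys  # e.g. ['varname', 'expected']
    # each row is a tuple of values, one per argname
    argvalues = [list(row) for row in metafunc.cls.scenario_parameters]
    idlist = [str(idx) for idx, _ in enumerate(argvalues)]
    metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class")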

The following solution does not require changing your test class:
from collections import defaultdict

import pytest

_test_failed_incremental = defaultdict(dict)

def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords:
        if call.excinfo is not None and call.excinfo.typename != "Skipped":
            param = tuple(item.callspec.indices.values()) if hasattr(item, "callspec") else ()
            _test_failed_incremental[str(item.cls)].setdefault(param, item.originalname or item.name)

def pytest_runtest_setup(item):
    if "incremental" in item.keywords:
        param = tuple(item.callspec.indices.values()) if hasattr(item, "callspec") else ()
        originalname = _test_failed_incremental[str(item.cls)].get(param)
        if originalname:
            pytest.xfail("previous test failed ({})".format(originalname))
It works by keeping a dictionary of failed tests, keyed per class and per index of the parametrized input, with the name of the test method that failed as the value.
In your example, the dictionary _test_failed_incremental will be
defaultdict(<class 'dict'>, {"<class 'test.TestClass'>": {(2,): 'test_input'}})
showing that the 3rd run (index=2) has failed for the class test.TestClass.
Before running a test method in the class for a given parameter, it checks whether any previous test method in the class has failed for that parameter, and if so, xfails the test with the name of the method that first failed.
Not 100% tested but in use and working for my needs.
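One side note (my addition, not part of the answer): recent pytest versions warn about unregistered marks, so the custom incremental mark is best declared in pytest.ini:
# pytest.ini
[pytest]
markers =
    incremental: marks a test class whose tests depend on earlier ones passing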

Related

Indirect fixture doesn't exist when parametrising tests with fixtures

I'm using pytest and I'm trying to pass a fixture as a value in a parametrised test. I read the documentation for parametrised tests and the indirect=True parameter, but when I try to run it I keep getting an error message. I require the fixtures of different objects for different tests, and I have the code below:
# in user.py
class User(object):
    def __init__(self, first_name: str, last_name: str):
        self.first_name = first_name
        self.last_name = last_name

    def create_full_name(self):
        return f'{self.first_name} {self.last_name}'
# in conftest.py
import pytest
import user

@pytest.fixture(scope='function')
def normal_user():
    '''Returns a normal user'''
    normal_user = user.User(first_name='katherine', last_name='rose')
    yield normal_user

@pytest.fixture(scope='function')
def missing_details():
    '''Returns a user without a last name'''
    missing_details = user.User(first_name=' ', last_name='rose')
    yield missing_details
# in test_user.py
import pytest

@pytest.mark.parametrize('user, expected', [('normal_user', 'katherine rose'),
                                            ('missing_details', TypeError)],
                         indirect=['normal_user', 'missing_details'])
def test_parametrize_user_full_name(user, expected):
    assert user.create_full_name(user) == expected
The error message I keep getting is:
In test_parametrize_user_full_name: indirect fixture 'normal_user' doesn't exist
Is it necessary to specify which fixtures should be indirect in conftest.py or am I writing the code for the parametrised test incorrectly?
This is not how indirect parametrization works. You have to reference one fixture with the indirect parameter, and the fixture will return the actual value based on the value in the parameter value list. You can find an example in the documentation.
In your case you would need something like:
import pytest
from user import User

@pytest.fixture
def user(request):
    if request.param == 'normal_user':
        # Returns a normal user
        yield User(first_name='katherine', last_name='rose')
    if request.param == 'missing_details':
        # Returns a user without a last name
        yield User(first_name=' ', last_name='rose')

@pytest.mark.parametrize('user, expected', [('normal_user', 'katherine rose'),
                                            ('missing_details', TypeError)],
                         indirect=['user'])
def test_parametrize_user_full_name(user, expected):
    ...
As an aside: comparing the result against an exception will not work the way you do it, but I guess this is only a dumbed down example that is not expected to work.
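If the error case should be covered as well, one hedged way is to branch on whether the expected value is an exception class (my sketch, not part of the original answer; whether create_full_name actually raises here depends on the User implementation):
import pytest

@pytest.mark.parametrize('user, expected', [('normal_user', 'katherine rose'),
                                            ('missing_details', TypeError)],
                         indirect=['user'])
def test_parametrize_user_full_name(user, expected):
    if isinstance(expected, type) and issubclass(expected, Exception):
        # expected is an exception class: assert that it is raised
        with pytest.raises(expected):
            user.create_full_name()
    else:
        assert user.create_full_name() == expected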

Reset class and class variables for each test in Python via pytest

I created a class to make my life easier while doing some integration tests involving workers and their contracts. The code looks like this:
class ContractID(str):
    contract_counter = 0
    contract_list = list()

    def __new__(cls):
        cls.contract_counter += 1
        # zero-padding (010d) produces 'Some_internal_name-0000000001' etc.
        new_entry = super().__new__(cls, f'Some_internal_name-{cls.contract_counter:010d}')
        cls.contract_list.append(new_entry)
        return new_entry

    @classmethod
    def get_contract_no(cls, worker_number):
        return cls.contract_list[worker_number-1]  # -1 so WORKER1 has contract #1 and not #0 etc.
When I'm unit-testing the class, I'm using the following code:
from test_helpers import ContractID
#pytest.fixture
def get_contract_numbers():
test_string_1 = ContractID()
test_string_2 = ContractID()
test_string_3 = ContractID()
return test_string_1, test_string_2, test_string_3
def test_contract_id(get_contract_numbers):
assert get_contract_ids[0] == 'Some_internal_name-0000000001'
assert get_contract_ids[1] == 'Some_internal_name-0000000002'
assert get_contract_ids[2] == 'Some_internal_name-0000000003'
def test_contract_id_get_contract_no(get_contract_numbers):
assert ContractID.get_contract_no(1) == 'Some_internal_name-0000000001'
assert ContractID.get_contract_no(2) == 'Some_internal_name-0000000002'
assert ContractID.get_contract_no(3) == 'Some_internal_name-0000000003'
with pytest.raises(IndexError) as py_e:
ContractID.get_contract_no(4)
assert py_e.type == IndexError
However, when I try to run these tests, the second one (test_contract_id_get_contract_no) fails because it does not raise the error: by then there are more than three values in the list. Furthermore, when I run all the tests in my test/ folder, even the first test (test_contract_id) fails, probably because I use this class in other tests that run before it.
After reading this book, my understanding of fixtures was that they provide objects as if they had never been created before, which is obviously not the case here. Is there a way to tell the tests to use the class as if it hadn't been used anywhere else before?
If I understand correctly, you want to run the fixture as setup code, so that your class has exactly 3 instances. If the fixture is function-scoped (the default), it is indeed run before each test, each time creating 3 new instances of your class. If you want to reset your class after the test, you have to do this yourself; there is no way pytest can guess what you want to do here.
So, a working solution would be something like this:
@pytest.fixture(autouse=True)
def get_contract_numbers():
    test_string_1 = ContractID()
    test_string_2 = ContractID()
    test_string_3 = ContractID()
    yield
    ContractID.contract_counter = 0
    ContractID.contract_list.clear()

def test_contract_id():
    ...
Note that I did not yield the test strings, as you don't need them in the shown tests - if you need them, you can yield them, of course. I also added autouse=True, which makes sense if you need this for all tests, so you don't have to reference the fixture in each test.
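If the tests do need the instances, a variant of the same fixture could yield them (a sketch, keeping the teardown identical):
@pytest.fixture(autouse=True)
def get_contract_numbers():
    # yield the three instances so tests can use them directly
    strings = (ContractID(), ContractID(), ContractID())
    yield strings
    # teardown: reset the class state for the next test
    ContractID.contract_counter = 0
    ContractID.contract_list.clear()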
Another possibility would be to use a session-scoped fixture. In this case the setup would be done only once. If that is what you need, you can use this instead:
@pytest.fixture(autouse=True, scope="session")
def get_contract_numbers():
    test_string_1 = ContractID()
    test_string_2 = ContractID()
    test_string_3 = ContractID()
    yield

Is it possible to globally set a default `ids` function for pytest's parametrize?

pytest.mark.parametrize accepts an ids argument which can be a callable, like this:
import pytest

def test_id_builder(arg):
    if isinstance(arg, int):
        return str(arg)
    ...  # more logic

@pytest.mark.parametrize('value', [1, 2], ids=test_id_builder)
def test_whatever(value):
    assert value > 0
This will generate two test cases, with the ids "1" and "2" respectively. The problem is that I have a lot of tests, organized in multiple classes and files. Because of that, I'd like to globally set test_id_builder as the ids function for all parametrized tests in my project. Is there a way to do this?
Simply implement a custom pytest_make_parametrize_id hook. In your conftest.py:
def pytest_make_parametrize_id(config, val, argname):
    if isinstance(val, int):
        return f'{argname}={val}'
    if isinstance(val, str):
        return f'text is {val}'
    # return None to let pytest handle the formatting
    return None
Example tests:
import pytest

@pytest.mark.parametrize('n', range(3))
def test_int(n):
    assert True

@pytest.mark.parametrize('s', ('fizz', 'buzz'))
def test_str(s):
    assert True

@pytest.mark.parametrize('c', (tuple(), list(), set()))
def test_unhandled(c):
    assert True
Check the test parametrizing:
$ pytest -q --collect-only
test_spam.py::test_int[n=0]
test_spam.py::test_int[n=1]
test_spam.py::test_int[n=2]
test_spam.py::test_str[text is fizz]
test_spam.py::test_str[text is buzz]
test_spam.py::test_unhandled[c0]
test_spam.py::test_unhandled[c1]
test_spam.py::test_unhandled[c2]
no tests ran in 0.06 seconds
You can make your custom parametrize:
import pytest

def id_builder(arg):
    if isinstance(arg, int):
        return str(arg) * 2

def custom_parametrize(*args, **kwargs):
    kwargs.setdefault('ids', id_builder)
    return pytest.mark.parametrize(*args, **kwargs)

@custom_parametrize('value', [1, 2])
def test_whatever(value):
    assert value > 0
And to avoid rewriting pytest.mark.parametrize to custom_parametrize everywhere use this well-known workaround:
old_parametrize = pytest.mark.parametrize

def custom_parametrize(*args, **kwargs):
    kwargs.setdefault('ids', id_builder)
    return old_parametrize(*args, **kwargs)

pytest.mark.parametrize = custom_parametrize
There is no way to globally set ids, but you can use pytest_generate_tests to generate tests from some other fixture. That other fixture could be scoped to the session, which overall will mimic the intended behaviour.
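A hedged sketch of that idea, parametrizing centrally from conftest.py so the id function is applied in one place (the argument name 'value' is an assumption for illustration):
# conftest.py (sketch)
def id_builder(arg):
    if isinstance(arg, int):
        return str(arg)

def pytest_generate_tests(metafunc):
    # parametrize every test that declares a 'value' argument,
    # applying id_builder as the ids callable in one place
    if 'value' in metafunc.fixturenames:
        metafunc.parametrize('value', [1, 2], ids=id_builder)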

Pytest: Running all tests based on the number of times specified in config.ini

Here's how I have the tests written:
**config.ini**
idlist: 1

class MyConfig:
    def __init__(self):
        self.id = config.idlist  # reads idlist from config.ini
        ....

**conftest.py**
@pytest.fixture(scope='module')
def obj():
    myobj = MyConfig()
    yield myobj

@pytest.fixture(scope='module')
def get_id(obj):
    yield obj.id

**test_mytests.py**
def test_a_sample_test(get_id):
    assert get_id == 1

def test_a_sample_even_test(get_id):
    assert get_id % 2 == 0
Now, I want to change idlist (from config.ini) to a list of numbers, as below:
idlist = [1, 2, 3, 4, ....]
I want to automatically run all the tests that begin with test_ once per id in idlist, as depicted below:

new config.ini
idlist: id1, id2, id3, id4, ... idN

def get_id(obj):
    for anId in obj.id:
        yield anId  # <--- notice that the ids change

finally the tests..
**test_mytests.py**
def test_a_sample_test(get_id):
    assert get_id == 1

def test_a_sample_even_test(get_id):
    assert get_id % 2 == 0
I want to:
Invoke get_id to yield a different id each time.
Run the 2 tests for each id yielded by get_id, since the id changed (basically repeat the entire test suite/session for each id).
How can I do that?
I don't know the list of ids in advance, so I can't write a pytest.mark.parametrize() before each test; the ids change and are not constant.
pytest.fixture takes a params list:
params – an optional list of parameters which will cause multiple invocations of the fixture function and all of the tests using it.
A sketch of this approach follows.
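This sketch assumes config.ini is given an [ids] section that configparser can read; the section name and parsing are my assumptions, not from the question:
# conftest.py (sketch)
import configparser

import pytest

def _read_idlist():
    # assumes config.ini contains:
    #   [ids]
    #   idlist = 1, 2, 3, 4
    parser = configparser.ConfigParser()
    parser.read('config.ini')
    return [int(x) for x in parser['ids']['idlist'].split(',')]

@pytest.fixture(scope='module', params=_read_idlist())
def get_id(request):
    # every test requesting get_id runs once per id in the list
    return request.param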
You can use the @pytest.mark.parametrize decorator for parametrizing test functions:
The builtin pytest.mark.parametrize decorator enables parametrization of arguments for a test function. Here is a typical example of a test function that implements checking that a certain input leads to an expected output
# take the following example and adjust to your needs
import pytest

def is_even(n):  # stand-in for the code under test
    return n % 2 == 0

@pytest.mark.parametrize("_id,expected", [
    (1, False),
    (2, True),
    (3, False),
])
def test_a_sample_even(_id, expected):
    assert expected == is_even(_id)

Running tests against a JSON string - python

I'm currently trying to run a number of tests against a JSON string there are however a few difficulties that I am encountering.
Here's what I have so far.
class PinpyTests(jsonstr, campaign):
    data = json.loads(jsonstr)
    test = False

    def dwellTest(self):
        if self.data.get('dwellTime', None) is not None:
            if self.data.get('dwellTime') >= self.campaign.needed_dwellTime:
                # Result matches, dwell time test passed.
                self.test = True

    def proximityTest(self):
        if self.data.get('proximity', None) is not None:
            if self.data.get('proximity') == self.campaign.needed_proximity:
                # Result matches, proximity passed.
                self.test = True
Basically, I need a test to run only if its key exists in the JSON string: if proximity is present in the string, it runs the proximity test, and so on (there could be more tests, not just these two).
The issue arises when both tests are present and both need to return true. If both return true, the class can return true. However, if dwell fails and proximity passes, I still need the overall result to fail, because not all the tests passed. I'm slightly baffled as to how to continue.
For starters, your class is defined incorrectly; what you probably want is an __init__ function. To achieve your desired result, I would suggest adding a testAll method that checks for each test in your JSON and then runs that test.
import json

class PinpyTests(object):
    def __init__(self, jsonstr, campaign):
        self.data = json.loads(jsonstr)
        self.campaign = campaign

    def testAll(self):
        passed = True
        if self.data.get('dwellTime') is not None:
            passed = passed and self.dwellTest()
        if self.data.get('proximity') is not None:
            passed = passed and self.proximityTest()
        return passed

    def dwellTest(self):
        if self.data.get('dwellTime') >= self.campaign.needed_dwellTime:
            # Result matches, dwell time test passed.
            return True
        return False

    def proximityTest(self):
        if self.data.get('proximity') == self.campaign.needed_proximity:
            # Result matches, proximity passed.
            return True
        return False
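A possible usage sketch (the campaign object and its needed_* values are stand-ins; the question does not show their definitions):
import json
from types import SimpleNamespace

campaign = SimpleNamespace(needed_dwellTime=30, needed_proximity='near')
data = json.dumps({'dwellTime': 45, 'proximity': 'far'})

checks = PinpyTests(data, campaign)
print(checks.testAll())  # False: dwellTest passes but proximityTest fails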
