I am trying to teach myself pytest and want to understand the differences between parametrizing test data with @pytest.fixture(params=[]) and @pytest.mark.parametrize(). I've set up the code below to see how both work, and they both return the same result. However, I'm not sure if there are use cases where one method is preferred over the other. Are there benefits to using one over the other?
import pytest

@pytest.fixture(params=["first parameter", "second parameter", "third parameter"])
def param_fixture(request):
    return request.param

def parametrize_list():
    return ["first parameter", "second parameter", "third parameter"]

def test_using_fixture_params(param_fixture):
    """Tests parametrization with fixture(params=[])"""
    assert "parameter" in param_fixture

@pytest.mark.parametrize("param", parametrize_list())
def test_using_mark_parametrize(param):
    """Tests parametrization with mark.parametrize()"""
    assert "parameter" in param
The above code has the following result.
test_parametrization.py::test_using_fixture_params[first parameter] PASSED
test_parametrization.py::test_using_fixture_params[second parameter] PASSED
test_parametrization.py::test_using_fixture_params[third parameter] PASSED
test_parametrization.py::test_using_mark_parametrize[first parameter] PASSED
test_parametrization.py::test_using_mark_parametrize[second parameter] PASSED
test_parametrization.py::test_using_mark_parametrize[third parameter] PASSED
Fixtures are typically used to load data structures or set up state that is then passed into test functions. @pytest.mark.parametrize is the preferred way to run the same test over lots of different inputs (as you have above).
This was a handy resource when starting out: https://realpython.com/pytest-python-testing/
For example:
import pytest
from data_module import Data

class TestData:
    """
    Load and test data
    """
    @pytest.fixture(autouse=True)
    def data(self):
        return Data()

    def test_fixture(self, data):
        result = do_test(data)  # do_test is a placeholder helper from this answer
        assert result

    @pytest.mark.parametrize('options', ['option1', 'option2', 'option3'])
    def test_with_parameterisation(self, data, options):
        result = result_with_paramaterisation(data, options)  # placeholder helper
        assert result
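One practical difference worth keeping in mind (my own observation, not something the answer above spells out): a parametrized fixture is reused by every test that requests it, so several tests run against all parameters without repeating the list, whereas @pytest.mark.parametrize has to be applied to each test function individually. A minimal sketch using the same data as your example:

import pytest

# The fixture carries the parameters, so every test requesting it runs once per value.
@pytest.fixture(params=["first parameter", "second parameter", "third parameter"])
def param_fixture(request):
    return request.param

def test_contains_parameter(param_fixture):
    assert "parameter" in param_fixture

def test_is_string(param_fixture):
    # Also runs three times, with no extra decoration needed.
    assert isinstance(param_fixture, str)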
I'm trying to test the order of the sub-functions inside of the main function:
def get_data():
    pass

def process_data(data):
    pass

def notify_admin(action):
    pass

def save_data(data):
    pass

def main_func():
    notify_admin('start')
    data = get_data()
    processed_data = process_data(data)
    save_data(processed_data)
    notify_admin('finish')
I'm using pytest, so far I've come up with this:
import pytest
from unittest.mock import patch, Mock, call
from main_func import main_func

@patch('main_func.notify_admin')
@patch('main_func.get_data')
@patch('main_func.process_data')
@patch('main_func.save_data')
def test_main_func(mock_4, mock_3, mock_2, mock_1):
    execution_order = [mock_1, mock_2, mock_3, mock_4]
    order_mock = Mock()
    for order, mock in enumerate(execution_order, start=1):
        order_mock.attach_mock(mock, f'f_{order}')

    main_func()

    order_mock.assert_has_calls([
        call.f_1(),
        call.f_2(),
        call.f_3(),
        call.f_4(),
        call.f_1(),
    ])
This is the error I get, which I'm not sure how to resolve:
E AssertionError: Calls not found.
E Expected: [call.f_1(), call.f_2(), call.f_3(), call.f_4(), call.f_1()]
E Actual: [call.f_1('start'),
E call.f_2(),
E call.f_3(<MagicMock name='mock.f_3()' id='2049968460848'>),
E call.f_4(<MagicMock name='mock.f_2()' id='2049968489424'>),
E call.f_1('finish')]
Could you please suggest ways to resolve it or maybe implement it in a different way?
I've read the documentation of assert_has_calls, but I'm still not sure how to use it for this particular case.
If you want to check the call order without the argument list, you can use the method_calls attribute of the mock, which contains a list of calls in the order they were made, and check only their names:
...
main_func()
assert len(order_mock.method_calls) == 5
assert order_mock.method_calls[0][0] == "f_1"
assert order_mock.method_calls[1][0] == "f_2"
assert order_mock.method_calls[2][0] == "f_3"
assert order_mock.method_calls[3][0] == "f_4"
assert order_mock.method_calls[4][0] == "f_1"
Each method call is a tuple of name, positional arguments and keyword arguments, so if you want to check only the name you can just use the first index.
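For example, a recorded method call can be unpacked directly (just an illustration of that tuple structure, using the mocks from your test):

name, args, kwargs = order_mock.method_calls[0]
assert name == "f_1"
assert args == ("start",)  # notify_admin was called with 'start'
assert kwargs == {}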
Note that the arguments shown in your test output do not quite match what main_func should produce, but that is a matter of your actual application logic rather than of the call-order check.
If you want to use assert_has_calls, you have to provide the arguments of each call as well, which is also possible. Based on your main_func, something like this should work:
...
main_func()
order_mock.assert_has_calls([
    call.f_1('start'),
    call.f_2(),
    call.f_3(mock_2.return_value),  # process_data is called with get_data's return value
    call.f_4(mock_3.return_value),  # save_data is called with process_data's return value
    call.f_1('finish'),
])
I have a mark, let's say specific_case = pytest.mark.skipif(<CONDITION>), which I need to apply to some test cases. I want the property value to return a different value when the mark is applied. This is my simplified code:
module.py:
import pytest

class A():
    @property
    def value(self):
        _marks = pytest.mark._markers  # current code to get applied marks list
        if 'specific_case' in _marks:
            return 1
        else:
            return 2
test_1.py:
import pytest
from module import A

pytestmark = [pytest.mark.test_id.TC_1, pytest.mark.specific_case]

def test_1():
    a = A()
    assert a.value == 1
But that doesn't work, because pytest.mark._markers returns set(['TC_1', 'skipif']) rather than the exact pytestmark list (I expect set(['TC_1', 'specific_case']), or at least pytestmark as it is: [pytest.mark.test_id.TC_1, pytest.mark.specific_case]).
So is there any way I can access the exact pytestmark list outside a test function?
P.S. I also found some tips on how to get the mark list using fixtures, but I have to stick to the current implementation of module.py and test_1.py, so I cannot use a fixture.
Also, there are many other marks with skip conditions (specific_case_2 = pytest.mark.skipif(<CONDITION_2>), specific_case_3 = pytest.mark.skipif(<CONDITION_3>), ...), so I cannot simply use an if 'skipif' in _marks check.
Since your module.py accesses pytest marks, it is safe to assume that it is part of the test code.
With that said, if you are open to changing the class property A.value into a pytest fixture, this alternative solution might work for you. Otherwise, it won't suffice.
Alternative Solution
Instead of using pytest.mark._markers to retrieve the marks list, use request.keywords.
From the pytest docs, class FixtureRequest:
keywords: Keywords/markers dictionary for the underlying node.
import pytest

# Data
class A():
    @property
    def value(self):
        _marks = pytest.mark._markers  # Current code to get applied marks list
        print("Using class property A.value:", list(_marks))
        if 'specific_case' in _marks:
            return 1
        else:
            return 2

@pytest.fixture
def a_value(request):
    # This fixture can be in conftest.py so all test files can see it.
    # Or use pytest_plugins to include the file containing this.
    _marks = request.keywords  # Alternative style of getting applied marks list
    print("Using pytest fixture a_value:", list(_marks))
    if 'specific_case' in _marks:
        return 1
    else:
        return 2

# Tests
pytestmark = [pytest.mark.test_id, pytest.mark.specific_case]

def test_first():
    a = A()
    assert a.value != 1  # 'specific_case' was not recognized as a marker

def test_second(a_value):
    assert a_value == 1  # 'specific_case' was recognized as a marker
Output:
pytest -q -rP --disable-pytest-warnings
.. [100%]
================================================================================================= PASSES ==================================================================================================
_______________________________________________________________________________________________ test_first ________________________________________________________________________________________________
------------------------------------------------------------------------------------------ Captured stdout call -------------------------------------------------------------------------------------------
Using class property A.value: ['parametrize', 'skipif', 'skip', 'trylast', 'filterwarnings', 'tryfirst', 'usefixtures', 'xfail']
_______________________________________________________________________________________________ test_second _______________________________________________________________________________________________
------------------------------------------------------------------------------------------ Captured stdout setup ------------------------------------------------------------------------------------------
Using pytest fixture a_value: ['specific_case', '2', 'test_1.py', 'test_second', 'test_id']
2 passed, 2 warnings in 0.01s
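As a side note (my own addition, and only relevant if the fixture route is acceptable after all): instead of request.keywords, the fixture could also use request.node.iter_markers(), which yields the marks actually applied to the collected test, including module-level pytestmark entries. A sketch mirroring the a_value fixture above:

import pytest

@pytest.fixture
def a_value(request):
    # iter_markers() walks the markers applied to this test item,
    # including those coming from the module-level pytestmark list.
    applied = {mark.name for mark in request.node.iter_markers()}
    return 1 if "specific_case" in applied else 2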
How do you write a fixture (a method) that yields/returns parameterized test parameters?
For instance, I have a test as the following:
@pytest.mark.parametrize(
    "input,expected",
    [("hello", "hello"),
     ("world", "world")])
def test_get_message(self, input, expected):
    assert expected == MyClass.get_message(input)
Instead of having input and expected passed via @pytest.mark.parametrize, I am interested in an approach like the following:
@pytest.fixture(scope="session")
def test_messages(self):
    # what should I write here to return multiple
    # test cases with an expected value for each?
    pass

def test_get_message(self, test_messages):
    expected = test_messages["expected"]  # somehow extracted from test_messages?
    input = test_messages["input"]  # somehow extracted from test_messages?
    assert expected == MyClass.get_message(input)
To move the parameters into a fixture, you can use fixture params:
@pytest.fixture(params=[("hello", "hello"),
                        ("world", "world")], scope="session")
def test_messages(self, request):
    return request.param

def test_get_message(self, test_messages):
    input = test_messages[0]
    expected = test_messages[1]
    assert expected == MyClass.get_message(input)
You can also put the params into a separate function (same as with parametrize), e.g.
def get_test_messages():
    return [("hello", "hello"), ("world", "world")]

@pytest.fixture(params=get_test_messages(), scope="session")
def test_messages(self, request):
    return request.param
To me, it seems you want to return an array of dicts:
@pytest.fixture(scope="session")
def test_messages():
    return [
        {
            "input": "hello",
            "expected": "hello"
        },
        {
            "input": "world",
            "expected": "world"
        }
    ]
To use it in a test case, you would need to iterate over the array:
def test_get_message(self, test_messages):
    for test_data in test_messages:
        input = test_data["input"]
        expected = test_data["expected"]
        assert input == expected
But I'm not sure if this is the best approach, because it's still considered a single test case, and so it will show up as only one test in the output/report.
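If separate test cases per entry are wanted while keeping the dict shape, the same list could instead be fed into fixture params, along the lines of the first answer (a sketch; the ids argument is only there to make the report readable):

import pytest

MESSAGES = [
    {"input": "hello", "expected": "hello"},
    {"input": "world", "expected": "world"},
]

# Each dict becomes its own test case in the report.
@pytest.fixture(params=MESSAGES, ids=lambda m: m["input"], scope="session")
def test_messages(request):
    return request.param

def test_get_message(test_messages):
    assert test_messages["input"] == test_messages["expected"]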
Let's say I have the following test class in a tests.py:
class MyTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls, ip="11.111.111.111",
                   browserType="Chrome",
                   port="4444",
                   h5_client_url="https://somelink.com/",
                   h5_username="username",
                   h5_password="pass"):
        cls.driver = get_remote_webdriver(ip, port, browserType)
        cls.driver.implicitly_wait(30)
        cls.h5_client_url = h5_client_url
        cls.h5_username = h5_username
        cls.h5_password = h5_password

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()

    def test_01(self):
        # test code

    def test_02(self):
        # test code

    ...

    def test_N(self):
        # test code
All my tests (test_01 to test_N) use the parameters provided in setUpClass. Those parameters have default values:
ip="11.111.111.111",
browserType="Chrome",
port="4444",
h5_client_url="https://somelink.com/",
h5_username="username",
h5_password="pass"
So I wonder if I can inject new values for those parameters. I want to do it from another Python script, so that there are no changes (or only minor changes) to the test code.
Note: I want to run my tests from a batch/shell command and save the test output to a log file (i.e. redirect the standard output to that log file).
One thing I did was to create a function decorator that passes a dictionary with key=parameter_name and value=parameter_new_value, but I had to write too much additional code in tests.py:
I defined the function_decorator logic
I put that @function_decorator annotation above every function I want to decorate
That function decorator needs that dictionary as a parameter, so I made a main that looks something like this:
if __name__ == '__main__':
    # terminal command to run the tests should look like this (it is executed by the run-test PARROT command):
    # python [this_module_name] [dictionary_containing_parameters] [log_file.log] *[tests]
    parser = argparse.ArgumentParser()
    # add dictionary_containing_parameters as the script's first parameter, test_log_file as the second and tests as the rest
    parser.add_argument('dictionary_containing_parameters')
    parser.add_argument('test_log_file')
    parser.add_argument('unittest_args', nargs='*')
    args = parser.parse_args()

    dictionary_containing_parameters = sys.argv[1]
    test_log_file = sys.argv[2]
    # remove "dictionary_containing_parameters" and "test_log_file" from sys.argv - otherwise an error occurs in the unittest TestRunner
    sys.argv[1:] = args.unittest_args

    # execute the test(s) and save the output to test_log_file
    with open(test_log_file, "w") as f:
        runner = unittest.TextTestRunner(f)
        unittest.main(defaultTest=sys.argv[1:], exit=False, testRunner=runner)
Here is one possible solution:
You run your test from a different module this way:
import sys
import unittest

if __name__ == '__main__':
    testbed_dict = {"ip": "11.111.111.112",
                    "browserType": "Chrome",
                    "port": "4444",
                    "h5_client_url": "https://new_somelink.com/",
                    "h5_username": "new_username",
                    "h5_password": "new_pass"}
    sys.argv.append(testbed_dict)

    from your_tests_module import *

    with open("test.log", "w") as f:
        runner = unittest.TextTestRunner(f)
        unittest.main(argv=[sys.argv[0]], defaultTest='test_class.test_name', exit=False, testRunner=runner)
Notice the argv=[sys.argv[0]] in unittest.main(argv=[sys.argv[0]], defaultTest='test_class.test_name', exit=False, testRunner=runner). That way unittest only sees a single argument (so no error occurs), while sys.argv keeps your real arguments. Note that at the end of that list is the dictionary with the new values for the test parameters.
OK, now you write a function decorator that should look like this:
from functools import wraps

def load_params(system_arguments_list):
    def decorator(func_to_decorate):
        @wraps(func_to_decorate)
        def wrapper(self, *args, **kwargs):
            kwargs = system_arguments_list[-1]
            return func_to_decorate(self, **kwargs)
        return wrapper
    return decorator
And use this decorator this way:
@classmethod
@load_params(sys.argv)
def setUpClass(cls, ip="11.111.111.111",
               browserType="Chrome",
               port="4444",
               h5_client_url="https://somelink.com/",
               h5_username="username",
               h5_password="pass"):
    cls.driver = get_remote_webdriver(ip, port, browserType)
    cls.driver.implicitly_wait(30)
    cls.h5_client_url = h5_client_url
    cls.h5_username = h5_username
    cls.h5_password = h5_password
The purpose of @pytest.mark.incremental is that if one test fails, the tests after it are marked as expected to fail.
However, when I use this in conjunction with parametrization I get undesired behavior.
For example, in the case of this fake code:
# conftest.py:
import pytest

def pytest_generate_tests(metafunc):
    metafunc.parametrize("input", [True, False, None, False, True])

def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords:
        if call.excinfo is not None:
            parent = item.parent
            parent._previousfailed = item

def pytest_runtest_setup(item):
    if "incremental" in item.keywords:
        previousfailed = getattr(item.parent, "_previousfailed", None)
        if previousfailed is not None:
            pytest.xfail("previous test failed (%s)" % previousfailed.name)
# test.py:
import pytest

@pytest.mark.incremental
class TestClass:
    def test_input(self, input):
        assert input is not None

    def test_correct(self, input):
        assert input == True
I'd expect the test class to run
test_input on True,
followed by test_correct on True,
followed by test_input on False,
followed by test_correct on False,
folowed by test_input on None,
followed by (xfailed) test_correct on None, etc etc.
Instead, what happens is that the test class
runs test_input on True,
then runs test_input on False,
then runs test_input on None,
then marks everything from that point onwards as xfailed (including the test_corrects).
What I am assuming is happening is that parametrization takes priority over proceeding through functions in a class. The question is if it is possible to override this behaviour or work around it somehow, as the current situation makes marking a class as incremental completely useless to me.
(is the only way to handle this to copy-paste the code for the class over and over, each time with different parameters? The thought is repulsive to me)
The solution to this is described in https://docs.pytest.org/en/latest/example/parametrize.html under the header A quick port of “testscenarios”
This is the code listed there. The code in conftest.py looks for a scenarios variable in the test class. When it finds it, it iterates over each item of scenarios and expects an id string with which to label the test plus a dictionary of argnames: argvalues.
# content of conftest.py
def pytest_generate_tests(metafunc):
    idlist = []
    argvalues = []
    for scenario in metafunc.cls.scenarios:
        idlist.append(scenario[0])
        items = scenario[1].items()
        argnames = [x[0] for x in items]
        argvalues.append([x[1] for x in items])
    metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class")
# content of test_scenarios.py
scenario1 = ('basic', {'attribute': 'value'})
scenario2 = ('advanced', {'attribute': 'value2'})

class TestSampleWithScenarios(object):
    scenarios = [scenario1, scenario2]

    def test_demo1(self, attribute):
        assert isinstance(attribute, str)

    def test_demo2(self, attribute):
        assert isinstance(attribute, str)
You can also modify the function pytest_generate_tests to accept different datatype inputs. For example if you have a list that you usually pass to
@pytest.mark.parametrize("varname", varval_list)
you can use that same list in the following way:
# content of conftest.py
def pytest_generate_tests(metafunc):
    idlist = []
    argvalues = []
    argnames = metafunc.cls.scenario_keys
    for idx, scenario in enumerate(metafunc.cls.scenario_parameters):
        idlist.append(str(idx))
        argvalues.append([scenario])
    metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class")

# content of test_scenarios.py
varval_list = ['a', 'b', 'c', 'd']

class TestSampleWithScenarios(object):
    scenario_parameters = varval_list
    scenario_keys = ['varname']

    def test_demo1(self, varname):
        assert isinstance(varname, str)

    def test_demo2(self, varname):
        assert isinstance(varname, str)
The id will be an autogenerated number (you can change that to something you specify), and this implementation won't handle multiple parametrization variables, so you either have to bundle those into a single list or extend pytest_generate_tests to handle that for you.
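As a rough sketch of what extending pytest_generate_tests for several variables could look like (my own variation, assuming each scenario is a tuple whose items line up with scenario_keys):

# content of conftest.py
def pytest_generate_tests(metafunc):
    idlist = []
    argvalues = []
    argnames = metafunc.cls.scenario_keys  # e.g. ['varname', 'other_varname']
    for scenario in metafunc.cls.scenario_parameters:
        # each scenario is a tuple with one value per name in scenario_keys
        idlist.append("-".join(str(value) for value in scenario))
        argvalues.append(list(scenario))
    metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class")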
The following solution does not require changing your test class (it goes in conftest.py):
from collections import defaultdict

import pytest

_test_failed_incremental = defaultdict(dict)

def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords:
        if call.excinfo is not None and call.excinfo.typename != "Skipped":
            param = tuple(item.callspec.indices.values()) if hasattr(item, "callspec") else ()
            _test_failed_incremental[str(item.cls)].setdefault(param, item.originalname or item.name)

def pytest_runtest_setup(item):
    if "incremental" in item.keywords:
        param = tuple(item.callspec.indices.values()) if hasattr(item, "callspec") else ()
        originalname = _test_failed_incremental[str(item.cls)].get(param)
        if originalname:
            pytest.xfail("previous test failed ({})".format(originalname))
It works by keeping a dictionary of failed tests, keyed by class and by the index of the parametrized input, with the name of the test method that failed as the value.
In your example, the dictionary _test_failed_incremental will be
defaultdict(<class 'dict'>, {"<class 'test.TestClass'>": {(2,): 'test_input'}})
showing that the 3rd run (index=2) has failed for the class test.TestClass.
Before running a test method in the class for a given parameter, it checks whether any previous test method in the class has failed for that parameter, and if so it xfails the test, including the name of the method that first failed.
Not 100% tested but in use and working for my needs.