I'm using pytest and trying to pass a fixture as a value in a parametrised test. I read the documentation for parametrised tests and the indirect=True parameter, but when I run it I keep getting an error message. I need fixtures that provide different objects for different tests, and I have the code below:
# in user.py
class User(object):
    def __init__(self, first_name: str, last_name: str):
        self.first_name = first_name
        self.last_name = last_name

    def create_full_name(self):
        return f'{self.first_name} {self.last_name}'
# in conftest.py
import pytest

import user

@pytest.fixture(scope='function')
def normal_user():
    '''Returns a normal user'''
    normal_user = user.User(first_name='katherine', last_name='rose')
    yield normal_user

@pytest.fixture(scope='function')
def missing_details():
    '''Returns a user without a last name'''
    missing_details = user.User(first_name=' ', last_name='rose')
    yield missing_details
# in test_user.py
import pytest

@pytest.mark.parametrize('user, expected', [('normal_user', 'katherine rose'),
                                            ('missing_details', TypeError)],
                         indirect=['normal_user', 'missing_details'])
def test_parametrize_user_full_name(user, expected):
    assert user.create_full_name(user) == expected
The error message I keep getting is:
In test_parametrize_user_full_name: indirect fixture 'normal_user' doesn't exist
Is it necessary to specify which fixtures should be indirect in conftest.py or am I writing the code for the parametrised test incorrectly?
This is not how indirect parametrization works. You have to reference one fixture with the indirect parameter, and that fixture returns the actual value based on the entry in the parameter list. You can find an example in the documentation.
In your case you would need something like:
@pytest.fixture
def user(request):
    if request.param == 'normal_user':
        # Returns a normal user
        yield User(first_name='katherine', last_name='rose')
    if request.param == 'missing_details':
        # Returns a user without a last name
        yield User(first_name=' ', last_name='rose')

@pytest.mark.parametrize('user, expected', [('normal_user', 'katherine rose'),
                                            ('missing_details', TypeError)],
                         indirect=['user'])
def test_parametrize_user_full_name(user, expected):
    ...
As an aside: comparing the result against an exception will not work the way you do it, but I guess this is only a dumbed-down example that is not expected to work as-is; see the sketch below for how an expected exception is usually asserted.
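If the failing case really is meant to raise, a common pattern is to branch on whether the expected value is an exception class and use pytest.raises for that branch. A minimal sketch building on the user fixture above (the TypeError expectation is carried over from the question and may not match what User actually raises):

import pytest

@pytest.mark.parametrize('user, expected', [('normal_user', 'katherine rose'),
                                            ('missing_details', TypeError)],
                         indirect=['user'])
def test_parametrize_user_full_name(user, expected):
    if isinstance(expected, type) and issubclass(expected, Exception):
        # the expected value is an exception class, so assert that it is raised
        with pytest.raises(expected):
            user.create_full_name()
    else:
        assert user.create_full_name() == expected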
I created a class to make my life easier while doing some integration tests involving workers and their contracts. The code looks like this:
class ContractID(str):
    contract_counter = 0
    contract_list = list()

    def __new__(cls):
        cls.contract_counter += 1
        new_entry = super().__new__(cls, f'Some_internal_name-{cls.contract_counter:010d}')
        cls.contract_list.append(new_entry)
        return new_entry

    @classmethod
    def get_contract_no(cls, worker_number):
        return cls.contract_list[worker_number - 1]  # -1 so WORKER1 has contract #1 and not #0 etc.
When I'm unit-testing the class, I'm using the following code:
import pytest

from test_helpers import ContractID

@pytest.fixture
def get_contract_numbers():
    test_string_1 = ContractID()
    test_string_2 = ContractID()
    test_string_3 = ContractID()
    return test_string_1, test_string_2, test_string_3
def test_contract_id(get_contract_numbers):
    assert get_contract_numbers[0] == 'Some_internal_name-0000000001'
    assert get_contract_numbers[1] == 'Some_internal_name-0000000002'
    assert get_contract_numbers[2] == 'Some_internal_name-0000000003'
def test_contract_id_get_contract_no(get_contract_numbers):
    assert ContractID.get_contract_no(1) == 'Some_internal_name-0000000001'
    assert ContractID.get_contract_no(2) == 'Some_internal_name-0000000002'
    assert ContractID.get_contract_no(3) == 'Some_internal_name-0000000003'
    with pytest.raises(IndexError) as py_e:
        ContractID.get_contract_no(4)
    assert py_e.type == IndexError
However, when I run these tests, the second one (test_contract_id_get_contract_no) fails because no error is raised: by that point there are more than three values in the list. Furthermore, when I run all the tests in my test/ folder, even the first test (test_contract_id) fails, probably because other tests that run earlier also use this class.
After reading this book, my understanding of fixtures was that they provide objects as if they had never been created before, which is obviously not the case here. Is there a way to tell the tests to use the class as if it hadn't been used anywhere else before?
If I understand that correctly, you want to run the fixture as setup code, so that your class has exactly 3 instances. If the fixture is function-scoped (the default), it is indeed run before each test, creating 3 new instances each time. If you want to reset your class after the test, you have to do this yourself - there is no way pytest can guess what you want to do here.
So, a working solution would be something like this:
@pytest.fixture(autouse=True)
def get_contract_numbers():
    test_string_1 = ContractID()
    test_string_2 = ContractID()
    test_string_3 = ContractID()
    yield
    # teardown: reset the class-level state so the next test starts fresh
    ContractID.contract_counter = 0
    ContractID.contract_list.clear()

def test_contract_id():
    ...
Note that I did not yield the test strings, as you don't need them in the shown tests - if you need them, you can yield them, of course (see the sketch below). I also added autouse=True, which makes sense if you need this for all tests, so you don't have to reference the fixture in each test.
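For completeness, a minimal sketch of the variant that also hands the created instances to the test, assuming the ContractID class from above starts with a clean counter:

@pytest.fixture(autouse=True)
def get_contract_numbers():
    contract_ids = (ContractID(), ContractID(), ContractID())
    yield contract_ids  # tests that request the fixture receive the instances
    ContractID.contract_counter = 0
    ContractID.contract_list.clear()

def test_contract_id(get_contract_numbers):
    assert get_contract_numbers[0] == 'Some_internal_name-0000000001'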
Another possibility would be to use a session-scoped fixture. In this case the setup would be done only once. If that is what you need, you can use this instead:
@pytest.fixture(autouse=True, scope="session")
def get_contract_numbers():
    test_string_1 = ContractID()
    test_string_2 = ContractID()
    test_string_3 = ContractID()
    yield
Given a class whose methods take only self as input:
class ABC():
    def __init__(self, input_dict):
        self.variable_0 = input_dict['variable_0']
        self.variable_1 = input_dict['variable_1']
        self.variable_2 = input_dict['variable_2']
        self.variable_3 = input_dict['variable_3']

    def some_operation_0(self):
        return self.variable_0 + self.variable_1

    def some_operation_1(self):
        return self.variable_2 + self.variable_3
First question: Is this very bad practice? Should I just refactor some_operation_0(self) to explicitly take the necessary inputs, some_operation_0(self, variable_0, variable_1)? If so, the testing is very straightforward.
Second question: What is the correct way to setup my unit test on the method some_operation_0(self)?
Should I setup a fixture in which I initialize input_dict, and then instantiate the class with a mock object?
@pytest.fixture
def generator_inputs():
    f = open('inputs.txt', 'r')
    input_dict = eval(f.read())
    f.close()
    mock_obj = ABC(input_dict)

def test_some_operation_0():
    assert mock_obj.some_operation_0() == some_value
(I am new to both python and general unit testing...)
Those methods do take an argument: self. There is no need to mock anything. Instead, you can simply create an instance, and verify that the methods return the expected value when invoked.
For your example:
def test_abc():
    a = ABC({'variable_0': 0, 'variable_1': 1, 'variable_2': 2, 'variable_3': 3})
    assert a.some_operation_0() == 1
    assert a.some_operation_1() == 5
If constructing an instance is very difficult, you might want to change your code so that the class can be instantiated from standard in-memory data structures (e.g. a dictionary). In that case, you could create a separate function that reads/parses data from a file and uses the "data-structure-based" __init__ method, e.g. make_abc() or a class method (sketched below).
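A minimal sketch of that idea; make_abc and the file format are hypothetical, and ast.literal_eval stands in for the eval call from the question because it only accepts plain literals:

import ast

def make_abc(path):
    # hypothetical factory: parse a dict literal from a file, then build
    # the instance via the data-structure-based __init__
    with open(path, 'r') as f:
        input_dict = ast.literal_eval(f.read())
    return ABC(input_dict)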
If this approach does not generalize to your real problem, you could imagine providing programmatic access to the key names or other metadata that ABC recognizes or cares about. Then, you could programmatically construct a "defaulted" instance, e.g. an instance where every value in the input dict is a default-constructed value (such as 0 for int):
class ABC():
    PROPERTY_NAMES = ['variable_0', 'variable_1', 'variable_2', 'variable_3']

    def __init__(self, input_dict):
        # implementation omitted for brevity
        pass

    def some_operation_0(self):
        return self.variable_0 + self.variable_1

    def some_operation_1(self):
        return self.variable_2 + self.variable_3

def test_abc():
    a = ABC({name: 0 for name in ABC.PROPERTY_NAMES})
    assert a.some_operation_0() == 0
    assert a.some_operation_1() == 0
I'm having some trouble with an integration test. I'm using Python 3.5, SQLAlchemy 1.2.0b3, and the latest Docker image of PostgreSQL. I wrote this test:
# tests/integration/usecases/test_users_usecase.py
class TestGetUsersUsecase(unittest.TestCase):
    def setUp(self):
        Base.metadata.reflect(_pg)
        Base.metadata.drop_all(_pg)
        Base.metadata.create_all(_pg)
        self._session = sessionmaker(bind=_pg, autoflush=True, autocommit=False, expire_on_commit=True)
        self.session = self._session()
        self.session.add(User(id=1, username='user1'))
        self.session.commit()
        self.pg = PostgresService(session=self.session)

    def test_get_user(self):
        expected = User(id=1, username='user1')
        boilerplates.get_user_usecase(storage_service=self.pg, id=1, expected=expected)
# tests/boilerplates/usecases/user_usecases.py
def get_user_usecase(storage_service, id, expected):
    u = GetUser(storage_service=storage_service)
    actual = u.apply(id=id)
    assert expected == actual
The use case looks like this:
# usecases/users.py
class GetUser(object):
    """
    Usecase for getting user from storage service by Id
    """
    def __init__(self, storage_service):
        self.storage_service = storage_service

    def apply(self, id):
        user = self.storage_service.get_user_by_id(id=id)
        if user is None:
            raise UserDoesNotExists('User with id=\'%s\' does not exists' % id)
        return user
storage_service.get_user_by_id looks like:
# infrastructure/postgres.py (method of Postgres class)
def get_user_by_id(self, id):
    from anna.domain.user import User
    return self.session.query(User).filter(User.id == id).one_or_none()
This does not work in my integration test, but if I add print(actual) before the assert, everything passes. I thought my test was bad, so I tried many variants, and none of them work. I also tried returning a generator from storage_service.get_user_by_id(), which didn't work either. What did I do wrong? The test only passes when print() is called in it.
I have a file that is saved in a particular format, and a class that will create an object based on the data in the file.
I want to ensure that all values in the file/string were extracted correctly by testing each attribute in the object.
Here is a simplified version of what I'm doing:
classlist.py
import re

class ClassList:
    def __init__(self, data):
        values = re.findall(r'name=(.*?)\$age=(.*?)\$', data)
        self.students = [Student(name, int(age)) for name, age in values]

class Student:
    def __init__(self, name, age):
        self.name = name
        self.age = age
test_classlist.py
import pytest
from classlist import ClassList

def single_data():
    text = 'name=alex$age=20$'
    return ClassList(text)

def double_data():
    text = 'name=taylor$age=23$' \
           'name=morgan$age=25$'
    return ClassList(text)

@pytest.mark.parametrize('classinfo, expected', [
    (single_data(), ['alex']),
    (double_data(), ['taylor', 'morgan'])
])
def test_name(classinfo, expected):
    result = [student.name for student in classinfo.students]
    assert result == expected

@pytest.mark.parametrize('classinfo, expected', [
    (single_data(), [20]),
    (double_data(), [23, 25])
])
def test_age(classinfo, expected):
    result = [student.age for student in classinfo.students]
    assert result == expected
I want to create objects based on different data and use them as parametrized values.
My current setup works, although there is the unnecessary overhead of creating the objects for each test. I'd want them to be created once.
If I try doing the following:
...
@pytest.fixture(scope='module')  # fixture added
def double_data():
    text = 'name=taylor$age=23$' \
           'name=morgan$age=25$'
    return ClassList(text)

@pytest.mark.parametrize('classinfo, expected', [
    (single_data, ['alex']),
    (double_data, ['taylor', 'morgan'])  # () removed
])
def test_name(classinfo, expected):
    result = [student.name for student in classinfo.students]
    assert result == expected
...
AttributeError: 'function' object has no attribute 'students'
...it doesn't work as it references the function rather than the fixture.
Furthermore, the code in test_name and test_age is almost identical. In my actual code, I'm doing this for about 12 attributes. Should/can this be merged into a single function? How?
How can I clean up my test code?
Thanks!
Edit:
I feel this is relevant, but I'm unsure how to make it work for my situation: Can params passed to pytest fixture be passed in as a variable?
My current setup works, although there is the unnecessary overhead of creating the objects for each test. I'd want them to be created once.
This smells like premature optimization to me, but if you care about it, run the functions that create your test data at module level, so they only run once.
For example:
...
def single_data():
    text = 'name=alex$age=20$'
    return ClassList(text)

def double_data():
    text = 'name=taylor$age=23$' \
           'name=morgan$age=25$'
    return ClassList(text)

double_data_object = double_data()
single_data_object = single_data()

@pytest.mark.parametrize('classinfo, expected', [
    (single_data_object, ['alex']),
    (double_data_object, ['taylor', 'morgan'])
])
def test_name(classinfo, expected):
    result = [student.name for student in classinfo.students]
    assert result == expected

@pytest.mark.parametrize('classinfo, expected', [
    (single_data_object, [20]),
    (double_data_object, [23, 25])
])
def test_age(classinfo, expected):
    ...
Furthermore, the code in test_name and test_age is almost identical.
In my actual code, I'm doing this for about 12 attributes. Should/can
this be merged into a single function? How?
How can I clean up my test code?
There are a couple of ways to do this, but going from your example, add an equality magic method to the Student class and use it to test your code (also add a __repr__ for a sane representation of your objects):
class Student:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def __eq__(self, other):
        return (self.name, self.age) == (other.name, other.age)

    def __repr__(self):
        return 'Student(name={}, age={})'.format(self.name, self.age)
Then your test can look like this:
@pytest.mark.parametrize('classinfo, expected', [
    (single_data(), [Student('alex', 20)]),
    (double_data(), [Student('taylor', 23), Student('morgan', 25)]),
])
def test_student(classinfo, expected):
    assert classinfo.students == expected
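If you would rather keep per-attribute expectations without duplicating the test body, another option is to parametrize over the attribute name as well and read it with getattr; a sketch reusing the single_data/double_data helpers from the question:

@pytest.mark.parametrize('classinfo, attribute, expected', [
    (single_data(), 'name', ['alex']),
    (single_data(), 'age', [20]),
    (double_data(), 'name', ['taylor', 'morgan']),
    (double_data(), 'age', [23, 25]),
])
def test_attribute(classinfo, attribute, expected):
    # one body covers every attribute; add rows for the remaining ones
    result = [getattr(student, attribute) for student in classinfo.students]
    assert result == expected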
You can add a fixture that returns an object of the class and call that fixture before every test. I have made some changes and created a fixture get_object in test_classlist.py, while classlist.py stays as it is.
get_object gives you an object of the class, which you can use in the test function via the request object. I have assigned the class object to request.instance.cobj, which you can then access in the test function.
What I gather from your description is that you want to create ClassList objects. If I'm not mistaken, the solution below should work for you. Try this.
import pytest
from classlist import ClassList

def single_data():
    text = 'name=alex$age=20$'
    print(text)
    return ClassList(text)

def double_data():
    text = 'name=taylor$age=23$' \
           'name=morgan$age=25$'
    return ClassList(text)

@pytest.fixture
def get_object(request):
    # getfixturevalue replaces the deprecated getfuncargvalue; it resolves the
    # parametrized 'classinfo' argument, which holds one of the data-creating
    # functions, and the trailing () calls it
    classobj = request.getfixturevalue('classinfo')()
    request.instance.cobj = classobj

class Test_clist:
    @pytest.mark.parametrize('classinfo, expected', [
        (single_data, ['alex']),
        (double_data, ['taylor', 'morgan'])  # () removed
    ])
    @pytest.mark.usefixtures('get_object')
    def test_name(self, classinfo, expected, request):
        result = [student.name for student in request.instance.cobj.students]
        print(result)
        print(expected)
        assert result == expected
The purpose of @pytest.mark.incremental is that if one test fails, the tests afterwards are marked as expected to fail (xfail). However, when I use this in conjunction with parametrization, I get undesired behavior.
For example, take this fake code:
# conftest.py:
def pytest_generate_tests(metafunc):
    metafunc.parametrize("input", [True, False, None, False, True])

def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords:
        if call.excinfo is not None:
            parent = item.parent
            parent._previousfailed = item

def pytest_runtest_setup(item):
    if "incremental" in item.keywords:
        previousfailed = getattr(item.parent, "_previousfailed", None)
        if previousfailed is not None:
            pytest.xfail("previous test failed (%s)" % previousfailed.name)
# test.py:
@pytest.mark.incremental
class TestClass:
    def test_input(self, input):
        assert input is not None

    def test_correct(self, input):
        assert input == True
I'd expect the test class to run
test_input on True,
followed by test_correct on True,
followed by test_input on False,
followed by test_correct on False,
followed by test_input on None,
followed by (xfailed) test_correct on None, and so on.
Instead, what happens is that the test class
runs test_input on True,
then runs test_input on False,
then runs test_input on None,
then marks everything from that point onwards as xfailed (including the test_corrects).
What I assume is happening is that parametrization takes priority over proceeding through the functions in a class. The question is whether it is possible to override this behaviour or work around it somehow, as the current situation makes marking a class as incremental completely useless to me.
(is the only way to handle this to copy-paste the code for the class over and over, each time with different parameters? The thought is repulsive to me)
The solution to this is described at https://docs.pytest.org/en/latest/example/parametrize.html under the header "A quick port of 'testscenarios'".
The code below is listed there. What the conftest.py code does is look for a scenarios variable in the test class. When it finds the variable, it iterates over each item of scenarios and expects an id string with which to label the test, plus a dictionary of argnames to argvalues.
# content of conftest.py
def pytest_generate_tests(metafunc):
    idlist = []
    argvalues = []
    for scenario in metafunc.cls.scenarios:
        idlist.append(scenario[0])
        items = scenario[1].items()
        argnames = [x[0] for x in items]
        argvalues.append([x[1] for x in items])
    metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class")

# content of test_scenarios.py
scenario1 = ('basic', {'attribute': 'value'})
scenario2 = ('advanced', {'attribute': 'value2'})

class TestSampleWithScenarios(object):
    scenarios = [scenario1, scenario2]

    def test_demo1(self, attribute):
        assert isinstance(attribute, str)

    def test_demo2(self, attribute):
        assert isinstance(attribute, str)
You can also modify the function pytest_generate_tests to accept different datatype inputs. For example if you have a list that you usually pass to
@pytest.mark.parametrize("varname", varval_list)
you can use that same list in the following way:
# content of conftest.py
def pytest_generate_tests(metafunc):
    idlist = []
    argvalues = []
    argnames = metafunc.cls.scenario_keys
    for idx, scenario in enumerate(metafunc.cls.scenario_parameters):
        idlist.append(str(idx))
        argvalues.append([scenario])
    metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class")

# content of test_scenarios.py
varval_list = ['a', 'b', 'c', 'd']

class TestSampleWithScenarios(object):
    scenario_parameters = varval_list
    scenario_keys = ['varname']

    def test_demo1(self, varname):
        assert isinstance(varname, str)

    def test_demo2(self, varname):
        assert isinstance(varname, str)
The id will be an autogenerated number (you can change that to something you specify). This implementation won't handle multiple parametrization variables, so you have to pack those into a single list (or adapt pytest_generate_tests to handle that for you).
The following solution does not require changing your test class:
from collections import defaultdict

import pytest

_test_failed_incremental = defaultdict(dict)

def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords:
        if call.excinfo is not None and call.excinfo.typename != "Skipped":
            # key on the indices of the parametrized input, so each
            # parameter combination tracks its own first failure
            param = tuple(item.callspec.indices.values()) if hasattr(item, "callspec") else ()
            _test_failed_incremental[str(item.cls)].setdefault(param, item.originalname or item.name)

def pytest_runtest_setup(item):
    if "incremental" in item.keywords:
        param = tuple(item.callspec.indices.values()) if hasattr(item, "callspec") else ()
        originalname = _test_failed_incremental[str(item.cls)].get(param)
        if originalname:
            pytest.xfail("previous test failed ({})".format(originalname))
It works by keeping a dictionary with the failed test per class and per index of parametrized input as key (and the name of the test method that failed as value).
In your example, the dictionary _test_failed_incremental will be
defaultdict(<class 'dict'>, {"<class 'test.TestClass'>": {(2,): 'test_input'}})
showing that the 3rd run (index=2) has failed for the class test.TestClass.
Before running a test method in the class for a given parameter, it checks whether any previous test method in the class has failed for that parameter and, if so, xfails the test with the name of the method that first failed.
Not 100% tested but in use and working for my needs.
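One housekeeping note: recent pytest versions warn about unknown marks (and reject them under --strict-markers), so the custom incremental marker used above should be registered. A minimal sketch for conftest.py:

# conftest.py
def pytest_configure(config):
    # register the custom marker so pytest does not warn about it
    config.addinivalue_line(
        "markers", "incremental: xfail the remaining tests in a class once one fails"
    )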