I have a fixture (let's call it class_fixture) that dynamically returns a class I wish to run tests on.
I have another fixture (let's call it methods_fixture) that uses class_fixture and returns the names of all methods (that fit a specific criterion) of that class.
I also have a test that uses both fixtures and performs certain checks on all methods:
def test_methods(class_fixture, methods_fixture):
    for method in methods_fixture:
        # DO TESTS ON class_fixture.method; for our example, let's test that the method name starts with DUMMY
        assert getattr(class_fixture, method).__name__.startswith("DUMMY")
I would like to convert that test to be parameterized and be similar to the following:
def test(class_fixture, methods_fixture_as_parameter):
    # This test function should generate multiple tests, one for each method returned by `methods_fixture`
    # DO TESTS ON class_fixture.method; for our example, let's test that the method name starts with DUMMY
    assert getattr(class_fixture, methods_fixture_as_parameter).__name__.startswith("DUMMY")
I tried going over pytest's parametrize documentation and couldn't find anything that fits. Since methods_fixture depends on another fixture, I can't seem to implement what I want (I assume because I don't have the list of methods at collection time).
I couldn't get it to work with indirect, although it might be possible.
I also tried adding a pytest_generate_tests hook, but couldn't reach the values of methods_fixture or class_fixture to actually set the parameter values.
You can replace the fixtures with plain functions and iterate over the classes and their methods from the parametrize decorator:
import pytest

def class_fixture_imp():
    return []

@pytest.fixture
def class_fixture():
    return class_fixture_imp()

def methods_data():
    c = class_fixture_imp()
    # methods_fixture here refers to the plain-function version, not the fixture
    for f in methods_fixture(c):
        yield c, f

@pytest.mark.parametrize('data', methods_data())
def test_something(data):
    cl, method = data
    assert getattr(cl, method).__name__.startswith("DUMMY")
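To make the idea concrete, here is a self-contained sketch of the same pattern; DummyClass and the leading-underscore filter are illustrative stand-ins for the dynamically created class and the "specific criteria" from the question:

```python
import pytest

# Hypothetical stand-in for the dynamically created class; in the real
# suite this would come from class_fixture_imp().
class DummyClass:
    def DUMMY_one(self):
        pass

    def DUMMY_two(self):
        pass

def class_fixture_imp():
    return DummyClass

def methods_data():
    # Collect (class, method_name) pairs at collection time, so
    # parametrize can see them before any fixture runs.
    c = class_fixture_imp()
    for name in dir(c):
        if not name.startswith("_"):
            yield c, name

@pytest.mark.parametrize("data", list(methods_data()))
def test_method_name(data):
    cl, method = data
    assert getattr(cl, method).__name__.startswith("DUMMY")
```

Because methods_data() runs at import/collection time rather than inside a fixture, pytest generates one test item per method name.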
I am adding some tests to existing, not-so-test-friendly code. As the title suggests, I need to test whether a complex method actually calls another method, e.g.:
class SomeView(...):
    def verify_permission(self, ...):
        # some logic to verify permission
        ...

    def get(self, ...):
        # some code here I am not interested in for this test case
        ...
        if some_condition:
            self.verify_permission(...)
        # some other code here I am not interested in for this test case
        ...
I need to write some test cases to verify that self.verify_permission is called when the condition is met.
Do I need to mock all the way down to the point where self.verify_permission is executed? Or do I need to refactor the get() function to abstract out the code and make it more test friendly?
There are a number of points made in the comments that I strongly disagree with, but to your actual question first.
This is a very common scenario. The suggested approach with the standard library's unittest package is to utilize the Mock.assert_called... methods.
I added some fake logic to your example code, just so that we can actually test it.
code.py
class SomeView:
    def verify_permission(self, arg: str) -> None:
        # some logic to verify permission
        print(self, f"verify_permission({arg=})")

    def get(self, arg: int) -> int:
        # some code here I am not interested in for this test case
        ...
        some_condition = arg % 2 == 0
        ...
        if some_condition:
            self.verify_permission(str(arg))
        # some other code here I am not interested in for this test case
        ...
        return arg * 2
test.py
from unittest import TestCase
from unittest.mock import MagicMock, patch

from . import code


class SomeViewTestCase(TestCase):
    def test_verify_permission(self) -> None:
        ...

    @patch.object(code.SomeView, "verify_permission")
    def test_get(self, mock_verify_permission: MagicMock) -> None:
        obj = code.SomeView()
        # Odd `arg`:
        arg, expected_output = 3, 6
        output = obj.get(arg)
        self.assertEqual(expected_output, output)
        mock_verify_permission.assert_not_called()
        # Even `arg`:
        arg, expected_output = 2, 4
        output = obj.get(arg)
        self.assertEqual(expected_output, output)
        mock_verify_permission.assert_called_once_with(str(arg))
You use a patch variant as a decorator to inject a MagicMock instance to replace the actual verify_permission method for the duration of the entire test method. In this example that method has no return value, just a side effect (the print). Thus, we just need to check if it was called under the correct conditions.
In the example, the condition depends directly on the arg passed to get, but this will obviously be different in your actual use case. But this can always be adapted. Since the fake example of get has exactly two branches, the test method calls it twice to traverse both of them.
When doing unit tests, you should always isolate the unit (i.e. function) under testing from all your other functions. That means, if your get method calls other methods of SomeView or any other functions you wrote yourself, those should be mocked out during test_get.
You want your test of get to be completely agnostic to the logic inside verify_permission or any other of your functions used inside get. Those are tested separately. You assume they work "as advertised" for the duration of test_get and by replacing them with Mock instances you control exactly how they behave in relation to get.
Note that the point about mocking out "network requests" and the like is completely unrelated. That is an entirely different but equally valid use of mocking.
Basically, you 1.) always mock your own functions and 2.) usually mock external/built-in functions with side effects (like e.g. network or disk I/O). That is it.
Also, writing tests for existing code absolutely has value. Of course it is better to write tests alongside your code. But sometimes you are just put in charge of maintaining a bunch of existing code that has no tests. If you want/can/are allowed to, you can refactor the existing code and write your tests in sync with that. But if not, it is still better to add tests retroactively than to have no tests at all for that code.
And if you write your unit tests properly, they still do their job, if you or someone else later decides to change something about the code. If the change breaks your tests, you'll notice.
As for the exception hack to interrupt the tested method early... Sure, if you want. It's lazy and calls into question the whole point of writing tests, but you do you.
No, seriously, that is a horrible approach. Why on earth would you test just part of a function? If you are already writing a test for it, you may as well cover it to the end. And if it is so complex that it has dozens of branches and/or calls 10 or 20 other custom functions, then yes, you should definitely refactor it.
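For readers writing plain pytest-style tests rather than TestCase subclasses, the same assertion pattern works with unittest.mock used directly as a context manager. This is a minimal sketch that restates a trimmed-down SomeView inline so the snippet is self-contained:

```python
from unittest.mock import patch

class SomeView:
    def verify_permission(self, arg: str) -> None:
        print(f"verify_permission({arg=})")

    def get(self, arg: int) -> int:
        # permission check only on the even branch, as in the example above
        if arg % 2 == 0:
            self.verify_permission(str(arg))
        return arg * 2

def test_get_calls_verify_permission():
    # Patch on the class so every instance created inside the block is affected.
    with patch.object(SomeView, "verify_permission") as mock_vp:
        obj = SomeView()
        assert obj.get(3) == 6  # odd: no permission check
        mock_vp.assert_not_called()
        assert obj.get(2) == 4  # even: permission check runs
        mock_vp.assert_called_once_with("2")
```

The context-manager form limits the patch to exactly the lines that need it, which is handy when one test function exercises both the mocked and the unmocked behavior.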
I want to run through a collection of test functions with different fixtures for each run. Generally, the solutions suggested on Stack Overflow, documentation and in blog posts fall under two categories. One is by parametrizing the fixture:
@pytest.fixture(params=list_of_cases)
def some_case(request):
    return request.param
The other is by calling metafunc.parametrize in order to generate multiple tests:
def pytest_generate_tests(metafunc):
    metafunc.parametrize('some_case', list_of_cases)
The problem with both approaches is the order in which the cases are run. Basically it runs each test function with each parameter, instead of going through all test functions for a given parameter and then continuing with the next parameter. This is a problem when some of my fixtures are comparatively expensive database calls.
To illustrate this, assume that dataframe_x is another fixture that belongs to case_x. Pytest does this
test_01(dataframe_1)
test_01(dataframe_2)
...
test_50(dataframe_1)
test_50(dataframe_2)
instead of
test_01(dataframe_1)
...
test_50(dataframe_1)
test_01(dataframe_2)
...
test_50(dataframe_2)
The result is that I will fetch each dataset from the DB 50 times instead of just once. Since I can only define the fixture scope as 'session', 'module' or 'function', I couldn't figure out how to group my tests so that they are run together in chunks.
Is there a way to structure my tests so that I can run through all my test functions in sequence for each dataset?
If you only want to load the dataframes once, you can use the scope parameter with 'module' or 'session'.
@pytest.fixture(scope="module", params=[1, 2])
def dataframe(request):
    if request.param == 1:
        return ...  # load dataframe_1
    if request.param == 2:
        return ...  # load dataframe_2
Each dataframe will only be loaded once per module or session; in addition, with a non-function scope pytest reorders the tests to minimize the number of active fixture instances, so all tests for one parameter run before the next parameter is set up.
So I have a Python class, say:
class Nested(object):
    def method_test(self):
        # do_something
        ...
The above class is maintained by some other group, so I can't change it. Hence we have a wrapper around it, such that:
class NestedWrapper(object):
    def __init__(self):
        self.nested = Nested()

    def call_nested(self):
        self.nested.method_test()
Now I am writing test cases to test my NestedWrapper. How can I test that, in one of my tests, the underlying Nested.method_test is being called? Is it even possible?
I am using python Mock for testing.
UPDATE: I guess I was implicitly implying that I want to do unit testing, not one-off testing. Since most of the responses suggest using a debugger, I just want to point out that I want this unit tested.
I think you can just mock Nested.method_test and make sure it was called...
from unittest import mock

with mock.patch.object(Nested, 'method_test') as method_test_mock:
    nw = NestedWrapper()
    nw.call_nested()
    method_test_mock.called  # Should be `True`
If using unittest, you could do something like self.assertTrue(method_test_mock.called), or you could make a more detailed assertion by calling one of the more specific assertions on Mock objects.
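Putting it together, a self-contained sketch; the NestedWrapper stand-in adds the __init__ the original snippet needs in order to compile, and the final assertion uses one of the more specific Mock assertions mentioned above:

```python
from unittest import mock

class Nested:
    def method_test(self):
        ...  # real implementation maintained elsewhere

class NestedWrapper:
    def __init__(self):
        self.nested = Nested()

    def call_nested(self):
        self.nested.method_test()

def test_call_nested_delegates():
    # Patch method_test on the class so the wrapper's instance picks it up.
    with mock.patch.object(Nested, "method_test") as method_test_mock:
        nw = NestedWrapper()
        nw.call_nested()
        method_test_mock.assert_called_once_with()
```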
I have seen from the pytest docs that we can apply multiple markers at once on the Class or module level. I didn't find documentation for doing it at the test function level. Has anybody done this before with success?
I would like to ideally do this as a list of markers as being done in the above doc for Classes, for example (quoting from the docs):
class TestClass:
    pytestmark = [pytest.mark.webtest, pytest.mark.slowtest]
So, the pytest documentation talks about using pytestmark to specify the markers at the class and module level. However, it doesn't talk about having something similar at the test function level. I would have to specify the markers individually on top of test functions to get them marked with each one of them. This makes the test code look a little clunky with the increasing number of markers on top of test functions.
test_example.py:
pytestmark = [class1, class2]

class TestFeature(TestCase):
    @pytest.mark.marker1
    @pytest.mark.marker2
    @pytest.mark.marker3
    def test_function(self):
        assert True
For functions you just repeat the decorator:
@pytest.mark.webtest
@pytest.mark.slowtest
def test_something(...):
    ...
If you want to reuse that for several tests, remember that decorators are just functions returning the decorated thing, so several decorators are just a composition:
def compose_decos(*decos):
    def composition(func):
        for deco in reversed(decos):
            func = deco(func)
        return func
    return composition

all_marks = compose_decos(pytest.mark.webtest, pytest.mark.slowtest)

@all_marks
def test_something(...):
    ...
Or you can use a general-purpose composition such as my funcy library provides:
from funcy import compose

all_marks = compose(pytest.mark.webtest, pytest.mark.slowtest)
Note that this way you can compose any decorators, not only pytest marks.
Haven't tried this myself. However, from a quick look at the source, I think class MarkDecorator is what you want. Try:
mark_names = ["marker1", "marker2", "marker3"]
my_marks = pytest.MarkDecorator(*mark_names)
marked_test_function = my_marks(test_function)
The *mark_names just unpacks mark_names into the constructor arguments of MarkDecorator. MarkDecorator.__call__ then applies the stored marks (self.args) to the parameter, here test_function, to provide a marked test function.
You can also use def unmarked_test_function() ... and test_function=my_marks(unmarked_test_function) so you don't have to change names.
Added explanation: I got this from pytest.mark, which turns out to be a MarkGenerator singleton. MarkGenerator creates MarkDecorator classes, which are then applied as decorators. The above code simulates that process manually, stuffing multiple markers.
When I have a parametrized pytest test like in the following case:
@pytest.mark.parametrize('repetition', range(3))
@pytest.mark.parametrize('name', ['test', '42'])
@pytest.mark.parametrize('number', [3, 7, 54])
def test_example(repetition, name, number):
    assert 1 == 1
the test runner prints out lines like the following:
tests\test_example.py:12: test_example[1-test-7]
where the parametrized values show up in the square brackets next to the test function's name. How can I access the content of those brackets (i.e. the string 1-test-7) inside the test? I have looked at the request fixture, but could not find a suitable method for my needs.
To be clear: Inside the test method, I want to print the string 1-test-7, which pytest prints on the console.
How can I access the string 1-test-7, which pytest prints out during the test?
I do not want to create this string by myself using something like
print str(repetition) + "-" + name + "-" + str(number)
since this would change every time I add a new parametrized argument to the test method.
In addition, if more complex objects are used in the parametrize list (like a namedtuple), these objects are just referenced by a shortcut (e.g. object1, object2, ...).
Addendum: If I use the request fixture as an argument in my test method, I 'see' the string I would like to access when I use the following command
print request.keywords.node.__repr__()
which prints out something like
<Function 'test_example[2-test-3]'>
I am trying to find out how this method __repr__ is defined, in order to access directly the string test_example[2-test-3] from which I easily can extract the string I want, 2-test-3 in this example.
The solution makes use of the built-in request fixture which can be accessed as follows:
def test_example(repetition, name, number, request):
    s = request.keywords.node.name
    print(s[s.find("[") + 1:s.find("]")])
which will print the parameter string for each single parametrized test, so each parametrized test can be identified.