I am not talking about "fixture parametrizing" as defined by pytest; I am talking about real parameters that you pass to a function (the fixture function, in this case) to make code more modular.
To demonstrate, this is my fixture:
import pytest

@pytest.yield_fixture
def a_fixture(a_dependency):
    do_setup_work()
    yield
    do_teardown_work()
    a_dependency.teardown()
As you can see, my fixture depends on a_dependency, whose teardown() needs to be called as well. I know that in the naive use case I could do this:
@pytest.yield_fixture
def a_dependency():
    yield
    teardown()

@pytest.yield_fixture
def a_fixture(a_dependency):
    do_setup_work()
    yield
    do_teardown_work()
However, while the a_fixture code can be put in a central place and re-used by all tests, the a_dependency code is test-specific, and each test possibly needs to create a new a_dependency object.
I want to avoid copy-pasting both the fixture and the dependency into all my tests. If this were regular Python code, I could just pass a_dependency as a function argument. How can I pass this object to my shared fixture?
It seems to me like maybe you don't want a_dependency to be a fixture, you just want it to be a regular function. Are you after something like this?
def a_dependency():
    # returns a context manager
    ...

@pytest.yield_fixture
def a_fixture():
    with a_dependency() as dependency:
        do_setup_work()
        yield
        do_teardown_work()
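Concretely, a_dependency could be built with contextlib.contextmanager. A minimal sketch, where make_dependency() is a hypothetical stand-in for however the test-specific object gets created:

from contextlib import contextmanager

@contextmanager
def a_dependency():
    dep = make_dependency()  # hypothetical constructor for the dependency
    try:
        yield dep
    finally:
        dep.teardown()  # runs even if the body raises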
OK, well if a_dependency really has to be a fixture, why not the best of both worlds? Decorators are just syntactic sugar after all.
def a_dependency():
    # returns a context manager
    ...

a_dependency_fixture = pytest.yield_fixture(a_dependency)

@pytest.yield_fixture
def a_fixture():
    # here use a_dependency as a regular function
    with a_dependency() as dependency:
        do_setup_work()
        yield
        do_teardown_work()

def test_foo(a_dependency_fixture):
    # here use a_dependency as a fixture
    pass
I haven't checked that this actually works because the information in the question is too generic for me to make a working case out of. It may be easier to give a more useful answer if you can provide more specifics.
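That said, a sketch of how I would expect the "both worlds" idea to look, assuming pytest >= 3.0 (where a plain pytest.fixture may yield and fixture() accepts a name argument); make_dependency, do_setup_work and do_teardown_work are placeholders:

from contextlib import contextmanager
import pytest

def _a_dependency_gen():
    dep = make_dependency()  # hypothetical test-specific setup
    yield dep
    dep.teardown()

# Wrap the same generator twice: once for `with` blocks, once as a fixture.
a_dependency = contextmanager(_a_dependency_gen)
a_dependency_fixture = pytest.fixture(name="a_dependency_fixture")(_a_dependency_gen)

@pytest.fixture
def a_fixture():
    with a_dependency() as dependency:
        do_setup_work()
        yield dependency
        do_teardown_work()

def test_foo(a_dependency_fixture):
    pass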
I am adding some tests to existing, not-so-test-friendly code. As the title suggests, I need to test whether a complex method actually calls another method, e.g.:
class SomeView(...):
    def verify_permission(self, ...):
        # some logic to verify permission
        ...

    def get(self, ...):
        # some code here that I am not interested in for this test case
        ...
        if some_condition:
            self.verify_permission(...)
        # some other code here that I am not interested in for this test case
        ...
I need to write some test cases to verify that self.verify_permission is called when the condition is met.
Do I need to mock all the way down to the point where self.verify_permission is executed? Or do I need to refactor the get() function to abstract out the code and make it more test-friendly?
There are a number of points made in the comments that I strongly disagree with, but to your actual question first.
This is a very common scenario. The suggested approach with the standard library's unittest package is to utilize the Mock.assert_called... methods.
I added some fake logic to your example code, just so that we can actually test it.
code.py
class SomeView:
    def verify_permission(self, arg: str) -> None:
        # some logic to verify permission
        print(self, f"verify_permission({arg=})")

    def get(self, arg: int) -> int:
        # some code here I am not interested in for this test case
        ...
        some_condition = arg % 2 == 0
        ...
        if some_condition:
            self.verify_permission(str(arg))
        # some other code here I am not interested in for this test case
        ...
        return arg * 2
test.py
from unittest import TestCase
from unittest.mock import MagicMock, patch

from . import code

class SomeViewTestCase(TestCase):
    def test_verify_permission(self) -> None:
        ...

    @patch.object(code.SomeView, "verify_permission")
    def test_get(self, mock_verify_permission: MagicMock) -> None:
        obj = code.SomeView()

        # Odd `arg`:
        arg, expected_output = 3, 6
        output = obj.get(arg)
        self.assertEqual(expected_output, output)
        mock_verify_permission.assert_not_called()

        # Even `arg`:
        arg, expected_output = 2, 4
        output = obj.get(arg)
        self.assertEqual(expected_output, output)
        mock_verify_permission.assert_called_once_with(str(arg))
You use a patch variant as a decorator to inject a MagicMock instance to replace the actual verify_permission method for the duration of the entire test method. In this example that method has no return value, just a side effect (the print). Thus, we just need to check if it was called under the correct conditions.
In the example, the condition depends directly on the arg passed to get, but this will obviously be different in your actual use case. But this can always be adapted. Since the fake example of get has exactly two branches, the test method calls it twice to traverse both of them.
When doing unit tests, you should always isolate the unit (i.e. the function) under test from all your other functions. That means that if your get method calls other methods of SomeView or any other functions you wrote yourself, those should be mocked out during test_get.
You want your test of get to be completely agnostic to the logic inside verify_permission or any other of your functions used inside get. Those are tested separately. You assume they work "as advertised" for the duration of test_get and by replacing them with Mock instances you control exactly how they behave in relation to get.
Note that the point about mocking out "network requests" and the like is completely unrelated. That is an entirely different but equally valid use of mocking.
Basically, you 1) always mock your own functions and 2) usually mock external/built-in functions with side effects (e.g. network or disk I/O). That is it.
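To make point 2 concrete with the code.py example above: verify_permission's only side effect is a call to the built-in print, so its own unit test can mock that out. A small sketch (the test class name is mine):

from unittest import TestCase
from unittest.mock import MagicMock, patch

from . import code

class VerifyPermissionTestCase(TestCase):
    @patch("builtins.print")
    def test_verify_permission(self, mock_print: MagicMock) -> None:
        obj = code.SomeView()
        obj.verify_permission("abc")
        # The side effect happened exactly once, with the expected arguments:
        mock_print.assert_called_once_with(obj, "verify_permission(arg='abc')")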
Also, writing tests for existing code absolutely has value. Of course it is better to write tests alongside your code. But sometimes you are just put in charge of maintaining a bunch of existing code that has no tests. If you want/can/are allowed to, you can refactor the existing code and write your tests in sync with that. But if not, it is still better to add tests retroactively than to have no tests at all for that code.
And if you write your unit tests properly, they still do their job, if you or someone else later decides to change something about the code. If the change breaks your tests, you'll notice.
As for the exception hack to interrupt the tested method early... Sure, if you want. It's lazy and calls into question the whole point of writing tests, but you do you.
No, seriously, that is a horrible approach. Why on earth would you test just part of a function? If you are already writing a test for it, you may as well cover it to the end. And if it is so complex that it has dozens of branches and/or calls 10 or 20 other custom functions, then yes, you should definitely refactor it.
I have a fairly large test suite written with pytest, meant to perform system tests on an application that involves communication between a server side and a client side. The tests share a huge fixture that is initialized to contain information about the server, which is then used to create a client object and run the tests. The server side may support different feature sets, and this is reflected via attributes that may or may not be present on the server object initialized by the fixture.
Now, quite similarly to this question, I need to skip certain tests if the required attributes are not present in the server object.
The way we have been doing this so far is by adding a decorator to the tests which checks for the attributes and uses pytest.skip if they aren't there.
Example:
import functools
import pytest

def skip_if_not_feature(feature):
    def _skip_if_not_feature(func):
        @functools.wraps(func)
        def wrapper(server, *args, **kwargs):
            if not server.supports(feature):
                pytest.skip("Server does not support {}".format(feature))
            return func(server, *args, **kwargs)
        return wrapper
    return _skip_if_not_feature

@skip_if_not_feature("feature_A")
def test_feature_A(server, ...):
    ...
The problem arises when some of these tests use more fixtures, some of which have relatively time-consuming setup; due to how pytest works, the decorator code which skips them runs after the fixture setup, wasting precious time.
Example:
@skip_if_not_feature("sync_db")
def test_sync_db(server, really_slow_db_fixture, ...):
    ...
I'm looking to optimize the test suite to run faster by making these tests get skipped faster.
I can only think of two ways to do it:
Re-write the decorator not to use the server fixture, and make it run before the fixtures.
Run code which decides to skip the tests between the initialization of the server fixture and the initialization of the rest of the fixtures.
I'm having trouble figuring out whether those two things (running before the fixtures; running between the server fixture and the rest) are possible, and how to do them if they are. I've already gone through the pytest documentation and Google/Stack Overflow results for similar questions and came up with nothing.
You can add a custom marker with the feature name to your tests, and add a skip marker in pytest_collection_modifyitems if needed.
In this case, the test is skipped without loading the fixtures first.
conftest.py
def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "feature: mark test with the needed feature"
    )

def pytest_collection_modifyitems(config, items):
    for item in items:
        feature_mark = item.get_closest_marker("feature")
        if feature_mark and feature_mark.args:
            feature = feature_mark.args[0]
            if not server.supports(feature):
                item.add_marker("skip")
test_db.py
import pytest

@pytest.mark.feature("sync_db")
def test_sync_db(server, really_slow_db_fixture, ...):
    ...
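One caveat: fixtures are not instantiated at collection time, so the server object from the fixture is not directly available inside pytest_collection_modifyitems. The supported features have to come from something that already exists during collection, e.g. configuration. A sketch, where get_supported_features is a hypothetical helper reading that information from config:

import pytest

def pytest_collection_modifyitems(config, items):
    supported = get_supported_features(config)  # hypothetical helper
    for item in items:
        feature_mark = item.get_closest_marker("feature")
        if feature_mark and feature_mark.args:
            feature = feature_mark.args[0]
            if feature not in supported:
                item.add_marker(
                    pytest.mark.skip(reason="Server does not support {}".format(feature))
                )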
The problem with the current decorator is that after the decorator does its thing, the module contains test functions where one or more of the arguments are fixtures. Those are identified as fixtures by the pytest mechanism, and evaluated.
It all lies in your *args.
What you can do is make your decorator spit out a func(server) instead of func(server, *args, **kwargs) when it recognizes that it is going to skip the test.
This way the skipped function won't request the other fixtures, hence they will not be evaluated.
As a matter of fact, you can even return a simple empty lambda: None instead of func, as it is not going to be tested anyway.
For example...
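Something along these lines (untested; feature_supported is a hypothetical check that must work without the server fixture, e.g. reading configuration at import time):

import pytest

def skip_if_not_feature(feature):
    def _skip_if_not_feature(func):
        if feature_supported(feature):  # hypothetical, needs no fixtures
            return func

        def skipper():  # requests no fixtures, so none get set up
            pytest.skip("Server does not support {}".format(feature))

        skipper.__name__ = func.__name__  # keep the reported test name
        return skipper
    return _skip_if_not_feature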
There are several methods that I'd like to treat as 'events', and fire my own functions once they've been called.
I do not manually invoke these.
As someone who is not well-versed in Python but familiar with C#, I'd ideally like to be able to patch into a module method and either alter its functionality or just call back into my own methods.
edit: example added
def my_own_callback_method():
    # do something here
    ...

# Imagine that in a large code base there's a method I'd like to target and
# fire my own callback from...
#
# ...something else invokes a method ('not_my_method') in a third-party
# module ('core').
def not_my_method():
    # the original function executes as it would
    ...
    # but I'd like to pre/post callback my own method from my module
    my_own_callback_method()
Alternatively, it'd be nice to be able to 'patch' a method and alter its functionality. Example below -
# Again, imagine that in a large code base there's a method I'd like to
# target...
# ...but I'd like to alter the way this method works from my own module.
#
# Kind of like...
def my_method(something: str, something_else: int):
    # my own patched version of how the original 'not_my_method' should work
    ...

def not_my_method(something: str, something_else: int):
    return my_method(something, something_else)
If you don't have any control over not_my_method's code, it will be (almost?) impossible, since you want to actually change its source code.
I believe the best you can achieve is wrapping it in your own function that calls my_method after it calls not_my_method, but that would be pretty much it.
Perhaps you are looking at it from the wrong angle. It might be easier to patch the actual event that calls not_my_method than patching not_my_method itself.
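To sketch the wrapping idea: rebinding the attribute on the third-party module makes every lookup through the module hit the wrapper (core and the method names are the hypothetical ones from the question):

import core  # the hypothetical third-party module

_original_not_my_method = core.not_my_method

def _not_my_method_wrapper(*args, **kwargs):
    result = _original_not_my_method(*args, **kwargs)  # original behavior first
    my_own_callback_method()  # post-call hook into my own code
    return result

core.not_my_method = _not_my_method_wrapper

Note that code which grabbed a direct reference earlier (from core import not_my_method) keeps the unpatched function; only lookups through core.not_my_method after the rebinding see the wrapper.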
So I have a Python class, say:
class Nested(object):
    def method_test(self):
        # do_something
        ...
The above class is being maintained by some other group, so I can't change it. Hence we have a wrapper around it, such that:
class NestedWrapper(object):
    def __init__(self):
        self.nested = Nested()

    def call_nested(self):
        self.nested.method_test()
Now, I am writing test cases to test my NestedWrapper. How can I test that, in one of my tests, the underlying Nested.method_test is being called? Is it even possible?
I am using Python's Mock for testing.
UPDATE: I guess I was implicitly implying that I want to do unit testing, not one-off testing. Since most of the responses are suggesting that I use a debugger, I just want to point out that I want it unit tested.
I think you can just mock Nested.method_test and make sure it was called...
from unittest import mock

with mock.patch.object(Nested, 'method_test') as method_test_mock:
    nw = NestedWrapper()
    nw.call_nested()
    method_test_mock.called  # Should be `True`
If using unittest, you could do something like self.assertTrue(method_test_mock.called), or you could make a more detailed assertion by calling one of the more specific assertions on Mock objects.
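For example, as a self-contained sketch (mymodule is a hypothetical import path for the two classes above):

from unittest import TestCase, mock

from mymodule import Nested, NestedWrapper  # hypothetical module path

class NestedWrapperTestCase(TestCase):
    def test_call_nested(self):
        with mock.patch.object(Nested, "method_test") as method_test_mock:
            NestedWrapper().call_nested()
        # A more specific assertion than just checking `.called`:
        method_test_mock.assert_called_once_with()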
I have seen from the pytest docs that we can apply multiple markers at once at the class or module level. I didn't find documentation for doing it at the test function level. Has anybody done this before with success?
I would ideally like to do this as a list of markers, as is done in the above doc for classes, for example (quoting from the docs):
class TestClass:
    pytestmark = [pytest.mark.webtest, pytest.mark.slowtest]
So, the pytest documentation talks about using pytestmark to specify markers at the class and module level, but it doesn't mention anything similar at the test function level. I would have to specify the markers individually on top of each test function to get it marked with all of them. This makes the test code look a little clunky as the number of markers on top of test functions grows.
test_example.py:
import pytest
from unittest import TestCase

pytestmark = [pytest.mark.class1, pytest.mark.class2]

class TestFeature(TestCase):
    @pytest.mark.marker1
    @pytest.mark.marker2
    @pytest.mark.marker3
    def test_function(self):
        assert True
For functions you just repeat the decorator:
@pytest.mark.webtest
@pytest.mark.slowtest
def test_something(...):
    ...
If you want to reuse that for several tests, you should remember that decorators are just functions returning the decorated thing, so several decorators are just a composition:
def compose_decos(*decos):
    def composition(func):
        for deco in reversed(decos):
            func = deco(func)
        return func
    return composition

all_marks = compose_decos(pytest.mark.webtest, pytest.mark.slowtest)

@all_marks
def test_something(...):
    ...
Or you can use general purpose composition such as my funcy library has:
from funcy import compose
all_marks = compose(pytest.mark.webtest, pytest.mark.slowtest)
Note that this way you can compose any decorators, not only pytest marks.
Haven't tried this myself. However, from a quick look at the source, I think the MarkDecorator class is what you want. Try:
mark_names = ["marker1", "marker2", "marker3"]
my_marks = pytest.MarkDecorator(*mark_names)
marked_test_function = my_marks(test_function)
The *mark_names just unpacks mark_names into the constructor arguments of MarkDecorator. MarkDecorator.__call__ then applies the stored marks (self.args) to the parameter, here test_function, to provide a marked test function.
You can also use def unmarked_test_function() ... and test_function=my_marks(unmarked_test_function) so you don't have to change names.
Added explanation: I got this from pytest.mark, which turns out to be a MarkGenerator singleton. MarkGenerator creates MarkDecorator objects, which are then applied as decorators. The above code simulates that process manually, stuffing in multiple markers.
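If constructing a MarkDecorator by hand feels too tied to pytest internals, the same effect is available through plain attribute access on pytest.mark, which is public API. A small sketch using getattr:

import pytest

mark_names = ["marker1", "marker2", "marker3"]

def apply_marks(func):
    # pytest.mark.<name> is created on attribute access, so getattr works
    for name in reversed(mark_names):
        func = getattr(pytest.mark, name)(func)
    return func

@apply_marks
def test_function():
    assert True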