unittest.mock.patch: Context manager vs setUp/tearDown in unittest - python

There seem to be two ways to use unittest.mock.patch: is one way better than the other?
Using a context manager and with statement:
class MyTest(TestCase):
    def test_something(self):
        with patch('package.module.Class') as MockClass:
            assert package.module.Class is MockClass
Or calling start and stop from setUp and tearDown/cleanup:
class MyTest(TestCase):
    def setUp(self):
        patcher = patch('package.module.Class')
        self.MockClass = patcher.start()
        self.addCleanup(patcher.stop)

    def test_something(self):
        assert package.module.Class is self.MockClass
The context manager version is less code, and so arguably easier to read. Is there any reason why I should prefer to use the TestCase setUp/tearDown infrastructure?

The main reason to prefer patching in the setUp would be if you had more than one test that needed that class patched. In that case, you'd need to duplicate the with statement in each test.
If you only have one test that needs the patch, I'd prefer the with statement for readability.
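For illustration, this is the duplication being described, reusing the placeholder package.module.Class path from the question (a minimal sketch, not tied to any real module):
from unittest import TestCase
from unittest.mock import patch

class MyTest(TestCase):
    def test_something(self):
        # the same with-statement has to be repeated...
        with patch('package.module.Class') as MockClass:
            assert package.module.Class is MockClass

    def test_something_else(self):
        # ...in every test that needs the patch, which is what the
        # setUp/addCleanup approach avoids
        with patch('package.module.Class') as MockClass:
            assert package.module.Class is MockClass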

There is yet a third way to use it, as a decorator:
class MyTest(TestCase):
    @unittest.mock.patch('package.module.Class')
    def test_something(self, MockClass):
        # when used as a decorator, patch passes the mock into the test method
        assert package.module.Class is MockClass
This is even less code, but that may not be relevant.
There are a few considerations:
(1) (As pointed out by babbageclunk) if you will need to reuse the patch, then a plain, boring call to construct one in setUp is best, and is easily readable.
(2) If you want any metaprogramming facilities so that you can turn patching on or off when you run the tests, then the decorator approach will save you a lot of trouble. In that case, you can write an additional decorator, or use a global variable (ick), to control whether the patch decorators get applied to the test functions or not. If the patches are embedded inside the function definitions, you have to deal with them manually whenever you want to turn patching off. One simple reason you might want this is to run the tests with no patching, induce lots of failures, and observe which pieces you have not implemented yet (your metaprogramming decorator could, in fact, catch these issues and print nice NotImplemented exceptions for you, or even generate a report of such things). There could be many more reasons to want fine control over whether (and to what degree) patching is "dispatched" in the test suite at a given time.
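As a hedged sketch of that second point (the optionally_patch decorator and the DISABLE_TEST_PATCHING environment variable are hypothetical names invented for this example, and package.module.Class is still the question's placeholder), such a toggle might look roughly like this:
import os
from unittest import TestCase
from unittest.mock import patch

# Hypothetical switch: set DISABLE_TEST_PATCHING=1 to run the suite unpatched.
ENABLE_PATCHING = os.environ.get('DISABLE_TEST_PATCHING') != '1'

def optionally_patch(target):
    # Apply mock.patch(target) only when patching is enabled.
    def decorator(test_func):
        if ENABLE_PATCHING:
            return patch(target)(test_func)
        return test_func  # run the test against the real object instead
    return decorator

class MyTest(TestCase):
    @optionally_patch('package.module.Class')
    def test_something(self, MockClass=None):
        # MockClass is only passed in when patching is enabled
        ...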
The decorator approach is also nice in that (a) it lets you isolate which patches go to which test functions in a manner that is outside of that function, but without committing it to setUp, and (b) it makes it very clear to the reader when a given function requires a given patch.
The context manager version does not seem to have many benefits in this case since it is hardly more readable than the decorator version. But, if there really is only a single case, or a very small set of specific cases, where this is used, then the context manager version would be perfectly fine.

Related

Does Python's unittest have a global setUp for an entire TestSuite?

I'm a bit new to Python's unittest library and I'm currently looking at setting up my Flask server before running any of my integration tests. I know that the unittest.TestCase class allows you to use setUp() before every test case in the class. I also know that the same class has another method called setUpClass() that runs only once for the entire class.
What I'm actually interested is trying to figure out how to do something similar like setUpClass(), but done on an entire unittest.TestSuite. However, I'm having no luck at it.
Sure, I could set up the server for every TestCase, but I would like to avoid doing this.
There is an answer to a separate question that suggests that by overriding unittest.TestResult's startTestRun(), you could have a set-up function that covers the entire test suite. However, I've tried passing the custom TestResult object into unittest.TextTestRunner with no success.
So, how exactly can I do a set up for an entire test suite?
This is not well documented, but I recently needed to do this as well.
The docs mention that TestResult.startTestRun is "Called once before any tests are executed."
As you can see, in the implementation, the method doesn't do anything.
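For reference, the default implementation in the CPython source is essentially a no-op containing only that docstring (paraphrased here, not quoted exactly):
class TestResult(object):
    ...
    def startTestRun(self):
        """Called once before any tests are executed.

        See startTest for a method called before each test.
        """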
I tried subclassing TestResult and all kinds of things. I couldn't make any of that work, so I ended up monkey patching the class.
In the __init__.py of my test package, I did the following:
import unittest

OLD_TEST_RUN = unittest.result.TestResult.startTestRun

def startTestRun(self):
    # whatever custom code you want to run exactly once before
    # any of your tests runs goes here:
    ...

    # just in case future versions do something in this method,
    # we'll call the existing method
    OLD_TEST_RUN(self)

unittest.result.TestResult.startTestRun = startTestRun
There is also a stopTestRun method if you need to run cleanup code after all tests have run.
Note that this does not create a separate version of TestResult. The existing one is used by the unittest module as usual. The only thing we've done is surgically graft our custom implementation of startTestRun onto it.
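Since the original question was about starting a Flask server once for the whole suite, here is a hedged sketch of how the same monkey-patching trick could pair startTestRun with stopTestRun (the start_test_server/stop_test_server helpers are placeholders invented for this example, not part of unittest or Flask):
import unittest

def start_test_server():
    # placeholder: start your Flask app here, e.g. in a background thread
    print("starting test server")

def stop_test_server():
    # placeholder: shut the server down here
    print("stopping test server")

OLD_START = unittest.result.TestResult.startTestRun
OLD_STOP = unittest.result.TestResult.stopTestRun

def startTestRun(self):
    start_test_server()   # runs exactly once, before any test
    OLD_START(self)

def stopTestRun(self):
    OLD_STOP(self)
    stop_test_server()    # runs exactly once, after the last test

unittest.result.TestResult.startTestRun = startTestRun
unittest.result.TestResult.stopTestRun = stopTestRun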

Parameterized skipping for Python unittests: Best practices?

I have the following scenario:
I have a list of "dangerous patterns"; each is a string that contains dangerous characters, such as "%s with embedded ' single quote", "%s with embedded \t horizontal tab" and similar (it's about a dozen patterns).
Each test is to be run
once with a vanilla pattern that does not inject dangerous characters (i.e. "%s"), and
once for each dangerous pattern.
If the vanilla test fails, we skip the dangerous pattern tests on the assumption that they don't make much sense if the vanilla case fails.
Various other constraints:
Stick with the batteries included in Python as far as possible (i.e. unittest is hopefully enough, even if nose would work).
I'd like to keep the contract of unittest.TestCase as far as possible, i.e. the solution should not affect test discovery (which might find everything that starts with test, but then there's also runTest, which may be overridden in the constructor, and more variation).
I tried a few solutions, and I am not happy with any of them:
Writing an abstract class causes unittest to try and run it as a test case (because it quacks like a test case). This can be worked around, but the code is getting ugly fast. Also, a whole lot of functions needs to be overridden, and for several of them the documentation is a bit unclear about what properties need to be implemented in the base class. Plus test discovery would have to be replicated, which means duplicating code from inside unittest.
Writing a function that executes the tests as subtests (via TestCase.subTest), to be called from each test function. This requires boilerplate in every test function and gives just a single test result for the entire series of tests (see the sketch after this list).
Write a decorator. Avoids the test case discovery problem of the abstract class approach, but has all the other problems.
Write a decorator that takes a TestCase and returns a TestSuite. This worked best so far, but I don't know whether I can add the same TestCase object multiple times. Or whether TestCase objects can be copied or not (I have control over all of them but I don't know what the base class does or expects in this regard).
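For reference, a minimal sketch of the subTest approach from the second bullet might look like this (the pattern list is taken from the question, but check_pattern and the skip-on-vanilla-failure logic are hypothetical stand-ins for the real code under test):
import unittest

DANGEROUS_PATTERNS = [
    "%s with embedded ' single quote",
    "%s with embedded \t horizontal tab",
]

def check_pattern(pattern):
    # hypothetical stand-in for the real code under test
    return "%s" in pattern

class PatternTest(unittest.TestCase):
    def test_patterns(self):
        # vanilla case first; bail out before the dangerous ones if it fails
        if not check_pattern("%s"):
            self.fail("vanilla pattern failed; skipping dangerous patterns")
        for pattern in DANGEROUS_PATTERNS:
            with self.subTest(pattern=pattern):
                self.assertTrue(check_pattern(pattern))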
What's the best approach?

How to avoid this python dependency?

I have a python Presenter class that has a method which creates an instance of a different Presenter class:
class MainPresenter(object):
    def showPartNumberSelectionDialog(self, pn):
        view = self.view.getPartNumberSelectionDialog(pn)
        dialog = SecondPresenter(self.model, view)
        dialog.show()
My intent was to write a separate Presenter class for each window in my application, in order to keep things orderly. Unfortunately, I find it difficult to test the showPartNumberSelectionDialog method, particularly to test that dialog.show() was called, because the instance is created within the method call. So, even if I patch SecondPresenter using Python's mock framework, it still doesn't catch the call on the local dialog instance.
So, I have two questions:
How can I change my approach in order to make this code more testable?
Is it considered good practice to test simple code blocks such as this?
It is possible to patch SecondPresenter and check both how you call it and whether your code calls show().
Using the mock framework and patch, you replace the SecondPresenter class with a Mock object. Take care: the dialog instance will be the return value of the mock used to replace the original class. Moreover, you should take care of where to patch; I can only guess the module layout here, but it will not be far from the final version of the test:
@patch("mainpresentermodule.SecondPresenter", autospec=True)
def test_part_number_selection_dialog(self, mock_second_presenter_class):
    main = MainPresenter()
    main.showPartNumberSelectionDialog(123456)
    dialog = mock_second_presenter_class.return_value
    dialog.show.assert_called_with()
I used autospec=True just because I consider it a best practice; take a look at Autospeccing for more details.
You can also patch main.view and main.model to test how your code calls the dialog's constructor... but you should use mock without abusing it: the more things you mock and patch, the more your tests will be tangled up with the code.
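For example, a hedged sketch of that extra check (assuming the same hypothetical mainpresentermodule layout as in the test above) could assert the constructor arguments as well:
from unittest import TestCase
from unittest.mock import Mock, patch

from mainpresentermodule import MainPresenter  # hypothetical module name

class TestMainPresenter(TestCase):
    @patch("mainpresentermodule.SecondPresenter", autospec=True)
    def test_dialog_constructed_with_model_and_view(self, mock_second_presenter_class):
        main = MainPresenter()
        # replace the collaborators so we can inspect the constructor call
        main.view = Mock()
        main.model = Mock()

        main.showPartNumberSelectionDialog(123456)

        # the dialog should be built from the presenter's model and the view
        # returned by getPartNumberSelectionDialog(pn)
        mock_second_presenter_class.assert_called_once_with(
            main.model, main.view.getPartNumberSelectionDialog.return_value)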
As for the second question, I think it is good practice to test these kinds of blocks too, but try to patch and mock as little as possible, only what you cannot use in the testing environment: your tests will be more flexible and you will be able to refactor your code while rewriting less test code.

pytest setup_class() after fixture initialization

I am experimenting with pytest and got stuck on some behaviour that isn't obvious to me. I have a session-scoped fixture and use it like this:
@pytest.mark.usefixtures("myfixt")
class TestWithMyFixt(object):
    @classmethod
    def setup_class(cls):
        ...
And when I run the tests, I see that setup_class() is called before the myfixt() fixture. What is the purpose behind such behaviour? To me, it should run after fixture initialization, because it uses the fixture. How can I make setup_class() run after the session fixture has been initialized?
Thanks in advance!
I found this question while looking for similar issue, and here is what I came up with.
You can create fixtures with different scopes. The possible scopes are session, module, class and function.
A fixture can be defined as a method in the class.
A fixture can have the autouse=True flag.
If we combine these, we will get this nice approach which doesn't use setup_class at all (in py.test, setup_class is considered an obsolete approach):
class TestWithMyFixt(object):
    @pytest.fixture(autouse=True, scope='class')
    def _prepare(self, myfixt):
        ...
With such an implementation, _prepare will be started just once, before the first test within the class, and will be finalized after the last test within the class.
By the time the _prepare fixture is started, myfixt has already been applied, since it is a dependency of _prepare.
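Putting it together, a hedged, self-contained sketch (the fixture bodies and print statements are illustrative only; the session fixture would normally live in conftest.py) might look like this:
import pytest

@pytest.fixture(scope='session')
def myfixt():
    print("session fixture set up")   # runs once per test session
    yield
    print("session fixture torn down")

class TestWithMyFixt(object):
    @pytest.fixture(autouse=True, scope='class')
    def _prepare(self, myfixt):
        # myfixt has already run by the time we get here
        print("class-level preparation")
        yield
        print("class-level cleanup")

    def test_one(self):
        assert True

    def test_two(self):
        assert True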
Looking at the code behind usefixtures it appears the fixtures are handled by FixtureManager [1] which works on a per instance level. By marking a class with usefixtures, it appears to denote to the test framework that each of the callables contained within the scope should use those fixtures, but it does not apply the fixtures at that point. Instead, it waits for the callable to be called at which point it checks all of the managers for any updates it should do (including the FixtureManager) and applies the pieces appropriately. I believe this means that for each test within your class, it will reapply the fixture so that each test starts from a common base point which would be better behavior than the contrary.
Thus, your setup_class is called first because that is the earliest point in the order of operations. It sounds like you should move your setup_class logic into your fixture, which will cause it to run at the time the fixture is applied.
[1] - https://github.com/pytest-dev/pytest/blob/master/_pytest/python.py#L1628
I know this is an old post, but the answer is pretty simple; maybe it will be useful for others.
You have to call the super(...).setUpClass() method in your current setUpClass().
setUpClass is called once, before the test methods, and performs all the actions; when you call super(...).setUpClass() it applies the fixtures.
Note 1: I made some small changes to the code to make it prettier.
Note 2: this method must be called setUpClass; the name is mandatory.
class TestWithMyFixt(APITestCase):
    fixtures = ["myfixt"]

    @classmethod
    def setUpClass(cls):
        super(TestWithMyFixt, cls).setUpClass()
        ...
This is just my opinion, but maybe the purpose of this behavior is to let the developer choose when to load the fixtures.

Unittest in Django. What is the relationship between a TestCase class and its methods?

I am doing some unit testing stuff in Django. What is the relationship between the TestCase class and the actual methods in this class? What is the best practice for organizing this stuff?
For example, I have
class Test(TestCase):
    def __init__(self):
        ...

    def testTestA(self):
        #test code

    def testTestB(self):
        #test code
If I organize it in this way:
class Test1(TestCase):
    def __init__(self):
        ...

    def testTestA(self):
        #test code

class Test2(TestCase):
    def __init__(self):
        ...

    def testTestB(self):
        ...
Which is better and what is the difference?
Thanks!
You rarely write __init__ for a TestCase. So strike that from your mental model of unit testing.
You sometimes write a setUp and tearDown. Django automates much of this, however, and you often merely provide a static fixtures= variable that's used to populate the test database.
More fundamentally, what's a test case?
A test case is a "fixture" -- a configuration of a unit under test -- that you can then exercise. Ideally each TestCase has a setUp method that creates one fixture. Each method will perform a manipulation on that fixture and assert that the manipulation worked.
However. There's nothing dogmatic about that.
In many cases -- particularly when exercising Django models -- there just aren't that many interesting manipulations.
If you don't override save in a model, you don't really need to do CRUD testing. You can (and should) trust the ORM. [If you don't trust it, then get a new framework that you do trust.]
If you have a few properties in a models class, you might not want to create a distinct method to test each property. You might want to simply test them sequentially in a single method of a single TestCase.
If, OTOH, you have a really complex class with lots of state changes, you will need a distinct TestCase to configure an object in one state, manipulate it into another state and assert that the changes all behaved correctly.
View functions, since they aren't -- technically -- stateful, don't match the unit-test philosophy perfectly. When doing setUp to create a unit in a known state, you're using the client interface to step through some interactions to put a session into a known state. Once the session has reached the desired state, your various test methods will exercise that session and assert that things worked.
Summary
Think of TestCase as a "Setup" or "Context" in which tests will be run.
Think of each method as a "when_X_should_Y" statement. Some folks suggest that kind of name ("test_when_x_should_y"). So the method will perform "X" and assert that "Y" was the response.
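To make the summary concrete, here is a minimal hedged sketch of that style (the Order class is a toy stand-in invented for this example, not an actual model from the question):
from django.test import TestCase

class Order(object):
    # toy stand-in for a model, just to make the sketch self-contained
    def __init__(self):
        self.status = "open"

    def cancel(self):
        if self.status == "cancelled":
            raise ValueError("already cancelled")
        self.status = "cancelled"

class OrderCancellationTest(TestCase):
    # The TestCase is the context: one fixture, several manipulations.

    def setUp(self):
        # one configuration of the unit under test
        self.order = Order()

    def test_when_cancelled_should_set_status(self):
        self.order.cancel()
        self.assertEqual(self.order.status, "cancelled")

    def test_when_cancelled_twice_should_raise(self):
        self.order.cancel()
        with self.assertRaises(ValueError):
            self.order.cancel()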
It's kind of hard to answer this question about the proper organization of the test cases and test methods above in general...
However, splitting the tests into test cases serves two major purposes:
1) Organizing the tests around some logical groups, such as CustomerViewTests, OrdersAggregationTests, etc.
2) Sharing the same setUp() and tearDown() methods, for tests which require the same, well, setup and tear down.
More information and examples can be found in the unittest documentation.
