I am doing some unit testing in Django. What is the relationship between a TestCase class and the actual test methods in that class? What is the best practice for organizing them?
For example, I have
class Test(TestCase):
    def __init__(self):
        ...

    def testTestA(self):
        ...  # test code

    def testTestB(self):
        ...  # test code
And if I organize it this way:

class Test1(TestCase):
    def __init__(self):
        ...

    def testTestA(self):
        ...  # test code


class Test2(TestCase):
    def __init__(self):
        ...

    def testTestB(self):
        ...
Which is better and what is the difference?
Thanks!
You rarely write __init__ for a TestCase. So strike that from your mental model of unit testing.
You sometimes write a setUp and tearDown. Django automates much of this, however, and you often merely provide a fixtures class attribute that's used to populate the test database.
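For instance, a minimal sketch (the fixture file name and the model are made up for illustration):

from django.test import TestCase


class OrderTests(TestCase):
    # Hypothetical fixture file; Django loads the listed fixtures
    # into the test database for these tests.
    fixtures = ["orders.json"]

    def test_fixture_data_is_present(self):
        ...  # assertions against the loaded data go here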
More fundamentally, what's a test case?
A test case is a "fixture" -- a configuration of a unit under test -- that you can then exercise. Ideally each TestCase has a setUp method that creates one fixture. Each method will perform a manipulation on that fixture and assert that the manipulation worked.
However. There's nothing dogmatic about that.
In many cases -- particularly when exercising Django models -- there just aren't that many interesting manipulations.
If you don't override save in a model, you don't really need to do CRUD testing. You can (and should) trust the ORM. [If you don't trust it, then get a new framework that you do trust.]
If you have a few properties in a model class, you might not want to create a distinct method to test each property. You might want to simply test them sequentially in a single method of a single TestCase.
If, OTOH, you have a really complex class with lots of state changes, you will need a distinct TestCase to configure an object in one state, manipulate it into another state, and assert that the changes all behaved correctly.
View functions, since they aren't -- technically -- stateful, don't match the unit test philosophy perfectly. When doing setUp to create a unit in a known state, you're using the client interface to step through some interactions to create a session in a known state. Once the session has reached the desired state, your various test methods will exercise that session and assert that things worked.
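A rough sketch of that idea (the URLs, credentials, and the assumption that a user already exists in a fixture are all invented for illustration, not part of the original post):

from django.test import TestCase


class CheckoutSessionTests(TestCase):
    # Assumed fixture containing a user "alice" with password "secret".
    fixtures = ["test_users.json"]

    def setUp(self):
        # Step the session into a known state via the test client.
        self.client.login(username="alice", password="secret")
        self.client.post("/cart/add/", {"item_id": 1})

    def test_when_cart_has_item_should_show_checkout_link(self):
        response = self.client.get("/cart/")
        self.assertContains(response, "Checkout")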
Summary
Think of TestCase as a "Setup" or "Context" in which tests will be run.
Think of each method as a "when_X_should_Y" statement. Some folks suggest that kind of name ("test_when_x_should_y"). So the method will perform "X" and assert that "Y" was the response.
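Concretely, a minimal sketch using a toy class (the Account class is invented purely for illustration):

import unittest


class Account:
    """Toy class used only to illustrate; not from the original post."""
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount


class AccountWithOpeningBalanceTests(unittest.TestCase):
    """One context: an account that starts with money in it."""

    def setUp(self):
        # The fixture: one configured unit under test.
        self.account = Account(balance=100)

    def test_when_depositing_should_increase_balance(self):
        self.account.deposit(50)
        self.assertEqual(150, self.account.balance)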
It's kind of hard to answer this question regarding the proper organization of cases A and B and test methods 1, 2 and 3...
However, splitting the tests into test cases serves two major purposes:
1) Organizing the tests around some logical groups, such as CustomerViewTests, OrdersAggregationTests, etc.
2) Sharing the same setUp() and tearDown() methods, for tests which require the same, well, setup and tear down.
More information and examples can be found in the unittest documentation.
I'm a bit new to Python's unittest library and I'm currently looking at setting up my Flask server before running any of my integration tests. I know that the unittest.TestCase class allows you to run setUp() before every test case in the class. I also know that the same class has another method called setUpClass() that runs only once for the entire class.
What I'm actually interested is trying to figure out how to do something similar like setUpClass(), but done on an entire unittest.TestSuite. However, I'm having no luck at it.
Sure, I could set up the server for every TestCase, but I would like to avoid doing this.
There is an answer on a separate question that suggests that by overriding unittest.TestResult's startTestRun(), you could have a set-up function that covers the entire test suite. However, I've tried passing the custom TestResult object into unittest.TextTestRunner with no success.
So, how exactly can I do a set up for an entire test suite?
This is not well documented, but I recently needed to do this as well.
The docs mention that TestResult.startTestRun is "Called once before any tests are executed."
If you look at the implementation, the method doesn't do anything by default.
I tried subclassing TestResult and all kinds of things. I couldn't make any of that work, so I ended up monkey patching the class.
In the __init__.py of my test package, I did the following:
import unittest

OLD_TEST_RUN = unittest.result.TestResult.startTestRun


def startTestRun(self):
    # whatever custom code you want to run exactly once before
    # any of your tests runs goes here:
    ...

    # just in case future versions do something in this method,
    # we'll call the existing method
    OLD_TEST_RUN(self)


unittest.result.TestResult.startTestRun = startTestRun
There is also a stopTestRun method if you need to run cleanup code after all tests have run.
Note that this does not make a separate version of TestResult. The existing one is used by the unittest module as usual. The only thing we've done is surgically graft on our custom implementation of startTestRun.
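For completeness, a sketch of the same trick for suite-wide cleanup via stopTestRun (documented as being called once after all tests are executed); this mirrors the pattern above rather than coming from the original answer:

import unittest

OLD_STOP_TEST_RUN = unittest.result.TestResult.stopTestRun


def stopTestRun(self):
    # preserve any existing behaviour first
    OLD_STOP_TEST_RUN(self)
    # suite-wide cleanup goes here, e.g. shutting down the Flask server:
    ...


unittest.result.TestResult.stopTestRun = stopTestRun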
I am experimenting with pytest and got stuck on some behavior that isn't obvious to me. I have a session-scope fixture and use it like this:
@pytest.mark.usefixtures("myfixt")
class TestWithMyFixt(object):
    @classmethod
    def setup_class(cls):
        ...
And when I run the tests, I see that setup_class() is called before the myfixt() fixture. What is the purpose behind such behaviour? To me, it should run after fixture initialization because it uses the fixture. How can I make setup_class() run after the session fixture is initialized?
Thanks in advance!
I found this question while looking for a similar issue, and here is what I came up with.
You can create fixtures with different scope. Possible scopes are session, module, class and function.
A fixture can be defined as a method in the class.
A fixture can have the autouse=True flag.
If we combine these, we will get this nice approach which doesn't use setup_class at all (in py.test, setup_class is considered an obsolete approach):
class TestWithMyFixt(object):
    @pytest.fixture(autouse=True, scope='class')
    def _prepare(self, myfixt):
        ...
With such an implementation, _prepare runs just once, before the first test in the class, and is finalized after the last test in the class.
At the time the _prepare fixture starts, myfixt has already been applied, since it is a dependency of _prepare.
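Putting the pieces together, a sketch (the body of myfixt is a placeholder; in a real project the session-scoped fixture would typically live in conftest.py):

import pytest


@pytest.fixture(scope="session")
def myfixt():
    resource = {"started": True}   # e.g. start a server, open a connection
    yield resource
    resource["started"] = False    # session-wide teardown


class TestWithMyFixt(object):
    @pytest.fixture(autouse=True, scope="class")
    def _prepare(self, myfixt, request):
        # Runs once per class, after myfixt exists. Each test gets a fresh
        # instance of the class, so share state via request.cls, not self.
        request.cls.resource = myfixt
        yield
        # class-level teardown would go here

    def test_uses_prepared_state(self):
        assert self.resource["started"]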
Looking at the code behind usefixtures, it appears the fixtures are handled by FixtureManager [1], which works at a per-instance level. Marking a class with usefixtures appears to tell the test framework that each of the callables contained within that scope should use those fixtures, but it does not apply the fixtures at that point. Instead, it waits for the callable to be called, at which point it checks all of the managers for any updates it should do (including the FixtureManager) and applies the pieces appropriately. I believe this means that for each test within your class, it will reapply the fixture so that each test starts from a common base point, which is better behavior than the contrary.
Thus, your setup_class is called because that's the earliest order of operations. It sounds like you should put your setup_class logic into your fixture, which will cause it to be called at the time the fixture is applied.
[1] - https://github.com/pytest-dev/pytest/blob/master/_pytest/python.py#L1628
I know this is an old post, but the answer is pretty simple; maybe it will be useful for others.
You have to call the super(...).setUpClass() method in your own setUpClass().
setUpClass is called once before the test methods and performs all the setup actions. When you call super(...).setUpClass(), it applies the fixtures.
Note 1: I made some small changes to the code to make it prettier.
Note 2: the method must be named setUpClass; that is mandatory.
class TestWithMyFixt(APITestCase):
    fixtures = ["myfixt"]

    @classmethod
    def setUpClass(cls):
        super(TestWithMyFixt, cls).setUpClass()
        ...
This is just my opinion, but maybe the purpose of this behavior is to let the developer choose when they want to load the fixtures.
There seem to be two ways to use unittest.mock.patch: is one way better?
Using a context manager and with statement:
class MyTest(TestCase):
    def test_something(self):
        with patch('package.module.Class') as MockClass:
            assert package.module.Class is MockClass
Or calling start and stop from setUp and tearDown/cleanup:
class MyTest(TestCase):
    def setUp(self):
        patcher = patch('package.module.Class')
        self.MockClass = patcher.start()
        self.addCleanup(patcher.stop)

    def test_something(self):
        assert package.module.Class is self.MockClass
The context manager version is less code, and so arguably easier to read. Is there any reason why I should prefer to use the TestCase setUp/tearDown infrastructure?
The main reason to prefer patching in the setUp would be if you had more than one test that needed that class patched. In that case, you'd need to duplicate the with statement in each test.
If you only have one test that needs the patch, I'd prefer the with statement for readability.
There is yet a third way to use it, as a decorator:
class MyTest(TestCase):
    @unittest.mock.patch('package.module.Class')
    def test_something(self, MockClass):
        # the decorator passes the created mock in as an argument
        assert package.module.Class is MockClass
This is even less code, but that may not be relevant.
There are a few considerations:
1) (as pointed out by babbageclunk) if you will need to reuse the patch, then a plain, boring call to construct one in setUp is best, and is easily readable.
2) if you want to create any metaprogramming facilities so that you can turn patching on or off when you run tests, then the decorator approach will save you a lot of trouble. In that case, you can write an additional decorator, or use a global variable (ick), to control whether the patch decorators get applied to the test functions or not (see the sketch below). If the patches are embedded inside the function definitions, then you have to deal with them manually if you ever want to turn off patching when running tests. One simple reason why you might want this is merely to run the tests with no patching, to induce lots of failures and observe which pieces you have not implemented yet (your decorator for metaprogramming the patches could, in fact, catch these issues and print nice NotImplemented exceptions for you, or even generate a report containing such things). There could be many more reasons to want fine control over whether (and to what degree) patching is "dispatched" in the test suite at a given time.
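As a rough illustration of point 2, a sketch of a toggleable patch decorator (the maybe_patch name and the DISABLE_TEST_PATCHES switch are made up, not part of any library):

import functools
import importlib
import os
from unittest import mock

# Made-up switch: set DISABLE_TEST_PATCHES=1 to run the tests against the
# real objects and see which pieces are not implemented yet.
PATCHING_ENABLED = os.environ.get("DISABLE_TEST_PATCHES") != "1"


def maybe_patch(target, **kwargs):
    """Behaves like mock.patch(target) unless patching is globally disabled."""
    if PATCHING_ENABLED:
        return mock.patch(target, **kwargs)

    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kw):
            # Keep the test signature compatible with the patched version:
            # pass the real object where the mock would normally be injected.
            module_path, attr = target.rsplit(".", 1)
            real = getattr(importlib.import_module(module_path), attr)
            return func(self, real, *args, **kw)
        return wrapper
    return decorator


# Usage mirrors the decorator example above:
#   @maybe_patch('package.module.Class')
#   def test_something(self, MockClass):
#       ...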
The decorator approach is also nice in that (a) it lets you isolate which patches go to which test functions in a manner that is outside of that function, but without committing it to setUp, and (b) it makes it very clear to the reader when a given function requires a given patch.
The context manager version does not seem to have many benefits in this case since it is hardly more readable than the decorator version. But, if there really is only a single case, or a very small set of specific cases, where this is used, then the context manager version would be perfectly fine.
This is an extension of: Unit Testing Interfaces in Python
My problem is that the number of classes that satisfy an interface will eventually run into the thousands. Different developers work on different subclasses.
We can't have a failing unit test for one subclass fail tests for other subclasses. Essentially, I need to create a new unittest.TestCase type for each subclass satisfying the interface.
It would be nice to be able to do this without having to modify the test module. (I'd like to avoid updating the unit test module every time a new subclass satisfying the interface is added).
I want to be able to create a unittest.TestCase class type automatically for each class satisfying the interface. This can be done using metaclasses.
But these classes need to be added to the test module for testing. Can this be done during class definition without requiring modifications to the test module?
If you are writing separate subclasses, there must be differences among them that will need to be tested in addition to their successful implementation of the interface. Write a method called satisfies_the_thousand_class_interface that tests the interface, add it to your custom TestCase class, then have each of your thousand test cases (one for each subclass) also invoke that method in addition to all of the specialized testing they do.
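A sketch of what that looks like (the WidgetAdapter class, the methods the interface requires, and the check's name are all illustrative, not from the original question):

import unittest


class WidgetAdapter:
    """Toy implementation standing in for one of the thousands of subclasses."""
    def connect(self):
        return True

    def close(self):
        return True


class InterfaceConformanceMixin:
    """Shared interface checks, mixed into each subclass's own TestCase."""
    implementation_class = None  # each concrete TestCase points this at its class

    def satisfies_the_interface(self):
        instance = self.implementation_class()
        # The assumed interface: every implementation exposes these methods.
        self.assertTrue(callable(getattr(instance, "connect", None)))
        self.assertTrue(callable(getattr(instance, "close", None)))


class WidgetAdapterTests(InterfaceConformanceMixin, unittest.TestCase):
    implementation_class = WidgetAdapter

    def test_satisfies_the_interface(self):
        self.satisfies_the_interface()

    def test_widget_specific_behaviour(self):
        # specialised tests for this subclass only
        self.assertTrue(WidgetAdapter().connect())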
Can anyone explain the use of Python's setUp and tearDown methods while writing test cases, apart from the fact that setUp is called immediately before the test method and tearDown is called immediately after it?
In general you add all prerequisite steps to setUp and all clean-up steps to tearDown.
You can read more, with examples, in the unittest documentation:
When a setUp() method is defined, the test runner will run that method prior to each test. Likewise, if a tearDown() method is defined, the test runner will invoke that method after each test.
For example, you have a test that requires items to exist, or a certain state -- so you put these actions (creating object instances, initializing the db, preparing rules and so on) into setUp.
Also, as you know, each test should end in the place where it started -- this means that we have to restore the app to its initial state -- e.g. closing files and connections, removing newly created items, rolling back transactions and so on -- all these steps are to be included in tearDown.
So the idea is that the test itself should contain only the actions to be performed on the test object to get the result, while setUp and tearDown are the methods that help you keep your test code clean and flexible.
You can create a setUp and tearDown for a bunch of tests and define them in a parent class -- that way it is easy to maintain such tests and to update the common preparation and cleanup.
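A minimal sketch of that parent-class approach (the class names are invented; a temporary directory stands in for whatever common resource your tests prepare):

import shutil
import tempfile
import unittest


class DatabaseTestBase(unittest.TestCase):
    """Hypothetical parent class holding the common setUp/tearDown."""

    def setUp(self):
        # common preparation shared by all child test cases
        self.workdir = tempfile.mkdtemp()

    def tearDown(self):
        # common cleanup: restore the initial state
        shutil.rmtree(self.workdir)


class CustomerTests(DatabaseTestBase):
    def test_workdir_is_available(self):
        self.assertTrue(self.workdir)


class OrderTests(DatabaseTestBase):
    def test_workdir_is_available_here_too(self):
        self.assertTrue(self.workdir)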
You can use these to factor out code common to all tests in the test suite.
If you have a lot of repeated code in your tests, you can make them shorter by moving this code to setUp/tearDown.
You might use this for creating test data (e.g. setting up fakes/mocks), or stubbing out functions with fakes.
If you're doing integration testing, you can check environmental pre-conditions in setUp, and skip the test if something isn't set up properly (see the sketch after the example below).
For example:
class TurretTest(unittest.TestCase):
    def setUp(self):
        self.turret_factory = TurretFactory()
        self.turret = self.turret_factory.CreateTurret()

    def test_turret_is_on_by_default(self):
        self.assertEqual(True, self.turret.is_on())

    def test_turret_can_be_turned_off(self):
        self.turret.turn_off()
        self.assertEqual(False, self.turret.is_on())
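And for the earlier point about integration tests, a sketch assuming a hypothetical INTEGRATION_DB_URL environment variable:

import os
import unittest


class OrdersIntegrationTest(unittest.TestCase):
    def setUp(self):
        # Check an environmental pre-condition and skip if it isn't met.
        self.db_url = os.environ.get("INTEGRATION_DB_URL")
        if not self.db_url:
            self.skipTest("INTEGRATION_DB_URL is not set")

    def test_can_reach_database(self):
        # placeholder assertion; real connection logic would go here
        self.assertTrue(self.db_url.startswith("postgres://"))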
Suppose you have a suite with 10 tests. 8 of the tests share the same setup/teardown code. The other 2 don't.
setUp and tearDown give you a nice way to refactor those 8 tests. Now what do you do with the other 2 tests? You'd move them to another test case/suite. So using setUp and tearDown also gives you a natural way to break the tests into cases/suites.