Recently, during his talk at PyCon 2016, Ned Batchelder noted:
If you are using unittest to write your tests, definitely use
addCleanup, it's much better than tearDown.
Up until now, I've never used addCleanup() and have gotten used to the setUp()/tearDown() pair of methods for the "set up" and "tear down" phases of a test.
Why should I switch to addCleanup() instead of tearDown()?
It was also discussed recently in the "Python unittest with Robert Collins" podcast episode.
Per the addCleanup doc string:
Cleanup items are called even if setUp fails (unlike tearDown)
addCleanup can be used to register multiple functions, so you could use
separate functions for each resource you wish to clean up. That would allow your
code to be a bit more reusable/modular.
Functions registered with addCleanup() will all run even if one of them fails, and they will run even if setUp() fails. You should also consider using pytest.
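For illustration, here is a minimal sketch of my own (not from the docs) that registers one cleanup per resource in setUp(); the cleanups run in last-in-first-out order, and all of them run even if one of them raises:

import os
import sqlite3
import tempfile
import unittest

class MultipleCleanups(unittest.TestCase):
    def setUp(self):
        handle, self.path = tempfile.mkstemp()
        os.close(handle)
        self.addCleanup(os.remove, self.path)    # one cleanup per resource
        self.conn = sqlite3.connect(":memory:")
        self.addCleanup(self.conn.close)         # cleanups run in LIFO order

    def test_uses_both_resources(self):
        self.assertTrue(os.path.exists(self.path))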
Another good thing about addCleanup is that it just works as you'd expect.
For example, if you call it in a setUp method, then every test method will call the cleanup function at the end.
If you call it in a test method, only that method calls the cleanup function.
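To make that concrete, a small sketch of my own showing both scopes (the print calls just mark when each cleanup fires):

import unittest

class ScopeDemo(unittest.TestCase):
    def setUp(self):
        # Registered in setUp(), so it runs after *every* test method.
        self.addCleanup(print, "runs after every test")

    def test_one(self):
        # Registered inside a single test, so it runs only after test_one.
        self.addCleanup(print, "runs only after test_one")

    def test_two(self):
        pass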
I'm a bit new to Python's unittest library, and I'm currently looking at setting up my Flask server before running any of my integration tests. I know that the unittest.TestCase class lets you use setUp() before every test case in the class. I also know that the same class has another method called setUpClass() that runs only once for the entire class.
What I'm actually interested in is figuring out how to do something similar to setUpClass(), but for an entire unittest.TestSuite. However, I'm having no luck at it.
Sure, I could set up the server for every TestCase, but I would like to avoid doing this.
There is an answer to a separate question suggesting that by overriding unittest.TestResult's startTestRun(), you can have a set-up function that covers the entire test suite. However, I've tried passing the custom TestResult object into unittest.TextTestRunner with no success.
So, how exactly can I do a set up for an entire test suite?
This is not well documented, but I recently needed to do this as well.
The docs mention that TestResult.startTestRun is "Called once before any tests are executed."
As you can see, in the implementation, the method doesn't do anything.
I tried subclassing TestResult and all kinds of things. I couldn't make any of that work, so I ended up monkey patching the class.
In the __init__.py of my test package, I did the following:
import unittest

OLD_TEST_RUN = unittest.result.TestResult.startTestRun

def startTestRun(self):
    # whatever custom code you want to run exactly once before
    # any of your tests runs goes here:
    ...

    # just in case future versions do something in this method
    # we'll call the existing method
    OLD_TEST_RUN(self)

unittest.result.TestResult.startTestRun = startTestRun
There is also a stopTestRun method if you need to run cleanup code after all tests have run.
Note that this does not create a separate version of TestResult. The existing class is used by the unittest module as usual; the only thing we've done is surgically graft on our custom implementation of startTestRun.
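Note also that this relies on the test package's __init__.py being imported before the run starts, which is the case for the usual entry points such as python -m unittest discover; by the time the runner creates its TestResult, the patched method is already in place.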
I'm a bit confused regarding unit tests. I have an embedded system that I am testing from the outside with Python.
The issue is that after each test passes I need to reset the system state. However, if a test fails, it can leave the system in an arbitrary state that I need to reset. At the end of each test I go back to the initial state, but if an assertion fails, that part is skipped.
Therefore, what's the proper way to handle this situation? Some ideas I have are:
Put each test in a try/except/finally block, but that doesn't seem right (unittest already handles test exceptions).
Put each test in a different class and invoke the tearDown() method at the end of it.
Call initSystemState() at the beginning of each test to go back to the initial state (but this is slower than resetting only what needs to be reset at the end of the test).
Any suggestions? Ideally, if I have a testSpeed() test, there should be a testSpeedOnStop() function called at the end. Perhaps unittest is not the right tool for this job, since all the functions have side effects and work together, so maybe I should lean more towards integration-testing libraries, which I haven't explored.
Setting state is done in the setUp(self) method of the test class.
This method is automatically called prior to each test and provides a fresh state for each instance of the tests.
After each test, the tearDown method is run to possibly clean up remnants of failed tests.
You can also write class-level setUpClass / tearDownClass methods to be executed once before and after all the tests in a class; more elaborate tests may require stub or mock objects to be built.
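As a rough sketch of that idea (initSystemState(), stopMotor(), setSpeed() and readSpeed() are hypothetical stand-ins for the asker's device calls, not a real API):

import unittest

class SpeedTests(unittest.TestCase):
    def setUp(self):
        # Bring the device into a known state before every test,
        # regardless of how the previous test ended.
        initSystemState()                  # hypothetical reset function

    def tearDown(self):
        # Runs after every test, even when an assertion in the test body
        # fails, so the "stop" step is no longer skipped.
        stopMotor()                        # hypothetical device call

    def test_speed(self):
        setSpeed(100)                      # hypothetical device call
        self.assertEqual(readSpeed(), 100)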
I have the following scenario:
I have a list of "dangerous patterns"; each is a string that contains dangerous characters, such as "%s with embedded ' single quote", "%s with embedded \t horizontal tab" and similar (it's about a dozen patterns).
Each test is to be run
once with a vanilla pattern that does not inject dangerous characters (i.e. "%s"), and
once for each dangerous pattern.
If the vanilla test fails, we skip the dangerous pattern tests on the assumption that they don't make much sense if the vanilla case fails.
Various other constraints:
Stick with the batteries included in Python as far as possible (i.e. unittest is hopefully enough, even if nose would work).
I'd like to keep the contract of unittest.TestCase as far as possible. That is, the solution should not affect test discovery (which might find everything that starts with test, but then there's also runTest, which may be overridden via the constructor, and more variation).
I tried a few solutions, and I am not happy with any of them:
Writing an abstract base class causes unittest to try to run it as a test case (because it quacks like a test case). This can be worked around, but the code gets ugly fast. Also, a whole lot of methods need to be overridden, and for several of them the documentation is a bit unclear about which properties need to be implemented in the base class. Plus, test discovery would have to be replicated, which means duplicating code from inside unittest.
Writing a function that executes the checks as subtests (via self.subTest), to be called from each test function, roughly as sketched after this list. This requires boilerplate in every test function and gives just a single test result for the entire series of checks.
Writing a decorator. This avoids the test-case discovery problem of the abstract class approach, but has all the other problems.
Writing a decorator that takes a TestCase and returns a TestSuite. This worked best so far, but I don't know whether I can add the same TestCase object multiple times, or whether TestCase objects can be copied (I have control over all of them, but I don't know what the base class does or expects in this regard).
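For concreteness, the subtest approach mentioned above looks roughly like this (check_pattern is a hypothetical stand-in for the real assertion logic):

import unittest

DANGEROUS_PATTERNS = [
    "%s with embedded ' single quote",
    "%s with embedded \t horizontal tab",
    # ... about a dozen patterns
]

class PatternTest(unittest.TestCase):
    def check_pattern(self, pattern):
        # hypothetical stand-in for the real checks
        self.assertIn("%s", pattern)

    def test_patterns(self):
        # Vanilla case first; skip the dangerous variants if it fails.
        try:
            self.check_pattern("%s")
        except AssertionError:
            self.skipTest("vanilla case failed")
        for pattern in DANGEROUS_PATTERNS:
            with self.subTest(pattern=pattern):
                self.check_pattern(pattern)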
What's the best approach?
I am working on a project where it would be very handy if I could mock out urlopen during testing. Someone pointed out to me that this is possible (and easy) by mocking out an opener and using urllib2.install_opener.
However, I'm concerned because of this note in the documentation:
Install an OpenerDirector instance as the default global opener.
Doesn't this mean that my program could unexpectedly break if other code that I rely on is using urlopen?
The implications are exactly what you'd expect. All subsequent calls to urllib2.urlopen in your program, until you either exit or call install_opener again, will use your opener.
Whether that's "dangerous" depends on your use case. If there are other parts of your code that are using urllib2.open and you don't want them mocked, then yes, this is a bad idea, because they will be mocked.
In that case, you'll have to get the to-be-mocked code to call my_opener.open instead of urllib2.open. If you design your code to be tested, this should be easy. If you need to monkeypatch code after the fact, it's a little trickier, but there are all kinds of possibilities. For example, if you want to mock all calls in a given module, just replace foomodule.urllib2 = my_opener and set my_opener.urlopen = my_opener.open.
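For example, a minimal sketch of that last approach; foomodule, FakeOpener and fetch_status are hypothetical names standing in for your own code:

import io
import unittest

import foomodule    # hypothetical module that calls urllib2.urlopen() internally

class FakeOpener(object):
    # Returns a canned response instead of hitting the network.
    def open(self, url, data=None, timeout=None):
        return io.BytesIO(b"canned response body")

class FooModuleTest(unittest.TestCase):
    def setUp(self):
        fake = FakeOpener()
        fake.urlopen = fake.open                 # so foomodule.urllib2.urlopen() works
        self._real_urllib2 = foomodule.urllib2   # remember the real module
        foomodule.urllib2 = fake                 # patch only this module's reference
        self.addCleanup(self.restore)

    def restore(self):
        foomodule.urllib2 = self._real_urllib2

    def test_fetch_status(self):
        # fetch_status() is assumed to return the raw body it reads via urlopen()
        self.assertEqual(foomodule.fetch_status(), b"canned response body")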
I'm currently writing a set of unit tests for a Python microblogging library, and following advice received here have begun to use mock objects to return data as if from the service (identi.ca in this case).
However, surely by mocking httplib2 - the module I am using to request data - I am tying the unit tests to a specific implementation of my library, and removing their ability to keep functioning after refactoring (which is obviously one primary benefit of unit testing in the first place).
Is there a best of both worlds scenario? The only one I can think of is to set up a microblogging server to use only for testing, but this would clearly be a large amount of work.
You are right that if you refactor your library to use something other than httplib2, then your unit tests will break. That isn't such a horrible dependency, since when that time comes it will be a simple matter to change your tests to mock out the new library.
If you want to avoid that, then write a very minimal wrapper around httplib2, and have your tests mock that. Then, if you ever shift away from httplib2, you only have to change your wrapper. But notice that the number of lines you have to change is the same either way; all that changes is whether they are in "test code" or "non-test code".
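A minimal sketch of what such a wrapper might look like, assuming httplib2's Http.request() interface; HttpTransport and FakeTransport are names I made up:

import httplib2

class HttpTransport(object):
    # The only place in the library that touches httplib2 directly.
    def request(self, url, method="GET", body=None, headers=None):
        response, content = httplib2.Http().request(
            url, method, body=body, headers=headers)
        return response.status, content

# In the tests, the library is handed a fake transport instead,
# so moving away from httplib2 later only affects HttpTransport:
class FakeTransport(object):
    def request(self, url, method="GET", body=None, headers=None):
        return 200, b'{"text": "canned microblog post"}'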
I'm not sure what the problem is. The mock class is part of the tests, conceptually at least. It is OK for the tests to depend on particular behaviour of the mock objects that they inject into the code being tested. Of course, the injection itself should be shared across unit tests, so that it is easy to change the mock implementation.