Execute system state reset upon unittest assertion fail - python

I'm a bit confused about unittest. I have an embedded system that I'm testing from the outside with Python.
The issue is that after each test passes I need to reset the system state. However, if a test fails it can leave the system in an arbitrary state that I need to reset. At the end of each test I go back to the initial state, but if an assertion fails that reset code is skipped.
Therefore, what's the proper way to handle this situation? Some ideas I have are:
Wrap each test in a try/except/finally, but that doesn't seem right (unittest already handles test exceptions).
Put each test in a different class and invoke the tearDown() method at the end of it.
Call initSystemState() at the beginning of each test to go back to the initial state (but it is slower than resetting only what needs to be reset at the end of the test).
Any suggestions? Ideally, if I have a testSpeed() test there would be a testSpeedOnStop() function called at the end. Perhaps unittest is not the right tool for this job, since all the functions have side effects and work together, so maybe I should lean more towards integration-testing libraries, which I haven't explored.

Setting up state is done in the setUp(self) method of the test class.
This method is automatically called prior to each test and provides a fresh state for each test.
After each test, the tearDown method is run, even when the test itself fails, so it can clean up the remnants of failed tests.
You can also write setUpClass / tearDownClass methods to be executed once before and after all the tests in a class; more elaborate tests may require stub or mock objects to be built.
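A minimal sketch of that pattern, reusing initSystemState() from the question and a hypothetical resetSystemState() helper for the cheaper partial reset:

import unittest

class SpeedTest(unittest.TestCase):
    def setUp(self):
        # runs before every test and guarantees a known starting state
        initSystemState()          # helper from the question

    def tearDown(self):
        # runs after every test, pass or fail, so the reset is
        # no longer skipped when an assertion fails mid-test
        resetSystemState()         # hypothetical partial-reset helper

    def testSpeed(self):
        ...

self.addCleanup() is a related option: cleanups registered during setUp or the test body still run even when setUp itself fails partway through.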

Related

Does Python's unittest have a global setUp for an entire TestSuite?

I'm a bit new to Python's unittest library and I'm currently looking at setting up my Flask server before running any of my integration tests. I know that the unittest.TestCase class allows you to use setUp() before every test case in the class. I also know that the same class has another method called setUpClass() that runs only once for the entire class.
What I'm actually interested is trying to figure out how to do something similar like setUpClass(), but done on an entire unittest.TestSuite. However, I'm having no luck at it.
Sure, I could set up the server for every TestCase, but I would like to avoid doing this.
There is an answer on a separate question that suggests that by overriding unittest.TestResult's startTestRun(), you could have a set-up function that covers the entire test suite. However, I've tried passing the custom TestResult object into unittest.TextTestRunner with no success.
So, how exactly can I do a set up for an entire test suite?
This is not well documented, but I recently needed to do this as well.
The docs mention that TestResult.startTestRun is "Called once before any tests are executed."
As you can see in the implementation, the method doesn't do anything.
I tried subclassing TestResult and all kinds of things. I couldn't make any of that work, so I ended up monkey patching the class.
In the __init__.py of my test package, I did the following:
import unittest

OLD_TEST_RUN = unittest.result.TestResult.startTestRun

def startTestRun(self):
    # whatever custom code you want to run exactly once before
    # any of your tests runs goes here:
    ...

    # just in case future versions do something in this method,
    # we'll call the existing method
    OLD_TEST_RUN(self)

unittest.result.TestResult.startTestRun = startTestRun
There is also a stopTestRun method if you need to run cleanup code after all tests have run.
Note that this does not make a separate version of TestResult. The existing one is used by the unittest module as usual. The only thing we've done is surgically graft on our custom implementation of startTestRun.
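The same grafting works for stopTestRun; a sketch following the same pattern:

OLD_STOP_RUN = unittest.result.TestResult.stopTestRun

def stopTestRun(self):
    # call the existing method first, then run clean-up code
    # exactly once after all tests have finished
    OLD_STOP_RUN(self)
    ...

unittest.result.TestResult.stopTestRun = stopTestRun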

Finding out which particular tests call some functions?

I have some pretty fragile code that I want to refactor. It's not very easy to unit test by itself because it interacts with database queries and Django form data.
That in itself is not a big deal. I already have extensive tests that, among other things, end up calling this function and check that results are as expected. But my full test suite takes about 5 minutes and I also don't want to have to fix other outstanding issues while working on this.
What I'd like to do is to run nosetests or nose2 on all my tests, track all test_xxx.py files that called the function of interest and then limit my testing during the refactoring to only that subset of test files.
I plan to use inspect.stack() to do this but was wondering if there is an existing plugin or if someone has done it before. If not, I intend to post whatever I come up with and maybe that will be of use later.
You can simply raise an exception in the function and do one run. All the tests that fail call your function.
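A minimal sketch of that trick, with fragile_function standing in for the function you want to refactor (hypothetical name):

def fragile_function(form_data):
    # temporarily fail loudly so every test that reaches this
    # function shows up in the test report; remove after one run
    raise RuntimeError("fragile_function was called")
    # ...original body below is now unreachable...

Every test file that reports a failure with this RuntimeError in its traceback calls the function, and that subset is what you keep re-running during the refactoring.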

Validating set up and tear down before and after running tests in pytest

I have some resource creation and deletion code that needs to run before and after certain tests, which I've put into a fixture using yield in the usual way. However, before running the tests, I want to verify that the resource creation has happened correctly, and likewise after the deletion, I want to verify that it has happened. I can easily stick asserts into the fixtures themselves, but I'm not sure this is good pytest practice, and I'm concerned that it will make debugging and interpreting the logs harder. Is there a better or canonical way to do validation in pytest?
I had encountered something like this recently - although, I was using unittest instead of pytest.
What I ended up doing was something similar to a method-level setup/teardown. That way, future test functions would never be affected by past test functions.
For my use-case, I loaded my test fixtures in this setup function, then ran a couple of basic checks against those fixtures to ensure their validity (as part of the setup itself). This, I realized, added a bit of time to each test in the class, but ensured that all the fixture data was exactly what I expected it to be (we were loading stuff into a dockerized Elasticsearch container). I guess the time tests take to run is something you can make a judgement call about.
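A minimal sketch of that pattern in unittest terms, with load_fixtures(), delete_fixtures() and count_fixtures() as hypothetical helpers:

import unittest

class SearchTest(unittest.TestCase):
    def setUp(self):
        # create the resources, then validate them before any test runs
        self.docs = load_fixtures()                # hypothetical helper
        assert len(self.docs) > 0, "fixture load failed"

    def tearDown(self):
        # delete the resources, then validate the deletion
        delete_fixtures()                          # hypothetical helper
        assert count_fixtures() == 0, "teardown left data behind"

In pytest the equivalent checks can sit around the yield inside the fixture itself; an assertion that fails before the yield is reported as a test error rather than a test failure, which keeps setup problems distinguishable in the logs.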

Initializing MEDIA_ROOT before each Django Test

I want my Django tests to create and modify media files. So, much like Django tests do with databases, I want to set up an empty MEDIA_ROOT folder before each test is run.
I figured I'll create a temporary folder and point MEDIA_ROOT to it. However, I can't figure out where to put the code that does this. In this example, a special Runner is created. The runner sets up the media root and tears it down.
Unfortunately, setup_test_environment is called once before the first test function is run, and not every time a test is run.
I tried creating a FileSystemTestCase class that sets up the file system in its setUp function, and had all my test cases derive from it. While this works, it requires every person who writes a test case to remember to call my setUp method, as it's not called automatically.
Usually I wouldn't bother with this, but the cost of forgetting to call the parent setUp method can be very high - if someone forgets the call and the tests are accidentally run on a live system, bad things will happen.
EDIT: The temporary solution I've found was to implement both my own runner and a base TestCase. Both set up a temporary MEDIA_ROOT, so if someone forgets to call my setUp method, the test will run in the temporary folder of the previous test, or the one set up by the runner. This can cause tests to fail, but will not ruin live data.
I'm hoping for a more elegant solution.
It seems to me that you're trying to address two separate issues:
Allow tests to run independently (with regard to MEDIA_ROOT) when testers do the right thing (i.e. inherit from your test class and call your setUp()).
Keep testers from messing up real data when they accidentally do the wrong thing.
Given that, I think a two-pronged approach makes sense. Your setUp() solves problem 1. Setting up MEDIA_ROOT in the test runner, though, hides the fact that your testers have done the wrong thing. Instead I would just focus on protecting the data: for example, you could set MEDIA_ROOT to None. That would shield the real data in MEDIA_ROOT; make it more likely that the tester will see an error if they don't use your setUp(); and reduce code duplication.
A more robust approach would be to write your own test runner that does the setup before each test (modeled after the way Django handles the database), but that may be overkill for your needs.
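For the base-class half of that approach, a minimal sketch using a temporary directory and Django's override_settings (FileSystemTestCase is the class name from the question):

import shutil
import tempfile

from django.test import TestCase, override_settings

class FileSystemTestCase(TestCase):
    def setUp(self):
        super().setUp()
        # fresh, empty MEDIA_ROOT for every test
        self._media_root = tempfile.mkdtemp()
        self._override = override_settings(MEDIA_ROOT=self._media_root)
        self._override.enable()
        # cleanups run even if the test body fails
        self.addCleanup(self._override.disable)
        self.addCleanup(shutil.rmtree, self._media_root, ignore_errors=True)

Combined with MEDIA_ROOT = None in the test settings, a subclass that forgets to call super().setUp() fails fast instead of touching live data.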

Explain the "setUp" and "tearDown" Python methods used in test cases

Can anyone explain the use of Python's setUp and tearDown methods while writing test cases, apart from the fact that setUp is called immediately before each test method and tearDown is called immediately after it?
In general you add all prerequisite steps to setUp and all clean-up steps to tearDown.
When a setUp() method is defined, the test runner will run that method prior to each test. Likewise, if a tearDown() method is defined, the test runner will invoke that method after each test.
For example, you may have a test that requires items to exist, or a certain state, so you put these actions (creating object instances, initializing the db, preparing rules and so on) into setUp.
Also, as you know, each test should end where it started; this means that we have to restore the app to its initial state, e.g. close files and connections, remove newly created items, roll back transactions and so on. All these steps belong in tearDown.
So the idea is that the test itself should contain only the actions to be performed on the test object to get the result, while setUp and tearDown help you keep your test code clean and flexible.
You can create a setUp and tearDown for a bunch of tests and define them in a parent class, making it easy to maintain such tests and to update the common preparations and clean-ups (see the sketch below).
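A minimal sketch of the parent-class idea, with connect_to_test_db() as a hypothetical helper:

import unittest

class DatabaseTestBase(unittest.TestCase):
    # shared preparation and clean-up, inherited by every subclass
    def setUp(self):
        self.connection = connect_to_test_db()   # hypothetical helper

    def tearDown(self):
        self.connection.close()

class QueryTest(DatabaseTestBase):
    def test_select_one(self):
        self.assertTrue(self.connection.execute("SELECT 1"))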
You can use these to factor out code common to all tests in the test suite.
If you have a lot of repeated code in your tests, you can make them shorter by moving this code to setUp/tearDown.
You might use this for creating test data (e.g. setting up fakes/mocks), or stubbing out functions with fakes.
If you're doing integration testing, you can check environmental pre-conditions in setUp and skip the test if something isn't set up properly (see the skip sketch after the example below).
For example:
class TurretTest(unittest.TestCase):

    def setUp(self):
        self.turret_factory = TurretFactory()
        self.turret = self.turret_factory.CreateTurret()

    def test_turret_is_on_by_default(self):
        self.assertEqual(True, self.turret.is_on())

    def test_turret_turns_can_be_turned_off(self):
        self.turret.turn_off()
        self.assertEqual(False, self.turret.is_on())
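And the skip-on-pre-condition idea mentioned above, as a sketch with a hypothetical server_is_reachable() check:

class IntegrationTest(unittest.TestCase):
    def setUp(self):
        # skip rather than fail when the environment isn't ready
        if not server_is_reachable():   # hypothetical pre-condition check
            self.skipTest("integration server unavailable")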
Suppose you have a suite with 10 tests. 8 of the tests share the same setup/teardown code. The other 2 don't.
setUp and tearDown give you a nice way to refactor those 8 tests. Now what do you do with the other 2 tests? You'd move them to another test case/suite. So using setUp and tearDown also gives you a natural way to break the tests into cases/suites.
