Explain the "setUp" and "tearDown" Python methods used in test cases - python

Can anyone explain the use of Python's setUp and tearDown methods when writing test cases, beyond the fact that setUp is called immediately before each test method and tearDown is called immediately after it has run?

In general you add all prerequisite steps to setUp and all clean-up steps to tearDown.
When a setUp() method is defined, the test runner will run that method
prior to each test. Likewise, if a tearDown() method is defined, the
test runner will invoke that method after each test.
For example, you might have a test that requires certain items or state to exist, so you put those actions (creating object instances, initializing a database, preparing rules, and so on) into setUp.
Also, as you know, each test should leave the system in the state it started in. That means restoring the app to its initial state: closing files and connections, removing newly created items, rolling back transactions, and so on. All of these steps belong in tearDown.
So the idea is that the test itself should contain only the actions performed on the object under test to get the result, while setUp and tearDown are there to keep your test code clean and flexible.
You can define a shared setUp and tearDown for a group of tests in a parent class, which makes it easy to maintain those tests and to update the common preparation and clean-up in one place.
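A minimal sketch of that parent-class pattern (the class and attribute names here are made up for illustration):

```python
import unittest

class DatabaseTestBase(unittest.TestCase):
    """Shared fixture logic inherited by several test classes."""

    def setUp(self):
        # Common preparation: runs before each test in every subclass.
        self.records = []

    def tearDown(self):
        # Common clean-up: runs after each test, even if it failed.
        self.records.clear()

class InsertTests(DatabaseTestBase):
    def test_starts_empty(self):
        self.assertEqual(self.records, [])

    def test_can_insert(self):
        self.records.append("row")
        self.assertEqual(len(self.records), 1)
```

Any number of test classes can inherit from the base class, and a change to the shared preparation is made in exactly one place.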

You can use these to factor out code common to all tests in the test suite.
If you have a lot of repeated code in your tests, you can make them shorter by moving this code to setUp/tearDown.
You might use this for creating test data (e.g. setting up fakes/mocks), or stubbing out functions with fakes.
If you're doing integration testing, you can check environmental pre-conditions in setUp and skip the test if something isn't set up properly.
For example:
class TurretTest(unittest.TestCase):
    def setUp(self):
        self.turret_factory = TurretFactory()
        self.turret = self.turret_factory.CreateTurret()

    def test_turret_is_on_by_default(self):
        self.assertEqual(True, self.turret.is_on())

    def test_turret_can_be_turned_off(self):
        self.turret.turn_off()
        self.assertEqual(False, self.turret.is_on())
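The integration-test precondition check mentioned above could be sketched like this (SERVICE_URL is a hypothetical environment variable, used only for illustration):

```python
import os
import unittest

class IntegrationTest(unittest.TestCase):
    def setUp(self):
        # Skip (rather than fail) when the environment isn't prepared.
        if "SERVICE_URL" not in os.environ:
            self.skipTest("SERVICE_URL not configured; skipping integration test")
        self.service_url = os.environ["SERVICE_URL"]

    def test_service_is_configured(self):
        self.assertTrue(self.service_url)
```

A skip raised in setUp is reported distinctly from a failure, so an unconfigured environment doesn't turn the build red.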

Suppose you have a suite with 10 tests; 8 of them share the same setup/teardown code, and the other 2 don't.
setUp and tearDown give you a nice way to refactor those 8 tests. What do you do with the other 2? You'd move them to another test case/suite. So using setUp and tearDown also gives you a natural way to break tests into cases and suites.

Related

session scope with pytest-mock

I'm looking for an example of how to use the session-scoped session_mocker fixture of the pytest-mock plugin.
It's fairly clear how to modify the example the docs provide to use it in a particular test:
def test_foo(session_mocker):
session_mocker.patch('os.remove')
etc...
But I'm baffled as to where and how this global fixture should be initialized. Say, for example, that I want to mock os.remove for ALL of my tests. Do I set this up in conftest.py and, if so, how do I do it?
You use it in a fixture with a scope of session. The best place to put it would be conftest.py, mainly to make it obvious to other programmers that this fixture exists and what it might be doing. That's important because this fixture will affect other tests that might not necessarily know about it or even want it.
I wouldn't recommend mocking something for the duration of a session. Tests, classes or even modules, yes. But not sessions.
For instance, the following test test_normal passes or fails depending on whether test_mocked was run in the same session or not. Since they're in the same "file", it's much easier to spot the problem. But these tests could be located in different test files, that do not appear related, and yet if both tests were run in the same session then the problem would occur.
import pytest

# could be in conftest.py
@pytest.fixture(scope='session')
def myfixture(session_mocker):
    session_mocker.patch('sys.mymock', create=True)

def test_mocked(myfixture):
    import sys
    assert hasattr(sys, 'mymock')

def test_normal():
    import sys
    assert not hasattr(sys, 'mymock')
Instead, just create a fixture that is scoped for test, class or module, and include it directly in the test file. That way the behaviour is contained to just the set of tests that need it. Mocks are cheap to create, so having the mock recreated for every test is no big deal. It may even be beneficial, as the mock will be reset for each test.
Save session fixtures for things that are expensive to set up and that either have no state or whose state the tests do not change (e.g. a database used as a read-only template from which each test creates a fresh database to run against).
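A narrowly scoped version of the same idea can be sketched with a function-scoped fixture; plain unittest.mock is used here instead of pytest-mock so the sketch is self-contained, and the helper is split out only so the patching logic is visible on its own:

```python
import pytest
from unittest import mock

def patched_sys():
    # Returns a patcher; the with-block guarantees the patch is
    # reverted when the block is exited.
    return mock.patch("sys.mymock", create=True)

@pytest.fixture
def mymock_fixture():
    # Function scope (the default): a fresh mock per test, undone
    # as soon as that test finishes.
    with patched_sys():
        yield

def test_mocked(mymock_fixture):
    import sys
    assert hasattr(sys, "mymock")

def test_unaffected():
    # The previous test's patch has already been undone.
    import sys
    assert not hasattr(sys, "mymock")
```

Because the mock is recreated per test, it is also reset per test, which avoids the cross-test leakage shown above.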

Does Python's unittest have a global setUp for an entire TestSuite?

I'm a bit new to Python's unittest library and I'm currently looking at setting up my Flask server before running any of my integration tests. I know that the unittest.TestCase class allows you to use setUp() before every test case in the class. I also know that the same class has another method called setUpClass() that runs only once for the entire class.
What I'm actually interested is trying to figure out how to do something similar like setUpClass(), but done on an entire unittest.TestSuite. However, I'm having no luck at it.
Sure, I could set up the server for every TestCase, but I would like to avoid doing this.
There is an answer on a separate question suggesting that by overriding unittest.TestResult's startTestRun(), you could have a set-up function that covers the entire test suite. However, I've tried passing the custom TestResult object into unittest.TextTestRunner with no success.
So, how exactly can I do a set up for an entire test suite?
This is not well documented, but I recently needed to do this as well.
The docs mention that TestResult.startTestRun is "Called once before any tests are executed."
As you can see, in the implementation, the method doesn't do anything.
I tried subclassing TestResult and all kinds of things. I couldn't make any of that work, so I ended up monkey patching the class.
In the __init__.py of my test package, I did the following:
import unittest

OLD_TEST_RUN = unittest.result.TestResult.startTestRun

def startTestRun(self):
    # whatever custom code you want to run exactly once before
    # any of your tests runs goes here:
    ...
    # just in case future versions do something in this method
    # we'll call the existing method
    OLD_TEST_RUN(self)

unittest.result.TestResult.startTestRun = startTestRun
There is also a stopTestRun method if you need to run cleanup code after all tests have run.
Note that this does not create a separate version of TestResult; the existing one is used by the unittest module as usual. The only thing we've done is surgically graft on our custom implementation of startTestRun.

pytest: Reset mocks between individual files

EDIT: While any testing recommendations are truly appreciated, I'm wondering specifically if there's a way that pytest can enforce the isolation for me, rather than relying on myself always remembering to clean up mocks.
Does pytest support "resetting" process state between individual Python files, or otherwise isolating test files from one another?
Our CI used to invoke pytest on individual files, like this:
pytest file1.py
pytest file2.py
...
It now invokes pytest once for all files, like this:
pytest file1.py file2.py ...
We ran into trouble when some test files (e.g. file1.py) performed module-level mocks. For example (simplified):
def setup_module():
    patch("path.to.some.module.var", Mock()).start()
(There is no corresponding teardown_module.)
This works fine when pytest is called on files one-at-a-time. But when it runs against multiple files, any mocks made by previously executed test code persist into subsequent test code. Is there any way to "reset" this state between files? E.g., does pytest support a way to invoke each file's tests in a separate process? (Looked at the pytest docs but found nothing like this.)
For now, we're adding teardown_module() functions that call patch.stopall(), but it would be nice to have the added security of pytest implicitly isolating our test files from one another.
The standard way to do this is probably to use a fixture with a context manager instead of the setup/teardown construct:
import pytest
from unittest import mock

@pytest.fixture(scope="module", autouse=True)
def patch_var():
    with mock.patch("path.to.some.module.var"):
        yield
This will end the patching after the test goes out of module scope.
It has the same effect as the less convenient:
@pytest.fixture(scope="module", autouse=True)
def patch_var():
    patcher = mock.patch("path.to.some.module.var")
    patcher.start()
    yield
    patcher.stop()
Note that you are responsible for stopping the patching yourself if you use start on the constructed patcher object. It is always safer to use the context manager, or the patch decorator if applied to a single test.
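For a single test, the decorator form handles the cleanup automatically; a minimal sketch (again with os.remove as a stand-in target):

```python
from unittest import mock

@mock.patch("os.remove")
def test_delete_calls_remove(mock_remove):
    # Inside the test, os.remove is a MagicMock; the patch is
    # reverted as soon as the function returns, pass or fail.
    import os
    os.remove("/tmp/some-file")
    mock_remove.assert_called_once_with("/tmp/some-file")
```

No teardown code is needed: the decorator undoes the patch on every exit path, including assertion failures.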
UPDATE:
As far as I know, there is no way to unconditionally isolate the test modules completely from one another if executed in the same test run. This is what the concept of fixture scopes is for.
A fixture shall always be written with automatic cleanup. For patching, you use the context manager (as shown above) which does the cleanup for you, other things you have to cleanup yourself after the yield. If you want to have global changes over the whole test run, you use session-scoped fixtures. If you want to isolate test modules, you use module-scoped fixtures, and for test class or single test isolation you use class- or function-scoped fixtures.

Execute system state reset upon unittest assertion fail

I'm a bit confused regarding unittests. I have an embedded system I am testing from outside with Python.
The issue is that after each test passes I need to reset the system state. However, if a test fails it can leave the system in an arbitrary state that I also need to reset. I currently return to the initial state at the end of each test, but if an assertion fails that part is skipped.
Therefore, what's the proper way to handle this situation? Some ideas I have are:
Put each test in a try/except/finally block, but that doesn't seem right (unittest already handles test exceptions).
Put each test in a different class and invoke the tearDown() method at the end of it.
Call initSystemState() at the beginning of each test to go back to the initial state (but this is slower than resetting only what needs to be reset at the end of each test).
Any suggestions? Ideally, if I have a testSpeed() test there should be a testSpeedOnStop() function called at the end. Perhaps unittest is not the right tool for this job, since all the functions have side effects and work together, so maybe I should lean more towards integration-testing libraries, which I haven't explored.
Setting up state is done in the setUp(self) method of the test class.
This method is automatically called before each test and provides a fresh state for every test.
After each test, the tearDown method runs, even for tests that failed, so it can clean up any remnants they leave behind.
You can also define setUpClass / tearDownClass methods to run once before and after all the tests in a class; more elaborate tests may require stub or mock objects to be built.

Validating set up and tear down before and after running tests in pytest

I have some resource creation and deletion code that needs to run before and after certain tests, which I've put into a fixture using yield in the usual way. However, before running the tests, I want to verify that the resource creation has happened correctly, and likewise after the deletion, I want to verify that it has happened. I can easily stick asserts into the fixtures themselves, but I'm not sure this is good pytest practice, and I'm concerned that it will make debugging and interpreting the logs harder. Is there a better or canonical way to do validation in pytest?
I had encountered something like this recently - although, I was using unittest instead of pytest.
What I ended up doing was something similar to a method level setup/teardown. That way, future test functions would never be affected by past test functions.
For my use case, I loaded my test fixtures in this setup function, then ran a couple of basic checks against those fixtures to ensure their validity (as part of the setup itself). This, I realized, added a bit of time to each test in the class, but ensured that all the fixture data was exactly what I expected (we were loading data into a Dockerized Elasticsearch container). I guess the extra run time is something you can make a judgement call about.
