EDIT: While any testing recommendations are truly appreciated, I'm wondering specifically if there's a way that pytest can enforce the isolation for me, rather than relying on me always remembering to clean up mocks.
Does pytest support "resetting" process state between individual Python files, or otherwise isolating test files from one another?
Our CI used to invoke pytest on individual files, like this:
pytest file1.py
pytest file2.py
...
It now invokes pytest once for all files, like this:
pytest file1.py file2.py ...
We ran into trouble when some test files (e.g. file1.py) performed module-level mocks. For example (simplified):
def setup_module():
    patch("path.to.some.module.var", Mock()).start()
(There is no corresponding teardown_module.)
This works fine when pytest is called on files one-at-a-time. But when it runs against multiple files, any mocks made by previously executed test code persist into subsequent test code. Is there any way to "reset" this state between files? E.g., does pytest support a way to invoke each file's tests in a separate process? (Looked at the pytest docs but found nothing like this.)
For now, we're adding teardown_module() functions that call patch.stopall(), but it would be nice to have the added security of pytest implicitly isolating our test files from one another.
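For reference, a minimal sketch of that stopgap, assuming the same patch target as in the example above:

from unittest.mock import patch, Mock

def setup_module():
    patch("path.to.some.module.var", Mock()).start()

def teardown_module():
    # stop all patches that were started with .start()
    patch.stopall()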
The standard way to do this is probably to use a fixture with a context manager instead of the setup/teardown construct:
import pytest
from unittest import mock

@pytest.fixture(scope="module", autouse=True)
def patch_var():
    with mock.patch("path.to.some.module.var"):
        yield
This will end the patching once the module scope is left, i.e. after the last test in the module has run.
It has the same effect as the less convenient:
@pytest.fixture(scope="module", autouse=True)
def patch_var():
    patcher = mock.patch("path.to.some.module.var")
    patcher.start()
    yield
    patcher.stop()
Note that you are responsible for stopping the patching yourself if you use start() on the constructed patcher object. It is always safer to use the context manager, or the patch decorator if it applies to a single test.
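For a single test, the decorator form would look something like this (a sketch reusing the patch target from above; the test name is illustrative, and mock.patch passes the created mock in as an argument):

from unittest import mock

@mock.patch("path.to.some.module.var")
def test_something(mock_var):
    # the patch is active only for the duration of this test
    # and is undone automatically afterwards
    ...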
UPDATE:
As far as I know, there is no way to unconditionally isolate the test modules completely from one another if executed in the same test run. This is what the concept of fixture scopes is for.
A fixture should always be written with automatic cleanup. For patching, you use the context manager (as shown above), which does the cleanup for you; other things you have to clean up yourself after the yield. If you want changes that are global over the whole test run, you use session-scoped fixtures. If you want to isolate test modules, you use module-scoped fixtures, and for test-class or single-test isolation you use class- or function-scoped fixtures.
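For example, a module-scoped fixture for a non-patch resource would do its own cleanup after the yield (a sketch with hypothetical names):

import sqlite3
import pytest

@pytest.fixture(scope="module", autouse=True)
def db_connection():
    # set up once per test module
    conn = sqlite3.connect(":memory:")
    yield conn
    # cleaned up after the last test in the module
    conn.close()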
Related
I'm looking for an example of how to use the session-scoped session_mocker fixture of the pytest-mock plugin.
It's fairly clear how to modify the example the docs provide to use it in a particular test:
def test_foo(session_mocker):
    session_mocker.patch('os.remove')
    # etc...
But I'm baffled as to where and how this global fixture should be initialized. Say, for example, that I want to mock os.remove for ALL of my tests. Do I set this up in conftest.py and, if so, how do I do it?
You use it in a fixture with a scope of session. The best place to put it would be conftest.py. That's mainly to make it obvious to other programmers that this fixture exists and what it might be doing. That's important because this fixture will affect other tests that might not necessarily know about it or even want it.
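For example, to mock os.remove for all tests as the question describes, conftest.py could contain a sketch like this (autouse is an assumption here, so that no test has to request the fixture explicitly):

# conftest.py
import pytest

@pytest.fixture(scope="session", autouse=True)
def mock_os_remove(session_mocker):
    return session_mocker.patch("os.remove")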
I wouldn't recommend mocking something for the duration of a session. Tests, classes or even modules, yes. But not sessions.
For instance, the following test test_normal passes or fails depending on whether test_mocked was run in the same session or not. Since they're in the same "file", it's much easier to spot the problem. But these tests could be located in different test files, that do not appear related, and yet if both tests were run in the same session then the problem would occur.
import pytest

# could be in conftest.py
@pytest.fixture(scope='session')
def myfixture(session_mocker):
    session_mocker.patch('sys.mymock', create=True)

def test_mocked(myfixture):
    import sys
    assert hasattr(sys, 'mymock')

def test_normal():
    import sys
    assert not hasattr(sys, 'mymock')
Instead, just create a fixture that is scoped for test, class or module, and include it directly in the test file. That way the behaviour is contained to just the set of tests that need it. Mocks are cheap to create, so having the mock recreated for every test is no big deal. It may even be beneficial, as the mock will be reset for each test.
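For example, a function-scoped fixture kept directly in the test file, using pytest-mock's mocker fixture (names are illustrative):

import os
import pytest

@pytest.fixture
def mocked_remove(mocker):
    # recreated and automatically undone for every test that requests it
    return mocker.patch("os.remove")

def test_remove_is_called(mocked_remove):
    os.remove("some_file.txt")
    mocked_remove.assert_called_once_with("some_file.txt")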
Save session fixtures for things that are expensive to set up and that have no state, or whose state the tests do not change (e.g. a database that is used as a template to create a fresh database for each test to run against).
I'm a bit new to Python's unittest library and I'm currently looking at setting up my Flask server before running any of my integration tests. I know that the unittest.TestCase class allows you to use setUp() before every test case in the class. I also know that the same class has another method called setUpClass() that runs only once for the entire class.
What I'm actually interested is trying to figure out how to do something similar like setUpClass(), but done on an entire unittest.TestSuite. However, I'm having no luck at it.
Sure, I could set up the server for every TestCase, but I would like to avoid doing this.
There is an answer to a separate question that suggests that by overriding unittest.TestResult's startTestRun(), you could have a set-up function that covers the entire test suite. However, I've tried passing the custom TestResult object into unittest.TextTestRunner with no success.
So, how exactly can I do a set up for an entire test suite?
This is not well documented, but I recently needed to do this as well.
The docs mention that TestResult.startTestRun is "Called once before any tests are executed."
As you can see, in the implementation, the method doesn't do anything.
I tried subclassing TestResult and all kinds of things. I couldn't make any of that work, so I ended up monkey patching the class.
In the __init__.py of my test package, I did the following:
import unittest

OLD_TEST_RUN = unittest.result.TestResult.startTestRun

def startTestRun(self):
    # whatever custom code you want to run exactly once before
    # any of your tests runs goes here:
    ...

    # just in case future versions do something in this method
    # we'll call the existing method
    OLD_TEST_RUN(self)

unittest.result.TestResult.startTestRun = startTestRun
There is also a stopTestRun method if you need to run cleanup code after all tests have run.
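A sketch of the same trick applied to stopTestRun, mirroring the code above:

OLD_STOP_RUN = unittest.result.TestResult.stopTestRun

def stopTestRun(self):
    OLD_STOP_RUN(self)
    # cleanup code that should run exactly once after all tests goes here:
    ...

unittest.result.TestResult.stopTestRun = stopTestRun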
Note that this does not make a separate version of TestResult. The existing one is used by the unittest module as usual. The only thing we've done is surgically graft on our custom implementation of startTestRun.
Say I have a file fixtures.py that defines a simple py.test fixture called foobar.
Normally I would have to import that fixture to use it (including all of the sub-fixtures), like this:
from fixtures import foobar

def test_bazinga(foobar):
    ...
Note that I also don't want to use a star import.
How do I import this fixture so that I can just write:
import fixtures

def test_bazinga(foobar):
    ...
Is this even possible? It seems like it, because py.test itself defines exactly such fixtures (e.g. monkeypatch).
Fixtures and their visibility are a bit odd in pytest. They don't require importing, but if you defined them in a test_*.py file, they'll only be available in that file.
You can however put them in a (project- or subfolder-wide) conftest.py to use them in multiple files.
pytest-internal fixtures are simply defined in a core plugin, and thus available everywhere. In fact, a conftest.py is basically nothing else than a per-directory plugin.
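A sketch, assuming foobar simply returns a value:

# conftest.py
import pytest

@pytest.fixture
def foobar():
    return 42

# test_bazinga.py -- no import of conftest needed
def test_bazinga(foobar):
    assert foobar == 42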
You can also run py.test --fixtures to see where fixtures are coming from.
I am experimenting with pytest and got stuck on some behavior that is not obvious to me. I have a session-scoped fixture and use it like this:
@pytest.mark.usefixtures("myfixt")
class TestWithMyFixt(object):

    @classmethod
    def setup_class(cls):
        ...
When I run the tests, I see that setup_class() is called before the myfixt() fixture. What is the purpose behind such behaviour? To me, it should run after fixture initialization, because it uses the fixture. How can I make setup_class() run after the session fixture has been initialized?
Thanks in advance!
I found this question while looking for similar issue, and here is what I came up with.
You can create fixtures with different scope. Possible scopes are session, module, class and function.
Fixture can be defined as a method in the class.
Fixture can have autouse=True flag.
If we combine these, we will get this nice approach which doesn't use setup_class at all (in py.test, setup_class is considered an obsolete approach):
class TestWithMyFixt(object):

    @pytest.fixture(autouse=True, scope='class')
    def _prepare(self, myfixt):
        ...
With such an implementation, _prepare will run just once before the first test within the class and will be finalized after the last test within the class.
At the time the _prepare fixture starts, myfixt has already been applied, as it is a dependency of _prepare.
Looking at the code behind usefixtures, it appears the fixtures are handled by FixtureManager [1], which works at a per-instance level. Marking a class with usefixtures tells the test framework that each of the callables contained within that scope should use those fixtures, but it does not apply the fixtures at that point. Instead, it waits for each callable to be invoked, at which point it checks all of the managers (including the FixtureManager) for anything it should apply and applies the pieces appropriately. I believe this means that for each test within your class, the fixture is reapplied so that each test starts from a common base point, which is better behavior than the contrary.
Thus, your setup_class is called first because that's the earliest point in the order of operations. It sounds like you should put your setup_class logic into your fixture, which will cause it to run at the time the fixture is applied.
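In other words, something along these lines (a sketch; the fixture body is whatever your setup_class used to do):

@pytest.mark.usefixtures("myfixt")
class TestWithMyFixt(object):

    @pytest.fixture(autouse=True, scope="class")
    def _setup(self, myfixt):
        # myfixt has already been applied here, because it is
        # requested as a dependency of this fixture
        ...  # former setup_class logic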
[1] - https://github.com/pytest-dev/pytest/blob/master/_pytest/python.py#L1628
I know this is an old post, but the answer is pretty simple; maybe it will be useful for others.
You have to call the super(...).setUpClass() method in your current setUpClass().
setUpClass() is called once before the test methods and performs all the setup actions. When you call super(...).setUpClass(), it applies the fixtures.
Note 1: I made some small changes to the code to make it prettier.
Note 2: the method must be named setUpClass; this is mandatory.
from rest_framework.test import APITestCase

class TestWithMyFixt(APITestCase):
    fixtures = ["myfixt"]

    @classmethod
    def setUpClass(cls):
        super(TestWithMyFixt, cls).setUpClass()
        ...
This is my opinion, but maybe the purpose of this behavior is to let the developer choose when they want to load the fixtures.
Can anyone explain the use of Python's setUp and tearDown methods while writing test cases, apart from the fact that setUp is called immediately before each test method and tearDown is called immediately after it?
In general you add all prerequisite steps to setUp and all clean-up steps to tearDown.
You can read more with examples here.
When a setUp() method is defined, the test runner will run that method prior to each test. Likewise, if a tearDown() method is defined, the test runner will invoke that method after each test.
For example, you have a test that requires items to exist, or a certain state, so you put these actions (creating object instances, initializing the db, preparing rules, and so on) into setUp.
Also, as you know, each test should end in the same state it started in. This means we have to restore the app to its initial state: e.g. close files and connections, remove newly created items, roll back transactions, and so on. All of these steps belong in tearDown.
So the idea is that the test itself should contain only the actions to be performed on the object under test to get the result, while setUp and tearDown are methods that help you keep your test code clean and flexible.
You can create a setUp and tearDown for a bunch of tests and define them in a parent class, so it is easy for you to maintain such tests and update the common preparation and cleanup.
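For example, a base class holding the common preparation and cleanup might look like this (a sketch with hypothetical names):

import os
import shutil
import tempfile
import unittest

class BaseTest(unittest.TestCase):
    def setUp(self):
        # common preparation shared by all subclasses
        self.workdir = tempfile.mkdtemp()

    def tearDown(self):
        # common cleanup shared by all subclasses
        shutil.rmtree(self.workdir)

class TestSomething(BaseTest):
    def test_workdir_exists(self):
        self.assertTrue(os.path.isdir(self.workdir))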
If you are looking for an easy example, please use the following link.
You can use these to factor out code common to all tests in the test suite.
If you have a lot of repeated code in your tests, you can make them shorter by moving this code to setUp/tearDown.
You might use this for creating test data (e.g. setting up fakes/mocks), or stubbing out functions with fakes.
If you're doing integration testing, you can check environmental pre-conditions in setUp, and skip the test if something isn't set up properly (see the sketch after the example below).
For example:
class TurretTest(unittest.TestCase):

    def setUp(self):
        self.turret_factory = TurretFactory()
        self.turret = self.turret_factory.CreateTurret()

    def test_turret_is_on_by_default(self):
        self.assertEquals(True, self.turret.is_on())

    def test_turret_turns_can_be_turned_off(self):
        self.turret.turn_off()
        self.assertEquals(False, self.turret.is_on())
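And a sketch of the integration-test pre-condition check mentioned earlier, using a hypothetical environment variable:

import os
import unittest

class IntegrationTest(unittest.TestCase):
    def setUp(self):
        # skip every test in this case when the environment isn't ready
        if "INTEGRATION_DB_URL" not in os.environ:
            self.skipTest("INTEGRATION_DB_URL is not set")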
Suppose you have a suite with 10 tests. 8 of the tests share the same setup/teardown code. The other 2 don't.
setup and teardown give you a nice way to refactor those 8 tests. Now what do you do with the other 2 tests? You'd move them to another test case/suite. So using setup and teardown also helps give a natural way to break the tests into cases/suites.
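Schematically, with placeholder names:

import unittest

class EightSharedSetupTests(unittest.TestCase):
    def setUp(self):
        ...  # setup shared by the 8 tests

    def tearDown(self):
        ...  # teardown shared by the 8 tests

    # the 8 tests that need the shared setup/teardown go here

class TwoOtherTests(unittest.TestCase):
    # the other 2 tests, with no shared setup, go here
    pass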