Run code before and after each test in py.test?

I want to run additional setup and teardown checks before and after each test in my test suite. I've looked at fixtures but I'm not sure whether they are the correct approach. I need to run the setup code prior to each test and the teardown checks after each test.
My use case is checking for code that doesn't clean up correctly: it leaves temporary files behind. In my setup I will check the files, and in the teardown I check them again. If there are extra files I want the test to fail.

py.test fixtures are a technically adequate method to achieve your purpose.
You just need to define a fixture like this:
@pytest.fixture(autouse=True)
def run_around_tests():
    # Code that will run before your test, for example:
    files_before = ...  # do something to check the existing files
    # A test function will be run at this point
    yield
    # Code that will run after your test, for example:
    files_after = ...  # do something to check the existing files
    assert files_before == files_after
Because the fixture is declared with autouse=True, it is invoked automatically for every test function defined in the same module (or, if you put it in a conftest.py, for every test below that directory).
That said, there is one caveat. Asserting at setup/teardown is a controversial practice. I'm under the impression that the py.test main authors do not like it (I do not like it either, so that may colour my own perception), so you might run into some problems or rough edges as you go forward.
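For the question's concrete use case, a minimal sketch could look like this (assuming the stray files show up in the current working directory; adjust the path to whatever your code touches), keeping in mind the caveat above that a failing assert in a fixture is reported as an error rather than a failure:
import os

import pytest

@pytest.fixture(autouse=True)
def no_leftover_files():
    # Snapshot the directory contents before the test runs.
    files_before = set(os.listdir("."))
    yield
    # Any file that appeared during the test fails the check at teardown.
    files_after = set(os.listdir("."))
    assert files_after == files_before, f"leftover files: {files_after - files_before}"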

You can use a fixture in order to achieve what you want.
import pytest

@pytest.fixture(autouse=True)
def run_before_and_after_tests(tmpdir):
    """Fixture to execute asserts before and after a test is run"""
    # Setup: fill with any logic you want
    yield  # this is where the testing happens
    # Teardown: fill with any logic you want
Detailed Explanation
@pytest.fixture(autouse=True), from the docs: "Occasionally, you may want to have fixtures get invoked automatically without declaring a function argument explicitly or a usefixtures decorator." Therefore, this fixture will run every time a test is executed.
# Setup: fill with any logic you want: this logic is executed before every test actually runs. In your case, you can add the assert statements that should be checked before the actual test.
yield: as indicated in the comment, this is where the testing happens.
# Teardown: fill with any logic you want: this logic is executed after every test. It is guaranteed to run regardless of what happens during the tests.
Note: in pytest there is a difference between a failing test and an error while executing a test. A Failure indicates that the test failed in some way.
An Error indicates that you couldn't get to the point of doing a proper test.
Consider the following examples:
Assert fails before test is run -> ERROR
import pytest

@pytest.fixture(autouse=True)
def run_around_tests():
    assert False  # This will generate an error when running tests
    yield
    assert True

def test():
    assert True
Assert fails after test is run -> ERROR
import pytest

@pytest.fixture(autouse=True)
def run_around_tests():
    assert True
    yield
    assert False

def test():
    assert True
Test fails -> FAILED
import pytest

@pytest.fixture(autouse=True)
def run_around_tests():
    assert True
    yield
    assert True

def test():
    assert False
Test passes -> PASSED
import pytest

@pytest.fixture(autouse=True)
def run_around_tests():
    assert True
    yield
    assert True

def test():
    assert True

Fixtures are exactly what you want.
That's what they are designed for.
Whether you use pytest-style fixtures or xUnit-style setup and teardown (at module, class, or method level) depends on the circumstances and personal taste.
From what you are describing, it seems like you could use pytest autouse fixtures, or the xUnit-style function-level setup_function()/teardown_function() shown in the sketch below.
Pytest has you completely covered. So much so that perhaps it's a fire hose of information.
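For reference, a rough sketch of the xUnit-style, function-level variant mentioned above (the test body is a placeholder):
# test_example.py
def setup_function(function):
    print(f"setting up {function.__name__}")

def teardown_function(function):
    print(f"tearing down {function.__name__}")

def test_something():
    assert True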

You can use pytest's module-level setup/teardown fixtures.
Here's the link: http://pytest.org/latest/xunit_setup.html
It works as follows:
def setup_module(module):
    """setup any state specific to the execution of the given module."""

def teardown_module(module):
    """teardown any state that was previously setup with a setup_module method."""

class Test_Class:
    def test_01(self):
        pass  # test 1 code
Note that setup_module is called once before any test in the module runs and teardown_module once after all of them have completed, not around each individual test.
You can include these functions in each test script that needs them.
If you want something that is common to all tests in a directory, you can use package/directory-level fixtures of the nose framework:
http://pythontesting.net/framework/nose/nose-fixture-reference/#package
In the package's __init__.py file you can include the following:
def setup_package():
    '''Set up your environment for test package'''

def teardown_package():
    '''revert the state'''

You may use decorators, but programmatically, so you don't need to put the decorator on each method.
I'm assuming several things in the code below:
The test methods are all named like "testXXX()".
The decorator is added in the same module where the test methods are implemented.
def test1():
    print("Testing hello world")

def test2():
    print("Testing hello world 2")

# This is the decorator
class TestChecker(object):
    def __init__(self, testfn, *args, **kwargs):
        self.testfn = testfn

    def pretest(self):
        print('precheck %s' % str(self.testfn))

    def posttest(self):
        print('postcheck %s' % str(self.testfn))

    def __call__(self):
        self.pretest()
        self.testfn()
        self.posttest()

for fn in dir():
    if fn.startswith('test'):
        locals()[fn] = TestChecker(locals()[fn])
Now if you call the test methods...
test1()
test2()
The output should be something like:
precheck <function test1 at 0x10078cc20>
Testing hello world
postcheck <function test1 at 0x10078cc20>
precheck <function test2 at 0x10078ccb0>
Testing hello world 2
postcheck <function test2 at 0x10078ccb0>
If you have test methods as class methods, the approach is also valid. For instance:
class TestClass(object):
    @classmethod
    def my_test(cls):
        print("Testing from class method")

for fn in dir(TestClass):
    if not fn.startswith('__'):
        setattr(TestClass, fn, TestChecker(getattr(TestClass, fn)))
The call to TestClass.my_test() will print:
precheck <bound method type.my_test of <class '__main__.TestClass'>>
Testing from class method
postcheck <bound method type.my_test of <class '__main__.TestClass'>>

It is an old question, but I personally found another way in the docs:
Use the pytest.ini file:
[pytest]
usefixtures = my_setup_and_tear_down
import pytest

@pytest.fixture
def my_setup_and_tear_down():
    # SETUP
    # Write here the logic that you need for the setUp
    yield  # this statement will let the tests execute
    # TEARDOWN
    # Write here the logic that you need after each test
See the pytest documentation on fixtures for how the yield statement allows the test to run.
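Note that for the ini-level usefixtures option the fixture has to be visible to every test, so it typically lives in a top-level conftest.py; a sketch of that layout (the fixture body is a placeholder):
# conftest.py (project root)
import pytest

@pytest.fixture
def my_setup_and_tear_down():
    # SETUP logic here
    yield  # the test runs here
    # TEARDOWN logic here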

Fixtures by default have scope='function'. So, if you just use a definition such as:
@pytest.fixture
def fixture_func(self):
    ...
it defaults to scope='function', so any finalizers in the fixture function will be called after each test.
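For illustration, here is a small sketch of a function-scoped fixture with an explicit finalizer; the resource is just a stand-in:
import pytest

@pytest.fixture  # scope='function' is the default
def fixture_func(request):
    resource = {"state": "ready"}  # stand-in for whatever you set up
    def finalize():
        resource.clear()  # runs after each test that used the fixture
    request.addfinalizer(finalize)
    return resource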

Related

Automatically wrap / decorate all pytest unit tests

Let's say I have a very simple logging decorator:
from functools import wraps

def my_decorator(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        print(f"{func.__name__} ran with args: {args}, and kwargs: {kwargs}")
        result = func(*args, **kwargs)
        return result
    return wrapper
I can add this decorator to every pytest unit test individually:
@my_decorator
def test_one():
    assert True

@my_decorator
def test_two():
    assert 1
How can I automatically add this decorator to every single pytest unit test so I don't have to add it manually? What if I want to add it to every unit test in a file? Or in a module?
My use case is to wrap every test function with a SQL profiler, so inefficient ORM code raises an error. Using a pytest fixture should work, but I have thousands of tests so it would be nice to apply the wrapper automatically instead of adding the fixture to every single test. Additionally, there may be a module or two I don't want to profile so being able to opt-in or opt-out an entire file or module would be helpful.
Provided you can move the logic into a fixture, as stated in the question, you can just use an auto-use fixture defined in the top-level conftest.py.
To add the possibility to opt out for some tests, you can define a marker that will be added to the tests that should not use the fixture, and check that marker in the fixture, e.g. something like this:
conftest.py
import pytest

def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "no_profiling: mark test to not use sql profiling"
    )

@pytest.fixture(autouse=True)
def sql_profiling(request):
    if not request.node.get_closest_marker("no_profiling"):
        ...  # do the profiling
    yield
test.py
import pytest

def test1():
    pass  # will use profiling

@pytest.mark.no_profiling
def test2():
    pass  # will not use profiling
As pointed out by @hoefling, you could also disable the fixture for a whole module by adding:
pytestmark = pytest.mark.no_profiling
in the module. That will add the marker to all contained tests.
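If the profiling logic from my_decorator can be expressed as a context manager, it folds straight into the autouse fixture; a sketch, where sql_profiler is a hypothetical stand-in for whatever starts and stops the profiler:
# conftest.py
import pytest

from myapp.profiling import sql_profiler  # hypothetical context manager

@pytest.fixture(autouse=True)
def sql_profiling(request):
    if request.node.get_closest_marker("no_profiling"):
        yield  # marked tests run without profiling
        return
    with sql_profiler():  # wraps the test body
        yield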

Pytest getting test Information in classes other than testcase

I am writing a test framework using pytest. Is there a way to get the test case object in classes other than the test case itself, for example utility classes?
I want to print the test case name and some markers for the test in those utility classes. Is this information available in some context manager?
You cannot directly access pytest test properties if you are not inside a test fixture or a hook function, as there is no fixed test case class as in unittest. Your best bet is probably to get this information in a fixture and store it globally for access from a utility function:
import pytest

testinfo = {}

@pytest.fixture(autouse=True)
def test_info(request):
    global testinfo
    testinfo['name'] = request.node.name
    testinfo['markers'] = [m.name for m in request.node.iter_markers()]
    ...
    yield  # the information is stored at test start...
    testinfo = {}  # ... and removed on test teardown

def utility_func():
    if testinfo:
        print(f"Name: {testinfo['name']} Markers: {testinfo['markers']}")
    ...
Or, the same if you use a test class:
class TestSomething:
    def setup_method(self):
        self.testinfo = {}

    @pytest.fixture(autouse=True)
    def info(self, request):
        self.testinfo['name'] = request.node.name
        self.testinfo['markers'] = [m.name for m in
                                    request.node.iter_markers()]
        yield  # the information is stored at test start...
        self.testinfo = {}  # ... and removed on test teardown

    def utility_func(self):
        if self.testinfo:
            print(f"Name: {self.testinfo['name']} Markers:"
                  f" {self.testinfo['markers']}")

    @pytest.mark.test_marker
    def test_something(self):
        self.utility_func()
        assert True
This will show the output:
Name: test_something Markers: ['test_marker']
This will work if you call the utility function during test execution - otherwise no value will be set.
Note however that this will only work reliably if you execute the tests synchronously. If you use pytest-xdist or similar tools for asynchronous test execution, this may not work because the testinfo variable may be overwritten by another test (though that depends on the implementation - it may work if the variables are copied during a test run). In that case you can do the logging directly in the fixture or hook function instead (which may generally be a better idea, depending on your use case).
For more information about available test node properties you can check the documentation of a request node.
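If all you need is to log the name and markers, doing it in a hook in conftest.py avoids the global entirely; a minimal sketch:
# conftest.py
def pytest_runtest_setup(item):
    markers = [m.name for m in item.iter_markers()]
    print(f"starting {item.name} with markers {markers}")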

Pytest reports test skipped with unittest.skip as passed

The test looks something like this:
import unittest

class FooTestCase(unittest.TestCase):
    @unittest.skip
    def test_bar(self):
        self.assertIsNone('not none')
When run using pytest, the report looks something like:
path/to/my/tests/test.py::FooTestCase::test_bar <- ../../../../../usr/lib/python3.5/unittest/case.py PASSED
On the other hand, if I replace @unittest.skip with @pytest.mark.skip, it is properly reported as skipped:
path/to/my/tests/test.py::FooTestCase::test_bar <- ../../../../../usr/lib/python3.5/unittest/case.py SKIPPED
If anyone could say, am I doing something wrong or is that a bug in pytest?
unittest.skip() decorator requires an argument:
@unittest.skip(reason)
Unconditionally skip the decorated test. reason should describe why
the test is being skipped.
Its usage is found in their examples:
class MyTestCase(unittest.TestCase):
    @unittest.skip("demonstrating skipping")
    def test_nothing(self):
        self.fail("shouldn't happen")
Thus unittest.skip is not a decorator by itself, but a decorator factory - the actual decorator is obtained as a result of calling unittest.skip.
This explains why your test passes instead of being skipped or failing, since it is actually equivalent to the following:
import unittest

class FooTestCase(unittest.TestCase):
    def test_bar(self):
        self.assertIsNone('not none')
    test_bar = unittest.skip(test_bar)
    # now test_bar becomes a decorator but is instead invoked by
    # pytest as if it were a unittest method and passes
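For completeness, here is the docs-style usage applied to the test from the question; with a reason supplied, pytest reports the test as skipped rather than passed:
import unittest

class FooTestCase(unittest.TestCase):
    @unittest.skip("demonstrating skipping")
    def test_bar(self):
        self.assertIsNone('not none')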

Get name of current test in setup using nose

I am currently writing some functional tests using nose. The library I am testing manipulates a directory structure.
To get reproducible results, I store a template of a test directory structure and create a copy of it before executing a test (I do that inside the test's setup function). This makes sure that I always have a well-defined state at the beginning of the test.
Now I have two further requirements:
If a test fails, I would like the directory structure it operated on to not be overwritten or deleted, so that I can analyze the problem.
I would like to be able to run multiple tests in parallel.
Both these requirements could be solved by creating a new copy with a different name for each test that is executed. For this reason, I would like to get access to the name of the test that is currently executed in the setup function, so that I can name the copy appropriately. Is there any way to achieve this?
An illustrative code example:
def setup_func(test_name):
    print "Setup of " + test_name

def teardown_func(test_name):
    print "Teardown of " + test_name

@with_setup(setup_func, teardown_func)
def test_one():
    pass

@with_setup(setup_func, teardown_func)
def test_two():
    pass
Expected output:
Setup of test_one
Teardown of test_one
Setup of test_two
Teardown of test_two
Injecting the name as a parameter would be the nicest solution, but I am open to other suggestions as well.
Sounds like self._testMethodName or self.id() should work for you. These are an attribute and a method on the unittest.TestCase class. E.g.:
from django.test import TestCase

class MyTestCase(TestCase):
    def setUp(self):
        print self._testMethodName
        print self.id()

    def test_one(self):
        self.assertIsNone(1)

    def test_two(self):
        self.assertIsNone(2)
prints:
...
AssertionError: 1 is not None
-------------------- >> begin captured stdout << ---------------------
test_one
path.MyTestCase.test_one
--------------------- >> end captured stdout << ----------------------
...
AssertionError: 2 is not None
-------------------- >> begin captured stdout << ---------------------
test_two
path.MyTestCase.test_two
--------------------- >> end captured stdout << ----------------------
Also see:
A way to output pyunit test name in setup()
How to get currently running testcase name from testsuite in unittest
Hope that helps.
I have a solution that works for test functions, using a custom decorator:
from nose.tools import with_setup

def with_named_setup(setup=None, teardown=None):
    def wrap(f):
        return with_setup(
            lambda: setup(f.__name__) if (setup is not None) else None,
            lambda: teardown(f.__name__) if (teardown is not None) else None)(f)
    return wrap

@with_named_setup(setup_func, teardown_func)
def test_one():
    pass

@with_named_setup(setup_func, teardown_func)
def test_two():
    pass
This reuses the existing with_setup decorator, but binds the name of the decorated function to the setup and teardown functions passed as parameters.
In case you want neither to subclass unittest.TestCase nor to use a custom decorator (as explained in the other answers), you can get the information by digging through the call stack:
import inspect

def get_current_case():
    '''
    Get information about the currently running test case.

    Returns the fully qualified name of the current test function
    when called from within a test method, test function, setup or
    teardown.

    Raises ``RuntimeError`` if the current test case could not be
    determined.

    Tested on Python 2.7 and 3.3 - 3.6 with nose 1.3.7.
    '''
    for frame_info in inspect.stack():
        if frame_info[1].endswith('unittest/case.py'):
            return frame_info[0].f_locals['self'].id()
    raise RuntimeError('Could not determine test case')
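As a usage sketch (the directory naming and template path are just examples), the setup function from the question then no longer needs the name injected:
import shutil

def setup_func():
    case_id = get_current_case()  # e.g. 'tests.test_module.test_one'
    workdir = 'copy_' + case_id.replace('.', '_')
    shutil.copytree('template_dir', workdir)  # fresh copy per test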

ValueError: no such test method in <class 'myapp.tests.SessionTestCase'>: runTest

I have a test case:
class LoginTestCase(unittest.TestCase):
    ...
I'd like to use it in a different test case:
class EditProfileTestCase(unittest.TestCase):
    def __init__(self):
        self.t = LoginTestCase()
        self.t.login()
This raises:
ValueError: no such test method in <class 'LoginTestCase'>: runTest
I looked at the unittest code where the exception is being called, and it looks like the tests aren't supposed to be written this way. Is there a standard way to write something you'd like tested so that it can be reused by later tests? Or is there a workaround?
I've added an empty runTest method to LoginTest as a dubious workaround for now.
The confusion with "runTest" is mostly based on the fact that this works:
class MyTest(unittest.TestCase):
    def test_001(self):
        print "ok"

if __name__ == "__main__":
    unittest.main()
So there is no "runTest" in that class, and all of the test functions are being called. However, if you look at the base class "TestCase" (lib/python/unittest/case.py), you will find that it has an argument "methodName" that defaults to "runTest", but it does NOT have a default implementation of "def runTest":
class TestCase:
    def __init__(self, methodName='runTest'):
The reason that unittest.main works fine is based on the fact that it does not need "runTest" - you can mimic the behaviour by creating a TestCase-subclass instance for all methods that you have in your subclass - just provide the name as the first argument:
class MyTest(unittest.TestCase):
    def test_001(self):
        print "ok"

if __name__ == "__main__":
    suite = unittest.TestSuite()
    for method in dir(MyTest):
        if method.startswith("test"):
            suite.addTest(MyTest(method))
    unittest.TextTestRunner().run(suite)
Here's some 'deep black magic':
suite = unittest.TestLoader().loadTestsFromTestCase(Test_MyTests)
unittest.TextTestRunner(verbosity=3).run(suite)
Very handy if you just want to test run your unit tests from a shell (i.e., IPython).
If you don't mind editing the unittest module code directly, the simple fix is to add, under class TestCase in case.py, a new method called runTest that does nothing.
The file to edit sits under pythoninstall\Lib\unittest\case.py
def runTest(self):
pass
This will stop you ever getting this error.
Guido's answer is almost there; however, it doesn't explain why. I needed to look at the unittest code to grasp the flow.
Say you have the following.
import unittest

class MyTestCase(unittest.TestCase):
    def testA(self):
        pass

    def testB(self):
        pass
When you use unittest.main(), it will try to discover test cases in current module. The important code is unittest.loader.TestLoader.loadTestsFromTestCase.
def loadTestsFromTestCase(self, testCaseClass):
    # ...
    # This will look in the class' callable attributes that start
    # with 'test', and return their names sorted.
    testCaseNames = self.getTestCaseNames(testCaseClass)
    # If there's no test to run, look if the case has the default method.
    if not testCaseNames and hasattr(testCaseClass, 'runTest'):
        testCaseNames = ['runTest']
    # Create a TestSuite instance holding one test case instance per test method.
    loaded_suite = self.suiteClass(map(testCaseClass, testCaseNames))
    return loaded_suite
What the latter does is convert the test case class into a test suite that holds one instance of the class per test method. I.e. my example will be turned into unittest.suite.TestSuite([MyTestCase('testA'), MyTestCase('testB')]). So if you would like to create a test case manually, you need to do the same thing.
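In other words, to build the same suite by hand you instantiate the case once per test method, e.g.:
import unittest

suite = unittest.TestSuite([MyTestCase('testA'), MyTestCase('testB')])
unittest.TextTestRunner(verbosity=2).run(suite)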
@dmvianna's answer got me very close to being able to run unittest in a jupyter (ipython) notebook, but I had to do a bit more. If I wrote just the following:
class TestStringMethods(unittest.TestCase):
    def test_upper(self):
        self.assertEqual('foo'.upper(), 'FOO')

    def test_isupper(self):
        self.assertTrue('FOO'.isupper())
        self.assertFalse('Foo'.isupper())

    def test_split(self):
        s = 'hello world'
        self.assertEqual(s.split(), ['hello', 'world'])
        # check that s.split fails when the separator is not a string
        with self.assertRaises(TypeError):
            s.split(2)

suite = unittest.TestLoader().loadTestsFromModule(TestStringMethods)
unittest.TextTestRunner().run(suite)
I got
Ran 0 tests in 0.000s
OK
It's not broken, but it doesn't run any tests! If I instantiated the test class
suite = unittest.TestLoader().loadTestsFromModule (TestStringMethods())
(note the parens at the end of the line; that's the only change) I got
ValueError                                Traceback (most recent call last)
<ipython-input-...> in <module>()
----> 1 suite = unittest.TestLoader().loadTestsFromModule(TestStringMethods())

/usr/lib/python2.7/unittest/case.pyc in __init__(self, methodName)
    189             except AttributeError:
    190                 raise ValueError("no such test method in %s: %s" %
--> 191                       (self.__class__, methodName))
    192         self._testMethodDoc = testMethod.__doc__
    193         self._cleanups = []

ValueError: no such test method in <class '__main__.TestStringMethods'>: runTest
The fix is now reasonably clear: add runTest to the test class:
class TestStringMethods(unittest.TestCase):
    def runTest(self):
        self.test_upper()
        self.test_isupper()
        self.test_split()

    def test_upper(self):
        self.assertEqual('foo'.upper(), 'FOO')

    def test_isupper(self):
        self.assertTrue('FOO'.isupper())
        self.assertFalse('Foo'.isupper())

    def test_split(self):
        s = 'hello world'
        self.assertEqual(s.split(), ['hello', 'world'])
        # check that s.split fails when the separator is not a string
        with self.assertRaises(TypeError):
            s.split(2)

suite = unittest.TestLoader().loadTestsFromModule(TestStringMethods())
unittest.TextTestRunner().run(suite)
Ran 3 tests in 0.002s
OK
It also works correctly (and runs 3 tests) if my runTest just passes, as suggested by @Darren.
This is a little yucky, requiring some manual labor on my part, but it's also more explicit, and that's a Python virtue, isn't it?
I could not get any of the techniques that call unittest.main with explicit arguments (from here or from the related question Unable to run unittest's main function in ipython/jupyter notebook) to work inside a jupyter notebook, but I am back on the road with a full tank of gas.
unittest does deep black magic -- if you choose to use it to run your unit-tests (I do, since this way I can use a very powerful battery of test runners &c integrated into the build system at my workplace, but there are definitely worthwhile alternatives), you'd better play by its rules.
In this case, I'd simply have EditProfileTestCase derive from LoginTestCase (rather than directly from unittest.TestCase). If there are some parts of LoginTestCase that you do want to also test in the different environment of EditProfileTestCase, and others that you don't, it's a simple matter to refactor LoginTestCase into those two parts (possibly using multiple inheritance) and if some things need to happen slightly differently in the two cases, factor them out into auxiliary "hook methods" (in a "Template Method" design pattern) -- I use all of these approaches often to diminish boilerplate and increase reuse in the copious unit tests I always write (if I have unit-test coverage < 95%, I always feel truly uneasy -- below 90%, I start to feel physically sick;-).
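A minimal sketch of that inheritance approach (the method names are illustrative):
import unittest

class LoginTestCase(unittest.TestCase):
    def login(self):
        ...  # shared login steps reused by several test cases

class EditProfileTestCase(LoginTestCase):
    def setUp(self):
        self.login()

    def test_edit_profile(self):
        ...  # exercises profile editing, relying on the login from setUp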
