I am currently writing some functional tests using nose. The library I am testing manipulates a directory structure.
To get reproducible results, I store a template of a test directory structure and create a copy of it before executing a test (I do that inside the test's setup function). This ensures that I always have a well-defined state at the beginning of each test.
Now I have two further requirements:
If a test fails, I would like the directory structure it operated on to not be overwritten or deleted, so that I can analyze the problem.
I would like to be able to run multiple tests in parallel.
Both of these requirements could be solved by creating a new copy with a different name for each test that is executed. For this reason, I would like to get access to the name of the currently executing test in the setup function, so that I can name the copy appropriately. Is there any way to achieve this?
An illustrative code example:
from nose.tools import with_setup

def setup_func(test_name):
    print "Setup of " + test_name

def teardown_func(test_name):
    print "Teardown of " + test_name

@with_setup(setup_func, teardown_func)
def test_one():
    pass

@with_setup(setup_func, teardown_func)
def test_two():
    pass
Expected output:
Setup of test_one
Teardown of test_one
Setup of test_two
Teardown of test_two
Injecting the name as a parameter would be the nicest solution, but I am open to other suggestions as well.
Sounds like self._testMethodName or self.id() should work for you. These are a property and a method, respectively, of the unittest.TestCase class. E.g.:
from django.test import TestCase

class MyTestCase(TestCase):
    def setUp(self):
        print self._testMethodName
        print self.id()

    def test_one(self):
        self.assertIsNone(1)

    def test_two(self):
        self.assertIsNone(2)
prints:
...
AssertionError: 1 is not None
-------------------- >> begin captured stdout << ---------------------
test_one
path.MyTestCase.test_one
--------------------- >> end captured stdout << ----------------------
...
AssertionError: 2 is not None
-------------------- >> begin captured stdout << ---------------------
test_two
path.MyTestCase.test_two
--------------------- >> end captured stdout << ----------------------
Also see:
A way to output pyunit test name in setup()
How to get currently running testcase name from testsuite in unittest
Hope that helps.
I have a solution that works for test functions, using a custom decorator:
from nose.tools import with_setup

def with_named_setup(setup=None, teardown=None):
    def wrap(f):
        return with_setup(
            lambda: setup(f.__name__) if (setup is not None) else None,
            lambda: teardown(f.__name__) if (teardown is not None) else None)(f)
    return wrap

@with_named_setup(setup_func, teardown_func)
def test_one():
    pass

@with_named_setup(setup_func, teardown_func)
def test_two():
    pass
This reuses the existing with_setup decorator, but binds the name of the decorated function to the setup and teardown functions passed as parameters.
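With the test name in hand, the setup function could create a uniquely named copy of the template directory, which addresses both the parallelism and the post-mortem requirements. A minimal sketch (the directory names here are assumptions for illustration):

import shutil

TEMPLATE_DIR = 'test_template'  # hypothetical template location

def setup_func(test_name):
    # copy the template to a per-test directory, e.g. 'work_test_one',
    # so concurrently running tests don't collide
    shutil.copytree(TEMPLATE_DIR, 'work_' + test_name)

def teardown_func(test_name):
    # deliberately keep the directory so a failed test's state can be
    # inspected; cleanup could be made conditional on the test outcome
    pass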
In case you want neither to subclass unittest.TestCase nor to use a custom decorator (as explained in the other answers), you can get the information by digging through the call stack:
import inspect

def get_current_case():
    '''
    Get information about the currently running test case.

    Returns the fully qualified name of the current test function
    when called from within a test method, test function, setup or
    teardown.

    Raises ``RuntimeError`` if the current test case could not be
    determined.

    Tested on Python 2.7 and 3.3 - 3.6 with nose 1.3.7.
    '''
    for frame_info in inspect.stack():
        if frame_info[1].endswith('unittest/case.py'):
            return frame_info[0].f_locals['self'].id()
    raise RuntimeError('Could not determine test case')
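For example, a plain nose setup function could call it directly; a minimal usage sketch:

def setup_func():
    test_name = get_current_case()
    print("Setup of " + test_name)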
Related
I am writing a test framework using pytest. Is there a way to get the test case object in classes other than the test case itself, for example in utility classes?
I want to print the test case name and some markers for the test in these utility classes. Is this information available in some context manager?
You cannot directly access pytest test properties if you are not inside a test fixture or a hook function, as there is no fixed test case class as in unittest. Your best bet is probably to get this information in a fixture and store it globally for access from a utility function:
import pytest

testinfo = {}

@pytest.fixture(autouse=True)
def test_info(request):
    global testinfo
    testinfo['name'] = request.node.name
    testinfo['markers'] = [m.name for m in request.node.iter_markers()]
    ...
    yield  # the information is stored at test start...
    testinfo = {}  # ... and removed on test teardown

def utility_func():
    if testinfo:
        print(f"Name: {testinfo['name']} Markers: {testinfo['markers']}")
    ...
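A hypothetical test that calls the utility during execution might look like this (the marker name is made up for illustration):

@pytest.mark.example_marker
def test_something():
    utility_func()  # prints this test's name and markers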
Or, the same if you use a test class:
class TestSomething:
    def setup_method(self):
        self.testinfo = {}

    @pytest.fixture(autouse=True)
    def info(self, request):
        self.testinfo['name'] = request.node.name
        self.testinfo['markers'] = [m.name for m in
                                    request.node.iter_markers()]
        yield  # the information is stored at test start...
        self.testinfo = {}  # ... and removed on test teardown

    def utility_func(self):
        if self.testinfo:
            print(f"Name: {self.testinfo['name']} Markers:"
                  f" {self.testinfo['markers']}")

    @pytest.mark.test_marker
    def test_something(self):
        self.utility_func()
        assert True
This will show the output:
Name: test_something Markers: ['test_marker']
This will work if you call the utility function during test execution; otherwise no value will be set.
Note, however, that this will only work reliably if you execute the tests synchronously. If you use pytest-xdist or similar tools for parallel test execution, it may not work, because the testinfo variable can be overwritten by another test (though that depends on the implementation; it may work if the variables are copied during a test run). In that case you can do the logging directly in the fixture or hook function, which may generally be a better idea, depending on your use case.
For more information about available test node properties, you can check the documentation of the request node.
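A minimal sketch of that per-fixture logging variant, which is safer under parallel execution because no mutable state is shared globally:

import pytest

@pytest.fixture(autouse=True)
def log_test_info(request):
    # no shared state: read everything from the request at test start
    print(f"Starting {request.node.name}, "
          f"markers: {[m.name for m in request.node.iter_markers()]}")
    yield
    print(f"Finished {request.node.name}")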
I'm trying to create a test environment with Pytest. The idea is to group test methods into classes.
For every class/group, I want to attach a config fixture that is going to be parametrized. So that I can run all the tests with "configuration A" and then all tests with "configuration B" and so on.
But also, I want a reset fixture, that can be executed before specific methods or all methods of a class.
The problem I have there is, once I apply my reset fixture (to a method or to a whole class), the config fixture seems to work in the function scope instead of the class scope. So, once I apply the reset fixture, the config fixture is called before/after every method in the class.
The following piece of code reproduces the problem:
import pytest
from pytest import *

@fixture(scope='class')
def config(request):
    print("\nconfiguring with %s" % request.param)
    yield
    print("\ncleaning up config")

@fixture(scope='function')
def reset():
    print("\nresetting")

@mark.parametrize("config", ["config-A", "config-B"], indirect=True)
# @mark.usefixtures("reset")
class TestMoreStuff(object):
    def test_a(self, config):
        pass

    def test_b(self, config):
        pass

    def test_c(self, config):
        pass
The test shows how the config fixture should work: it is executed only once for the whole class. If you uncomment the usefixtures decorator, you can see that the config fixture is executed for every test method. Is it possible to use the reset fixture without triggering this behaviour?
As I mentioned in a comment, that seems to be a bug in Pytest 3.2.5.
There's a workaround, which is to "force" the scope of a parametrization. So, in this case if you include the scope="class" in the parametrize decorator, you get the desired behaviour.
import pytest
from pytest import *

@fixture(scope='class')
def config(request):
    print("\nconfiguring with %s" % request.param)
    yield
    print("\ncleaning up config")

@fixture(scope='function')
def reset():
    print("\nresetting")

@mark.parametrize("config", ["config-A", "config-B"], indirect=True, scope="class")
@mark.usefixtures("reset")
class TestMoreStuff(object):
    def test_a(self, config):
        pass

    def test_b(self, config):
        pass

    def test_c(self, config):
        pass
It depends on which version of pytest you are using.
There were some semantic problems with implementing this in older versions of pytest, so the idea was not implemented there. It had already been suggested; you can refer to the issue
"Fixture scope doesn't work when parametrized tests use parametrized fixtures".
This was the bug.
The issue has been resolved in the latest version of pytest; here's the commit for it, in pytest 3.2.5.
Hope it helps.
I have the following scripts:
conftest.py:
import pytest

@pytest.fixture(scope="session")
def setup_env(request):
    # run some setup
    return("result")
test.py:
import pytest

@pytest.mark.usefixtures("setup_env")
class TestDirectoryInit(object):
    def setup(cls):
        print("this is setup")
        ret=setup_env()
        print(ret)

    def test1():
        print("test1")

    def teardown(cls):
        print("this teardown")
I get the error:
    def setup(cls):
        print("this is setup")
>       ret=setup_env()
E       NameError: name 'setup_env' is not defined
In setup(), I want to get the return value "result" from setup_env() in conftest.py.
Could any expert guide me how to do it?
I believe that @pytest.mark.usefixtures is meant more for state alteration prior to the execution of each test. From the docs:
"Sometimes test functions do not directly need access to a fixture object."
https://docs.pytest.org/en/latest/fixture.html#using-fixtures-from-classes-modules-or-projects
Meaning that your fixture is running at the start of each test, but your functions do not have access to it.
When your tests need access to the object returned by your fixture, it should already be available by name once it is placed in conftest.py and marked with @pytest.fixture. All you need to do then is declare the name of the fixture as an argument to your test function, like so:
https://docs.pytest.org/en/latest/fixture.html#using-fixtures-from-classes-modules-or-projects
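For instance, a minimal sketch adapted from the question's code (keeping setup_env in conftest.py as shown above):

class TestDirectoryInit(object):
    def test1(self, setup_env):
        # the fixture's return value is injected via the argument name
        print(setup_env)  # prints "result"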
If you prefer to do this on a class or module level, you want to change the scope of your @pytest.fixture statement, like so:
https://docs.pytest.org/en/latest/fixture.html#sharing-a-fixture-across-tests-in-a-module-or-class-session
Sorry for so many links to the docs, but I think they have good examples. Hope that clears things up.
I want to run additional setup and teardown checks before and after each test in my test suite. I've looked at fixtures but not sure on whether they are the correct approach. I need to run the setup code prior to each test and I need to run the teardown checks after each test.
My use-case is checking for code that doesn't clean up correctly: it leaves temporary files behind. In my setup, I will check the files, and in the teardown I also check the files. If there are extra files, I want the test to fail.
py.test fixtures are a technically adequate method to achieve your purpose.
You just need to define a fixture like this:
import os
import pytest

@pytest.fixture(autouse=True)
def run_around_tests():
    # Code that will run before your test, for example snapshotting the
    # existing files (listing the working directory is just one way to
    # check them):
    files_before = set(os.listdir('.'))
    # A test function will be run at this point
    yield
    # Code that will run after your test:
    files_after = set(os.listdir('.'))
    assert files_before == files_after
By declaring your fixture with autouse=True, it will be automatically invoked for each test function defined in the same module.
That said, there is one caveat. Asserting at setup/teardown is a controversial practice. I'm under the impression that the py.test main authors do not like it (I do not like it either, so that may colour my own perception), so you might run into some problems or rough edges as you go forward.
You can use a fixture in order to achieve what you want.
import pytest

@pytest.fixture(autouse=True)
def run_before_and_after_tests(tmpdir):
    """Fixture to execute asserts before and after a test is run"""
    # Setup: fill with any logic you want

    yield  # this is where the testing happens

    # Teardown: fill with any logic you want
Detailed Explanation
@pytest.fixture(autouse=True), from the docs: "Occasionally, you may want to have fixtures get invoked automatically without declaring a function argument explicitly or a usefixtures decorator." Therefore, this fixture will run every time a test is executed.
# Setup: fill with any logic you want; this logic will be executed before every test is actually run. In your case, you can add the assert statements that should run before the actual test.
yield: as indicated in the comment, this is where the testing happens.
# Teardown: fill with any logic you want; this logic will be executed after every test and is guaranteed to run regardless of what happens during the tests.
Note: in pytest there is a difference between a failing test and an error while executing a test. A Failure indicates that the test failed in some way.
An Error indicates that you couldn't get to the point of doing a proper test.
Consider the following examples:
Assert fails before test is run -> ERROR
import pytest

@pytest.fixture(autouse=True)
def run_around_tests():
    assert False  # This will generate an error when running tests
    yield
    assert True

def test():
    assert True
Assert fails after test is run -> ERROR
import pytest

@pytest.fixture(autouse=True)
def run_around_tests():
    assert True
    yield
    assert False

def test():
    assert True
Test fails -> FAILED
import pytest

@pytest.fixture(autouse=True)
def run_around_tests():
    assert True
    yield
    assert True

def test():
    assert False
Test passes -> PASSED
import pytest

@pytest.fixture(autouse=True)
def run_around_tests():
    assert True
    yield
    assert True

def test():
    assert True
Fixtures are exactly what you want.
That's what they are designed for.
Whether you use pytest style fixtures, or setup and teardown (module, class, or method level) xUnit style fixtures, depends on the circumstance and personal taste.
From what you are describing, it seems like you could use pytest autouse fixtures.
Or xUnit style function level setup_function()/teardown_function().
Pytest has you completely covered. So much so that perhaps it's a fire hose of information.
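For comparison, a minimal sketch of the xUnit-style function-level hooks (module-level functions that pytest picks up automatically):

def setup_function(function):
    # runs before each test function in this module
    print("setting up %s" % function.__name__)

def teardown_function(function):
    # runs after each test function in this module
    print("tearing down %s" % function.__name__)

def test_example():
    assert True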
You can use module-level setup/teardown fixtures of pytest.
Here's the link:
http://pytest.org/latest/xunit_setup.html
It works as follows:
def setup_module(module):
    """ setup any state specific to the execution of the given module."""

def teardown_module(module):
    """ teardown any state that was previously setup with a setup_module
    method."""

class TestClass:
    def test_01(self):
        # test 1 code
        pass
It will call setup_module before the tests in this module and teardown_module after they complete.
You can include these functions in each test script to run them for that script.
If you want something that is common to all tests in a directory, you can use the package/directory-level fixtures of the nose framework:
http://pythontesting.net/framework/nose/nose-fixture-reference/#package
In the __init__.py file of the package you can include the following:
def setup_package():
    '''Set up your environment for the test package'''

def teardown_package():
    '''Revert the state'''
You may use decorators, but programmatically, so you don't need to put the decorator on each method.
I'm assuming several things in the next code:
The test methods are all named like "testXXX()".
The decorator is added in the same module where the test methods are implemented.
def test1():
    print ("Testing hello world")

def test2():
    print ("Testing hello world 2")

# This is the decorator
class TestChecker(object):
    def __init__(self, testfn, *args, **kwargs):
        self.testfn = testfn

    def pretest(self):
        print ('precheck %s' % str(self.testfn))

    def posttest(self):
        print ('postcheck %s' % str(self.testfn))

    def __call__(self):
        self.pretest()
        self.testfn()
        self.posttest()

for fn in dir():
    if fn.startswith('test'):
        locals()[fn] = TestChecker(locals()[fn])
Now if you call the test methods...
test1()
test2()
The output should be something like:
precheck <function test1 at 0x10078cc20>
Testing hello world
postcheck <function test1 at 0x10078cc20>
precheck <function test2 at 0x10078ccb0>
Testing hello world 2
postcheck <function test2 at 0x10078ccb0>
If you have test methods as class methods, the approach is also valid. For instance:
class TestClass(object):
    @classmethod
    def my_test(cls):
        print ("Testing from class method")

for fn in dir(TestClass):
    if not fn.startswith('__'):
        setattr(TestClass, fn, TestChecker(getattr(TestClass, fn)))
The call to TestClass.my_test() will print:
precheck <bound method type.my_test of <class '__main__.TestClass'>>
Testing from class method
postcheck <bound method type.my_test of <class '__main__.TestClass'>>
It is an old question, but I personally found another way in the docs: use the pytest.ini file:
[pytest]
usefixtures = my_setup_and_tear_down
import pytest

@pytest.fixture
def my_setup_and_tear_down():
    # SETUP
    # Write here the logic that you need for the setUp

    yield  # this statement will let the tests execute

    # TEARDOWN
    # Write here the logic that you need after each test
About the yield statement and how it lets the tests execute, see the pytest documentation on yield fixtures.
Fixtures by default have scope=function. So, if you just use a definition such as
@pytest.fixture
def fixture_func(self)
It defaults to (scope='function').
So any finalizers in the fixture function will be called after each test.
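A minimal sketch of a function-scoped fixture with a finalizer (the names are made up for illustration):

import pytest

@pytest.fixture  # scope='function' is the default
def fixture_func():
    yield "value"
    # finalizer section: runs after each test that used the fixture
    print("finalizing")

def test_uses_fixture(fixture_func):
    assert fixture_func == "value"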
I have a test case:
class LoginTestCase(unittest.TestCase):
    ...
I'd like to use it in a different test case:
class EditProfileTestCase(unittest.TestCase):
    def __init__(self):
        self.t = LoginTestCase()
        self.t.login()
This raises:
ValueError: no such test method in <class 'LoginTestCase'>: runTest
I looked at the unittest code where the exception is being called, and it looks like the tests aren't supposed to be written this way. Is there a standard way to write something you'd like tested so that it can be reused by later tests? Or is there a workaround?
I've added an empty runTest method to LoginTestCase as a dubious workaround for now.
The confusion with "runTest" is mostly based on the fact that this works:
class MyTest(unittest.TestCase):
    def test_001(self):
        print "ok"

if __name__ == "__main__":
    unittest.main()
So there is no "runTest" in that class and all of the test functions are being called. However, if you look at the base class "TestCase" (lib/python/unittest/case.py), you will find that it has an argument "methodName" that defaults to "runTest", but it does NOT have a default implementation of "def runTest":
class TestCase:
    def __init__(self, methodName='runTest'):
The reason that unittest.main works fine is that it does not need "runTest": you can mimic the behaviour by creating a TestCase-subclass instance for each test method in your subclass, providing the method name as the first argument:
class MyTest(unittest.TestCase):
    def test_001(self):
        print "ok"

if __name__ == "__main__":
    suite = unittest.TestSuite()
    for method in dir(MyTest):
        if method.startswith("test"):
            suite.addTest(MyTest(method))
    unittest.TextTestRunner().run(suite)
Here's some 'deep black magic':
suite = unittest.TestLoader().loadTestsFromTestCase(Test_MyTests)
unittest.TextTestRunner(verbosity=3).run(suite)
Very handy if you just want to test run your unit tests from a shell (i.e., IPython).
If you don't mind editing the unittest module code directly, the simple fix is to add, under class TestCase in case.py, a new runTest method that does nothing. The file to edit sits under pythoninstall\Lib\unittest\case.py:
    def runTest(self):
        pass
This will stop you ever getting this error.
Guido's answer is almost there; however, it doesn't explain why. I needed to look at the unittest code to grasp the flow.
Say you have the following.
import unittest

class MyTestCase(unittest.TestCase):
    def testA(self):
        pass

    def testB(self):
        pass
When you use unittest.main(), it will try to discover test cases in the current module. The important code is unittest.loader.TestLoader.loadTestsFromTestCase.
def loadTestsFromTestCase(self, testCaseClass):
    # ...
    # This will look in the class's callable attributes that start
    # with 'test', and return their names sorted.
    testCaseNames = self.getTestCaseNames(testCaseClass)
    # If there's no test to run, look if the case has the default method.
    if not testCaseNames and hasattr(testCaseClass, 'runTest'):
        testCaseNames = ['runTest']
    # Create a TestSuite instance having one test case instance per test method.
    loaded_suite = self.suiteClass(map(testCaseClass, testCaseNames))
    return loaded_suite
What the latter does is convert the test case class into a test suite that holds one instance of the class per test method. I.e. my example will be turned into unittest.suite.TestSuite([MyTestCase('testA'), MyTestCase('testB')]). So if you would like to create a test case manually, you need to do the same thing.
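For example, using the MyTestCase class from above:

import unittest

suite = unittest.TestSuite([MyTestCase('testA'), MyTestCase('testB')])
unittest.TextTestRunner().run(suite)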
@dmvianna's answer got me very close to being able to run unittest in a jupyter (ipython) notebook, but I had to do a bit more. If I wrote just the following:
class TestStringMethods(unittest.TestCase):
    def test_upper(self):
        self.assertEqual('foo'.upper(), 'FOO')

    def test_isupper(self):
        self.assertTrue('FOO'.isupper())
        self.assertFalse('Foo'.isupper())

    def test_split(self):
        s = 'hello world'
        self.assertEqual(s.split(), ['hello', 'world'])
        # check that s.split fails when the separator is not a string
        with self.assertRaises(TypeError):
            s.split(2)

suite = unittest.TestLoader().loadTestsFromModule(TestStringMethods)
unittest.TextTestRunner().run(suite)
I got
Ran 0 tests in 0.000s
OK
It's not broken, but it doesn't run any tests! If I instantiated the test class
suite = unittest.TestLoader().loadTestsFromModule(TestStringMethods())
(note the parens at the end of the line; that's the only change) I got
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 suite = unittest.TestLoader().loadTestsFromModule(TestStringMethods())

/usr/lib/python2.7/unittest/case.pyc in __init__(self, methodName)
    189             except AttributeError:
    190                 raise ValueError("no such test method in %s: %s" %
--> 191                       (self.__class__, methodName))
    192         self._testMethodDoc = testMethod.__doc__
    193         self._cleanups = []

ValueError: no such test method in <class '__main__.TestStringMethods'>: runTest
The fix is now reasonably clear: add runTest to the test class:
class TestStringMethods(unittest.TestCase):
    def runTest(self):
        self.test_upper()
        self.test_isupper()
        self.test_split()

    def test_upper(self):
        self.assertEqual('foo'.upper(), 'FOO')

    def test_isupper(self):
        self.assertTrue('FOO'.isupper())
        self.assertFalse('Foo'.isupper())

    def test_split(self):
        s = 'hello world'
        self.assertEqual(s.split(), ['hello', 'world'])
        # check that s.split fails when the separator is not a string
        with self.assertRaises(TypeError):
            s.split(2)

suite = unittest.TestLoader().loadTestsFromModule(TestStringMethods())
unittest.TextTestRunner().run(suite)
Ran 3 tests in 0.002s
OK
It also works correctly (and runs 3 tests) if my runTest just passes, as suggested by @Darren.
This is a little yucky, requiring some manual labor on my part, but it's also more explicit, and that's a Python virtue, isn't it?
I could not get any of the techniques that call unittest.main with explicit arguments (from here or from the related question Unable to run unittest's main function in ipython/jupyter notebook) to work inside a jupyter notebook, but I am back on the road with a full tank of gas.
unittest does deep black magic -- if you choose to use it to run your unit-tests (I do, since this way I can use a very powerful battery of test runners &c integrated into the build system at my workplace, but there are definitely worthwhile alternatives), you'd better play by its rules.
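A minimal sketch of that inheritance approach (the login helper's body is an assumption for illustration):

import unittest

class LoginTestCase(unittest.TestCase):
    def login(self):
        # shared helper reused by subclasses; body is hypothetical
        self.logged_in = True

class EditProfileTestCase(LoginTestCase):
    def test_edit_profile(self):
        # reuse the inherited helper instead of instantiating
        # a separate LoginTestCase
        self.login()
        self.assertTrue(self.logged_in)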
In this case, I'd simply have EditProfileTestCase derive from LoginTestCase (rather than directly from unittest.TestCase). If there are some parts of LoginTestCase that you do want to also test in the different environment of EditProfileTestCase, and others that you don't, it's a simple matter to refactor LoginTestCase into those two parts (possibly using multiple inheritance) and if some things need to happen slightly differently in the two cases, factor them out into auxiliary "hook methods" (in a "Template Method" design pattern) -- I use all of these approaches often to diminish boilerplate and increase reuse in the copious unit tests I always write (if I have unit-test coverage < 95%, I always feel truly uneasy -- below 90%, I start to feel physically sick;-).