I have my test class as follows:
class MyTestCase(django.test.TestCase):
    def setUp(self):
        # set up stuff common to ALL the tests
        ...

    @my_test_decorator('arg1')
    @my_test_decorator('arg2')
    def test_something_1(self):
        # run test
        ...

    def test_something_2(self):
        # run test
        ...

    @my_test_decorator('arg1')
    @my_test_decorator('arg2')
    def test_something_3(self):
        # run test
        ...

    ...

    def test_something_N(self):
        # run test
        ...
Now, @my_test_decorator is a decorator I wrote that makes internal changes to set up the test environment at runtime and undoes those changes after the test finishes. But I need to do this for a specific set of test cases only, not ALL of them, and I would like to keep the setup common to all the tests. For the specific tests, maybe do something like this:
def setUp(self):
    # set up stuff common to ALL the tests
    tests_to_decorate = ['test_something_1', 'test_something_3']
    decorator_args = ['arg1', 'arg2']
    if self._testMethodName in tests_to_decorate:
        method = getattr(self, self._testMethodName)
        for arg in decorator_args:
            method = my_test_decorator(arg)(method)
        setattr(self, self._testMethodName, method)
I mean, without repeating the decorators all over the file. But it seems that the test runner retrieves the set of tests to run even before instantiating the test class, so doing this in the __init__ or setUp methods is of no use.
It would be nice to have a way to accomplish this without:
- having to write my own test runner
- needing to split the tests into two or more TestCase subclasses
- repeating setUp in different classes
- creating a class that hosts the setUp method and having the TestCase subclasses inherit from it
Is this even possible?
Thanks!! :)
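One direction that avoids all of the above (a sketch, not a tested solution) is to apply the decorators when the class body is created, before the runner collects the test methods, e.g. with a class decorator. Here decorate_tests is a helper made up for illustration, and my_test_decorator is the decorator from the question:

import django.test

def decorate_tests(decorator, args, names):
    # Class decorator: wraps the named test methods at class-creation
    # time, so the runner collects the already-decorated versions.
    def wrap(cls):
        for name in names:
            method = getattr(cls, name)
            for arg in args:
                method = decorator(arg)(method)
            setattr(cls, name, method)
        return cls
    return wrap

@decorate_tests(my_test_decorator, ['arg1', 'arg2'],
                ['test_something_1', 'test_something_3'])
class MyTestCase(django.test.TestCase):
    def setUp(self):
        # set up stuff common to ALL the tests
        ...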
So, I have fixtures defined in a conftest.py file with scope="class", as I want them to run before each test class is invoked. The conftest file is placed in the project root directory so it is visible to every test module.
Now, in one of the test modules, I have another setup function that I want to run once for that module only. The problem is that the setup_class() method is called before the fixtures defined in conftest.py run. Is this expected? I wanted it to be the opposite, because I want to use something done in the conftest fixtures. How can I do that?
Code -
conftest.py:
#pytest.fixture(scope="class")
def fixture1(request):
#set a
#pytest.fixture(scope="class")
def fixture1(request):
test_1.py:
@pytest.mark.usefixtures("fixture_1", "fixture_2")
class Test1():
    # need this to run AFTER fixture_1 & fixture_2
    def setup_class():
        # setup
        # get a, set in fixture_1
        ...

    def test_1():
        ...
I know that I could simply define a fixture in the test file instead of setup_class, but then I would have to list it in the arguments of every test method for pytest to invoke it. Still, suggestions are welcome!
I have exactly the same problem. Only now have I realized that the problem might be that setup_class is called before the fixture :-/
I think that this question is similar to this one:
Pytest - How to pass an argument to setup_class?
And the problem is mixing the unittest and pytest approaches.
I kind of did what they suggested: I omitted setup_class and created a new fixture within the particular test file, which calls the fixture from conftest.py.
It works so far.
M.
The problem is that you can use the result of a fixture only in a test function (or method) that is run by pytest. Here I can suggest a workaround, though of course I'm not sure it suits your needs.
The workaround is to call the function from a test method:
conftest.py
@pytest.fixture(scope='class')
def fixture1():
    yield 'MYTEXT'
test_1.py
class Test1:
    def setup_class(self, some_var):
        print(some_var)

    def test_one(self, fixture1):
        self.setup_class(fixture1)
Fixtures and setup_class are two different paradigms to initialize test functions (and classes). In this case, mixing the two creates a problem: The class-scoped fixtures run when the individual test functions (methods) run. On the other hand, setup_class runs before they do. Hence, it is not possible to access a fixture value (or fixture-modified state) from setup_class.
One of the solutions is to stop using setup_class entirely and stick with a fixtures-only solution, which is the preferred way in pytest nowadays.
# conftest.py or the test file:
@pytest.fixture(scope="class")
def fixture_1(request):
    print('fixture_1')

# the test file:
class Test1():
    @pytest.fixture(scope="class", autouse=True)
    def setup(self, fixture_1, request):
        print('Test1.setup')

    def test_a(self):
        print('Test1.test_a')

    def test_b(self):
        print('Test1.test_b')
Note that the setup fixture depends on fixture_1 and hence can access it.
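Since everything involved is class-scoped, running this with pytest -s should print the conftest fixture first, then the class-level setup, once for the whole class, followed by the individual tests:

fixture_1
Test1.setup
Test1.test_a
Test1.test_b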
I am writing a test framework using pytest. Is there a way to get the test case object in classes other than the test case itself, for example utility classes?
I want to print the test case name and some markers for the test in the utility classes. Is this information available in some context manager?
You cannot directly access pytest test properties if you are not inside a test fixture or a hook function, as there is no fixed test case class as in unittest. Your best bet is probably to get this information in a fixture and store it globally for access from a utility function:
testinfo = {}

@pytest.fixture(autouse=True)
def test_info(request):
    global testinfo
    testinfo['name'] = request.node.name
    testinfo['markers'] = [m.name for m in request.node.iter_markers()]
    ...
    yield  # the information is stored at test start...
    testinfo = {}  # ... and removed on test teardown

def utility_func():
    if testinfo:
        print(f"Name: {testinfo['name']} Markers: {testinfo['markers']}")
    ...
Or, the same if you use a test class:
class TestSomething:
    def setup_method(self):
        self.testinfo = {}

    @pytest.fixture(autouse=True)
    def info(self, request):
        self.testinfo['name'] = request.node.name
        self.testinfo['markers'] = [m.name for m in
                                    request.node.iter_markers()]
        yield  # the information is stored at test start...
        self.testinfo = {}  # ... and removed on test teardown

    def utility_func(self):
        if self.testinfo:
            print(f"Name: {self.testinfo['name']} Markers:"
                  f" {self.testinfo['markers']}")

    @pytest.mark.test_marker
    def test_something(self):
        self.utility_func()
        assert True
This will show the output:
Name: test_something Markers: ['test_marker']
This will work if you call the utility function during test execution - otherwise no value will be set.
Note however that this will only work reliably if you execute the tests synchronously. If you use pytest-xdist or similar tools for parallel test execution, this may not work because the testinfo variable can be overwritten by another test (though that depends on the implementation; it may work if the variables are copied during a test run). In that case you can do the logging directly in the fixture or hook function, which may generally be a better idea, depending on your use case.
For more information about the available test node properties, you can check the documentation of the request node.
I need to add tests to my Django project, and I need to create test data before the tests execute. I read about setting up test data in this question. I can create data in setUpClass for all tests in a class, but creating my complete test data is time-consuming, so I want to run it once for all test classes. Is there an approach to set up data for all test classes once?
I found my answer, hope it can help someone else. Based on the Django docs:
A test runner is a class defining a run_tests() method. Django ships with a DiscoverRunner class that defines the default Django testing behavior. This class defines the run_tests() entry point, plus a selection of other methods that are used by run_tests() to set up, execute and tear down the test suite.
In the case of this question, there are two helpful methods in this class, setup_databases and teardown_databases, so we can override them to initialize data for all test classes.
from django.test.runner import DiscoverRunner as BaseRunner

class MyMixinRunner(object):
    def setup_databases(self, *args, **kwargs):
        temp_return = super(MyMixinRunner, self).setup_databases(*args, **kwargs)
        # do something
        return temp_return

    def teardown_databases(self, *args, **kwargs):
        # do something
        return super(MyMixinRunner, self).teardown_databases(*args, **kwargs)

class MyTestRunner(MyMixinRunner, BaseRunner):
    pass
After defining the test runner class, we need to add TEST_RUNNER to the settings:
TEST_RUNNER = 'path.to.MyTestRunner'
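For instance, the # do something hook can create shared objects once for the whole run; data created here is visible to every test class because the per-test transactions roll back around it, not over it. A sketch, where myapp.models.Author is a hypothetical model:

from django.test.runner import DiscoverRunner

class MyTestRunner(DiscoverRunner):
    def setup_databases(self, *args, **kwargs):
        config = super().setup_databases(*args, **kwargs)
        # Hypothetical model; created once, shared by all test classes.
        from myapp.models import Author
        Author.objects.create(name='shared author')
        return config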
I'm working on a module using sockets with hundreds of test cases, which is nice. Except now I need to test all of the cases with and without socket.setdefaulttimeout(60)... Please don't tell me to cut and paste all the tests and set/remove a default timeout in setup/teardown.
Honestly, I get that having each test case laid out on its own is good practice, but I also don't like to repeat myself. This is really just testing in a different context, not different tests.
I see that unittest supports module-level setup/teardown fixtures, but it isn't obvious to me how to convert my one test module into testing itself twice with two different setups.
Any help would be much appreciated.
You could do something like this (the shared test bodies live in methods without the test_ prefix, so the loader does not collect TestCommon's methods as tests on their own):
class TestCommon(unittest.TestCase):
    def method_one(self):
        # code for your first test
        pass

    def method_two(self):
        # code for your second test
        pass

class TestWithSetupA(TestCommon):
    def setUp(self):
        # setup for context A
        do_setup_a_stuff()

    def test_method_one(self):
        self.method_one()

    def test_method_two(self):
        self.method_two()

class TestWithSetupB(TestCommon):
    def setUp(self):
        # setup for context B
        do_setup_b_stuff()

    def test_method_one(self):
        self.method_one()

    def test_method_two(self):
        self.method_two()
The other answers on this question are valid in that they make it possible to actually perform the tests under multiple environments, but in playing around with the options I think I like a more self-contained approach. I'm using suites and results to organize and display test results. In order to run one set of tests with two environments, rather than two sets of tests, I took this approach: create a TestSuite subclass.
class FixtureSuite(unittest.TestSuite):
    def run(self, result, debug=False):
        socket.setdefaulttimeout(30)
        super().run(result, debug)
        socket.setdefaulttimeout(None)
...
suite1 = unittest.TestSuite(testCases)
suite2 = FixtureSuite(testCases)
fullSuite = unittest.TestSuite([suite1, suite2])
unittest.TextTestRunner(verbosity=2).run(fullSuite)
I would do it like this:
1. Make all of your tests derive from your own TestCase class, let's call it SynapticTestCase.
2. In SynapticTestCase.setUp(), examine an environment variable to determine whether to set the socket timeout or not (see the sketch below).
3. Run your entire test suite twice, once with the environment variable set one way, then again with it set the other way.
4. Write a small shell script to invoke the test suite both ways.
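A minimal sketch of steps 1 and 2, assuming the variable is called SOCKET_TIMEOUT (the name and the convention are made up here):

import os
import socket
import unittest

class SynapticTestCase(unittest.TestCase):
    def setUp(self):
        # Assumed convention: SOCKET_TIMEOUT=60 enables the timeout;
        # when the variable is unset, no default timeout is applied.
        timeout = os.environ.get('SOCKET_TIMEOUT')
        socket.setdefaulttimeout(int(timeout) if timeout else None)

The shell script from step 4 then runs the suite twice, e.g. python -m unittest discover once with SOCKET_TIMEOUT=60 exported and once without it.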
If your code does not call socket.setdefaulttimeout, then you can run the tests the following way:
import socket
import unittest

socket.setdefaulttimeout(60)
# Stash the function and disable it so nothing can change the default mid-run.
old_setdefaulttimeout, socket.setdefaulttimeout = socket.setdefaulttimeout, None
unittest.main()
socket.setdefaulttimeout = old_setdefaulttimeout
It is a hack, but it can work.
You could also inherit from and rerun the original suite, but override the whole setUp or a part of it:
class TestOriginal(TestCommon):
    def setUp(self):
        # common setUp here
        self.current_setUp()

    def current_setUp(self):
        # your first setUp
        pass

    def test_one(self):
        # your test
        pass

    def test_two(self):
        # another test
        pass

class TestWithNewSetup(TestOriginal):
    def current_setUp(self):
        # override your first current_setUp
        pass
I am unit testing Mercurial integration and have a test class which currently creates a repository with a file, and a clone of that repository, in its setUp method and removes them in its tearDown method.
As you can probably imagine, this gets quite expensive very quickly, especially if I have to do it for every test individually.
So what I would like to do is create the folders and initialize them for Mercurial when the class is loaded, so each and every unit test in the TestCase class can use these repositories; then, when all the tests have run, I'd like to remove them. The only thing my setUp and tearDown methods then have to take care of is that the two repositories are in the same state between each test.
Basically, what I'm looking for is a Python equivalent of JUnit's @BeforeClass and @AfterClass annotations.
I've now done it by subclassing the TestSuite class, since the standard loader wraps all the test methods in instances of the TestCase in which they're defined and puts them together in a TestSuite. I have the TestSuite call the before() and after() methods of the first TestCase. This of course means that you can't initialize any values on your TestCase object, but you probably want to do that in your setUp anyway.
The TestSuite looks like this:
class BeforeAfterSuite(unittest.TestSuite):
    def run(self, result):
        if len(self._tests) < 1:
            return unittest.TestSuite.run(self, result)
        first_test = self._tests[0]
        if "before" in dir(first_test):
            first_test.before()
        result = unittest.TestSuite.run(self, result)
        if "after" in dir(first_test):
            first_test.after()
        return result
For some slightly more fine-grained control I also created a custom TestLoader which makes sure the BeforeAfterSuite is only used to wrap test-method TestCase objects. It looks like this:
class BeforeAfterLoader(unittest.TestLoader):
    def loadTestsFromTestCase(self, testCaseClass):
        self.suiteClass = BeforeAfterSuite
        suite = unittest.TestLoader.loadTestsFromTestCase(self, testCaseClass)
        self.suiteClass = unittest.TestLoader.suiteClass
        return suite
Probably missing here is a try/except block around the before and after calls, which could otherwise fail all the test cases in the suite.
From the Python unittest documentation:
setUpClass():
A class method called before tests in an individual class run. setUpClass is called with the class as the only argument and must be decorated as a classmethod():
@classmethod
def setUpClass(cls):
    ...
New in version 2.7.
tearDownClass():
A class method called after tests in an individual class have run. tearDownClass is called with the class as the only argument and must be decorated as a classmethod():
@classmethod
def tearDownClass(cls):
    ...
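Applied to the question's scenario, this is a minimal sketch of that API (the actual hg commands are elided, and the use of tempfile for the directories is an assumption):

import shutil
import tempfile
import unittest

class MercurialTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Create the repository and its clone once for the whole class.
        cls.repo_dir = tempfile.mkdtemp()
        cls.clone_dir = tempfile.mkdtemp()
        # ... run `hg init` / `hg clone` here ...

    @classmethod
    def tearDownClass(cls):
        # Remove both working copies after the last test in the class.
        shutil.rmtree(cls.clone_dir)
        shutil.rmtree(cls.repo_dir)

The per-test state reset then stays in setUp/tearDown, as the question intends.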