I need to add tests to my Django project, and I need to create test data before the tests execute. I read about setting up test data in this question. I can create data in setUpClass for all the tests in a class, but creating my complete test data is time consuming, so I want to do it once for all test classes. Is there any approach to set up data once for all test classes?
I found my answer; hope it can help someone else. Based on the Django docs:
A test runner is a class defining a run_tests() method. Django ships with a DiscoverRunner class that defines the default Django testing behavior. This class defines the run_tests() entry point, plus a selection of other methods that are used by run_tests() to set up, execute and tear down the test suite.
In the case of this question, there are two helpful methods in this class, setup_databases and teardown_databases, so we can override them to initialize data for all test classes:
from django.test.runner import DiscoverRunner as BaseRunner

class MyMixinRunner(object):
    def setup_databases(self, *args, **kwargs):
        # Let Django create and migrate the test databases first
        temp_return = super(MyMixinRunner, self).setup_databases(*args, **kwargs)
        # do something: create data shared by all test classes
        return temp_return

    def teardown_databases(self, *args, **kwargs):
        # do something: clean up the shared data if necessary
        return super(MyMixinRunner, self).teardown_databases(*args, **kwargs)

class MyTestRunner(MyMixinRunner, BaseRunner):
    pass
After defining the test runner class, we need to add TEST_RUNNER to settings:
TEST_RUNNER = 'path.to.MyTestRunner'
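For illustration, here is a minimal sketch of what the "do something" part could be, assuming a hypothetical Author model in an app called myapp (adjust the names to your project). Note that plain TestCase tests will see this data (each test rolls back to this state), while TransactionTestCase flushes the database and wipes it for later tests:

from django.test.runner import DiscoverRunner as BaseRunner

class SharedDataRunner(BaseRunner):
    def setup_databases(self, *args, **kwargs):
        config = super(SharedDataRunner, self).setup_databases(*args, **kwargs)
        # Import late so the app registry is fully loaded
        from myapp.models import Author
        # This row is visible to every test class in the suite
        Author.objects.create(name='shared author')
        return config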
A project I'm working on uses Django for basically everything. When writing a model, I found it necessary to override the save() method to spin off a task to be run by a worker:
class MyModel(models.Model):
    def _start_processing(self):
        my_task.apply_async(args=['arg1', ..., 'argn'])

    def save(self, *args, **kwargs):
        """Saves the model object to the database"""
        # do some stuff
        self._start_processing()
        # do some more stuff
        super(MyModel, self).save(*args, **kwargs)
In my tests, I want to exercise the parts of the save override designated by # do some stuff and # do some more stuff, but I don't want to run the task. To do this, I believe I should be using mocking (which I'm very new to).
In my test class, I've set it up to skip the task invocation:
class MyModelTests(TestCase):
    def setUp(self):
        # Mock the _start_processing() method. Ha!
        @patch('my_app.models.MyModel._start_processing')
        def start_processing(self, mock_start_processing):
            print('This is when the task would normally be run, but this is a test!')
        # Create a model to test with
        self.test_object = MyModelFactory()
Since the factory creates and saves an instance of the model, I need to have replaced the _start_processing() method before that happens. The above doesn't seem to be working (the task runs and fails). What am I missing?
First of all, you have to apply the decorator not to the function you want to use as a replacement, but to the "scope" in which your mock should work. So, for example, if you need to mock _start_processing for the whole MyModelTests class, place the decorator before the class definition. If only for one test method, wrap only that test method with it.
Secondly, define that start_processing function somewhere outside the class, and pass @patch('my_app.models.MyModel._start_processing', new=start_processing), so patch knows what to use as a replacement for the actual method. But be careful to match the actual method's signature, so use just:
def start_processing(self):
    print('This is when the task would normally be run, but this is a test!')
Thirdly, you will have to add a mock_start_processing argument to each test method inside this class (the test_... methods), just because mocking works like this :) (strictly speaking, this applies when patch creates the mock for you; when you pass new= as above, no extra argument is injected).
And finally, you have to be aware of the target you are patching. Your current my_app.models.MyModel._start_processing could be broken. You have to patch the class using the path where it is USED, not where it is DEFINED. So, if you are creating objects with MyModelFactory inside the TestCase, and MyModelFactory lives in my_app.factories and imports MyModel as from .models import MyModel, you will have to use @patch('my_app.factories.MyModel._start_processing'), not 'my_app.models.MyModel._start_processing'.
Hopefully this helps.
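Putting these points together, a minimal sketch of how the corrected test could look (assuming Python 3's unittest.mock and a made-up test name test_save_skips_task; since new= is given, patch injects no extra mock argument here):

from unittest.mock import patch
from django.test import TestCase
from my_app.factories import MyModelFactory

def start_processing(self):
    print('This is when the task would normally be run, but this is a test!')

# As a class decorator, patch only wraps methods whose names start with
# "test", so the factory call lives in the test itself, not in setUp
@patch('my_app.factories.MyModel._start_processing', new=start_processing)
class MyModelTests(TestCase):
    def test_save_skips_task(self):
        test_object = MyModelFactory()  # save() runs, the real task does not
        self.assertIsNotNone(test_object.pk)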
There is a Serializer FooSerialyzer with the field bar:
class FooSerialyzer(serializers.HyperlinkedModelSerializer):
    bar = serializers.CharField()

    def __init__(self, **kwargs):
        # some custom fields logic
Problem: now there are some unit tests in the project. The problem is that one test fails, but only if it is run after other tests.
The tests are too complex, so there is really no point in posting them here. I have put a debugger inside __init__ and here is what I get.
When the test is run alone:
(Pdb) self
FooSerialyzer():
    bar = CharField()
When the test is run after other tests:
(Pdb) self
FooSerialyzer():
Question: what could cause the Serialyzer to not have fields when run after other tests? The tests are located in separate files and use separate setups; I can't even imagine how they could influence one another.
I have my test class as follows:
class MyTestCase(django.test.TestCase):
    def setUp(self):
        # set up stuff common to ALL the tests

    @my_test_decorator('arg1')
    @my_test_decorator('arg2')
    def test_something_1(self):
        # run test

    def test_something_2(self):
        # run test

    @my_test_decorator('arg1')
    @my_test_decorator('arg2')
    def test_something_3(self):
        # run test

    ...

    def test_something_N(self):
        # run test
Now, @my_test_decorator is a decorator I made that sets up some changes to the test environment at runtime and undoes them after the test finishes. I need to apply it to a specific set of test methods only, not ALL of them, while keeping the setUp common to all the tests. For the specific tests, I'd like to do something like this:
def setUp(self):
    # set up stuff common to ALL the tests
    tests_to_decorate = ['test_something_1', 'test_something_3']
    decorator_args = ['arg1', 'arg2']
    if self._testMethodName in tests_to_decorate:
        method = getattr(self, self._testMethodName)
        for arg in decorator_args:
            method = my_test_decorator(arg)(method)
        setattr(self, self._testMethodName, method)
I mean, without repeating the decorators all over the file. But it seems that the test runner retrieves the set of tests to run before the test class is even instantiated, so doing this in the __init__ or setUp methods is of no use.
It would be nice to have a way to accomplish this without:
having to write my own test runner
needing to split the tests in two or more TestCase subclasses
repeating setUp in different classes
creating a class that hosts the setUp method and have the TestCase subclasses inherit from such class
Is this even possible?
Thanks!! :)
I have a JSON parser library (ijson) with a test suite using unittest. The library actually has several parsing implementations — "backends" — in the form of modules with an identical API. I want to automatically run the test suite several times, once for each available backend. My goals are:
I want to keep all tests in one place as they are backend agnostic.
I want the name of the currently used backend to be visible in some fashion when a test fails.
I want to be able to run a single TestCase or a single test, as unittest normally allows.
So what's the best way to organize the test suite for this? Write a custom test runner? Let TestCases load backends themselves? Imperatively generate separate TestCase classes for each backend?
By the way, I'm not married to the unittest library in particular, and I'm open to trying another one if it solves the problem. But unittest is preferable since I already have the test code in place.
One common way is to group all your tests together in one class with an abstract method that creates an instance of the backend (if you need to create multiple instances in a test), or expects setUp to create an instance of the backend.
You can then create subclasses that create the different backends as needed.
If you are using a test loader that automatically detects TestCase subclasses, you'll probably need to make one change: don't make the common base class a subclass of TestCase; instead, treat it as a mixin, and make the backend classes subclass both TestCase and the mixin.
For example:
class BackendTests:
    def make_backend(self):
        raise NotImplementedError

    def test_one(self):
        backend = self.make_backend()
        # perform a test on the backend

class FooBackendTests(unittest.TestCase, BackendTests):
    def make_backend(self):
        # Create an instance of the "foo" backend:
        return foo_backend

class BarBackendTests(unittest.TestCase, BackendTests):
    def make_backend(self):
        # Create an instance of the "bar" backend:
        return bar_backend
When building a test suite from the above, you will have independent test cases FooBackendTests.test_one and BarBackendTests.test_one that test the same feature on the two backends.
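With this layout, the third goal comes for free: assuming the tests live in a module named test_backends (a hypothetical name), python -m unittest test_backends.FooBackendTests.test_one still selects a single test on a single backend, and the backend is visible in the class name whenever a test fails.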
I took James Henstridge's idea of a mixin class holding all the tests, but the actual test cases are then generated imperatively, since backends may fail on import, in which case we don't want to test them:
import unittest
from importlib import import_module

class BackendTests(object):
    def test_something(self):
        # using self.backend
        pass

# Generating real TestCase classes for each importable backend
for name in ['backend1', 'backend2', 'backend3']:
    try:
        classname = '%sTest' % name.capitalize()
        locals()[classname] = type(
            classname,
            (unittest.TestCase, BackendTests),
            {'backend': import_module('backends.%s' % name)},
        )
    except ImportError:
        pass
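This works because the loop runs at module level, where locals() is the module namespace, so the generated classes (Backend1Test and so on) become real module-level names that the unittest loader can discover; inside a function, writing through locals() would have no such effect.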
I have a group of test cases that all should have exactly the same test done, along the lines of "Does method x return the name of an existing file?"
I thought that the best way to do it would be a base class deriving from TestCase that they all share, and simply add the test to that class. Unfortunately, the testing framework still tries to run the test for the base class, where it doesn't make sense.
class SharedTest(TestCase):
    def x(self):
        ...do test...

class OneTestCase(SharedTest):
    ...my tests are performed, and 'SharedTest.x()'...
I tried to hack in a check to simply skip the test if it's called on an object of the base class rather than a derived class, like this:
class SharedTest(TestCase):
    def x(self):
        if type(self) != type(SharedTest()):
            ...do test...
        else:
            pass
but got this error:
ValueError: no such test method in <class 'tests.SharedTest'>: runTest
First, I'd like any elegant suggestions for doing this. Second, though I don't really want to use the type() hack, I would like to understand why it's not working.
You could use a mixin, taking advantage of the fact that the test runner only runs tests on classes inheriting from unittest.TestCase (which Django's TestCase inherits from). For example:
class SharedTestMixin(object):
    # This class will not be executed by the test runner, because it
    # inherits from object, not unittest.TestCase. If it were collected,
    # assertEquals would fail, as it is not a method that exists on object.
    def test_common(self):
        self.assertEquals(1, 1)

class TestOne(TestCase, SharedTestMixin):
    def test_something(self):
        pass
    # test_common is also run

class TestTwo(TestCase, SharedTestMixin):
    def test_another_thing(self):
        pass
    # test_common is also run
For more information on why this works, search for Python method resolution order and multiple inheritance.
I faced a similar problem. I couldn't prevent the test method in the base class from being executed, but I ensured that it did not exercise any actual code. I did this by checking for an attribute and returning immediately if it was set. The attribute was only set to True for the base class, so the tests ran everywhere except the base class.
class SharedTest(TestCase):
    def setUp(self):
        self.do_not_run = True

    def test_foo(self):
        if getattr(self, 'do_not_run', False):
            return
        # Rest of the test body.

class OneTestCase(SharedTest):
    def setUp(self):
        super(OneTestCase, self).setUp()
        self.do_not_run = False
This is a bit of a hack. There is probably a better way to do this but I am not sure how.
Update
As sdolan says, a mixin is the right way. Why didn't I see that before?
Update 2
(After reading the comments.) It would be nice if (1) the superclass method could avoid the hackish if getattr(self, 'do_not_run', False): check, and (2) the number of tests were counted accurately.
There is a possible way to do this. Django picks up and executes all test classes in tests, be it tests.py or a package with that name. If the test superclass is declared outside the tests module, this won't happen, but it can still be inherited by test classes. For instance, SharedTest can live in app.utils and then be used by the test cases. This is a cleaner version of the above solution.
# module app.utils.test
class SharedTest(TestCase):
    def test_foo(self):
        # Rest of the test body.

# module app.tests
from app.utils import test

class OneTestCase(test.SharedTest):
    ...
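Since SharedTest now lives outside the tests module, the runner never collects it directly; only OneTestCase (with its inherited test_foo) is discovered. That removes the hackish getattr guard and makes the test count accurate, addressing both points from Update 2.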