I have the following django test case that is giving me errors:
class MyTesting(unittest.TestCase):
    def setUp(self):
        self.u1 = User.objects.create(username='user1')
        self.up1 = UserProfile.objects.create(user=self.u1)

    def testA(self):
        ...

    def testB(self):
        ...
When I run my tests, testA will pass successfully, but before testB starts, I get the following error:
IntegrityError: column username is not unique
It's clear that it is trying to create self.u1 before each test case and finding that it already exists in the Database. How do I get it to properly clean up after each test case so that subsequent cases run correctly?
The setUp and tearDown methods on unittest.TestCase are called before and after each test, respectively. Define a tearDown method which deletes the created user:
class MyTesting(unittest.TestCase):
    def setUp(self):
        self.u1 = User.objects.create(username='user1')
        self.up1 = UserProfile.objects.create(user=self.u1)

    def testA(self):
        ...

    def tearDown(self):
        self.up1.delete()
        self.u1.delete()
I would also advise creating user profiles with a post_save signal, unless you really want to create the profile manually for each user.
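A minimal sketch of that signal approach, assuming your UserProfile model has a user foreign key (the receiver name here is illustrative):

from django.contrib.auth.models import User
from django.db.models.signals import post_save

def create_profile(sender, instance, created, **kwargs):
    # Only create the profile when the User is first created
    if created:
        UserProfile.objects.create(user=instance)

post_save.connect(create_profile, sender=User)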
Follow-up on delete comment:
From the Django docs:

When Django deletes an object, it emulates the behavior of the SQL constraint ON DELETE CASCADE -- in other words, any objects which had foreign keys pointing at the object to be deleted will be deleted along with it.
In your case, the user profile points at the user, so deleting the user will delete the profile along with it.
If you want Django to automatically flush the test database after each test is run, you should extend django.test.TestCase, NOT django.utils.unittest.TestCase (as you are doing currently).
It's good practice to flush the database after each test so you can be extra sure your tests are consistent, but note that your tests will run slower with this additional overhead.
See the WARNING section in the "Writing Tests" Django Docs.
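For reference, the only change needed is the base class; a minimal sketch:

from django.test import TestCase  # instead of unittest.TestCase

class MyTesting(TestCase):
    def setUp(self):
        self.u1 = User.objects.create(username='user1')
        self.up1 = UserProfile.objects.create(user=self.u1)
    # No tearDown needed: django.test.TestCase runs each test inside a
    # transaction that is rolled back when the test finishes (on
    # databases that support transactions).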
Precisely, setUp exists for the very purpose of running once before each test case.
The converse method, which runs once after each test case, is named tearDown: that's where you delete self.u1 etc. (presumably just by calling self.u1.delete(), unless you have supplementary specialized clean-up requirements in addition to deleting the object).
Related
When I run this code in the Python interpreter, it returns True:
>>> from movies.models import Movie
>>> movie_list = Movie.objects.all()
>>> bool(movie_list)
True
When I run my test case, python3 manage.py test movies, it fails:
from django.test import TestCase
from .models import Movie

class QuestionMethodTests(TestCase):
    def test_movie_list_empty(self):
        movie_list = Movie.objects.all()
        self.assertEqual(bool(movie_list), True)
What am I missing? Shouldn't the test pass?
I see. Does that mean the test cases only test the code but can't use any of the actual database content in their tests?
By default, no, and you don't want to mess with the actual DB anyway.
There is a usual way to provide the initial objects for the tests (the actual source can differ, e.g. loading from a file):
from django.test import TestCase
from .models import Movie

class QuestionMethodTests(TestCase):
    def setUp(self):
        # You can create your movie objects here
        Movie.objects.create(title='Forest Gump', ...)

    def test_movie_list_empty(self):
        movie_list = Movie.objects.all()
        self.assertEqual(bool(movie_list), True)
The TestCase class also provides a setUpTestData classmethod if you fancy that: https://docs.djangoproject.com/en/1.8/topics/testing/tools/#django.test.TestCase.setUpTestData
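A rough equivalent using setUpTestData (available from Django 1.8), which creates the data once per class instead of once per test:

class QuestionMethodTests(TestCase):
    @classmethod
    def setUpTestData(cls):
        # Created once for the whole class, restored between tests
        Movie.objects.create(title='Forest Gump')

    def test_movie_list_not_empty(self):
        self.assertTrue(Movie.objects.exists())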
PS: the name test_movie_list_empty sounds odd, because it seems to test that the movie list is NOT empty.
Because in tests you are using a temporary database which doesn't have the objects:

Tests that require a database (namely, model tests) will not use your "real" (production) database. Separate, blank databases are created for the tests.

Regardless of whether the tests pass or fail, the test databases are destroyed when all the tests have been executed.
It's dangerous to use the real database for tests, especially since tests should be reproducible, on other machines too. You should use fixtures for tests. Take a look at factory_boy.
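For instance, a minimal factory_boy sketch for the Movie model from the earlier question (the field name is assumed):

import factory
from movies.models import Movie

class MovieFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Movie

    title = factory.Sequence(lambda n: 'Movie %d' % n)

Calling MovieFactory() in a test then creates and saves a Movie with a unique title, while MovieFactory.build() returns an unsaved instance.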
I'm creating some initial tests as I play with django-revisions. I'd like to be able to test that some of my api and view code correctly saves revisions. However, I can't get even a basic test to save a deleted version.
import reversion
from django.db import transaction
from django import test

from myapp import models

class TestRevisioning(test.TestCase):
    fixtures = ['MyModel']

    def testDelete(self):
        object1 = models.MyModel.objects.first()
        with transaction.atomic():
            with reversion.create_revision():
                object1.delete()
        self.assertEqual(reversion.get_deleted(models.MyModel).count(), 1)
This fails when checking the length of the deleted QuerySet with:
AssertionError: 0 != 1
My hypothesis is that I need to create the initial revisions of my model (do the equivalent of ./manage.py createinitialrevisions). If this is the issue, how do I create the initial revisions in my test? If that isn't the issue, what else can I try?
So, the solution is pretty simple. I saved my object under revision control.
# imports same as question

class TestRevisioning(test.TestCase):
    fixtures = ['MyModel']

    def testDelete(self):
        object1 = models.MyModel.objects.first()
        # set up the initial revision
        with reversion.create_revision():
            object1.save()
        # continue with the remainder of the test as per the question
        # ... etc.
I tried to override _fixture_setup(), but that didn't work. Another option would be to loop over the MyModel objects in the __init__(), saving them under reversion control.
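That loop would look something like this, e.g. in setUp (a sketch using the names from the question):

def setUp(self):
    # Give each fixture-loaded object an initial revision, mirroring
    # what ./manage.py createinitialrevisions does
    for obj in models.MyModel.objects.all():
        with reversion.create_revision():
            obj.save()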
Is 'MyModel' the name of the file with your fixtures?
If not, what you are probably missing is the data creation.
You can use fixtures (a file, not the name of your model) or factories.
There's a whole chapter in the Django documentation on providing initial data for models: https://docs.djangoproject.com/en/1.7/howto/initial-data/
Hope it helps.
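For example, if the data lives in myapp/fixtures/mymodel.json (file name illustrative), the declaration would be:

class TestRevisioning(test.TestCase):
    # Django looks this file up in <app>/fixtures/
    fixtures = ['mymodel.json']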
I'm trying to mock out a save method call on a Django models.Model.
I'm using Mock as my mocking library.
I'm testing a function in the file house_factory.py, which is located at apps.deps.house_factory.
house_factory.py:
from apps.market.models import House

def create_house(location, date, price):
    house = House(id=None, date=date, price=price)
    house.save()
    # calculate some stuff and further expand the house instance,
    # for example house.tag.add("some-tag")
    # save after the calculations
    house.save()
I'd like to mock out the House model.
class HouseModelMock(mock.Mock):
    def save(self):
        pass
The test method, which is part of a unittest.TestCase class:
@patch('apps.deps.house_factory.House', new_callable=HouseModelMock)
def create_house_test(self, MockedHouse):
    """ Constants """
    DAYS_FROM_TODAY = 55
    DATE = datetime.date.today() + datetime.timedelta(days=DAYS_FROM_TODAY)
    PRICE = 250000
    # A Location is also a Django model; I'm using factory_boy here
    # for building a 'mocked' location
    location = LocationFactory.build()
    create_house(location, DATE, PRICE)
    MockedHouse.assert_called_with(None, DATE, PRICE)
    MockedHouse.save.assert_called_with()
If I run this test I get a traceback ending in:

    return self.__call__(*arg, **kw)
MemoryError
This is one of my first attempts to get serious with Django and testing, so maybe I'm setting things up wrong for mocking out a database call.
Any help is appreciated,
Jonas.
"This is one of my first attempts to get serious with django and testing" ... you don't need to mock database saves as Django automatically creates a test DB to run your test suite against whenever you run python manage.py test. Then simply assert the values stored in your DB.
Ideally mock is used to patch own tests (and logic), rather than the default Django ones.
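For example, a sketch of testing create_house against the test database directly (names and values taken from the question; LocationFactory comes from the question's factories):

import datetime

from django.test import TestCase

from apps.deps.house_factory import create_house
from apps.market.models import House

class CreateHouseTest(TestCase):
    def test_create_house(self):
        date = datetime.date.today() + datetime.timedelta(days=55)
        location = LocationFactory.build()  # import from your factories module
        create_house(location, date, 250000)
        # Assert against the test database instead of mocking House
        self.assertEqual(House.objects.count(), 1)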
Tip: use an in-memory DB for unit tests, such as SQLite. Put the following in your settings.py file:

import sys

if 'test' in sys.argv:
    DATABASES['default']['ENGINE'] = 'django.db.backends.sqlite3'
This will significantly speed up your test run.
I am developing a CherryPy application and I want to write some automated tests for it. I chose nosetests for this. The application uses sqlalchemy as its db backend, so I need the fixture package to provide fixed datasets, and I also want to do webtests. Here is how I set it all together:
I have a helper function init_model(test=False) in the file where all the models are created. It connects to the production database, or to the test database (if test == True or cherrypy.request.app.test == True), and calls create_all.
Then I have created a base class for tests like this:
class BaseTest(DataTestCase):
    def __init__(self):
        init_model(True)
        application.test = True
        self.app = TestApp(application)
        self.fixture = SQLAlchemyFixture(env=models, engine=meta.engine, style=NamedDataStyle())
        self.datasets = (
            # all the datasets go here
        )
And now I do my tests by creating child classes of BaseTest and calling self.app.some_method()
This is my first time doing tests in Python and all of this seems very complicated. I want to know if I am using the mentioned packages as their authors intended, and whether it's overcomplicated.
That looks mostly like normal testing glue for a system of any size. In other words, it's not overly-complicated.
In fact, I'd suggest slightly more complexity in one respect: I think you're going to find setting up a new database in each child test class to be really slow. It's more common to at least set up all your tables once per run instead of once per class. Then, you either have each test method create all the data it needs for its own sake, and/or you run each test case in a transaction and roll it all back in a finally: block.
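A sketch of the transaction-per-test variant with SQLAlchemy, assuming the meta.engine from the question; note that for this to isolate tests, the session under test has to be bound to this connection:

class TransactionalTest(BaseTest):
    def setUp(self):
        # Start a transaction before each test
        self.connection = meta.engine.connect()
        self.trans = self.connection.begin()

    def tearDown(self):
        # Roll everything back so the next test sees a clean database
        try:
            self.trans.rollback()
        finally:
            self.connection.close()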
In my application, I want to create entries in certain tables when a new user signs up. For instance, I want to create a userprofile which will then reference their company and some other records for them. I implemented this with a post_save signal:
def callback_create_profile(sender, **kwargs):
    # check if we are creating a new User
    if kwargs.get('created', True):
        user = kwargs.get('instance')
        company = Company.objects.create(name="My Company")
        employee = Employee.objects.create(company=company,
                                           name_first=user.first_name,
                                           name_last=user.last_name)
        profile = UserProfile.objects.create(user=user, employee=employee,
                                             partner=partner)

# Register the callback
post_save.connect(callback_create_profile, sender=User, dispatch_uid="core.models")
This works well when run. I can use the admin to create a new user, and the other three tables get entries with sensible values as well. (Except, that is, the employee, since user.first_name and user.last_name aren't filled out in the admin's form when it saves. I still don't understand why it is done like that.)
The problem came when I ran my test suite. Before this, I had created a bunch of fixtures to create these entries in the tables. Now I get an error that states:
IntegrityError: duplicate key value violates unique constraint "core_userprofile_user_id_key"
I think this is because I have already created company, employee and profile records in the fixture with id "1", and now the post_save signal is trying to recreate them.
My questions are: can I disable this post_save signal when running fixtures? Can I detect that I am running as part of the test suite and skip creating these records? Should I delete these records from the fixtures now (although the signal only sets defaults, not the values I want to test against)? Why doesn't the fixture-loading code just overwrite the created records?
How do people do this?
I think I figured out a way to do this. There is a 'raw' parameter in the kwargs passed in along with signals so I can replace my test above with this one:
if (kwargs.get('created', True) and not kwargs.get('raw', False)):
Raw is used when loaddata is running. This seems to do the trick.
It is mentioned here: http://code.djangoproject.com/ticket/13299
It would be nice if this were documented: http://docs.djangoproject.com/en/1.2/ref/signals/#django.db.models.signals.post_save
This is an old question, but the solution I've found most straightforward is to use the 'raw' argument, passed by loaddata, and decorate the listener functions, for example:
from functools import wraps

def disable_for_loaddata(signal_handler):
    @wraps(signal_handler)
    def wrapper(*args, **kwargs):
        if kwargs.get('raw', False):
            print("Skipping signal for %s %s" % (args, kwargs))
            return
        signal_handler(*args, **kwargs)
    return wrapper
and then
@disable_for_loaddata
def callback_create_profile(sender, **kwargs):
    # check if we are creating a new User
    ...
Simple solution: add this at the beginning of your post_save function:

if kwargs.get('raw', False):
    return False

This will cause the function to exit early when loading a fixture.
See: https://docs.djangoproject.com/en/dev/ref/signals/#post-save
I faced a similar problem in one of my projects. In my case the signals were slowing down the tests as well, and I ended up abandoning signals in favour of overriding the Model.save() method instead.
In your case, however, I don't think it makes sense to achieve this by overriding any save() methods. Instead, you might want to try the following. Warning: I only tried it once; it seemed to work, but it is not thoroughly tested.
1. Create your own test runner.
2. Before you load the fixtures, disconnect the callback_create_profile function from the User class's post_save signal (see the sketch after this list).
3. Let the fixtures load.
4. Connect the function back to the signal.
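In code, steps 2-4 would look roughly like this inside the custom runner (the fixture name is hypothetical):

from django.contrib.auth.models import User
from django.core.management import call_command
from django.db.models.signals import post_save

# Step 2: disconnect so fixture loading doesn't fire the handler
post_save.disconnect(callback_create_profile, sender=User, dispatch_uid="core.models")
try:
    # Step 3: load the fixtures
    call_command('loaddata', 'core_test_data.json')  # hypothetical fixture
finally:
    # Step 4: reconnect for the rest of the suite
    post_save.connect(callback_create_profile, sender=User, dispatch_uid="core.models")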