I am trying to do a simple test of a model: I insert the model, retrieve it, and check that all the data I inserted is present. I expect this test to fail with a simple, blank model, but it passes. Is this a quirk of the testing framework that I have to live with? Can I set an option to prevent it from keeping refs to Python objects?
In the following, I expect it to fail at the attribute comparison (assertEquals(fetched.attr, act.attr)), but it does not. It fails at the ref comparison instead, as I insist the refs be different and they are not.
import unittest

from google.appengine.ext import ndb
from google.appengine.ext import testbed

class Action(ndb.Model): pass

class ActionTestCase(unittest.TestCase):
    def setUp(self):
        # First, create an instance of the Testbed class.
        self.testbed = testbed.Testbed()
        # Then activate the testbed, which prepares the service stubs for use.
        self.testbed.activate()
        self.testbed.init_datastore_v3_stub()
        self.testbed.init_memcache_stub()

    def tearDown(self):
        self.testbed.deactivate()

    def testFetchRedirectAttribute(self):
        act = Action()
        act.attr = 'test phrase'
        act.put()
        self.assertEquals(1, len(Action.query().fetch(2)))
        fetched = Action.query().fetch(2)[0]
        self.assertEquals(fetched.attr, act.attr)
        self.assertTrue(act != fetched)

if __name__ == '__main__':
    unittest.main()
Models are defined as being equal if all of their properties are equal. If you care about identity instead (you probably shouldn't...), then you can use assertIs in your test.
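For example, a minimal sketch of the identity check, reusing act and fetched from the test above (assertIsNot is the negated form of assertIs):

# assertTrue(act != fetched) compares by property values and fails;
# assertIsNot checks object identity instead and passes here, since
# act and fetched are distinct Python objects.
self.assertIsNot(act, fetched)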
As it turns out, storing refs is the behavior of the stubs. However, for TDD purposes we do need to check that a property is defined on the model. The simple way to do so is to use a keyword argument. If I write the test as follows, then it fails as expected.
def testFetchRedirectAttribute(self):
    act = Action(attr='test phrase')
    act.put()
That solved my immediate problem of having a failing test that I could code against.
We have distilled a situation down to the following:
import logging

import pytest
from django.core.management import call_command
from foo import bar

LOGGER = logging.getLogger(__name__)

@pytest.fixture(scope='session')
def django_db_setup(django_db_setup, django_db_blocker):
    LOGGER.info('ran call_command')
    with django_db_blocker.unblock():
        call_command('loaddata', 'XXX.json')

@pytest.mark.django_db(transaction=True)
def test_t1():
    assert len(bar.objects.all())

@pytest.mark.django_db(transaction=True)
def test_t2():
    assert len(bar.objects.all())
The test fixture XXX.json includes one bar. The first test (test_t1) succeeds. The second test (test_t2) fails. It appears that the transaction=True attribute does not result in the database being reinitialized with the data from the test fixture.
If TransactionTestCase from django.test is used instead, the initialization happens before every test method in the class and all tests succeed.
from django.test import TransactionTestCase
from foo import bar

class TestOne(TransactionTestCase):
    fixtures = ['XXX.json']

    def test_tc1(self):
        assert len(bar.objects.all())

    def test_tc2(self):
        assert len(bar.objects.all())
        # Delete every bar; the fixture is reloaded before the next test.
        objs = bar.objects.all()
        for obj in objs:
            obj.delete()

    def test_tc3(self):
        assert len(bar.objects.all())
I would appreciate any perspectives on why the pytest example doesn't result in a reinitialized database for the second test case.
The django_db_setup fixture is session-scoped, and therefore only runs once, at the beginning of the test session. When using transaction=True, the database gets flushed after every test (including the first), so any data added in django_db_setup is removed.
TransactionTestCase knows that it is using transactions, and because it is a Django construct it also knows that it needs to re-add the fixtures for each test. pytest in general is not aware of Django's needs, so it has no way of knowing that it needs to re-run your django_db_setup fixture; as far as pytest is concerned, that fixture only needs to run once, since it is session-scoped.
You have the following options:
Use a lower-scoped fixture, probably function scope as suggested in the comments; a minimal sketch follows. But this will probably have to be opt-in, and it will run within the test's transaction, so the data will be removed again once each test completes.
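For illustration, a function-scoped variant might look like this (the fixture name load_fixture_data is an assumption; tests still need the django_db mark to access the database):

import pytest
from django.core.management import call_command

@pytest.fixture
def load_fixture_data(django_db_blocker):
    # Runs once per test that requests it; under transaction=True the
    # data is flushed again on teardown, as described above.
    with django_db_blocker.unblock():
        call_command('loaddata', 'XXX.json')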
Write a fixture that is smart / Django-aware, and knows when it needs to re-populate the data by detecting that the test is using transactions. You also need to ensure that the database connection being used is not itself in a transaction. I have done this on Django 1.11 and it works fine, although it may need fixing after an upgrade. It looks something like this:
from unittest.mock import patch

from django.core.management import call_command
from django.db import DEFAULT_DB_ALIAS, ConnectionHandler
import pytest

_need_data_load = True

@pytest.fixture(autouse=True)
def auto_loaddata(django_db_blocker, request):
    global _need_data_load
    if _need_data_load:
        # Use a separate DB connection to ensure we're not in a transaction.
        con_h = ConnectionHandler()
        try:
            def_con = con_h[DEFAULT_DB_ALIAS]
            # We still need to unblock the database because that's a test-level
            # constraint which simply monkey-patches the database access methods
            # in django to prevent access.
            #
            # Also note here we need to use the correct connection object
            # rather than any default, and so I'm assuming the command
            # imports `from django.db import connection` so I can swap it.
            with django_db_blocker.unblock(), patch(
                'path.to.your.command.modules.connection', def_con
            ):
                call_command('loaddata', 'XXX.json')
        finally:
            con_h.close_all()
        _need_data_load = False

    using_transactional_db = (
        'transactional_db' in request.fixturenames
        or 'live_server' in request.fixturenames
    )
    if using_transactional_db:
        # If we're using a transactional db then the whole thing is dumped
        # on teardown, so flag that we should set it up again afterwards.
        _need_data_load = True
When I test using django.test.TransactionTestCase I have found that it uses the real database.
(django.test.TestCase works normally!)
I have confirmed this in my own project using the simple code:
class TestInventoryTransactions(TransactionTestCase):
    def setUp(self):
        print(Item.objects.all())

    def test1(self):
        pass

    def test2(self):
        pass
The output of this is
[...Bunch of items...]
[]
This shows, firstly, that the real database is being used rather than an empty test database, and secondly, that everything is removed from the database after the first test.
I really don't think this is the expected behaviour and don't see why it would be happening.
Using "manage.py test" does not have this problem. It only occurs when running the test file manually.
I'm making a server that lets clients upload and download data for different models. Is there some elegant way to handle the requests?
More precisely, I don't want to do something like this,
app = webapp.WSGIApplication([
    ('/my_upload_and_download_url/ModelA/(.*)', MyRequestHandlerForA),
    ('/my_upload_and_download_url/ModelB/(.*)', MyRequestHandlerForB),
    ('/my_upload_and_download_url/ModelC/(.*)', MyRequestHandlerForC),
])
run_wsgi_app(app)
since what I do inside the handler would all be the same. For example,
class MyRequestHandlerForX(webapp.RequestHandler):
    def get(self, key=None):
        # Return the instance with the designated key.
        pass

    def post(self, key=None):
        # Create/get the model instance, then iterate through the
        # property list of the instance and set the values.
        pass
The only difference among the handlers is that each one creates instances of a different model. The URLs are alike and the handlers are almost the same.
I checked this post about redirecting requests to other handlers, and I've also read some methods for creating an instance from a class name, but I don't think either of them is a good fit.
Does anyone have a good solution?
p.s. This is my first post here. If there is anything inappropriate please tell me, thanks.
How you do this depends largely on the details of your code in the request handler. You can do a fairly generic one like this:
class ModelHandler(webapp.RequestHandler):
    def get(self, kind, key):
        model = db.class_for_kind(kind)
        instance = model.get(key)
        # Do something with the instance - eg, print it out

    def post(self, kind, key):
        model = db.class_for_kind(kind)
        instance = model.create_from_request(self.request)

application = webapp.WSGIApplication([
    ('/foo/([^/]+)/([^/]+)', ModelHandler),
])

def main():
    run_wsgi_app(application)

if __name__ == '__main__':
    main()
This assumes you define a create_from_request class method on each model class. You probably don't want to do it exactly this way, as it tightly couples the model definitions with the forms used to input them; instead, you probably want to store a mapping of kind name (or class) to handler function, or generate your forms and creation code entirely automatically by reflecting on the properties of the class. Since you haven't specified what it is about doing this that you're unsure of, it's hard to be more specific.
Also note the inclusion of a main() and other boilerplate above; while it will work the way you've pasted it, adding a main is substantially more efficient, as it allows the App Engine runtime to avoid having to evaluate your module on every request.
In your case I'd probably just have everything hit the same url path, and put the specifics in the GET parameters, like /my_upload_and_download_url?model=modelA.
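A quick sketch of that variant (the model_lookup name-to-class mapping is an assumption here; the answer below builds the same thing):

class MyRequestHandler(webapp.RequestHandler):
    def get(self):
        # Read the model name from the query string, e.g. ?model=ModelA
        kind = self.request.get('model')
        model = model_lookup.get(kind)
        if model is None:
            self.error(404)
            return
        # ... fetch and return instances of `model` here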
You can also use webapp2 (http://webapp-improved.appspot.com/guide/app.html) which has a bunch of url routing support.
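For instance, a minimal webapp2 routing sketch (the route template is illustrative):

import webapp2

app = webapp2.WSGIApplication([
    # <model_name> and <key:.*> are captured from the path and passed
    # to the handler methods as keyword arguments.
    webapp2.Route(r'/my_upload_and_download_url/<model_name>/<key:.*>',
                  handler=MyRequestHandler),
])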
You could parse out the url path and do a look up, like this:
import urlparse

model_lookup = {'ModelA': ModelA, 'ModelB': ModelB, 'ModelC': ModelC}

class MyRequestHandler(webapp.RequestHandler):
    def get(self, key=None):
        url = urlparse.urlparse(self.request.uri)
        # Strip the common prefix, leaving e.g. 'ModelA/<key>'; the model
        # name is the first remaining path segment.
        path = url.path.replace('/my_upload_and_download_url/', '')
        model = model_lookup[path.split('/')[0]]
        ...
Which allows you to use the same class for each path:
app = webapp.WSGIApplication([
    ('/my_upload_and_download_url/ModelA/(.*)', MyRequestHandler),
    ('/my_upload_and_download_url/ModelB/(.*)', MyRequestHandler),
    ('/my_upload_and_download_url/ModelC/(.*)', MyRequestHandler),
])
run_wsgi_app(app)
I am developing a CherryPy application and I want to write some automated tests for it. I chose nosetests for this. The application uses sqlalchemy as the db backend, so I need the fixture package to provide fixed datasets. I also want to do webtests. Here is how I put it all together:
I have a helper function init_model(test=False) in the file where all the models are created. It connects to the production database, or to the test database if test == True or cherrypy.request.app.test == True, and calls create_all.
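For reference, a sketch of what that helper might look like; the URL constants and the meta module are assumptions based on the description:

from sqlalchemy import create_engine

def init_model(test=False):
    # TEST_DB_URL / PROD_DB_URL are hypothetical config values.
    url = TEST_DB_URL if test else PROD_DB_URL
    meta.engine = create_engine(url)
    meta.metadata.create_all(meta.engine)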
Then I have created a base class for tests like this:
class BaseTest(DataTestCase):
    def __init__(self):
        init_model(True)
        application.test = True
        self.app = TestApp(application)
        self.fixture = SQLAlchemyFixture(env=models, engine=meta.engine, style=NamedDataStyle())
        self.datasets = (
            # all the datasets go here
        )
And now I write my tests by creating child classes of BaseTest and calling self.app.some_method().
This is my first time writing tests in Python, and all this seems very complicated. I want to know whether I am using the mentioned packages as their authors intended and whether it is overcomplicated.
That looks mostly like normal testing glue for a system of any size. In other words, it's not overly-complicated.
In fact, I'd suggest slightly more complexity in one respect: I think you're going to find setting up a new database in each child test class to be really slow. It's more common to at least set up all your tables once per run instead of once per class. Then, you either have each test method create all the data it needs for its own sake, and/or you run each test case in a transaction and roll it all back in a finally: block.
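A minimal sketch of that pattern, assuming the meta module from your BaseTest exposes the engine and metadata (names are illustrative, not the fixture package's API):

import unittest
from sqlalchemy.orm import sessionmaker

class TransactionalTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Build all tables once per run instead of once per class.
        meta.metadata.create_all(meta.engine)

    def setUp(self):
        # Each test runs inside its own transaction...
        self.connection = meta.engine.connect()
        self.trans = self.connection.begin()
        self.session = sessionmaker(bind=self.connection)()

    def tearDown(self):
        # ...which is rolled back afterwards, undoing the test's data.
        self.session.close()
        self.trans.rollback()
        self.connection.close()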
I have the following django test case that is giving me errors:
class MyTesting(unittest.TestCase):
    def setUp(self):
        self.u1 = User.objects.create(username='user1')
        self.up1 = UserProfile.objects.create(user=self.u1)

    def testA(self):
        ...

    def testB(self):
        ...
When I run my tests, testA passes successfully, but before testB starts I get the following error:
IntegrityError: column username is not unique
It's clear that it is trying to create self.u1 before each test case and finding that it already exists in the Database. How do I get it to properly clean up after each test case so that subsequent cases run correctly?
setUp and tearDown methods on unittest test cases are called before and after each test method. Define a tearDown method that deletes the created user.
class MyTesting(unittest.TestCase):
    def setUp(self):
        self.u1 = User.objects.create(username='user1')
        self.up1 = UserProfile.objects.create(user=self.u1)

    def testA(self):
        ...

    def tearDown(self):
        self.up1.delete()
        self.u1.delete()
I would also advise creating user profiles with a post_save signal, unless you really want to create the user profile manually for each user.
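A minimal sketch of that approach, assuming the User and UserProfile models from the question:

from django.contrib.auth.models import User
from django.db.models.signals import post_save

def create_profile(sender, instance, created, **kwargs):
    # Create a profile automatically whenever a new user is first saved.
    if created:
        UserProfile.objects.create(user=instance)

post_save.connect(create_profile, sender=User)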
Follow-up on delete comment:
From Django docs:
When Django deletes an object, it emulates the behavior of the SQL constraint ON DELETE CASCADE -- in other words, any objects which had foreign keys pointing at the object to be deleted will be deleted along with it.
In your case, the user profile points at the user, so you should delete the user first; the profile will then be deleted along with it.
If you want django to automatically flush the test database after each test is run then you should extend django.test.TestCase, NOT django.utils.unittest.TestCase (as you are doing currently).
It's good practice to flush the database after each test so you can be extra sure your tests are consistent, but note that your tests will run slower with this additional overhead.
See the WARNING section in the "Writing Tests" Django Docs.
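For example, the only change needed is the base class; a sketch reusing the setUp from the question:

from django.test import TestCase

class MyTesting(TestCase):
    # django.test.TestCase wraps each test in a transaction and rolls it
    # back afterwards, so no manual tearDown cleanup is needed here.
    def setUp(self):
        self.u1 = User.objects.create(username='user1')
        self.up1 = UserProfile.objects.create(user=self.u1)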
Precisely, setUp exists for the very purpose of running once before each test case.
The converse method, the one that runs once after each test case, is named tearDown: that's where you delete self.u1 and so on (presumably by just calling self.u1.delete(), unless you have supplementary specialized clean-up requirements in addition to deleting the object).