I've started looking into unit testing with Google App Engine, and it seems a bit tricky from what I've read, since you can't (and aren't supposed to) run your tests against the datastore.
I've written an abstract class to emulate a datastore model class. It works pretty nicely, returning mock data on get, all, fetch and so on (only tried on a small scale), producing db.Model-like results.
The one thing I haven't found a satisfying solution for is how to decide which model class to use: the mock-ups for unit tests, and the actual db.Model when the webapp is running.
My current solution looks like this in my .py containing all db.Models:
if 'SERVER_SOFTWARE' in os.environ:
    class dbTest(db.Model):
        content = db.StringProperty()
        comments = db.ListProperty(str)
else:
    class dbTest(Abstract):
        content = 'Test'
        comments = ['test1', 'test2']
It kinda feels like it could break any minute. Is this the way to go, or could one combine these into a single class that uses db.Model when it is available and falls back to the mock-up otherwise?
Check out gaetestbed (docs). It stubs out the datastore (and all the other services like memcache) and makes testing from the command line very easy. It ensures a clean environment before every test runs.
I personally think it is nicer than the other solutions I have seen.
Instead of messing with your models.py I would go with gaeunit.
I've used it with success in a couple of projects and the features I like are:
Just one file to add to your project (gaeunit.py) and you are almost done
Gaeunit isolates the test datastore from the development store (i.e. tests don't pollute your development db)
Since you can't (and aren't supposed to) run your tests against the datastore.
This is not true. You can and should use the local datastore implementation as a test harness - there's no reason to waste your time creating mocks for every datastore behaviour. You can use a tool such as noseGAE or gaeunit, as suggested by other posters, but if you want to set it up yourself, see this snippet.
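If you do go the set-it-up-yourself route, the general shape looks something like the sketch below, using the SDK's testbed module (the linked snippet may differ in detail, and the TestModel class here is just a placeholder, not from the question):

import unittest

from google.appengine.ext import db, testbed


class TestModel(db.Model):
    content = db.StringProperty()


class DatastoreTest(unittest.TestCase):
    def setUp(self):
        # Activate the testbed and stub out the datastore and memcache.
        self.testbed = testbed.Testbed()
        self.testbed.activate()
        self.testbed.init_datastore_v3_stub()
        self.testbed.init_memcache_stub()

    def tearDown(self):
        self.testbed.deactivate()

    def test_put_and_get(self):
        TestModel(content='hello').put()
        self.assertEqual(1, TestModel.all().count())

Each test then runs against a fresh, in-memory datastore stub instead of your real data.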
There's more than one problem you're trying to address here...
First, for running tests with GAE emulation you can take a look at gaeunit, which I like best. If you don't want to run them from the browser then you can take a look at noseGAE (part of nose). This should give you command-line testing.
Second, regarding your comment about 'creating an overhead of dependencies', it sounds like you're searching for a good unit testing and mocking framework. These will let you mock out the database for tests which don't need to hit it. Try mox or mockito for Python.
Related
I'm not a testing expert, and sometimes, when using packages like DRF, I wonder what I should actually be testing in my code...
If I write custom functions for some endpoints, I understand I should test them because I've written that code and there are no tests for it... but the DRF codebase itself is already well tested.
But if I'm writing a simple API that only extends ModelSerializer and ModelViewSet what should I be testing?
The keys in the serialized JSON?
The relations?
What should I be testing?
For your ModelSerializer, check the request payload against your expected model fields.
For your ModelViewSet, check the response HTTP status code against the expected status codes for your viewsets. You can also test the response data.
A good resource - https://realpython.com/test-driven-development-of-a-django-restful-api/
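As a rough sketch of both checks (the Book model, its serializer, and the /api/books/ route are placeholders for your own API, and pagination is assumed to be off):

from rest_framework import status
from rest_framework.test import APITestCase

from myapp.models import Book                 # hypothetical model
from myapp.serializers import BookSerializer  # hypothetical serializer


class BookAPITests(APITestCase):
    def setUp(self):
        self.book = Book.objects.create(title='Testing DRF', pages=200)

    def test_list_returns_200_and_expected_payload(self):
        response = self.client.get('/api/books/')
        # Status code check for the viewset.
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        # Payload check against what the serializer produces (no pagination).
        self.assertEqual(response.data, [BookSerializer(self.book).data])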
Even if you're only using automated features and added absolutely no customization on your serializer and viewset, and it's obvious to you that this part of the code works smoothly, you still need to write tests.
Code tends to get large, and some other person might be extending your code, or you might go back to your code a few months later and not remember how your implementation worked. Knowing that the tests pass will inform other people (or yourself in the distant future) that your code is working without having to read it and dive into the implementation details, which makes your code reliable.
The person using your API might be using it as a service and not even be interested in what framework or language you used for the implementation, but only wants to be sure that the features he/she requires work properly. How can we ensure this? One way is to write tests and make them pass.
That's why it's very important to write complete and reliable tests so people can safely use or extend your code knowing that the tests are passing and everything is OK.
I have a Django app and I am running some unit tests on it. The problem I am having is not when one test inserts into the test db; it is the tests that come after. Since each test's transaction is rolled back rather than committed, the entry from a previous test is not there, which is fine, but the auto-increment IDs keep increasing as if the entries were still in the database. I need to fix this because I am inserting more data whose IDs I cannot control, and I need to be able to grab that specific data in the test. If I hard-code the lookups I will have to change the code every time I add a new test, which is not ideal.
I have multiple tests running, but for simplicity sake, I will show two.
from django.test import TestCase
from app.models import Model


class VersionMerge(TestCase):
    fixtures = ['initial_test_data.json']

    def test_model_test1(self):
        # insert new data
        # grab new data
        # check data
        ...

    def test_model_test2(self):
        # insert new data
        # grab new data
        # check data
        ...
The problem arises in test_model_test2 where, when I try to grab the new data, I have to print the object out to see its ID before I can grab it.
I have a solution for how to fix this on the actual database, but not on the test one. For mine I need to connect to a Docker container and run a psql command to reset the table's ID sequence.
docker exec -t $CONTAINER_ID psql --dbname=test_database_name --username=user -c "SELECT setval('modelName_appName_id_seq', 2, true)"
This sets the last ID value used for the table to 2, making the next ID 3. However, whenever I try to run the command from inside Python using
cmd = "command above"
os.system(cmd)
and when I run this I get the following error.
sh: 1: docker: not found
sh: 1: docker: not found
Looking for any help on this, either a new solution to the problem or improvements on mine.
TLDR; I need to be able to modify data in the database that the django unit tests create.
If you need a test to reset the primary key sequence, you can issue a RawSQL query doing that. How to do that exactly is answered in this StackOverflow question.
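If you want to avoid shelling out to Docker entirely, you can run the same setval from inside the test through Django's database connection. A minimal sketch, assuming a PostgreSQL test database (the table name passed in is a placeholder):

from django.db import connection

def reset_sequence(table_name, next_id=1):
    # Reset the table's primary-key sequence so the next insert gets next_id.
    with connection.cursor() as cursor:
        cursor.execute(
            "SELECT setval(pg_get_serial_sequence(%s, 'id'), %s, false)",
            [table_name, next_id],
        )

# e.g. in setUp(): reset_sequence('appname_modelname', 3)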
An easier option is available if you're using pytest. We're using pytest and pytest-django in all our Django projects and it makes testing a breeze. pytest-django provides a database fixture that can take a boolean parameter to reset the sequences. Use it like so:
@pytest.mark.django_db(transaction=True, reset_sequences=True)
def mytest():
    [...]
I got this to work by replacing TestCase with TransactionTestCase and setting reset_sequences = True. However, the tests run slower.
from django.test import TransactionTestCase

class ViewTest(TransactionTestCase):
    reset_sequences = True

    def test_view_redirects(self):
        ...
Here's the doc
You are using a fixtures file - put whatever data you want in there; then edit it in your test as needed.
That is harder to maintain, though. In my opinion there are far better options that may work much more like what you intend.
You're better off using something like factory_boy and generating the models (and related foreign keys) at instantiation with dummy data you provide.
That way you know exactly what is being tested and it's completely independent of everything else. The nice part is that with factory_boy you will have a factories.py file you can keep up to date much more easily than working with some fixture.
There are other options like Mixer or model_mommy, although I only have experience with factory_boy and mixer.
With factory_boy it might look something like this:
def test_model_test1(self):
    factory.ModelFactory(
        some_specific_attribute='some_specific_value'
    )
    model = Model.objects.all().first()
    # Test against your model
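For reference, the matching factories.py might look roughly like this (a sketch; the model and attribute names are just placeholders carried over from the snippet above):

# factories.py
import factory

from app.models import Model


class ModelFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Model

    # Default values, used unless a test overrides them at call time.
    some_specific_attribute = 'some default value'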
I'm new to both Django and unit testing, and I'm trying to build unit tests for my models but have been having some difficulty.
I have several models working closely together:
Resource which will maintain a file resource
MetadataField which represents a metadata field that can be added to resources, corresponds to a table full of fields
MetadataValue matches MetadataField IDs with Resource IDs and a corresponding value; this is the intermediary table for the Resource - MetadataField many-to-many relationship
MetadataSchema represents a schema consisting of many MetadataFields. Each Resource is assigned a MetadataSchema which controls which MetadataFields it is represented by
Relationships:
Resource - MetadataField : Many-to-Many through MetadataValue
MetadataValue - MetadataSchema : Many-to-Many
Resource - MetadataSchema : One-to-Many
I'm not sure how to write tests to deal with these models. The model testing in the Test Driven Django tutorial seems to mostly cover initializing the objects and verifying attributes. If I do any setting up of these objects though it requires the use of all the others, so the tests will all be dependent on code that they're not meant to be testing.
e.g. if I wish to create a resource, I should also be assigning it a metadata schema and values for fields in that schema.
I've looked around for good examples of unit-tested models in Django but haven't been able to find anything (the Django website doesn't seem to have unit tests, and the projects I found either have poor/missing testing or, in a couple of cases, have good testing but almost no models).
Here are the possible approaches I see:
Doing a lot of mocking to ensure that I am only ever testing one class, and keeping the unit tests on the models very simple, testing only their methods/attributes but not whether the relationships are functioning correctly, then relying on higher-level integration tests to pick up any problems in the relationships etc.
Designing unit tests that DO rely on other functionality, and accepting that a break in one function will break more than one test, provided it remains easy to see where the fault occurred. So I would perhaps have a method testing whether I can successfully add a MetadataValue to a resource, which would require setting up at least one MetadataSchema and Resource. I could then use a try/except block to ensure that, if the test fails before the assertions dealing with what I'm actually meant to be testing, it gives a specific error message suggesting the fault lies elsewhere. This way I could quickly scan multiple failed test messages to find the real culprit. It wouldn't be possible to do this separation reliably in every test, though.
I'm having a hard time getting my head round this, so I don't know if this all makes sense, but if there are best practices for this sort of situation please point me to them! Thanks
You can use Django fixtures to load data for testing, but this can be very time-consuming and hard to maintain if your models change a lot.
I suggest using a library like Factory Boy, which allows you to create objects on demand for your tests when you need them. You can set up as many factories as you want; you can see some examples here, and here you can also see some examples of mocking with the mocker library and a lot of tips on testing Django apps.
For me the purpose of unit testing is to isolate UNITS of code and test ONLY them, without worrying about all their dependencies. If I understand your idea correctly, you want to create something that is more of an integration test (the relationship between two or more models), which is also a very helpful, but still different, layer of testing :)
To test separate modules, especially when they rely on a lot of surrounding code, I prefer to mock the dependencies. Google returned this as a first option for Python mocks (I guess there are plenty of them out there).
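As a very small illustration of that idea with the standard library's unittest.mock (the app path, field names, and the summary() method here are all hypothetical, just to show the shape of a fully isolated model test):

from unittest import mock

from django.test import SimpleTestCase   # SimpleTestCase blocks DB access

from myapp.models import Resource        # hypothetical import path


class ResourceUnitTest(SimpleTestCase):
    def test_summary_includes_schema_name(self):
        resource = Resource(name='report.pdf')   # hypothetical field
        fake_schema = mock.Mock()
        fake_schema.name = 'Dublin Core'
        # Replace the related-object attribute so no MetadataSchema row
        # (and no database at all) is needed for this test.
        with mock.patch.object(Resource, 'metadata_schema', fake_schema):
            self.assertEqual(resource.summary(), 'report.pdf (Dublin Core)')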
The other thing is: if there are TOO MANY dependencies you have to mock, it probably means you need to rethink your architecture because of tight coupling :)
Good luck!
Use fixtures, they let you load model data without writing the code.
I'm debugging a big unit test for Django and would like to use my normal debugging tools to do it:
looking at the db in the django admin through runserver
looking in the db manually.
Neither works, because the test hasn't committed the transaction that the database side of the test is running in.
The obvious solution seems to be to just tell the test not to use a transaction, or get it to commit somehow. Another way would be to create a custom settings file which would let runserver connect to the same test database. But the first idea seems like it should be really easy. Any ideas? I'm using MySQL and Django 1.3.1.
Consider using TransactionTestCase as the parent class of your test cases rather than TestCase. TransactionTestCase doesn't use the transaction behavior of TestCase, so you can commit at the point where you need to inspect the database state.
Additionally, if your unit test is so big that you need to inspect its database state while it's running, you're probably doing it wrong. A unit test should test one thing and one thing only, and it should be fairly obvious what the state is at any point. See Carl Meyer's Pycon 2012 talk on testing in Django for some excellent advice on writing good unit tests.
Disclaimer:
I'm very new to Django. I must say that so far I really like it. :)
(now for the "but"...)
But, there seems to be something I'm missing related to unit testing. I'm working on a new project with an Oracle backend. When you run the unit tests, it immediately gives a permissions error when trying to create the schema. So, I get what it's trying to do (create a clean sandbox), but what I really want is to test against an existing schema. And I want to run the test with the same username/password that my server is going to use in production. And of course, that user is NOT going to have any kind of DDL type rights.
So, the basic problem/issue that I see boils down to this: my system (and most) want to have their "app_user" account to have ONLY the permissions needed to run. Usually, this is basic "CRUD" permissions. However, Django unit tests seem to need more than this to do a test run.
How do other people handle this? Is there some setting/workaround/feature of Django that I'm not aware of? (Please refer to the initial disclaimer.)
Thanks in advance for your help.
David
Don't force Django to do something unnatural.
Allow it to create the test schema. It's a good thing.
From your existing schema, do an unload to create .JSON dump files of the data. These files are your "fixtures". These fixtures are used by Django to populate the test database. This is The Greatest Testing Tool Ever. Once you get your fixtures squared away, this really does work well.
Put your fixture files into fixtures directories within each app package.
Update your unit tests to name the various fixtures files that are required for that test case.
This -- in effect -- tests with an existing schema. It rebuilds, reloads and tests in a virgin database so you can be absolutely sure that it works without destroying (or even touching) live data.
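Concretely, the unload step is just Django's dumpdata management command (the app and file names below are placeholders):

python manage.py dumpdata myapp --indent=2 > myapp/fixtures/myapp_testdata.json

and each test case then names the file it needs:

class MyTestCase(TestCase):
    fixtures = ['myapp_testdata.json']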
As you've discovered, Django's default test runner makes quite a few assumptions, including that it'll be able to create a new test database to run the tests against.
If you need to override this or any of these default assumptions, you probably want to write a custom test runner. By doing so you'll have full control over exactly how tests are discovered, bootstrapped, and run.
(If you're running Django's development trunk, or are looking forward to Django 1.2, note that defining custom test runners has recently gotten quite a bit easier.)
If you poke around, you'll find a few examples of custom test runners you could use to get started.
Now, keep in mind that once you've taken control of test running you'll need to ensure that you somehow meet the same assumptions about the environment that Django's built-in runner does. In particular, you'll need to somehow guarantee that whatever test database you use is a clean, fresh one for the tests -- you'll be quite unhappy if you try to run tests against a database with unpredictable contents.
After I read David's (OP) question, I was curious about this too, but I don't see the answer I was hoping for. So let me try to rephrase at least part of what I think David is asking. In a production environment, his Django models probably will not have permission to create or drop tables; his DBA will most likely not allow it (let's assume this is true). He will only be logged into the database with regular user privileges. But in his development environment, the Django unit test framework forces him to run with higher privileges than a regular user, because Django needs to create/drop tables for the model unit tests. Since the unit tests run at a higher privilege level than production, you could argue that running them in development is not 100% valid, and errors could happen in production that might have been caught in development if Django could run the unit tests with regular user privileges.
I'm curious whether Django unit tests will ever have the ability to create/drop tables with one user's (higher) privileges and run the tests with a different user's (lower) privileges. This would more accurately simulate the production environment in development.
Maybe in practice this is really not an issue. And the risk is so minor compared to the reward that it not worth worrying about it.
Generally speaking, when unit tests depend on test data being present, they also depend on it being in a specific format/state. As such, your framework's policy is to not only execute DML (delete/insert test data records) but also DDL (drop/create tables) to ensure that everything is in working order prior to running your tests.
What I would suggest is that you grant the necessary privileges for DDL to your app_user ONLY on your test_ database.
If you don't like that solution, then have a look at this blog entry where a developer also ran into your scenario and solved it with a workaround:
http://www.stopfinder.com/blog/2008/07/26/flexible-test-database-engine-selection-in-django/
Personally, my choice would be to modify the privileges for the test database. This way, I could rule out all other variables when comparing performance/results between testing/production environments.
HTH,
-aj
What you can do is create separate test settings.
As I've learned at http://mindlesstechnology.wordpress.com/2008/08/16/faster-django-unit-tests/ you can use the sqlite3 backend, which is created in memory by the Django unit test framework.
Quoting:
Create a new test-settings.py file next to your app's settings.py containing:

from projectname.settings import *
DATABASE_ENGINE = 'sqlite3'
Then when you want to run tests real fast, instead of manage.py test,
you run
manage.py test --settings=test-settings
This runs my test suite in less than 5 seconds.
Obviously you still want to run tests on your real db backend, but
this is awesome for sanity checks, and while you’re doing test
development.
To load initial data, provide fixtures in your test case.
class MyAppTestCase(TestCase):
    fixtures = ['myapp/fixtures/filename']