A straightforward question:
Is there an easy way to write fixtures (e.g. in JSON format, though I don't really care about the format) for models that incorporate PickleFields?
EDIT:
In the end I think I'll just get rid of fixtures altogether. I'll use named *.py scripts that will create all the objects for me. I've always found fixtures quite cumbersome anyway.
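Something along these lines is what I have in mind (the model and field names are purely illustrative, and the script is meant to be imported and run where Django is already configured):

# create_test_data.py -- illustrative replacement for a JSON fixture
from myapp.models import Task  # hypothetical model with a PickleField

def create_objects():
    # arbitrary Python objects go straight into the pickled field
    Task.objects.create(name='demo', state={'step': 1, 'history': []})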
Assuming you are using django-picklefield, you can use dumpdata/loaddata just like you would with any other model. I tested it briefly and everything works fine.
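As a minimal sketch (the app, model, and field names here are only examples), a model using django-picklefield is declared like any other, so the usual fixture commands apply:

# models.py -- example model using django-picklefield
from django.db import models
from picklefield.fields import PickledObjectField

class Job(models.Model):
    name = models.CharField(max_length=100)
    # arbitrary picklable Python objects (dicts, lists, ...) are stored here
    payload = PickledObjectField(default=dict)

Then python manage.py dumpdata myapp.Job --indent 2 > jobs.json and python manage.py loaddata jobs.json round-trip the pickled data as part of a regular JSON fixture.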
Related
So, I'm not a testing expert, and sometimes, when using packages like DRF, I wonder what I should actually be testing in my code...
If I write custom functions for some endpoints, I understand I should test them, because I wrote that code and nothing else covers it... But the DRF codebase itself is already well tested.
But if I'm writing a simple API that only extends ModelSerializer and ModelViewSet what should I be testing?
The keys in the serialized JSON?
The relations?
What should I be testing?
Testing your ModelSerializer: check the request payload against your expected model fields.
Testing your ModelViewSet: check the response HTTP status code against the expected status codes for your viewsets. You can also test the response data.
A good resource - https://realpython.com/test-driven-development-of-a-django-restful-api/
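As a rough sketch of both ideas, assuming a hypothetical Book model exposed through a ModelSerializer and a ModelViewSet routed at /api/books/ (the names, fields, and unpaginated list are all assumptions):

# tests.py -- illustrative tests for a hypothetical Book API
from rest_framework import status
from rest_framework.test import APITestCase

from .models import Book
from .serializers import BookSerializer

class BookAPITests(APITestCase):
    def setUp(self):
        self.book = Book.objects.create(title='Dune', author='Frank Herbert')

    def test_serializer_exposes_expected_fields(self):
        # the serialized keys should match the model fields we intend to expose
        data = BookSerializer(self.book).data
        self.assertEqual(set(data.keys()), {'id', 'title', 'author'})

    def test_list_endpoint_returns_200_and_data(self):
        # the viewset should answer with 200 and include the object we created
        response = self.client.get('/api/books/')
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(len(response.data), 1)  # assumes pagination is disabled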
Even if you're only using the automatic features and have added absolutely no customization to your serializer and viewset, and it's obvious to you that this part of the code works smoothly, you still need to write tests.
Code tends to get large, and some other person might be extending your code, or you might come back to it a few months later and not remember how your implementation worked. Knowing that the tests pass will tell other people (or yourself in the distant future) that your code is working without them having to read it and dive into the implementation details, which makes your code reliable.
The people using your API might be consuming it as a service and not even be interested in what framework or language you used for the implementation; they only want to be sure that the features they require work properly. How can we ensure this? One way is to write tests and pass them.
That's why it's very important to write complete and reliable tests so people can safely use or extend your code knowing that the tests are passing and everything is OK.
I have written a generic framework in Python for a particular type of task. It is a web server which serves different requests and operations. This framework can be used by many projects, and each one has a different set of validation rules. Right now, I'm just updating the script for each project.
I'm thinking of externalizing this validation part; how do I go about it? The validations are more than mere field-content checks. I'm thinking of having a config file that maps incoming request <-> validation module, something like /site1/a/b.xml=validateSite1.py, and importing that module in an if condition when the request is for site1. So I'd have generic framework scripts plus individual scripts for each site.
Is there a cleaner way to do this?
I think it'd be better to use Python itself as the top-level mapping from URL paths to validation modules. A configuration might look like this:
import site1
import site2

def dispatch(uri):
    # route each incoming URI to the validation module for its site
    if uri.startswith('/site1/'):
        return site1.validate(uri)
    elif uri.startswith('/site2/'):
        return site2.validate(uri)
This simple example might tempt you to "abstract" it out into a more "generic framework" that turns strings into filenames to use as validation scripts. Here are some advantages of doing the above instead:
Performance: site modules are imported just once; we don't look up filenames on every request.
Flexibility: if you decide later that the dispatching logic is more complicated, you can easily use arbitrary Python code to deal with it. There will never be a need to extend your mapping system itself--only the config files that require more complexity.
Single language.
For a Django test I'd like to load a fixture, which is in a csv file. What is the best way to do that?
Django's built-in fixtures functionality doesn't support CSV. You'd need to process the file yourself using the csv module, probably in the test's setUp method.
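A minimal sketch of that approach (the file path, model, and column names are just placeholders):

# tests.py -- loading a CSV "fixture" by hand in setUp
import csv

from django.test import TestCase
from myapp.models import Person  # hypothetical model

class CsvFixtureTest(TestCase):
    def setUp(self):
        # create one object per CSV row before each test
        with open('fixtures/people.csv') as f:
            for row in csv.DictReader(f):
                Person.objects.create(name=row['name'], age=int(row['age']))

    def test_rows_were_loaded(self):
        self.assertTrue(Person.objects.exists())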
For easier reuse I'd suggest my Django-PyFixture library. You'd then write your fixture in raw Python, but semantically it would look like just another fixture.
I have an initial_data fixture that I want to load every time except in production. I already have different settings files for production and non-production deployments.
Any suggestions on how to accomplish this?
Clarification: I do not want test fixtures. Basically, I just need the fixture to be loaded based on a setting change of some sort. I'll be digging into the Django code to see if I can figure out an elegant way to accomplish this.
You can actually set up different test fixtures for each test if you want:
http://docs.djangoproject.com/en/dev/topics/testing/#topics-testing-fixtures
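For example, a per-TestCase fixture list might look like this (the fixture name is just a placeholder):

from django.test import TestCase

class StagingDataTest(TestCase):
    # these fixtures are loaded before every test in this TestCase
    fixtures = ['staging_data.json']

    def test_data_is_present(self):
        pass  # assertions against the loaded data go here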
If you only want to load the fixtures once, you can also write a custom TestRunner that will let you do that setup at the beginning:
http://docs.djangoproject.com/en/dev/topics/testing/#using-different-testing-frameworks
Both of those will still load the data from the production fixtures, as that is done with syncdb, but you can override the data, or even delete it all. This may not be optimal if you are loading large amounts of data into your production project. If this is the case, I would recommend adding a custom management command like load_production_data that lets you do it quickly and easily from the command line.
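A minimal sketch of such a command (the fixture name and app layout are assumptions; the file would live at myapp/management/commands/load_production_data.py alongside the usual __init__.py files):

from django.core.management import call_command
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = 'Load the fixtures that should only exist in production'

    def handle(self, *args, **options):
        # delegate to the built-in loaddata command
        call_command('loaddata', 'production_data.json')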
The easiest way is to use manage.py testserver [fixture ...]
If this is a staging (rather than dev) deployment, though, you may not want to use Django's built-in server. In that case, a quick (if hacky) way of doing what you're after is to have the fixtures in an app (called, for example, "undeployed") that is only installed in your non-production settings.
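In the non-production settings module that could look roughly like this (module and app names are illustrative):

# settings_dev.py -- only the non-production settings include the fixture app
from settings import *  # the shared base settings

# "undeployed" contains the initial_data fixture we never want in production
INSTALLED_APPS = INSTALLED_APPS + ('undeployed',)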
I've started looking into unit testing when using Google App Engine, and it seems a bit tricky from what I've read, since you can't (and aren't supposed to) run your tests against the datastore.
I've written an abstract class to emulate a datastore model class. It works pretty nicely, returning mock-up data for get, all, fetch and so on (only tried on a small scale), producing db.Model-like results.
The one thing I haven't found a satisfying solution for is how to decide which model class to use. I want to use the mock-ups for unit tests and the actual db.Model when the webapp is running.
My current solution looks like this in the .py file containing all my db.Model classes:
import os
from google.appengine.ext import db

# use the real model when running under the App Engine server,
# otherwise fall back to the mock-up (Abstract is my mock base class)
if 'SERVER_SOFTWARE' in os.environ:
    class dbTest(db.Model):
        content = db.StringProperty()
        comments = db.ListProperty(str)
else:
    class dbTest(Abstract):
        content = 'Test'
        comments = ['test1', 'test2']
And it kind of feels like it could break at any minute. Is this the way to go, or could the two be combined into one class that uses db.Model when it's available and the mock-up otherwise?
Check out gaetestbed (docs). It stubs out the datastore (and all the other services like memcache) and makes testing from the command line very easy. It ensures a clean environment before every test runs.
I personally think it is nicer than the other solutions I have seen.
Instead of messing with your models.py, I would go with gaeunit.
I've used it with success in a couple of projects and the features I like are:
Just one file to add to your project (gaeunit.py) and you are almost done
Gaeunit isolates the test datastore from the development store (i.e. tests don't pollute your development db)
Since you can't (and aren't supposed to) run your tests against the datastore.
This is not true. You can and should use the local datastore implementation as a test harness - there's no reason to waste your time creating mocks for every datastore behaviour. You can use a tool such as noseGAE or gaeunit, as suggested by other posters, but if you want to set it up yourself, see this snippet.
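For reference, a self-contained setup with the SDK's testbed typically looks roughly like this (model and test names are illustrative, and the exact stub calls depend on your SDK version):

import unittest
from google.appengine.ext import db, testbed

class Greeting(db.Model):
    content = db.StringProperty()

class DatastoreTest(unittest.TestCase):
    def setUp(self):
        # activate an in-memory datastore stub so tests never touch real data
        self.testbed = testbed.Testbed()
        self.testbed.activate()
        self.testbed.init_datastore_v3_stub()

    def tearDown(self):
        self.testbed.deactivate()

    def test_put_and_count(self):
        Greeting(content='hello').put()
        self.assertEqual(Greeting.all().count(), 1)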
There's more than one problem you're trying to address here...
First, for running tests with GAE emulation you can take a look at gaeunit, which I like best. If you don't want to run them from the browser then you can take a look at noseGAE (part of nose). This should give you command-line testing.
Second, regarding your comment about 'creating an overhead of dependencies', it sounds like you're searching for a good unit testing and mocking framework. These will let you mock out the database for tests which don't need to hit it. Try mox and mockito for Python.
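For instance, mox follows a record/replay/verify flow, roughly like this (the fetch call is just a stand-in for whatever your code would normally hit):

import mox

def test_with_mox():
    m = mox.Mox()
    datastore = m.CreateMockAnything()               # mock stand-in for the real dependency
    datastore.fetch('some-key').AndReturn('value')   # record the expected call
    m.ReplayAll()

    assert datastore.fetch('some-key') == 'value'    # exercise the code under test
    m.VerifyAll()                                    # fail if recorded calls didn't happen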