Halfway through my current project, after suffering the pain of spending countless minutes on debugging, I have decided to adopt TDD. To start, I am planning to write a set of unit tests for each existing model. But for models that only have attributes defined (i.e. no additional methods or properties), I am not sure what I need to test or how.
class Product(models.Model):
    name = models.CharField(max_length=50)
    description = models.TextField(default='', blank=True)
    retails = models.ManyToManyField(Retail, verbose_name='Retail stores that carry the product')
    manufacturer = models.ForeignKey(Manufacturer, related_name='products')
    date_created = models.DateTimeField(auto_now_add=True)
    date_modified = models.DateTimeField(auto_now=True)
Using Product as an example, what are the things about it that unit tests should cover? And how should ForeignKey and ManyToManyField be covered?
Here is an article I found helpful: A Guide to Testing in Django (archived link). It contains a good summary of what to test:
Another common setback for developers/designers new to testing is the
question of 'what should (or shouldn't) I test?' While there are no
hard & fast rules here that neatly apply everywhere, there are some
general guidelines I can offer on making the decision:
If the code in question is a built-in Python function or library, don't test it. For example, the datetime library.
If the code in question is built into Django, don't test it. For example, the fields on a Model, or how the built-in
template.Node renders included tags.
If your model has custom methods, you should test those, usually with unit tests.
The same goes for custom views, forms, template tags, context processors, middleware, management commands, etc. If you implemented
the business logic, you should test the aspects of the code you wrote.
So, for your example, there wouldn't really be anything to test until you write some custom functions.
In my opinion, testing ForeignKey and ManyToManyField links would fall under the second category (code built into Django), so I wouldn't test these; you would really be testing whether or not Django is functioning properly. If you have a method which creates an instance of your product, including foreign relationships and M2Ms, you could verify that the data has been created; that would be testing your custom method, not Django functionality.
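For illustration, here is a minimal sketch of that idea. It assumes a hypothetical custom manager method create_product() and simple name fields on Manufacturer and Retail; none of these come from the question, they just stand in for whatever custom creation logic you write:

from django.test import TestCase

class ProductCreationTest(TestCase):
    def test_create_product_links_relations(self):
        # related objects the hypothetical helper needs
        manufacturer = Manufacturer.objects.create(name='Acme')
        store = Retail.objects.create(name='Corner Shop')
        # exercise the custom method, not Django's ORM itself
        product = Product.objects.create_product(
            name='Widget', manufacturer=manufacturer, retails=[store])
        # verify the business logic wired the relations up correctly
        self.assertEqual(product.manufacturer, manufacturer)
        self.assertIn(store, product.retails.all())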
Using the TDD paradigm, the tests are built to verify business logic and design requirements.
My CS350 class on TDD stipulated that it's best practice to test all accessors and mutators. So for a model, you would first write tests that call each accessor function and make sure that it returns the proper value.
For each function which changes a data field in the model, you would not only test the result of that data field in particular, but you would also test all of the other fields in the model instance to make sure that none of them were modified erroneously.
To restate: if a model has fields a, b, and c, you would create an instance using your constructor, then assert that all three are set properly. Say there's another function, set_a(). You would assert not only that the value of 'a' has changed, but also that the values of b and c remain unchanged.
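A minimal sketch of that pattern, assuming a hypothetical MyModel with fields a, b and c and a set_a() mutator that saves the instance:

from django.test import TestCase

class MutatorTest(TestCase):
    def test_set_a_leaves_other_fields_alone(self):
        obj = MyModel.objects.create(a=1, b=2, c=3)
        obj.set_a(10)  # hypothetical mutator under test, assumed to save
        obj = MyModel.objects.get(pk=obj.pk)  # re-fetch to check persisted state
        self.assertEqual(obj.a, 10)  # the targeted field changed
        self.assertEqual(obj.b, 2)   # the others did not
        self.assertEqual(obj.c, 3)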
What I need is basically a database model with version control. So that every time a record is modified/deleted, the data isn't lost, and the change can be undone.
I've been trying to implement it myself with something like this:
from django.db import models

class AbstractPersistentModel(models.Model):
    time_created = models.DateTimeField(auto_now_add=True)
    time_changed = models.DateTimeField(null=True, default=None)
    time_deleted = models.DateTimeField(null=True, default=None)

    class Meta:
        abstract = True
Then every model would inherit from AbstractPersistentModel.
The problem is that if I override save() and delete() to make sure they don't actually touch the existing data, I'll still be left with the original object, and not the new version.
After trying to come up with a clean, safe and easy-to-use solution for some hours, I gave up.
Is there some way to implement this functionality that isn't overwhelming?
It seems a common enough problem that I thought it would be built into Django itself, or at least that there'd be a well-documented package for it, but I couldn't find any.
When I hear version control for models and Django, I immediately think of django-reversion.
Then, if you want to access the versions of an instance, and not the actual instance, simply use the Version model.
from reversion.models import Version
versions = Version.objects.get_for_object(instance)
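For context, registering a model and creating revisions with django-reversion typically looks something like the sketch below; the exact calls depend on the django-reversion version you install, and YourModel is just a placeholder:

import reversion

# register the model so saves made inside a revision block are tracked
reversion.register(YourModel)

with reversion.create_revision():
    obj = YourModel.objects.create(name='first draft')
    reversion.set_comment('Initial version')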
I feel you can work around your issue not by modifying your models but by modifying the logic that accesses them.
So, you could have two models for the same object: one that acts as your staging area, in which you store values such as the ones you mention (time_created, time_modified, modifying_user, and so on). From there, in the code for your views you go through that table, select the records you want/need according to your design, and store them in your definitive table.
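A rough sketch of that staging-table idea, with made-up model and field names (Article is only an example object):

from django.conf import settings
from django.db import models

class ArticleStaging(models.Model):
    # every edit is written here first, so no history is lost
    title = models.CharField(max_length=100)
    body = models.TextField()
    time_created = models.DateTimeField(auto_now_add=True)
    modifying_user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)

class Article(models.Model):
    # the "definitive" record, promoted from staging by your view code
    title = models.CharField(max_length=100)
    body = models.TextField()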
Let's say I have a recipe website with two basic models, 'User' and 'Recipe':
class User(models.Model):
    username = models.CharField(max_length=50)
    email = models.CharField(max_length=100)

class Recipe(models.Model):
    name = models.CharField(max_length=100)
    description = models.CharField(max_length=255)
I would like to add the functionality so that users can 'favorite' a recipe.
In this case, I need to use a many-to-many relationship. My question is, how do I decide which model to add the relationship to?
For example, each user could have a list of 'favorite' recipes:
class User(models.Model):
    favorites = models.ManyToManyField(Recipe)
Alternatively, each recipe could have a list of users who favorited the recipe:
class Recipe(models.Model):
    user_favorites = models.ManyToManyField(User)
What is considered the best practice? Is either one better for query performance?
It makes no difference from the database point of view, as pointed out in the comments.
But I have come across two reasons why it did matter to me.
First (maybe less important), the built-in admin treats the two models differently by default. The model on which you define the relationship gets a widget for choosing the related objects, and a '+' for conveniently adding new objects of the related type.
Secondly, you have to import one of the models in the file of the other one if they are in different files. This matters if you want to write a reusable app that does not depend on anything outside. It mattered to me also because:
I once (well, not just once actually :)) broke my app/database/etc. so badly that I decided to start a new project and copy the code there. In that case you have to comment out some of settings.INSTALLED_APPS to test step by step that everything works. Here it is important not to have circular imports (importing from a commented-out app raises an error). So I try to import the "most basic" models into the others, and not the other way round.
This is not a simple answer to your question, but these are two points which I consider. Maybe some more experienced users can correct me if I'm wrong in some sense.
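To illustrate the point that either placement can be queried from both sides, here is a minimal sketch (the related_name is made up for this example):

from django.db import models

class User(models.Model):
    username = models.CharField(max_length=50)

class Recipe(models.Model):
    name = models.CharField(max_length=100)
    favorited_by = models.ManyToManyField(User, related_name='favorites')

# Given a user and a recipe instance, both directions work,
# regardless of which model holds the field:
#   user.favorites.all()        -> recipes this user favorited
#   recipe.favorited_by.all()   -> users who favorited this recipe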
I wrote a quest system for an online game. My quests are serialized into JSON objects for a JavaScript client, which then fetches those quests from a REST backend (I use Django REST Framework).
Now I'm wondering on which class or django model I should put the "behaviour" that belongs to the data.
I stored the data that belongs to a quest in several separate models:
A model QuestHistory: with model fields like a BooleanField completed and a DateTimeField started, where I put the information belonging to a specific user (it also has a field user).
Then I have a model QuestTemplate: the part that is always the same, with fields like quest_title and quest_description.
I also have a model Rewards and models Task and TaskHistory that are linked to a quest with a ForeignKey field.
To combine this information back into a quest, I created a pure Python class Quest(object): and defined methods on this class like check_quest_completion. This class is then serialized later. The problem with this approach is that it becomes quite verbose, for example when I instantiate this class or when I define the serializer.
Is there a Python or Django "shortcut" to put all the fields of a Django model into another class (my Quest class here), something similar to the dict.update method maybe?
Or should I try to put the methods on the models instead and get rid of the Quest class?
I have some other places in my game that look very similar to the quest system for example the inventory system so I'm hoping for a more elegant solution.
You should put the methods of the Quest class on the model itself and get rid of the Quest class.
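A minimal sketch of what that might look like, using the fields described in the question; the relation names (template, taskhistory_set) are assumptions made for illustration:

from django.conf import settings
from django.db import models

class QuestHistory(models.Model):
    template = models.ForeignKey('QuestTemplate', on_delete=models.CASCADE)
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    completed = models.BooleanField(default=False)
    started = models.DateTimeField(auto_now_add=True)

    def check_quest_completion(self):
        # assumes TaskHistory has a ForeignKey back to QuestHistory
        return all(task.completed for task in self.taskhistory_set.all())

The serializer can then work directly against the model, which removes the extra plain-Python layer.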
I think I need to create a 'many-to-many generic relationship'.
I have two types of Participants:
class MemberParticipant(AbstractParticipant):

    class Meta:
        app_label = 'participants'

class FriendParticipant(AbstractParticipant):
    """
    Abstract participant common information shared for all rewards.
    """
    pass
These Participants can have 1 or more rewards of 2 different kinds (rewards model is from another app):
class SingleVoucherReward(AbstractReward):
    """
    Single-use coupons are coupon codes that can only be used once.
    """
    pass

class MultiVoucherReward(AbstractReward):
    """
    A multi-use coupon code is a coupon code that can be used unlimited times.
    """
    pass
So now I need to link these all up. This is how I was thinking of creating the relationship (see below). Would this work? Do you see any issues?
Proposed linking model below:
class ParticipantReward(models.Model):
    participant_content_type = models.ForeignKey(
        ContentType, editable=False,
        related_name='%(app_label)s_%(class)s_as_participant',
    )
    participant_object_id = models.PositiveIntegerField()
    participant = generic.GenericForeignKey('participant_content_type', 'participant_object_id')

    reward_content_type = models.ForeignKey(
        ContentType, editable=False,
        related_name='%(app_label)s_%(class)s_as_reward',
    )
    reward_object_id = models.PositiveIntegerField()
    reward = generic.GenericForeignKey('reward_content_type', 'reward_object_id')
Note: I'm using Django 1.6
Your approach is exactly the right way to do it given your existing tables. While there's nothing official (this discussion, involving a core developer in 2007, appears not to have gone anywhere), I did find this blog post which takes the same approach (and offers it in a third-party library), and there's also a popular answer here which is similar, except that only one side of the relationship is generic.
I'd say the reason this functionality has never made it into Django's trunk is that while it's a rare requirement, it's fairly easy to implement using the existing tools. Also, the chance of wanting a custom "through" table is probably quite high, so most end-user implementations are going to involve a bit of custom code anyway.
The only other potentially simpler approach would be to have base Participant and Reward models, with the ManyToMany relationship between those, and then use multi-table inheritance to extend these models as Member/Friend etc.
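Roughly, that alternative could look like the sketch below; the concrete fields are placeholders, the point is the base models carrying the ManyToMany plus multi-table inheritance for the specific types:

from django.db import models

class Participant(models.Model):
    rewards = models.ManyToManyField('Reward', related_name='participants')

class MemberParticipant(Participant):  # multi-table inheritance
    member_number = models.CharField(max_length=20)

class Reward(models.Model):
    name = models.CharField(max_length=50)

class SingleVoucherReward(Reward):
    code = models.CharField(max_length=20)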
Ultimately, you'll just need to weigh up the complexity of a generic relation versus that of having your object's data spread across two models.
Late reply, but I found this conversation when looking for a way to implement generic m2m relations and felt my 2 cents would be helpful for future googlers.
As Greg says, the approach you chose is a good way to do it.
However, I would not qualify generic many to many as 'easy to implement using existing tools' when you want to use features such as reverse relations or prefetching.
The 3rd party app django-genericm2m is nice but has several shortcomings in my opinion (the fact that the 'through' objects are all in the same database table by default and that you don't have 'add' / 'remove' methods - and therefore bulk add/remove).
With that in view, because I needed something to implement generic many-to-many relations 'the django way' and also because I wanted to learn a little bit about django internals, I recently released django-gm2m. It has a very similar API to django's built-in GenericForeignKey and ManyToManyField (with prefetching, through models ...) and adds deletion behavior customisation. The only thing it lacks for the moment is a suitable django admin interface.
I'm building my first Django app to manage multiple SaaS products.
This entails storing custom attributes for each Version of each Product.
For example, a new version of a Product is released that includes new configuration options that the earlier versions of the Product do not support.
I need to be able to keep track of those new values for each instance of the new Version.
I'm thinking I want the Admins to be able to add "custom fields" at the Product level by Version.
Looking for suggestions as to the best approach.
Thanks.
The common way of tracking model versions is to use django-reversion.
It sounds like each instance needs its own custom attributes. That means the models relating to Product and Version need not change. This is good, because models can only change with the code (unless you get into dynamically generating Models, which is usually not a good idea).
So, you need to be able to model attributes for each Product instance, regardless of Version. This should be a simple data modelling exercise, not necessarily related to Django.
A Product has a set of fields
A Product has a Version
A Product has a set of Attributes
This is quite easily modelled, depending on how you want to manage attributes.
class Version(models.Model):
    version = models.CharField(max_length=10)

class ProductAttributes(models.Model):
    name = models.CharField(max_length=64)
    description = models.CharField(max_length=255)
    # other fields as necessary

class Product(models.Model):
    name = models.CharField(max_length=64)
    version = models.ForeignKey(Version)
    attributes = models.ManyToManyField(ProductAttributes, related_name='products')
That should be your modelling sorted in a very basic way. Now, let's create some instances.
v1 = Version(version='1.0.0')
v1.save()
hosted = ProductAttributes(name='Hosted', description='We host the apps!')
hosted.save()
shiny = ProductAttributes(name='Shiny', description='I like shiny')
shiny.save()
p = Product(name='Web Based Email', version=v1)
p.save()
p.attributes.add(hosted)
p.attributes.add(shiny)
p.attributes.all()
# shows shiny and hosted!
You can tweak the ModelAdmin for Product such that you can add ProductAttributes inline when adding or editing a Product. You can also have a separate ModelAdmin for ProductAttributes so you can create a list of known Attributes that can be applied to products at a later date.
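For instance, something along these lines; filter_horizontal is just one convenient way to expose the ManyToMany, not the only option:

from django.contrib import admin

class ProductAdmin(admin.ModelAdmin):
    # makes the attributes ManyToMany easy to edit on the Product page
    filter_horizontal = ('attributes',)

admin.site.register(Product, ProductAdmin)
admin.site.register(ProductAttributes)  # maintain the list of known attributes separately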
There are two basic approaches for this.
Use a document-based DB (i.e. "NoSQL") like Couch or Mongo. These have flexible schemas, so they allow for multiple variations on a product.
Use the Entity Attribute Value (wikipedia) schema pattern. django-eav is an app that provides this.
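To make the EAV idea concrete, here is a hand-rolled illustration of the pattern in plain Django models; this is not django-eav's API, and the model names are invented for the example:

from django.db import models

class ProductVersion(models.Model):
    product_name = models.CharField(max_length=64)
    version = models.CharField(max_length=10)

class VersionAttribute(models.Model):
    # one row per attribute per version: the "entity" is the version,
    # the "attribute" is the name, and the "value" is stored as text
    version = models.ForeignKey(ProductVersion, on_delete=models.CASCADE, related_name='attributes')
    name = models.CharField(max_length=64)
    value = models.TextField()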
I decided to go with sub-classes for each Product, as each has a limited set of specific attributes that won't change much, or at all, over time. Thanks for all the great feedback. Learned a lot :-)