Note:
I understand and am well aware of the difference between passing a function as a parameter and invoking a function and passing the result as a parameter. I believe I am passing the function correctly.
Specs
Django 1.11
PostgreSQL 10.4
Scenario:
I have dozens of models in my application, with many existing records. I need to add a random seed to each of these models that will get created and set when a new model instance is created. I also want to generate the random seed for the existing instances.
My understanding of how Django model defaults and migrations work is that when a new field is added to a model, if that field has a default value set, Django will update all existing instances with the new field and corresponding default.
However, despite the fact that I'm definitely passing a function as the default, and the function produces a random number, Django uses the same random number when updating existing rows (i.e. it seems that Django only calls the function once, then uses the return value for all entries).
Example
A shortened version of my code:
import random

def get_random():
    return str(random.random())

class Response(models.Model):
    # various properties
    random = models.CharField(max_length=40, default=get_random)
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    content = JSONField(null=True)
The random field is being added after the model and many instances of it have already been created. The makemigrations command appears to generate the proper migration file, with a migrations.AddField() call that passes default=get_random as a parameter.
However, after applying the migration, all existing Response instances contain the exact same number in their random field. Creating and saving new instances of the model works as expected (with a pseudo-unique random number).
Workaround
An easy workaround is to just run a one-time script that does something like:
for r in Response.objects.all():
    r.random = get_random()
    r.save()
Or override the model's save() method and then do a mass save. But I don't think these workarounds should be necessary. It also means that if I want to make a unique field with a random default, then I will need multiple migrations. First I would have to add the field with the assigned default. Next I would need to apply the migration and manually re-initialize the field values. Then a second migration to add the unique=True property.
It seems that if Django is going to apply default values to existing instances when a migration is applied, then it should apply them using the same semantics as creating a new instance. Is there any way to force Django to call the function for each model instance when migrating?
To add a non-null column to an existing table, Django needs to use an ALTER TABLE ADD COLUMN ... DEFAULT <default_value>. This only allows Django to call the default function once, which is why you see every row with the same value.
Your workaround is pretty much spot on, except that you can populate the existing rows with unique values using a data migration, so that it doesn't require any manual steps. The entire procedure for this use-case is described in the docs: https://docs.djangoproject.com/en/2.1/howto/writing-migrations/#migrations-that-add-unique-fields
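For illustration, here is a minimal sketch of such a data migration, roughly following the procedure from the docs. The app label ('myapp') and the name of the preceding AddField migration ('0002_response_random') are placeholders, not taken from your project:

import random

from django.db import migrations


def set_random(apps, schema_editor):
    # Use the historical model so the migration stays valid if Response changes later.
    Response = apps.get_model('myapp', 'Response')
    for response in Response.objects.all():
        response.random = str(random.random())
        response.save(update_fields=['random'])


class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '0002_response_random'),  # placeholder: the AddField migration
    ]

    operations = [
        migrations.RunPython(set_random, reverse_code=migrations.RunPython.noop),
    ]

A separate migration can then add unique=True to the field once every row has its own value.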
Related
In one of my tables I have a field game_fen_when_leave = models.TextField(). But it gives me an error: "You are trying to add a non-nullable field 'game_fen_when_leave' to game without a default; we can't do that (the database needs something to populate existing rows)". Is it necessary for this field to have a default value? I saw an example that didn't have a default.
Short answer
When creating a new model: No, it is not.
When adding it to an existing model: Yes, it is.
A bit more on the topic:
With the information given, I guess you are about to add this new field to an existing table.
When adding a new non-nullable field to an existing model, you will need to provide a default value. This is because there might already be rows in that particular table, and those would need a default value to populate the new field with. (I'm actually just repeating the error message here.)
In the example you are referring to:
The model is new, so there cannot be existing rows that would need to be populated with default values. Therefore a default value for the TextField is not needed.
Couple of possibilities
Remove and create the model from scratch: If you remove the table via migrations and create it again as a completely new table, you don't have to provide a default value, as there cannot be any existing rows.
Add a default value: The default could simply be an empty string, and that is probably the way to go.
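A minimal sketch of option 2, using the field from your question (the surrounding Game model is an assumption):

class Game(models.Model):
    # ... existing fields ...
    game_fen_when_leave = models.TextField(default='')

With this default in place, makemigrations no longer prompts for a one-off value, and existing rows are populated with the empty string when the migration is applied.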
Yes, by default a Django TextField is non-nullable. You have the power to change that, but it is not advised:
https://docs.djangoproject.com/en/3.0/ref/models/fields/#null
If a string-based field has null=True, that means it has two possible
values for “no data”: NULL, and the empty string.
I'm creating an API with Django REST Framework. I have one table which holds the types for the products, and another table which maps those types to the products; consider it a producttypesmapping table. So, I'm creating a product type update endpoint, which will only update the producttypesmapping.
The issue is that I've used ChoiceField() in the serializer, so I need a tuple-of-tuples variable to prevent unwanted values from being stored. It is initialized in util.py, and to make it dynamic it is loaded directly by querying the producttypes table, so I only have to query the data once.
TAG_CHOICES_TYPE_ONE = []
tags = ProductTypes.objects.filter(tag_type_id=1).values('id', 'value')
for index, item in enumerate(tags):
    TAG_CHOICES_TYPE_ONE.append((item["id"], item["value"]))
TAG_CHOICES_TYPE_ONE = tuple(TAG_CHOICES_TYPE_ONE)
But the problem is that utils.py is executed before the producttypes table is even initialized with any data.
First of all, unless you want to attach some additional data about the mapping, your mapping model shouldn't be necessary. Simply use a ForeignKey or a ManyToManyField.
Then, you should consider using a PrimaryKeyRelatedField instead of a ChoiceField. This field takes a queryset as an argument, which will help you limit your choices. If you chose a ChoiceField to be able to get a nice display in the browsable API, you can achieve the same by creating a string representation for your model.
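For illustration, a rough sketch of what that could look like; the Product model, its types field, and the module paths are assumptions, not taken from your code:

# serializers.py (sketch; model and field names are assumptions)
from rest_framework import serializers

from .models import Product, ProductTypes


class ProductTypesUpdateSerializer(serializers.ModelSerializer):
    # DRF re-evaluates the queryset whenever it is used, so the choices always
    # reflect the current rows in the producttypes table, unlike a module-level tuple.
    types = serializers.PrimaryKeyRelatedField(
        many=True,
        queryset=ProductTypes.objects.filter(tag_type_id=1),
    )

    class Meta:
        model = Product
        fields = ['id', 'types']

For a nicer display in the browsable API, defining __str__ on ProductTypes (e.g. returning self.value) gives each choice a readable label.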
For a Django model query, I want to understand how it executes step by step. Consider the query Blog.objects.get(name='palm').
Where is Blog defined here? Is it the same as the Blog class in models.py?
What is objects? I can't find anything related to it in the Django source files. If Blog is a class, then what is the type of objects?
I want to understand the underlying concept. Can anyone please explain how Django makes this possible?
Every non-abstract Django model class has an attribute objects attached to it (unless you of course explicitly remove it).
objects is a Manager. It is an object with many methods for constructing queries that are then sent to the database to fetch or store data.
So you first access the objects manager of the Blog class, and then call .get(name='palm') on it. Django translates this into a query, whose exact form depends on the database system you use. For instance, with MySQL it will look something like:
SELECT name, some, other columns
FROM app_blog
WHERE name = 'palm'
The database will respond with zero, one, or more rows. If no row or more than one row is found, Django raises a DoesNotExist or MultipleObjectsReturned error respectively. Otherwise it loads the data into a Blog object (by deserializing the columns into Python objects).
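At the Python level that behaviour looks roughly like this (assuming the model lives in myapp/models.py):

from myapp.models import Blog  # assumption: your app is called myapp

try:
    blog = Blog.objects.get(name='palm')        # expects exactly one matching row
except Blog.DoesNotExist:
    print('no Blog named palm')                 # zero rows matched
except Blog.MultipleObjectsReturned:
    print('more than one Blog named palm')      # two or more rows matched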
I have a model with relations to 3 different models.
Now I know that if I use object.delete(), then the child objects will also get deleted.
Now the problem is that all of my model classes have a database column called DELETED, which I want to set to 1 whenever someone deletes an object.
I can override delete() in a class called BaseModel with a custom delete method that updates the field to 1. But the problem is:
If I do it that way, then I have to manually go through all the cascading relationships and call delete() on every object.
Is there any way that, by just calling object.delete(), it automatically traverses the child objects as well?
Please look at Django: How can I find which of my models refer to a model.
You can use a Collector to get all references to all the necessary items using collect(). This is the code Django is using to simulate the CASCADE behavior. Once you have collected all the references, for each of those items you can update the DELETED column.
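For illustration, a rough sketch of what such an override could look like on an abstract BaseModel; the DELETED integer flag follows your description, everything else (names, field type) is an assumption:

from django.db import models, router
from django.db.models.deletion import Collector


class BaseModel(models.Model):
    DELETED = models.IntegerField(default=0)

    class Meta:
        abstract = True

    def delete(self, using=None, keep_parents=False):
        using = using or router.db_for_write(self.__class__, instance=self)
        # Gather this object plus everything a real delete would cascade to.
        collector = Collector(using=using)
        collector.collect([self])
        # Instead of deleting, flag every collected instance.
        for model, instances in collector.data.items():
            pks = [obj.pk for obj in instances]
            model._base_manager.using(using).filter(pk__in=pks).update(DELETED=1)
        # Objects Django would "fast delete" are collected separately as querysets.
        for qs in collector.fast_deletes:
            qs.update(DELETED=1)

This only works if every related model also inherits from BaseModel (otherwise the update(DELETED=1) call would fail for models that lack the column).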
More info in the code.
Good luck.