Django provides the model field argument default (https://docs.djangoproject.com/en/dev/ref/models/fields/#default), but as far as I know it is only applied when a new object is created through Django.
If we insert/create records with raw queries (using django.db.connection.cursor), we get an exception because Field 'xyz' doesn't have a default value.
How can I declare a database-level default value for a column in the model, the way db_index works?
I hope you guys understand my question.
There is an open ticket 470 to include default values in the SQL schema.
Until this feature is added to Django, you'll have to run ALTER TABLE statements manually, or write a migration that runs them for you, if you want a default value in the SQL schema.
Note that Django allows callables as defaults, so even if this feature is added to Django, it won't be possible to have all defaults in the database.
In my project, I have made some setting fields within the settings.py file configurable by exposing them to the user via an environment file. The user can modify the values in the .env file, and those values are then used to populate the setting fields in the main project settings.py.
I want to improve this by migrating some of these values to the database, so users can set them interactively via the product's UI instead of having to modify the .env. I have taken the following approach:
After the default database has been declared in the DATABASES dictionary, I instantiate a connection.cursor() to run a raw SQL query that retrieves the settings from the database, as described in the documentation.
I manipulate the results of the cursor to construct a dictionary in which keys are setting identifiers and values are the relevant values from the database, as set by the user.
This dictionary is then used to assign the appropriate value to each Django setting variable (e.g. SESSION_COOKIE_AGE, MEDIA_ROOT). So for each setting variable, instead of doing a getenv, I retrieve the value from the dictionary using the relevant key.
I have observed the code's behavior within settings.py, and I can see that each setting value gets assigned to the correct variable, identically to how it was when using the previous .env approach. The problem is that when these setting variables are accessed in the code via django.conf.settings or by direct import (from project.settings import SETTING), their value is an empty string, as if it has not been declared in the first place.
I have noticed that settings declared before the cursor is instantiated (regardless of whether their value was hardcoded or retrieved from the .env) work fine. Settings after the cursor seem to not maintain their state outside the settings.py file.
Can anyone please enlighten me as to why using a cursor within settings.py essentially invalidates all setting fields declared after it?
I managed to resolve the issue. The problem was that I was using Django's connection object (from django.db import connection), which runs the SQL query against the default database. Since this was happening inside settings.py, while the project was still setting itself up, it caused the strange behavior described above.
Solution: instead of using django.db.connection to perform the query, I used Python's pyodbc library, which is the database driver already used in the project. With pyodbc I was able to establish a connection to the database and run my query (as described here) regardless of Django's state.
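The idea can be sketched with any plain DB-API connection. The author used pyodbc; sqlite3 is used below only so the sketch is runnable, and the app_settings(name, value) table is an assumed schema:

```python
import sqlite3


def load_db_settings(conn):
    """Turn key/value rows from an app_settings table into a dict.

    Works with any DB-API connection (pyodbc, sqlite3, ...), so it
    does not depend on Django being fully set up yet.
    """
    rows = conn.execute("SELECT name, value FROM app_settings").fetchall()
    return {name: value for name, value in rows}


# Demo with an in-memory SQLite database standing in for the real one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE app_settings (name TEXT, value TEXT)")
conn.execute("INSERT INTO app_settings VALUES ('SESSION_COOKIE_AGE', '3600')")

db_settings = load_db_settings(conn)
# In settings.py you would then assign, e.g.:
# SESSION_COOKIE_AGE = int(db_settings["SESSION_COOKIE_AGE"])
```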
The documentation just says
To save an object back to the database, call save()
That does not make it clear. Experimenting, I found that if I include an id it updates an existing entry, while if I don't it creates a new row. Does the documentation specify what happens?
It's fully documented here:
https://docs.djangoproject.com/en/2.2/ref/models/instances/#how-django-knows-to-update-vs-insert
You may have noticed Django database objects use the same save() method for creating and changing objects. Django abstracts the need to use INSERT or UPDATE SQL statements. Specifically, when you call save(), Django follows this algorithm:
If the object’s primary key attribute is set to a value that evaluates to True (i.e., a value other than None or the empty string), Django executes an UPDATE.
If the object’s primary key attribute is not set, or if the UPDATE didn’t update anything (e.g. if the primary key is set to a value that doesn’t exist in the database), Django executes an INSERT.
The one gotcha here is that you should be careful not to specify a primary-key value explicitly when saving new objects, if you cannot guarantee the primary-key value is unused. For more on this nuance, see Explicitly specifying auto-primary-key values above and Forcing an INSERT or UPDATE below.
As a side note: Django is open source, so when in doubt you can always read the source code ;-)
Depends on how the Model object was created. If it was queried from the database, UPDATE. If it's a new object and has not been saved before, INSERT.
Basically, I am building a Django app with a PostgreSQL database. On one table I use a trigger to prevent updates to selected columns. The trigger works properly from the database side: it blocks updates on the protected columns while still allowing updates to the columns it is not associated with.
From the Django side, however, whenever I try to update fields that are not associated with the trigger, the trigger still fires and blocks the update. It even fires when I try to insert a new record from Django, preventing the insert.
Can anyone please help?
Thank you
I found the solution to my problem.
I created a PostgreSQL trigger to prevent the updating of a few columns. From the database side, I tested it and it works completely fine. The problem was on the Django side.
On the database side, a single column can be updated on its own; it has no connection to the other columns, so updating just that column raises no error. For example, if a row has two fields, one modifiable and the other protected by the trigger, modifying the modifiable field works fine.
The problem is that Django always writes the whole object. If you have an object with two fields, one modifiable and the other not, but both mapped in Django, then when you save that object Django updates both fields, even if only one of them (or neither) has changed.
Because Django updates all the fields even when only one has changed, the trigger was being invoked.
So I had to look for an option that updates only the specified or modified fields rather than every field.
I found update_fields, which lets me specify which fields should be updated instead of writing the whole row.
https://docs.djangoproject.com/en/1.10/topics/migrations/
Here it says:
"PostgreSQL is the most capable of all the databases here in terms of schema support; the only caveat is that adding columns with default values will cause a full rewrite of the table, for a time proportional to its size.
"For this reason, it’s recommended you always create new columns with null=True, as this way they will be added immediately."
I am asking whether I understand this correctly.
From what I understand, I should first create the field with null=True and no default value and migrate, then give it a default value and migrate again; that way the column is added immediately. Otherwise the whole table would be rewritten, and Django's migrations won't do this two-step trick by themselves?
It's also mentioned in that same page that:
In addition, MySQL will fully rewrite tables for almost every schema operation and generally takes a time proportional to the number of rows in the table to add or remove columns. On slower hardware this can be worse than a minute per million rows - adding a few columns to a table with just a few million rows could lock your site up for over ten minutes.
and
SQLite has very little built-in schema alteration support, and so Django attempts to emulate it by:
- Creating a new table with the new schema
- Copying the data across
- Dropping the old table
- Renaming the new table to match the original name
So in short, what the statement you are referring to really says is: PostgreSQL exhibits MySQL-like behaviour when adding a new column with a default value.
The approach you are trying would work. Adding the column as nullable means no table rewrite. You can then alter the column to have a default value. However, existing NULLs will remain NULL.
The way I understand it, on the second migration the default value will not be written to the existing rows; only when a new row is created without a value for that field will the default be used.
I think the warning to use null=True for new columns is purely about performance. If you really want all existing rows to have the default value, just use default= and accept the cost of a full table rewrite.
My tables are specified as models in Python, which are then made into MySQL tables by Django. In Python:
class my_table(models.Model):
    is_correct = models.IntegerField(default=0)
If I do an insert into this table then I want it to automatically insert 0 for the column is_correct unless I specify a different value. I also want this to happen if I am using raw sql or if I am inserting from a MySQL stored procedure.
The default argument only seems to work from within Python. It's not translated into anything that MySQL sees. Is such a thing possible?
Yes, the default argument only works at the Django level. If you want to move it into MySQL, run an ALTER TABLE ... query manually.
You could also try extending models.IntegerField and overriding db_type, but that would have to be done for every field...