Django: cannot insert duplicate key value NULL - python

I need to allow NULL in enc_id, but if values are not null, I need those values to be unique. Here's my model:
class Intake(models.Model):
    id = models.AutoField(primary_key=True)
    enc_id = models.IntegerField(blank=True, null=True, unique=True)
    enc_date = models.DateField(null=True)
    enrollment = models.ForeignKey('Enrollment')
Error when trying to add another instance without an enc_id:
django.db.utils.IntegrityError: ('23000', "[23000] [FreeTDS][SQL Server]Violation of UNIQUE KEY constraint 'UQ__caseload__E136D21F4222D4EF'. Cannot insert duplicate key in object 'dbo.caseload_intake'. The duplicate key value is (<NULL>). (2627) (SQLExecDirectW)")
According to what I've read (including this and a few resolved Django issues), having blank=True, null=True, unique=True should allow me to have duplicate NULLs, but no go. I've recreated my DB just in case and it still raises the integrity error.
I'm running Django 1.10 and MS SQL Server 10. Any ideas?

For those who come up against this in the future - this is a MS SQL Server limitation. ANSI standards require that a UNIQUE index contains no duplicate values - SQL Server (but not other databases) consider NULL is equal to NULL - many other databases (e.g. Postgres) and the ANSI standard from 92 onward consider NULL to be an unknowable value, which is not the same as a different unknowable value -- consider in SQL SELECT WHERE __ IS NULL vs WHERE ___ = NULL. SQL Server has an ANSI_NULLS flag, which in effect forces the standards-compliant use of IS (NOT) NULL rather than xyz =/<> NULL in queries, but this doesn't have the same effect in indexes.
More non-Django-specific info here: https://www.sqlservergeeks.com/sql-server-unique-constraint-multiple-null-values/

Rosie,
it doesn't make a lot of sense to me to have a unique field where you can have duplicate NULLs. What about you remove the constraint and reinforce this rule by other means?
For example, you could override the method save in order to do that..
def save(self, *args, **kwargs):
# place your logic here
I hope this makes sense, but a rule at DB level such as the one you are defining is extremely limiting..

Related

Django rename field and create new one with the same name returns psycopg2 error: DuplicateTable: relation already exists

I have two Django (foreign key) fields - FieldA and FieldB - referring to the same class but being different objects. Then I wanted to rename FieldB to FieldC. I renamed this in one migration (automatic detection).
Then I realised I actually need a new field with the same name as the old FieldB (also foreign key to the same class). Therefore I created a second migration to add a new field: FieldB. Since I just renamed the other one I assumed that this would not give any issues in the DB.
Locally I develop on an SQLite DB and this works fine. When I pushed it to our Postgres DB, this returned an error.
Model class
class ModelClass(Model):
    field_a: ForeignClassA = models.ForeignKey(ForeignClassA, on_delete=models.SET_NULL, blank=True, null=True, related_name='FieldA')
    # this one should be renamed to field_c, after which I create another field with the same name and properties
    field_b: ForeignClassA = models.ForeignKey(ForeignClassA, on_delete=models.SET_NULL, blank=True, null=True, related_name='FieldB')
Migration one: rename
operations = [
    migrations.RenameField(
        model_name='modelname',
        old_name='FieldB',
        new_name='FieldC',
    ),
]
Migration two: add field
operations = [
    migrations.AddField(
        model_name='modelname',
        name='FieldB',
        field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='FieldB', to='app.othermodel'),
    ),
]
When I run this migration I get the following error
Applying app.xxx1_previous... OK
Applying app.xxx2_rename_field... OK
Applying app.xxx3_add_field...Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.10/site-packages/django/db/backends/utils.py", line 85, in _execute
return self.cursor.execute(sql, params)
psycopg2.errors.DuplicateTable: relation "app_modelname_fieldB_id_8c448c6a" already exists
Normally this column should have been deleted and replaced with a new column with the new name. I found this issue describing a similar problem in Postgres, and now I wonder if this is a bug in Django. Could it be that the cleanup from the rename is not done correctly?
EDIT 1
After a closer inspection of the Postgres DB I can see that after the rename I still have two columns in my table: FieldA_id and FieldB_id, while I would expect to have FieldA_id and FieldC_id. Obviously this creates an issue if I subsequently try to add FieldB again.
Could it be that Postgres (or the Django controller) does not rename this column for some reason?
EDIT 2
I inspected the SQL query to the Postgres DB. The following SQL is produced:
BEGIN;
--
-- Rename field field_b on modelclass to field_c
--
SET CONSTRAINTS "app_modelclass_field_b_id_8c448c6a_fk_otherapp_otherclass_id" IMMEDIATE; ALTER TABLE "app_modelclass" DROP CONSTRAINT "app_modelclass_field_b_id_8c448c6a_fk_otherapp_otherclass_id";
ALTER TABLE "app_modelclass" RENAME COLUMN "field_b_id" TO "field_c_id";
ALTER TABLE "app_modelclass" ADD CONSTRAINT "app_modelclass_field_c_id_9f82ac2c_fk_otherapp_otherclass_id" FOREIGN KEY ("field_c_id") REFERENCES "otherapp_otherclass" ("id") DEFERRABLE INITIALLY DEFERRED;
COMMIT;
This rename, however, seems to be only partially executed: the migration reports success, but the next step fails. Yet when I manually run this SQL query, the command succeeds and the upgrade is partly successful. In summary:
The SQL query works
While running the SQL, the column name is updated
While running the SQL, the constraint is updated
Still, the next step complains about a reference to this rename. I'm still clueless at the moment about what this might be.
After a long search down the SQL rabbit hole, I found out that the rename migration for PostgreSQL does not drop the old index.
When I then wanted to create the new field, it tried to create a new index with the same name as the old one (which was never removed).
A simple way to avoid this is to 'sandwich' the rename between two AlterField operations: one to unset the index and one to set it back afterwards.
operations = [
    migrations.AlterField(
        model_name='modelclass',
        name='field_b',
        field=models.ForeignKey(blank=True, db_index=False, null=True,
                                on_delete=django.db.models.deletion.SET_NULL,
                                to='otherapp.otherclass'),
    ),
    migrations.RenameField(
        model_name='modelclass',
        old_name='field_b',
        new_name='field_c',
    ),
    migrations.AlterField(
        model_name='modelclass',
        name='field_c',
        field=models.ForeignKey(blank=True, db_index=True, null=True,
                                on_delete=django.db.models.deletion.SET_NULL,
                                to='otherapp.otherclass'),
    ),
]
I also reported this issue on the Django bug tracker, where it was closed as a duplicate of a ticket from 7 years ago.
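If you suspect you have hit the same bug, you can ask Postgres directly which indexes are left on the table (the table and index names below are illustrative, matching the error message above):

```sql
-- list every index on the affected table
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'app_modelname';

-- if the stale index is still there, it can be dropped by hand:
-- DROP INDEX "app_modelname_fieldB_id_8c448c6a";
```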

Django implementation of default value in database

I had a field on a model which was:
class SomeModel(models.Model):
    some_field = models.CharField(max_length=10, null=True, blank=True)
Then I changed my model to:
class SomeModel(models.Model):
    some_field = models.CharField(max_length=10, default='')
When I ran django-admin sqlmigrate somemodels somemigration to check my migration I found the following changes:
ALTER TABLE "somemodels" ALTER COLUMN "some_field" SET DEFAULT '';
UPDATE "somemodels" SET "some_field" = '' WHERE "some_field" IS NULL;
ALTER TABLE "somemodels" ALTER COLUMN "some_field" SET NOT NULL;
ALTER TABLE "somemodels" ALTER COLUMN "some_field" DROP DEFAULT;
I don't understand why Django applies a DROP DEFAULT to the table, since I am creating a default value. If this is correct, how does Django implement default values?
Information about my tools:
Postgresql 9.5;
Django 1.11b1;
The comments in django/db/backends/base/schema.py, starting at line 571, detail the steps involved here:
When changing a column NULL constraint to NOT NULL with a given default value, we need to perform 4 steps:
1. Add a default for new incoming writes
2. Update existing NULL rows with the new default
3. Replace the NULL constraint with NOT NULL
4. Drop the default again
Django does not usually use the built-in SQL DEFAULT to set values (remember that Django defaults can be callables). You can find more information in this rejected bug report.
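As a rough sketch of why the column default is not kept: Django computes the default in Python when building the row to insert, which is what allows a default to be a callable. The helper below is a simplification for illustration, not Django's actual code:

```python
from datetime import datetime

def resolve_default(default):
    """Mimic how Django fills in a missing field value:
    call the default if it is callable, otherwise use it as-is."""
    return default() if callable(default) else default

static_value = resolve_default("")            # as in CharField(default='')
dynamic_value = resolve_default(datetime.now) # callable, evaluated per save
print(repr(static_value), repr(dynamic_value))
```

Because the value is always supplied by the application at INSERT time, a permanent SQL-level DEFAULT would never be consulted, so the migration only needs it temporarily to backfill existing rows.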

Django Migration Error with MySQL: BLOB/TEXT column 'id' used in key specification without a key length

We have a Django model that uses a BinaryField for its ID.
# Create your models here.
class Company(models.Model):
    id = models.BinaryField(max_length=16, primary_key=True)
    name = models.CharField(max_length=12)

    class Meta:
        db_table = "company"
We use a MySQL database and get an error when migrating.
File "/home/cuongtran/Downloads/sample/venv/lib/python3.5/site-packages/MySQLdb/connections.py", line 270, in query
_mysql.connection.query(self, query)
django.db.utils.OperationalError: (1170, "BLOB/TEXT column 'id' used in key specification without a key length")
Do you have any solution? We need to use MySQL and want to use BinaryField for the ID.
Thank you!
I think you cannot achieve this. Based on the Django documentation, the use of binary fields is discouraged:
A field to store raw binary data. It only supports bytes assignment.
Be aware that this field has limited functionality. For example, it is
not possible to filter a queryset on a BinaryField value. It is also
not possible to include a BinaryField in a ModelForm.
Abusing BinaryField
Although you might think about storing files in the database, consider
that it is bad design in 99% of the cases. This field is not a
replacement for proper static files handling.
And based on a Django bug, it is most likely impossible to achieve a unique-value restriction on a binary field. The bug is marked as wontfix. I say "most likely impossible" because I did not find evidence confirming that a binary field is stored as a BLOB column, but the error does allude to it.
Description
When I used a field like this:
text = models.TextField(maxlength=2048, unique=True)
it results in the following sql error when the admin app goes to make the table
_mysql_exceptions.OperationalError: (1170, "BLOB/TEXT column 'text' used in key specification without a key length")
After a bit of investigation, it turns out that mysql refuses to use unique with the column unless it is only for an indexed part of the text field:
CREATE TABLE `quotes` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `text` longtext NOT NULL, `submitTS` datetime NOT NULL, `submitIP` char(15) NOT NULL, `approved` bool NOT NULL, unique (text(1000)));
Of course 1000 is just an arbitrary number I chose, it happens to be the maximum my database would allow. Not entirely sure how this can be fixed, but I figured it was worth mentioning.
MySQL restricts a key on a BLOB/TEXT column to its first N characters. When you generate the migration file using Django's makemigrations command, BinaryField is mapped to longblob, which is a BLOB column in MySQL, without specifying a key length.
Which means your Django model definition :
class Company(models.Model):
    id = models.BinaryField(max_length=16, primary_key=True)
    name = models.CharField(max_length=12)

    class Meta:
        db_table = "company"
will be converted to a SQL expression that causes this error (you can inspect the detailed SQL with the sqlmigrate command):
CREATE TABLE `company` (`id` longblob NOT NULL PRIMARY KEY,
`name` varchar(12) NOT NULL);
while the correct SQL expression for MySQL should be like this :
CREATE TABLE `company` (`id` longblob NOT NULL,
`name` varchar(12) NOT NULL);
ALTER TABLE `company` ADD PRIMARY KEY (id(16));
where PRIMARY KEY (id(16)) comes from the length of your id in the BLOB column and is used to build the table's primary key index.
So the easiest solution, as described in the accepted answer, is to avoid BinaryField as a primary key in Django. Alternatively, you can manually add raw SQL to your migration file if you really need a BinaryField (BLOB column) as the primary key and you are sure the id field will NOT grow beyond the specified size (in your case, 16 bytes).
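A side note if you take the raw-SQL route: a 16-byte id pairs naturally with a UUID, since uuid4().bytes is always exactly 16 bytes:

```python
import uuid

# 16 raw bytes, suitable for a BinaryField(max_length=16) primary key
company_id = uuid.uuid4().bytes
print(len(company_id))
```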

Django IntegrityError with DateTimeField

I have a field in my model as follows:
view_time = ArrayField(
    models.DateTimeField(auto_now_add=True))
but I get this error:
django.db.utils.IntegrityError: null value in column "view_time" violates not-null constraint
DETAIL: Failing row contains (18, 0, null, null, null).
The error arises when I try to create a new object and add a value:
recent_views = UserRecentViews.objects.create()
recent_views.add_view(product.article)
I use Django 1.8.8 and Python 3.5.2.
I have reset the database a few times, but it doesn't help; the DB is Postgres.
Is the problem in object creation? Why can't Django create the object with the current datetime? auto_now_add=True was added for this.
My question is: how do I add an autogenerated datetime field with Django?
First of all, your database appears to be incompletely normalized. The use of comma-separated values or an array type in a column is usually a good indication of that.
Secondly.
Tip: Arrays are not sets; searching for specific array elements can be
a sign of database misdesign. Consider using a separate table with a
row for each item that would be an array element. This will be easier
to search, and is likely to scale better for a large number of
elements.
Arrays are just postgresql's way of giving you enough rope to ...
Your best bet really is to normalize your database. The inferior option is to set blank=True, null=True:
view_time = ArrayField(
    models.DateTimeField(auto_now_add=True), blank=True, null=True)
That's because when you do the following, Django has no reason to create any DateTimeField values at all:
recent_views = UserRecentViews.objects.create()
So it just sets the array field to null, which is not allowed.
Oh, to be more specific:
but why django can not create object with current datetime
Because you are not telling it to.
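For illustration, the normalized design can be sketched with sqlite3: one row per view, each stamped at insert time. The table and helper names are made up, and DEFAULT CURRENT_TIMESTAMP plays the role of auto_now_add:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE recent_view (
        id INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL,
        article TEXT NOT NULL,
        viewed_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
    )
""")

def add_view(user_id, article):
    # one row per view; the timestamp fills itself in at insert time
    conn.execute(
        "INSERT INTO recent_view (user_id, article) VALUES (?, ?)",
        (user_id, article),
    )

add_view(1, "A-100")
add_view(1, "A-101")
rows = conn.execute(
    "SELECT article, viewed_at FROM recent_view WHERE user_id = 1 ORDER BY id"
).fetchall()
print(rows)
```

This is also trivially searchable ("which users viewed article X?"), which the array design is not.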

Python mysql check for duplicate before insert

here is the table:
CREATE TABLE IF NOT EXISTS kompas_url
(
    id BIGINT(20) NOT NULL AUTO_INCREMENT,
    url VARCHAR(1000),
    created_date datetime,
    modified_date datetime,
    PRIMARY KEY(id)
)
I am trying to INSERT into the kompas_url table only if the url does not already exist.
Any ideas?
Thanks
You can either find out whether it's in there first, by SELECTing by url, or you can make the url field unique:
CREATE TABLE IF NOT EXISTS kompas_url
...
url VARCHAR(1000) UNIQUE,
...
)
This will stop MySQL from inserting a duplicate row, but it will also report an error when you try to insert. That isn't ideal: although we can handle the error, it might disguise others. To get around this, we use the ON DUPLICATE KEY UPDATE syntax:
INSERT INTO kompas_url (url, created_date, modified_date)
VALUES ('http://example.com', NOW(), NOW())
ON DUPLICATE KEY UPDATE modified_date = NOW()
This allows us to provide an UPDATE statement in the case of a duplicate value in a unique field (this can include your primary key). In this case, we probably want to update the modified_date field with the current date.
EDIT: As suggested by ~unutbu, if you don't want to change anything on a duplicate, you can use the INSERT IGNORE syntax. It works as follows:
INSERT IGNORE INTO kompas_url (url, created_date, modified_date)
VALUES ('http://example.com', NOW(), NOW())
This turns certain kinds of errors into warnings, most usefully the error raised for a duplicate entry in a unique column. With the keyword IGNORE in your statement, you won't get an error; the query will simply be dropped. In complex queries this may also hide other errors that might be useful, though, so it's best to make doubly sure your code is correct if you want to use it.
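Both patterns can be tried out with Python's sqlite3 module, whose INSERT OR IGNORE and ON CONFLICT ... DO UPDATE are SQLite's analogues of MySQL's INSERT IGNORE and ON DUPLICATE KEY UPDATE (the upsert form needs SQLite 3.24+); the hits column is invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE kompas_url "
    "(id INTEGER PRIMARY KEY, url TEXT UNIQUE, hits INTEGER DEFAULT 1)"
)

# drop the duplicate silently (analogue of INSERT IGNORE)
conn.execute("INSERT OR IGNORE INTO kompas_url (url) VALUES ('http://example.com')")
conn.execute("INSERT OR IGNORE INTO kompas_url (url) VALUES ('http://example.com')")

# or update the existing row on conflict (analogue of ON DUPLICATE KEY UPDATE)
conn.execute(
    "INSERT INTO kompas_url (url) VALUES ('http://example.com') "
    "ON CONFLICT(url) DO UPDATE SET hits = hits + 1"
)
print(conn.execute("SELECT url, hits FROM kompas_url").fetchall())
# → [('http://example.com', 2)]
```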
