I'm using Django with MySQL and having a problem after using 'inspectdb' command to create my models.py file.
DDL:
CREATE TABLE YDB_Collects (
COriginal_Data_Type_ID VARCHAR(16) NOT NULL,
CTask_Name VARCHAR(16) NOT NULL,
PRIMARY KEY (COriginal_Data_Type_ID, CTask_Name),
INDEX FK_COLLECTS_TASK (CTask_Name),
CONSTRAINT FK_COLLECTS_ORIGINAL_DATA_TYPE FOREIGN KEY (COriginal_Data_Type_ID) REFERENCES YDB_Original_Data_Type (Original_Data_Type_ID),
CONSTRAINT FK_COLLECTS_TASK FOREIGN KEY (CTask_Name) REFERENCES YDB_Task (Task_Name)
)
As you can see, COriginal_Data_Type_ID and CTask_Name are foreign keys, and together they form the composite primary key.
For this DDL, Django's 'inspectdb' command gave this model:
class YdbCollects(models.Model):
    coriginal_data_type = models.ForeignKey('YdbOriginalDataType', db_column='COriginal_Data_Type_ID')  # Field name made lowercase.
    ctask_name = models.ForeignKey('YdbTask', db_column='CTask_Name')  # Field name made lowercase.

    class Meta:
        managed = False
        db_table = 'ydb_collects'
        unique_together = (('COriginal_Data_Type_ID', 'CTask_Name'),)
But when I run 'makemigrations' command, it gives me the error message:
'unique_together' refers to the non-existent field 'COriginal_Data_Type_ID' and 'CTask_Name'
When I change:
unique_together = (('COriginal_Data_Type_ID', 'CTask_Name'),)
into:
unique_together = (('coriginal_data_type', 'ctask_name'),)
then yeah, it goes OK. But is this the correct way to go? It seems like the code describes a different schema from my DDL: the foreign key I originally defined was the ID of the data type, not the data type itself.
Did I do something wrong here? And where are my COriginal_Data_Type_ID and CTask_Name fields?
This has been fixed in Django 1.10 and later (and backported to Django 1.8.8 and 1.9).
It was a Django bug and the fix is here: django/django#2cb50f9
It involved this change in django/core/management/commands/inspectdb.py:
# tup = '(' + ', '.join("'%s'" % c for c in columns) + ')' # Change this
tup = '(' + ', '.join("'%s'" % column_to_field_name[c] for c in columns) + ')' # to this
unique_together.append(tup)
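To see what that one-line change does, here is the mapping applied to the columns from the DDL above. The dictionary contents are illustrative, mirroring the "Field name made lowercase" renames that inspectdb generates:

```python
# Illustrative sketch of the fix: map raw DB column names through
# column_to_field_name before writing them into unique_together.
column_to_field_name = {
    'COriginal_Data_Type_ID': 'coriginal_data_type',
    'CTask_Name': 'ctask_name',
}
columns = ['COriginal_Data_Type_ID', 'CTask_Name']

# Old (buggy) behavior: emits the raw column names, which don't exist as fields.
old_tup = '(' + ', '.join("'%s'" % c for c in columns) + ')'
# Fixed behavior: emits the generated (lowercased) field names.
new_tup = '(' + ', '.join("'%s'" % column_to_field_name[c] for c in columns) + ')'

print(old_tup)  # ('COriginal_Data_Type_ID', 'CTask_Name')
print(new_tup)  # ('coriginal_data_type', 'ctask_name')
```

This is exactly why the manual edit to `unique_together = (('coriginal_data_type', 'ctask_name'),)` works: Meta options must reference model field names, not database column names.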
Given models
from django.db import models

class RelatedTo(models.Model):
    pass

class Thing(models.Model):
    n = models.IntegerField()
    related_to = models.ForeignKey(RelatedTo, on_delete=models.CASCADE)

    class Meta:
        constraints = [
            models.UniqueConstraint(
                fields=['n', 'related_to'],
                name='unique_n_per_related_to'
            )
        ]
and
>>> r = RelatedTo.objects.create()
>>> thing_zero = Thing.objects.create(related_to=r, n=0)
>>> thing_one = Thing.objects.create(related_to=r, n=1)
I want to switch their numbers (n).
In the update method of my serializer (DRF) I was trying to:
@transaction.atomic
def update(self, instance, validated_data):
    old_n = instance.n
    new_n = validated_data['n']
    Thing.objects.filter(
        related_to=instance.related_to,
        n=new_n
    ).update(n=old_n)
    return super().update(instance, validated_data)
but it still runs into the constraint.
select_for_update doesn't help either.
Is it possible not to run into this DB constraint using Django ORM or do I have to run raw sql to achieve that?
Django==3.1.2
postgres:12.5
Error
duplicate key value violates unique constraint "unique_n_per_related_to"
DETAIL: Key (n, related_to)=(1, 1) already exists.
I wasn't able to resolve this issue with either bulk_update or raw SQL.
stmt = f"""
    update {to_update._meta.db_table} as t
    set n = i.n
    from (values
        ('{to_update.id}'::uuid, {n}),
        ('{method.id}'::uuid, {n})
    ) as i(id, n)
    where t.id = i.id
"""
with connection.cursor() as cur:
    cur.execute(stmt)
The only solution I found for this problem is making the column nullable and writing to the table three times, which physically hurts.
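For reference, the three-write workaround mentioned above (move one row to a sentinel value, then fill in both final values) can be sketched with plain sqlite3. Table and column names mirror the example models; this is an illustration, not the original project code:

```python
import sqlite3

# Two rows share related_to=1 with n=0 and n=1, under UNIQUE (n, related_to).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE thing (id INTEGER PRIMARY KEY, related_to INTEGER, "
    "n INTEGER, UNIQUE (n, related_to))"
)
conn.execute("INSERT INTO thing VALUES (1, 1, 0), (2, 1, 1)")

def swap_n(conn, id_a, id_b):
    (n_a,) = conn.execute("SELECT n FROM thing WHERE id = ?", (id_a,)).fetchone()
    (n_b,) = conn.execute("SELECT n FROM thing WHERE id = ?", (id_b,)).fetchone()
    # Write 1: park the first row on a sentinel (NULL never collides in UNIQUE).
    conn.execute("UPDATE thing SET n = NULL WHERE id = ?", (id_a,))
    # Write 2: the second row can now safely take the first row's old value.
    conn.execute("UPDATE thing SET n = ? WHERE id = ?", (n_a, id_b))
    # Write 3: finish the swap.
    conn.execute("UPDATE thing SET n = ? WHERE id = ?", (n_b, id_a))

swap_n(conn, 1, 2)
rows = dict(conn.execute("SELECT id, n FROM thing"))
print(rows)  # {1: 1, 2: 0}
```

The same three UPDATEs work inside a single transaction on Postgres; the constraint is checked per statement, which is why the single-statement swap keeps failing while the staged version does not.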
Python version 3.6.8,
peewee version 3.10.0
I have 3 tables set up in a sqlite database using peewee.
Plan:
- id : int (primary key)
- plan_name : varchar (unique)
- status : int (foreign key for PlanStatus)
- category : int (foreign key for PlanCategory)
PlanStatus:
- id : int (primary key)
- value : varchar (unique)
PlanCategory:
- id : int (primary key)
- value : varchar (unique)
PlanStatus is an enum reference table, and PlanCategory is another enum reference table. In the code below, PlanStatus is implemented naively with much boilerplate that would have to be duplicated for each other enum table.
In contrast, PlanCategory inherits from parent class EnumBaseModel, including 2 classmethods. The goal is to reduce boilerplate with inheritance.
The result is that both enum tables were populated successfully, and you can access values from them with queries. However, in creating a Plan entry, a row is added in the database (inspected in sqlite), but a select query returns the row with a missing value for the PlanCategory foreign key.
Creating tables and adding rows:
from peewee import *

DATABASE = SqliteDatabase('test.db')

# Base class with the inner Meta class defined
class BaseModel(Model):
    class Meta:
        database = DATABASE

# PlanStatus class, used with the following 2 functions
class PlanStatus(BaseModel):
    value = CharField(unique=True)

# Helper function for PlanStatus
def init_plan_status_values(values):
    for value in values:
        if not PlanStatus.select().where(PlanStatus.value == value).exists():
            PlanStatus.create(value=value)

# Helper function for PlanStatus
def get_plan_status(value):
    try:
        return PlanStatus.get(PlanStatus.value == value)
    except DoesNotExist as err:
        return None

# Base class with 2 classmethods
class EnumBaseModel(BaseModel):
    value = CharField(unique=True)

    @classmethod
    def init_values(cls, values):
        for value in values:
            if not cls.select().where(cls.value == value).exists():
                cls.create(value=value)

    @classmethod
    def get(cls, value):
        try:
            return cls.select().where(cls.value == value).get()
        except DoesNotExist as err:
            return None

# PlanCategory inherits EnumBaseModel class and its 2 classmethods
class PlanCategory(EnumBaseModel):
    pass

# Plan has 2 foreign keys
class Plan(BaseModel):
    plan_name = CharField(unique=True)
    status = ForeignKeyField(model=PlanStatus, backref='plans')
    category = ForeignKeyField(model=PlanCategory, backref='plans')

DATABASE.connect()
DATABASE.create_tables(
    [
        PlanStatus,
        PlanCategory,
        Plan
    ],
    safe=True
)

# Populating the enum values in PlanStatus the explicit way above
init_plan_status_values(('STATUS-1', 'STATUS-2', 'STATUS-3'))

# Find status_2 the explicit way above
status_2 = get_plan_status('STATUS-2')

# Populating the enum values in PlanCategory using the inherited classmethod above
PlanCategory.init_values(('CATEGORY-1', 'CATEGORY-2', 'CATEGORY-3'))

# Find category_3 using the inherited classmethod above
category_3 = PlanCategory.get('CATEGORY-3')

# Add one plan
try:
    Plan.create(
        plan_name='not bad plan',
        status=status_2,
        category=category_3,
    )
except IntegrityError as err:
    print(err)
Now we see in sqlite3 that the rows were added successfully:
SQLite version 3.22.0 2018-01-22 18:45:57
Enter ".help" for usage hints.
sqlite> .tables
plan plancategory planstatus
sqlite> select * from planstatus;
1|STATUS-1
2|STATUS-2
3|STATUS-3
sqlite> select * from plancategory;
1|CATEGORY-1
2|CATEGORY-2
3|CATEGORY-3
sqlite> select * from plan;
1|not bad plan|2|3
sqlite>
Now checking the plan entry from the select() query, 'a_plan.status' is valid, but 'a_plan.category' is None.
# We see the references status_2 and category_3 are valid
print('status_2 = ', type(status_2), status_2, status_2.value)
print('category_3 = ', type(category_3), category_3, category_3.value)
print()
# We check the one plan in the table and see now the foreign-key value "category" is missing
a_plan = Plan.get()
print('a_plan: plan_name={}, status={}, category={}'.format(
    a_plan.plan_name,
    a_plan.status,
    a_plan.category
))
print()
Printed results:
status_2 = <Model: PlanStatus> 2 STATUS-2
category_3 = <Model: PlanCategory> 3 CATEGORY-3
a_plan: plan_name=not bad plan, status=2, category=None
Additionally, I found attributes 'status_id' and 'category_id' created by peewee. At least 'category_id' still retains the foreign key int value.
# After inspecting dir(a_plan), found these attributes:
print('status = ', type(a_plan.status), a_plan.status)
print('status_id = ', type(a_plan.status_id), a_plan.status_id)
print('category = ', type(a_plan.category), a_plan.category)
print('category_id = ', type(a_plan.category_id), a_plan.category_id)
Printed results:
status = <Model: PlanStatus> 2
status_id = <class 'int'> 2
category = <class 'NoneType'> None
category_id = <class 'int'> 3
Is there any way to fix the problem so it can resolve 'a_plan.category'?
You're overriding methods (.get) that are used by Peewee. Don't do that! I think you are making things too magical (and introducing queries all over the place in the process).
Try simplifying. I can almost guarantee the issue is in the overrides you're doing of classmethods that Peewee depends on.
How can I update a table's columns and column data types in Peewee?
I have already created the table Person in the database from my model. But I've now added some new fields to the model and changed the type of certain existing fields/columns.
The following doesn't update the table structure:
psql_db = PostgresqlExtDatabase(
    'MyDB',
    user='foo',
    password='bar',
    host='',
    port='5432',
    register_hstore=False
)

class PsqlModel(Model):
    """A base model that will use our Postgresql database"""
    class Meta:
        database = psql_db

class Person(PsqlModel):
    name = CharField()
    birthday = DateField()        # New field
    is_relative = BooleanField()  # Field type changed from varchar to bool

    def __str__(self):
        return '%s, %s, %s' % (self.name, self.birthday, self.is_relative)
psql_db.connect()
# is there a function to update/change the models table columns??
psql_db.create_tables([Person], True) # Hoping an update of the table columns occurs
# Error because no column birthday and incorrect type for is_relative
grandma_glen = Person.create(name='Glen', birthday=date(1966,1,12), is_relative=True)
From the documentation: http://docs.peewee-orm.com/en/latest/peewee/example.html?highlight=alter
Adding fields after the table has been created will require you to
either drop the table and re-create it, or manually add the columns
using an ALTER TABLE query.
Alternatively, you can use the schema migrations extension to alter
your database schema using Python.
From http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#migrate:
# Postgres example:
from playhouse.migrate import PostgresqlMigrator, migrate

my_db = PostgresqlDatabase(...)
migrator = PostgresqlMigrator(my_db)

title_field = CharField(default='')
status_field = IntegerField(null=True)

migrate(
    migrator.add_column('some_table', 'title', title_field),
    migrator.rename_column('some_table', 'pub_date', 'publish_date'),
    migrator.add_column('some_table', 'status', status_field),
    migrator.drop_column('some_table', 'old_column'),
)
Many other operations are possible as well.
So, first you will need to alter the table schema, and then, you can update your model to reflect those changes.
We ran into a known issue in Django:
IntegrityError during Many To Many add()
There is a race condition if several processes/requests try to add the same row to a ManyToManyRelation.
How to work around this?
Environment:
Django 1.9
Linux Server
Postgres 9.3 (An update could be made, if necessary)
Details
How to reproduce it:
my_user.groups.add(foo_group)
Above fails if two requests try to execute this code at once. Here is the database table and the failing constraint:
myapp_egs_d=> \d auth_user_groups
id | integer | not null default ...
user_id | integer | not null
group_id | integer | not null
Indexes:
"auth_user_groups_pkey" PRIMARY KEY, btree (id)
fails ==> "auth_user_groups_user_id_group_id_key" UNIQUE CONSTRAINT,
btree (user_id, group_id)
Environment
Since this only happens on production machines, and all production machines in my context run postgres, a postgres only solution would work.
Can the error be reproduced?
Yes, let us use the famed Publication and Article models from Django docs. Then, let's create a few threads.
import threading
import random

def populate():
    for i in range(100):
        Article.objects.create(headline='headline{0}'.format(i))
        Publication.objects.create(title='title{0}'.format(i))
    print('created objects')

class MyThread(threading.Thread):
    def run(self):
        for q in range(1, 100):
            for i in range(1, 5):
                pub = Publication.objects.all()[random.randint(1, 2)]
                for j in range(1, 5):
                    article = Article.objects.all()[random.randint(1, 15)]
                    pub.article_set.add(article)
        print(self.name)

Article.objects.all().delete()
Publication.objects.all().delete()
populate()

thrd1 = MyThread()
thrd2 = MyThread()
thrd3 = MyThread()

thrd1.start()
thrd2.start()
thrd3.start()
You are sure to see unique key constraint violations of the type reported in the bug report. If you don't see them, try increasing the number of threads or iterations.
Is there a work around?
Yes. Use through models and get_or_create. Here is the models.py adapted from the example in the django docs.
class Publication(models.Model):
    title = models.CharField(max_length=30)

    def __str__(self):              # __unicode__ on Python 2
        return self.title

    class Meta:
        ordering = ('title',)

class Article(models.Model):
    headline = models.CharField(max_length=100)
    publications = models.ManyToManyField(Publication, through='ArticlePublication')

    def __str__(self):              # __unicode__ on Python 2
        return self.headline

    class Meta:
        ordering = ('headline',)

class ArticlePublication(models.Model):
    article = models.ForeignKey('Article', on_delete=models.CASCADE)
    publication = models.ForeignKey('Publication', on_delete=models.CASCADE)

    class Meta:
        unique_together = ('article', 'publication')
Here is the new threading class which is a modification of the one above.
class MyThread2(threading.Thread):
    def run(self):
        for q in range(1, 100):
            for i in range(1, 5):
                pub = Publication.objects.all()[random.randint(1, 2)]
                for j in range(1, 5):
                    article = Article.objects.all()[random.randint(1, 15)]
                    ap, c = ArticlePublication.objects.get_or_create(article=article, publication=pub)
        print('Get or create', self.name)
You will find that the exception no longer shows up. Feel free to increase the number of iterations. I only went up to 1000 with get_or_create and it didn't throw the exception, whereas add() usually threw an exception within 20 iterations.
Why does this work?
Because get_or_create is atomic.
This method is atomic assuming correct usage, correct database
configuration, and correct behavior of the underlying database.
However, if uniqueness is not enforced at the database level for the
kwargs used in a get_or_create call (see unique or unique_together),
this method is prone to a race-condition which can result in multiple
rows with the same parameters being inserted simultaneously.
Update:
Thanks @louis for pointing out that the through model can in fact be eliminated. Thus the get_or_create in MyThread2 can be changed to:
ap, c = article.publications.through.objects.get_or_create(
    article=article, publication=pub)
If you are ready to solve it in PostgreSQL you may do the following in psql:
-- Create a RULE and function to intercept all INSERT attempts to the table and perform a check whether row exists:
CREATE RULE auth_user_group_ins AS
ON INSERT TO auth_user_groups
WHERE (EXISTS (SELECT 1
FROM auth_user_groups
WHERE user_id=NEW.user_id AND group_id=NEW.group_id))
DO INSTEAD NOTHING;
The table will then silently ignore duplicate inserts while still accepting new rows:
db=# TRUNCATE auth_user_groups;
TRUNCATE TABLE
db=# INSERT INTO auth_user_groups (user_id, group_id) VALUES (1,1);
INSERT 0 1 -- added
db=# INSERT INTO auth_user_groups (user_id, group_id) VALUES (1,1);
INSERT 0 0 -- no insert no error
db=# INSERT INTO auth_user_groups (user_id, group_id) VALUES (1,2);
INSERT 0 1 -- added
db=# SELECT * FROM auth_user_groups; -- check
id | user_id | group_id
----+---------+----------
14 | 1 | 1
16 | 1 | 2
(2 rows)
db=#
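Since the question notes that a Postgres upgrade is an option: PostgreSQL 9.5+ offers INSERT ... ON CONFLICT DO NOTHING, which achieves the same effect as the RULE without any extra database objects. SQLite has supported the same clause since 3.24, so the behavior can be sketched locally (table layout mirrors auth_user_groups):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE auth_user_groups (
        id INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL,
        group_id INTEGER NOT NULL,
        UNIQUE (user_id, group_id)
    )
""")

ins = ("INSERT INTO auth_user_groups (user_id, group_id) "
       "VALUES (?, ?) ON CONFLICT DO NOTHING")
conn.execute(ins, (1, 1))  # added
conn.execute(ins, (1, 1))  # duplicate: silently skipped, no IntegrityError
conn.execute(ins, (1, 2))  # added

count = conn.execute("SELECT COUNT(*) FROM auth_user_groups").fetchone()[0]
print(count)  # 2
```

Unlike the RULE, this only changes the statements you choose to write that way, so other code paths still get normal constraint errors.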
From what I'm seeing in the code provided, I believe you have a uniqueness constraint on the pair (user_id, group_id) in groups. That's why running the same query twice fails: both attempts try to add a row with the same user_id and group_id, so the first one to execute passes, but the second raises an exception.
I have a model with a unique_together defined for 3 fields to be unique together:
class MyModel(models.Model):
    clid = models.AutoField(primary_key=True, db_column='CLID')
    csid = models.IntegerField(db_column='CSID')
    cid = models.IntegerField(db_column='CID')
    uuid = models.CharField(max_length=96, db_column='UUID', blank=True)

    class Meta(models.Meta):
        unique_together = [
            ["csid", "cid", "uuid"],
        ]
Now, if I attempt to save a MyModel instance with an existing csid+cid+uuid combination, I would get:
IntegrityError: (1062, "Duplicate entry '1-1-1' for key 'CSID'")
Which is correct. But, is there a way to customize that key name? (CSID in this case)
In other words, can I provide a name for a constraint listed in unique_together?
As far as I understand, this is not covered in the documentation.
It's not well documented, but depending on whether you are using Django 1.6 or 1.7, there are two ways you can do this:
In Django 1.6 you can override the unique_error_message, like so:
class MyModel(models.Model):
    clid = models.AutoField(primary_key=True, db_column='CLID')
    csid = models.IntegerField(db_column='CSID')
    cid = models.IntegerField(db_column='CID')
    # ....

    def unique_error_message(self, model_class, unique_check):
        if model_class == type(self) and unique_check == ("csid", "cid", "uuid"):
            return _('Your custom error')
        else:
            return super(MyModel, self).unique_error_message(model_class, unique_check)
Or in Django 1.7:
class MyModel(models.Model):
    clid = models.AutoField(primary_key=True, db_column='CLID')
    csid = models.IntegerField(db_column='CSID')
    cid = models.IntegerField(db_column='CID')
    uuid = models.CharField(max_length=96, db_column='UUID', blank=True)

    class Meta(models.Meta):
        unique_together = [
            ["csid", "cid", "uuid"],
        ]
        error_messages = {
            NON_FIELD_ERRORS: {
                'unique_together': "%(model_name)s's %(field_labels)s are not unique.",
            }
        }
Changing the index name in ./manage.py sqlall output: you could run ./manage.py sqlall yourself, add the constraint name to the generated SQL, and apply it manually instead of using syncdb.
$ ./manage.py sqlall test
BEGIN;
CREATE TABLE `test_mymodel` (
    `CLID` integer AUTO_INCREMENT NOT NULL PRIMARY KEY,
    `CSID` integer NOT NULL,
    `CID` integer NOT NULL,
    `UUID` varchar(96) NOT NULL,
    UNIQUE (`CSID`, `CID`, `UUID`)
)
;
COMMIT;
e.g.
$ ./manage.py sqlall test
BEGIN;
CREATE TABLE `test_mymodel` (
    `CLID` integer AUTO_INCREMENT NOT NULL PRIMARY KEY,
    `CSID` integer NOT NULL,
    `CID` integer NOT NULL,
    `UUID` varchar(96) NOT NULL,
    UNIQUE constraint_name (`CSID`, `CID`, `UUID`)
)
;
COMMIT;
Overriding BaseDatabaseSchemaEditor._create_index_name
The solution pointed out by @danihp is incomplete; it only works for field updates (BaseDatabaseSchemaEditor._alter_field).
The SQL I get by overriding _create_index_name is:
BEGIN;
CREATE TABLE "testapp_mymodel" (
    "CLID" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
    "CSID" integer NOT NULL,
    "CID" integer NOT NULL,
    "UUID" varchar(96) NOT NULL,
    UNIQUE ("CSID", "CID", "UUID")
)
;
COMMIT;
Overriding BaseDatabaseSchemaEditor.create_model
based on https://github.com/django/django/blob/master/django/db/backends/schema.py
class BaseDatabaseSchemaEditor(object):
    # Overrideable SQL templates
    sql_create_table_unique = "UNIQUE (%(columns)s)"
    sql_create_unique = "ALTER TABLE %(table)s ADD CONSTRAINT %(name)s UNIQUE (%(columns)s)"
    sql_delete_unique = "ALTER TABLE %(table)s DROP CONSTRAINT %(name)s"
and this is the piece in create_model that is of interest:
# Add any unique_togethers
for fields in model._meta.unique_together:
    columns = [model._meta.get_field_by_name(field)[0].column for field in fields]
    column_sqls.append(self.sql_create_table_unique % {
        "columns": ", ".join(self.quote_name(column) for column in columns),
    })
Conclusion
You could:
override create_model to use _create_index_name for unique_together constraints.
modify the sql_create_table_unique template to include a name parameter.
You may also be able to check a possible fix on this ticket:
https://code.djangoproject.com/ticket/24102
The integrity error is raised by the database, not by Django:
create table t ( a int, b int , c int);
alter table t add constraint u unique ( a,b,c); <-- 'u'
insert into t values ( 1,2,3);
insert into t values ( 1,2,3);
Duplicate entry '1-2-3' for key 'u' <---- 'u'
That means you need to create the constraint with the desired name in the database. But it is Django, during migrations, that names the constraint. Look into _create_unique_sql:
def _create_unique_sql(self, model, columns):
    return self.sql_create_unique % {
        "table": self.quote_name(model._meta.db_table),
        "name": self.quote_name(self._create_index_name(model, columns, suffix="_uniq")),
        "columns": ", ".join(self.quote_name(column) for column in columns),
    }
It is _create_index_name that holds the algorithm for naming constraints:
def _create_index_name(self, model, column_names, suffix=""):
    """
    Generates a unique name for an index/unique constraint.
    """
    # If there is just one column in the index, use a default algorithm from Django
    if len(column_names) == 1 and not suffix:
        return truncate_name(
            '%s_%s' % (model._meta.db_table, self._digest(column_names[0])),
            self.connection.ops.max_name_length()
        )
    # Else generate the name for the index using a different algorithm
    table_name = model._meta.db_table.replace('"', '').replace('.', '_')
    index_unique_name = '_%x' % abs(hash((table_name, ','.join(column_names))))
    max_length = self.connection.ops.max_name_length() or 200
    # If the index name is too long, truncate it
    index_name = ('%s_%s%s%s' % (
        table_name, column_names[0], index_unique_name, suffix,
    )).replace('"', '').replace('.', '_')
    if len(index_name) > max_length:
        part = ('_%s%s%s' % (column_names[0], index_unique_name, suffix))
        index_name = '%s%s' % (table_name[:(max_length - len(part))], part)
    # It shouldn't start with an underscore (Oracle hates this)
    if index_name[0] == "_":
        index_name = index_name[1:]
    # If it's STILL too long, just hash it down
    if len(index_name) > max_length:
        index_name = hashlib.md5(force_bytes(index_name)).hexdigest()[:max_length]
    # It can't start with a number on Oracle, so prepend D if we need to
    if index_name[0].isdigit():
        index_name = "D%s" % index_name[:-1]
    return index_name
For the current django version (1.7) the constraint name for a composite unique constraint looks like:
>>> _create_index_name( 'people', [ 'c1', 'c2', 'c3'], '_uniq' )
'myapp_people_c1_d22a1efbe4793fd_uniq'
You would need to override _create_index_name in some way to change the algorithm. One approach, perhaps, is writing your own db backend inheriting from mysql and overriding _create_index_name in the DatabaseSchemaEditor of your schema.py (not tested).
I believe you have to do that in your database.
MySQL:
ALTER TABLE `votes` ADD UNIQUE `unique_index`(`user`, `email`, `address`);
I believe the error would then say ... for key 'unique_index'
One solution is to catch the IntegrityError at save() and then build a custom error message as you want, as below:
try:
    obj = MyModel()
    obj.csid = 1
    obj.cid = 1
    obj.uuid = 1
    obj.save()
except IntegrityError:
    message = "IntegrityError: Duplicate entry '1-1-1' for key 'CSID', 'cid', 'uuid' "
Now you can use this message to display as error message.