Why do I receive sqlalchemy.exc.IntegrityError - python

I'm facing an error whose meaning I can't understand.
These are my models:
class User(db.Model):
    __tablename__ = 'users'
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(255))

class UserFile(db.Model):
    __tablename__ = 'user_files'
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(255))
    author_id = db.Column(db.Integer, db.ForeignKey('users.id'), nullable=False)
    author = db.relationship(User, foreign_keys=[author_id])
I need to do a number of additional steps when I delete a UserFile instance.
When a UserFile instance is deleted directly, I can do whatever I need to do. There is a problem when the User instance is deleted. In this case, I need to remove all UserFile instances associated with the User. But I can't use cascade deletion, because I need to perform additional actions for each UserFile.
I tried using the SQLAlchemy 'before_delete' event, but I got an error because it apparently ran after the deletion, even though it is called 'before'. I verified this by printing a message to the console and not seeing it until after the error was raised.
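For reference, a minimal sketch of what that attempt presumably looked like, using SQLAlchemy's mapper-level before_delete event (model names from this question). Note that the SQLAlchemy documentation warns against running session-level operations such as queries inside these hooks, which fits the behavior described:

from sqlalchemy import event

@event.listens_for(User, 'before_delete')
def before_user_delete(mapper, connection, target):
    # Fires during the flush, right before the DELETE statement for this
    # User is emitted; ORM queries issued here interact badly with the
    # flush that is already in progress.
    print('about to delete user', target.id)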
Then I tried using Flask-SQLAlchemy signals. I did:
from flask_sqlalchemy import before_models_committed

@before_models_committed.connect_via(app)
def delete_all_user_folders_after_delete(sender, changes):
    for obj, operation in changes:
        if isinstance(obj, User) and operation == 'delete':
            print('files: ', UserFile.query.filter_by(author_id=obj.id, parent_id=None).all())
            for item in UserFile.query.filter_by(author_id=obj.id, parent_id=None).all():
                print(item)
                delete_file(item, True)
And I got the error on this line:
print('files: ', UserFile.query.filter_by(author_id=obj.id, parent_id=None).all())
What is the cause of this error, and how do I properly delete all UserFile instances before deleting a User?
Error description:
sqlalchemy.exc.IntegrityError: (raised as a result of Query-invoked autoflush; consider using a session.no_autoflush block if this flush is occurring prematurely) (psycopg2.IntegrityError) update or delete on table "users" violates foreign key constraint "user_files_author_id_fkey" on table "user_files"
DETAIL: Key (id)=(2) is still referenced from table "user_files".

The query performed in delete_all_user_folders_after_delete() causes autoflush, which flushes the deletions prematurely, before your manual cleanup has been done. The default referential action in PostgreSQL is NO ACTION, which "means that if any referencing rows still exist when the constraint is checked, an error is raised". It would seem that you have not deferred the constraint in question, so it is checked immediately.
You could perhaps try the solution proposed in the error message:
@before_models_committed.connect_via(app)
def delete_all_user_folders_after_delete(sender, changes):
    for obj, operation in changes:
        if isinstance(obj, User) and operation == 'delete':
            with db.session.no_autoflush:
                fs = UserFile.query.filter_by(author_id=obj.id, parent_id=None).all()
                print('files: ', fs)
                for item in fs:
                    print(item)
                    # If delete_file() queries the DB, remember to disable autoflush there as well
                    delete_file(item, True)
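The other lever mentioned above is the constraint itself: if it were deferrable, PostgreSQL would check it only at commit time rather than immediately. A hedged sketch of how the foreign key in UserFile could be declared deferrable (assuming PostgreSQL, and that you can recreate the constraint on an existing table):

author_id = db.Column(
    db.Integer,
    db.ForeignKey('users.id', deferrable=True, initially='DEFERRED'),
    nullable=False)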

Related

sqlalchemy integrity error, unique constraint failed owner.owner_id

I am in the process of bulk-adding objects to my database, on the condition that each object:
a) doesn't already exist in my database
b) isn't already in the process of being created (because the data I'm working with could contain duplicates)
Previously, to handle step b, I was storing the "to-be-created" objects in a list, and iterating through it to check if there was a matching object, but I stumbled upon sets and figured that I could just run the `.add` method on the set and know that the collection was being deduped for me.
I'm testing with a fresh database, so I know my issue exists in the process of step b.
My code looks something like
new_owners = set()
for item in items:
    owner = Owner.find_owner_by_id(item['owner']['id'])
    if owner is None:
        owner = Owner(owner_id=item['owner']['id'], display_name=item['owner']['display_name'])
        new_owners.add(owner)

# print to check deduped set of owners
for owner in new_owners:
    print(f'{owner.display_name} | {owner.owner_id}')

db.session.add_all(new_owners)
db.session.commit()
Owner.py
@dataclass()
class Owner(db.Model):
    __tablename__ = 'owner'
    id = Column(Integer, primary_key=True)
    owner_id = Column(String(40), unique=True)
    display_name = Column(String(128), nullable=False)

    def __eq__(self, other):
        return self.owner_id == other.owner_id

    def __hash__(self):
        return hash(self.owner_id)
I'm not sure what I am missing at this point, because my `print` check before adding the objects to the database session doesn't show any duplicate objects, but somehow I still get this unique constraint error.
[SQL: INSERT INTO owner (owner_id, display_name) VALUES (?, ?)]
[parameters: ('yle42kxojqswhkwj77bb34g7x', 'RealBrhaka')]
Which would only happen if this object was in the given data more than once, but I would expect the set to handle deduping this.
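The dedup assumption itself is easy to check in isolation. A standalone sketch (PlainOwner is a hypothetical stand-in for the mapped Owner class):

class PlainOwner:
    def __init__(self, owner_id):
        self.owner_id = owner_id
    def __eq__(self, other):
        return self.owner_id == other.owner_id
    def __hash__(self):
        return hash(self.owner_id)

s = set()
s.add(PlainOwner('yle42kxojqswhkwj77bb34g7x'))
s.add(PlainOwner('yle42kxojqswhkwj77bb34g7x'))
print(len(s))  # 1 -- equal owner_id values are deduped by the set

If this prints 1, the set logic is sound, and the duplicates must be entering through some other path (for example, across separate batches or commits).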

Generic solution to delete records which not used as a foreign key

Here is the setup. There are several models: ParentA, ParentB, ChildAA, ChildBA and so on. The relationship between ParentX and ChildXY is that ChildXY has a foreign key to ParentX, for example:
# this is ParentA
class ParentA(Base):
    __tablename__ = 'parenta'
    id = Column(Integer, primary_key=True)
    name = Column(String(12))
    need_delete = Column(Integer)
    children = relationship("ChildAA", back_populates="parent")

# this is ChildAA
class ChildAA(Base):
    __tablename__ = 'childaa'
    name = Column(String(12))
    id = Column(Integer, primary_key=True)
    need_delete = Column(Integer)
    parenta_id = Column(Integer, ForeignKey('parenta.id'))
    parenta = relationship("ParentA")

# this is ParentB
........
And I want to delete all the records (all the ChildX and ParentX included) whose 'need_delete' attribute is 1 and which haven't been used as a foreign key by a child table. I found a direct but complicated way: I can first go through all the ChildX tables and safely remove records, and then go to the ParentX tables and delete records with a code block for each, one by one:
# deletion for ParentA
for parent in session.query(ParentA).join(ParentA.children).group_by(ParentA).having(func.count(ChildAA.id) == 0):
    if parent.need_delete == 1:
        session.delete(parent)
# deletion for ParentB
......
# deletion for ParentC
.....
session.commit()
And this is hard-coded. Is there any generic way to delete records that aren't currently used as a foreign key?
You could use NOT EXISTS, an antijoin, to query those parents which have no children and need delete:
from sqlalchemy import inspect

# After you've cleaned up the child tables:
# (Replace the triple dot with the rest of your parent types)
for parent_type in [ParentA, ParentB, ...]:
    # Query for `parent_type` rows that need delete
    q = session.query(parent_type).filter(parent_type.need_delete == 1)
    # Go through all the relationships
    for rel in inspect(parent_type).relationships:
        # Add a NOT EXISTS(...) to the query predicates (the antijoin)
        q = q.filter(~getattr(parent_type, rel.key).any())
    # Issue a bulk delete. Replace `False` with 'fetch' if you need to
    # synchronize the deletions with the ongoing SQLAlchemy session.
    # In your example you commit right after the deletions, which expires
    # instances in the session, so no synchronization is required.
    q.delete(synchronize_session=False)

...
session.commit()
Instead of first querying all the instances into the session and marking them for deletion one by one, this uses a bulk delete.
Do note that you must be explicit about your relationships, and the parent side must be defined. If you have foreign keys referring to parent tables that are not defined as a SQLAlchemy relationship on the parent, you'll probably get unwanted deletions of children (depending on how the foreign key constraints are configured).
Another approach could be to configure your foreign key constraints to restrict deletions and handle the raised errors in a subtransaction (savepoint), but I suppose you've already set up your schema, and that would require altering the existing foreign key constraints.
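A rough sketch of that savepoint idea, assuming the foreign keys are configured to raise on deletion of a still-referenced parent (names from this question; each failed DELETE is rolled back to its own savepoint, leaving the outer transaction usable):

from sqlalchemy.exc import IntegrityError

for parent in session.query(ParentA).filter(ParentA.need_delete == 1).all():
    try:
        # SAVEPOINT: a failed DELETE rolls back only this inner block
        with session.begin_nested():
            session.delete(parent)
            session.flush()  # force the DELETE so the constraint is checked here
    except IntegrityError:
        pass  # still referenced from a child table; leave it in place
session.commit()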

sqlalchemy one-to-many ORM update error

I have two tables, Eca_users and Eca_user_emails; one user can have many emails. I receive JSON with users and their emails, and I want to load them into an MS SQL database. Users can update their emails, so in this JSON I can get the same users with new (or changed) emails.
My code:
# some imports here
Base = declarative_base()

class Eca_users(Base):
    __tablename__ = 'eca_users'
    sql_id = sqlalchemy.Column(sqlalchemy.Integer(), primary_key=True)
    first_id = sqlalchemy.Column(sqlalchemy.String(15))
    name = sqlalchemy.Column(sqlalchemy.String(200))
    main_email = sqlalchemy.Column(sqlalchemy.String(200))
    user_emails = relationship("Eca_user_emails", backref=backref('eca_users'))

class Eca_user_emails(Base):
    __tablename__ = 'user_emails'
    sql_id = sqlalchemy.Column(sqlalchemy.Integer(), primary_key=True)
    email_address = Column(String(200), nullable=False)
    status = Column(String(10), nullable=False)
    active = Column(DateTime, nullable=True)
    sql_user_id = Column(Integer, ForeignKey('eca_users.sql_id'))

def main():
    engine = sqlalchemy.create_engine('mssql+pymssql://user:pass/ECAusers?charset=utf8')
    Session = sessionmaker()
    Session.configure(bind=engine)
    session = Session()
    # then I get my JSON, parse it and...
    query = session.query(Eca_users).filter(Eca_users.first_id == str(user_id))
    if query.count() == 0:
        pass  # not interesting now
    else:
        for exstUser in query:
            exstUser.name = name  # update user info
            exstUser.user_emails = []  # empty old emails
            # creating new Email obj
            newEmail = Eca_user_emails(email_address=email_record['email'],
                                       status=email_record['status'],
                                       active=active_date)
            exstUser.user_emails.append(newEmail)  # and I get the error here because of autoflush
    session.commit()

if __name__ == '__main__':
    main()
Error message:
sqlalchemy.exc.IntegrityError: ...
[SQL: 'UPDATE user_emails SET sql_user_id=%(sql_user_id)s WHERE user_emails.sql_id = %(user_emails_sql_id)s'] [parameters: {'sql_user_id': None, 'user_emails_sql_id': Decimal('1')}]
I can't figure out why this sql_user_id is None :(
When I check the exstUser and newEmail objects in the debugger, everything looks fine. I mean, all the references are OK. The session object and its dirty attribute also look OK in the debugger (sql_user_id is set on the Eca_user_emails object).
And what is strangest for me: this code worked absolutely fine when there was no main() function, just all the code after the class declarations. But after I wrote the main() declaration and put all the code in there, I started to get this error.
I am completely new to Python so maybe this is one of stupid mistakes...
Any ideas how to fix it and what is the reason? Thanks for reading this :)
By the way: Python 3.4, sqlalchemy 1.0, SQL Server 2012
sql_user_id is None because by default SQLAlchemy clears out the foreign key when you remove a child object from a relationship; that is, when you clear exstUser.user_emails, SQLAlchemy sets sql_user_id to None for all those instances. If you want SQLAlchemy to issue DELETEs for Eca_user_emails instances when they are detached from an Eca_users instance, you need to add the delete-orphan cascade option to the user_emails relationship. If you want SQLAlchemy to issue DELETEs for Eca_user_emails instances when an Eca_users instance is deleted, you need to add the delete cascade option to the user_emails relationship:
user_emails = relationship("Eca_user_emails", backref=backref('eca_users'), cascade="save-update, merge, delete, delete-orphan")
You can find more information about cascades in the SQLAlchemy docs.
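With those cascade options in place, a brief sketch of the resulting behavior (the first_id value is made up for illustration):

user = session.query(Eca_users).filter_by(first_id='1').one()
user.user_emails = []  # delete-orphan: the detached emails are DELETEd on flush
session.delete(user)   # delete: any remaining emails are DELETEd with the user
session.commit()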

Get last inserted value from MySQL using SQLAlchemy

I've just run across a fairly vexing problem, and after testing I have found that NONE of the available answers are sufficient.
I have seen various suggestions but none seem to be able to return the last inserted value for an auto_increment field in MySQL.
I have seen examples that mention the use of session.flush() to add the record and then retrieve the id. However that always seems to return 0.
I have also seen examples that mention the use of session.refresh() but that raises the following error: InvalidRequestError: Could not refresh instance ''
What I'm trying to do seems insanely simple but I can't seem to figure out the secret.
I'm using the declarative approach.
So, my code looks something like this:
class Foo(Base):
    __tablename__ = 'tblfoo'
    __table_args__ = {'mysql_engine': 'InnoDB'}

    ModelID = Column(INTEGER(unsigned=True), default=0, primary_key=True, autoincrement=True)
    ModelName = Column(Unicode(255), nullable=True, index=True)
    ModelMemo = Column(Unicode(255), nullable=True)

f = Foo(ModelName='Bar', ModelMemo='Foo')
session.add(f)
session.flush()
At this point, the object f has been pushed to the DB, and has been automatically assigned a unique primary key id. However, I can't seem to find a way to obtain the value to use in some additional operations. I would like to do the following:
my_new_id = f.ModelID
I know I could simply execute another query to lookup the ModelID based on other parameters but I would prefer not to if at all possible.
I would much appreciate any insight into a solution to this problem.
Thanks for the help in advance.
The problem is that you are setting a default for the auto-increment column. So when the INSERT query runs, the server log shows:
2011-12-21 13:44:26,561 INFO sqlalchemy.engine.base.Engine.0x...1150 INSERT INTO tblfoo (`ModelID`, `ModelName`, `ModelMemo`) VALUES (%s, %s, %s)
2011-12-21 13:44:26,561 INFO sqlalchemy.engine.base.Engine.0x...1150 (0, 'Bar', 'Foo')
ID : 0
So the output is 0, which is the default value, and it is passed because you set a default for the autoincrement column.
If I run the same code without the default, it gives the correct output.
Please try this code:
from sqlalchemy import create_engine
engine = create_engine('mysql://test:test@localhost/test1', echo=True)

from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()

from sqlalchemy.orm import sessionmaker
Session = sessionmaker(bind=engine)
session = Session()

from sqlalchemy import Column, Integer, Unicode

class Foo(Base):
    __tablename__ = 'tblfoo'
    __table_args__ = {'mysql_engine': 'InnoDB'}

    ModelID = Column(Integer, primary_key=True, autoincrement=True)
    ModelName = Column(Unicode(255), nullable=True, index=True)
    ModelMemo = Column(Unicode(255), nullable=True)

Base.metadata.create_all(engine)

f = Foo(ModelName='Bar', ModelMemo='Foo')
session.add(f)
session.flush()
print "ID :", f.ModelID
Try using session.commit() instead of session.flush(). You can then use f.ModelID.
Not sure why the flagged answer worked for you. But in my case, that does not actually insert the row into the table. I need to call commit() in the end.
So the last few lines of code are:
f = Foo(ModelName='Bar', ModelMemo='Foo')
session.add(f)
session.flush()
print "ID:", f.ModelID
session.commit()
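For what it's worth, the Core-level insert construct (shown here in the same pre-2.0 style API as the code above) also exposes the generated key directly, without flushing an ORM instance. A sketch against the Foo table defined earlier:

result = engine.execute(
    Foo.__table__.insert().values(ModelName='Bar', ModelMemo='Foo'))
new_id = result.inserted_primary_key[0]  # the generated ModelID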

Problem initializing an object

I get a weird error when I try to create a Screen object. It worked without any problem before, but I got this error after I added a new attribute to the User class. This attribute relates User to Screen in a many-to-many relation through user_screens. This is the error:
"InvalidRequestError: One or more mappers failed to compile. Exception was probably suppressed within a hasattr() call. Message was: One or more mappers failed to compile. Exception was probably suppressed within a hasattr() call. Message was: Class 'zeppelinlib.screen.ScreenTest.Screen' is not mapped"
These are the classes:
class Screen(rdb.Model):
    """Set up screens table in the database"""
    rdb.metadata(metadata)
    rdb.tablename("screens")

    id = Column("id", Integer, primary_key=True)
    title = Column("title", String(100))
    ip = Column("ip", String(20))
    ...

user_screens = Table(
    "user_screens",
    metadata,
    Column("user_id", Integer, ForeignKey("users.id")),
    Column("screen_id", Integer, ForeignKey("screens.id"))
)

class User(rdb.Model):
    """Set up users table in the database"""
    rdb.metadata(metadata)
    rdb.tablename("users")

    id = Column("id", Integer, primary_key=True)
    name = Column("name", String(50))
    ...

    group = relationship("UserGroup", uselist=False)
    channels = relationship("Channel", secondary=user_channels, order_by="Channel.titleView", backref="users")
    mediaGroups = relationship("MediaGroup", secondary=user_media_groups, order_by="MediaGroup.title", backref="users")
    screens = relationship("Screen", secondary=user_screens, backref="users")
I might have added the new relation to User incorrectly, because I really don't know what the problem is...
Thanks in advance!
Try specifying the primary join via the primaryjoin keyword argument on one (I don't know which one) of your relationships. Sometimes (in complex relationship graphs) SQLAlchemy has a hard time figuring out how to join. This worked for me more than once, albeit on 0.5.x.
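A hedged sketch of what that could look like for the screens relationship above; the join conditions are assumptions based on the user_screens table shown, and the exact syntax may vary with the rdb wrapper in use:

screens = relationship(
    "Screen",
    secondary=user_screens,
    primaryjoin="User.id == user_screens.c.user_id",
    secondaryjoin="Screen.id == user_screens.c.screen_id",
    backref="users")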
