I'm trying to model the following situation: A program has many versions, and one of the versions is the current one (not necessarily the latest).
This is how I'm doing it now:
class Program(Base):
    __tablename__ = 'programs'

    id = Column(Integer, primary_key=True)
    name = Column(String)

    current_version_id = Column(Integer, ForeignKey('program_versions.id'))
    current_version = relationship('ProgramVersion', foreign_keys=[current_version_id])
    versions = relationship('ProgramVersion', order_by='ProgramVersion.id', back_populates='program')

class ProgramVersion(Base):
    __tablename__ = 'program_versions'

    id = Column(Integer, primary_key=True)
    program_id = Column(Integer, ForeignKey('programs.id'))
    timestamp = Column(DateTime, default=datetime.datetime.utcnow)

    program = relationship('Program', foreign_keys=[program_id], back_populates='versions')
But then I get the error: Could not determine join condition between parent/child tables on relationship Program.versions - there are multiple foreign key paths linking the tables. Specify the 'foreign_keys' argument, providing a list of those columns which should be counted as containing a foreign key reference to the parent table.
But what foreign key should I provide for the 'Program.versions' relationship? Is there a better way to model this situation?
A circular dependency like that is a perfectly valid solution to this problem.
To fix your foreign keys problem, you need to explicitly provide the foreign_keys argument.
class Program(Base):
    ...
    current_version = relationship('ProgramVersion', foreign_keys=current_version_id, ...)
    versions = relationship('ProgramVersion', foreign_keys="ProgramVersion.program_id", ...)

class ProgramVersion(Base):
    ...
    program = relationship('Program', foreign_keys=program_id, ...)
You'll find that when you do a create_all(), SQLAlchemy has trouble creating the tables because each table has a foreign key that depends on a column in the other. SQLAlchemy provides a way to break this circular dependency by using an ALTER statement for one of the tables:
class Program(Base):
    ...
    current_version_id = Column(Integer, ForeignKey('program_versions.id', use_alter=True, name="fk_program_current_version_id"))
    ...
Finally, you'll find that when you add a complete object graph to the session, SQLAlchemy has trouble issuing INSERT statements because each row has a value that depends on the yet-unknown primary key of the other. SQLAlchemy provides a way to break this circular dependency by issuing an UPDATE for one of the columns:
class Program(Base):
    ...
    current_version = relationship('ProgramVersion', foreign_keys=current_version_id, post_update=True, ...)
    ...
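For illustration only (a sketch using the names above, not from the original post), an object graph like this can then be committed in one go, because SQLAlchemy inserts the rows first and issues the UPDATE for current_version_id afterwards:

program = Program(name='example')
first = ProgramVersion(program=program)
second = ProgramVersion(program=program)
program.current_version = second  # written via a post-UPDATE once both rows exist
session.add(program)
session.commit()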
This design is not ideal; by having two tables refer to one another, you cannot effectively insert into either table, because the foreign key required in the other will not exist. One possible solution is outlined in the selected answer of this question related to Microsoft SQL Server, but I will summarize/elaborate on it here.
A better way to model this might be to introduce a third table, VersionHistory, and eliminate your foreign key constraints on the other two tables.
class VersionHistory(Base):
    __tablename__ = 'version_history'

    program_id = Column(Integer, ForeignKey('programs.id'), primary_key=True)
    version_id = Column(Integer, ForeignKey('program_versions.id'), primary_key=True)
    current = Column(Boolean, default=False)

    # I'm not too familiar with SQLAlchemy, but I suspect that relationship
    # information goes here somewhere
This eliminates the circular relationship you have created in your current implementation. You could then query this table by program, and receive all existing versions for the program, etc. Because of the composite primary key in this table, you could access any specific program/version combination. The addition of the current field to this table takes the burden of tracking currency off of the other two tables, although maintaining a single current version per program could require some trigger gymnastics.
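If your database supports partial (filtered) unique indexes, for example PostgreSQL, one way to avoid those trigger gymnastics is a partial unique index on program_id for rows where current is true. This is only a sketch under that assumption, not part of the original design:

from sqlalchemy import Index

# at most one VersionHistory row per program may be flagged as current (PostgreSQL partial index)
Index(
    'uq_version_history_current',
    VersionHistory.program_id,
    unique=True,
    postgresql_where=(VersionHistory.current == True),
)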
HTH!
I have an app I am building with Flask that contains models for Projects and Plates, where Plates have Project as a foreign key.
Each project has a year, given as an integer (so 17 for 2017); and each plate has a number and a name, constructed from the plate.project.year and plate.number. For example, Plate 106 from a project done this year would have the name '17-0106'. I would like this name to be unique.
Here are my models:
class Project(Model):
    __tablename__ = 'projects'
    id = Column(Integer, primary_key=True)
    name = Column(String(64), unique=True)
    year = Column(Integer, default=datetime.now().year - 2000)

class Plate(Model):
    __tablename__ = 'plates'
    id = Column(Integer, primary_key=True)
    number = Column(Integer)
    project_id = Column(Integer, ForeignKey('projects.id'))
    project = relationship('Project', backref=backref('plates', cascade='all, delete-orphan'))

    @property
    def name(self):
        return str(self.project.year) + '-' + str(self.number).zfill(4)
My first idea was to make the number unique amongst the plates that have the same project.year attribute, so I have tried variations on
__table_args__ = (UniqueConstraint('project.year', 'number', name='_year_number_uc'),), but this needs to access the other table.
Is there a way to do this in the database? Or, failing that, an __init__ method that checks for uniqueness of either the number/project.year combination, or the name property?
There are multiple solutions to your problem. For example, you can denormalize the project.year/number combination and store it as a separate Plate field, then put a unique constraint on it. The question is how you maintain that value. The two obvious options are triggers (assuming your DB supports them and you're OK with using them) or SQLAlchemy events, see http://docs.sqlalchemy.org/en/latest/orm/events.html#
Neither solution emits an extra SELECT query, which I believe is important for you.
Your question is somewhat similar to Can SQLAlchemy events be used to update a denormalized data cache?
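A minimal sketch of the event approach, assuming the Plate model from the question plus a new denormalized name column (the listener and column names here are my own additions, not from the question):

from sqlalchemy import event

class Plate(Model):
    __tablename__ = 'plates'
    id = Column(Integer, primary_key=True)
    number = Column(Integer)
    project_id = Column(Integer, ForeignKey('projects.id'))
    project = relationship('Project', backref='plates')
    # denormalized "<year>-<number>" value, kept unique at the database level
    name = Column(String(16), unique=True)

@event.listens_for(Plate, 'before_insert')
@event.listens_for(Plate, 'before_update')
def _fill_plate_name(mapper, connection, target):
    # recompute the denormalized name just before the row is written
    if target.project is not None and target.number is not None:
        target.name = '{}-{}'.format(target.project.year, str(target.number).zfill(4))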
Executing this command:
sqlacodegen <connection-url> --outfile db.py
The db.py contains generated tables:
t_table1 = Table(...)
and classes too:
class Table2(Base):
    __tablename__ = 'table2'
The problem is that each table is generated in only one way - either as a Table object or as a model class.
I would like to make it generate models (classes) only but in the provided flags I couldn't find such an option. Any idea?
It looks like what you're describing is intended behavior: sqlacodegen will not always generate class models.
It will only form model classes for tables that have a primary key and are not association tables, as you can see in the source code:
# Only form model classes for tables that have a primary key and are not association tables
if noclasses or not table.primary_key or table.name in association_tables:
    model = self.table_model(table)
else:
    model = self.class_model(table, links[table.name], self.inflect_engine, not nojoined)

classes[model.name] = model
Furthermore, in the documentation it is stated that
A table is considered an association table if it satisfies all of the
following conditions:
has exactly two foreign key constraints
all its columns are involved in said constraints
Although, you can try a quick and dirty hack. Locate those lines in the source code (something like /.../lib/python2.7/site-packages/sqlacodegen/codegen.py) and comment out the first three code lines (and fix indentation):
# Only form model classes for tables that have a primary key and are not association tables
# if noclasses or not table.primary_key or table.name in association_tables:
# model = self.table_model(table)
# else:
model = self.class_model(table, links[table.name], self.inflect_engine, not nojoined)
classes[model.name] = model
I have tried this for one specific table that was generated as a table model. It went from
t_Admin_op = Table(
    'Admin_op', metadata,
    Column('id_admin', Integer, nullable=False),
    Column('id_op', Integer, nullable=False)
)
to
class AdminOp(Base):
    __tablename__ = 'Admin_op'

    id_admin = Column(Integer, nullable=False)
    id_op = Column(Integer, nullable=False)
You can also open an issue about this as a feature request, in the official tracker.
Just in case, if you want the opposite (only table models), you could do so with the --noclasses flag.
I must be missing something trivial with SQLAlchemy's cascade options, because I cannot get a simple cascade delete to operate correctly -- if a parent element is deleted, the children persist, with null foreign keys.
I've put a concise test case here:
from sqlalchemy import Column, Integer, ForeignKey
from sqlalchemy.orm import relationship
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    parentid = Column(Integer, ForeignKey(Parent.id))
    parent = relationship(Parent, cascade="all,delete", backref="children")

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

Session = sessionmaker(bind=engine)
session = Session()

parent = Parent()
parent.children.append(Child())
parent.children.append(Child())
parent.children.append(Child())
session.add(parent)
session.commit()

print("Before delete, children = {0}".format(session.query(Child).count()))
print("Before delete, parent = {0}".format(session.query(Parent).count()))

session.delete(parent)
session.commit()

print("After delete, children = {0}".format(session.query(Child).count()))
print("After delete parent = {0}".format(session.query(Parent).count()))

session.close()
Output:
Before delete, children = 3
Before delete, parent = 1
After delete, children = 3
After delete parent = 0
There is a simple, one-to-many relationship between Parent and Child. The script creates a parent, adds 3 children, then commits. Next, it deletes the parent, but the children persist. Why? How do I make the children cascade delete?
The problem is that sqlalchemy considers Child as the parent, because that is where you defined your relationship (it doesn't care that you called it "Child" of course).
If you define the relationship on the Parent class instead, it will work:
children = relationship("Child", cascade="all,delete", backref="parent")
(note "Child" as a string: this is allowed when using the declarative style, so that you are able to refer to a class that is not yet defined)
You might want to add delete-orphan as well (delete causes children to be deleted when the parent gets deleted, delete-orphan also deletes any children that were "removed" from the parent, even if the parent is not deleted)
EDIT: just found out: if you really want to define the relationship on the Child class, you can do so, but you will have to define the cascade on the backref (by creating the backref explicitly), like this:
parent = relationship(Parent, backref=backref("children", cascade="all,delete"))
(implying from sqlalchemy.orm import backref)
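Putting that together with the test case from the question, a minimal corrected sketch (my own restatement, using delete-orphan as suggested) looks like this:

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)
    # cascade is configured on the parent side, so deleting a Parent also deletes its children
    children = relationship("Child", cascade="all, delete-orphan", backref="parent")

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    parentid = Column(Integer, ForeignKey("parent.id"))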
@Steven's answer is good when you are deleting through session.delete(), which never happens in my case. I noticed that most of the time I delete through session.query().filter().delete() (which doesn't load the objects into memory and deletes directly from the db).
Using this method, SQLAlchemy's cascade='all, delete' doesn't work. There is a solution though: ON DELETE CASCADE through the database (note: not all databases support it).
class Child(Base):
    __tablename__ = "children"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("parents.id", ondelete='CASCADE'))

class Parent(Base):
    __tablename__ = "parents"
    id = Column(Integer, primary_key=True)
    child = relationship(Child, backref="parent", passive_deletes=True)
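For illustration (my own sketch, not part of the original answer), a bulk delete then lets the database remove the children itself:

# the DELETE is issued directly against the db; ON DELETE CASCADE removes the matching children
# (some_parent_id is just a placeholder for the id you want to remove)
session.query(Parent).filter(Parent.id == some_parent_id).delete()
session.commit()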
Pretty old post, but I just spent an hour or two on this, so I wanted to share my finding, especially since some of the other comments listed aren't quite right.
TL;DR
Give the child table a foreign key, or modify the existing one, adding ondelete='CASCADE':
parent_id = db.Column(db.Integer, db.ForeignKey('parent.id', ondelete='CASCADE'))
And one of the following relationships:
a) This on the parent table:
children = db.relationship('Child', backref='parent', passive_deletes=True)
b) Or this on the child table:
parent = db.relationship('Parent', backref=backref('children', passive_deletes=True))
Details
First off, despite what the accepted answer says, the parent/child relationship is not established by using relationship; it's established by using ForeignKey. You can put the relationship on either the parent or the child table and it will work fine, although on the child table you apparently have to use the backref function in addition to the keyword argument.
Option 1 (preferred)
Second, SqlAlchemy supports two different kinds of cascading. The first, and the one I recommend, is built into your database and usually takes the form of a constraint on the foreign key declaration. In PostgreSQL it looks like this:
CONSTRAINT child_parent_id_fkey FOREIGN KEY (parent_id)
REFERENCES parent_table(id) MATCH SIMPLE
ON DELETE CASCADE
This means that when you delete a record from parent_table, then all the corresponding rows in child_table will be deleted for you by the database. It's fast and reliable and probably your best bet. You set this up in SqlAlchemy through ForeignKey like this (part of the child table definition):
parent_id = db.Column(db.Integer, db.ForeignKey('parent.id', ondelete='CASCADE'))
parent = db.relationship('Parent', backref=backref('children', passive_deletes=True))
The ondelete='CASCADE' is the part that creates the ON DELETE CASCADE on the table.
Gotcha!
There's an important caveat here. Notice how I have a relationship specified with passive_deletes=True? If you don't have that, the entire thing will not work. This is because by default when you delete a parent record SqlAlchemy does something really weird. It sets the foreign keys of all child rows to NULL. So if you delete a row from parent_table where id = 5, then it will basically execute
UPDATE child_table SET parent_id = NULL WHERE parent_id = 5
Why you would want this I have no idea. I'd be surprised if many database engines even allowed you to set a valid foreign key to NULL, creating an orphan. Seems like a bad idea, but maybe there's a use case. Anyway, if you let SqlAlchemy do this, you will prevent the database from being able to clean up the children using the ON DELETE CASCADE that you set up. This is because it relies on those foreign keys to know which child rows to delete. Once SqlAlchemy has set them all to NULL, the database can't delete them. Setting the passive_deletes=True prevents SqlAlchemy from NULLing out the foreign keys.
You can read more about passive deletes in the SqlAlchemy docs.
Option 2
The other way you can do it is to let SqlAlchemy do it for you. This is set up using the cascade argument of the relationship. If you have the relationship defined on the parent table, it looks like this:
children = relationship('Child', cascade='all,delete', backref='parent')
If the relationship is on the child, you do it like this:
parent = relationship('Parent', backref=backref('children', cascade='all,delete'))
Again, since this is on the child, you have to call the backref function and put the cascade setting in there.
With this in place, when you delete a parent row, SqlAlchemy will actually run delete statements for you to clean up the child rows. This will likely not be as efficient as letting the database handle it for you, so I don't recommend it.
Here are the SqlAlchemy docs on the cascading features it supports.
Alex Okrushko's answer almost worked for me: I used ondelete='CASCADE' and passive_deletes=True combined, but I had to do something extra to make it work for SQLite.
Base = declarative_base()

ROOM_TABLE = "roomdata"
FURNITURE_TABLE = "furnituredata"

class DBFurniture(Base):
    __tablename__ = FURNITURE_TABLE
    id = Column(Integer, primary_key=True)
    room_id = Column(Integer, ForeignKey('roomdata.id', ondelete='CASCADE'))

class DBRoom(Base):
    __tablename__ = ROOM_TABLE
    id = Column(Integer, primary_key=True)
    furniture = relationship("DBFurniture", backref="room", passive_deletes=True)
Make sure to add this code to ensure it works for sqlite.
from sqlalchemy import event
from sqlalchemy.engine import Engine
from sqlite3 import Connection as SQLite3Connection

@event.listens_for(Engine, "connect")
def _set_sqlite_pragma(dbapi_connection, connection_record):
    if isinstance(dbapi_connection, SQLite3Connection):
        cursor = dbapi_connection.cursor()
        cursor.execute("PRAGMA foreign_keys=ON;")
        cursor.close()
Stolen from here: SQLAlchemy expression language and SQLite's on delete cascade
Steven is correct in that you need to explicitly create the backref; this results in the cascade being applied on the parent (as opposed to it being applied to the child, as in the test scenario).
However, defining the relationship on the Child does NOT make sqlalchemy consider Child the parent. It doesn't matter where the relationship is defined (child or parent); it's the foreign key that links the two tables that determines which is the parent and which is the child.
It makes sense to stick to one convention though, and based on Steven's response, I'm defining all my child relationships on the parent.
Steven's answer is solid. I'd like to point out an additional implication.
By using relationship, you're making the app layer (Flask) responsible for referential integrity. That means other processes that access the database not through Flask, like a database utility or a person connecting to the database directly, will not experience those constraints and could change your data in a way that breaks the logical data model you worked so hard to design.
Whenever possible, use the ForeignKey approach described by d512 and Alex. The DB engine is very good at truly enforcing constraints (in an unavoidable way), so this is by far the best strategy for maintaining data integrity. The only time you need to rely on an app to handle data integrity is when the database can't handle them, e.g. versions of SQLite that don't support foreign keys.
If you need to create further linkage among entities to enable app behaviors like navigating parent-child object relationships, use backref in conjunction with ForeignKey.
I struggled with the documentation as well, but found that the docstrings themselves tend to be easier than the manual. For example, if you import relationship from sqlalchemy.orm and do help(relationship), it will give you all the options you can specify for cascade. The bullet for delete-orphan says:
if an item of the child's type with no parent is detected, mark it for deletion.
Note that this option prevents a pending item of the child's class from being
persisted without a parent present.
I realize your issue was more with the way the documentation describes defining parent-child relationships. But it seemed that you might also be having a problem with the cascade options, because "all" includes "delete"; "delete-orphan" is the only option that's not included in "all".
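For reference, a quick way to pull up that docstring from an interactive session:

from sqlalchemy.orm import relationship

# prints the full docstring, including the list of cascade options such as delete-orphan
help(relationship)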
Even though this question is very old, it comes up first in a Google search, so I'll post my solution to add to what others said (I spent a few hours on this even after reading all the answers here).
As d512 explained, it is all about foreign keys. It was quite a surprise to me, but not all databases/engines support foreign keys. I'm running a MySQL database, and after a long investigation I noticed that when I create a new table it defaults to an engine (MyISAM) that doesn't support foreign keys. All I had to do was set it to InnoDB by adding mysql_engine='InnoDB' when defining a Table. In my project I'm using imperative mapping and it looks like this:
db.Table('child',
    Column('id', Integer, primary_key=True),
    # other columns
    Column('parent_id',
           ForeignKey('parent.id', ondelete="CASCADE")),
    mysql_engine='InnoDB')
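If you use declarative mapping instead, the same engine option can, as far as I know, be passed through __table_args__ (this snippet is my own addition, not from the original answer):

class Child(Base):
    __tablename__ = 'child'
    # ask MySQL to create this table with the InnoDB engine so the foreign key is enforced
    __table_args__ = {'mysql_engine': 'InnoDB'}

    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parent.id', ondelete="CASCADE"))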
Steven's answer is correct, but if you are still getting the error, another thing to try on top of that would be:
http://vincentaudebert.github.io/python/sql/2015/10/09/cascade-delete-sqlalchemy/
Copied from the link:
Quick tip if you get in trouble with a foreign key dependency even if you have specified a cascade delete in your models.
Using SQLAlchemy, to specify a cascade delete you should have cascade='all, delete' on your parent table. Ok but then when you execute something like:
session.query(models.yourmodule.YourParentTable).filter(conditions).delete()
It actually triggers an error about a foreign key used in your children tables.
The solution I used is to query the object and then delete it:
session = models.DBSession()
your_db_object = session.query(models.yourmodule.YourParentTable).filter(conditions).first()
if your_db_object is not None:
    session.delete(your_db_object)
This should delete your parent record AND all the children associated with it.
TLDR: If the above solutions don't work, try adding nullable=False to your column.
I'd like to add a small point here for some people who may not get the cascade function to work with the existing solutions (which are great). The main difference between my work and the example was that I used automap. I do not know exactly how that might interfere with the setup of cascades, but I want to note that I used it. I am also working with a SQLite database.
I tried every solution described here, but rows in my child table continued to have their foreign key set to null when the parent row was deleted. However, the cascade worked once I set the child column with the foreign key to nullable=False.
On the child table, I added:
Column('parent_id', Integer(), ForeignKey('parent.id', ondelete="CASCADE"), nullable=False)
Child.parent = relationship("parent", backref=backref("children", passive_deletes=True))
With this setup, the cascade functioned as expected.