Flask SQLAlchemy query join - python

I have two tables like this:
class Role(db.Model):
    __tablename__ = 'roles'
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(64), unique=True)
    index = db.Column(db.String(64))
    users = db.relationship('User',
                            backref='role', lazy='dynamic')

class User(UserMixin, db.Model):
    __tablename__ = 'users'
    id = db.Column(db.Integer, primary_key=True)
    email = db.Column(db.String(64), unique=True, index=True)
    role_id = db.Column(db.Integer, db.ForeignKey('roles.id'))
Then I try two kinds of query to get data from the related models.
First, I write it like this:
user = db.session.query(User, Role.index).filter_by(email=form.email.data).first()
and in the second one I use a join:
user = db.session.query(User, Role.index).join(Role).filter(User.email==form.email.data).first()
My question is: what is the difference between these two queries? In the second one I use a join, but the result is still the same.
For performance, should I use the first one or the second one?

The difference is that the first query will add both users and roles to the FROM list, which results in a CROSS JOIN: every row from users is joined with every row from roles. The second query performs an INNER JOIN, and SQLAlchemy deduces the ON clause from the foreign key relationship between the tables.
You should use the first one when you want a Cartesian product, and the second one when you want the role related to the user by the foreign key relationship. That the result happens to be the same for you is just a coincidence.
For future reference, try enabling echo so that you can check from your logs what queries are actually emitted. Also have a look at defining ORM relationships, which would allow you to have a role attribute on User for accessing its related Role.
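For example, a minimal sketch of both suggestions, assuming a standard Flask-SQLAlchemy setup (the app object and form come from the question's context):

    # Log every SQL statement SQLAlchemy emits so the generated queries can be inspected.
    app.config['SQLALCHEMY_ECHO'] = True

    # The backref='role' declared on Role.users already gives each User a .role attribute,
    # so the related Role can be reached without writing the join by hand.
    user = User.query.filter_by(email=form.email.data).first()
    if user is not None:
        print(user.role.index)  # loads the related Role lazily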

If your entities come from different classes/tables, the join is implied and SQLAlchemy will add it to the actual SQL. You may add a custom join if the connection you want isn't the one SQLAlchemy derives (from the foreign key, for example).
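As a sketch of that, using the column names from the models above, the ON clause can be spelled out explicitly instead of being derived:

    user = (db.session.query(User, Role.index)
            .join(Role, User.role_id == Role.id)   # explicit ON clause
            .filter(User.email == form.email.data)
            .first())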

Related

SqlAlchemy doubly linked tables [duplicate]

I'm trying to model the following situation: A program has many versions, and one of the versions is the current one (not necessarily the latest).
This is how I'm doing it now:
class Program(Base):
    __tablename__ = 'programs'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    current_version_id = Column(Integer, ForeignKey('program_versions.id'))
    current_version = relationship('ProgramVersion', foreign_keys=[current_version_id])
    versions = relationship('ProgramVersion', order_by='ProgramVersion.id', back_populates='program')

class ProgramVersion(Base):
    __tablename__ = 'program_versions'
    id = Column(Integer, primary_key=True)
    program_id = Column(Integer, ForeignKey('programs.id'))
    timestamp = Column(DateTime, default=datetime.datetime.utcnow)
    program = relationship('Program', foreign_keys=[program_id], back_populates='versions')
But then I get the error: Could not determine join condition between parent/child tables on relationship Program.versions - there are multiple foreign key paths linking the tables. Specify the 'foreign_keys' argument, providing a list of those columns which should be counted as containing a foreign key reference to the parent table.
But what foreign key should I provide for the 'Program.versions' relationship? Is there a better way to model this situation?
A circular dependency like that is a perfectly valid solution to this problem.
To fix your foreign keys problem, you need to explicitly provide the foreign_keys argument.
class Program(Base):
    ...
    current_version = relationship('ProgramVersion', foreign_keys=current_version_id, ...)
    versions = relationship('ProgramVersion', foreign_keys="ProgramVersion.program_id", ...)

class ProgramVersion(Base):
    ...
    program = relationship('Program', foreign_keys=program_id, ...)
You'll find that when you do a create_all(), SQLAlchemy has trouble creating the tables because each table has a foreign key that depends on a column in the other. SQLAlchemy provides a way to break this circular dependency by using an ALTER statement for one of the tables:
class Program(Base):
    ...
    current_version_id = Column(Integer, ForeignKey('program_versions.id', use_alter=True, name="fk_program_current_version_id"))
    ...
Finally, you'll find that when you add a complete object graph to the session, SQLAlchemy has trouble issuing INSERT statements because each row has a value that depends on the yet-unknown primary key of the other. SQLAlchemy provides a way to break this circular dependency by issuing an UPDATE for one of the columns:
class Program(Base):
    ...
    current_version = relationship('ProgramVersion', foreign_keys=current_version_id, post_update=True, ...)
    ...
This design is not ideal; by having two tables refer to one another, you cannot effectively insert into either table, because the foreign key required in the other will not exist. One possible solution is outlined in the selected answer of
this question related to Microsoft SQL Server, but I will summarize/elaborate on it here.
A better way to model this might be to introduce a third table, VersionHistory, and eliminate your foreign key constraints on the other two tables.
class VersionHistory(Base):
    __tablename__ = 'version_history'
    program_id = Column(Integer, ForeignKey('programs.id'), primary_key=True)
    version_id = Column(Integer, ForeignKey('program_versions.id'), primary_key=True)
    current = Column(Boolean, default=False)
    # I'm not too familiar with SQLAlchemy, but I suspect that relationship
    # information goes here somewhere
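As a rough sketch of the relationship information hinted at in that comment (the attribute and backref names are illustrative assumptions, not part of the original answer):

    class VersionHistory(Base):
        __tablename__ = 'version_history'
        program_id = Column(Integer, ForeignKey('programs.id'), primary_key=True)
        version_id = Column(Integer, ForeignKey('program_versions.id'), primary_key=True)
        current = Column(Boolean, default=False)
        # Association-object relationships so entries can be navigated from either side.
        program = relationship('Program', backref='version_links')
        version = relationship('ProgramVersion', backref='version_links')

    # Usage: all versions recorded for a program, plus the one flagged as current.
    links = session.query(VersionHistory).filter_by(program_id=some_program.id).all()
    current_link = (session.query(VersionHistory)
                    .filter_by(program_id=some_program.id, current=True)
                    .one_or_none())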
This eliminates the circular relationship you have created in your current implementation. You could then query this table by program and receive all existing versions for that program, etc. Because of the composite primary key in this table, you can access any specific program/version combination. The addition of the current field takes the burden of tracking the current version off of the other two tables, although maintaining a single current version per program could require some trigger gymnastics.
HTH!

Correct way to implement many-to-many with additional columns in Flask-SQLAlchemy?

I need to implement a many-to-many relationship with additional columns in Flask-SQLAlchemy. I am currently using an association table to link the two models (following this guide: https://flask-sqlalchemy.palletsprojects.com/en/master/models/#many-to-many-relationships). My problem is that this relationship needs to have additional attached data. My two models and the table are:
log = db.Table('log',
    db.Column('workout_id', db.Integer, db.ForeignKey('workout.id')),
    db.Column('exercise_variant_id', db.Integer, db.ForeignKey('exercise_variant.id')),
    db.Column('quantity', db.Integer, nullable=False),
    db.Column('series', db.Integer, nullable=False)
)

class ExerciseVariant(db.Model):
    __tablename__ = 'exercise_variant'
    id = db.Column(db.Integer, primary_key=True)

class Workout(db.Model):
    __tablename__ = 'workout'
    id = db.Column(db.Integer, primary_key=True)
    exercises = db.relationship('ExerciseVariant', secondary=log, lazy='subquery',
                                backref=db.backref('workouts', lazy=True))
This approach is working OK, but the current method for adding records to the log table seems a bit hacky to me, since I have to first query both objects to get the ids I am looking for and then create a custom insert statement:
statement = log.insert().values(
    workout_id=workout.id,
    exercise_variant_id=exercise_variant.id,
    quantity=exercise_dict['quantity'],
    series=exercise_dict['series']
)
db.session.execute(statement)
My questions are:
1. Should this kind of relationship be implemented using Table or Model?
2. If the answer to 1. is Table, can I somehow use backrefs to pass object instances instead of querying and passing their ids?
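For what it's worth, a minimal sketch (the class and attribute names are illustrative, not from the original post) of modelling the association as a Model instead of a Table, which keeps the extra columns and lets object instances be assigned directly instead of looking up ids:

    class Log(db.Model):
        __tablename__ = 'log'
        workout_id = db.Column(db.Integer, db.ForeignKey('workout.id'), primary_key=True)
        exercise_variant_id = db.Column(db.Integer, db.ForeignKey('exercise_variant.id'), primary_key=True)
        quantity = db.Column(db.Integer, nullable=False)
        series = db.Column(db.Integer, nullable=False)
        workout = db.relationship('Workout', backref='logs')
        exercise_variant = db.relationship('ExerciseVariant', backref='logs')

    # Linking rows then needs no manual id lookups or raw insert statements:
    entry = Log(quantity=exercise_dict['quantity'], series=exercise_dict['series'])
    entry.workout = workout
    entry.exercise_variant = exercise_variant
    db.session.add(entry)
    db.session.commit()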

Sqlalchemy One - Many and One-One in one table

I've got two models: User and Group.
User can be in one group so:
class User(db.Model):
    # other fields
    group_id = db.Column(db.Integer(), db.ForeignKey('group.id'))
but on the other hand I would also like to have some info about the user who created that specific group:
class Group(db.Model):
    # other fields
    users = db.relationship("User", backref='group')
    created_by = db.Column(db.Integer(), db.ForeignKey('user.id'))
Result is:
sqlalchemy.exc.CircularDependencyError: Can't sort tables for DROP; an unresolvable foreign key dependency exists between tables: group, user. Please ensure that the ForeignKey and ForeignKeyConstraint objects involved in the cycle have names so that they can be dropped using DROP CONSTRAINT.
I tried use_alter=True, but it gives me:
sqlalchemy.exc.CompileError: Can't emit DROP CONSTRAINT for constraint ForeignKeyConstraint(
Interestingly, I'd expect you to get an AmbiguousForeignKeyError, but instead you seem to get a CircularDependencyError. According to the docs this is caused by two scenarios:
1. In a Session flush operation, if two objects are mutually dependent on each other, they can not be inserted or deleted via INSERT or DELETE statements alone; an UPDATE will be needed to post-associate or pre-deassociate one of the foreign key constrained values. The post_update flag described at Rows that point to themselves / Mutually Dependent Rows can resolve this cycle.
2. In a MetaData.sorted_tables operation, two ForeignKey or ForeignKeyConstraint objects mutually refer to each other. Apply the use_alter=True flag to one or both; see Creating/Dropping Foreign Key Constraints via ALTER.
I'm not sure what you're executing that's causing this particular error, but most likely you'll be able to solve it by solving the ambiguous reference.
The ambiguous reference is due to SQLAlchemy not being able to figure out how to perform the join when there are multiple references (users and created_by in this case). This can be resolved by specifying how the relationship should join, which can be done either by giving the specific foreign key it should use or by explicitly stating the join condition.
You can see these being applied to your example here:
class User(Base):
    # Other setup / fields
    group_id = Column(Integer, ForeignKey('group.id'))

class Group(Base):
    # Other setup / fields
    created_by_id = Column(Integer, ForeignKey('user.id'), nullable=False)
    created_by = relationship("User", foreign_keys=[created_by_id])
    users = relationship("User", backref="group", primaryjoin=id==User.group_id)
Documentation regarding relationship joins: http://docs.sqlalchemy.org/en/latest/orm/join_conditions.html#configuring-how-relationship-joins
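If the DROP-ordering error still appears, the error message itself points at the remaining step: give the foreign key in the cycle an explicit name and mark it use_alter so it can be dropped via ALTER. A sketch based on the question's models (the constraint name is illustrative):

    class Group(db.Model):
        # other fields
        users = db.relationship("User", backref='group', primaryjoin='Group.id == User.group_id')
        created_by = db.Column(db.Integer(),
                               db.ForeignKey('user.id', use_alter=True, name='fk_group_created_by'))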

flask sqlalchemy UniqueConstraint with foreignkey attribute

I have an app I am building with Flask that contains models for Projects and Plates, where Plate has Project as a foreign key.
Each project has a year, given as an integer (so 17 for 2017); and each plate has a number and a name, constructed from the plate.project.year and plate.number. For example, Plate 106 from a project done this year would have the name '17-0106'. I would like this name to be unique.
Here are my models:
class Project(Model):
    __tablename__ = 'projects'
    id = Column(Integer, primary_key=True)
    name = Column(String(64), unique=True)
    year = Column(Integer, default=datetime.now().year - 2000)

class Plate(Model):
    __tablename__ = 'plates'
    id = Column(Integer, primary_key=True)
    number = Column(Integer)
    project_id = Column(Integer, ForeignKey('projects.id'))
    project = relationship('Project', backref=backref('plates', cascade='all, delete-orphan'))

    @property
    def name(self):
        return str(self.project.year) + '-' + str(self.number).zfill(4)
My first idea was to make the number unique amongst the plates that have the same project.year attribute, so I have tried variations on
__table_args__ = (UniqueConstraint('project.year', 'number', name='_year_number_uc'),), but this needs to access the other table.
Is there a way to do this in the database? Or, failing that, an __init__ method that checks for uniqueness of either the number/project.year combination, or the name property?
There are multiple solutions to your problem. For example, you can de-normalize the project.year / number combination and store it as a separate Plate field. Then you can put a unique key on it. The question is how you're going to maintain that value. The two obvious options are triggers (assuming your DB supports triggers and you're OK with using them) or SQLAlchemy events, see http://docs.sqlalchemy.org/en/latest/orm/events.html#
Neither solution emits an extra SELECT query, which I believe is important for you.
Your question is somewhat similar to Can SQLAlchemy events be used to update a denormalized data cache?
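A rough sketch of the event-based variant (the stored name column and the listener are assumptions, not from the original answer): replace the name property with a real column, keep it in sync before INSERT/UPDATE, and put the unique constraint on that column.

    from sqlalchemy import event

    class Plate(Model):
        __tablename__ = 'plates'
        id = Column(Integer, primary_key=True)
        number = Column(Integer)
        project_id = Column(Integer, ForeignKey('projects.id'))
        project = relationship('Project', backref=backref('plates', cascade='all, delete-orphan'))
        name = Column(String(64), unique=True)  # denormalized "YY-NNNN" value

    @event.listens_for(Plate, 'before_insert')
    @event.listens_for(Plate, 'before_update')
    def _set_plate_name(mapper, connection, target):
        # Recompute the denormalized name from the related project's year.
        if target.project is not None and target.number is not None:
            target.name = str(target.project.year) + '-' + str(target.number).zfill(4)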

How to order relationship objects on query execution

The following code is for Flask-SQLAlchemy, but would be quite similar in SQLAlchemy.
I have two simple classes:
class Thread(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    subject = db.Column(db.String)
    messages = db.relationship('Message', backref='thread', lazy='dynamic')

class Message(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    thread_id = db.Column(db.Integer, db.ForeignKey('thread.id'))
    created = db.Column(db.DateTime, default=datetime.utcnow)
    text = db.Column(db.String, nullable=False)
I would like to query all Threads and have them ordered by last message created. This is simple:
threads = Thread.query.join(Message).order_by(Message.created.desc()).all()
threads is now a correctly ordered list I can iterate over. However, if I iterate over
threads[0].messages, the Message objects are not ordered by Message.created descending.
I can solve this issue when declaring the relationship:
messages = db.relationship('Message', backref='thread', lazy='dynamic',
                           order_by='Message.created.desc()')
However, this is something I'd rather not do; I want to set this explicitly when declaring my query.
I could also call:
threads[0].messages.reverse()
...but this is quite inconvenient in a Jinja template.
Is there a good solution for setting order_by for joined model?
You have Thread.messages marked as lazy='dynamic'. This means that after querying for threads, messages is a query object, not a list yet. So iterate over threads[0].messages.order_by(Message.created.desc()).
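A short usage sketch of that suggestion (the .all() call to materialize the ordered list is an addition beyond the original answer):

    threads = Thread.query.join(Message).order_by(Message.created.desc()).all()

    # With lazy='dynamic', thread.messages is itself a query object, so the
    # ordering can be applied per thread at iteration time:
    latest_first = threads[0].messages.order_by(Message.created.desc()).all()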
