I have a scenario where I need to increment the session_number column per user_name. If a user has created a session before, I increment their last session_number; if a user is creating a session for the first time, session_number should start from 1. I have tried to illustrate this below. Right now I handle this with application logic, but I am trying to find a more elegant way to do it in SQLAlchemy.
id  user_name  session_number
1   user_1     1
2   user_1     2
3   user_2     1
4   user_1     3
5   user_2     2
Here is my Python code for the table. My database is PostgreSQL and I'm using Alembic to upgrade tables. Right now session_number keeps incrementing regardless of user_name.
class UserSessions(db.Model):
    __tablename__ = 'user_sessions'

    id = db.Column(db.Integer, primary_key=True, unique=True)
    username = db.Column(db.String, nullable=False)
    session_number = db.Column(db.Integer, Sequence('session_number_seq', start=0, increment=1))
    created_at = db.Column(db.DateTime)
    last_edit = db.Column(db.DateTime)

    __table_args__ = (
        db.UniqueConstraint('username', 'session_number', name='_username_session_number_idx_'),
    )
I've searched online for this situation, but what I found didn't match my problem. Is it possible to achieve this with SQLAlchemy/PostgreSQL features alone?
First, I do not know of any "pure" solution for this situation using either SQLAlchemy or PostgreSQL, or a combination of the two.
Although it might not be exactly the solution you are looking for, I hope it will give you some ideas.
If you wanted to calculate the session_number for the whole table without storing it, I would use the following query or a variation thereof:
def get_user_sessions_with_rank():
    expr = (
        db.func.rank()
        .over(partition_by=UserSessions.username, order_by=[UserSessions.id])
        .label("session_number")
    )
    subq = db.session.query(UserSessions.id, expr).subquery("subq")
    q = (
        db.session.query(UserSessions, subq.c.session_number)
        .join(subq, UserSessions.id == subq.c.id)
        .order_by(UserSessions.id)
    )
    return q.all()
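For reference, a small usage sketch: each result row unpacks to the UserSessions instance plus its computed session_number.

for user_session, session_number in get_user_sessions_with_rank():
    print(user_session.username, session_number)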
Alternatively, I would add a column_property to the model to compute it on the fly for each instance of UserSessions. It is not as efficient to calculate, but for queries filtering on a specific user it should be good enough:
class UserSessions(db.Model):
    __tablename__ = "user_sessions"

    id = db.Column(db.Integer, primary_key=True, unique=True)
    username = db.Column(db.String, nullable=False)
    created_at = db.Column(db.DateTime)
    last_edit = db.Column(db.DateTime)


# must be defined outside of the model body because an alias of the model is needed
US2 = db.aliased(UserSessions)
UserSessions.session_number = db.column_property(
    db.select(db.func.count(US2.id))
    .where(US2.username == UserSessions.username)
    .where(US2.id <= UserSessions.id)
    .scalar_subquery()
)
In this case, when you query for UserSessions, the session_number will be fetched from the database, while being None for newly created instances.
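For example, filtering on a specific user would look something like this (a small usage sketch; the counting subquery is emitted as part of the SELECT for every matching row):

sessions = (
    db.session.query(UserSessions)
    .filter(UserSessions.username == "user_1")
    .order_by(UserSessions.id)
    .all()
)
for s in sessions:
    print(s.id, s.session_number)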
In the case of many-to-many relationships, an association table can be used, following the Association Object pattern.
I have the following setup of two classes with an M2M relationship through the UserCouncil association table.
class Users(Base):
    name = Column(String, nullable=False)
    email = Column(String, nullable=False, unique=True)
    created_at = Column(DateTime, default=datetime.utcnow)
    password = Column(String, nullable=False)
    salt = Column(String, nullable=False)

    councils = relationship('UserCouncil', back_populates='user')


class Councils(Base):
    name = Column(String, nullable=False)
    created_at = Column(DateTime, default=datetime.utcnow)

    users = relationship('UserCouncil', back_populates='council')


class UserCouncil(Base):
    user_id = Column(UUIDType, ForeignKey(Users.id, ondelete='CASCADE'), primary_key=True)
    council_id = Column(UUIDType, ForeignKey(Councils.id, ondelete='CASCADE'), primary_key=True)
    role = Column(Integer, nullable=False)

    user = relationship('Users', back_populates='councils')
    council = relationship('Councils', back_populates='users')
However, in this situation, suppose I want to search for a council with a specific name cname for a given user user1. I can do the following:
for council in user1.councils:
    if council.name == cname:
        dosomething(council)
Or, alternatively, this:
session.query(UserCouncil) \
    .join(Councils) \
    .filter((UserCouncil.user_id == user1.id) & (Councils.name == cname)) \
    .first() \
    .council
While the second one is closer to a raw SQL query and performs better, the first one is simpler. Is there another, more idiomatic way of expressing this query that performs well while still using the relationship linkages instead of explicitly writing out traditional joins?
First, note that even the query you show as an example might need another trip to the database to fetch the UserCouncil.council relationship if it is not already loaded in memory.
I think that, given you want to search directly for the Councils instance by its .name and the given User instance, that is exactly what you should query for. Below is the query, with two options for filtering on user_id (you might be more familiar with the second option, so feel free to use it):
from sqlalchemy import select

q = (
    select(Councils)
    .filter(Councils.name == councils_name)
    .filter(Councils.users.any(UserCouncil.user_id == user_id))  # v1: no JOIN needed, but produces the same result as below
    # .join(UserCouncil).filter(UserCouncil.user_id == user_id)  # v2: explicit join, very similar to the original SQL
)
council = session.execute(q).scalars().first()
As for making it simpler and more idiomatic, I can only suggest wrapping it in a method or property on the Users model:
from sqlalchemy import select
from sqlalchemy.orm import object_session, with_parent


class Users(...):
    ...

    def get_council_by_name(self, councils_name):
        q = (
            select(Councils)
            .filter(Councils.name == councils_name)
            .join(UserCouncil).filter(with_parent(self, Users.councils))
        )
        return object_session(self).execute(q).scalars().first()
so that you can later call it as user.get_council_by_name('xxx').
Edit-1: added SQL queries
The v1 option of the first q query above will generate the following SQL:
SELECT councils.id,
       councils.name
FROM councils
WHERE councils.name = :name_1
  AND (EXISTS
         (SELECT 1
          FROM user_councils
          WHERE councils.id = user_councils.council_id
            AND user_councils.user_id = :user_id_1
         )
      )
while the v2 option will generate:
SELECT councils.id,
       councils.name
FROM councils
JOIN user_councils ON councils.id = user_councils.council_id
WHERE councils.name = :name_1
  AND user_councils.user_id = :user_id_1
I have a table to log user actions. I don't have different columns for different action types, yet for my newest task I have to find the average time difference between certain actions.
class UserAction(db.Model):
    __tablename__ = "user_action"

    id = Column(Integer, nullable=False, primary_key=True)
    user_id = Column(Integer, ForeignKey('user.id'))
    case_id = Column(Integer, ForeignKey('case.id'))
    category_id = Column(Integer)
    action_time = Column(DateTime(), server_default=func.now())
So I want to do something like this:
x = session.query(func.avg(UserAction.action_time)).filter(UserAction.category_id == 1).all()
y = session.query(func.avg(UserAction.action_time)).filter(UserAction.category_id == 2).all()
dif = x - y
or
session.query(func.avg(func.justify_hours(timestamp_column1) - func.justify_hours(timestamp_column1))).all()
I could create a new table that logs each case's actions in separate columns and do it the first way, but that wouldn't be efficient. And if I do it the second way, the output won't be correct because not all actions are performed for all cases. So I'm a little bit stuck. How can I achieve this?
Thanks
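A rough sketch of one way this could be approached, assuming a PostgreSQL-style backend (so subtracting timestamps yields an interval that avg() can aggregate) and assuming each case has at most one action per category: self-join UserAction on case_id so that each category-1 action is paired with the matching category-2 action of the same case, then average the difference.

from sqlalchemy import func
from sqlalchemy.orm import aliased

a1 = aliased(UserAction)  # actions of the first category
a2 = aliased(UserAction)  # actions of the second category

avg_diff = (
    session.query(func.avg(a2.action_time - a1.action_time))
    .filter(a1.case_id == a2.case_id)   # pair actions belonging to the same case
    .filter(a1.category_id == 1)
    .filter(a2.category_id == 2)
    .scalar()
)

If a case can log the same category several times, the pairing rule (first occurrence, latest occurrence, and so on) would need to be pinned down first, for example with a per-case aggregate in a subquery.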
So I am currently trying to build a new app using flask-restful for the backend, and I am still learning since all of this is quite new to me. I have already set everything up with several different MySQL tables, detailed next, and the relationships between them, and everything seems to be working fine, except that the insert process for new data is quite slow.
Here is a (simplified) explanation of the current setup I have. Basically, I first of all have one Flights table.
class FlightModel(db.Model):
    __tablename__ = "Flights"

    flightid = db.Column(db.Integer, primary_key=True, nullable=False)
    [Other properties]
    reviewid = db.Column(db.Integer, db.ForeignKey('Reviews.reviewid'), index=True, nullable=False)

    review = db.relationship("ReviewModel", back_populates="flight", lazy='joined')
This table then points to a Reviews table, in which I store the global review left by the user.
class ReviewModel(db.Model):
    __tablename__ = "Reviews"

    reviewid = db.Column(db.Integer, primary_key=True, nullable=False)
    [Other properties]
    depAirportReviewid = db.Column(db.Integer, db.ForeignKey('DepAirportReviews.reviewid'), index=True, nullable=False)
    arrAirportReviewid = db.Column(db.Integer, db.ForeignKey('ArrAirportReviews.reviewid'), index=True, nullable=False)
    airlineReviewid = db.Column(db.Integer, db.ForeignKey('AirlineReviews.reviewid'), index=True, nullable=False)

    flight = db.relationship("FlightModel", uselist=False, back_populates="review", lazy='joined')
    depAirportReview = db.relationship("DepAirportReviewModel", back_populates="review", lazy='joined')
    arrAirportReview = db.relationship("ArrAirportReviewModel", back_populates="review", lazy='joined')
    airlineReview = db.relationship("AirlineReviewModel", back_populates="review", lazy='joined')
Then, a more detailed review can be left regarding different aspects of the flight, stored in yet another table (for example, the following DepAirportReviews table; there are three tables in total at this level).
class DepAirportReviewModel(db.Model):
    __tablename__ = "DepAirportReviews"

    reviewid = db.Column(db.Integer, primary_key=True, nullable=False)
    [Other properties]

    review = db.relationship("ReviewModel", uselist=False, back_populates="depAirportReview", lazy='joined')
The insert process is slow (it typically takes 1 second per flight, which is a problem when I try to bulk insert a few hundred of them).
I understand this is because of all these relationships and the round trips to the database they imply in order to retrieve the different ids for the different tables. Is that correct? Is there anything I can do to solve this, or will I need to redesign the tables to remove some of these relationships?
Thanks for any help!
EDIT: logging the SQL that is actually executed showed what I expected: 7 simple queries in total are executed for each insertion, each taking ~300ms. That's quite long; I guess it is mostly due to the time needed to reach the server. Nothing to be done except removing some foreign keys, right?
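One thing that might be worth trying before dropping foreign keys (a sketch only; raw_flights and the placeholder comments stand in for whatever actually populates the "[Other properties]" columns): build the whole object graph in memory, add everything in one go, and commit once per batch. SQLAlchemy will still emit one INSERT per row, because it needs each generated primary key to fill in the foreign keys, but the per-flight commit round trip is paid only once per batch and everything stays in a single transaction.

flights = []
for item in raw_flights:                            # hypothetical list of input records
    review = ReviewModel(
        depAirportReview=DepAirportReviewModel(),   # detailed review fields would go here
        arrAirportReview=ArrAirportReviewModel(),
        airlineReview=AirlineReviewModel(),
    )
    flights.append(FlightModel(review=review))      # flight fields would go here

db.session.add_all(flights)  # related reviews are picked up through the relationships
db.session.commit()          # a single commit for the whole batch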
I am creating a website using Flask and SQLAlchemy. This website keeps track of classes that a student has taken. I would like to find a way to search my database using SQLAlchemy to find all unique classes that have been entered. Here is code from my models.py for Class:
class Class(db.Model):
    __tablename__ = 'classes'

    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(100))
    body = db.Column(db.Text)
    created = db.Column(db.DateTime, default=datetime.datetime.now)
    user_email = db.Column(db.String(100), db.ForeignKey(User.email))

    user = db.relationship(User)
In other words, I would like to get all unique values from the title column and pass that to my views.py.
Using the model query structure, you could do this:
Class.query.with_entities(Class.title).distinct()
Alternatively, querying the column directly through the session:
query = session.query(Class.title.distinct().label("title"))
titles = [row.title for row in query.all()]
or as a one-liner:
titles = [r.title for r in session.query(Class.title).distinct()]
As @van has pointed out, what you are looking for is:
session.query(your_table.column1.distinct()).all()  # SELECT DISTINCT(column1) FROM your_table
but I will add that in most cases you also want to filter the results further, in which case you can do
session.query(your_table.column1.distinct()).filter_by(column2='some_column2_value').all()
which translates to the SQL:
SELECT DISTINCT(column1) FROM your_table WHERE column2 = 'some_column2_value';
So I'm quite new to SQLAlchemy.
I have a model Showing which has about 10,000 rows in the table. Here is the class:
class Showing(Base):
    __tablename__ = "showings"

    id = Column(Integer, primary_key=True)
    time = Column(DateTime)
    link = Column(String)
    film_id = Column(Integer, ForeignKey('films.id'))
    cinema_id = Column(Integer, ForeignKey('cinemas.id'))

    def __eq__(self, other):
        if self.time == other.time and self.cinema == other.cinema and self.film == other.film:
            return True
        else:
            return False
Could anyone give me some guidance on the fastest way to insert a new showing if it doesn't already exist? I think it is slightly more complicated because a showing is only unique if the combination of time, cinema, and film is unique.
I currently have this code:
def AddShowings(self, showing_times, cinema, film):
    all_showings = self.session.query(Showing).options(joinedload(Showing.cinema), joinedload(Showing.film)).all()
    for showing_time in showing_times:
        tmp_showing = Showing(time=showing_time[0], film=film, cinema=cinema, link=showing_time[1])
        if tmp_showing not in all_showings:
            self.session.add(tmp_showing)
            self.session.commit()
            all_showings.append(tmp_showing)
which works, but seems to be very slow. Any help is much appreciated.
If such an object is unique based on a combination of columns, you need to mark those columns as a composite primary key. Add the primary_key=True keyword argument to each of these columns and drop your id column altogether:
class Showing(Base):
    __tablename__ = "showings"

    time = Column(DateTime, primary_key=True)
    link = Column(String)
    film_id = Column(Integer, ForeignKey('films.id'), primary_key=True)
    cinema_id = Column(Integer, ForeignKey('cinemas.id'), primary_key=True)
That way your database can handle these rows more efficiently (no need for an incrementing column), and SQLAlchemy now automatically knows if two instances of Showing are the same thing.
I believe you can then just merge your new Showing back into the session:
def AddShowings(self, showing_times, cinema, film):
    for showing_time in showing_times:
        self.session.merge(
            Showing(time=showing_time[0], link=showing_time[1],
                    film=film, cinema=cinema)
        )
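Note that merge() only stages the rows in the session (it issues a SELECT per object to check whether the row already exists); a commit or flush afterwards is what actually writes them out. A small usage sketch, with importer standing in for whatever object holds the session:

importer.AddShowings(showing_times, cinema, film)
importer.session.commit()  # merge() only stages rows; the commit persists them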