Can I "touch" a SQLAlchemy record to trigger "onupdate"? - python

Here's a SQLAlchemy class:
class MyGroup(Base):
    __tablename__ = 'my_group'
    group_id = Column(Integer, Sequence('my_seq'), primary_key=True)
    group_name = Column(String(200), nullable=False, index=True)
    date_created = Column(DateTime, default=func.now())
    date_updated = Column(DateTime, default=func.now(), onupdate=func.now())
Anytime I add a group_name or (for example) update the group_name, the date_updated field will get updated. That's great.
But sometimes there are cases where I want to mark a group as "updated" even if the group record itself did not change (for example, if data in a different but related table is updated).
I could do it manually:
group = session.query(MyGroup).filter(MyGroup.group_name=='Some Group').one()
group.date_updated = datetime.datetime.utcnow()
session.commit()
but I'd really rather let the model do it in its own way, rather than re-creating that logic in Python to update the date manually (for example, to avoid mistakes where the model uses now() while the manual code mistakenly uses utcnow()).
Is there a way with SQLAlchemy to "touch" a record (kind of like UNIX touch) that wouldn't alter any of the record's other values but would trigger the onupdate= function?

Just to add to this answer, the documentation states the following:
The Column.default and Column.onupdate keyword arguments also accept Python functions. These functions are invoked at the time of insert or update if no other value for that column is supplied, and the value returned is used for the column’s value.
Key part being: are invoked at the time of insert or update if no other value for that column is supplied.
Key part of the key part: if no other value for that column is supplied
So a simple update statement with empty values does the trick:
from sqlalchemy import update

stmt = update(ModelName).where(ModelName.column.in_(column_values)).values()
db.engine.execute(stmt)
I am using sqlalchemy.sql.expression.update for this; see its documentation.
Here's the Model column definition I have:
from datetime import datetime
last_updated = Column(db.DateTime, onupdate=datetime.utcnow)
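Note that Engine.execute() was removed in SQLAlchemy 2.0; on 1.4+ the same empty-values update can be issued through the session instead. A minimal sketch, assuming the MyGroup model and a configured session from the question:
from sqlalchemy import update

# No column values are supplied, so only the onupdate defaults land in the SET clause.
stmt = update(MyGroup).where(MyGroup.group_name == 'Some Group').values()
session.execute(stmt)
session.commit()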

To show a complete example, building on Chayemor's answer I did the following:
import sqlalchemy.sql.functions as func
from sqlalchemy import Column, DateTime, Integer, update
from sqlalchemy.ext.declarative import declarative_base

from . import database as db

Base = declarative_base()

class Object(Base):
    __tablename__ = "objects"

    id = Column(Integer, primary_key=True, nullable=False)
    last_update = Column(
        DateTime,
        server_default=func.now(),
        onupdate=func.current_timestamp()
    )

    def touch(self):
        # UPDATE with no values: only the onupdate default ends up in SET.
        stmt = update(Object).where(Object.id == self.id)
        db.engine.execute(stmt)
From here, running obj.touch() updates its last_update field in the database without changing any other data.

Another way to do this is to call orm.attributes.flag_modified on an instance and attribute. SQLAlchemy will mark the attribute as modified even though it is unchanged and generate an update.
import sqlalchemy as sa
from sqlalchemy import orm

with Session() as s, s.begin():  # Session: a configured sessionmaker
    mg = s.execute(sa.select(MyGroup)).scalar_one()
    orm.attributes.flag_modified(mg, 'group_name')
Note that the "dummy" update will be included in the generated SQL's SET clause
UPDATE tbl
SET group_name=%(group_name)s,
date_updated=now()
WHERE tbl.group_id = %(tbl_group_id)s
in contrast with that generated by Chayemor's answer:
UPDATE tbl
SET date_updated=now()
WHERE tbl.group_name = %(group_name_1)s
This may be significant (consider triggers for example).

I haven't looked at the source but from the docs it seems that this is only triggered by issuing a SQL UPDATE command:
onupdate – A scalar, Python callable, or ClauseElement representing a default value to be applied to the column within UPDATE statements, which will be invoked upon update if this column is not present in the SET clause of the update. This is a shortcut to using ColumnDefault as a positional argument with for_update=True.
If your concern is ensuring that your "touch" uses the same function as the onupdate function, you could define a method on your model to perform the touch and have the onupdate parameter point to this method.
I think something like this would work:
import datetime

# Defined at module level so the column definitions below can reference it.
def get_todays_date():
    return datetime.datetime.utcnow()

class MyGroup(Base):
    __tablename__ = 'my_group'
    group_id = Column(Integer, Sequence('my_seq'), primary_key=True)
    group_name = Column(String(200), nullable=False, index=True)
    date_created = Column(DateTime, default=func.now())
    date_updated = Column(
        DateTime,
        default=get_todays_date,
        onupdate=get_todays_date)

    def update_date_updated(self):
        self.date_updated = get_todays_date()
You could then update your record like this:
group.update_date_updated()
session.commit()

Related

How to construct a SQLAlchemy relationship that takes the record most recently inserted?

Imagine I've got the following:
class User:
    id = Column(Integer, primary_key=True)
    username = Column(String(20), nullable=False)
    password_hash = Column(String(HASH_LENGTH), nullable=False)

class LoginAttempts:
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey(User.id))
    attempted_at = Column(DateTime, default=datetime.datetime.utcnow)
Now, I want to add a relationship to User called last_attempt that retrieves the most recent login attempt. How might one do this?
This seems like a use case for a relationship to an aliased class, which was added in SQLAlchemy 1.3 – before that you'd use a non-primary mapper, or other methods such as a custom primary join. The idea is to create a subquery representing a derived table of latest login attempts per user that is then aliased to LoginAttempts and used as the target of a relationship. The exact query used to derive the latest attempts depends on your DBMS¹, but a generic left join "antijoin" will work in most. Start by generating the (sub)query for latest login attempts:
from sqlalchemy import and_, outerjoin, select
from sqlalchemy.orm import aliased, relationship

newer_attempts = aliased(LoginAttempts)

# This reads as "find login attempts for which no newer attempt with larger
# attempted_at exists". The same could be achieved using NOT EXISTS as well.
latest_login_attempts_query = select([LoginAttempts]).\
    select_from(
        outerjoin(LoginAttempts, newer_attempts,
                  and_(newer_attempts.user_id == LoginAttempts.user_id,
                       newer_attempts.attempted_at > LoginAttempts.attempted_at))).\
    where(newer_attempts.id == None).\
    alias()

latest_login_attempts = aliased(LoginAttempts, latest_login_attempts_query)
Then just add the relationship attribute to your User model:
User.last_attempt = relationship(latest_login_attempts, uselist=False,
                                 viewonly=True)
¹ For example in Postgresql you could replace the LEFT JOIN subquery with a LATERAL subquery, NOT EXISTS, a query using window functions, or SELECT DISTINCT ON (user_id) ... ORDER BY (user_id, attempted_at DESC).
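For reference, here is a hedged sketch of the DISTINCT ON option from that footnote, written against the same models; note that .distinct(<column>) only renders DISTINCT ON under the Postgresql dialect:
# Latest attempt per user via Postgresql's DISTINCT ON, as an alternative
# derived table for the relationship target.
latest_per_user = select([LoginAttempts]).\
    distinct(LoginAttempts.user_id).\
    order_by(LoginAttempts.user_id, LoginAttempts.attempted_at.desc()).\
    alias()
latest_login_attempts = aliased(LoginAttempts, latest_per_user)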
Although the selected answer is more robust, another way you could accomplish this is to use lazy='dynamic' and order_by:
User.last_attempted = relationship(LoginAttempts, order_by=desc(LoginAttempts.attempted_at), lazy='dynamic')
Be careful though, because this returns a query object (and will require .first() or equivalent), and you will need to use a limit clause:
last_attempted_login = session.query(User).get(my_user_id).last_attempted.limit(1).first()

Prevent deletion of parent row if its child will be orphaned in SQLAlchemy

I'm currently building a data model using sqlalchemy through flask-sqlalchemy
The database is on a Postgresql server
I am having trouble when deleting rows from a table that has relationships. In this case I have a number of treatment types, and one treatment; the treatment has a single treatment type assigned.
As long as one or more treatments are assigned to a particular treatment type, I want it to be impossible to delete that treatment type. As it is now, it is deleted when I try.
I have the following model:
class treatment(db.Model):
    __tablename__ = 'treatment'
    __table_args__ = (db.UniqueConstraint('title', 'tenant_uuid'),)
    id = db.Column(db.Integer, primary_key=True)
    uuid = db.Column(db.String(), nullable=False, unique=True)
    title = db.Column(db.String(), nullable=False)
    tenant_uuid = db.Column(db.String(), nullable=False)
    treatmentType_id = db.Column(db.Integer, db.ForeignKey('treatmentType.id'))
    riskResponse_id = db.Column(db.Integer, db.ForeignKey('riskResponse.id'))

class treatmentType(db.Model):
    __tablename__ = 'treatmentType'
    __table_args__ = (db.UniqueConstraint('title', 'tenant_uuid'),)
    id = db.Column(db.Integer, primary_key=True)
    uuid = db.Column(db.String(), nullable=False, unique=True)
    title = db.Column(db.String(), nullable=False)
    tenant_uuid = db.Column(db.String(), nullable=False)
    treatments = db.relationship('treatment', backref='treatmentType', lazy='dynamic')
I can build some logic in my "delete" view that checks for assigned treatments, before deleting the treatment type, but in my opinion this should be a standard feature of a relational database. So in other words I must be doing something wrong.
I delete the treatment type like so:
entry = treatmentType.query.filter_by(tenant_uuid=session['tenant_uuid']).first()
try:
    db.session.delete(entry)
    db.session.commit()
    return {'success': 'Treatment Type deleted'}
except Exception as E:
    return {'error': unicode(E)}
As I said it is possible for me to do a check before deleting the treatment Type, but I would rather have sqlalchemy throw an error if there are relationship issues prior to deletion.
When deleting the TreatmentType (parent), by default, SQLAlchemy will update the child by setting the Treatment.treatmentType_id = None. As you stated, you are left with a Treatment without a TreatmentType. The child record is now an "orphan".
There are 2 ways to prevent orphaned records from being created in SQLAlchemy.
1. Using a Non-NULL constraint on the child column
When deleting the TreatmentType (parent), by default, SQLAlchemy will set the Treatment.treatmentType_id (child) to None (or null in SQL) when you perform this operation, and as you stated, you are left with a Treatment without a TreatmentType.
A solution to this is to update the treatmentType_id column to be non-nullable, meaning that it MUST have a non-null value. We use the nullable=False keyword to do this:
treatmentType_id = db.Column(db.Integer, db.ForeignKey('treatmentType.id'), nullable=False)
Now, when the default cascade logic executes, SQLAlchemy tries to set Treatment.treatmentType_id = None, and an IntegrityError is raised due to the non-null constraint being violated.
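A minimal sketch of catching that in the delete view, assuming the models above and the same filter as in the question:
from sqlalchemy.exc import IntegrityError

entry = treatmentType.query.filter_by(tenant_uuid=session['tenant_uuid']).first()
try:
    db.session.delete(entry)
    db.session.commit()
except IntegrityError as E:
    # The flush tried to NULL treatment.treatmentType_id and hit the constraint.
    db.session.rollback()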
2. Using passive_deletes='all'
treatments = db.relationship('treatment', backref='treatmentType', passive_deletes='all')
When a TreatmentType gets deleted, the passive_deletes='all' keyword on the treatments relationship "will disable the “nulling out” of the child foreign keys". It basically disables the default behavior outlined in the first paragraph above. So, when the ORM tries to delete the TreatmentType without first setting the child's Treatment.treatmentType_id = None, the database will throw an IntegrityError complaining that the child's ForeignKey references a non-existent parent!
Note: the underlying database MUST support foreign keys to use this option.
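One caveat on that note: PostgreSQL (used in the question) enforces foreign keys out of the box, but if you run tests against SQLite you must enable enforcement yourself, for example with the usual connect-event recipe:
from sqlalchemy import event
from sqlalchemy.engine import Engine

@event.listens_for(Engine, "connect")
def set_sqlite_pragma(dbapi_connection, connection_record):
    # SQLite ignores foreign keys unless this pragma is issued on each connection.
    cursor = dbapi_connection.cursor()
    cursor.execute("PRAGMA foreign_keys=ON")
    cursor.close()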

Creating second model upon instantiation of the first within sqlalchemy

I am currently working with some legacy code that looks as follows:
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, Unicode
from sqlalchemy.dialects.postgresql import ARRAY, TEXT

Base = declarative_base()

class Book(Base):
    __tablename__ = 'book'
    id = Column(Integer, primary_key=True)
    title = Column(Unicode)
    keywords = Column('keywords', ARRAY(TEXT), primary_key=False)
The keywords are currently being kept as an array, but I'd like to flatten this out and have them be in their own separate model
class Keyword(Base):
    __tablename__ = 'keyword'
    id = Column(Integer, primary_key=True)
    book_id = Column(Integer, ForeignKey('book.id', ondelete='cascade'),
                     nullable=False)
    keyword = Column(Unicode)
How can I make it such that when a Book() is created, it also creates the
accompanying keywords? As an intermediate step for migrating the API, I'd like to keep the current array column, but also have the accompanying Keyword() instances be created.
I could do this within an __init__ method, but would need to know what the current Session() was, in order to run a commit. I could also perhaps use a property attribute, attached to keywords, but am not sure how that would work given that I am working with a class that inherits from SQLAlchemy's base, and not with a regular class that I have defined. What's the correct way to do this?
You can use object_session to find out the session of a given instance.
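For completeness, a minimal sketch of that (book here stands for any Book instance already attached to a session):
from sqlalchemy.orm import object_session

sess = object_session(book)  # the Session the instance belongs to, or None if detached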
But if you define relationship between a Book and Keywords, you should not need even bother:
class Book(Base):
    # ...
    rel_keywords = relationship('Keyword', backref='book')

    def init_keyword_relationship(self):
        for kw in self.keywords:
            self.rel_keywords.append(Keyword(keyword=kw))

sess = # ... get_session...
books = sess.query(Book).all()
for book in books:
    book.init_keyword_relationship()
sess.commit()
However, I would do a migration once and get rid of the keywords array in order not to add a logic to keep those in sync.

SqlAlchemy: Self referencing default value as query

Let's say I have the following structure (using Flask-SqlAlchemy):
class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String, nullable=False, index=True)
    # The following line throws an error at runtime.
    variant = db.Column(db.Integer, nullable=False, index=True,
                        default=select(func.count(User.id)).where(User.name == self.name))

    def __init__(self, name):
        super(User, self).__init__()
        self.name = name

    @property
    def clause(self):
        return '/'.join([str(self.variant), self.name])
Problem is, "User is not defined." I would like to model a system with Users who may choose the same name but add a field to differentiate between users in a systemic way without using (thereby exposing) the "id" field.
Anyone know how to make a self-referential query to use to populate a default value?
The issue of the default not referring to User here is solved by just assigning "default" to the Column once User is available. However, that's not going to solve the problem here because "self" means nothing either; there is no User method being called, so you can't just refer to "self". The challenge with this statement is that you want it to be rendered as an inline sub-SELECT but it still needs to know the in-memory value of ".name". So you have to assign that sub-SELECT per-object in some way.
The usual way people approach ORM-level INSERT defaults like this is usually by using a before_insert handler.
Another way that's worth pointing out is by creating a SQL level INSERT trigger. This is overall the most "traditional" approach, as here you need to have access to the row being inserted; triggers define a means of getting at the row values that are being inserted.
As far as using a default at the column level, you'd need to use a callable function as the default which can look at the current value of the row being inserted, but at the moment that means that your SELECT statement will not be rendered inline with the INSERT statement, you'd need to pre-execute the SELECT which is not really what we want here.
Anyway, the basic task of rendering a SQL expression into the INSERT while also having that SQL expression refer to some local per-object state is achieved by assigning that expression to the attribute; the ORM picks up on this at flush time. Below we do this in the constructor, but this can also occur inside of before_insert() as well:
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False, index=True)
    variant = Column(Integer, nullable=False, index=True)

    def __init__(self, name):
        self.name = name
        # Assign the sub-SELECT to the attribute; it is rendered inline
        # in the INSERT at flush time.
        self.variant = select([func.count(User.id)]).where(User.name == self.name)

e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

s = Session(e)
s.add(User(name='n1'))
s.commit()
s.add(User(name='n1'))
s.commit()

print(s.query(User.variant).all())
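For completeness, a hedged sketch of the before_insert() variant mentioned above, using the same User model; the event hook would replace the assignment done in __init__:
from sqlalchemy import event

@event.listens_for(User, 'before_insert')
def default_variant(mapper, connection, target):
    # Assign the sub-SELECT just before the flush builds the INSERT;
    # it is rendered inline, as in the constructor version above.
    target.variant = select([func.count(User.id)]).where(User.name == target.name)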

Trigger in sqlalchemy

I have two tables related via a foreign key; here they are using declarative mapping:
class Task(DeclarativeBase):
    __tablename__ = 'task'
    id = Column(Integer, primary_key=True)
    state = Column(Integer, default=0)
    obs_id = Column(Integer, ForeignKey('obs.id'), nullable=False)

class Obs(DeclarativeBase):
    __tablename__ = 'obs'
    id = Column(Integer, primary_key=True)
    state = Column(Integer, default=0)
So, I would like to update the related task.state when obs.state is changed to value 2. Currently I'm doing it by hand (using a relationship called task)
obs.state = 2
obs.task.state = 2
But I would prefer doing it using a trigger. I have checked that this works in SQLite:
CREATE TRIGGER update_task_state UPDATE OF state ON obs
BEGIN
UPDATE task SET state = 2 WHERE (obs_id = old.id) and (new.state = 2);
END;
But I can't find how to express this in SQLAlchemy. I have read the Insert/Update Defaults documentation several times, but can't find a way. I don't know if it's even possible.
You can create the trigger in the database with the DDL class:
from sqlalchemy import DDL, event

update_task_state = DDL('''\
CREATE TRIGGER update_task_state UPDATE OF state ON obs
BEGIN
    UPDATE task SET state = 2 WHERE (obs_id = old.id) and (new.state = 2);
END;''')

event.listen(Obs.__table__, 'after_create', update_task_state)
This is the most reliable way: it will work for bulk updates when the ORM is not used and even for updates outside your application. However, there are disadvantages too:
You have to take care that your trigger exists and is up to date;
It's not portable, so you have to rewrite it if you change databases;
SQLAlchemy won't change the new state of an already loaded object unless you expire it (e.g. with some event handler); see the sketch below.
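For that last point, a minimal sketch of forcing a reload after the trigger has fired (task being an already-loaded Task instance and session its Session):
session.expire(task, ['state'])  # the next access of task.state re-selects it from the database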
Below is a less reliable (it will work when changes are made at ORM level only), but much simpler solution:
from sqlalchemy.orm import validates

class Obs(DeclarativeBase):
    __tablename__ = 'obs'
    id = Column(Integer, primary_key=True)
    state = Column(Integer, default=0)

    @validates('state')
    def update_state(self, key, value):
        self.task.state = value
        return value
Both my examples work one way, i.e. they update task when obs changes, but don't touch obs when task is updated. You have to add one more trigger or event handler to support change propagation in both directions.
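For the trigger-based approach, the reverse direction is just the mirror image; a hedged sketch reusing the DDL/event pattern above:
update_obs_state = DDL('''\
CREATE TRIGGER update_obs_state UPDATE OF state ON task
BEGIN
    UPDATE obs SET state = 2 WHERE (id = old.obs_id) and (new.state = 2);
END;''')

event.listen(Task.__table__, 'after_create', update_obs_state)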
