sqlalchemy update value in table doesn't update value in associated table - python

I want to update a column in a table that has a one-to-many relation, but I can't figure out a way to make the update also affect the related table. Here is a sample of the code. I tried changing it with back_populates, with or without onupdate or cascade, but nothing seems to have the effect I want:
class Employe(Base):
    __tablename__ = 'employes'
    id = Column(Integer, primary_key=True)
    service_name = Column(String, ForeignKey('services.name', onupdate="CASCADE"))

class Service(Base):
    __tablename__ = 'services'
    name = Column(String, primary_key=True)
    employers = relationship('Employe', backref=backref('service', cascade="all"))

a = session.query(Service).filter_by(name='ECTC').update({'name': 'SECTC'})
session.commit()
That does change services.name but leaves the service_name column untouched, so when I query session.query(Employe).filter_by(service_name='SECTC') I get no result.
Is there a way to make the update affect both tables?
Otherwise I can select, delete and recreate the employes every time I need to update services, but that doesn't seem optimal to me.
Thank you for your help!
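For what it's worth, a hedged sketch (not from the original post): ON UPDATE CASCADE is enforced by the database itself, so the foreign key constraint has to exist and be enforced there; on SQLite, foreign key enforcement is off by default. Alternatively, SQLAlchemy's ORM can cascade the change itself via passive_updates=False, provided the parent is changed through the object rather than query(...).update(). Assuming the models above:

from sqlalchemy import event
from sqlalchemy.engine import Engine

# 1) Make SQLite actually enforce FKs so ON UPDATE CASCADE can fire
#    (not needed on PostgreSQL/MySQL if the constraint is in place).
@event.listens_for(Engine, "connect")
def _enable_sqlite_fks(dbapi_connection, connection_record):
    cursor = dbapi_connection.cursor()
    cursor.execute("PRAGMA foreign_keys=ON")
    cursor.close()

# 2) Or let the ORM do the cascading: configure the relationship with
#    passive_updates=False, i.e.
#        employers = relationship('Employe', backref=backref('service'),
#                                 passive_updates=False)
#    and change the name through the object instead of query(...).update():
service = session.query(Service).filter_by(name='ECTC').one()
service.name = 'SECTC'   # on flush, employes.service_name is updated as well
session.commit()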

Related

SqlAlchemy doubly linked tables [duplicate]

I'm trying to model the following situation: A program has many versions, and one of the versions is the current one (not necessarily the latest).
This is how I'm doing it now:
class Program(Base):
    __tablename__ = 'programs'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    current_version_id = Column(Integer, ForeignKey('program_versions.id'))
    current_version = relationship('ProgramVersion', foreign_keys=[current_version_id])
    versions = relationship('ProgramVersion', order_by='ProgramVersion.id', back_populates='program')

class ProgramVersion(Base):
    __tablename__ = 'program_versions'
    id = Column(Integer, primary_key=True)
    program_id = Column(Integer, ForeignKey('programs.id'))
    timestamp = Column(DateTime, default=datetime.datetime.utcnow)
    program = relationship('Program', foreign_keys=[program_id], back_populates='versions')
But then I get the error: Could not determine join condition between parent/child tables on relationship Program.versions - there are multiple foreign key paths linking the tables. Specify the 'foreign_keys' argument, providing a list of those columns which should be counted as containing a foreign key reference to the parent table.
But what foreign key should I provide for the 'Program.versions' relationship? Is there a better way to model this situation?
A circular dependency like that is a perfectly valid solution to this problem.
To fix your foreign keys problem, you need to provide the foreign_keys argument explicitly:
class Program(Base):
    ...
    current_version = relationship('ProgramVersion', foreign_keys=current_version_id, ...)
    versions = relationship('ProgramVersion', foreign_keys="ProgramVersion.program_id", ...)

class ProgramVersion(Base):
    ...
    program = relationship('Program', foreign_keys=program_id, ...)
You'll find that when you do a create_all(), SQLAlchemy has trouble creating the tables because each table has a foreign key that depends on a column in the other. SQLAlchemy provides a way to break this circular dependency by using an ALTER statement for one of the tables:
class Program(Base):
    ...
    current_version_id = Column(
        Integer,
        ForeignKey('program_versions.id', use_alter=True, name="fk_program_current_version_id")
    )
    ...
Finally, you'll find that when you add a complete object graph to the session, SQLAlchemy has trouble issuing INSERT statements because each row has a value that depends on the yet-unknown primary key of the other. SQLAlchemy provides a way to break this circular dependency by issuing an UPDATE for one of the columns:
class Program(Base):
    ...
    current_version = relationship('ProgramVersion', foreign_keys=current_version_id, post_update=True, ...)
    ...
This design is not ideal; by having two tables refer to one another, you cannot effectively insert into either table, because the foreign key required in the other will not exist. One possible solution is outlined in the selected answer of this question related to Microsoft SQL Server, but I will summarize/elaborate on it here.
A better way to model this might be to introduce a third table, VersionHistory, and eliminate your foreign key constraints on the other two tables.
class VersionHistory(Base):
    __tablename__ = 'version_history'
    program_id = Column(Integer, ForeignKey('programs.id'), primary_key=True)
    version_id = Column(Integer, ForeignKey('program_versions.id'), primary_key=True)
    current = Column(Boolean, default=False)
    # I'm not too familiar with SQLAlchemy, but I suspect that relationship
    # information goes here somewhere
This eliminates the circular relationship you have created in your current implementation. You could then query this table by program, and receive all existing versions for the program, etc. Because of the composite primary key in this table, you could access any specific program/version combination. The addition of the current field to this table takes the burden of tracking currency off of the other two tables, although maintaining a single current version per program could require some trigger gymnastics.
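A hedged sketch of the relationship wiring that the comment in the snippet leaves open, using the association-object pattern (the attribute names and backrefs are illustrative, not from the original answer), and assuming the Program and ProgramVersion models from the question:

class VersionHistory(Base):
    __tablename__ = 'version_history'
    program_id = Column(Integer, ForeignKey('programs.id'), primary_key=True)
    version_id = Column(Integer, ForeignKey('program_versions.id'), primary_key=True)
    current = Column(Boolean, default=False)

    # association-object style relationships back to both sides
    program = relationship('Program', backref=backref('version_links', cascade='all, delete-orphan'))
    version = relationship('ProgramVersion', backref='version_links')

# Example query: the current version of a given program.
current_version = (
    session.query(ProgramVersion)
    .join(VersionHistory, VersionHistory.version_id == ProgramVersion.id)
    .filter(VersionHistory.program_id == program.id,
            VersionHistory.current.is_(True))
    .one_or_none()
)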
HTH!

postgres/sqlalchemy - how to add unique constraint to existing column of table

I've looked at several other answers but have yet to find one that shows the create statement and the model class together; most just show how to define a column as unique when the table is created.
I have a working line of code that writes rows to Postgres, and the associated model class is below:
self.cur.execute("INSERT INTO autos(make,year,model,license) VALUES (%s,%s,%s,%s)",(item['make'],item['year'],item['model'],item['license']))
class AutoItem(db.Model):
    __tablename__ = 'autos'
    id = db.Column(db.Integer, primary_key=True)
    make = db.Column(db.String(255))
    year = db.Column(db.String(255))
    model = db.Column(db.String(255))
    license = db.Column(db.String(255))
In order to prevent duplicate rows, I added an ON CONFLICT clause to that line of code, like so:
self.cur.execute("INSERT INTO autos(make,year,model,license) VALUES (%s,%s,%s,%s) ON CONFLICT (license) DO NOTHING;",(item['make'],item['year'],item['model'],item['license']))
Adding the ON CONFLICT clause generates the following error:
psycopg2.errors.InvalidColumnReference: there is no unique or exclusion constraint matching the ON CONFLICT specification
So, to update the model class such that the license column is unique, what I have now is:
class AutoItem(db.Model):
    __tablename__ = 'autos'
    id = db.Column(db.Integer, primary_key=True)
    make = db.Column(db.String(255))
    year = db.Column(db.String(255))
    model = db.Column(db.String(255))
    license = db.Column(db.String(255), unique=True)  # THIS WORKS
and I added an import at the top of the models file (models.py):
from sqlalchemy import UniqueConstraint #THIS WORKS
But the error psycopg2.errors.InvalidColumnReference: there is no unique or exclusion constraint matching the ON CONFLICT specification still remains. Previously I dropped the table and created a new one, but that produced the same error too. I don't know whether it's psycopg2, my model class, and/or the ON CONFLICT clause that's the problem.
EDIT: screenshot added (not reproduced here).
EDIT 2 - ISSUE IS RESOLVED
The working line of code and model class are above; see the update to the model class as well as the import at the top. I guess I made some mistake(s) when creating the table. What worked for me was to completely drop the autos table and create it again...
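For reference, a hedged alternative that avoids dropping the table: unique=True on the model only affects tables created afterwards, so an existing table can be given the constraint directly with ALTER TABLE (the constraint name below is illustrative, and self.conn is assumed to be the psycopg2 connection behind self.cur):

# Add the missing unique constraint to the existing autos table so that
# ON CONFLICT (license) has a constraint to match.
self.cur.execute("ALTER TABLE autos ADD CONSTRAINT autos_license_key UNIQUE (license);")
self.conn.commit()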

Regenerate SQLAlchemy Sequence on update

Imagine I have a class in my models like this:
class Sample(Base):
    __tablename__ = 'sample'
    id = Column(Integer, primary_key=True)
    firstname = Column(String(50))
    lastname = Column(String(50))
    auto_generated_code = Column(
        Integer,
        Sequence('sample_auto_generated_code_sequence'),
        unique=True
    )
When I add an instance of the Sample class and flush the session, my instance gets an integer code automatically. So far so good.
What I also want is that, when I update any of the other columns of that instance, it gets a new auto_generated_code automatically.
In simple words, I want my Sequence to generate another code on update too. How can I achieve this?
I found an answer: we can get the Sequence object in SQLAlchemy like this:
from sqlalchemy import Sequence
seq = Sequence('sample_auto_generated_code_sequence')
Then we can get the next sequence value by executing it on our connection or session:
instance.auto_generated_code = session.execute(seq) # or conn.execute(seq)
Then, after adding the instance to the session, it goes through just fine.
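To make this happen automatically on every update, a hedged sketch using SQLAlchemy's before_update mapper event (the listener itself is not part of the original answer; it assumes the Sample model above and a backend with real sequences, e.g. PostgreSQL):

from sqlalchemy import Sequence, event, select

seq = Sequence('sample_auto_generated_code_sequence')

@event.listens_for(Sample, 'before_update')
def _regenerate_code(mapper, connection, target):
    # Assign the next sequence value so the pending UPDATE also rewrites
    # auto_generated_code whenever any column of the instance changes.
    target.auto_generated_code = connection.scalar(select(seq.next_value()))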

Flask SQLAlchemy query join

I have 2 tables like this:
class Role(db.Model):
    __tablename__ = 'roles'
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(64), unique=True)
    index = db.Column(db.String(64))
    users = db.relationship('User', backref='role', lazy='dynamic')

class User(UserMixin, db.Model):
    __tablename__ = 'users'
    id = db.Column(db.Integer, primary_key=True)
    email = db.Column(db.String(64), unique=True, index=True)
    role_id = db.Column(db.Integer, db.ForeignKey('roles.id'))
Then I try making 2 kinds of query to get data from the related models.
First, I write it like this:
user = db.session.query(User, Role.index).filter_by(email=form.email.data).first()
and for the second one I use a join on it:
user = db.session.query(User, Role.index).join(Role).filter(User.email==form.email.data).first()
My questions are: what's the difference between those queries, given that in the second one I use a join but the result is still the same? And for speed or performance, should I use the first one or the second one?
The difference is that the first query will add both users and roles to the FROM list, which results in a CROSS JOIN. In other words, every row from users is joined with every row from roles. The second query performs an INNER JOIN, and SQLAlchemy deduces the ON clause based on the foreign key relationship between the tables.
You should use the first one when you want a cartesian product, and the second one when you want the role related to the user by the foreign key relationship. That the result happens to be the same for you is just a coincidence.
For future reference, try enabling echo so that you can check from your logs what queries are actually emitted. Also have a look at defining ORM relationships, which would allow you to have a role attribute on User for accessing its related Role.
If your entities are from different classes/tables and you call join(), SQLAlchemy can work out the join condition and add it to the actual SQL. You may pass a custom join condition if the one SQLAlchemy derives (from the foreign key or relationship) isn't the one you want.
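As a hedged illustration of the relationship-based approach mentioned above (the role attribute already exists thanks to backref='role' on Role.users, and SQLALCHEMY_ECHO is the Flask-SQLAlchemy setting for logging emitted SQL; the snippet itself is not from the original answers):

app.config['SQLALCHEMY_ECHO'] = True   # log the SQL actually emitted

# Query only the User and reach the Role through the backref instead of
# selecting (User, Role.index) tuples.
user = User.query.filter_by(email=form.email.data).first()
if user is not None:
    role_index = user.role.index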

flask sqlalchemy UniqueConstraint with foreignkey attribute

I have an app I am building with Flask that contains models for Projects and Plates, where Plate has Project as a foreign key.
Each project has a year, given as an integer (so 17 for 2017); and each plate has a number and a name, constructed from the plate.project.year and plate.number. For example, Plate 106 from a project done this year would have the name '17-0106'. I would like this name to be unique.
Here are my models:
class Project(Model):
    __tablename__ = 'projects'
    id = Column(Integer, primary_key=True)
    name = Column(String(64), unique=True)
    year = Column(Integer, default=datetime.now().year - 2000)

class Plate(Model):
    __tablename__ = 'plates'
    id = Column(Integer, primary_key=True)
    number = Column(Integer)
    project_id = Column(Integer, ForeignKey('projects.id'))
    project = relationship('Project', backref=backref('plates', cascade='all, delete-orphan'))

    @property
    def name(self):
        return str(self.project.year) + '-' + str(self.number).zfill(4)
My first idea was to make the number unique amongst the plates that have the same project.year attribute, so I have tried variations on
__table_args__ = (UniqueConstraint('project.year', 'number', name='_year_number_uc'),), but this needs to access the other table.
Is there a way to do this in the database? Or, failing that, an __init__ method that checks for uniqueness of either the number/project.year combination, or the name property?
There are multiple solutions to your problem. For example, you can de-normalize the project.year/number combination and store it as a separate Plate field. Then you can put a unique key on it. The question is how you're going to maintain that value. The two obvious options are triggers (assuming your DB supports triggers and you're OK with using them) or SQLAlchemy events; see http://docs.sqlalchemy.org/en/latest/orm/events.html#
Neither solution emits an extra SELECT query, which I believe is important for you.
Your question is somewhat similar to Can SQLAlchemy events be used to update a denormalized data cache?
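A hedged sketch of the de-normalized-column option using an attribute event (the extra year column on Plate, the constraint name and the listener are illustrative, not from the original answer; the sketch repeats the Plate model with the added pieces):

from sqlalchemy import event, UniqueConstraint

class Plate(Model):
    __tablename__ = 'plates'
    __table_args__ = (UniqueConstraint('year', 'number', name='_year_number_uc'),)
    id = Column(Integer, primary_key=True)
    number = Column(Integer)
    year = Column(Integer)   # denormalized copy of project.year, kept in sync below
    project_id = Column(Integer, ForeignKey('projects.id'))
    project = relationship('Project', backref=backref('plates', cascade='all, delete-orphan'))

@event.listens_for(Plate.project, 'set')
def _copy_project_year(target, value, oldvalue, initiator):
    # Whenever a plate is assigned to a project, copy the project's year onto
    # the plate so the (year, number) unique constraint can be enforced in the DB.
    target.year = value.year if value is not None else None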
