I've looked at several other answers but have yet to find one that shows the CREATE statement and the model class together; most just show how to define a column as unique when the table is created.
I have a working line of code that writes rows to Postgres, and the associated model class, below:
self.cur.execute("INSERT INTO autos(make,year,model,license) VALUES (%s,%s,%s,%s)",(item['make'],item['year'],item['model'],item['license']))
class AutoItem(db.Model):
    __tablename__ = 'autos'
    id = db.Column(db.Integer, primary_key=True)
    make = db.Column(db.String(255))
    year = db.Column(db.String(255))
    model = db.Column(db.String(255))
    license = db.Column(db.String(255))
In order to prevent duplicate rows, I added an ON CONFLICT clause to that line, like so:
self.cur.execute("INSERT INTO autos(make,year,model,license) VALUES (%s,%s,%s,%s) ON CONFLICT (license) DO NOTHING;",(item['make'],item['year'],item['model'],item['license']))
Adding the ON CONFLICT clause generates the following error:
psycopg2.errors.InvalidColumnReference: there is no unique or exclusion constraint matching the ON CONFLICT specification
So, to update the model class such that the license column is unique, what I have now is:
class AutoItem(db.Model):
    __tablename__ = 'autos'
    id = db.Column(db.Integer, primary_key=True)
    make = db.Column(db.String(255))
    year = db.Column(db.String(255))
    model = db.Column(db.String(255))
    license = db.Column(db.String(255), unique=True) #THIS WORKS
and I added an import to the top of the model class (file name is models.py)
from sqlalchemy import UniqueConstraint #THIS WORKS
but the error psycopg2.errors.InvalidColumnReference: there is no unique or exclusion constraint matching the ON CONFLICT specification still remains. Previously I dropped the table and created a new one, but that generates the same error too. I don't know whether it's psycopg2, my model class, and/or the ON CONFLICT clause that's the problem.
EDIT 2 - ISSUE IS RESOLVED
The working line of code and model class are above; see the update to the model class as well as the import at the top. I guess I made some mistake(s) when creating the table. What worked for me was to completely drop the autos table and create it again.
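For reference, if dropping the table is not an option, the missing constraint can usually be added to the existing table directly so that ON CONFLICT (license) has a unique constraint to match. A minimal sketch using the same psycopg2 cursor; the constraint name autos_license_key and the self.conn attribute are assumptions, and the ALTER will fail if duplicate license values already exist:

# Sketch: add a unique constraint on license to the existing autos table
self.cur.execute("ALTER TABLE autos ADD CONSTRAINT autos_license_key UNIQUE (license);")
self.conn.commit()  # assumes self.conn is the open psycopg2 connection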
Related
I'm trying to model the following situation: A program has many versions, and one of the versions is the current one (not necessarily the latest).
This is how I'm doing it now:
class Program(Base):
    __tablename__ = 'programs'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    current_version_id = Column(Integer, ForeignKey('program_versions.id'))
    current_version = relationship('ProgramVersion', foreign_keys=[current_version_id])
    versions = relationship('ProgramVersion', order_by='ProgramVersion.id', back_populates='program')

class ProgramVersion(Base):
    __tablename__ = 'program_versions'
    id = Column(Integer, primary_key=True)
    program_id = Column(Integer, ForeignKey('programs.id'))
    timestamp = Column(DateTime, default=datetime.datetime.utcnow)
    program = relationship('Program', foreign_keys=[program_id], back_populates='versions')
But then I get the error: Could not determine join condition between parent/child tables on relationship Program.versions - there are multiple foreign key paths linking the tables. Specify the 'foreign_keys' argument, providing a list of those columns which should be counted as containing a foreign key reference to the parent table.
But what foreign key should I provide for the 'Program.versions' relationship? Is there a better way to model this situation?
A circular dependency like that is a perfectly valid solution to this problem.
To fix your foreign keys problem, you need to explicitly provide the foreign_keys argument.
class Program(Base):
    ...
    current_version = relationship('ProgramVersion', foreign_keys=current_version_id, ...)
    versions = relationship('ProgramVersion', foreign_keys="ProgramVersion.program_id", ...)

class ProgramVersion(Base):
    ...
    program = relationship('Program', foreign_keys=program_id, ...)
You'll find that when you do a create_all(), SQLAlchemy has trouble creating the tables because each table has a foreign key that depends on a column in the other. SQLAlchemy provides a way to break this circular dependency by using an ALTER statement for one of the tables:
class Program(Base):
    ...
    current_version_id = Column(Integer, ForeignKey('program_versions.id', use_alter=True, name="fk_program_current_version_id"))
    ...
Finally, you'll find that when you add a complete object graph to the session, SQLAlchemy has trouble issuing INSERT statements because each row has a value that depends on the yet-unknown primary key of the other. SQLAlchemy provides a way to break this circular dependency by issuing an UPDATE for one of the columns:
class Program(Base):
    ...
    current_version = relationship('ProgramVersion', foreign_keys=current_version_id, post_update=True, ...)
    ...
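Putting the three pieces together, here is a minimal self-contained sketch of the resulting models (assuming SQLAlchemy 1.4+ for the declarative_base import location; adjust to your own Base):

import datetime

from sqlalchemy import Column, DateTime, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Program(Base):
    __tablename__ = 'programs'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    # use_alter plus an explicit name lets SQLAlchemy break the CREATE/DROP cycle
    current_version_id = Column(Integer, ForeignKey('program_versions.id', use_alter=True, name='fk_program_current_version_id'))
    # post_update lets SQLAlchemy break the INSERT cycle with a follow-up UPDATE
    current_version = relationship('ProgramVersion', foreign_keys=[current_version_id], post_update=True)
    versions = relationship('ProgramVersion', foreign_keys='ProgramVersion.program_id', order_by='ProgramVersion.id', back_populates='program')

class ProgramVersion(Base):
    __tablename__ = 'program_versions'
    id = Column(Integer, primary_key=True)
    program_id = Column(Integer, ForeignKey('programs.id'))
    timestamp = Column(DateTime, default=datetime.datetime.utcnow)
    program = relationship('Program', foreign_keys=[program_id], back_populates='versions')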
This design is not ideal; by having two tables refer to one another, you cannot effectively insert into either table, because the foreign key required by the other will not exist. One possible solution is outlined in the selected answer of this question related to Microsoft SQL Server, but I will summarize/elaborate on it here.
A better way to model this might be to introduce a third table, VersionHistory, and eliminate your foreign key constraints on the other two tables.
class VersionHistory(Base):
    __tablename__ = 'version_history'
    program_id = Column(Integer, ForeignKey('programs.id'), primary_key=True)
    version_id = Column(Integer, ForeignKey('program_versions.id'), primary_key=True)
    current = Column(Boolean, default=False)
    # I'm not too familiar with SQLAlchemy, but I suspect that relationship
    # information goes here somewhere
This eliminates the circular relationship you have created in your current implementation. You could then query this table by program and receive all existing versions for that program, etc. Because of the composite primary key in this table, you can access any specific program/version combination. The addition of the current field takes the burden of tracking the current version off of the other two tables, although maintaining a single current version per program could require some trigger gymnastics.
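If it helps, a hedged sketch of what that relationship information might look like on the association object (the attribute and backref names are illustrative, not from the question):

class VersionHistory(Base):
    __tablename__ = 'version_history'
    program_id = Column(Integer, ForeignKey('programs.id'), primary_key=True)
    version_id = Column(Integer, ForeignKey('program_versions.id'), primary_key=True)
    current = Column(Boolean, default=False)
    # Illustrative relationships to both sides of the association
    program = relationship('Program', backref='version_entries')
    version = relationship('ProgramVersion', backref='version_entries')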
HTH!
I've got two models: User and Group.
User can be in one group so:
class User(db.Model):
    # other fields
    group_id = db.Column(db.Integer(), db.ForeignKey('group.id'))
but on the other hand I would also like to have some info about the user who created that specific group:
class Group(db.Model):
    # other fields
    users = db.relationship("User", backref='group')
    created_by = db.Column(db.Integer(), db.ForeignKey('user.id'))
Result is:
sqlalchemy.exc.CircularDependencyError: Can't sort tables for DROP; an unresolvable foreign key dependency exists between tables: group, user. Please ensure that the ForeignKey and ForeignKeyConstraint objects involved in the cycle have names so that they can be dropped using DROP CONSTRAINT.
I tried use_alter=True, but it gives me:
sqlalchemy.exc.CompileError: Can't emit DROP CONSTRAINT for constraint ForeignKeyConstraint(
Interestingly, I'd expect you to get an AmbiguousForeignKeyError, but instead you seem to get a CircularDependencyError. According to the docs this is caused by two scenarios:
In a Session flush operation, if two objects are mutually dependent on each other, they can not be inserted or deleted via INSERT or DELETE statements alone; an UPDATE will be needed to post-associate or pre-deassociate one of the foreign key constrained values. The post_update flag described at Rows that point to themselves / Mutually Dependent Rows can resolve this cycle.

In a MetaData.sorted_tables operation, two ForeignKey or ForeignKeyConstraint objects mutually refer to each other. Apply the use_alter=True flag to one or both, see Creating/Dropping Foreign Key Constraints via ALTER.
I'm not sure what you're executing that's causing this particular error, but most likely you'll be able to solve it by solving the ambiguous reference.
The ambiguous reference is due to SQLAlchemy not being able to figure out how to perform the join when there are multiple references (users and created_by in this case). This can be resolved by specifying how the relationship should join, which can be done either by giving the specific foreign key it should use or by explicitly determining the join condition.
You can see these being applied to your example here:
class User(Base):
    # Other setup / fields
    group_id = Column(Integer, ForeignKey('group.id'))

class Group(Base):
    # Other setup / fields
    created_by_id = Column(Integer, ForeignKey('user.id'), nullable=False)
    created_by = relationship("User", foreign_keys=[created_by_id])
    users = relationship("User", backref="group", primaryjoin=id==User.group_id)
Documentation regarding relationship joins: http://docs.sqlalchemy.org/en/latest/orm/join_conditions.html#configuring-how-relationship-joins
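As for the original CircularDependencyError on DROP, the message itself points at the fix: give the constraints in the cycle explicit names so they can be dropped via DROP CONSTRAINT, combined with use_alter. A minimal sketch (the constraint name is illustrative):

class Group(Base):
    # Other setup / fields
    created_by_id = Column(Integer, ForeignKey('user.id', use_alter=True, name='fk_group_created_by_id'), nullable=False)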
I want to update a column in a table with a one-to-many relation, but I can't figure out a way to make the update also affect the related table. Here is a sample of the code. I tried changing it with back_populates, with or without onupdate or cascade, but it doesn't seem to have any effect on what I want to do:
class Employe(Base):
    __tablename__ = 'employes'
    id = Column(Integer, primary_key=True)
    service_name = Column(String, ForeignKey('services.name', onupdate="CASCADE"))

class Service(Base):
    __tablename__ = 'services'
    name = Column(String, primary_key=True)
    employers = relationship('Employe', backref=backref('service', cascade="all"))
a = session.query(Service).filter_by(name='ECTC').update({'name' : 'SECTC'})
session.commit()
That does change services.name but leaves the column service_name untouched, and so when I query session.query(Employe).filter_by(service_name='SECTC') I get no result.
Is there a way to make the update affect both tables?
Otherwise I can just select, delete, and recreate the employes every time I need to update services, but that doesn't seem optimal to me.
Thank you for your help!
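For what it's worth, a database-level ON UPDATE CASCADE only fires if the foreign key actually exists in the database with that option (a table created before onupdate was added won't have it), and if the backend happens to be SQLite (an assumption, the question doesn't say), foreign key enforcement is off by default and must be enabled per connection. A minimal sketch of the usual SQLAlchemy recipe for the SQLite case:

from sqlalchemy import event
from sqlalchemy.engine import Engine

@event.listens_for(Engine, "connect")
def _enable_sqlite_foreign_keys(dbapi_connection, connection_record):
    # Without this pragma, SQLite ignores FOREIGN KEY clauses entirely,
    # including ON UPDATE CASCADE.
    cursor = dbapi_connection.cursor()
    cursor.execute("PRAGMA foreign_keys=ON")
    cursor.close()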
I have an app I am building with Flask that contains models for Projects and Plates, where Plates have Project as a foreign key.
Each project has a year, given as an integer (so 17 for 2017); and each plate has a number and a name, constructed from the plate.project.year and plate.number. For example, Plate 106 from a project done this year would have the name '17-0106'. I would like this name to be unique.
Here are my models:
class Project(Model):
    __tablename__ = 'projects'
    id = Column(Integer, primary_key=True)
    name = Column(String(64), unique=True)
    year = Column(Integer, default=datetime.now().year-2000)

class Plate(Model):
    __tablename__ = 'plates'
    id = Column(Integer, primary_key=True)
    number = Column(Integer)
    project_id = Column(Integer, ForeignKey('projects.id'))
    project = relationship('Project', backref=backref('plates', cascade='all, delete-orphan'))

    @property
    def name(self):
        return str(self.project.year) + '-' + str(self.number).zfill(4)
My first idea was to make the number unique amongst the plates that have the same project.year attribute, so I have tried variations on
__table_args__ = (UniqueConstraint('project.year', 'number', name='_year_number_uc'),), but this needs to access the other table.
Is there a way to do this in the database? Or, failing that, an __init__ method that checks for uniqueness of either the number/project.year combination, or the name property?
There are multiple solutions to your problem. For example, you can de-normalize the project.year/number combination and store it as a separate Plate field. Then you can put a unique constraint on it. The question is how you're going to maintain that value. The two obvious options are triggers (assuming your DB supports triggers and you're OK with using them) or SQLAlchemy events, see http://docs.sqlalchemy.org/en/latest/orm/events.html#
Neither solution emits an extra SELECT query, which I believe is important for you.
Your question is somewhat similar to Can SQLAlchemy events be used to update a denormalized data cache?
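A minimal sketch of the event-based option, assuming the name property on Plate is replaced by a real, unique column (the listener name is illustrative, and the sketch assumes target.project is already populated):

from sqlalchemy import event

# Assumes Plate now declares:  name = Column(String(16), unique=True)

@event.listens_for(Plate, 'before_insert')
@event.listens_for(Plate, 'before_update')
def _compute_plate_name(mapper, connection, target):
    # Keep the denormalized, unique name in sync with project.year and number.
    if target.project is not None and target.number is not None:
        target.name = str(target.project.year) + '-' + str(target.number).zfill(4)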
Executing this command:
sqlacodegen <connection-url> --outfile db.py
The db.py contains generated tables:
t_table1 = Table(...)
and classes too:
class Table2(Base):
    __tablename__ = 'table2'
The problem is that each table is generated in only one of the two ways, either as a plain Table or as a class. I would like it to generate models (classes) only, but among the provided flags I couldn't find such an option. Any idea?
It looks like what you're describing is a feature itself: sqlacodegen will not always generate class models.
It will only form model classes for tables that have a primary key and are not association tables, as you can see in the source code:
# Only form model classes for tables that have a primary key and are not association tables
if noclasses or not table.primary_key or table.name in association_tables:
    model = self.table_model(table)
else:
    model = self.class_model(table, links[table.name], self.inflect_engine, not nojoined)

classes[model.name] = model
Furthermore, the documentation states that a table is considered an association table if it satisfies all of the following conditions:
has exactly two foreign key constraints
all its columns are involved in said constraints
However, you can try a quick and dirty hack. Locate those lines in the source code (something like /.../lib/python2.7/site-packages/sqlacodegen/codegen.py) and comment out the first three code lines (and fix the indentation):
# Only form model classes for tables that have a primary key and are not association tables
# if noclasses or not table.primary_key or table.name in association_tables:
#     model = self.table_model(table)
# else:
model = self.class_model(table, links[table.name], self.inflect_engine, not nojoined)

classes[model.name] = model
I have tried this for one specific table that was generated as a table model. It went from
t_Admin_op = Table(
    'Admin_op', metadata,
    Column('id_admin', Integer, nullable=False),
    Column('id_op', Integer, nullable=False)
)
to
class AdminOp(Base):
    __tablename__ = 'Admin_op'
    id_admin = Column(Integer, nullable=False)
    id_op = Column(Integer, nullable=False)
You can also open an issue about this as a feature request, in the official tracker.
Just in case, if you want the opposite (only table models), you could do so with the --noclasses flag.
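For completeness, that is just the original command with the flag added, something like:

sqlacodegen <connection-url> --noclasses --outfile db.py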