How to make dynamic queries in the SQLAlchemy ORM (if that is the correct name for them)?
I have used SQLAlchemy as a database abstraction, with queries written in Python code, but what if I need to generate these queries dynamically, not just set query parameters such as "id"?
For example, I need to generate a query from a list (table names, column names, joined columns) that links three tables such as "organisation", "people", and "staff". How can I do this properly?
For example, I mean a list like this:
[{'table':'organisation', 'column':'staff_id'},
{'table':'staff', 'column':'id'}]
And output for example may contain:
organisation.id, organisation.name, organisation.staff_id, staff.id, staff.name
(the name column appears only in the output because I want a simple example that receives all columns of the tables; the list should only define the joins)
You can use mapper on the result of a call to sqlalchemy.sql.join and/or sqlalchemy.select. This is roughly equivalent to using mapper on a database view; you can query against such classes naturally, but not necessarily create new records. You can also use sqlalchemy.orm.column_property to map computed values to object attributes. As I read your question, a combination of these three techniques should meet your needs.
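For instance, here is a rough, untested sketch of mapping a class against a join of the two tables from your list, using the older classical mapper API; the class name OrganisationStaff and the computed display property are made up for illustration:
from sqlalchemy import MetaData, Table, Column, Integer, String, ForeignKey
from sqlalchemy.sql import join
from sqlalchemy.orm import mapper, column_property

metadata = MetaData()

organisation = Table(
    'organisation', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(50)),
    Column('staff_id', Integer, ForeignKey('staff.id')),
)
staff = Table(
    'staff', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(50)),
)

# Build the join from your list of {'table': ..., 'column': ...} entries.
org_staff_join = join(organisation, staff,
                      organisation.c.staff_id == staff.c.id)

class OrganisationStaff(object):
    pass

# Map the class against the join, roughly like mapping against a view.
# Explicit keys disambiguate the overlapping "id" and "name" columns.
mapper(OrganisationStaff, org_staff_join, properties={
    'org_id': organisation.c.id,
    'org_name': organisation.c.name,
    'org_staff_id': organisation.c.staff_id,
    'staff_pk': staff.c.id,
    'staff_name': staff.c.name,
    # column_property exposes a computed value as an object attribute
    'display': column_property(organisation.c.name + ' / ' + staff.c.name),
})
You can then query OrganisationStaff through a Session like any other mapped class.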
I haven't tested this, but with the SQLAlchemy ORM you can link tables together like this:
from sqlalchemy import create_engine, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, ForeignKey
from sqlalchemy.orm import relationship, sessionmaker
Engine = create_engine('mysql+mysqldb://user:password@localhost:3306/mydatabase', pool_recycle=3600)
Base = declarative_base(bind=Engine)
Session = sessionmaker(bind=Engine)
session = Session()
class DBOrganization(Base):
__tablename__ = 'table_organization'
id = Column(Integer(), primary_key=True)
name = Column(String(255))
class DBEmployee(Base):
__tablename__ = 'table_employee'
id = Column(Integer(), primary_key=True)
name = Column(String(255))
organization_id = Column(Integer(), ForeignKey('table_organization.id'))
# backref below will be an array[] unless you specify uselist=False
organization = relationship(DBOrganization, backref='employees')
Base.metadata.create_all()
# From here, you can query:
rs = session.query(DBEmployee).join(DBEmployee.organization).filter(DBOrganization.name=='my organization')
for employees in rs:
print('{0} works for {1}'.format(employees.name, employees.organization.name))
I am having trouble understanding the benefit of using declarative classes in SQLAlchemy.
As I understand it, the ORM is a way to apply the concept of database tables to the class system of OOP. However, I don't understand why the Table class doesn't already satisfy this requirement.
So to form my question via an example:
What is the benefit of using this:
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
from sqlalchemy import Column, Integer, String
class User(Base):
__tablename__ = 'users'
id = Column(Integer, primary_key=True)
name = Column(String(16))
fullname = Column(String(60))
nickname = Column(String(50))
Instead of this:
from sqlalchemy import *
metadata = MetaData()
user = Table('users', metadata,
Column('id', Integer, primary_key=True),
Column('name', String(16)),
Column('fullname', String(60)),
Column('nickname', String(50))
)
The latter one is already a class representation, isn't it? Why are we building another class over the already existing table class? What's the benefit?
I had the same question recently; you can refer to the SQLAlchemy docs:
Some examples in the documentation still use the classical approach, but note that the classical as well as Declarative approaches are fully interchangeable.
I think the benefits of using Declarative Mapping are that:
it is more convenient to use foreign keys;
when you want to create a table with some __table_args__/__mapper_args__, you can just write it all down in the class.
Both points are illustrated in the sketch below.
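A rough, untested sketch of both points; the Address class, its columns, and the unique constraint are made up for illustration, while the User class mirrors the one from the question above:
from sqlalchemy import Column, ForeignKey, Integer, String, UniqueConstraint
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String(16))

class Address(Base):
    __tablename__ = 'addresses'
    # table-level arguments live right inside the class
    __table_args__ = (
        UniqueConstraint('user_id', 'email', name='uq_user_email'),
        {'mysql_engine': 'InnoDB'},
    )
    id = Column(Integer, primary_key=True)
    email = Column(String(120))
    # the foreign key and the relationship sit next to each other
    user_id = Column(Integer, ForeignKey('users.id'))
    user = relationship('User', backref='addresses')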
I'm using sqlalchemy to model the following relationship:
There are stops, and many stops can have the same name.
There are translations, and multiple translations use the same stop name (but different languages). So that one stop name could be translated to many languages.
Since the stop_name is not unique among stops, SQLAlchemy + Postgres don't like it when I try to create a one-to-many relationship (see below). But this is not exactly one-to-many. What I want, when I access stop.translations, is to get all of the translations that match this query: SELECT * FROM translation WHERE translation.stop_name = stop.stop_name. So I accept the actual many-to-many relationship here, but want to hide it from my users, to make it look like one-to-many.
I thought of using hybrid attributes, but they seem to be scalar only, so that's not really an option. I probably did a bad job trying to prefill a many-to-many relationship, because that took forever and timed out.
Some context: this is part of pygtfs, but here is the minimal example of when this goes wrong. When I run the following script:
import sqlalchemy
import sqlalchemy.orm
from sqlalchemy import Column
from sqlalchemy.types import Unicode
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class Stop(Base):
__tablename__ = 'stop'
stop_id = Column(Unicode, primary_key=True)
stop_name = Column(Unicode)
# What I'd like:
translations = sqlalchemy.orm.relationship('Translation', viewonly=True,
primaryjoin="stop.c.stop_name==translation.c.stop_name")
class Translation(Base):
__tablename__ = 'translation'
stop_name = Column(Unicode, primary_key=True)
lang = Column(Unicode, primary_key=True)
translation = Column(Unicode)
if __name__ == "__main__":
engine = sqlalchemy.create_engine("postgresql://postgres@localhost:5432")
Session = sqlalchemy.orm.sessionmaker(bind=engine)
session = Session()
session.add(Stop(stop_id="hrld", stop_name="Herald Square"))
I get:
[...]
sqlalchemy.exc.ArgumentError: Could not locate any relevant foreign key columns for primary join condition 'stop.stop_name = translation.stop_name' on relationship Stop.translations. Ensure that referencing columns are associated with a ForeignKey or ForeignKeyConstraint, or are annotated in the join condition with the foreign() annotation.
What can I do to map this in SQLAlchemy?
[edit after a comment]:
If I add a ForeignKey, it fails, because stop_name is not unique (and I don't want it to be unique!):
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) there is no unique constraint matching given keys for referenced table "translation"
[SQL: '\nCREATE TABLE stop (\n\tstop_id VARCHAR NOT NULL, \n\tstop_name VARCHAR, \n\tPRIMARY KEY (stop_id), \n\tFOREIGN KEY(stop_name) REFERENCES translation (stop_name)\n)\n\n'] (Background on this error at: http://sqlalche.me/e/f405)
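The error message itself hints at one possible direction; below is an untested sketch (not from the original post) that annotates the join condition with foreign(), so no real ForeignKey or unique constraint is needed:
import sqlalchemy.orm
from sqlalchemy import Column
from sqlalchemy.types import Unicode
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Stop(Base):
    __tablename__ = 'stop'
    stop_id = Column(Unicode, primary_key=True)
    stop_name = Column(Unicode)
    # foreign() marks Translation.stop_name as the "foreign" side of the join;
    # viewonly=True because the relationship is not backed by a real FK.
    translations = sqlalchemy.orm.relationship(
        'Translation',
        viewonly=True,
        primaryjoin='Stop.stop_name == foreign(Translation.stop_name)',
    )

class Translation(Base):
    __tablename__ = 'translation'
    stop_name = Column(Unicode, primary_key=True)
    lang = Column(Unicode, primary_key=True)
    translation = Column(Unicode)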
I am using MySQL (running InnoDB), and wrapped the entire thing using SQLAlchemy. Now, I would like to generate changes in my database by using (see docs)
sqlalchemy_utils.functions.create_database(...)
Generally the above function does what it is supposed to. The only exception being the generation of unique indexes.
Say, I define a table like this:
## ...
# DeclBase = declarative_base()
## ...
class MyTable(DeclBase):
__tablename__ = 'my_table'
id = Column(Integer, primary_key=True)
attr_1 = Column(String(32))
attr_2 = Column(Integer, nullable=False)
attr_3 = Column(DateTime)
attr_4 = Column(
Integer,
ForeignKey('other_table.id', onupdate='CASCADE', ondelete='CASCADE'),
nullable=False
)
u_idx = UniqueConstraint(attr_2, attr_3, 'my_table_uidx')
when I call create_database I will get SQLAlchemy to create the table 'my_table' with all columns as specified. The foreign key is also set up fine, but no unique index can be found on the database side. I then tried using an Index(unique=True) instead. So instead of
u_idx = UniqueConstraint(attr_2, attr_3, 'my_table_uidx')
I put
u_idx_1 = Index('my_table_uidx', attr_2, attr_3, unique=True)
My impression was this logically produces a similar result. This time sqlalchemy indeed created the unique index on the db.
Maybe I am miserably misunderstanding something about the difference between UniqueConstraint and Index(unique=True), or the way sqlalchemy uses them to automate generation of databases.
Can anyone shed some light on this?
The main difference is that while the Index API allows defining an index outside of a table definition as long as it can reference the table through the passed SQL constructs, a UniqueConstraint and constraints in general must be defined inline in the table definition:
To apply table-level constraint objects such as ForeignKeyConstraint to a table defined using Declarative, use the __table_args__ attribute, described at Table Configuration.
The thing to understand is that during construction of a declarative class a new Table is constructed, unless an explicit __table__ is passed. In your example model class the UniqueConstraint instance is bound to a class attribute, but the declarative machinery does not collect constraints from class attributes into the created Table instance. You must pass it in the table arguments:
class MyTable(DeclBase):
__tablename__ = 'my_table'
...
# A positional argument tuple, passed to Table constructor
__table_args__ = (
UniqueConstraint(attr_2, attr_3, name='my_table_uidx'),
)
Note that you must pass the constraint name as a keyword argument. You could also pass the constraint using Table.append_constraint(), if called before any attempts to create the table:
class MyTable(DeclBase):
...
MyTable.__table__.append_constraint(
UniqueConstraint('attr_2', 'attr_3', name='my_table_uidx'))
Suppose I have two schemas in a single PostgreSQL database, and each schema contains a table with the same name. For example: schema1.table, schema2.table.
I use SQLAlchemy to work with the database.
The first issue is that I can't reflect a table from the database into an explicitly created class while specifying a concrete schema. For example:
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.declarative import DeferredReflection
Base = declarative_base()
class Table(DeferredReflection, Base):
__tablename__ = 'table'
## somehow specify schema for table
engine = create_engine(
'postgresql+psycopg2://localhost/postgres',
isolation_level='READ UNCOMMITTED'
)
DeferredReflection.prepare(engine)
## do something using reflected table
The second issue is that I am looking for a way to bind one explicitly created class to tables from different schemas and use it as follows:
session = Session()
with schema_context('schema1'):
data = session.query(Table).all() # Table refers to schema1.table
...
with schema_context('schema2'):
data = session.query(Table).all() # Table refers to schema2.table
...
Is there some way to work around or solve described issues?
The SQLAlchemy Table object allows you to pass a schema argument.
Using declarative, arguments are passed to the underlying Table object using __table_args__, as documented here.
class MyTable(DeferredReflection, Base):
__tablename__ = 'my_table'
__table_args__ = {'schema': 'schema2'}
For the second issue, you must create separate mapped classes (and thus separate Table objects) for the different schemas.
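An untested sketch of what those separate classes could look like; the class names here are made up for illustration:
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.declarative import DeferredReflection

Base = declarative_base()

class TableSchema1(DeferredReflection, Base):
    __tablename__ = 'table'
    __table_args__ = {'schema': 'schema1'}

class TableSchema2(DeferredReflection, Base):
    __tablename__ = 'table'
    __table_args__ = {'schema': 'schema2'}

# After DeferredReflection.prepare(engine):
# session.query(TableSchema1).all()  # rows from schema1.table
# session.query(TableSchema2).all()  # rows from schema2.table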
I want to collect statistical information for songs (rating, play count, last time played) from a couple of players over different devices and from different users. I use Python and SQLAlchemy.
I came up with the following table layout:
I can access all related Stats objects from my Commit ORM class as a list. I also want to have access to the related Song objects from the Commit class. Following the examples in the SQLAlchemy documentation, I came up with an association table for the many-to-many relationship.
In code it looks like this (full source):
songcommits_table = Table(
'songcommits', Base.metadata,
Column('commit_id', Integer, ForeignKey('commits.commit_id')),
Column('song_id', Integer, ForeignKey('songs.song_id'))
)
class Commit(Base):
__tablename__ = 'commits'
commit_id = Column(Integer, primary_key=True)
# ...
songs = relationship(
"Song", secondary=songcommits_table, backref="commits"
)
stats = relationship('Stat', backref='commit')
def __repr__(self):
return "<Commit {0.commit_id}>".format(self)
It works. But I have the feeling it could work without the extra table, using the information already stored in the stats table; I just have no idea how to formulate this in the ORM.
So how can I use the information (commit_id and song_id) already present on the Stats ORM class instead of the helper table?
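One possible direction, sketched here untested and under assumptions: if the Stat model's table (assumed to be named 'stats') has commit_id and song_id foreign key columns as described, it can itself serve as the association table by passing it as secondary, with viewonly=True so SQLAlchemy never tries to write association rows through this relationship:
from sqlalchemy import Column, Integer
from sqlalchemy.orm import relationship

class Commit(Base):
    __tablename__ = 'commits'
    commit_id = Column(Integer, primary_key=True)
    # ...
    stats = relationship('Stat', backref='commit')
    # reuse the stats table itself as the association table
    songs = relationship(
        'Song',
        secondary='stats',   # assumed __tablename__ of the Stat model
        backref='commits',
        viewonly=True,
    )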