SQLAlchemy - Mapper configuration and declarative base - python

I am writing a multimedia archive database backend and I want to use joined table inheritance. I am using Python with SQLAlchemy with the declarative extension. The table holding the media record is as follows:
from sqlalchemy import Column, ForeignKey, String
from sqlalchemy.ext.declarative import declarative_base

_Base = declarative_base()

class Record(_Base):
    __tablename__ = 'records'
    item_id = Column(String(M_ITEM_ID), ForeignKey('items.id'))
    storage_id = Column(String(M_STORAGE_ID), ForeignKey('storages.id'))
    id = Column(String(M_RECORD_ID), primary_key=True)
    uri = Column(String(M_RECORD_URI))
    type = Column(String(M_RECORD_TYPE))
    name = Column(String(M_RECORD_NAME))
The column type is a discriminator. Now I want to derive a child class AudioRecord from the Record class, but I don't know how to set up the polymorphic mapper using the declarative syntax. I am looking for an equivalent of the following code (from the SQLAlchemy documentation):
mapper(Record, records, polymorphic_on=records.c.type, polymorphic_identity='record')
mapper(AudioRecord, audiorecords, inherits=Record, polymorphic_identity='audio_record')
How can I pass the polymorphic_on, polymorphic_identity and inherits keywords to the mapper created by the declarative extension?
Thank you
Jan

I finally found the answer in the manual.
http://www.sqlalchemy.org/docs/05/reference/ext/declarative.html#joined-table-inheritance
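For reference, the declarative form passes those keywords through a __mapper_args__ dictionary on each class, and inherits is implied by ordinary Python subclassing. A minimal sketch, assuming illustrative column lengths in place of the original M_* constants and a hypothetical duration column on the child table:

```python
from sqlalchemy import Column, ForeignKey, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Record(Base):
    __tablename__ = 'records'
    id = Column(String(32), primary_key=True)
    type = Column(String(32))  # discriminator column
    name = Column(String(64))
    __mapper_args__ = {
        'polymorphic_on': type,
        'polymorphic_identity': 'record',
    }

class AudioRecord(Record):  # subclassing Record implies inherits=Record
    __tablename__ = 'audio_records'
    # joined-table inheritance: the child's PK is also an FK to the parent
    id = Column(String(32), ForeignKey('records.id'), primary_key=True)
    duration = Column(String(16))  # hypothetical extra column
    __mapper_args__ = {'polymorphic_identity': 'audio_record'}

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(AudioRecord(id='a1', name='interview', duration='3:20'))
session.commit()
loaded = session.query(Record).one()  # polymorphic query on the base class
```

Querying the base class returns AudioRecord instances, with the discriminator column filled in automatically from polymorphic_identity.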

Related

sqlalchemy-continuum with Automap Base

I am trying to integrate SQLAlchemy with SQLAlchemy-Continuum.
I use the Automap feature instead of writing declarative classes, but I have been unable to get Continuum working with Automap, and there are no examples or mentions of this combination in the docs.
Has anybody used SQLAlchemy-Continuum with this feature?
Note: I am using custom schemas for PostgreSQL; the schema is not the default one that Postgres ships with.
Also adding some code for reference:
# Imports
from sqlalchemy import Column, Integer
from sqlalchemy.ext.automap import automap_base
from sqlalchemy_continuum import make_versioned
from sqlalchemy_continuum import version_class, parent_class

# Currently trying not to relate versions to a users table.
make_versioned(user_cls=None)

# The engine is created elsewhere; the schema metadata is reflected into _metadata.
_metadata.reflect(views=True)
Base = automap_base(metadata=_metadata)

class TestPGAudit(Base):
    __tablename__ = 'test_pg_audit'
    __table_args__ = {'extend_existing': True}
    __versioned__ = {}
    id = Column(Integer, primary_key=True)

Base.prepare()

rec = TestPGAudit(name='test')
dbsession.add(rec)
dbsession.flush()
dbsession.commit()

# The history class is not found: sqlalchemy_continuum.exc.ClassNotVersioned
# is raised here.
at = version_class(TestPGAudit)
recs = dbsession.query(at).all()
print(recs)
I have also tried configuring the mappers in different places, including after Base.prepare(), but to no avail.
I also tried creating the history tables in the database manually.
Any help is appreciated.

Register schema for sqlalchemy query in python-eve

I am attempting to create an endpoint for a query joining multiple tables. registerSchema takes a sqlalchemy Base object. The solution I came up with was to create a database view for the sql statement, and use the model to reference the view.
Is this supported more natively with a registered schema? I would rather not maintain database views dependencies in my migrations.
SQL for the data view (table names replaced with contrived examples):
CREATE VIEW v_user_offices AS
SELECT b.id AS building_id,
       b.name AS building_name,
       o.id AS office_id,
       o.name AS office_name,
       uo.user_id AS user_id
FROM buildings AS b
INNER JOIN office_buildings AS ob
    ON ob.building_id = b.id
INNER JOIN offices AS o
    ON o.id = ob.office_id
INNER JOIN user_offices AS uo
    ON uo.office_id = o.id;
SQLAlchemy model:
class ViewUserOffices(CommonColumns):
    __tablename__ = 'v_user_offices'
    building_id = Column(Integer)
    building_name = Column(String)
    office_id = Column(Integer, primary_key=True)
    office_name = Column(String)
    user_id = Column(Integer)
settings.py:
# The DOMAIN dict explains which resources will be available and how they will
# be accessible to the API consumer.
registerSchema('v_user_offices')(ViewUserOffices)

DOMAIN = {
    'user_offices': ViewUserOffices._eve_schema['v_user_offices']
}
DOMAIN['user_offices'].update({
    'item_title': 'user_office',
    'item_lookup_field': 'user_id',
    'resource_methods': ['GET']
})
The Eve-SQLAlchemy plugin tries to stay as close to the SQLAlchemy declarative approach as possible. If you can define your models in SQLAlchemy, you should be able to register them in python-eve.
In your case, IMHO, the only reasonable approach is to create a view and declare it in SQLAlchemy as you did.
If I understand correctly, you can define this schema by taking advantage of sqlalchemy relationships. If your UserOffices class has all of the correct relationships defined, then registerSchema should automagically take care of the relationships between associated objects. All you will have to do is set the embedded parameter in your requests if you would like to embed the associated objects in the response.
A View makes sense if you do not want your client to have to worry about declaring embedded objects.
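The view-backed-model pattern discussed above can be sketched end to end. This is a minimal sketch on SQLite; the table and view names follow the question, but the data and the simplified column set are made up:

```python
from sqlalchemy import Column, Integer, String, create_engine, text
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class ViewUserOffices(Base):
    __tablename__ = 'v_user_offices'  # maps onto the view, not a real table
    office_id = Column(Integer, primary_key=True)
    office_name = Column(String)
    user_id = Column(Integer)

engine = create_engine('sqlite://')
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE offices (id INTEGER PRIMARY KEY, name TEXT)"))
    conn.execute(text("CREATE TABLE user_offices (user_id INTEGER, office_id INTEGER)"))
    conn.execute(text("INSERT INTO offices VALUES (1, 'HQ')"))
    conn.execute(text("INSERT INTO user_offices VALUES (7, 1)"))
    conn.execute(text(
        "CREATE VIEW v_user_offices AS "
        "SELECT o.id AS office_id, o.name AS office_name, uo.user_id AS user_id "
        "FROM offices o JOIN user_offices uo ON uo.office_id = o.id"))

session = sessionmaker(bind=engine)()
row = session.query(ViewUserOffices).one()
```

Note that Base.metadata.create_all() is deliberately never called, so SQLAlchemy never tries to create v_user_offices as a table; the model simply reads from the view the database already has.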

Db2 with SQLAlchemy, how to specify default schema

I'm trying to map an existing DB2 database to new python ORM objects.
I wrote a very simple mapper class:
class Storage(Base):
    __tablename__ = 'T_RES_STORAGE_SUBSYSTEM'
    id = Column(Integer, primary_key=True, name='SUBSYSTEM_ID')
    name = Column(String(255), name='NAME')
    namealias = Column(String(256), name='NAME_ALIAS')
But when I execute a query, SQLAlchemy prepends DB2ADMIN. to the table name, which of course leads to errors. If I run the query manually with TPC. prepended instead, everything works without issues.
How can I specify in the table definition which schema to use?
OK, with mustaccio's help I found out that you have to add the schema to __table_args__:
class Storage(Base):
    __tablename__ = 'T_RES_STORAGE_SUBSYSTEM'
    __table_args__ = {'schema': 'TPC'}
    id = Column(Integer, primary_key=True, name='SUBSYSTEM_ID')
    name = Column(String(255), name='NAME')
    namealias = Column(String(256), name='NAME_ALIAS')

SqlAlchemy add tables versioning to existing tables

Imagine that I have one table in my project with some rows in it.
For example:
# -*- coding: utf-8 -*-
import sqlalchemy as sa
from app import db

class Article(db.Model):
    __tablename__ = 'article'
    id = sa.Column(sa.Integer, primary_key=True, autoincrement=True)
    name = sa.Column(sa.Unicode(255))
    content = sa.Column(sa.UnicodeText)
I'm using Flask-SQLAlchemy, so db.session is scoped session object.
I saw https://github.com/zzzeek/sqlalchemy/blob/master/examples/versioned_history/history_meta.py, but I can't understand how to use it with my existing tables, nor how to get started. (I get an ArgumentError: Session event listen on a scoped_session requires that its creation callable is associated with the Session class. error when I pass db.session to the versioned_session function.)
From versioning I need the following:
1) query for old versions of an object
2) query old versions by the date range in which they changed
3) revert an existing object to an old state
4) add extra info to the history table when a version is created (for example editor user_id, date_edit, remote_ip)
Please tell me the best practices for my case and, if you can, add a small working example.
You can work around that error by attaching the event handler to the SignallingSession class[1] instead of the created session object:
import sqlalchemy as sa
from flask_sqlalchemy import SignallingSession
from history_meta import versioned_session, Versioned

# Create your Flask app...

versioned_session(SignallingSession)
db = SQLAlchemy(app)

class Article(Versioned, db.Model):
    __tablename__ = 'article'
    id = sa.Column(sa.Integer, primary_key=True, autoincrement=True)
    name = sa.Column(sa.Unicode(255))
    content = sa.Column(sa.UnicodeText)
The sample code creates parallel tables with a _history suffix and an additional changed datetime column. Querying for old versions is just a matter of looking in that table.
For managing the extra fields, I would put them on your main table, and they'll automatically be kept track of in the history table.
[1] Note, if you override SQLAlchemy.create_session() to use a different session class, you should adjust the class you pass to versioned_session.
I think the problem is you're running into this bug: https://github.com/mitsuhiko/flask-sqlalchemy/issues/182
One workaround would be to stop using flask-sqlalchemy and configure sqlalchemy yourself.

SQLAlchemy declarative syntax with autoload (reflection) in Pylons

I would like to use autoload to work with an existing database. I know how to do it without the declarative syntax (model/__init__.py):
def init_model(engine):
    """Call me before using any of the tables or classes in the model"""
    t_events = Table('events', Base.metadata, schema='events',
                     autoload=True, autoload_with=engine)
    orm.mapper(Event, t_events)
    Session.configure(bind=engine)

class Event(object):
    pass
This works fine, but I would like to use declarative syntax:
class Event(Base):
    __tablename__ = 'events'
    __table_args__ = {'schema': 'events', 'autoload': True}
Unfortunately, this way I get:
sqlalchemy.exc.UnboundExecutionError: No engine is bound to this Table's MetaData. Pass an engine to the Table via autoload_with=<someengine>, or associate the MetaData with an engine via metadata.bind=<someengine>
The problem here is that I don't know where to get the engine from (to use it in autoload_with) at the stage of importing the model (it's only available in init_model()). I tried adding
meta.Base.metadata.bind(engine)
to environment.py, but it doesn't work. Has anyone found an elegant solution?
OK, I think I figured it out. The solution is to declare the model objects outside the model/__init__.py. I concluded that __init__.py gets imported as the first file when importing something from a module (in this case model) and this causes problems because the model objects are declared before init_model() is called.
To avoid this I created a new file in the model module, e.g. objects.py. I then declared all my model objects (like Event) in this file.
Then, I can import my models like this:
from PRJ.model.objects import Event
Furthermore, to avoid specifying autoload_with for each table, I added this line at the end of init_model():
Base.metadata.bind = engine
This way I can declare my model objects with no boilerplate code, like this:
class Event(Base):
    __tablename__ = 'events'
    __table_args__ = {'schema': 'events', 'autoload': True}
    event_identifiers = relationship(EventIdentifier)

    def __repr__(self):
        return "<Event(%s)>" % self.id
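The "no engine at import time" problem also has a built-in solution on later SQLAlchemy versions: DeferredReflection lets you declare mapped classes up front and reflect their columns later, once the engine exists. A sketch against SQLite; the events table and its columns here are made up for illustration:

```python
from sqlalchemy import create_engine, text
from sqlalchemy.ext.declarative import DeferredReflection
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Reflected(DeferredReflection, Base):
    __abstract__ = True

class Event(Reflected):
    __tablename__ = 'events'  # columns are filled in at prepare() time

engine = create_engine('sqlite://')
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE events (id INTEGER PRIMARY KEY, title TEXT)"))
    conn.execute(text("INSERT INTO events (title) VALUES ('launch')"))

Reflected.prepare(engine)  # reflect now that an engine is available
session = sessionmaker(bind=engine)()
ev = session.query(Event).one()
```

This keeps the model module free of engine references entirely; the equivalent of init_model() just calls Reflected.prepare(engine).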
I just tried this using the orm module:
Base = declarative_base(bind=engine)
Base.metadata.reflect(bind=engine)
The tables can then be accessed manually, in a loop, or however you like, via:
Base.metadata.sorted_tables
Might be useful.
from sqlalchemy import MetaData, create_engine, Table

engine = create_engine('postgresql://postgres:********@localhost/db_name')
metadata = MetaData(bind=engine)
rivers = Table('rivers', metadata, autoload=True, autoload_with=engine)

from sqlalchemy import select
s = select([rivers]).limit(5)
engine.execute(s).fetchall()
This worked for me. I was getting the error because I had not specified bind when creating the MetaData() object.
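On current SQLAlchemy (1.4+), bound MetaData and engine.execute() are gone; passing autoload_with is all that's needed. A sketch of the same reflection against SQLite, with a made-up rivers table:

```python
from sqlalchemy import MetaData, Table, create_engine, select, text

engine = create_engine('sqlite://')
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE rivers (id INTEGER PRIMARY KEY, name TEXT)"))
    conn.execute(text("INSERT INTO rivers (name) VALUES ('Danube')"))

metadata = MetaData()
rivers = Table('rivers', metadata, autoload_with=engine)  # reflects the columns

with engine.connect() as conn:
    rows = conn.execute(select(rivers).limit(5)).fetchall()
```

Note the 2.0-style select(rivers) in place of the legacy select([rivers]), and that queries go through an explicit connection rather than the engine.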
Check out the Using SQLAlchemy with Pylons tutorial on how to bind metadata to the engine in the init_model function.
If the meta.Base.metadata.bind(engine) statement successfully binds your model metadata to the engine, you should be able to perform this initialization in your own init_model function. I guess you didn't mean to skip the metadata binding in this function, did you?
