I would like to use autoload to work with an existing database. I know how to do it without the declarative syntax (model/__init__.py):
def init_model(engine):
    """Call me before using any of the tables or classes in the model"""
    t_events = Table('events', Base.metadata, schema='events', autoload=True, autoload_with=engine)
    orm.mapper(Event, t_events)
    Session.configure(bind=engine)

class Event(object):
    pass
This works fine, but I would like to use declarative syntax:
class Event(Base):
    __tablename__ = 'events'
    __table_args__ = {'schema': 'events', 'autoload': True}
Unfortunately, this way I get:
sqlalchemy.exc.UnboundExecutionError: No engine is bound to this Table's MetaData. Pass an engine to the Table via autoload_with=<someengine>, or associate the MetaData with an engine via metadata.bind=<someengine>
The problem here is that I don't know where to get the engine from (to use it in autoload_with) at the stage of importing the model (it's only available in init_model()). I tried adding
meta.Base.metadata.bind(engine)
to environment.py, but it doesn't work. Has anyone found an elegant solution?
OK, I think I figured it out. The solution is to declare the model objects outside model/__init__.py. I concluded that __init__.py gets imported first when importing anything from the module (in this case model), which causes problems because the model objects are declared before init_model() is called.
To avoid this I created a new file in the model module, e.g. objects.py. I then declared all my model objects (like Event) in this file.
Then, I can import my models like this:
from PRJ.model.objects import Event
Furthermore, to avoid specifying autoload_with for each table, I added this line at the end of init_model():
Base.metadata.bind = engine
This way I can declare my model objects with no boilerplate code, like this:
class Event(Base):
    __tablename__ = 'events'
    __table_args__ = {'schema': 'events', 'autoload': True}

    event_identifiers = relationship(EventIdentifier)

    def __repr__(self):
        return "<Event(%s)>" % self.id
I just tried this using the orm module.
Base = declarative_base(bind=engine)
Base.metadata.reflect(bind=engine)
You can then access the tables manually, or iterate over them in a loop:
Base.metadata.sorted_tables
Might be useful.
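For example, a quick way to inspect what was reflected (a sketch, using the Base from the snippet above):
for table in Base.metadata.sorted_tables:
    print(table.name, [column.name for column in table.columns])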
from sqlalchemy import MetaData, create_engine, Table

engine = create_engine('postgresql://postgres:********@localhost/db_name')
metadata = MetaData(bind=engine)
rivers = Table('rivers', metadata, autoload=True, autoload_with=engine)

from sqlalchemy import select
s = select([rivers]).limit(5)
engine.execute(s).fetchall()
This worked for me. I was getting the error because I had not specified bind when creating the MetaData() object.
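Note that select([rivers]) and engine.execute(...) are SQLAlchemy 1.x-style APIs; in 1.4/2.0 you would write select(rivers) and execute it on a connection, e.g. with engine.connect() as conn: conn.execute(s).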
Check out the Using SQLAlchemy with Pylons tutorial on how to bind metadata to the engine in the init_model function.
If the metadata binding succeeds, you should be able to perform this initialization in your own init_model function. Note that bind is an attribute, not a callable, so the statement should be an assignment: meta.Base.metadata.bind = engine. I guess you didn't mean to skip the metadata binding in this function, did you?
I'm trying to set up multiple databases with the same model in Flask-SQLAlchemy.
A sample model looks like this:
app.config['SQLALCHEMY_DATABASE_URI'] = 'your_default_schema_db_uri'
app.config['SQLALCHEMY_BINDS'] = {
    # your other db uri
    'other_schema': 'mysql+pymysql://' + UNAME + ':' + PASS + '@' + SERVERURL + ':3306/' + DBNAME,
}
db = flask.ext.sqlalchemy.SQLAlchemy(app)
class TableA(db.Model):
    # This belongs to the default schema; it doesn't need __bind_key__.
    ...

class TableB(db.Model):
    # This belongs to other_schema.
    __bind_key__ = 'other_schema'
    ...
db.create_all() works fine and creates the tables in their individual schemas.
I was following https://stackoverflow.com/a/34240889/8270017 and wanted to create a single table using:
TableB.__table__.create(db.session.bind, checkfirst=True)
The table gets created in the default bind and not other_schema.
Is there something I'm missing here? How can I fix it so that the table gets created in the other schema?
You need to supply the correct engine to the create function. The correct bind can be retrieved in the following way:
from sqlalchemy.orm import object_mapper
TableB.__table__.create(db.session().get_bind(object_mapper(TableB())), checkfirst=True)
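If you'd rather not instantiate the model just to look up its bind, Flask-SQLAlchemy (2.x) also exposes the engine for a given bind key directly; a shorter equivalent, assuming your app object is in scope:
TableB.__table__.create(db.get_engine(app, bind='other_schema'), checkfirst=True)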
For PG, at least, I do it like this:
class TableB(db.Model):
    __table_args__ = {"schema": "schema_name"}
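Note that __bind_key__ selects a database/engine, while the schema entry in __table_args__ selects a PostgreSQL schema within it; the two can be combined if you need both:
class TableB(db.Model):
    __bind_key__ = 'other_schema'                # which engine/database
    __table_args__ = {"schema": "schema_name"}   # which PG schema inside it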
I am trying to integrate SQLAlchemy with SQLAlchemy-Continuum.
I use the automap feature instead of writing declarative classes, but I have been unable to get Continuum to work with automap, and there are no examples or mentions in the docs covering this combination.
Has anybody used SQLAlchemy-Continuum with this feature?
Note - I am using custom schemas for PostgreSQL; the schema is not the default one that Postgres ships with.
Also adding some code for reference:
# Imports
from sqlalchemy import Column, Integer
from sqlalchemy.ext.automap import automap_base
from sqlalchemy_continuum import make_versioned
from sqlalchemy_continuum import version_class, parent_class

make_versioned(user_cls=None)  # Currently trying not to tie transactions to a user table.

# Created the engine and reflected the schema metadata into _metadata.
_metadata.reflect(views=True)
Base = automap_base(metadata=_metadata)

class TestPGAudit(Base):
    __tablename__ = 'test_pg_audit'
    __table_args__ = {'extend_existing': True}
    __versioned__ = {}
    id = Column(Integer, primary_key=True)

Base.prepare()

rec = TestPGAudit(name='test')
dbsession.add(rec)
dbsession.flush()
dbsession.commit()

# The history class is not found; sqlalchemy_continuum.exc.ClassNotVersioned is raised here.
at = version_class(TestPGAudit)
recs = dbsession.query(at).all()
print(recs)
I have also tried configuring the mappers at different points, including after Base.prepare(), but to no avail.
I also tried creating the history tables in the database manually.
Any help is appreciated.
SQLAlchemy provides Connection.execution_options(schema_translate_map=...) for changing schemas at execution time, as described in the docs.
The examples show how to use it when performing queries, but I want to know how to use it with create_all().
I'm using Flask-SQLAlchemy and PostgreSQL as the database. Let's say I have this:
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

def create_app():
    app = Flask(...)
    ...
    db.init_app(app)
    ...
    return app

class User(db.Model):
    __tablename__ = 'user'
    __table_args__ = {'schema': 'public'}

    company = db.Column(db.String(10))

class SomePublicModel(db.Model):
    __tablename__ = 'some_public'
    __table_args__ = {'schema': 'public'}
    ...

class SomeModelByDynamicSchema(db.Model):
    __tablename__ = 'some_dynamic'
    __table_args__ = {'schema': 'dynamic'}
    ...
The dynamic schema will be replaced with another value, according to the user's company, at execution time.
Assuming I already have the schemas public and dynamic in the database, and I want to create a new schema with the tables, something like this:
def create_new():
    user = User(company='foo')
    db.session.execute("CREATE SCHEMA IF NOT EXISTS %s" % user.company)
    db.session.connection().execution_options(schema_translate_map={'dynamic': user.company})
    # I would like to do something of the kind:
    db.create_all()
I expected the tables to be created in the foo schema as foo.some_dynamic, but SQLAlchemy still tries to create them in the dynamic schema.
Can someone help me?
When you set execution options, you create a copy of the connection. This means that create_all runs without the schema_translate_map.
>>> c = Base.session.connection()
>>> w = c.execution_options(schema_translate_map={'dynamic':'kek'})
>>> c._execution_options
immutabledict({})
>>> w._execution_options
immutabledict({'schema_translate_map': {'dynamic': 'kek'}})
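So you have to run the DDL on the copy that carries the map. A minimal sketch of that idea (assuming SQLAlchemy 1.1+, where schema_translate_map also applies to DDL):
conn = db.session.connection().execution_options(
    schema_translate_map={'dynamic': user.company})
# Run create_all against the connection copy that carries the translate map.
db.Model.metadata.create_all(bind=conn)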
Alternatively, to achieve your goal, you could try another approach: take the tables from the old metadata and adapt them to a new MetaData with the desired schema.
metadata = MetaData(bind=engine, schema=db_schema)
for table in db.Model.metadata.tables.values():
    table.tometadata(metadata)
metadata.drop_all()
metadata.create_all()
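Note that Table.tometadata() was renamed to Table.to_metadata() in SQLAlchemy 1.4; the old name still works but is deprecated. Also be careful with the drop_all() here, since it drops any existing tables in the target schema before recreating them.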
Imagine that I have one table in my project with some rows in it.
For example:
# -*- coding: utf-8 -*-
import sqlalchemy as sa
from app import db

class Article(db.Model):
    __tablename__ = 'article'

    id = sa.Column(sa.Integer, primary_key=True, autoincrement=True)
    name = sa.Column(sa.Unicode(255))
    content = sa.Column(sa.UnicodeText)
I'm using Flask-SQLAlchemy, so db.session is a scoped session object.
I looked at https://github.com/zzzeek/sqlalchemy/blob/master/examples/versioned_history/history_meta.py,
but I can't understand how to use it with my existing tables, or even how to get it started. (I get an ArgumentError: Session event listen on a scoped_session requires that its creation callable is associated with the Session class. error when I pass db.session to the versioned_session function.)
From versioning I need the following:
1) query for old versions of an object
2) query old versions by the date range in which they changed
3) revert an existing object to an old state
4) add additional info to the history table when a version is created (for example the editor's user_id, date_edit, remote_ip)
Please tell me the best practices for my case, ideally with a small working example.
You can work around that error by attaching the event handler to the SignallingSession class[1] instead of the created session object:
import sqlalchemy as sa
from flask.ext.sqlalchemy import SignallingSession
from history_meta import versioned_session, Versioned

# Create your Flask app...

versioned_session(SignallingSession)
db = SQLAlchemy(app)

class Article(Versioned, db.Model):
    __tablename__ = 'article'

    id = sa.Column(sa.Integer, primary_key=True, autoincrement=True)
    name = sa.Column(sa.Unicode(255))
    content = sa.Column(sa.UnicodeText)
The sample code creates parallel tables with a _history suffix and an additional changed datetime column. Querying for old versions is just a matter of looking in that table.
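For instance, a hypothetical query for old versions (assuming the history_meta.py from the SQLAlchemy examples, which attaches a __history_mapper__ to each versioned class and adds version and changed columns to the history table; some_article_id is a placeholder):
ArticleHistory = Article.__history_mapper__.class_
old_versions = (db.session.query(ArticleHistory)
                .filter(ArticleHistory.id == some_article_id)
                .order_by(ArticleHistory.version)
                .all())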
For managing the extra fields, I would put them on your main table, and they'll automatically be kept track of in the history table.
[1] Note, if you override SQLAlchemy.create_session() to use a different session class, you should adjust the class you pass to versioned_session.
I think the problem is you're running into this bug: https://github.com/mitsuhiko/flask-sqlalchemy/issues/182
One workaround would be to stop using flask-sqlalchemy and configure sqlalchemy yourself.
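A bare-bones sketch of that setup (the database URI is a placeholder; history_meta is the module from the SQLAlchemy examples):
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
from history_meta import versioned_session

engine = create_engine('sqlite:///app.db')
session_factory = sessionmaker(bind=engine)
versioned_session(session_factory)  # attach the versioning event listeners
Session = scoped_session(session_factory)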
I am writing a multimedia archive database backend and I want to use joined table inheritance. I am using Python with SQLAlchemy with the declarative extension. The table holding the media record is as follows:
_Base = declarative_base()

class Record(_Base):
    __tablename__ = 'records'

    item_id = Column(String(M_ITEM_ID), ForeignKey('items.id'))
    storage_id = Column(String(M_STORAGE_ID), ForeignKey('storages.id'))
    id = Column(String(M_RECORD_ID), primary_key=True)
    uri = Column(String(M_RECORD_URI))
    type = Column(String(M_RECORD_TYPE))
    name = Column(String(M_RECORD_NAME))
The type column is a discriminator. Now I want to derive a child class AudioRecord from the Record class, but I don't know how to set up the polymorphic mapper using the declarative syntax. I am looking for an equivalent of the following code (from the SQLAlchemy documentation):
mapper(Record, records, polymorphic_on=records.c.type, polymorphic_identity='record')
mapper(AudioRecord, audiorecords, inherits=Record, polymorphic_identity='audio_record')
How can I pass the polymorphic_on, polymorphic_identity and inherits keywords to the mapper created by the declarative extension?
Thank you
Jan
I finally found the answer in the manual.
http://www.sqlalchemy.org/docs/05/reference/ext/declarative.html#joined-table-inheritance
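For future readers, a minimal sketch of what that looks like declaratively (based on the Record model above; the audio_records table and its bitrate column are hypothetical). With declarative, inherits is implied by subclassing, and the remaining mapper keywords go in __mapper_args__:
class Record(_Base):
    __tablename__ = 'records'

    id = Column(String(M_RECORD_ID), primary_key=True)
    type = Column(String(M_RECORD_TYPE))
    # ... other columns as above ...

    __mapper_args__ = {
        'polymorphic_on': type,
        'polymorphic_identity': 'record',
    }

class AudioRecord(Record):
    __tablename__ = 'audio_records'

    # The child's primary key joins back to the parent row.
    id = Column(String(M_RECORD_ID), ForeignKey('records.id'), primary_key=True)
    bitrate = Column(Integer)

    __mapper_args__ = {
        'polymorphic_identity': 'audio_record',
    }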