I'm using PostgreSQL and Python SQLAlchemy to create a partitioned table. Below is example code.
from sqlalchemy import Column, DateTime, func
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.types import Text

Base = declarative_base()

class Students(Base):
    __tablename__ = "Students"
    id = Column(Text, primary_key=True)
    name = Column(Text)
    type = Column(Text)
    desc = Column(Text)
    creation_time = Column(DateTime, default=func.now(), nullable=False, primary_key=True)
    __table_args__ = {
        'postgresql_partition_by': 'RANGE (creation_time)'
    }

table_list = [Students.__table__]
Base.metadata.create_all(engine, tables=table_list)  # engine: your Engine instance
The above code creates a partitioned table, and I'm creating the partitions with the pg_partman extension.
Now I want to use the Students class to create the student table in several different databases, and some of those databases don't need a partitioned table. How can I make the above code dynamic so that I can take 'postgresql_partition_by' out of __table_args__?
According to the documentation it looks like I have to use a dictionary for the table definition, but I'm not quite sure about it.
The keyword arguments of Table (or __table_args__) are dialect-specific, so your postgresql_partition_by will only be used with the postgresql dialect, not sqlite or others.
class sqlalchemy.schema.Table(*args, **kw)
[...]
**kw – Additional keyword arguments not mentioned above are dialect specific, and passed in the form <dialectname>_<argname>. See the documentation regarding an individual dialect at Dialects for detail on documented arguments.
Basing this question off this similar post, but about sort order:
I understand that you can change lazy to 'dynamic' in the relationship, which then allows you to query against the relationship before loading, but is there a way to LIMIT the returned results directly from a selectin or one of the other loading techniques?
The use case: I'm trying to pass the record into Marshmallow and limit the number of nested records returned. 'dynamic' doesn't work at that point, as Marshmallow consumes it as an .all(); 'selectin' appears to load the collection unqueryably at load time, so again Marshmallow gets the entire record set.
from sqlalchemy import Column, Integer, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

class Example(Base):
    __tablename__ = 'examples'
    id = Column(Integer, primary_key=True)
    related_items = relationship('RelatedItem', back_populates='example', order_by='RelatedItem.id')

class RelatedItem(Base):
    __tablename__ = 'related_items'
    id = Column(Integer, primary_key=True)
    example_id = Column(Integer, ForeignKey('examples.id'), nullable=False)
    example = relationship('Example', back_populates='related_items')
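There is no loader option that applies a per-parent LIMIT directly, but the SQLAlchemy relationship docs describe a window-function workaround: map an aliased entity against a row_number() subquery and expose it as a viewonly relationship. A self-contained sketch against models like the ones above (the 'top_items' name and the limit of 2 are made up; requires 1.4+ style select()):

```python
from sqlalchemy import Column, ForeignKey, Integer, and_, create_engine, func, select
from sqlalchemy.orm import Session, aliased, declarative_base, relationship

Base = declarative_base()

class Example(Base):
    __tablename__ = 'examples'
    id = Column(Integer, primary_key=True)

class RelatedItem(Base):
    __tablename__ = 'related_items'
    id = Column(Integer, primary_key=True)
    example_id = Column(Integer, ForeignKey('examples.id'), nullable=False)

# Number each example's items with a window function.
subq = (
    select(
        RelatedItem,
        func.row_number()
        .over(partition_by=RelatedItem.example_id, order_by=RelatedItem.id)
        .label('rn'),
    )
    .subquery()
)
ranked = aliased(RelatedItem, subq)

# Expose only the first two rows per parent as a read-only collection.
Example.top_items = relationship(
    ranked,
    primaryjoin=and_(ranked.example_id == Example.id, subq.c.rn <= 2),
    viewonly=True,
)

# Demo: three items for one parent, but the collection caps at two.
engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
session.add(Example(id=1))
session.add_all([RelatedItem(id=i, example_id=1) for i in (1, 2, 3)])
session.commit()
top_ids = sorted(item.id for item in session.get(Example, 1).top_items)
```

Because top_items behaves like an ordinary (read-only) collection, Marshmallow can serialize it as-is and never sees more than two rows per parent; it also works with selectinload/joinedload like any other relationship.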
I am currently working with some legacy code that looks as follows:
from sqlalchemy import Column, ForeignKey, Integer, Unicode
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.dialects.postgresql import ARRAY, TEXT

Base = declarative_base()

class Book(Base):
    __tablename__ = 'book'
    id = Column(Integer, primary_key=True)
    title = Column(Unicode)
    keywords = Column('keywords', ARRAY(TEXT), primary_key=False)
The keywords are currently being kept as an array, but I'd like to flatten this out and have them be in their own separate model
class Keyword(Base):
    __tablename__ = 'keyword'
    id = Column(Integer, primary_key=True)
    book_id = Column(Integer, ForeignKey('book.id', ondelete='cascade'),
                     nullable=False)
    keyword = Column(Unicode)
How can I make it such that when a Book() is created, it also creates the
accompanying keywords? As an intermediate step for migrating the API, I'd like to keep the current array column, but also have the accompanying Keyword() instances be created.
I could do this within an __init__ method, but I would need to know the current Session in order to commit. I could perhaps also use a property attached to keywords, but I'm not sure how that would work given that I am working with a class that inherits from SQLAlchemy's Base, not a regular class that I have defined. What's the correct way to do this?
You can use object_session to find out the session of a given instance.
But if you define a relationship between Book and Keyword, you should not even need to bother:
class Book(Base):
    # ...
    rel_keywords = relationship('Keyword', backref='book')

    def init_keyword_relationship(self):
        for kw in self.keywords:
            self.rel_keywords.append(Keyword(keyword=kw))

sess = # ... get_session...
books = sess.query(Book).all()
for book in books:
    book.init_keyword_relationship()
sess.commit()
However, I would do the migration once and get rid of the keywords array, so as not to have to add logic keeping the two in sync.
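For the intermediate period where both representations must coexist, one option (a sketch, not part of the answer above; @validates is a standard SQLAlchemy hook, the validator name is made up) is to mirror every assignment to the array into the relationship, so newly created books get their Keyword rows automatically:

```python
from sqlalchemy import Column, ForeignKey, Integer, Unicode
from sqlalchemy.dialects.postgresql import ARRAY, TEXT
from sqlalchemy.orm import declarative_base, relationship, validates

Base = declarative_base()

class Book(Base):
    __tablename__ = 'book'
    id = Column(Integer, primary_key=True)
    title = Column(Unicode)
    keywords = Column(ARRAY(TEXT))
    rel_keywords = relationship('Keyword', backref='book')

    @validates('keywords')
    def _mirror_keywords(self, key, value):
        # Rebuild the Keyword rows whenever the array is (re)assigned.
        self.rel_keywords = [Keyword(keyword=kw) for kw in (value or [])]
        return value

class Keyword(Base):
    __tablename__ = 'keyword'
    id = Column(Integer, primary_key=True)
    book_id = Column(Integer, ForeignKey('book.id', ondelete='cascade'),
                     nullable=False)
    keyword = Column(Unicode)

# Creating a Book populates both the array and the related rows.
book = Book(title='Example', keywords=['alpha', 'beta'])
mirrored = [k.keyword for k in book.rel_keywords]
```

The save-update cascade on the relationship means adding the Book to a session also persists the mirrored Keyword rows; no explicit session access is needed inside the model.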
While upgrading from SQLAlchemy 0.8 to 1.0.4, my ORM broke with the error "Can't redefine 'quote' or 'quote_schema' arguments".
I connect to a Sybase DB and use a declarative_base:
Base = declarative_base()
I use the standard method to create the mapping below:
class RiskAggregationGroup(Base):
    __tablename__ = 'RISK_AGGREGATION_GROUP'
    __table_args__ = {'quote': False, 'extend_existing': True}
    id = Column(Integer, name='id_risk_agg', primary_key=True)
    name = Column(String(50), name='nm_risk_agg')
    description = Column(String(100), name='tx_desc')
This worked fine in SQLAlchemy 0.8 but breaks in 1.0.4, which doesn't let me specify quote as a table arg. I've tried a whole host of workarounds, e.g. setting it in the base:
class Base(object):
    __table_args__ = {'quote': False, 'extend_existing': True}

Base = declarative_base(cls=Base)
throws the same error. If I change it to use @declared_attr, the quoting is not turned off. I'm unable to change the Sybase settings, and my table names are all caps (which is what triggers the quoting). I've got about 20 tables defined here, so I am loath to change them all to Table constructions such as:
class RiskAggregationGroup(Base):
    __tablename__ = 'RISK_AGGREGATION_GROUP'
    __table__ = Table(__tablename__, Base.metadata,
                      Column(Integer, name='id_risk_agg', primary_key=True, key='id'),
                      Column(String(50), name='nm_risk_agg', key='name'),
                      quote=False)
Does anyone have a more elegant solution? So far Google has failed me.
I got an answer to this from the sqlalchemy Google group:
https://groups.google.com/forum/#!topic/sqlalchemy/xIPnU89GKFI
Huge thanks to Michael Bayer. The solution is not to set quote=False, but to set the quote characters to [ and ]:
e = create_engine("sybase://")
# instead of quote=False, change the quoting characters themselves
e.dialect.identifier_preparer.initial_quote = '['
e.dialect.identifier_preparer.final_quote = ']'
I have the following ORM mapping using SQLAlchemy:
from sqlalchemy import Column, DateTime, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Foo(Base):
    __tablename__ = "foo"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    date_imported = Column(DateTime)
However, how can I either get the CREATE TABLE SQL syntax, or have SQLAlchemy create the table for me?
Use Foo.__table__.create(bind=engine, checkfirst=True) to issue the statement for that table, or metadata.create_all(bind=engine) to issue the statements for all tables registered on that metadata. If you are using Flask-SQLAlchemy, use db.create_all() to honor binds correctly.
Let's say I have the following structure (using Flask-SQLAlchemy):
class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String, nullable=False, index=True)
    # The following line throws an error at runtime.
    variant = db.Column(db.Integer, nullable=False, index=True,
                        default=select(func.count(User.id)).where(User.name == self.name))

    def __init__(self, name):
        super(User, self).__init__()
        self.name = name

    @property
    def clause(self):
        return '/'.join([str(self.variant), self.name])
Problem is, "User is not defined." I would like to model a system where users may choose the same name, but add a field to differentiate between users in a systematic way without using (and thereby exposing) the "id" field.
Does anyone know how to make a self-referential query to populate a default value?
The issue of the default not referring to User is solved by just assigning "default" to the Column once User is available. However, that won't solve the whole problem here, because "self" means nothing either; there is no User method being called, so you can't just refer to "self". The challenge with this statement is that you want it rendered as an inline sub-SELECT, but it still needs to know the in-memory value of ".name", so you have to assign that sub-SELECT per object in some way.
The usual way people approach ORM-level INSERT defaults like this is usually by using a before_insert handler.
Another way that's worth pointing out is by creating a SQL level INSERT trigger. This is overall the most "traditional" approach, as here you need to have access to the row being inserted; triggers define a means of getting at the row values that are being inserted.
As far as using a default at the column level, you'd need to use a callable function as the default which can look at the current value of the row being inserted, but at the moment that means that your SELECT statement will not be rendered inline with the INSERT statement, you'd need to pre-execute the SELECT which is not really what we want here.
Anyway, the basic task of rendering a SQL expression into the INSERT while also having that SQL expression refer to some local per-object state is achieved by assigning that expression to the attribute, the ORM picks up on this at flush time. Below we do this in the constructor, but this can also occur inside of before_insert() as well:
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False, index=True)
    variant = Column(Integer, nullable=False, index=True)

    def __init__(self, name):
        self.name = name
        self.variant = (
            select(func.count(User.id))
            .where(User.name == self.name)
            .scalar_subquery()
        )

e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

s = Session(e)
s.add(User(name='n1'))
s.commit()
s.add(User(name='n1'))
s.commit()

print(s.query(User.variant).all())
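The same per-object expression assignment can also be done from a before_insert mapper event rather than the constructor, which keeps the SQL out of __init__. A self-contained sketch (the listener name is made up; event.listens_for and before_insert are standard SQLAlchemy APIs, shown here in 1.4+ style):

```python
from sqlalchemy import Column, Integer, String, create_engine, event, func, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False, index=True)
    variant = Column(Integer, nullable=False, index=True)

@event.listens_for(User, 'before_insert')
def _default_variant(mapper, connection, target):
    # Attach the correlated sub-SELECT; the ORM renders it inline
    # within the INSERT statement at flush time.
    target.variant = (
        select(func.count(User.id))
        .where(User.name == target.name)
        .scalar_subquery()
    )

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
session.add(User(name='n1'))
session.commit()
session.add(User(name='n1'))
session.commit()

variants = [v for (v,) in session.query(User.variant).order_by(User.id)]
```

The first user named 'n1' gets variant 0, the second gets 1, because the sub-SELECT counts the rows that exist at the moment each INSERT runs.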