The situation:
I have one data table 'my_table' with several columns (let's call them 'a', 'b' and 'c').
Using SQLAlchemy I've created 3 classes within my model:
class MyTableBase():
    a = Column(Integer)

class MyClassOne(MyTableBase):
    b = Column(Integer)

class MyClassTwo(MyTableBase):
    c = Column(Integer)
I want to override the default query(MyClassOne).all() to return only records with a=10, and query(MyClassTwo).all() to return only records with a=20.
That would be quite an elegant way to store many different (but very similar) kinds of objects in one table while working with them separately at the class/object level.
Any suggestions? Maybe it's just my lack of searching skills, but I simply can't find it :/
Update:
My code now:
class MyTableBase:
    __tablename__ = u'my_table_base'
    a = Column(VARCHAR(length=20))  # to distinguish the different types
    __mapper_args__ = {"polymorphic_on": a}

class MyClassOne(MyTableBase):
    __mapper_args__ = {"polymorphic_identity": "aaaa"}
    b = Column(Integer)

class MyClassTwo(MyTableBase):
    __mapper_args__ = {"polymorphic_identity": "bbbb"}
    c = Column(Integer)
My DB table has two records: one has the value "aaaa" and the other "bbbb" in the "a" column. Querying for either MyClassOne or MyClassTwo returns both rows. Any ideas?
UPDATE:
To select the data I use:
cls_query = Session.query(MyClassOne)
print 'cls_query: ',cls_query
cls_query.all()
The print result is:
cls_query: SELECT my_table_base.id AS my_table_base_id,
my_table_base.a AS my_table_base_a,
my_table_base.b AS my_table_base_b
FROM my_table_base
Notice that it selects only the columns for the requested class MyClassOne (column c is not present in the SELECT statement), but it still lacks a WHERE clause.
As found here:
Flask-SQLAlchemy Single Table Inheritance
The correct answer is to set __tablename__ = None in all subclasses.
class MyTableBase:
    __tablename__ = u'my_table_base'
    a = Column(VARCHAR(length=20))  # to distinguish the different types
    __mapper_args__ = {"polymorphic_on": a}

class MyClassOne(MyTableBase):
    __tablename__ = None
    __mapper_args__ = {"polymorphic_identity": "aaaa"}
    b = Column(Integer)

class MyClassTwo(MyTableBase):
    __tablename__ = None
    __mapper_args__ = {"polymorphic_identity": "bbbb"}
    c = Column(Integer)
This information is not present in the relevant SQLAlchemy docs :(
Works fine now!
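For reference, here is a minimal self-contained sketch of the working single-table inheritance setup (the in-memory SQLite engine, the id column and the session handling are my own additions for the example, not from the original post). Querying a subclass now emits a WHERE clause on the discriminator column:

from sqlalchemy import Column, Integer, VARCHAR, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class MyTableBase(Base):
    __tablename__ = 'my_table_base'
    id = Column(Integer, primary_key=True)  # assumed primary key
    a = Column(VARCHAR(length=20))          # discriminator column
    __mapper_args__ = {"polymorphic_on": a}

class MyClassOne(MyTableBase):
    # Under Flask-SQLAlchemy set __tablename__ = None here, as found above;
    # with plain SQLAlchemy simply omitting __tablename__ has the same effect.
    __mapper_args__ = {"polymorphic_identity": "aaaa"}
    b = Column(Integer)

class MyClassTwo(MyTableBase):
    __mapper_args__ = {"polymorphic_identity": "bbbb"}
    c = Column(Integer)

engine = create_engine("sqlite://")  # throwaway in-memory DB for the example
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([MyClassOne(b=1), MyClassTwo(c=2)])
session.commit()

print(session.query(MyClassOne))        # SELECT ... FROM my_table_base WHERE my_table_base.a IN (...)
print(session.query(MyClassOne).all())  # only the "aaaa" row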
Related
I'm trying to create multiple classes in SQLAlchemy to generate specific tables.
I found different things here and more on https://docs.python.org/3/library/functions.html#import and https://python-course.eu/oop/dynamically-creating-classes-with-type.php
It seems clear enough, but their examples use global operations, and I don't understand exactly how I can use this inside a function for something like this:
class _Table(Base):
    __tablename__ = '_table'
    id = Column(Integer, primary_key=True, autoincrement=True)
    name = Column(String)

table_list = ['Table1', 'Table2']

def table_class_generator(table_list):
    # here I need to create the tables
    pass

def tables_operators():
    # here I make operations on tables
    pass
So I need this to generate, at module level, something like:
class Table1(Base):
    __tablename__ = 'table1'
    id = ...
    name = ...

class Table2(Base):
    __tablename__ = 'table2'
    id = ...
    name = ...

def table_operators():
    #
Thanks @Gord Thompson for putting me on another track.
I believe the solution is something like this; I have only tested the database initialisation.
from sqlalchemy import Column, Integer, MetaData, Table

def tables_constructor(names: list) -> list:
    """
    Create one table in the database per name and return the Table objects.
    """
    engine = get_engine()  # your own helper that returns an Engine
    metadata_obj = MetaData()
    tables = []
    for name in names:
        table_obj = Table(
            name,
            metadata_obj,
            Column('id', Integer, primary_key=True, autoincrement=True),
            Column('timestamp', Integer),
        )
        tables.append(table_obj)
    metadata_obj.create_all(engine)
    return tables
def do_something(tables):
    # ...
    pass

def start_app():
    names = ["Table_1", "Table_2"]
    tables = tables_constructor(names)
    do_something(tables)
Now I think I can do things with the tables, as they are mapped to Table objects.
It would be nice to improve this by defining the table columns somewhere else; I tried, but it isn't working yet.
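On the "define the table columns somewhere else" point, a hedged sketch of one way to do it (the helper name default_columns and the explicit engine argument are my own, not from the original code): a Column object can only belong to a single Table, so the trick is a factory that returns fresh Column objects for every table:

from sqlalchemy import Column, Integer, MetaData, Table

def default_columns():
    # A Column instance cannot be shared between two Table objects,
    # so build fresh ones on every call.
    return [
        Column('id', Integer, primary_key=True, autoincrement=True),
        Column('timestamp', Integer),
    ]

def tables_constructor(names, engine):
    metadata_obj = MetaData()
    tables = [Table(name, metadata_obj, *default_columns()) for name in names]
    metadata_obj.create_all(engine)
    return tables

This keeps the column definitions in one place while still creating one Table per name.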
I have defined some classes inheriting from the declarative base, like this:
class SomeClass(Base):
    __tablename__ = 'some_table'
    id = Column("some_table_id", Integer, primary_key=True)
    data = Column(Text)
    some_other_data = "some_other_data"
When doing a report, I'd like to automatically grab those Column attributes while leaving non-Column attributes alone. Is there a way to test for this?
You can get all of the model class's columns using:
for c in SomeClass.__table__.columns:
    print(c)
Output:
some_table.some_table_id
some_table.data
This will not give you the other plain attributes of that class.
If you want to fetch the data for your table fields (Columns), use the above code in the following way:
result = session.query(*[c for c in SomeClass.__table__.c]).all()
print(result)
where session is the DB session.
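If you want the mapped attribute names rather than the raw table columns, a hedged alternative is the mapper inspection API; a minimal sketch, assuming the SomeClass model above:

from sqlalchemy import inspect

mapper = inspect(SomeClass)       # the Mapper for the class
for attr in mapper.column_attrs:  # only Column-mapped attributes
    print(attr.key)               # 'id', 'data'

Plain class attributes such as some_other_data are not included, and attr.key gives you the Python attribute name even when it differs from the database column name (here 'id' maps to the column 'some_table_id').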
I have defined a python class "Students", like this:
class Students(DeclarativeBase):
    __tablename__ = 'students'
    id_ = Column('id', Integer, primary_key=True)
    name = Column('nombre', Unicode(50))
    date_of_birth = Column(Date)
If I do select * from students, I can see all of these columns plus a few more, namely: _created and _updated.
I need to use the values stored in the columns _created and _updated. So I try to access them like this:
#get student with id = 1
>>> s = dbs.query(Students).get(1)
# print its name
>>> print(s.name)
Richard
# try to print when it was created
>>> print (s._created)
AttributeError: 'Students' object has no attribute '_created'
Of course I get that message because the attribute _created is not defined in the model.
How can I access the values stored in the students table even though they are not attributes of the class Students?
SQLAlchemy needs the definition of each column it will access. (There are ways to auto-discover by reflecting the database, but explicit is better than implicit.) Add the column definitions to the model. I'm assuming they're DateTimes. You can use default= and onupdate= to provide new values when a row is inserted or updated.
from datetime import datetime

class Students(DeclarativeBase):
    __tablename__ = 'students'
    id_ = Column('id', Integer, primary_key=True)
    # other columns...
    created = Column('_created', DateTime, nullable=False, default=datetime.utcnow)
    updated = Column('_updated', DateTime, onupdate=datetime.utcnow)
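Once those column definitions are added, the values behave like any other mapped attribute; a quick hedged usage sketch (dbs is the session variable from the question):

s = dbs.query(Students).get(1)
print(s.name)      # Richard
print(s.created)   # value of the _created column
print(s.updated)   # value of the _updated column, or None if the row was never updated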
Using the declarative approach, I want to exclude one property. It works properly when my column name and property name are the same, but if I give them different names it doesn't work.
Here is my sample code.
Base = declarative_base()

class tblUser(Base):
    __tablename__ = 'tblUser'
    User_Id = Column('User_Id', String(100), primary_key=True)
    SequenceNo = Column('Sequence_No', Integer)
    FullName = Column('FullName', String(50))
    __mapper_args__ = {'exclude_properties': ['Sequence_No']}

user = tblUser()
user.User_Id = '1000001'
user.SequenceNo = 101
session.add(user)
session.commit()
In the above sample I don't want the SequenceNo value to be written to the database even if I assign something to it, so I used exclude_properties, but it still updates the value in the DB. If I change the property name from SequenceNo to Sequence_No (the same as the column name), it works as expected. Can anyone help me?
Thanks
Adhi
Unfortunately, __mapper_args__ is probably the wrong approach. It is intended to control the reflection of an existing database table into a mapper, not make a column 'read-only'.
I think a better approach would be to use a hybrid property:
from sqlalchemy.ext.hybrid import hybrid_property

Base = declarative_base()

class tblUser(Base):
    __tablename__ = 'tblUser'
    User_Id = Column('User_Id', String(100), primary_key=True)
    FullName = Column('FullName', String(50))
    _Sequence_No = Column('Sequence_No', Integer)
    _local_Sequence_No = None

    @hybrid_property
    def SequenceNo(self):
        if self._local_Sequence_No is not None:
            return self._local_Sequence_No
        return self._Sequence_No

    @SequenceNo.setter
    def SequenceNo(self, value):
        self._local_Sequence_No = value
The original Sequence_No column is available via a private attribute, and the SequenceNo property intercepts writes and stores them on the instance to be re-used later, but not written to the database.
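A quick hedged usage sketch of this pattern (session setup omitted): assigning to SequenceNo only touches the instance-local value, so nothing is written to the Sequence_No column:

user = tblUser(User_Id='1000001', FullName='Some Name')
user.SequenceNo = 101     # stored in _local_Sequence_No only
session.add(user)
session.commit()          # INSERT writes User_Id and FullName; Sequence_No stays NULL
print(user.SequenceNo)    # 101, served from the instance-local value
print(user._Sequence_No)  # None; the mapped column was never set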
Hi, I have a simple question. I have 2 tables (addresses and users; a user has one address, and lots of users can live at the same address). I created an SQLAlchemy mapping like this:
class Person(object):
    '''
    classdocs
    '''
    idPerson = Column("idPerson", Integer, primary_key=True)
    name = Column("name", String)
    surname = Column("surname", String)
    idAddress = Column("idAddress", Integer, ForeignKey("pAddress.idAddress"))
    idState = Column("idState", Integer, ForeignKey("pState.idState"))
    Address = relationship(Address, primaryjoin=idAddress==Address.idAddress)

class Address(object):
    '''
    Class to represent table address object
    '''
    idAddress = Column("idAddress", Integer, primary_key=True)
    street = Column("street", String)
    number = Column("number", Integer)
    postcode = Column("postcode", Integer)
    country = Column("country", String)
    residents = relationship("Person", order_by="desc(Person.surname, Person.name)", primaryjoin="idAddress=Person.idPerson")
self.tablePerson = sqlalchemy.Table("pPerson", self.metadata, autoload=True)
sqlalchemy.orm.mapper(Person, self.tablePerson)
self.tableAddress = sqlalchemy.Table("pAddress", self.metadata, autoload=True)
sqlalchemy.orm.mapper(Address, self.tableAddress)
When I get my session and try to query something like:
myaddress = session.query(Address).get(1)
print myaddress.residents[1].name
I get: TypeError: 'RelationshipProperty' object does not support indexing
I understand residents is there to define the relationship, but how the heck can I get the list of residents assigned to the given address?!
Thanks
You define the relationship in the wrong place. I think you are mixing the declarative extension with non-declarative use:
when using declarative, you define your relationships in your model;
otherwise, you define them when mapping the model to a table.
If option 2 is what you are doing, then you need to remove both relationship definitions from the models and add one to the mapper (only one side is enough):
mapper(Address, tableAddress, properties={
    'residents': relationship(Person,
                              order_by=(desc(Person.name), desc(Person.surname)),
                              backref="Address"),
})
A few more things about the code above:
The relationship is defined on one side only; the backref takes care of the other side.
You do not need to specify the primaryjoin (as long as you have a ForeignKey specified and SA is able to infer the columns).
Your order_by configuration is not correct; see the code above for a version that works.
You might try defining Person after Address, with a backref to Address; this will create the residents collection:
class Address(Base):
    __tablename__ = 'address_table'
    idAddress = Column("idAddress", Integer, primary_key=True)

class Person(Base):
    idPerson = Column("idPerson", Integer, primary_key=True)
    ...
    address_id = Column(Integer, ForeignKey('address_table.idAddress'))
    address = relationship(Address, backref='residents')
Then you can query:
myaddress = session.query(Address).get(1)
for resident in myaddress.residents:
    print resident.name
Further, if you have a lot of residents at an address, you can filter further using a join:
resultset = session.query(Address).join(Address.residents).filter(Person.name == 'Joe')
# or
resultset = session.query(Person).filter(Person.name == 'Joe').join(Person.address).filter(Address.state == 'NY')
Then use resultset.first(), resultset[0], resultset.all(), etc. to fetch the results.
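To tie the pieces together, a minimal self-contained declarative sketch of this Address/Person setup (the table names, the SQLite engine and the sample data are my own assumptions, not from the original mapping):

from sqlalchemy import Column, ForeignKey, Integer, String, create_engine, desc
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class Address(Base):
    __tablename__ = 'pAddress'
    idAddress = Column(Integer, primary_key=True)
    street = Column(String)

class Person(Base):
    __tablename__ = 'pPerson'
    idPerson = Column(Integer, primary_key=True)
    name = Column(String)
    surname = Column(String)
    idAddress = Column(Integer, ForeignKey('pAddress.idAddress'))
    address = relationship(Address, backref='residents')  # backref creates Address.residents

engine = create_engine('sqlite://')  # throwaway in-memory DB for the example
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

home = Address(street='Main St')
session.add_all([home, Person(name='Joe', surname='Smith', address=home)])
session.commit()

myaddress = session.query(Address).get(1)
print([p.name for p in myaddress.residents])  # ['Joe']

joes = (session.query(Person)
        .join(Person.address)
        .filter(Person.name == 'Joe')
        .order_by(desc(Person.surname), desc(Person.name))
        .all())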