Maybe my previous question was too long and rambling to answer, sorry for that... I will try to be more specific and shorten it here.
From an API query (JSON output) I can extract the following information:
GENE1
Experiment1
Experiment2
Experiment3
Experiment4
GENE2
Experiment5
Experiment2
Experiment3
Experiment8
Experiment9
[...]
So I obtain genes and the experiments in which they have been studied. One gene can have more than one experiment, and one experiment can have more than one gene (many-to-many).
I have this schema in SQLAlchemy:
from sqlalchemy import create_engine, Column, Integer, String, Date, ForeignKey, Table, Float
from sqlalchemy.orm import sessionmaker, relationship, backref
from sqlalchemy.ext.declarative import declarative_base
import requests
Base = declarative_base()
Genes2experiments = Table('genes2experiments', Base.metadata,
    Column('gene_id', String, ForeignKey('genes.id')),
    Column('experiment_id', String, ForeignKey('experiments.id'))
)
class Genes(Base):
    __tablename__ = 'genes'
    id = Column(String(45), primary_key=True)
    experiments = relationship("Experiments", secondary=Genes2experiments, backref="genes")
    def __init__(self, id=""):
        self.id = id
    def __repr__(self):
        return "<genes(id:'%s')>" % (self.id)
class Experiments(Base):
    __tablename__ = 'experiments'
    id = Column(String(45), primary_key=True)
    def __init__(self, id=""):
        self.id = id
    def __repr__(self):
        return "<experiments(id:'%s')>" % (self.id)
def setUp():
    global Session
    engine = create_engine('mysql://root:password@localhost/db_name?charset=utf8', pool_recycle=3600, echo=False)
    Session = sessionmaker(bind=engine)
def add_data():
    session = Session()
    for i in range(0, 1000, 200):
        request = requests.get('http://www.ebi.ac.uk/gxa/api/v1',
                               params={"updownInOrganism_part": "brain", "rows": 200, "start": i})
        result = request.json()
        for item in result['results']:
            gene_to_add = item['gene']['ensemblGeneId']
            # merge avoids duplicate-key errors when the same gene appears in several pages
            session.merge(Genes(id=gene_to_add))
        session.commit()
    session.close()
setUp()
add_data()
With this code I just add all the genes from the API query to the Genes table in my database...
1st question: how and when should I add the experiment information so that the gene-experiment relationship is kept?
2nd question: should I add another secondary relationship to the Experiments class, as in the Genes class, or is declaring just one enough?
Thank you
(for more context/info: my previous question)
Whenever you record the results of an experiment, or even when you plan one, you can already add the instances and their relationships to the database.
Having backref effectively adds the other side of the relationship, so given an instance of Experiments you can get its list of Genes via my_experiment.genes.
Note: I would remove plural from the names of your entities: class Gene, class Experiment instead of class Genes, class Experiments.
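For example, here is a minimal sketch of how add_data could create the links as it goes (the 'experiments' key used on each JSON item is an assumption -- substitute whatever field of the Atlas response actually lists the experiment accessions):
def add_data():
    session = Session()
    for i in range(0, 1000, 200):
        request = requests.get('http://www.ebi.ac.uk/gxa/api/v1',
                               params={"updownInOrganism_part": "brain", "rows": 200, "start": i})
        result = request.json()
        for item in result['results']:
            # merge() returns the persistent instance, creating it if it does not exist yet
            gene = session.merge(Genes(id=item['gene']['ensemblGeneId']))
            # 'experiments' is a hypothetical key -- adapt it to the real JSON structure
            for exp_id in item.get('experiments', []):
                experiment = session.merge(Experiments(id=exp_id))
                if experiment not in gene.experiments:
                    gene.experiments.append(experiment)
        session.commit()
    session.close()
As for the 2nd question: declaring the relationship once on Genes with the backref is enough; the genes collection on Experiments is created for you, e.g. session.query(Experiments).first().genes.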
Related
We have one table with a large amount of data, and the DBAs partitioned it based on a particular parameter. This means I ended up with table names like Employee_TX, Employee_NY. Earlier, models.py was simple, as in --
class Employee(Base):
    __tablename__ = 'Employee'
    name = Column...
    state = Column...
Now I don't want to create 50 new classes for the newly partitioned tables, as my columns are the same anyway.
Is there a pattern where I can create a single class and then use it in query dynamically? session.query(<Tablename>).filter().all()
Maybe some kind of Factory pattern or something is what I'm looking for.
So far I've tried running a loop, as in:
for state in ['CA', 'TX', 'NY']:
    class Employee(Base):
        __qualname__ = __tablename__ = 'Employee_{}'.format(state)
        name = Column...
        state = Column...
but this doesn't work, and I get a warning: SAWarning: This declarative base already contains a class with the same class name and module name as app_models.employee, and will be replaced in the string-lookup table.
Also, it can't find the generated class when I do from app_models import Employee_TX.
This is a Flask app with PostgreSQL as the backend and SQLAlchemy as the ORM.
Got it by creating a custom function like this:
def get_model(state):
    DynamicBase = declarative_base(class_registry=dict())
    class MyModel(DynamicBase):
        __tablename__ = 'Employee_{}'.format(state)
        name = Column...
        state = Column...
    return MyModel
And then from my services.py I just call it with get_model("TX").
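A brief usage sketch (it assumes a session already configured against the Postgres engine): the returned class behaves like any other declarative model, so it can be queried directly.
EmployeeTX = get_model('TX')
rows = session.query(EmployeeTX).filter(EmployeeTX.state == 'TX').all()
Note that every call builds a fresh class with its own registry, so it can be worth caching the result per state in a dict instead of calling get_model repeatedly.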
Whenever you think of dynamically constructing classes, think of type() with 3 arguments (see this answer for a demonstration, and the docs more generally).
In your case, it's just a matter of constructing the classes and keeping a reference to them so you can access them again later.
Here's an example:
from sqlalchemy import Column, Integer, String
from sqlalchemy.engine import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

# this produces the set of common attributes that each class should have
def attribute_factory():
    return dict(
        id=Column(Integer, primary_key=True),
        name=Column(String, nullable=False),
        state=Column(String, nullable=False),
        CLASS_VAR=12345678,
    )

states = ["CA", "TX", "NY"]

# here we map the state abbreviation to the generated model, notice the templated
# class and table names
model_map = {
    state: type(
        f"Employee_{state}",
        (Base,),
        dict(**attribute_factory(), __tablename__=f"Employee_{state}"),
    )
    for state in states
}

engine = create_engine("sqlite:///", echo=True)
Session = sessionmaker(bind=engine)
Base.metadata.create_all(engine)

if __name__ == "__main__":
    # inserts work
    s = Session()
    for state, model in model_map.items():
        s.add(model(name="something", state=state))
    s.commit()
    s.close()

    # queries work
    s = Session()
    for state, model in model_map.items():
        inst = s.query(model).first()
        print(inst.state, inst.CLASS_VAR)
Is there a way to generate a db model dynamically, based on the columns in the database table, with Flask-SQLAlchemy?
I have an application that displays data from a database table, but the column names change at times and break my app. I would like a way to generate the data model dynamically based on the actual column names in the database.
I currently explicitly declare all the columns as below.
class MyDbModel(db.Model):
    __tablename__ = 'my_table'
    id = db.Column('id', db.NVARCHAR(length=300), primary_key=True)
    name = db.Column('name', db.NVARCHAR(length=300), nullable=True)
I'm not very familiar with SQLAlchemy. I have tried the following, but am getting the error below, and I'm not sure if this is the right way to do it:
could not assemble any primary key columns for mapped table
from sqlalchemy import Table, Column
from sqlalchemy.orm import mapper

db = SQLAlchemy()

class MyDbModel(db.Model):
    __tablename__ = 'my_table'
    def __init__(self):
        # helper function to get column headers of a db table
        table_cols = get_headers_or_columns('my_table')
        t = Table(
            'my_table', db.metadata,
            Column('id', db.NVARCHAR(length=300), primary_key=True),
            *(Column(table_col, db.NVARCHAR(length=300)) for table_col in table_cols)
        )
        mapper(self, t)
Any help is appreciated!
I figured this out, in case anyone needs it. It's really simple: use a helper function to get the column headers of your table, then use setattr to set the attributes of the class.
from app import db

class MyDbModel(db.Model):
    pass

def map_model_attrs(model, table):
    """
    :param model: your db Model class
    :param table: your db table name
    """
    table_cols = get_headers_or_columns(table)
    for col in table_cols:
        setattr(
            model, col,
            db.Column(col, db.NVARCHAR(length=300), nullable=True)
        )

map_model_attrs(MyDbModel, 'my_table')
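The get_headers_or_columns helper isn't shown in either snippet; a minimal sketch of what it could look like, assuming the Flask-SQLAlchemy db object (and therefore db.engine) is importable:
from sqlalchemy import inspect
from app import db

def get_headers_or_columns(table_name):
    """Reflect the table and return its column names."""
    inspector = inspect(db.engine)
    return [col['name'] for col in inspector.get_columns(table_name)]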
I have many (~2000) locations with time series data. Each time series has millions of rows. I would like to store these in a Postgres database. My current approach is to have a table for each location time series, and a meta table which stores information about each location (coordinates, elevation etc). I am using Python/SQLAlchemy to create and populate the tables. I would like to have a relationship between the meta table and each time series table to do queries like "select all locations that have data between date A and date B" and "select all data for date A and export a csv with coordinates". What is the best way to create many tables with the same structure (only the name is different) and have a relationship with a meta table? Or should I use a different database design?
Currently I am using this type of approach to generate a lot of similar mappings:
from sqlalchemy import create_engine, MetaData
from sqlalchemy.types import Float, String, DateTime, Integer
from sqlalchemy import Column, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker, relationship, backref
Base = declarative_base()
def make_timeseries(name):
    class TimeSeries(Base):
        __tablename__ = name
        table_name = Column(String(50), ForeignKey('locations.table_name'))
        datetime = Column(DateTime, primary_key=True)
        value = Column(Float)

        location = relationship('Location', backref=backref('timeseries',
                                                             lazy='dynamic'))

        def __init__(self, table_name, datetime, value):
            self.table_name = table_name
            self.datetime = datetime
            self.value = value

        def __repr__(self):
            return "{}: {}".format(self.datetime, self.value)

    return TimeSeries

class Location(Base):
    __tablename__ = 'locations'
    id = Column(Integer, primary_key=True)
    table_name = Column(String(50), unique=True)
    lon = Column(Float)
    lat = Column(Float)

if __name__ == '__main__':
    connection_string = 'postgresql://user:pw@localhost/location_test'
    engine = create_engine(connection_string)
    metadata = MetaData(bind=engine)
    Session = sessionmaker(bind=engine)
    session = Session()

    TS1 = make_timeseries('ts1')
    # TS2 = make_timeseries('ts2')  # this breaks because of the foreign key
    Base.metadata.create_all(engine)
    session.add(TS1("ts1", "2001-01-01", 999))
    session.add(TS1("ts1", "2001-01-02", -555))

    qs = session.query(Location).first()
    print(qs.timeseries.all())
This approach has some problems, most notably that if I create more than one TimeSeries the foreign key doesn't work. Previously I've used some workarounds, but it all seems like a big hack and I feel that there must be a better way of doing this. How should I organise and access my data?
Alternative-1: Table Partitioning
Partitioning immediately comes to mind as soon as I read "exactly the same table structure". I am not a DBA and do not have much production experience with it (even more so on PostgreSQL), but
please read the PostgreSQL - Partitioning documentation. Table partitioning seeks to solve exactly the problem you have, but over 1K tables/partitions sounds challenging; therefore please do more research on forums/SO for scalability-related questions on this topic.
Given that the datetime component is central to both of your most-used search criteria, there must be a solid indexing strategy on it. If you decide to go down the partitioning route, the obvious partitioning strategy would be based on date ranges, as sketched below. This might allow you to partition older data in different chunks than the most recent data, especially assuming that old data is (almost) never updated, so the physical layouts would be dense and efficient, while you could employ another strategy for the more "recent" data.
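For illustration only (a sketch assuming a modern PostgreSQL with declarative partitioning and a hypothetical single measurements table, rather than the per-location tables above), a date-range partitioned layout could be created with plain DDL issued through SQLAlchemy:
from sqlalchemy import create_engine, text

engine = create_engine('postgresql://user:pw@localhost/location_test')

ddl_statements = [
    """
    CREATE TABLE measurements (
        location_id integer NOT NULL,
        datetime    timestamp NOT NULL,
        value       double precision,
        PRIMARY KEY (location_id, datetime)
    ) PARTITION BY RANGE (datetime)
    """,
    # one partition per year; add more as data grows
    "CREATE TABLE measurements_2001 PARTITION OF measurements FOR VALUES FROM ('2001-01-01') TO ('2002-01-01')",
    "CREATE TABLE measurements_2002 PARTITION OF measurements FOR VALUES FROM ('2002-01-01') TO ('2003-01-01')",
]

with engine.begin() as conn:
    for stmt in ddl_statements:
        conn.execute(text(stmt))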
Alternative-2: trick SQLAlchemy
This basically makes your sample code work by tricking SA into assuming that all those TimeSeries are children of one entity, using Concrete Table Inheritance. The code below is self-contained and creates 50 tables with minimal data in them. But if you have a database already, it should allow you to check the performance rather quickly, so that you can decide if it is even a close possibility.
from datetime import date, datetime

from sqlalchemy import create_engine, Column, String, Integer, DateTime, Float, ForeignKey, func
from sqlalchemy.orm import sessionmaker, relationship, configure_mappers, joinedload
from sqlalchemy.ext.declarative import declarative_base, declared_attr
from sqlalchemy.ext.declarative import AbstractConcreteBase, ConcreteBase

engine = create_engine('sqlite:///:memory:', echo=True)
Session = sessionmaker(bind=engine)
session = Session()
Base = declarative_base(engine)

# MODEL
class Location(Base):
    __tablename__ = 'locations'
    id = Column(Integer, primary_key=True)
    table_name = Column(String(50), unique=True)
    lon = Column(Float)
    lat = Column(Float)

class TSBase(AbstractConcreteBase, Base):
    @declared_attr
    def table_name(cls):
        return Column(String(50), ForeignKey('locations.table_name'))

def make_timeseries(name):
    class TimeSeries(TSBase):
        __tablename__ = name
        __mapper_args__ = {'polymorphic_identity': name, 'concrete': True}
        datetime = Column(DateTime, primary_key=True)
        value = Column(Float)

        def __init__(self, datetime, value, table_name=name):
            self.table_name = table_name
            self.datetime = datetime
            self.value = value

    return TimeSeries

def _test_model():
    _NUM = 50

    # 0. generate classes for all tables
    TS_list = [make_timeseries('ts{}'.format(1 + i)) for i in range(_NUM)]
    TS1, TS2, TS3 = TS_list[:3]  # just to have some named ones
    Base.metadata.create_all()
    print('-' * 80)

    # 1. configure mappers
    configure_mappers()

    # 2. define relationship
    Location.timeseries = relationship(TSBase, lazy="dynamic")
    print('-' * 80)

    # 3. add some test data
    session.add_all([Location(table_name='ts{}'.format(1 + i), lat=5 + i, lon=1 + i * 2)
                     for i in range(_NUM)])
    session.commit()
    print('-' * 80)

    session.add(TS1(datetime(2001, 1, 1, 3), 999))
    session.add(TS1(datetime(2001, 1, 2, 2), 1))
    session.add(TS2(datetime(2001, 1, 2, 8), 33))
    session.add(TS2(datetime(2002, 1, 2, 18, 50), -555))
    session.add(TS3(datetime(2005, 1, 3, 3, 33), 8))
    session.commit()

    # Query-1: get all timeseries of one Location
    # qs = session.query(Location).first()
    qs = session.query(Location).filter(Location.table_name == "ts1").first()
    print(qs)
    print(qs.timeseries.all())
    assert 2 == len(qs.timeseries.all())
    print('-' * 80)

    # Query-2: select all locations with data between date-A and date-B
    dateA, dateB = date(2001, 1, 1), date(2003, 12, 31)
    qs = (session.query(Location)
          .join(TSBase, Location.timeseries)
          .filter(TSBase.datetime >= dateA)
          .filter(TSBase.datetime <= dateB)
          ).all()
    print(qs)
    assert 2 == len(qs)
    print('-' * 80)

    # Query-3: select all data (including coordinates) for date A
    dateA = date(2001, 1, 1)
    qs = (session.query(Location.lat, Location.lon, TSBase.datetime, TSBase.value)
          .join(TSBase, Location.timeseries)
          .filter(func.date(TSBase.datetime) == dateA)
          ).all()
    print(qs)
    # @note: qs is a list of tuples; easy export to CSV
    assert 1 == len(qs)
    print('-' * 80)

if __name__ == '__main__':
    _test_model()
Alternative-3: a-la BigData
If you do get into performance problems using the database, I would probably try:
still keep the data in separate tables/databases/schemas like you do right now
bulk-import data using "native" solutions provided by your database engine
use MapReduce-like analysis.
Here I would stay with Python and SQLAlchemy and implement my own distributed query and aggregation (or find something existing). This obviously only works if you are not required to produce those results directly on the database.
edit-1: Alternative-4: TimeSeries databases
I have no experience using those on a large scale, but they are definitely an option worth considering.
It would be fantastic if you could later share your findings and the whole decision-making process on this.
I would avoid the database design you mention above. I don't know enough about the data you are working with, but it sounds like you should have two tables. One table for location, and a child table for location_data. The location table would store the data you mention above such as coordinates and elevations. The location_data table would store the location_id from the location table as well as the time series data you want to track.
This would eliminate changing the db structure and code every time you add another location, and would allow the types of queries you are looking to do.
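A minimal sketch of that two-table layout (the column names are assumptions, not from the original post):
from sqlalchemy import Column, Integer, Float, DateTime, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

class Location(Base):
    __tablename__ = 'location'
    id = Column(Integer, primary_key=True)
    lon = Column(Float)
    lat = Column(Float)
    elevation = Column(Float)
    data = relationship('LocationData', backref='location', lazy='dynamic')

class LocationData(Base):
    __tablename__ = 'location_data'
    id = Column(Integer, primary_key=True)
    location_id = Column(Integer, ForeignKey('location.id'), nullable=False, index=True)
    datetime = Column(DateTime, nullable=False, index=True)
    value = Column(Float)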
Two parts:
only use two tables
There's no need to have dozens or hundreds of identical tables. Just have one table for location and one for location_data, where every entry has a foreign key onto location. Also create an index on location_data.location_id, so you have efficient searching.
don't use sqlalchemy to create this
I love SQLAlchemy and use it every day. It's great for managing your database and adding some rows, but you don't want to use it for an initial load of millions of rows. You want to generate a file that is compatible with Postgres' "COPY" statement [ http://www.postgresql.org/docs/9.2/static/sql-copy.html ]. COPY will let you pull in a ton of data fast; it's what is used during dump/restore operations.
SQLAlchemy will be great for querying this and adding rows as they come in. For bulk operations, you should use COPY.
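For example, a rough sketch of such a bulk load through psycopg2's copy_expert on the engine's raw connection (the table and CSV column layout here are assumptions):
from sqlalchemy import create_engine

engine = create_engine('postgresql://user:pw@localhost/location_test')

def bulk_load_csv(csv_path, table='location_data'):
    """Stream a CSV file (location_id, datetime, value) into Postgres via COPY."""
    raw = engine.raw_connection()  # the underlying psycopg2 connection
    try:
        with raw.cursor() as cur, open(csv_path) as f:
            cur.copy_expert(
                "COPY {} (location_id, datetime, value) FROM STDIN WITH CSV HEADER".format(table),
                f,
            )
        raw.commit()
    finally:
        raw.close()

bulk_load_csv('ts1.csv')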
In order to handle a growing database table, we are sharding on table name. So we could have database tables that are named like this:
table_md5one
table_md5two
table_md5three
All tables have the exact same schema.
How do we use SQLAlchemy and dynamically specify the table name for the class that corresponds to this? It looks like declarative_base() classes need to have __tablename__ pre-specified.
There will eventually be too many tables to manually specify derived classes from a parent/base class. We want to be able to build a class that can have the tablename set up dynamically (maybe passed as a parameter to a function.)
OK, we went with classical SQLAlchemy mapping rather than the declarative one.
So we create a dynamic table object like this:
from sqlalchemy import MetaData, Table, Column, DATE
from sqlalchemy.orm import mapper, clear_mappers

def get_table_object(self, md5hash):
    metadata = MetaData()
    table_name = 'table_' + md5hash
    table_object = Table(table_name, metadata,
        Column('Column1', DATE, nullable=False),
        Column('Column2', DATE, nullable=False)
    )
    clear_mappers()
    mapper(ActualTableObject, table_object)
    return ActualTableObject
Where ActualTableObject is the class mapping to the table.
In Augmenting the Base you find a way of using a custom Base class that can, for example, calculate the __tablename__ attribute dynamically:
from sqlalchemy.ext.declarative import declared_attr

class Base(object):
    @declared_attr
    def __tablename__(cls):
        return cls.__name__.lower()
The only problem here is that I don't know where your hash comes from, but this should give a good starting point.
If you require this algorithm not for all your tables but only for one you could just use the declared_attr on the table you are interested in sharding.
Because I insist on using declarative classes with their __tablename__ specified dynamically by a given parameter, after days of failing with other solutions and hours of studying SQLAlchemy internals, I came up with the following solution, which I believe is simple, elegant and race-condition free.
def get_model(suffix):
    DynamicBase = declarative_base(class_registry=dict())

    class MyModel(DynamicBase):
        __tablename__ = 'table_{suffix}'.format(suffix=suffix)

        id = Column(Integer, primary_key=True)
        name = Column(String)
        ...

    return MyModel
Since they have their own class_registry, you will not get that warning saying:
This declarative base already contains a class with the same class name and module name as mypackage.models.MyModel, and will be replaced in the string-lookup table.
Hence, you will not be able to reference them from other models with string lookup. However, it works perfectly fine to use these on-the-fly declared models for foreign keys as well:
ParentModel1 = get_model(123)
ParentModel2 = get_model(456)

class MyChildModel(BaseModel):
    __tablename__ = 'table_child'

    id = Column(Integer, primary_key=True)
    name = Column(String)
    parent_1_id = Column(Integer, ForeignKey(ParentModel1.id))
    parent_2_id = Column(Integer, ForeignKey(ParentModel2.id))
    parent_1 = relationship(ParentModel1)
    parent_2 = relationship(ParentModel2)
If you only use them to query/insert/update/delete, with no reference left over (such as a foreign key reference from another table), then they, their base classes and their class_registry will be garbage collected, so no trace is left.
You can write a function that takes the table name as a parameter and returns a class with the appropriate attributes set:
def get_class(table_name):
    class GenericTable(Base):
        __tablename__ = table_name
        ID = Column(Integer, primary_key=True)

        def function(self):
            ...

    return GenericTable
Then you can create a table using:
get_class("test").__table__.create(bind=engine)  # see sqlalchemy.engine
Try this
import zlib
from datetime import datetime

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, BigInteger, DateTime, String

BASE = declarative_base()
ENTITY_CLASS_DICT = {}

class AbsShardingClass(BASE):
    __abstract__ = True

def get_class_name_and_table_name(hashid):
    return 'ShardingClass%s' % hashid, 'sharding_class_%s' % hashid

def get_sharding_entity_class(hashid):
    """
    @param hashid: hashid
    @type hashid: int
    @rtype AbsClientUserAuth
    """
    if hashid not in ENTITY_CLASS_DICT:
        class_name, table_name = get_class_name_and_table_name(hashid)
        cls = type(class_name, (AbsShardingClass,),
                   {'__tablename__': table_name})
        ENTITY_CLASS_DICT[hashid] = cls
    return ENTITY_CLASS_DICT[hashid]

cls = get_sharding_entity_class(1)
print(session.query(cls).get(100))  # assumes a configured session is available
Instead of imperatively creating a Table object, you can use the usual declarative_base and make a closure to set the table name, as follows:
def make_class(Base, table_name):
    class User(Base):
        __tablename__ = table_name
        id = Column(Integer, primary_key=True)
        name = Column(String)

    return User

Base = declarative_base()
engine = make_engine()          # your helper that returns an Engine
custom_named_usertable = make_class(Base, 'custom_name')
Base.metadata.create_all(engine)
session = make_session(engine)  # your helper that returns a Session
new_user = custom_named_usertable(name='Adam')
session.add(new_user)
session.commit()
session.close()
engine.dispose()
You just need to create a class object for Base:
from sqlalchemy.ext.declarative import declarative_base, declared_attr

class Base(object):
    @declared_attr
    def __tablename__(cls):
        return cls.__name__.lower()

Base = declarative_base(cls=Base)
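A quick usage sketch: any model inheriting from this augmented Base gets its table name derived from the lowercased class name automatically (the Employee class here is just an illustration).
from sqlalchemy import Column, Integer, String

class Employee(Base):
    id = Column(Integer, primary_key=True)
    name = Column(String)

print(Employee.__tablename__)  # -> "employee"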
I am a relative newcomer to SQLAlchemy and have read the basic docs. I'm currently following Mike Driscoll's MediaLocker tutorial and modifying/extending it for my own purpose.
I have three tables (loans, people, cards). Card to Loan and Person to Loan are both one-to-many relationships and modelled as such:
from sqlalchemy import Table, Column, DateTime, Integer, ForeignKey, Unicode, Boolean
from sqlalchemy.orm import backref, relation
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base

engine = create_engine("sqlite:///cardsys.db", echo=True)
DeclarativeBase = declarative_base(engine)
metadata = DeclarativeBase.metadata

class Loan(DeclarativeBase):
    """
    Loan model
    """
    __tablename__ = "loans"

    id = Column(Integer, primary_key=True)
    card_id = Column(Unicode, ForeignKey("cards.id"))
    person_id = Column(Unicode, ForeignKey("people.id"))
    date_issued = Column(DateTime)
    date_due = Column(DateTime)
    date_returned = Column(DateTime)
    issue_reason = Column(Unicode(50))
    person = relation("Person", backref="loans", cascade_backrefs=False)
    card = relation("Card", backref="loans", cascade_backrefs=False)

class Card(DeclarativeBase):
    """
    Card model
    """
    __tablename__ = "cards"

    id = Column(Unicode(50), primary_key=True)
    active = Column(Boolean)

class Person(DeclarativeBase):
    """
    Person model
    """
    __tablename__ = "people"

    id = Column(Unicode(50), primary_key=True)
    fname = Column(Unicode(50))
    sname = Column(Unicode(50))
When I try to create a new loan (using the method below in my controller) it works fine for unique cards and people, but once I try to add a second loan for a particular person or card it gives me a "non-unique" error. Obviously it's not unique, that's the point, but I thought SQLAlchemy would take care of the behind-the-scenes stuff for me and add the correct existing person or card id as the FK in the new loan, rather than trying to create new person and card records. Is it up to me to query the db to check PK uniqueness and handle this manually? I got the impression this was something SQLAlchemy might be able to handle automatically.
def addLoan(session, data):
    loan = Loan()
    loan.date_due = data["loan"]["date_due"]
    loan.date_issued = data["loan"]["date_issued"]
    loan.issue_reason = data["loan"]["issue_reason"]

    person = Person()
    person.id = data["person"]["id"]
    person.fname = data["person"]["fname"]
    person.sname = data["person"]["sname"]
    loan.person = person

    card = Card()
    card.id = data["card"]["id"]
    loan.card = card

    session.add(loan)
    session.commit()
In the MediaLocker example, new rows are created with an auto-increment PK (even for duplicates, not conforming to normalisation rules). I want to have a normalised database (even in a small project, just for best practice while learning) but can't find any examples online to study.
How can I achieve the above?
It's up to you to retrieve and assign the existing Person or Card object to the relationship before attempting to add a new one with a duplicate primary key. You can do this with a couple of small changes to your code.
def addLoan(session, data):
    loan = Loan()
    loan.date_due = data["loan"]["date_due"]
    loan.date_issued = data["loan"]["date_issued"]
    loan.issue_reason = data["loan"]["issue_reason"]

    person = session.query(Person).get(data["person"]["id"])
    if not person:
        person = Person()
        person.id = data["person"]["id"]
        person.fname = data["person"]["fname"]
        person.sname = data["person"]["sname"]
    loan.person = person

    card = session.query(Card).get(data["card"]["id"])
    if not card:
        card = Card()
        card.id = data["card"]["id"]
    loan.card = card

    session.add(loan)
    session.commit()
There are also some solutions for get_or_create functions if you want to wrap it into one step; a sketch is included below.
If you're loading large numbers of records into a new database from scratch, and your query is more complex than a get (the session object caches get lookups on its own), you could avoid the queries altogether, at the cost of memory, by adding each new Person and Card object to a temporary dict keyed by ID and looking up existing objects there instead of hitting the database.
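For reference, a minimal get_or_create sketch along those lines (the helper name and signature are illustrative, not part of the tutorial):
def get_or_create(session, model, defaults=None, **kwargs):
    """Return the existing instance matching kwargs, or create, add and return a new one."""
    instance = session.query(model).filter_by(**kwargs).first()
    if instance:
        return instance
    instance = model(**dict(kwargs, **(defaults or {})))
    session.add(instance)
    return instance

# usage inside addLoan:
# person = get_or_create(session, Person, id=data["person"]["id"],
#                        defaults={"fname": data["person"]["fname"],
#                                  "sname": data["person"]["sname"]})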