SqlAlchemy "contains_eager" does not appear to load nested relationships - python

I have a schema as follows:
Thing  # Base class for below tables
    - id
Ball (Thing)
    - color
Bin (Thing)
    - ball -> Ball.id
Court (Thing)
    - homeBin -> Bin.id
    - awayBin -> Bin.id
I'd like to ensure that whenever I load a set of Courts, it includes the latest Ball column values. From what I understand, contains_eager() might be able to help with that:
Indicate that the given attribute should be eagerly loaded from columns stated manually in the query.
I have a test that queries every few seconds for any Courts. I'm finding that, even with contains_eager, I only ever see the same value for Ball.color, even though I've explicitly updated the column's value in the database.
Why does SQLAlchemy appear to reuse this old data?
Below is a self-contained example that reproduces the behavior:
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Thing(Base):
    __tablename__ = "Things"
    id = Column(Integer, primary_key=True)
    name = Column(String(256))
    thingType = Column(String(256))

    __mapper_args__ = {
        'with_polymorphic': '*',
        'polymorphic_on': "thingType",
        'polymorphic_identity': "thing"
    }

class Ball(Thing):
    __tablename__ = "Balls"
    id = Column('id', Integer, ForeignKey('Things.id'), primary_key=True)
    color = Column('color', String(256))

    __mapper_args__ = {
        'polymorphic_identity': 'ball'
    }

class Bin(Thing):
    __tablename__ = "Bins"
    id = Column('id', Integer, ForeignKey('Things.id'), primary_key=True)
    shape = Column('shape', String(256))
    ballId = Column('ballId', Integer, ForeignKey('Balls.id'))
    ball = relationship(Ball, foreign_keys=[ballId], backref="outputBins")

    __mapper_args__ = {
        'polymorphic_identity': 'bin'
    }

class Court(Thing):
    __tablename__ = "Courts"
    id = Column('id', Integer, ForeignKey('Things.id'), primary_key=True)
    homeBinId = Column('homeBinId', Integer, ForeignKey('Bins.id'))
    awayBinId = Column('awayBinId', Integer, ForeignKey('Bins.id'))
    homeBin = relationship(Bin, foreign_keys=[homeBinId], backref="homeCourts")
    awayBin = relationship(Bin, foreign_keys=[awayBinId], backref="awayCourts")

    __mapper_args__ = {
        'polymorphic_identity': 'court'
    }

metadata = MetaData()
engine = create_engine("postgresql://localhost:5432/")
Session = sessionmaker(bind=engine)
session = Session()

def courtQuery():
    awayBalls = aliased(Ball, name="awayBalls")
    homeBalls = aliased(Ball, name="homeBalls")
    awayBins = aliased(Bin, name="awayBins")
    homeBins = aliased(Bin, name="homeBins")

    query = session.query(Court)\
        .outerjoin(awayBins, Court.awayBinId == awayBins.id)\
        .outerjoin(awayBalls, awayBins.ballId == awayBalls.id)\
        .outerjoin(homeBins, Court.homeBinId == homeBins.id)\
        .outerjoin(homeBalls, homeBins.ballId == homeBalls.id)\
        .options(contains_eager(Court.awayBin, alias=awayBins).contains_eager(awayBins.ball, alias=awayBalls))\
        .options(contains_eager(Court.homeBin, alias=homeBins).contains_eager(homeBins.ball, alias=homeBalls))
    return [r for r in query]

import time

while True:
    results = courtQuery()
    court = results[0]
    ball = court.homeBin.ball
    print(ball.color)  # does not change
    time.sleep(2)
Environment:
Python 2.7.14
SQLAlchemy 1.3.0b1
PostgreSQL 11.3 (though I've seen this on Oracle as well)
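A hedged aside (not part of the original post): a likely factor is that with a single long-lived Session, the identity map hands back the same Court/Bin/Ball instances on every call to courtQuery(), and attributes that are already loaded are not overwritten by the new SELECT unless the objects are expired first. Below is a minimal sketch of the polling loop with the session expired between iterations, assuming the models and courtQuery() defined above:

import time

while True:
    results = courtQuery()
    court = results[0]
    print(court.homeBin.ball.color)

    # Expire every instance held by the Session so the next courtQuery()
    # repopulates column values from the database instead of keeping the
    # attribute values cached in the identity map. session.commit() (which
    # expires by default) or a brand-new Session per iteration behaves
    # similarly.
    session.expire_all()
    time.sleep(2)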

Related

SQLAlchemy Complex Polymorphic Mapping And Deep Table Inheritance

I have the following table structure (I have simplified it as much as possible: I've narrowed it down to a few of the child/inheriting tables [there are more] and removed all irrelevant columns):
## Base is my declarative_base

class AbstractQuestion(Base):
    questionTypeId: Column = Column(
        Integer, ForeignKey("luQuestionTypes.id"), index=True, nullable=False
    )

    __mapper_args__ = {
        "polymorphic_identity": 0,
        "polymorphic_on": questionTypeId,
    }

class MultiChoiceQuestion(AbstractQuestion):
    id: Column = Column(Integer, ForeignKey(AbstractQuestion.id), primary_key=True)

    __mapper_args__ = {"polymorphic_identity": 1}

class AbstractSurveyQuestion(AbstractQuestion):
    id: Column = Column(Integer, ForeignKey(AbstractQuestion.id), primary_key=True)
    surveyQuestionTypeId: Column = Column(
        Integer, ForeignKey("luSurveyQuestionTypes.id"), index=True, nullable=False
    )

    __mapper_args__ = {"polymorphic_identity": 2}

class RatingQuestion(AbstractSurveyQuestion):
    id: Column = Column(
        Integer, ForeignKey(AbstractSurveyQuestion.id), primary_key=True
    )
The challenge I'm facing is that I'm trying to make AbstractSurveyQuestion have two kinds of polymorphic mappings: one as a child of AbstractQuestion, with a polymorphic_identity that matches questionTypeId, and a separate polymorphic_on mapper for its own child table, RatingQuestion.
The closest thing I could find was this question, but it doesn't seem to be aiming at exactly what I'm looking for.
I also looked at the official docs about inheritance, but again couldn't find an example that matches what I'm trying to achieve.
Can anyone please help me with this?
Thanks!
I posted the same question on SQLAlchemy's GitHub repo. Got this answer from the maintainer:
https://github.com/sqlalchemy/sqlalchemy/discussions/8089#discussioncomment-2878725
I'll paste the contents below as well:
it sounds like you are looking for multi-level polymorphic_on. We don't support that right now without workarounds, and that's #2555, which is a feature we're unlikely to implement, or if we did it would be a long time from now.
It looks like you are using joined inheritance....so...two ways. The more SQL-efficient one is to have an extra "supplementary" column on your base table that can discriminate for AbstractSurveyQuestion...because if you query for all the AbstractQuestion objects, by default it's just going to query that one table, and needs to know from each row if that row is in fact a RatingQuestion.
the more convoluted way is to use mapper-configured with_polymorphic so that all queries for AbstractQuestion include all the tables (or a subset of tables, can be configured, but at minimum you'd need to join out to AbstractSurveyQuestion) using a LEFT OUTER JOIN (or if you really wanted to go crazy it can be a UNION ALL).
the workarounds are a little ugly since it's not very easy to get a "composed" value out of two columns in SQL, but they are contained to the base classes. Below examples work on SQLite and might need tweaking for other databases.
Here's the discriminator on base table demo, a query here looks like:
SELECT aq.id AS aq_id, aq.d1 AS aq_d1, aq.d2 AS aq_d2, CAST(aq.d1 AS VARCHAR) || ? || CAST(coalesce(aq.d2, ?) AS VARCHAR) AS _sa_polymorphic_on
FROM aq
from typing import Tuple, Optional

from sqlalchemy import cast
from sqlalchemy import Column
from sqlalchemy import create_engine
from sqlalchemy import event
from sqlalchemy import ForeignKey
from sqlalchemy import inspect
from sqlalchemy import Integer, func
from sqlalchemy import String
from sqlalchemy.orm import declarative_base
from sqlalchemy.orm import Session

Base = declarative_base()


class ident_(str):
    """describe a composed identity.

    Using a string for easy conversion to a string SQL composition.
    """

    _tup: Tuple[int, Optional[int]]

    def __new__(cls, d1, d2=None):
        self = super().__new__(cls, f"{d1}, {d2 or ''}")
        self._tup = d1, d2
        return self

    def _as_tuple(self):
        return self._tup


class AbstractQuestion(Base):
    __tablename__ = "aq"

    id = Column(Integer, primary_key=True)

    d1 = Column(
        Integer, nullable=False
    )  # this can be your FK to the other table etc.

    d2 = Column(
        Integer, nullable=True
    )  # this is a "supplementary" discrim column

    __mapper_args__ = {
        "polymorphic_identity": ident_(0),
        "polymorphic_on": cast(d1, String)
        + ", "
        + cast(func.coalesce(d2, ""), String),
    }


@event.listens_for(AbstractQuestion, "init", propagate=True)
def _setup_poly(target, args, kw):
    """receive new AbstractQuestion objects when they are constructed and
    set polymorphic identity"""

    # this is the ident_() object
    ident = inspect(target).mapper.polymorphic_identity

    d1, d2 = ident._as_tuple()
    kw["d1"] = d1
    if d2:
        kw["d2"] = d2


class MultiChoiceQuestion(AbstractQuestion):
    __tablename__ = "mcq"
    id: Column = Column(
        Integer, ForeignKey(AbstractQuestion.id), primary_key=True
    )
    __mapper_args__ = {"polymorphic_identity": ident_(1)}


class AbstractSurveyQuestion(AbstractQuestion):
    __tablename__ = "acq"
    id: Column = Column(
        Integer, ForeignKey(AbstractQuestion.id), primary_key=True
    )
    __mapper_args__ = {"polymorphic_identity": ident_(2)}


class RatingQuestion(AbstractSurveyQuestion):
    __tablename__ = "rq"
    id: Column = Column(
        Integer, ForeignKey(AbstractSurveyQuestion.id), primary_key=True
    )
    __mapper_args__ = {"polymorphic_identity": ident_(2, 1)}


e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

s = Session(e)

s.add(MultiChoiceQuestion())
s.add(RatingQuestion())
s.commit()

s.close()

for q in s.query(AbstractQuestion):
    print(q)
then there's the one that maintains your schema fully, a query here looks like:
SELECT aq.id AS aq_id, aq.d1 AS aq_d1, CAST(aq.d1 AS VARCHAR) || ? || CAST(coalesce(acq.d2, ?) AS VARCHAR) AS _sa_polymorphic_on, acq.id AS acq_id, acq.d2 AS acq_d2
FROM aq LEFT OUTER JOIN acq ON aq.id = acq.id
from typing import Tuple, Optional

from sqlalchemy import cast
from sqlalchemy import Column
from sqlalchemy import create_engine
from sqlalchemy import event
from sqlalchemy import ForeignKey
from sqlalchemy import func
from sqlalchemy import inspect
from sqlalchemy import Integer
from sqlalchemy import String
from sqlalchemy.orm import declarative_base
from sqlalchemy.orm import Session

Base = declarative_base()


class ident_(str):
    """describe a composed identity.

    Using a string for easy conversion to a string SQL composition.
    """

    _tup: Tuple[int, Optional[int]]

    def __new__(cls, d1, d2=None):
        self = super().__new__(cls, f"{d1}, {d2 or ''}")
        self._tup = d1, d2
        return self

    def _as_tuple(self):
        return self._tup


class AbstractQuestion(Base):
    __tablename__ = "aq"

    id = Column(Integer, primary_key=True)

    d1 = Column(
        Integer, nullable=False
    )  # this can be your FK to the other table etc.

    __mapper_args__ = {
        "polymorphic_identity": ident_(0),
    }


@event.listens_for(AbstractQuestion, "init", propagate=True)
def _setup_poly(target, args, kw):
    """receive new AbstractQuestion objects when they are constructed and
    set polymorphic identity"""

    # this is the ident_() object
    ident = inspect(target).mapper.polymorphic_identity

    d1, d2 = ident._as_tuple()
    kw["d1"] = d1
    if d2:
        kw["d2"] = d2


class MultiChoiceQuestion(AbstractQuestion):
    __tablename__ = "mcq"
    id: Column = Column(
        Integer, ForeignKey(AbstractQuestion.id), primary_key=True
    )
    __mapper_args__ = {"polymorphic_identity": ident_(1)}


class AbstractSurveyQuestion(AbstractQuestion):
    __tablename__ = "acq"
    id: Column = Column(
        Integer, ForeignKey(AbstractQuestion.id), primary_key=True
    )

    d2 = Column(Integer, nullable=False)

    __mapper_args__ = {
        "polymorphic_identity": ident_(2),
        "polymorphic_load": "inline",  # adds ASQ to all AQ queries
    }


# after ASQ is set up, set the discriminator on the base class
# that includes ASQ column
inspect(AbstractQuestion)._set_polymorphic_on(
    cast(AbstractQuestion.d1, String)
    + ", "
    + cast(func.coalesce(AbstractSurveyQuestion.d2, ""), String)
)


class RatingQuestion(AbstractSurveyQuestion):
    __tablename__ = "rq"
    id: Column = Column(
        Integer, ForeignKey(AbstractSurveyQuestion.id), primary_key=True
    )
    __mapper_args__ = {"polymorphic_identity": ident_(2, 1)}


e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

s = Session(e)

s.add(MultiChoiceQuestion())
s.add(RatingQuestion())
s.commit()

s.close()

for q in s.query(AbstractQuestion):
    print(q)

Creating a join on multiple foreign keys using Sqlalchemy

I am trying to query my database using SQLAlchemy ORM methods. I have created the tables and the engine, and have tested raw SQL against it. I want to be able to use a location code from the locations table as a parameter and pull the origin / destination from the trips table. Here is the code below:
Base = declarative_base()  # generated base class orm


class Trip(Base):
    __tablename__ = "trips"
    id = Column("id", Integer, primary_key=True, autoincrement=True)
    route = Column("Route", String(25))
    origin_id = Column("origin_id", String(10), ForeignKey("locations.locationCode"))
    destination_id = Column("destination_id", String(10), ForeignKey("locations.locationCode"))

    origin = relationship("Location", foreign_keys=[origin_id])
    destination = relationship("Location", foreign_keys=[destination_id])


class Location(Base):
    __tablename__ = "locations"
    locationCode = Column("locationCode", String(10), primary_key=True)
    latitude = Column("latitude", String(25))
    longitude = Column("longitude", String(25))
    facilityOwnedByCarvana = Column("facilityOwnedByCarvana", Integer)


engine = create_engine("sqlite:///carvana.db")
Session = sessionmaker(bind=engine)
session = Session()
Base.metadata.create_all(engine)

locationsDF = pd.read_csv("data/locations.csv")
tripsDf = pd.read_csv("data/trips.csv")

locationsDF.to_sql(con=engine, name=Location.__tablename__, if_exists="replace", index=False)
tripsDf.to_sql(con=engine, name=Trip.__tablename__, if_exists="replace", index=False)
Here is my attempt at the query
q = (
    session.query(Location)
    .outerjoin(Trip, Location.locationCode == Trip.destination_id)
    .filter(Location.locationCode == "BALT")
    .order_by(Location.locationCode)
    .limit(10)
)
General:
db_session.query(class_table1)\
    .join(class_table2,
          class_table2.key_table2 == class_table1.key_table1,
          isouter=True)
Specific:
db_session.query(Location)\
    .join(Trip,
          Trip.destination_id == Location.locationCode,
          isouter=True)
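A short usage sketch (my addition, not part of the original answer), assuming the Trip/Location models above and a populated carvana.db: once the join is in place, the related rows can be read through the origin/destination relationships defined on Trip.
trips_to_balt = session.query(Trip)\
    .join(Location,
          Trip.destination_id == Location.locationCode,
          isouter=True)\
    .filter(Trip.destination_id == "BALT")\
    .limit(10)\
    .all()

for trip in trips_to_balt:
    # origin/destination are lazy-loaded Location objects (or None)
    origin_code = trip.origin.locationCode if trip.origin else None
    print(trip.route, origin_code, trip.destination_id)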

In SQLalchemy, can I have a column with multiple strings?

For the following code, I have the mapping and an example of a "cookie" that contains some info I want to add through SQLAlchemy in Python. As you can see, some fields like Farmloc, TreasureMap, and Crafts all hold multiple entries. I searched and couldn't find much other than Enum, but that was giving me some trouble. Is there a better approach than what I've done here? Using Python 3.6.1 and SQLAlchemy 1.1.11.
import csv, sqlalchemy
from sqlalchemy import Column, String, Integer, ForeignKey, Numeric, Boolean
from sqlalchemy.ext.declarative import declarative_base
import sqlite3
from enum import Enum, auto
import enum
from sqlalchemy import create_engine

engine = create_engine('sqlite:///:memory:', echo=True)  # TEST DB
Base = declarative_base()


class Crystal(Base):
    __tablename__ = 'crystals'

    dbID = Column(Integer, primary_key=True)
    ItemName = Column(String, index=True, nullable=False)  # Full name, (ie. Earth Shard, Wind Crystal, Fire Cluster)
    ItemType = Column(String)     # Fire, Earth, Wind, Ice, Water ONLY
    ItemPow = Column(String)      # Shard, Crystal, Cluster ONLY
    Farmloc = Column(String)      # Where player can farm
    Retainer = Column(Boolean)    # If retainer can farm
    RetainerQ = Column(Integer)   # Quantity retainer gets
    Levee = Column(Boolean)       # Any Levees rewards these.
    TreasureMap = Column(String)  # False=NO, otherwise Types listed
    Desynthed = Column(String)    # False=NO, otherwise Herb1, Herb2, Ore1, etc
    Crafts = Column(String)       # Crafts associated with (ie Earth w LWR)
    Price = Column(Integer)       # MB price (Should be used as ref for all craftables)


from sqlalchemy.orm import sessionmaker

Session = sessionmaker(bind=engine)
session = Session()

cc_cookie = Crystal(dbID=1,
                    ItemName='Wind Cluster',
                    ItemType='Wind',
                    ItemPow='Cluster',
                    Farmloc='The Dravanian Hinterlands, Mor Dhona',
                    Retainer=False,
                    RetainerQ=0,
                    Levee=False,
                    TreasureMap='Dragonskin Treasure Map,Gaganaskin Treasure Map,Gazelleskin Treasure Map,Leather Buried Treasure Map',
                    Desynthed='Clary Sage,Furymint,Highland Oregano,Windtea Leaves',
                    Crafts='CRP,GSM,LWR,WVR',
                    Price=500)

Base.metadata.create_all(engine)
session.add(cc_cookie)
session.commit()
Check out the built-in class sqlalchemy.types.ARRAY
For example:
TreasureMap = Column(ARRAY(String))
Side note: by convention, column names are all lowercase and underscore-separated (e.g. treasure_map rather than TreasureMap).
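One caveat worth adding (my note, not part of the original answer): ARRAY requires a backend with native array support, which in practice means PostgreSQL, so it will not work on the in-memory SQLite engine used in the question. Below is a hedged sketch of two portable alternatives, a JSON column (supported on SQLite from SQLAlchemy 1.3 onward) or simply splitting the comma-separated string in Python; it is a standalone variant of the Crystal model, not a drop-in change to the code above.
from sqlalchemy import create_engine, Column, Integer, String, JSON
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Crystal(Base):
    __tablename__ = 'crystals'

    dbID = Column(Integer, primary_key=True)
    TreasureMap = Column(JSON)   # stores a real Python list, portable across backends
    Crafts = Column(String)      # plain comma-separated string, split in Python when read

engine = create_engine('sqlite:///:memory:')
Base.metadata.create_all(engine)

crystal = Crystal(
    dbID=1,
    TreasureMap=['Dragonskin Treasure Map', 'Gaganaskin Treasure Map'],
    Crafts='CRP,GSM,LWR,WVR',
)
crafts = crystal.Crafts.split(',')  # ['CRP', 'GSM', 'LWR', 'WVR']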

SQLAlchemy mapping a table union to a class

I'm using SQLAlchemy to query a number of similar tables, and union the results. The tables are rows of customer information, but our current database structures it so that different groups of customers are in their own tables e.g. client_group1, client_group2, client_group3:
client_group1:
| id | name | email               |
| 1  | john | johnsmith@gmail.com |
| 2  | greg | gregjones@gmail.com |
Each of the other tables has identical columns. If I'm using SQLAlchemy declarative_base, I can have a class for client_group1 like the following:
class ClientGroup1(Base):
    __tablename__ = 'client_group1'
    __table_args__ = {u'schema': 'clients'}

    id = Column(Integer, primary_key=True)
    name = Column(String(32))
    email = Column(String(32))
Then I can do queries such as:
session.query(ClientGroup1.name)
However, if I use union_all to combine a bunch of client tables into a viewport, such as:
query1 = session.query(ClientGroup1.name)
query2 = session.query(ClientGroup2.name)
viewport = union_all(query1, query2)
then I'm not sure how to map a viewport to an object, and instead I have to access viewport columns using:
viewport.c.name
Is there any way to map the viewport to a specific table structure, especially considering that each class points to a different __tablename__?
Read the Concrete Table Inheritance documentation for the idea behind this approach. The code below is a running example:
from sqlalchemy import create_engine, Column, String, Integer
from sqlalchemy.orm import sessionmaker, configure_mappers
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.declarative import AbstractConcreteBase

engine = create_engine('sqlite:///:memory:', echo=True)
Session = sessionmaker(bind=engine)
session = Session()
Base = declarative_base(engine)


class ClientGroupBase(AbstractConcreteBase, Base):
    pass


class ClientGroup1(ClientGroupBase):
    __tablename__ = 'client_group1'
    # __table_args__ = {'schema': 'clients'}
    __mapper_args__ = {
        'polymorphic_identity': 'client_group1',
        'concrete': True,
    }

    id = Column(Integer, primary_key=True)
    name = Column(String(32))
    email = Column(String(32))


class ClientGroup2(ClientGroupBase):
    __tablename__ = 'client_group2'
    # __table_args__ = {'schema': 'clients'}
    __mapper_args__ = {
        'polymorphic_identity': 'client_group2',
        'concrete': True,
    }

    id = Column(Integer, primary_key=True)
    name = Column(String(32))
    email = Column(String(32))


def _test_model():
    # generate classes for all tables
    Base.metadata.create_all()
    print('-' * 80)

    # configure mappers (see documentation)
    configure_mappers()
    print('-' * 80)

    # add some test data
    session.add(ClientGroup1(name="name1"))
    session.add(ClientGroup1(name="name1"))
    session.add(ClientGroup2(name="name1"))
    session.add(ClientGroup2(name="name1"))
    session.commit()
    print('-' * 80)

    # perform a query
    q = session.query(ClientGroupBase).all()
    for r in q:
        print(r)


if __name__ == '__main__':
    _test_model()
The above example has the added benefit that you can also create new objects, as well as query only some of the tables.
You could also do it by mapping an SQL VIEW to a class, but you need to specify a primary key explicitly (see Is possible to mapping view with class using mapper in SqlAlchemy?). In your case, I'm afraid, this might not work because of duplicate PK values across the tables, and using a multi-column PK might not be the best idea.
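For completeness, here is a hedged sketch (my addition, using the select([...]) list style of that era) of what the union/view mapping described above could look like. A synthetic source column plus a composite primary key keeps identities unique, which is exactly the multi-column-PK trade-off the answer warns about, so the concrete-inheritance version above is usually the safer route.
from sqlalchemy import literal_column, select, union_all

# UNION ALL the client tables, adding a literal "source" column so rows
# coming from different tables get distinct identities
clients_union = union_all(
    select([literal_column("'client_group1'").label('source'),
            ClientGroup1.__table__.c.id,
            ClientGroup1.__table__.c.name,
            ClientGroup1.__table__.c.email]),
    select([literal_column("'client_group2'").label('source'),
            ClientGroup2.__table__.c.id,
            ClientGroup2.__table__.c.name,
            ClientGroup2.__table__.c.email]),
).alias('clients_union')


class AnyClient(Base):
    # map the class directly against the union selectable; the composite
    # (source, id) key is what keeps rows from different tables apart
    __table__ = clients_union
    __mapper_args__ = {
        'primary_key': [clients_union.c.source, clients_union.c.id],
    }

# session.query(AnyClient) now selects from the union and returns AnyClient
# instances with .source, .id, .name and .email attributes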

Creating a query with few related tables in Pyramid with SQLAlchemy

I have defined a few tables in Pyramid like this:
# coding: utf-8
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import (Integer, SmallInteger, Float, DateTime, ForeignKey,
                        ForeignKeyConstraint, String, Column)
from sqlalchemy.orm import scoped_session, sessionmaker, relationship, backref
from zope.sqlalchemy import ZopeTransactionExtension

DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))
Base = declarative_base()


class Codes(Base):
    __tablename__ = 'Code'
    __table_args__ = {u'schema': 'Locations'}

    id = Column(Integer, nullable=False)
    code_str = Column(String(9), primary_key=True)
    name = Column(String(100))

    incoming = relationship(u'Voyages', primaryjoin='Voyage.call == Codes.code_str', backref=backref('Code'))


class Locations(Base):
    __tablename__ = 'Location'
    __table_args__ = {u'schema': 'Locations'}

    unit_id = Column(ForeignKey(u'Structure.Definition.unit_id', ondelete=u'RESTRICT', onupdate=u'CASCADE'), primary_key=True, nullable=False)
    timestamp = Column(DateTime, primary_key=True, nullable=False)
    longitude = Column(Float)
    latitude = Column(Float)


class Voyages(Base):
    __tablename__ = 'Voyage'
    __table_args__ = (
        ForeignKeyConstraint(
            ['unit_id', 'Voyage_id'],
            [u'Locations.Voyages.unit_id', u'Locations.Voyages.voyage_id'],
            ondelete=u'RESTRICT', onupdate=u'CASCADE'),
        {u'schema': 'Locations'}
    )

    uid = Column(Integer, primary_key=True)
    unit_id = Column(Integer)
    voyage_id = Column(Integer)
    departure = Column(ForeignKey(u'Locations.Code.code_str', ondelete=u'RESTRICT', onupdate=u'CASCADE'))
    call = Column(ForeignKey(u'Locations.Code.code_str', ondelete=u'RESTRICT', onupdate=u'CASCADE'))
    departure_date = Column(DateTime)

    voyage_departure = relationship(u'Codes', primaryjoin='Voyage.departure == Codes.code_str')
    voyage_call = relationship(u'Codes', primaryjoin='Voyage.call == Codes.code_str')


class Definitions(Base):
    __tablename__ = 'Definition'
    __table_args__ = {u'schema': 'Structure'}

    unit_id = Column(Integer, primary_key=True)
    name = Column(String(90))
    type = Column(ForeignKey(u'Structure.Type.id', ondelete=u'RESTRICT', onupdate=u'CASCADE'))

    locations = relationship(u'Locations', backref=backref('Definition'))
    dimensions = relationship(u'Dimensions', backref=backref('Definition'))
    types = relationship(u'Types', backref=backref('Definition'))
    voyages = relationship(u'Voyages', backref=backref('Definition'))


class Dimensions(Base):
    __tablename__ = 'Dimension'
    __table_args__ = {u'schema': 'Structure'}

    unit_id = Column(ForeignKey(u'Structure.Definition.unit_id', ondelete=u'RESTRICT', onupdate=u'CASCADE'), primary_key=True, nullable=False)
    length = Column(Float)


class Types(Base):
    __tablename__ = 'Type'
    __table_args__ = {u'schema': 'Structure'}

    id = Column(SmallInteger, primary_key=True)
    type_name = Column(String(255))
    type_description = Column(String(255))
What I am trying to do here is to find a specific row in the Codes table (filtered by code_str) and get all related tables in return, under the condition that the Locations table returns only the latest row by timestamp, the Voyages table returns only the latest row by departure, and that all information from the Definitions table is included.
I started to create a query from scratch and came up with something like this:
string_to_search = request.matchdict.get('code')

sub_dest = DBSession.query(func.max(Voyage.departure).label('latest_voyage_timestamp'),
                           Voyage.unit_id,
                           Voyage.call.label('destination_call')).\
    filter(Voyage.call == string_to_search).\
    group_by(Voyage.unit_id, Voyage.call).\
    subquery()

query = DBSession.query(Codes, Voyage).\
    join(sub_dest, sub_dest.c.destination_call == Codes.code_str).\
    outerjoin(Voyage, sub_dest.c.latest_voyage_timestamp == Voyage.departure_date)
but I have noticed that when I iterate through my results (like for code, voyage in query) I am actually iterating over every Voyage returned. In theory that is not a big problem for me, but I am trying to construct a JSON response with basic information from the Codes table that would include all related Voyages (if there are any at all).
For example:
code_data = {}
all_units = []
for code, voyage in query:
if code_data is not {}:
code_data = {
'code_id': code.id,
'code_str': code.code_str,
'code_name': code.name,
}
single_unit = {
'unit_id': voyage.unit_id,
'unit_departure': str(voyage.departure_date) if voyage.departure_date else None,
}
all_units.append(single_unit)
return {
'code_data': exception.message if exception else code_data,
'voyages': exception.message if exception else all_units,
}
Now, this seems a bit wrong because I don't like rewriting code_data on every pass of the loop (hence the "if code_data is not {}" line), but I suppose it would be much better (and more logical) to iterate in a way similar to this:
for code in query:
    code_data = {
        'code_id': code.id,
        'code_str': code.code_str,
        'code_name': code.name,
    }
    for voyage in code.voyages:
        single_unit = {
            'unit_id': voyage.unit_id,
            'unit_departure': str(voyage.departure) if voyage.departure else None,
        }
        all_units.append(single_unit)

return {
    'code_data': exception.message if exception else code_data,
}
So, I want to get only a single Code in return (since I queried the db for that specific Code), which would then have all Voyages related to it as a nested value and, of course, in each Voyage all the other information related to the Definition of the particular unit...
Is my approach good at all in the first place, and how could I construct my query in order to iterate it in this second way?
I'm using Python 2.7.6, SQLAlchemy 0.9.7 and Pyramid 1.5.1 with a PostgreSQL database.
Thanks!
Try changing the outer query like so:
query = DBSession.query(Codes).options(contains_eager('incoming')).\
    join(sub_dest, sub_dest.c.destination_call == Codes.code_str).\
    outerjoin(Voyage, sub_dest.c.latest_voyage_timestamp == Voyage.departure_date)
In case of problems, try calling the options(...) part like so:
(...) .options(contains_eager(Codes.incoming)). (...)
This should result in a single Codes instance being returned with Voyages objects accessible via the relationship you've defined (incoming), so you could proceed with:
results = query.all()
for code in results:
    print code
    # do something with code.incoming
    # actually, you should get only one code so if it proves to work, you should
    # use query.one() so that in case something else than a single Code is returned,
    # an exception is thrown
of course you need an import, e.g.: from sqlalchemy.orm import contains_eager
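Putting the pieces of the answer together (a hedged sketch, my consolidation rather than the answerer's exact code), assuming the models and the sub_dest subquery from the question: the query eager-loads the incoming collection, query.one() enforces the single-Code expectation, and the nested structure the question asks for falls out of code.incoming.
from sqlalchemy.orm import contains_eager

query = DBSession.query(Codes).options(contains_eager(Codes.incoming)).\
    join(sub_dest, sub_dest.c.destination_call == Codes.code_str).\
    outerjoin(Voyage, sub_dest.c.latest_voyage_timestamp == Voyage.departure_date)

code = query.one()  # raises if no Code (or more than one) comes back

code_data = {
    'code_id': code.id,
    'code_str': code.code_str,
    'code_name': code.name,
    'voyages': [
        {
            'unit_id': voyage.unit_id,
            'unit_departure': str(voyage.departure_date) if voyage.departure_date else None,
        }
        for voyage in code.incoming
    ],
}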
