Python SQLAlchemy - Cannot Access Primary Key Column of SELECT Statement's Result

s = select([stations.c.name]).where(stations.c.name == station_name)
stations = connection.execute(s).fetchone()
I have the above code to run a SELECT on a SQL table. However, while the other columns of the matched row are accessible, trying to access its primary key column with stations['id'] gives the error:
"Could not locate column in row for column 'id'"
Why is that?
Table definition:
stations = Table('stations', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(16), nullable=False)
)

Note: you should avoid giving the same name to different objects, because in your case, after the statement
stations = connection.execute(s).fetchone()
the initial stations Table object is no longer accessible. You can rename the fetched object to station_record, rename the stations Table object to stations_table, or both.
Answer
If you want to get the id of a record, then you should query for it:
s = select([stations.c.id, stations.c.name]).where(stations.c.name == station_name)
or
s = select(stations.columns).where(stations.c.name == station_name)
Finally, we can have something like:
from sqlalchemy import MetaData, Table, Column, Integer, String, create_engine, select
db_uri = 'sqlite://'
engine = create_engine(db_uri)
metadata = MetaData(bind=engine)
stations = Table('stations', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(16), nullable=False)
)
stations.create(checkfirst=True)
connection = engine.connect()
station_name = 'sample text'
connection.execute(stations.insert().values(name=station_name))
s = select(stations.columns).where(stations.c.name == station_name)
station_record = connection.execute(s).fetchone()
station_record_id = station_record['id']
station_record_name = station_record['name']
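As a side note, in SQLAlchemy 1.x the fetched row also accepts the Column object itself as a key, which avoids typos in string keys:
station_record_id = station_record[stations.c.id]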

Related

How do I insert all the elements in a Python list into individual rows of an SQLAlchemy column?

I'm trying to insert the elements of exp_data into individual rows of the SQLAlchemy column exp_list inside the Expiration table, programmatically, using Python within the FastAPI framework:
exp_data = ['2020-08-27', '2020-09-03', '2020-09-10', '2020-09-17']
for i in exp_data:
    exp = Expiration(symbol=stock.symbol, exp_list=exp_data)
db.add_all([stock, exp])
db.commit()
And my SQLAlchemy model is:
from sqlalchemy import Boolean, Column, ForeignKey, Numeric, Integer, String, Date, Float
from sqlalchemy.orm import relationship, backref
from database import Base
class Stock(Base):
    __tablename__ = "stocks"
    id = Column(Integer, primary_key=True, index=True)
    symbol = Column(String)
    price = Column(Float)

class Expiration(Base):
    __tablename__ = "expirations"
    id = Column(Integer, primary_key=True, index=True)
    symbol = Column(String, ForeignKey(Stock.symbol), index=True)
    exp_list = Column(String)
I put exp_data in the code to show the size - I am scraping this data, and I want the program to automatically insert the data into the database by simply posting an HTTP request with the stock ticker to the server. I've been receiving this error:
sqlalchemy.exc.InterfaceError: (sqlite3.InterfaceError) Error binding parameter 1 - probably unsupported type.
[SQL: INSERT INTO expirations (symbol, exp_list) VALUES (?, ?)]
[parameters: ('FB', ['2020-08-27', '2020-09-03', '2020-09-10', '2020-09-17'])]
I think the issue is with the for loop in the first code block - I have been trying to find strategies to iterate through each value, such as '2020-08-27', and insert it into individual rows. Any help is much appreciated - thank you!
Your column exp_list is defined as a string/text column:
exp_list = Column(String)
and then you pass a Python list to it:
exp_data = ['2020-08-27', '2020-09-03', '2020-09-10', '2020-09-17']
exp = Expiration(symbol=stock.symbol, exp_list=exp_data)
You should pass the list as a string:
exp = Expiration(symbol=stock.symbol, exp_list=','.join(exp_data))
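Note that the value then comes back as one string, so it has to be split again on the way out. A quick sketch of the round trip, reusing the db session and model from the question:
exp = db.query(Expiration).filter_by(symbol=stock.symbol).first()
dates = exp.exp_list.split(',')  # back to ['2020-08-27', '2020-09-03', ...]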
You probably want something like
exp_data = ['2020-08-27', '2020-09-03', '2020-09-10', '2020-09-17']
exps = []
for date in exp_data:
    exps.append(Expiration(symbol=stock.symbol, exp_list=date))
instances = [stock]
instances.extend(exps)
db.add_all(instances)
db.commit()
that is, looping over the list of dates and creating an Expiration instance for each.
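Equivalently, the loop collapses to a list comprehension:
exps = [Expiration(symbol=stock.symbol, exp_list=date) for date in exp_data]
db.add_all([stock] + exps)
db.commit()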

SQLAlchemy query not iterating over all results from a query

I'm experiencing some odd behavior with SQLAlchemy not iterating over all of the results from a query.
For example, I have the following python code:
engine = create_engine(<connection string>)
Session = sessionmaker(bind=engine)
session = Session()
columns = session.query(Column)
counter = 1
for c in columns:
    print(counter)
    counter = counter + 1
print('count: ' + str(columns.count()))
where Column is a class that I've defined and mapped in the usual SQLAlchemy way:
from base import Base
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, Boolean
class Column(Base):
    __tablename__ = 'COLUMNS'
    __table_args__ = {'schema': 'INFORMATION_SCHEMA'}
    table_catalog = Column(String)
    table_schema = Column(String)
    table_name = Column(String)
    column_name = Column(String, primary_key=True)
    data_type = Column(String)
From my query, I'm expecting 7034 rows to be returned, and that is what the final print statement prints (for columns.count()), but the for loop only ever counts up to 2951.
If I do anything else with the returned data in the for loop, only those 2951 rows get processed, not all 7034.
Does anyone know why I'm experiencing this discrepancy, and how I can iterate over all 7034 rows, not just the 2951?
I've figured out why I wasn't getting the results I was expecting (I did something silly).
The column_name field in the table that the Column class maps to isn't unique. Picking it as the sole primary key meant SQLAlchemy's identity map collapsed rows sharing the same key value, and since there are duplicates, fewer rows were returned than I expected.
I fixed it by updating the definition of the Column mapping to:
from base import Base
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, Boolean
class Column(Base):
    __tablename__ = 'COLUMNS'
    __table_args__ = {'schema': 'INFORMATION_SCHEMA'}
    table_catalog = Column(String, primary_key=True)
    table_schema = Column(String, primary_key=True)
    table_name = Column(String, primary_key=True)
    column_name = Column(String, primary_key=True)
    data_type = Column(String)
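As an illustration of the identity-map behaviour behind this, here is a minimal self-contained sketch; the demo table and its values are made up, and the mapping declares a primary key the database does not enforce, just like INFORMATION_SCHEMA.COLUMNS:
from sqlalchemy import create_engine, Column, String, text
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite://')
# A table with no real primary key and duplicate values in 'name'
engine.execute(text("CREATE TABLE demo (name VARCHAR)"))
engine.execute(text("INSERT INTO demo (name) VALUES ('a'), ('a'), ('b')"))

Base = declarative_base()

class Demo(Base):
    __tablename__ = 'demo'
    name = Column(String, primary_key=True)  # declared here, not enforced by the DB

session = sessionmaker(bind=engine)()
query = session.query(Demo)
print(query.count())     # 3 -- COUNT(*) runs in SQL
print(len(list(query)))  # 2 -- rows sharing a primary key collapse in the identity map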

How to properly use SQLAlchemy's @aggregated class attribute decorator

I'm trying to use SQLAlchemy-Utils' @aggregated decorator to define an attribute (gross_amount) for a class, Receipt. This gross_amount attribute is the sum of Item.gross_amount for all Item instances associated with the Receipt instance by a foreign key.
I.e., a receipt is made up of items, and I want to define a receipt gross_amount value which is just the total $ of all of the items on the receipt.
I've modeled my code after this document http://sqlalchemy-utils.readthedocs.io/en/latest/aggregates.html
So it looks like this...
from sqlalchemy import Column, Integer, ForeignKey
from sqlalchemy.sql import func
from sqlalchemy import orm
class Receipt(Base):
    __tablename__ = "receipts"
    __table_args__ = {'extend_existing': True}
    id = Column(Integer, index=True, primary_key=True, nullable=False)

    @aggregated('itemz', Column(Integer))
    def gross_amount(self):
        return func.sum(Item.gross_amount)

    itemz = orm.relationship(
        'Item',
        backref='receipts'
    )

class Item(Base):
    __tablename__ = "items"
    id = Column(Integer, index=True, primary_key=True, nullable=False)
    '''
    FE relevant
    '''
    gross_amount = Column(Integer)
    receipt_id = Column(Integer, ForeignKey("receipts.id"), nullable=False)
In my migration, am I supposed to have a column in the receipts table for gross_amount?
1) When I DO define this column in the receipts table, any Receipt.gross_amount for any instance just points to the gross_amount values defined in the receipts table.
2) When I DO NOT define this column in the receipts table, I get a SQLAlchemy error whenever I execute a SELECT against the database:
ProgrammingError: (psycopg2.ProgrammingError) column receipts.gross_amount does not exist
FWIW, my SQLAlchemy packages are the latest distributed through pip:
SQLAlchemy==1.1.11
SQLAlchemy-Utils==0.32.14
And my local db on which I'm running this for now is PostgreSQL 9.6.2
What am I doing wrong here? Any patient help would be greatly appreciated!
Yes, you do need to add the column to the table:
CREATE TABLE receipts (
    id INTEGER NOT NULL,
    gross_amount INTEGER, -- <<< See, it's here :)
    PRIMARY KEY (id)
);
INSERT INTO receipts VALUES(1,7);
INSERT INTO receipts VALUES(2,7);
CREATE TABLE items (
    id INTEGER NOT NULL,
    gross_amount INTEGER,
    receipt_id INTEGER NOT NULL,
    PRIMARY KEY (id),
    FOREIGN KEY(receipt_id) REFERENCES receipts (id)
);
Tested with this self-contained snippet:
from sqlalchemy import Column, Integer, ForeignKey, create_engine, orm
from sqlalchemy.orm import sessionmaker
from sqlalchemy.sql import func
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy_utils import aggregated

Base = declarative_base()

class Receipt(Base):
    __tablename__ = "receipts"
    __table_args__ = {'extend_existing': True}
    id = Column(Integer, index=True, primary_key=True, nullable=False)

    @aggregated('itemz', Column(Integer))
    def gross_amount(self):
        return func.sum(Item.gross_amount)

    itemz = orm.relationship('Item', backref='receipts')

class Item(Base):
    __tablename__ = "items"
    id = Column(Integer, index=True, primary_key=True, nullable=False)
    gross_amount = Column(Integer)
    receipt_id = Column(Integer, ForeignKey("receipts.id"), nullable=False)

    def __init__(self, amount):
        self.gross_amount = amount

engine = create_engine('sqlite:///xxx.db', echo=True)
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

receipt = Receipt()
receipt.itemz.append(Item(5))
receipt.itemz.append(Item(2))
session.add(receipt)
session.commit()
print(receipt.gross_amount)
Of course, there's also another approach called hybrid_property, which basically allows you to do both ORM- and database-level queries without adding an extra column to your database:
from sqlalchemy import select
from sqlalchemy.ext.hybrid import hybrid_property

# on the Receipt class:
@hybrid_property
def gross_sum(self):
    return sum(i.gross_amount for i in self.itemz)

@gross_sum.expression
def gross_sum(cls):
    return select([func.sum(Item.gross_amount)]).\
        where(Item.receipt_id == cls.id).\
        label('gross_sum')
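The hybrid then works both on instances and in queries, for example:
receipt.gross_sum                                   # Python-level sum over the loaded items
session.query(Receipt).order_by(Receipt.gross_sum)  # runs the SUM subquery in SQL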
The reason you're getting this error is that the new column you're adding (gross_amount) has not been created in the receipts table in the database.
Meaning, your current database table only has one created column (id). For the aggregated column to work, it needs to contain an additional column called gross_amount.
This additional column has to allow null values.
One way to go about doing that is through SQL directly in PostgreSQL:
ALTER TABLE receipts ADD gross_amount int;
Alternatively, if there's no data yet, you can drop and recreate the table via SQLAlchemy. It should create this extra column automatically.
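A minimal sketch of that approach (it destroys any existing data; assumes the declarative Base from the question):
Base.metadata.drop_all(engine)    # drops receipts and items
Base.metadata.create_all(engine)  # recreates them, including gross_amount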
I'm not sure what you mean by the last part:
When I DO define this column in the receipts table, any
Receipt.gross_amount for any instance just points to the gross_amount
values defined in the receipts table.
That's where it's supposed to point. Do you mean that it doesn't contain any values, even though there are values for this receipt's items in Item? If so, I would double-check that this is the case (and, per the examples in the docs, refresh the database session before checking the results).

Get last inserted record's Primary Key in "declarative_base()"

I want to get the primary key of the last inserted record. I already know two ways to do this:
1) "lastrowid" with "raw SQL"
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String, text
engine = create_engine('sqlite://')
meta = MetaData()
tbl = Table('tbl', meta,
    Column('f1', Integer, primary_key=True),
    Column('f2', String(64))
)
tbl.create(engine)
sql = text("INSERT INTO tbl VALUES (NULL, 'some_data')")
res = engine.execute(sql)
print(res.lastrowid)
2) "inserted_primary_key" with "insert()"
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String
engine = create_engine('sqlite://')
meta = MetaData()
tbl = Table('tbl', meta,
    Column('f1', Integer, primary_key=True),
    Column('f2', String(64))
)
tbl.create(engine)
ins = tbl.insert().values(f2='some_data')
res = engine.execute(ins)
print(res.inserted_primary_key)
but my problem is with declarative_base():
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
engine = create_engine('sqlite://')
Base = declarative_base()
Session = sessionmaker(bind=engine)
session = Session()
class TBL(Base):
    __tablename__ = 'tbl'
    f1 = Column(Integer, primary_key=True)
    f2 = Column(String(64))
Base.metadata.create_all(engine)
rcd = TBL(f2='some_data')
session.add(rcd)
session.commit()
If I do this:
res = session.add(rcd)
It gives me None. Or if I do this:
res = session.commit()
The same thing happens. My questions are:
Is there any good way to access lastrowid or inserted_primary_key in the case of declarative_base()?
What is the best approach ?
After calling session.commit(), accessing rcd.f1 will return its generated primary key. SQLAlchemy automatically reloads the object from the database after it has been expired by the commit.
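A quick sketch of both options; session.flush() also populates the key without committing:
rcd = TBL(f2='some_data')
session.add(rcd)
session.flush()   # emits the INSERT; rcd.f1 is populated from here on
print(rcd.f1)
session.commit()  # commit expires the instance; the next access reloads it
print(rcd.f1)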

Strange error in sqlalchemy-migrate on column.copy() with column type BigInteger

The situation is a little bit simplified. I have two migration files for sqlalchemy-migrate:
In the first, I create the table volume_usage_cache, then autoload it, create a copy of its columns, and print them:
from sqlalchemy import Column, DateTime
from sqlalchemy import Boolean, BigInteger, MetaData, Integer, String, Table
def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine
    # Create new table
    volume_usage_cache = Table('volume_usage_cache', meta,
        Column('deleted', Boolean(create_constraint=True, name=None)),
        Column('id', Integer(), primary_key=True, nullable=False),
        Column('curr_write_bytes', BigInteger(), default=0),
        mysql_engine='InnoDB',
        mysql_charset='utf8'
    )
    volume_usage_cache.create()
    volume_usage_cache = Table('volume_usage_cache', meta, autoload=True)
    columns = []
    [columns.append(column.copy()) for column in volume_usage_cache.columns]
    print columns
And I get in the log what I expected:
[Column('deleted', Boolean(), table=None), Column('id', Integer(), table=None,
primary_key=True, nullable=False), Column('curr_write_bytes', BigInteger(),
table=None, default=ColumnDefault(0))]
But if I make a copy of the columns in the second migration file (which is run after the first):
from sqlalchemy import MetaData, String, Integer, Boolean, Table, Column, Index
def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine
    table = Table("volume_usage_cache", meta, autoload=True)
    columns = []
    for column in table.columns:
        columns.append(column.copy())
    print columns
I get a different result:
[Column('deleted', INTEGER(), table=None, default=ColumnDefault(0)),
Column(u'id', INTEGER(), table=None, primary_key=True, nullable=False),
Column(u'curr_write_bytes', NullType(), table=None)]
Why does the curr_write_bytes column have NullType?
There are two problems.
First:
In the first file we are using the old metadata, which already contains all the columns with the needed types.
So if we create a new MetaData instance, SQLAlchemy will load the table info from the database and will get the same result as in the second file.
Second:
There is no support in SQLAlchemy for the BigInteger column type in SQLite, and SQLite does not really enforce column types at all. So we can create a table with a BigInteger column (and it will work), but after autoload the type of such a column will be automatically converted to NullType.
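A common workaround in this situation is to override the reflected type explicitly: SQLAlchemy lets you declare individual columns before autoload=True, and the declared columns take precedence over reflection. A minimal sketch, with migrate_engine as in the question:
from sqlalchemy import BigInteger, Column, MetaData, Table

meta = MetaData()
meta.bind = migrate_engine
# The explicitly declared column overrides the reflected NullType
table = Table("volume_usage_cache", meta,
    Column('curr_write_bytes', BigInteger(), default=0),
    autoload=True)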
