Copying data from one SQLAlchemy session to another - Python

I have a SQLAlchemy schema containing three tables (A, B, and C) related via one-to-many Foreign Key relationships (between A->B and B->C), with SQLite as a backend. I create separate database files to store data, each of which uses the exact same SQLAlchemy models, and run identical code to put data into them.
I want to be able to copy data from all these individual databases and put them into a single new database file, while preserving the Foreign Key relationships. I tried the following code to copy data from one file to a new file:
import sqlalchemy
from sqlalchemy.ext import declarative
from sqlalchemy import Column, String, Integer
from sqlalchemy import orm

Base = declarative.declarative_base()
Session = orm.sessionmaker()

class A(Base):
    __tablename__ = 'A'
    a_id = Column(Integer, primary_key=True)
    adata = Column(String)
    b = orm.relationship('B', back_populates='a', cascade='all, delete-orphan', passive_deletes=True)

class B(Base):
    __tablename__ = 'B'
    b_id = Column(Integer, primary_key=True)
    a_id = Column(Integer, sqlalchemy.ForeignKey('A.a_id', ondelete='SET NULL'))
    bdata = Column(String)
    a = orm.relationship('A', back_populates='b')
    c = orm.relationship('C', back_populates='b', cascade='all, delete-orphan', passive_deletes=True)

class C(Base):
    __tablename__ = 'C'
    c_id = Column(Integer, primary_key=True)
    b_id = Column(Integer, sqlalchemy.ForeignKey('B.b_id', ondelete='SET NULL'))
    cdata = Column(String)
    b = orm.relationship('B', back_populates='c')

file_new = 'file_new.db'
resource_new = 'sqlite:///%s' % file_new
engine_new = sqlalchemy.create_engine(resource_new, echo=False)
session_new = Session(bind=engine_new)

file_old = 'file_old.db'
resource_old = 'sqlite:///%s' % file_old
engine_old = sqlalchemy.create_engine(resource_old, echo=False)
session_old = Session(bind=engine_old)

for arow in session_old.query(A):
    session_new.add(arow)  # I am assuming that this will somehow know to copy all the child rows from the tables B and C due to the Foreign Key.
When run, I get the error, "Object '' is already attached to session '2' (this is '1')". Any pointers on how to do this using SQLAlchemy and sessions? I also want to preserve the Foreign Key relationships within each database.
The use case is that data is first generated locally on non-networked machines and then aggregated into a central DB in the cloud. While the data will be generated in SQLite, the merge might happen in MySQL or Postgres, although here everything is happening in SQLite for simplicity.

First, the reason you get that error is that the instance arow is still tracked by session_old, so session_new will refuse to deal with it. You can detach it from session_old:
session_old.expunge(arow)
Which will allow you to add arow to session_new without issue, but you'll notice that nothing gets inserted into file_new. This is because SQLAlchemy knows that arow is persistent (meaning there's a row in the db corresponding to this object), and when you detach it and add it to session_new, SQLAlchemy still thinks it's persistent, so it does not get inserted again.
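As an aside, a sketch (assuming the imports from the question, not part of the original answer): if you genuinely wanted the detached object to be INSERTed as a brand-new row, orm.make_transient() erases that persistent state:

session_old.expunge(arow)
orm.make_transient(arow)  # arow now looks brand new to any session
session_new.add(arow)     # so this add() results in an INSERT

Note that make_transient() only affects the one instance; it does nothing about child rows, which is why the merge approach below is the better fit here.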
This is where Session.merge comes in. One caveat is that it won't merge unloaded relationships, so you'll need to eager load all the relationships you want to merge:
query = session_old.query(A).options(orm.subqueryload(A.b),
                                     orm.subqueryload(A.b, B.c))
for arow in query:
    session_new.merge(arow)
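Putting it together, a minimal sketch of the whole copy (assuming the models from the question; note that the schema must exist on the new engine, and the merges must be committed):

Base.metadata.create_all(engine_new)  # make sure the target schema exists

query = session_old.query(A).options(orm.subqueryload(A.b),
                                     orm.subqueryload(A.b, B.c))
for arow in query:
    session_new.merge(arow)  # copies arow plus its eager-loaded B and C children

session_new.commit()  # nothing is written to file_new.db until this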

Related

How do I make SQLAlchemy set values for a foreign key by passing a related entity in the constructor?

When using SQLAlchemy I would like the foreign key fields to be filled in on the Python object when I pass in a related object. For example, assume you have network devices with ports, and assume that the device has a composite primary key in the database.
If I already have a reference to a "Device" instance and want to create a new "Port" instance linked to that device without knowing if it already exists in the database I would use the merge operation in SA. However, only setting the device attribute on the port instance is insufficient. The fields of the composite foreign key will not be propagated to the port instance and SA will be unable to determine the existence of the row in the database and unconditionally issue an INSERT statement instead of an UPDATE.
The following code examples demonstrate the issue. They should be run as one .py file so we have the same in-memory SQLite instance! They have only been split for readability.
Model Definition
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Unicode, ForeignKeyConstraint, create_engine
from sqlalchemy.orm import sessionmaker, relation

Base = declarative_base()

class Device(Base):
    __tablename__ = 'device'
    hostname = Column(Unicode, primary_key=True)
    scope = Column(Unicode, primary_key=True)
    poll_ip = Column(Unicode, primary_key=True)
    notes = Column(Unicode)
    ports = relation('Port', backref='device')

class Port(Base):
    __tablename__ = 'port'
    __table_args__ = (
        ForeignKeyConstraint(
            ['hostname', 'scope', 'poll_ip'],
            ['device.hostname', 'device.scope', 'device.poll_ip'],
            onupdate='CASCADE', ondelete='CASCADE'
        ),
    )
    hostname = Column(Unicode, primary_key=True)
    scope = Column(Unicode, primary_key=True)
    poll_ip = Column(Unicode, primary_key=True)
    name = Column(Unicode, primary_key=True)

engine = create_engine('sqlite://', echo=True)
Base.metadata.bind = engine
Base.metadata.create_all()
Session = sessionmaker(bind=engine)
The model defines a Device class with a composite PK with three fields. The Port class references Device through a composite FK on those three columns. Device also has a relationship to Port which will use that FK.
Using the model
First, we add a new device and port. As we're using an in-memory SQLite DB, these will be the only two entries in the DB. By inserting one device into the database, we have something in the device table that we expect to be loaded on the subsequent merge in session "sess2".
sess1 = Session()
d1 = Device(hostname='d1', scope='s1', poll_ip='pi1')
p1 = Port(device=d1, name='port1')
sess1.add(d1)
sess1.commit()
sess1.close()
Working example
This block works, but it is not written in a way I would expect it to behave. More precisely, the instance "d1" is instantiated with "hostname", "scope" and "poll_ip", and that instance is passed to the "Port" instance "p2". I would expect that "p2" would "receive" those 3 values through the foreign key. But it doesn't. I am forced to manually assign the values to "p2" before calling "merge". If the values are not assigned, SA does not find the identity and tries to run an "INSERT" query for "p2" which will conflict with the already existing instance.
sess2 = Session()
d1 = Device(hostname='d1', scope='s1', poll_ip='pi1')
p2 = Port(device=d1, name='port1')
p2.hostname = d1.hostname
p2.poll_ip = d1.poll_ip
p2.scope = d1.scope
p2 = sess2.merge(p2)
sess2.commit()
sess2.close()
Broken example (but expecting it to work)
This block shows how I would expect it to work. I would expect that assigning a value to "device" when creating the Port instance should be enough.
sess3 = Session()
d1 = Device(hostname='d1', scope='s1', poll_ip='pi1')
p2 = Port(device=d1, name='port1')
p2 = sess3.merge(p2)
sess3.commit()
sess3.close()
How can I make this last block work?
The FK of the child object isn't updated until you issue a flush(), either explicitly or through a commit(). I think the reason for this is that if the parent object of a relationship is also a new instance with an auto-increment PK, SQLAlchemy needs to get the PK from the database before it can update the FK on the child object (but I stand to be corrected!).
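A quick sketch of that behaviour, using the Device/Port models above (new key values chosen so it doesn't collide with the d1 row already inserted):

sess = Session()
d = Device(hostname='d9', scope='s9', poll_ip='pi9')
p = Port(device=d, name='port9')
print(p.hostname)   # None - FK columns are not populated at assignment time

sess.add(d)
sess.flush()        # the flush populates the composite FK on the child
print(p.hostname)   # 'd9'
sess.close()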
According to the docs, a merge():
examines the primary key of the instance. If it’s present, it attempts
to locate that instance in the local identity map. If the load=True
flag is left at its default, it also checks the database for this
primary key if not located locally.
If the given instance has no primary key, or if no instance can be
found with the primary key given, a new instance is created.
As you are merging before flushing, there is incomplete PK data on your p2 instance, and so the line p2 = sess3.merge(p2) returns a new Port instance with the same attribute values as the p2 you previously created, which is tracked by the session. Then sess3.commit() finally issues the flush, where the FK data is populated onto p2, and the integrity error is raised when it tries to write to the port table. Inserting a sess3.flush() will only raise the integrity error earlier, not avoid it.
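You can see this in the session state (a sketch using sqlalchemy.inspect, not part of the original question):

from sqlalchemy import inspect

merged = sess3.merge(p2)
print(merged is p2)             # False: merge returns a session-tracked copy
print(inspect(merged).pending)  # True: scheduled for INSERT, not matched to the existing row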
Something like this would work:
def existing_or_new(sess, kls, **kwargs):
    inst = sess.query(kls).filter_by(**kwargs).one_or_none()
    if not inst:
        inst = kls(**kwargs)
    return inst

id_data = dict(hostname='d1', scope='s1', poll_ip='pi1')
sess3 = Session()
d1 = Device(**id_data)
p2 = existing_or_new(sess3, Port, name='port1', **id_data)
d1.ports.append(p2)
sess3.commit()
sess3.close()
This question has more thorough examples of existing_or_new style functions for SQLAlchemy.

SQLAlchemy relationship with secondary table joining behaviour changes between lazy and eager loading

I've been playing with SQL Alchemy for a couple of months now and so far been really impressed with it.
There is one issue I've run into now that seems to be a bug, but I'm not sure that I'm doing the right thing. We use MS SQL here, with table reflection to define the table classes, however I can replicate the problem using an in-memory SQLite database, code for which I have included here.
What I am doing is defining a many-to-many relationship between two tables using a linking table between them. There is one extra piece of information that the linking table contains which I want to use for filtering the links, requiring the use of a primaryjoin statement on the relationship. This works perfectly for lazy loading, but for performance reasons we need eager loading, and that's where it all falls over.
If I define the relationship with lazy loading:
activefunds = relationship('Fund', secondary='fundbenchmarklink',
                           primaryjoin='and_(FundBenchmarkLink.isactive==True,'
                                       'Benchmark.id==FundBenchmarkLink.benchmarkid,'
                                       'Fund.id==FundBenchmarkLink.fundid)')
and query the DB normally:
query = session.query(Benchmark)
The behaviour I need is exactly what I want, though performance is really bad, due to the extra SQL queries when iterating through all of the benchmarks and their respective funds.
If I define the relationship with eager loading:
activefunds = relationship('Fund', secondary='fundbenchmarklink',
                           primaryjoin='and_(FundBenchmarkLink.isactive==True,'
                                       'Benchmark.id==FundBenchmarkLink.benchmarkid,'
                                       'Fund.id==FundBenchmarkLink.fundid)',
                           lazy='joined')
and query the DB normally:
query = session.query(Benchmark)
it blows up in my face:
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such column: fund.id
[SQL: 'SELECT benchmark.id AS benchmark_id,
benchmark.name AS benchmark_name,
fund_1.id AS fund_1_id,
fund_1.name AS fund_1_name,
fund_2.id AS fund_2_id,
fund_2.name AS fund_2_name
FROM benchmark
LEFT OUTER JOIN (fundbenchmarklink AS fundbenchmarklink_1
JOIN fund AS fund_1 ON fund_1.id = fundbenchmarklink_1.fundid) ON benchmark.id = fundbenchmarklink_1.benchmarkid
LEFT OUTER JOIN (fundbenchmarklink AS fundbenchmarklink_2
JOIN fund AS fund_2 ON fund_2.id = fundbenchmarklink_2.fundid) ON fundbenchmarklink_2.isactive = 1
AND benchmark.id = fundbenchmarklink_2.benchmarkid
AND fund.id = fundbenchmarklink_2.fundid']
The SQL above clearly shows the linked table is not being joined before attempting to access columns from it.
If I query the DB, specifically joining the linked table:
query = session.query(Benchmark).join(FundBenchmarkLink, Fund, isouter=True)
It works, but it means that whenever I query the Benchmark table, I always have to spell out the join that adds both of the extra tables.
Is there something I'm missing, is this a potential bug, or is it simply the way the library works?
Full working sample code to replicate issue:
import logging
logging.basicConfig(level=logging.INFO)
logging.getLogger('sqlalchemy.engine.base').setLevel(logging.INFO)

from sqlalchemy import Column, DateTime, String, Integer, Boolean, ForeignKey, create_engine
from sqlalchemy.orm import relationship, sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class FundBenchmarkLink(Base):
    __tablename__ = 'fundbenchmarklink'
    fundid = Column(Integer, ForeignKey('fund.id'), primary_key=True, autoincrement=False)
    benchmarkid = Column(Integer, ForeignKey('benchmark.id'), primary_key=True, autoincrement=False)
    isactive = Column(Boolean, nullable=False, default=True)
    fund = relationship('Fund')
    benchmark = relationship('Benchmark')

    def __repr__(self):
        return "<FundBenchmarkLink(fundid='{}', benchmarkid='{}', isactive='{}')>".format(self.fundid, self.benchmarkid, self.isactive)

class Benchmark(Base):
    __tablename__ = 'benchmark'
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    funds = relationship('Fund', secondary='fundbenchmarklink', lazy='joined')
    # activefunds has additional filtering on the secondary table, requiring a primaryjoin statement.
    activefunds = relationship('Fund', secondary='fundbenchmarklink',
                               primaryjoin='and_(FundBenchmarkLink.isactive==True,'
                                           'Benchmark.id==FundBenchmarkLink.benchmarkid,'
                                           'Fund.id==FundBenchmarkLink.fundid)',
                               lazy='joined')

    def __repr__(self):
        return "<Benchmark(id='{}', name='{}')>".format(self.id, self.name)

class Fund(Base):
    __tablename__ = 'fund'
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)

    def __repr__(self):
        return "<Fund(id='{}', name='{}')>".format(self.id, self.name)

if '__main__' == __name__:
    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    maker = sessionmaker(bind=engine)
    session = maker()

    # Create some data
    for bmkname in ['foo', 'bar', 'baz']:
        bmk = Benchmark(name=bmkname)
        session.add(bmk)
    for fname in ['fund1', 'fund2', 'fund3']:
        fnd = Fund(name=fname)
        session.add(fnd)
    session.add(FundBenchmarkLink(fundid=1, benchmarkid=1))
    session.add(FundBenchmarkLink(fundid=2, benchmarkid=1))
    session.add(FundBenchmarkLink(fundid=1, benchmarkid=2))
    session.add(FundBenchmarkLink(fundid=2, benchmarkid=2, isactive=False))
    session.commit()

    # This code snippet works when activefunds doesn't exist, or doesn't use eager loading
    # query = session.query(Benchmark)
    # print(query)
    # for bmk in query:
    #     print(bmk)
    #     for fund in bmk.funds:
    #         print('\t{}'.format(fund))

    # This code snippet works for activefunds with eager loading
    query = session.query(Benchmark).join(FundBenchmarkLink, Fund, isouter=True)
    print(query)
    for bmk in query:
        print(bmk)
        for fund in bmk.activefunds:
            print('\t{}'.format(fund))
I think you've mixed the primary join and the secondary join a bit. Your primary would seem to contain both at the moment. Remove the predicate for Fund and it should work:
activefunds = relationship(
    'Fund',
    secondary='fundbenchmarklink',
    primaryjoin='and_(FundBenchmarkLink.isactive==True,'
                'Benchmark.id==FundBenchmarkLink.benchmarkid)',
    lazy='joined')
The reason your explicit join seems to fix the query is that it introduces the table fund before the implicit eager-loading joins, so they can refer to it. It's not really a fix; it just hides the error. If you really want to use an explicit Query.join() with eager loading, inform the query about it with contains_eager(). Just be careful which relationship you choose as being contained, depending on the query in question; without additional filtering you could fill activefunds with inactive funds as well.
Finally, consider using Query.outerjoin() instead of Query.join(..., isouter=True).
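For illustration, a sketch of that contains_eager() approach (assuming the models from the question; the extra isactive predicate in the join keeps inactive links out of activefunds):

from sqlalchemy.orm import contains_eager

query = (session.query(Benchmark)
         .outerjoin(FundBenchmarkLink,
                    (FundBenchmarkLink.benchmarkid == Benchmark.id) &
                    (FundBenchmarkLink.isactive == True))
         .outerjoin(Fund, Fund.id == FundBenchmarkLink.fundid)
         .options(contains_eager(Benchmark.activefunds)))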

SQLAlchemy if table does not exist

I wrote a module which creates an empty database file:
from sqlalchemy import create_engine, MetaData

def create_database():
    engine = create_engine("sqlite:///myexample.db", echo=True)
    metadata = MetaData(engine)
    metadata.create_all()
But in another function, I want to open myexample.db database, and create tables to it if it doesn't already have that table.
E.g., the first subsequent table I would create would be:
Table(Variable_TableName, metadata,
      Column('Id', Integer, primary_key=True, nullable=False),
      Column('Date', Date),
      Column('Volume', Float))
(Since it is initially an empty database, it will have no tables in it, but subsequently I can add more tables to it. That's what I'm trying to say.)
Any suggestions?
I've managed to figure out what I intended to do. I used engine.dialect.has_table(engine, Variable_tableName) to check whether the database has the table in it. If it doesn't, it proceeds to create the table in the database.
Sample code:
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, Date, String, Float

engine = create_engine("sqlite:///myexample.db")  # Access the DB Engine
if not engine.dialect.has_table(engine, Variable_tableName):  # If the table doesn't exist, create it.
    metadata = MetaData(engine)
    # Create a table with the appropriate Columns
    Table(Variable_tableName, metadata,
          Column('Id', Integer, primary_key=True, nullable=False),
          Column('Date', Date), Column('Country', String),
          Column('Brand', String), Column('Price', Float))
    # Implement the creation
    metadata.create_all()
This seems to be giving me what i'm looking for.
Note that the 'Base.metadata' documentation states about create_all:
Conditional by default, will not attempt to recreate tables already
present in the target database.
Also note that create_all takes these arguments: create_all(self, bind=None, tables=None, checkfirst=True). According to the documentation, checkfirst:
Defaults to True, don't issue CREATEs for tables already present in
the target database.
So if I understand your question correctly, you can just skip the condition.
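That is, a minimal sketch (reusing the engine and metadata from the question):

# checkfirst=True is the default, so repeated calls are safe:
# CREATE TABLE is only issued for tables that don't exist yet.
metadata.create_all(engine)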
The accepted answer prints a warning that engine.dialect.has_table() is only for internal use and not part of the public API. The message suggests this as an alternative, which works for me:
import os
import sqlalchemy

# Set up a connection to a SQLite3 DB
test_db = os.getcwd() + "/test.sqlite"
db_connection_string = "sqlite:///" + test_db
engine = sqlalchemy.create_engine(db_connection_string)

# The recommended way to check for existence
sqlalchemy.inspect(engine).has_table("BOOKS")
See also the SQLAlchemy docs.
For those who define their tables in some models.tables file, among other tables: this is a code snippet for finding the class that represents the table we want to create (so that later we can use the same code to just query it). Even together with the if check written above, I still run the create with checkfirst=True:
ORMTable.__table__.create(bind=engine, checkfirst=True)
models.tables
from sqlalchemy import Column, Text

class TableA(Base): ...
class TableB(Base): ...

class NewTableC(Base):
    __tablename__ = 'new_table_c'  # assumed; omitted in the original snippet
    id = Column('id', Text, primary_key=True)  # primary key assumed so the class maps
    name = Column('name', Text)
form
Then in the form action file:
import importlib
from sqlalchemy import create_engine

engine = create_engine("sqlite:///myexample.db")
if not engine.dialect.has_table(engine, table_name):
    # Added to models.tables the new table I needed (format as written above)
    table_models = importlib.import_module('models.tables')
    # Grab the class that represents the new table
    # table_name = 'NewTableC'
    ORMTable = getattr(table_models, table_name)
    # checkfirst=True to make sure it doesn't already exist
    ORMTable.__table__.create(bind=engine, checkfirst=True)
engine.dialect.has_table does not work for me on cx_oracle.
I am getting AttributeError: 'OracleDialect_cx_oracle' object has no attribute 'default_schema_name'
I wrote a workaround function:
from sqlalchemy.engine.base import Engine

def orcl_tab_or_view_exists(in_engine: Engine, in_object: str, in_object_name: str) -> bool:
    """Checks if an Oracle table or view exists on the current in_engine connection.

    in_object: 'table' | 'view'
    in_object_name: table_name | view_name
    """
    obj_query = """SELECT {o}_name FROM all_{o}s
                   WHERE owner = SYS_CONTEXT('userenv', 'current_schema')
                   AND {o}_name = '{on}'""".format(o=in_object, on=in_object_name.upper())
    with in_engine.connect() as connection:
        result = connection.execute(obj_query)
        return len(list(result)) > 0
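Hypothetical usage of the workaround above (the engine and the object name here are placeholders):

# Returns True if the Oracle table exists in the current schema.
if orcl_tab_or_view_exists(engine, 'table', 'MY_TABLE'):
    print('table already there')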
This is the code that works for me to create all tables for all model classes defined against the Base class:
from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class YourTable(Base):
    __tablename__ = 'your_table'
    id = Column(Integer, primary_key=True)

DB_URL = "mysql+mysqldb://<user>:<password>@<host>:<port>/<db_name>"
scoped_engine = create_engine(DB_URL)
Base.metadata.create_all(scoped_engine)

sqlalchemy one-to-many ORM update error

I have two tables, Eca_users and Eca_user_emails; one user can have many emails. I receive JSON with users and their emails, and I want to load it into an MS SQL database. Users can update their emails, so in this JSON I can get the same users with new (or changed) emails.
My code
import sqlalchemy
from sqlalchemy import Column, Integer, String, DateTime, ForeignKey
from sqlalchemy.orm import sessionmaker, relationship, backref
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Eca_users(Base):
    __tablename__ = 'eca_users'
    sql_id = sqlalchemy.Column(sqlalchemy.Integer(), primary_key=True)
    first_id = sqlalchemy.Column(sqlalchemy.String(15))
    name = sqlalchemy.Column(sqlalchemy.String(200))
    main_email = sqlalchemy.Column(sqlalchemy.String(200))
    user_emails = relationship("Eca_user_emails", backref=backref('eca_users'))

class Eca_user_emails(Base):
    __tablename__ = 'user_emails'
    sql_id = sqlalchemy.Column(sqlalchemy.Integer(), primary_key=True)
    email_address = Column(String(200), nullable=False)
    status = Column(String(10), nullable=False)
    active = Column(DateTime, nullable=True)
    sql_user_id = Column(Integer, ForeignKey('eca_users.sql_id'))

def main():
    engine = sqlalchemy.create_engine('mssql+pymssql://user:pass/ECAusers?charset=utf8')
    Session = sessionmaker()
    Session.configure(bind=engine)
    session = Session()

    # then I get my json, parse it and...
    query = session.query(Eca_users).filter(Eca_users.first_id == str(user_id))
    if query.count() == 0:
        pass  # not interesting now
    else:
        for exstUser in query:
            exstUser.name = name  # update user info
            exstUser.user_emails = []  # empty old emails
            # creating new Email obj
            newEmail = Eca_user_emails(email_address=email_record['email'],
                                       status=email_record['status'],
                                       active=active_date)
            exstUser.user_emails.append(newEmail)  # and I get error here because autoflush
    session.commit()

if __name__ == '__main__':
    main()
Error message:
sqlalchemy.exc.IntegrityError: ...
[SQL: 'UPDATE user_emails SET sql_user_id=%(sql_user_id)s WHERE user_emails.sql_id = %(user_emails_sql_id)s'] [parameters: {'sql_user_id': None, 'user_emails_sql_id': Decimal('1')}]
I can't figure out why this sql_user_id is None :(
When I check the exstUser and newEmail objects in the debugger, everything looks fine; all the references are OK. The session object and its dirty attribute also look OK in the debugger (sql_user_id is set on the Eca_user_emails object).
And what is strangest for me: this code worked absolutely fine when it was without a main function, just all code after the class declarations. But after I wrote the main declaration and put all the code there, I started to get this error.
I am completely new to Python, so maybe this is one of those stupid mistakes...
Any ideas how to fix it and what is the reason? Thanks for reading this :)
By the way: Python 3.4, sqlalchemy 1.0, SQL Server 2012
sql_user_id is None because by default SQLAlchemy clears out the foreign key when you delete a child object across a relationship, that is, when you clear exstUser.user_emails SQLAlchemy sets sql_user_id to None for all those instances. If you want SQLAlchemy to issue DELETEs for Eca_user_emails instances when they are detached from Eca_users, you need to add delete-orphan cascade option to the user_emails relationship. If you want SQLAlchemy to issue DELETEs for Eca_user_emails instances when a Eca_users instance is deleted, you need to add the delete cascade option to the user_emails relationship.
user_emails = relationship("Eca_user_emails", backref=backref('eca_users'), cascade="save-update, merge, delete, delete-orphan")
You can find more information about cascades in the SQLAlchemy docs
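A sketch of the difference (assuming the corrected relationship above): with delete-orphan, emptying the collection deletes the orphaned email rows at flush time instead of trying to null out their foreign key:

exstUser.user_emails = []            # rows are now DELETEd on flush,
                                     # not UPDATEd with sql_user_id = NULL
exstUser.user_emails.append(newEmail)
session.commit()                     # no IntegrityError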

Get last inserted value from MySQL using SQLAlchemy

I've just run across a fairly vexing problem, and after testing I have found that NONE of the available answers are sufficient.
I have seen various suggestions but none seem to be able to return the last inserted value for an auto_increment field in MySQL.
I have seen examples that mention the use of session.flush() to add the record and then retrieve the id. However that always seems to return 0.
I have also seen examples that mention the use of session.refresh() but that raises the following error: InvalidRequestError: Could not refresh instance ''
What I'm trying to do seems insanely simple but I can't seem to figure out the secret.
I'm using the declarative approach.
So, my code looks something like this:
class Foo(Base):
    __tablename__ = 'tblfoo'
    __table_args__ = {'mysql_engine': 'InnoDB'}

    ModelID = Column(INTEGER(unsigned=True), default=0, primary_key=True, autoincrement=True)
    ModelName = Column(Unicode(255), nullable=True, index=True)
    ModelMemo = Column(Unicode(255), nullable=True)

f = Foo(ModelName='Bar', ModelMemo='Foo')
session.add(f)
session.flush()
At this point, the object f has been pushed to the DB, and has been automatically assigned a unique primary key id. However, I can't seem to find a way to obtain the value to use in some additional operations. I would like to do the following:
my_new_id = f.ModelID
I know I could simply execute another query to lookup the ModelID based on other parameters but I would prefer not to if at all possible.
I would much appreciate any insight into a solution to this problem.
Thanks for the help in advance.
The problem is that you are setting a default for the auto-increment column. So when the INSERT INTO query runs, the server log is:
2011-12-21 13:44:26,561 INFO sqlalchemy.engine.base.Engine.0x...1150 INSERT INTO tblfoo (`ModelID`, `ModelName`, `ModelMemo`) VALUES (%s, %s, %s)
2011-12-21 13:44:26,561 INFO sqlalchemy.engine.base.Engine.0x...1150 (0, 'Bar', 'Foo')
ID : 0
So the output is 0, which is the default value, and it is passed because you set a default on the autoincrement column.
If I run the same code without the default, it gives the correct output.
Please try this code
from sqlalchemy import create_engine
engine = create_engine('mysql://test:test@localhost/test1', echo=True)

from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()

from sqlalchemy.orm import sessionmaker
Session = sessionmaker(bind=engine)
session = Session()

from sqlalchemy import Column, Integer, Unicode

class Foo(Base):
    __tablename__ = 'tblfoo'
    __table_args__ = {'mysql_engine': 'InnoDB'}

    ModelID = Column(Integer, primary_key=True, autoincrement=True)
    ModelName = Column(Unicode(255), nullable=True, index=True)
    ModelMemo = Column(Unicode(255), nullable=True)

Base.metadata.create_all(engine)

f = Foo(ModelName='Bar', ModelMemo='Foo')
session.add(f)
session.flush()
print "ID :", f.ModelID
Try using session.commit() instead of session.flush(). You can then use f.ModelID.
Not sure why the flagged answer worked for you. But in my case, that does not actually insert the row into the table. I need to call commit() in the end.
So the last few lines of code are:
f = Foo(ModelName='Bar', ModelMemo='Foo')
session.add(f)
session.flush()
print "ID:", f.ModelID
session.commit()
