I have a Flask app that uses SQLAlchemy to read from and write to a Postgres schema. When I use the .delete() method, it only flushes; the actual changes never show up in the database.
Session = sessionmaker(autocommit=False, autoflush=False, bind=conn)
sess = Session()
sess.query(Table).filter(Table.id == 1).delete()
sess.commit()
I tried it without scoped_session as well, but the issue is still the same.
The .delete() method returns the number of rows to be deleted, not a new query (http://docs.sqlalchemy.org/en/rel_1_0/orm/query.html#sqlalchemy.orm.query.Query.delete). If you assign that return value back to sess, you end up overwriting your session with an integer and then trying to call commit() on that number.
Additionally, you set autoflush=False when you created your session, so pending changes are not flushed to the database automatically before each query; you would have to call flush() explicitly. I suggest this:
Session = sessionmaker(autocommit=False, bind=conn)
sess = Session()
rows_deleted = sess.query(Table).filter(Table.id == 1).delete()
sess.commit()
print(str(rows_deleted) + " rows were deleted")
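If you're using Flask-SQLAlchemy, the same thing goes through the db.session proxy: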
db.session.query(Model).filter(Model.id==123).delete()
db.session.commit()
I am trying to add some rows to my table in a local SQLite file using SQLAlchemy. However, the rows don't get saved. I don't get any primary key errors, and I am committing my inserts/session.
What's really confusing me is that my code works fine (my inserts are saved) when I change my engine to connect to a local PostgreSQL server. I am calling the add_values function from another file.
def add_values(offers):
    engine = create_engine("sqlite:///mydb.db", echo=True)
    Base.metadata.create_all(bind=engine)
    Session = sessionmaker(bind=engine)
    with Session.begin() as session:
        list_of_offers = []
        for ele in offers:
            offer = Offer(ele.id, ele.asin, ele.price, ele.currency, ele.condition,
                          ele.seller, ele.datetime, ele.buyboxWinner, ele.numberOfSellers)
            list_of_offers.append(offer)
        session.add_all(list_of_offers)
        session.commit()
I tried switching my database to a PostgreSQL server.
I'm trying to build a Flask web app that interfaces with a remote, pre-existing SQL Server database.
I have a successful connection, according to the output of echo='debug'.
I'm using the .reflect() "only" parameter because the database I'm connecting to has hundreds of tables. Without it, the entire database gets reflected, and it runs way too slow.
engine = create_engine('mssql+pymssql://user:pass@server/db', echo='debug')
conn = engine.connect()
meta = MetaData(engine).reflect(schema='dbo', only=['Bookings'])
table = meta.tables['Bookings']
select_st = select([table]).where(
    table.c.ID == 'id-1234')
res = conn.execute(select_st)
for _row in res:
print(_row)
The problem is that I'm getting the error:
table = meta.tables['Bookings']
AttributeError: 'NoneType' object has no attribute 'tables'
My guess is that .tables doesn't work with the subset 'Bookings' that I've passed it, because .tables is called on the meta object, which it believes should be a database, not a table.
How can I get SQLAlchemy features to work with that 'only' parameter? Again, I'm building a Flask web app, so the solution needs to be able to interface with Base.automap.
MetaData.reflect() reflects into the MetaData object in place and returns None, so your meta ends up being None.
Try it like this:
meta = MetaData(engine)
meta.reflect(schema='dbo', only=['Bookings'])
table = meta.tables['Bookings']
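Since the question mentions automap: a minimal sketch of reflecting only a few tables and then mapping them, assuming SQLAlchemy 1.x and that Bookings has a primary key (automap skips tables without one):
from sqlalchemy import create_engine, MetaData
from sqlalchemy.ext.automap import automap_base

engine = create_engine('mssql+pymssql://user:pass@server/db')
meta = MetaData()
meta.reflect(bind=engine, schema='dbo', only=['Bookings'])

# Build mapped classes from the already-reflected metadata
Base = automap_base(metadata=meta)
Base.prepare()
Bookings = Base.classes.Bookings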
Per the docs, we should use the following pattern with a sessionmaker object:
Session = sessionmaker(engine)
with Session.begin() as session:
    session.add(some_object)
In a multithreaded environment, we are also supposed to use a single scoped_session and share it. So in my __init__.py I create one and import it everywhere else in my program:
engine = create_engine(config.SQLALCHEMY_DATABASE_URI)
Session = scoped_session(sessionmaker(bind=engine))
The question is, how am I supposed to combine these two approaches? This seems to be the suggested way, but it errors out:
from myapp import Session
with Session.begin() as session:
    query_result = session.query(MyModel).all()
----
Exception has occurred: AttributeError
'SessionTransaction' object has no attribute 'query'
I tried the following and it works, but it seems like it doesn't follow the docs, and I'm afraid it breaks something not obvious. Can anyone confirm if this is correct?
from myapp import Session
with Session() as session, session.begin():
    query_result = session.query(MyModel).all()
I've been looking around at other replies and seeing very little that addresses the specific question.
From the Session.begin() docs:
The Session object features autobegin behavior, so that normally it is not necessary to call the Session.begin() method explicitly. However, it may be used in order to control the scope of when the transactional state is begun.
You can use Session.begin() (new in 1.4) to obtain a SessionTransaction instance usable as a context manager which will autocommit on successful exit.
Calling Session.begin() on the scoped_session hands you the SessionTransaction right away, as per your error, so what's bound in the with block isn't a Session at all; and since the Session autobegins, you don't need to begin it explicitly anyway.
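To see the difference (a quick sketch, assuming Session = scoped_session(sessionmaker(bind=engine))):
txn = Session.begin()   # a SessionTransaction; it has no .query() or .add()
session = Session()     # the actual thread-local Session (autobegins on first use)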
All in all, you can definitely do the stacked context manager, but it's unnecessary, so you might as well stick to the original flow:
Session = scoped_session(...)
with Session() as session:  # NB: session here is the thread-local Session
    ...
    session.commit()
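This works because of the autobegin behavior quoted above: the first operation on the session opens a transaction, and session.commit() ends it.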
or the proxied Session
Session = scoped_session(...)

@on_request_end
def remove_session(req):
    Session.remove()

@route("/xyz", ...)
def handle_xyz():
    instance = Class(...)
    Session.add(instance)
    Session.commit()
I am using SQLAlchemy's provided context manager to handle sessions for me. What I don't understand is how to get the automatically generated ID, because (1) the ID is not created until after commit() is called, yet (2) the newly created instance is only available within the context manager's scope:
def save_soft_file(name, is_geo=False):
    with session_scope() as session:
        soft_file = models.SoftFile(name=name, is_geo=is_geo)
        session.add(soft_file)
        # id is not available here, because the session has not been committed
    # soft_file is not available here, because the session is out of context
    return soft_file.id
What am I missing?
Use session.flush() to execute pending commands within the current transaction.
def save_soft_file(name, is_geo=False):
    with session_scope() as session:
        soft_file = models.SoftFile(name=name, is_geo=is_geo)
        session.add(soft_file)
        session.flush()
        return soft_file.id
If an exception occurs after a flush but before the session goes out of scope, the changes will be rolled back to the beginning of the transaction. In that case your soft_file would not actually be written to the database, even though it had been given an ID.
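For context, session_scope() here is the transactional context manager from the SQLAlchemy docs; a minimal version looks like this (assuming Session is a configured sessionmaker):
from contextlib import contextmanager

@contextmanager
def session_scope():
    """Provide a transactional scope around a series of operations."""
    session = Session()
    try:
        yield session
        session.commit()
    except:
        session.rollback()
        raise
    finally:
        session.close()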
I'm trying to do a two-phase commit using SQLAlchemy 0.6.8 with PostgreSQL 8.3.4, but I think I'm missing something...
The workflow goes like this:
session = sessionmaker(engine)(autocommit=True)
tx = session.connection().begin_twophase(xid) # Doesn't issue any SQL
session.begin()
session.add(obj1)
session.flush()
tx.prepare()
Then, from another session:
session = sessionmaker(engine)(autocommit=True)
session.connection().commit_prepared(xid, recover=True)
# recover=True because otherwise it complains that you can't issue
# a COMMIT PREPARED from inside a transaction
This doesn't raise any error, but doesn't write anything to the table either... O_o
What am I missing?
I even tried blocking the application after the prepare() and issuing COMMIT PREPARED 'xid' from pgAdmin, but still nothing gets written.
I managed to get it working; here's how:
session = sessionmaker(engine)(twophase=True)
session.add(obj1)
session.prepare()
# Find the transaction id
for k, v in session.transaction._connections.iteritems():
    if isinstance(k, Connection):
        xid = v[1].xid
        break
Then, from another session:
session = sessionmaker(engine)(twophase=True)
session.connection().commit_prepared(xid, recover=True)
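If you lose track of the xid, you can also recover prepared-but-unfinished transactions at the connection level; a sketch using SQLAlchemy's two-phase Connection API:
conn = engine.connect()
for xid in conn.recover_twophase():  # xids of prepared, uncommitted transactions
    conn.commit_prepared(xid, recover=True)
conn.close()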