SQLAlchemy freezing application - python

I am using SQLAlchemy in my Python command-line app. The app basically reads a set of URLs and performs inserts into a PostgreSQL database based on the data.
After roughly the same number of inserts each run (give or take a few), the entire app freezes.
Having seen "python sqlalchemy + postgresql program freezes", I assume I am doing something wrong with the SQLAlchemy Session (although I am not using drop_all(), which seemed to be the cause of that issue). I've tried a couple of things, but so far they have had no impact.
Any hints or help would be welcome. If my integration of SQLAlchemy into my app is incorrect, a pointer to a good example of doing it right would also be welcome.
My code is as follows:
Set up the SQLAlchemy declarative Base:
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
Create the session and attach it to the Base:
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
engine = create_engine("sqlite:///myapp.db")
db_session = scoped_session(sessionmaker(bind=engine))
Base.query = db_session.query_property()
Base.scoped_db_session = db_session
Create my model from Base and make use of the session:
class Person(Base):
    def store(self):
        if self.is_new():
            self.scoped_db_session.add(self)
            self.scoped_db_session.commit()
If I create enough objects of type Person and call store(), the app eventually freezes.

Managed to solve the problem. It turns out that my implementation is specifically on the "don't do it this way" list (see the session FAQ at http://docs.sqlalchemy.org/en/latest/orm/session_basics.html#session-frequently-asked-questions, e.g. "don't do this"), and I was not managing the session correctly.
To solve my problem I moved the session out of the model into a separate class, so instead of having calls like:
mymodel.store()
I now have:
db.current_session.store(mymodel)
where db is an instance of my custom DBContext below:
from contextlib import contextmanager
from sqlalchemy.orm import scoped_session, sessionmaker

class DbContext(object):
    def __init__(self, engine, session=None):
        self._engine = engine
        self._session = session or scoped_session(sessionmaker(autocommit=False, autoflush=False, bind=self._engine))
        self.query = self._session.query_property()
        self.current_session = None

    def start_session(self):
        self.current_session = self._session()

    def end_session(self):
        if self.current_session:
            self.current_session.commit()
            self.current_session.close()
            self.current_session = None

    @contextmanager
    def new_session(self):
        try:
            self.start_session()
            yield
        finally:
            self.end_session()
When you want to store one or more model objects, call DbContext.start_session() to start a clean session. When you finally want to commit, call DbContext.end_session().
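For illustration, usage then looks roughly like this (a sketch; the store() wrapper mentioned above is not shown in the snippet, so plain session calls are used, and the Person constructor arguments are made up):
from sqlalchemy import create_engine
engine = create_engine("sqlite:///myapp.db")
db = DbContext(engine)
# explicit start/end
db.start_session()
db.current_session.add(Person(name="Alice"))   # hypothetical Person attributes
db.end_session()                               # commits, closes, and resets current_session
# or with the context manager, which guarantees end_session() runs
with db.new_session():
    db.current_session.add(Person(name="Bob"))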

Related

sqlalchemy is not returning updated row information

I have a web application that sits on top of MySQL. When I started it, I built the MySQL database from scratch and connected to it using pymysql.
Fast forward...
I've rewritten everything using SQLAlchemy to connect to the db (non-declarative?). I can connect to the db, read the db, update, etc. I also use MySQL Workbench to view the database graphically.
My webapp has a few tables that poll the database for changes and update themselves. For instance, I have a job queue which updates the percentage done. Here's the kicker:
When I send a job through, I watch the database in MySQL Workbench. I can verify that the job is going through (I see the percentage climbing). This confirms that the webapp is writing to the database. But! On the other end, where the table is asking for the status, it is not receiving the updated information (I've done printouts in the models, as well as in the Flask app, as well as the FE JS).
What is going on here? Even though I can see it in MySQL Workbench, is the connection not 'releasing' or something? Here is some of my code:
# models.py
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker, Query
import os
import json

Base = automap_base()
engine = create_engine("mysql+pymysql://user:asdas758ef10d2364d54d7e8@localhost:{}/metadatacontroller".format(3306))
# reflect the tables
Base.prepare(engine, reflect=True)
EncodeQueue = Base.classes.encode_queue
db_session = scoped_session(sessionmaker(bind=engine))

class DataStore:
    def __init__(self):
        self.db = db_session()

    # receive status
    def queue_rendering_items(self):
        query = self.db.query(EncodeQueue).filter(EncodeQueue.status.in_(['queue', 'rendering', 'rendering thumbnails'])).all()
        return_query = []
        for item in query:
            row = {'artist_id': item.artist_id,
                   'art_id': item.art_id,
                   'status': item.status,
                   'progress': item.errors,
                   'artwork_title': item.artwork_title,
                   'artist_name': item.artist_name,
                   }
            return_query.append(row)
        return return_query

    # update status
    def update_render_progress(self, job_id, progress):
        self.db.query(EncodeQueue).filter_by(job_id=job_id).update({'errors': json.dumps(progress)})
        self.db.commit()
This is how the 'status' code is being called from a Flask endpoint:
@app.route('/api/jobs', methods=["GET"])
def get_all_jobs():
    db = models.DataStore()
    all_art = db.queue_rendering_items()
    print(all_art)
    return jsonify({'data': all_art})
Lastly, the status is being updated by a different file, on a different machine:
import models
db = models.DataStore()
db.update_render_progress(job_id, {'percentage': percentage_complete, 'time_remain': render.render_estimated_seconds_remaining})
What the heck is happening?

What is the type of table_name in select_from(table_name)?

I'm trying to get the number of rows in each table of a SQL Server database, which consists of many tables, by looping through them to see how much data they contain. However, I'm not sure what should go into the select_from() function. I currently supply Unicode strings for the table names, and it raised:
NoInspectionAvailable: No inspection system is available for object of type <type 'unicode'>
The code that I used was
from sqlalchemy import create_engine
import urllib
from sqlalchemy import inspect
import sqlalchemy
from sqlalchemy import select, func, Integer, Table, Column, MetaData
from sqlalchemy.orm import sessionmaker
connection_string = "DRIVER={SQL Server}; *hidden*"
connection_string = urllib.quote_plus(connection_string)
connection_string = "mssql+pyodbc:///?odbc_connect=%s" % connection_string
engine = sqlalchemy.create_engine(connection_string)
Session = sessionmaker()
Session.configure(bind=engine)
session = Session()
connection = engine.connect()
inspector = inspect(engine)
for table_name in inspector.get_table_names():
    print session.query(func.count('*')).select_from(table_name).scalar()
Typically, it's a mapped class: a class that describes the database table.
In the SQLAlchemy docs (http://docs.sqlalchemy.org/en/latest/orm/tutorial.html), they have you create a base class using declarative_base and then create child classes for each table you want to query. You would then pass that class itself, unquoted (not as a string), into the select_from function.
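A minimal, self-contained sketch of that pattern (the Stop class and its columns are invented for illustration):
from sqlalchemy import Column, Integer, String, create_engine, func
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
Base = declarative_base()
class Stop(Base):  # hypothetical mapped class describing one table
    __tablename__ = 'stops'
    id = Column(Integer, primary_key=True)
    name = Column(String(100))
engine = create_engine('sqlite:///:memory:')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
# select_from() takes the mapped class itself (or another selectable),
# not the table name as a plain string
print(session.query(func.count()).select_from(Stop).scalar())  # prints 0 for the empty table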
The Flask-SQLAlchemy extension provides a ready-to-use base class called db.Model, and Django has one called models.Model.
Alternatively, you can also pass queries. I typically use the Flask framework for Python, so I usually initiate queries like this:
my_qry = db.session.query(Cust).filter(Cust.cust == 'lolz')
results = my_qry.all()
On a side note, if you decide to look at .NET they also have nice ORMs. Personally, I favor Entity Framework, but Linq to SQL is out there, too.

SQLAlchemy-Continuum and Pyramid: UnboundExecutionError

I have a Pyramid application that does CRUD with SQLAlchemy via pyramid_basemodel. All seems to work nicely.
I then pip installed SQLAlchemy-Continuum, to provide history for certain objects. All I did to configure it was make the following alterations to my models.py file:
import sqlalchemy as sa
from sqlalchemy import (event, Column, Index, Integer, Text, String, Date, DateTime,
                        Float, ForeignKey, Table, Boolean,)
from sqlalchemy.orm import (relationship, backref, mapper, scoped_session, sessionmaker,)
from pyramid_basemodel import Base, BaseMixin, Session, save
from pyramid_fullauth.models import User
from sqlalchemy_continuum import make_versioned
from colanderalchemy import setup_schema
from zope.sqlalchemy import ZopeTransactionExtension

DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))
event.listen(mapper, 'mapper_configured', setup_schema)

# Continuum setup
make_versioned()

# FOR EACH VERSIONED MODEL I ADD __versioned__ = {} at the start of each model def. Eg:
class Thing(Base):
    __versioned__ = {}
    __tablename__ = 'thing'

    id = sa.Column(Integer, primary_key=True)
    related_id = sa.Column(Integer, ForeignKey('OtherThing.id'))
    other_thing = sa.orm.relationship("OtherThing", backref="thing")
    description = sa.Column(String(length=100))
    a_date = sa.Column(Date)
    some_hours = sa.Column(Integer)
    b_date = sa.Column(Date)
    more_hours = sa.Column(Integer)

sa.orm.configure_mappers()
(Sorry for the slightly redundant imports; I decided to totally follow the Continuum example and import sqlalchemy as sa, and switch to using that notation in the models that I versioned. I may also be doing stupid, monkey-see monkey-do stuff based on a half-understanding of different tutorials.)
This setup allowed me to run alembic revision --autogenerate and produce ModelHistory tables in the database, but when I go to some of the pages that read the now-versioned models, they give the error
sqlalchemy.exc.UnboundExecutionError: This session is not bound to a single Engine or Connection, and no context was provided to locate a binding.
For some reason it can read one model that was added in the same way, but then trying to update it fails with the same error.
My guess is that I need to configure whatever Continuum uses for a SQLAlchemy session to point to the existing one configured in Pyramid, but I'm not sure. Am I getting warm?
You're generating a session when you call:
DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))
but not binding it to an engine. Your pyramid template, pyramid_basemodel, is already generating a session for you and binding it to the engine.
Try removing the DBSession and using Session imported from pyramid_basemodel.
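A rough sketch of that change, assuming the rest of models.py stays as posted:
# in models.py, drop the locally created session:
# DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))
# and wherever you query, use the session pyramid_basemodel has already configured and bound:
from pyramid_basemodel import Session
things = Session.query(Thing).all()  # example query through the bound scoped session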
FWIW, if anyone these days is looking for how to make SQLAlchemy-Continuum work with Pyramid, here's how you do it.
Assuming you have followed the official Pyramid tutorial, the steps are the following:
install SQLAlchemy-Continuum
add make_versioned(user_cls=None) to the top of models/__init__.py
add __versioned__ = {} to MyModel class in models/mymodel.py
And that's it!
I've created a repo that has all the needed bits in place: https://github.com/zupo/tutorial/tree/exploration/sqlalchemy-continuum
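For reference, a rough single-file sketch of what steps 2 and 3 amount to (the MyModel columns follow the tutorial's example and may differ in your version):
import sqlalchemy as sa
import sqlalchemy.orm
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy_continuum import make_versioned
make_versioned(user_cls=None)  # step 2: call this before the model classes are defined
Base = declarative_base()
class MyModel(Base):  # step 3: opt the model in to versioning
    __versioned__ = {}
    __tablename__ = 'models'
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.Text)
    value = sa.Column(sa.Integer)
sa.orm.configure_mappers()  # lets Continuum build the MyModelVersion class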

Correct way of using session in SQLAlchemy

I started with SQLAlchemy and everything is good. But my question is: what is the correct way to use a session?
For example:
I have a file (sqlsettings.py) like this:
from sqlalchemy import create_engine
engine = create_engine('mysql://root#127.0.0.1/test')
engine.execute("SET NAMES 'utf8'")
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
from sqlalchemy.orm import sessionmaker
Session = sessionmaker(bind=engine, autoflush=True, autocommit=True, expire_on_commit=True)
session = Session()
Every time I need to use a session, I use the following:
class test:
    from sqlsettings import session

    def testfunc(self):
        self.session.query... bla bla
Is it correct to import the session from a file like that, or should I make a new one for each function?

How to properly test a Python Flask system based on SQLAlchemy Declarative

I have a project that I've been working on for a while, which is written in Flask and uses SQLAlchemy with the Declarative extension (http://flask.pocoo.org/docs/patterns/sqlalchemy/). I've recently decided to start unit testing my project, but for the life of me, I can't seem to figure out how to make it work.
I looked at http://flask.pocoo.org/docs/testing/, but I can't seem to make it work.
I tried a mix of things from different websites, but am unable to find something that works correctly.
class StopsTestCase(unittest.TestCase):

    def setUp(self):
        self.engine = create_engine('sqlite:///:memory:')
        self.session = scoped_session(sessionmaker(autocommit=False,
                                                   autoflush=False,
                                                   bind=self.engine))
        models.Base = declarative_base()
        models.Base.query = self.session.query_property()
        models.Base.metadata.create_all(bind=self.engine)

    def test_empty_db(self):
        stops = session.query(models.Stop).all()
        assert len(stops) == 0

    def tearDown(self):
        session.remove()

if __name__ == '__main__':
    unittest.main()
Unfortunately, the best I can seem to get causes the following error:
OperationalError: (OperationalError) no such table: stops u'SELECT stops.agency_id AS stops_agency_id, stops.id AS stops_id, stops.name AS stops_name, stops."desc" AS stops_desc, stops.lat AS stops_lat, stops.lon AS stops_lon, stops.zone_id AS stops_zone_id \nFROM stops' ()
----------------------------------------------------------------------
Ran 1 test in 0.025s
FAILED (errors=1)
Any help on this would be greatly appreciated. If someone has ever been through this before, and made it work, I would like some pointers! Thanks in advance.
Based on what I found and how I got it working, here is a template solution that works for testing the underlying SQLAlchemy systems using the Declarative extension.
import unittest

from database import Base
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

import models

class StopsTestCase(unittest.TestCase):

    def setUp(self):
        self.engine = create_engine('sqlite:///:memory:')
        self.session = scoped_session(sessionmaker(autocommit=False,
                                                   autoflush=False,
                                                   bind=self.engine))
        Base.query = self.session.query_property()
        Base.metadata.create_all(bind=self.engine)
        # Create objects here
        # These will likely involve something like the following for one of my stops
        # stop1 = models.Stop(id=1, name="Stop 1")
        # self.session.add(stop1)
        # self.session.commit()
        # But adding a stop to the database here will break the test below. Just saying.

    def test_empty_db(self):
        stops = self.session.query(models.Stop).all()
        assert len(stops) == 0

    def tearDown(self):
        self.session.remove()
You're instantiating declarative_base again; you should be using the same instance you used as a base class for your models. Also, you seem to be using two different session instances: self.session and some module-global session. Try cleaning that up, too.
