I am using the Pyramid web framework with SQLAlchemy, connected to a MySQL backend. The app I've put together works, but I'm trying to add some polish by way of enhanced logging and exception handling.
I based everything on the basic SQLAlchemy tutorial on the Pyramid site, using the session like so:
from sqlalchemy.orm import scoped_session, sessionmaker
from zope.sqlalchemy import ZopeTransactionExtension

DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))
Using DBSession to query works great, and if I need to add and commit something to the database I'll do something like
DBSession.add(myobject)
DBSession.flush()  # emits the INSERT and populates myobject.id without committing
So I get my new ID.
Then I wanted to add logging to the database, so I followed this tutorial. That seemed to work great. I did initially run into some weirdness with things getting committed, and since I wasn't sure how SQLAlchemy worked I changed "transaction.commit()" to "DBSession.flush()" to force the logs to commit (this is addressed below!).
Next I wanted to add custom exception handling with the intent that I could put a friendly error page for anything that wasn't explicitly caught and still log things. So based on this documentation I created error handlers like so:
from pyramid.view import (
    view_config,
    forbidden_view_config,
    notfound_view_config
)
from pyramid.httpexceptions import (
    HTTPFound,
    HTTPNotFound,
    HTTPForbidden,
    HTTPBadRequest,
    HTTPInternalServerError
)
from models import DBSession
import transaction
import logging

log = logging.getLogger(__name__)
#region Custom HTTP Errors and Exceptions
@view_config(context=HTTPNotFound, renderer='HTTPNotFound.mako')
def notfound(request):
    log.exception('404 not found: {0}'.format(request.url))
    request.response.status_int = 404
    return {}

@view_config(context=HTTPInternalServerError, renderer='HTTPInternalServerError.mako')
def internalerror(request):
    log.exception('HTTPInternalServerError: {0}'.format(request.url))
    request.response.status_int = 500
    return {}

@view_config(context=Exception, renderer="HTTPExceptionCaught.mako")
def error_view(exc, request):
    log.exception('HTTPException: {0}'.format(request.url))
    log.exception(str(exc))  # str(exc), since exc.message is Python 2 only
    return {}
#endregion
So now my problem is: exceptions are caught, and my custom exception view comes up as expected, but the exceptions aren't logged to the database. It appears this is because the DBSession transaction is rolled back on any exception. So I changed the logging handler back to "transaction.commit()". That did commit my exception logs to the database, BUT now any DBSession action after any log statement throws an "Instance not bound to a session" error... which makes sense, because from what I understand the session is cleared out after a transaction.commit(). The console log always shows exactly what I want logged, including the SQL statements that write the log info to the database. But nothing is committed on exception unless I use transaction.commit(), and if I do that, I kill any DBSession statements after the transaction.commit()!
So... how might I set things up so that I can log to the database, and also catch and successfully log exceptions to the database? I feel like I want the logging handler to use some sort of separate database session/connection/instance/something so that it is self-contained, but I'm unclear on how that would work.
Or should I architect what I want to do completely differently?
EDIT:
I did end up going with a separate, log-specific session dedicated only to adding and committing log info to the database. This seemed to work well until I started integrating a Pyramid console script into the mix, at which point I ran into problems with sessions and database commits within the script not necessarily working like they do in the actual Pyramid web application.
In hindsight (and what I'm doing now), instead of logging to a database I use the standard logging module with FileHandlers (TimedRotatingFileHandlers specifically) and log to the file system.
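For reference, a minimal sketch of that file-based setup (the filename, format, and rotation schedule here are just illustrative assumptions):

import logging
from logging.handlers import TimedRotatingFileHandler

# Rotate the log file at midnight and keep a week of history.
handler = TimedRotatingFileHandler('app.log', when='midnight', backupCount=7)
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(name)s: %(message)s'))

log = logging.getLogger(__name__)
log.addHandler(handler)
log.setLevel(logging.INFO)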
Using transaction.commit() has an unintended side effect: changes to other models get committed too, which is not too cool. The idea behind the "normal" Pyramid session setup with ZopeTransactionExtension is that a single session starts at the beginning of the request; if everything succeeds the session is committed, and if there's an exception everything is rolled back. It's better to keep this logic and avoid committing things manually in the middle of a request.
(As a side note: DBSession.flush() does not commit the transaction - it emits the SQL statements, but the transaction can still be rolled back later.)
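To illustrate, using the transaction package's abort():

DBSession.add(myobject)
DBSession.flush()    # the INSERT is emitted and myobject.id is populated
# ...if the request fails later:
transaction.abort()  # the INSERT above is rolled back; nothing was persisted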
For things like exception logs, I would look at setting up a separate Session which is not bound to Pyramid's request/response cycle (i.e. without ZopeTransactionExtension) and using it to create log records. You'd need to commit the transaction manually after adding a log record:
record = Log("blah")
log_session.add(record)
log_session.commit()
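As a minimal sketch of such a log-only session (log_engine, LogSession, and the connection URL are assumed names/values; Log is the log model from the snippet above):

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

# A separate engine and session used only for logging; note there is no
# ZopeTransactionExtension, so it is independent of the request transaction.
log_engine = create_engine('mysql://user:pwd@localhost/mydb')
LogSession = scoped_session(sessionmaker(bind=log_engine))

def log_to_db(message):
    log_session = LogSession()
    try:
        log_session.add(Log(message))
        log_session.commit()  # committed even if the request transaction aborts
    except Exception:
        log_session.rollback()
        raise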
Related
Since I cannot use Flask-SQLAlchemy (my model definitions and the database part of the app are used in contexts other than Flask), I found several ways to manage sessions and I am not sure what to do.
One thing that everyone seems to agree on (including me) is that a new session should be created at the beginning of each request and be committed and closed when the request has been processed and the response is ready to be sent back to the client.
Currently, I implemented the session management that way:
I have a database initialization Python script which creates the engine (engine = create_engine(app.config["MYSQL_DATABASE_URI"])) and defines the session maker (Session = sessionmaker(bind=engine, expire_on_commit=False)).
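Spelled out with its imports, that initialization script amounts to:

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# app is the Flask application object
engine = create_engine(app.config["MYSQL_DATABASE_URI"])
Session = sessionmaker(bind=engine, expire_on_commit=False)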
In another file I defined two functions decorated with Flask's before_request and teardown_request application decorators.
@app.before_request
def create_db_session():
    g.db_session = Session()

@app.teardown_request
def close_db_session(exception):
    try:
        g.db_session.commit()
    except Exception:
        g.db_session.rollback()
    finally:
        g.db_session.close()
I then use the g.db_session when I need to perform queries: g.db_session.query(models.User.user_id).filter_by(username=username)
Is this a correct way to manage sessions?
I also took a look at the scoped sessions proposed by SQLAlchemy, and this might be another way of doing things, but I am not sure how to change my system to use scoped sessions...
If I understood it well, I would not use the g variable; instead I would always refer to the Session registry declared by Session = scoped_session(sessionmaker(bind=engine, expire_on_commit=False)), and I would not need to initialize a new session explicitly when a request arrives.
I could just perform my queries as usual with Session.query(models.User.user_id).filter_by(username=username), and I would just need to remove the session when the request ends:
@app.teardown_request
def close_db_session(exception):
    Session.commit()
    Session.remove()
I am a bit lost with this session management topic and could use help understanding how to manage sessions. Is there a real difference between the two approaches above?
Your approach of managing the session via flask.g is completely acceptable from my point of view. Whatever we are trying to do with SQLAlchemy, we must remember the basic principles:
Always clean up after yourself. At web application runtime, if you spawn a lot of sessions without .close()-ing them, this will eventually lead to connection overflow at your DB instance. You are handling this by calling finally: session.close().
Maintain session independence. It's not good if various application contexts (requests, threads, etc.) share the same session instance, because it's not deterministic. You are handling this by ensuring only one session runs per request.
The scoped_session can be considered just an alternative to flask.g - it ensures that within one thread, each call to Session() returns the same object: https://docs.sqlalchemy.org/en/13/orm/contextual.html#unitofwork-contextual
It's a SQLAlchemy batteries-included version of your session management code.
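For illustration, here is what the scoped_session variant could look like end to end (a sketch reusing your engine setup):

from flask import Flask
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

app = Flask(__name__)
engine = create_engine(app.config["MYSQL_DATABASE_URI"])
Session = scoped_session(sessionmaker(bind=engine, expire_on_commit=False))

@app.teardown_request
def remove_db_session(exception):
    # scoped_session keeps one session per thread; remove() closes it
    # and returns its connection to the pool at the end of each request.
    try:
        if exception is None:
            Session.commit()
        else:
            Session.rollback()
    finally:
        Session.remove()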
As long as you are using Flask, which is a synchronous framework, I don't think you will have any issues with this setup.
I'd like to do some database checks before initialising the Pyramid app, but I'm not quite sure how to do that.
I could potentially create a fake request through the request factory and get the session from it. But I suspect there's a better way. Can I get a temporary transaction from the app, or the configurator?
If you're using the cookiecutter then it's pretty straightforward. The session factory is stored in config.registry['dbsession_factory'] and you can grab it and use it to make sessions whenever you want, including at config-time. A good way to do it is with a temporary transaction manager, but it's not required.
import transaction
from myapp.models import get_tm_session

def main(...):
    config.include('myapp.models')

    tm = transaction.TransactionManager(explicit=True)
    with tm:
        dbsession = get_tm_session(config.registry['dbsession_factory'], tm)
        # do stuff; the "with tm" block invokes commit if there are no exceptions
I am trying to log statements in a Flask application while an API call is in process. I want to log statements directly to the DB. The problem is that I have db session commits and rollbacks everywhere. I want to log statements regardless of what happens to the API call's session.
For instance
new_log = Log()
db.session.add(new_log)
...
if some_logic:
    db.session.rollback()
else:
    db.session.commit()
In the above code I want new_log to be committed irrespective of "some logic"
I have tried (seemingly successfully) to create a separate db session in my main app file using
db = SQLAlchemy(app)
logger_db = SQLAlchemy(app)
Now my code looks like
new_log = Log()
logger_db.session.add(new_log)
logger_db.session.commit()
...
if some_logic:
    db.session.rollback()
else:
    db.session.commit()
This seems to be working, but I wanted to know if there is a better alternative. Also, if anyone can point out cases in which this solution won't work, that would be great.
P.S.: I will be using logging everywhere in the application, so I am a bit worried about performance. I know I could log to a file instead, but for now I would like to stick to the db approach.
I have an MS-SQL database deployed on AWS RDS that I'm writing a Flask front end for.
I've been following some intro Flask tutorials, all of which seem to pass the DB credentials in the connection string URI. I'm following the tutorial here:
https://medium.com/@rodkey/deploying-a-flask-application-on-aws-a72daba6bb80#.e6b4mzs1l
For deployment, do I prompt for the DB login info and add it to the connection string? If so, where? Using SQLAlchemy, I don't see any calls to create_engine (in the tutorial's code); I just see an initialization using config.from_object, referencing the config.py where SQLALCHEMY_DATABASE_URI is stored, which points to the DB location. Calling config.update(dict(UID='****', PASSWORD='******')) from my application has no effect, and the config dict doesn't seem to have any applicable entries for this purpose. What am I doing wrong?
Or should I be authenticating using Flask-User, and then get rid of the DB level authentication? I'd prefer authenticating at the DB layer, for ease of use.
The tutorial you are using uses Flask-SQLAlchemy to abstract the database setup, which is why you don't see engine.connect().
Frameworks like Flask-SQLAlchemy are designed around the idea that you create a connection pool to the database on launch and share that pool amongst your various worker threads. You will not be able to use that for what you are doing... it takes care of initializing the session and engine early in the process.
Because of your requirements, I don't think you'll be able to make any use of connection pooling. Instead, you'll have to handle connections yourself. The actual connection isn't too hard...
engine = create_engine('dialect://username:password@host/db')
connection = engine.connect()
result = connection.execute("SOME SQL QUERY")
for row in result:
    print(row)  # do something with each row
connection.close()
The issue is that you're going to have to do that in every endpoint. A database connection isn't something you can store in the session - you'd have to store the credentials there and do a connect/disconnect loop in every endpoint you write. Worse, you'd have to figure out either encrypted sessions or server-side sessions (without a db connection!) to keep those credentials in the session from becoming a horrible security leak.
I promise you, it will be easier, both now and in the long run, to figure out a simple way to authenticate users so that they can share a connection pool that is abstracted out of your app endpoints. But if you HAVE to do it this way, this is how you will do it. (Make sure you are closing those connections every time!)
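On that note, one way to guarantee the connection is closed every time is to use it as a context manager - a sketch reusing the query above:

# engine.connect() as a context manager closes the connection
# automatically, even if the query raises.
with engine.connect() as connection:
    result = connection.execute("SOME SQL QUERY")
    for row in result:
        print(row)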
I'm working on an API with Flask and SQLAlchemy, and here's what I would like to do:
I have a client application, working on multiple tablets, that have to send several requests to add content to the server.
But I don't want to use auto-rollback at the end of each API request (the default behavior with flask-sqlalchemy), because the data is sent over multiple requests, like in this very simplified example:
1. beginTransaction/?id=transactionId -> opens a new session for the client making that request. SessionManager.new_session() in the code below.
2. addObject/?id=objectAid -> adds an object to the PostGreSQL database and flush
3. addObject/?id=objectBid -> adds an object to the PostGreSQL database and flush
4. commitTransaction/?id=transactionId -> commits what happened since the beginTransaction. SessionManager.commit() in the code below.
The point here is to not add the data to the server if the client app crashed or lost its connection before the "commitTransaction" was sent, thus preventing incomplete data on the server.
Since I don't want to use auto-rollback, I can't really use flask-SQLAlchemy, so I'm wiring up SQLAlchemy myself in my Flask application, but I'm not sure how to use the sessions.
Here's the implementation I did in __init__.py:
db = create_engine('postgresql+psycopg2://admin:pwd@localhost/postgresqlddb',
                   pool_reset_on_return=False,
                   echo=True, pool_size=20, max_overflow=5)

Base = declarative_base()
metadata = Base.metadata
metadata.bind = db

# create a configured "Session" class
Session = scoped_session(sessionmaker(bind=db, autoflush=False))
class SessionManager(object):
    currentSession = Session()

    @staticmethod
    def new_session():
        # if a session is already opened by the client, close it,
        # then create a new session
        try:
            SessionManager.currentSession.rollback()
            SessionManager.currentSession.close()
        except Exception as e:
            print(e)
        SessionManager.currentSession = Session()
        return SessionManager.currentSession

    @staticmethod
    def flush():
        try:
            SessionManager.currentSession.flush()
            return True
        except Exception as e:
            print(e)
            SessionManager.currentSession.rollback()
            return False

    @staticmethod
    def commit():
        # commit and close the session, then create a new session in case
        # the client makes a single request without using beginTransaction/
        try:
            SessionManager.currentSession.commit()
            SessionManager.currentSession.close()
            SessionManager.currentSession = Session()
            return True
        except Exception as e:
            print(e)
            SessionManager.currentSession.rollback()
            SessionManager.currentSession.close()
            SessionManager.currentSession = Session()
            return False
But now the API doesn't work when several clients make requests; it seems like every client shares the same session.
How should I implement the sessions so that each client has a different session and can make requests concurrently?
Thank you.
You seem to want several HTTP requests to share one transaction. That's impossible - it's incompatible with the stateless nature of HTTP.
Consider, for example, a client that opens a transaction and then fails to close it because it has lost connectivity. The server has no way of knowing this and would leave the transaction open forever, possibly blocking other clients.
Using transactions to bundle database requests is reasonable, for example, for performance reasons when there's more than one write operation, or for keeping the database consistent. But a transaction always has to be committed or rolled back within the same HTTP request it was opened in.
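In practice, that means the client sends everything it wants persisted in a single request, and the server commits it in one transaction. A rough sketch (the route, payload shape, and MyObject model are assumptions; Session is the scoped_session from your code):

from flask import request

@app.route('/addObjects', methods=['POST'])
def add_objects():
    session = Session()
    try:
        for obj_id in request.json['ids']:  # all objects arrive in one request
            session.add(MyObject(id=obj_id))
        session.commit()  # the transaction is opened and closed in this request
        return 'ok'
    except Exception:
        session.rollback()  # nothing partial is persisted
        raise
    finally:
        Session.remove()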
I know this is an old thread, but you can achieve this with djondb (a NoSQL database).
With djondb you can create transactions, and if something goes wrong - e.g. you lose the connection - it does not matter: the transaction can sit there forever without affecting performance or creating locks. djondb was built to support long-term transactions, so you can open a transaction, use it, commit it, roll it back, or just discard it (close the connection and forget it was there), and it won't leave the database in an inconsistent state.
I know this may sound weird to relational folks, but that's the beauty of NoSQL: it creates new paradigms supporting what SQL folks say is impossible.
Hope this helps,