Triggering connection pools with SQLAlchemy in Flask - Python

I am using Flask + SQLAlchemy (DB is Postgres) for my server, and am wondering how connection pooling happens. I know that it is enabled by default with a pool size of 5, but I don't know if my code triggers it.
Assuming I use the default Flask-SQLAlchemy bridge:
db = SQLAlchemy(app)
And then use that object to place database calls like
db.session.query(......)
How does Flask-SQLAlchemy manage the connection pool behind the scenes? Does it grab a new session every time I access db.session? When is this object returned to the pool (assuming I don't store it in a local variable)?
What is the correct pattern to write code to maximize concurrency + performance? If I access the DB multiple times in one serial method, is it a good idea to use db.session every time?
I was unable to find documentation on this matter, so I don't know what is happening behind the scenes (the code works, but will it scale?).
Thanks!

You can use event registration - http://docs.sqlalchemy.org/en/latest/core/event.html#event-registration
There are many different event types that can be monitored (checkout, checkin, connect, etc.): http://docs.sqlalchemy.org/en/latest/core/events.html
Here is a basic example from the docs that prints a message when a new connection is established.
from sqlalchemy.event import listen
from sqlalchemy.pool import Pool

def my_on_connect(dbapi_con, connection_record):
    print("New DBAPI connection:", dbapi_con)

listen(Pool, 'connect', my_on_connect)
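To see when db.session actually grabs a connection and when it goes back to the pool, you could hook the checkout and checkin events the same way; a minimal sketch (the print messages are illustrative):

from sqlalchemy.event import listen
from sqlalchemy.pool import Pool

def on_checkout(dbapi_con, connection_record, connection_proxy):
    # fires when a connection is taken from the pool, e.g. when a
    # db.session query first needs one
    print("connection checked out:", dbapi_con)

def on_checkin(dbapi_con, connection_record):
    # fires when the connection goes back to the pool, e.g. after the
    # session commits, rolls back, or is removed
    print("connection checked in:", dbapi_con)

listen(Pool, 'checkout', on_checkout)
listen(Pool, 'checkin', on_checkin)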

Related

SQLAlchemy - work with connections to DB and Sessions (not clear behavior and part in documentation)

I use SQLAlchemy (a really good ORM, but its documentation is not clear enough) for communicating with PostgreSQL.
Everything was great until one case when PostgreSQL "crashed" because the maximum connection limit was reached: no more connections allowed (max_client_conn).
That case made me think I was doing something wrong. After a few experiments I figured out how not to face that issue again, but some questions remain.
Below you'll see code examples (in Python 3+, PostgreSQL settings are default) without and with the mentioned issue, and what I'd like to hear eventually are answers to the following questions:
1. What exactly does the context manager do with connections and sessions? Close the session and dispose of the connection, or what?
2. Why does the first working example of code behave like the example with the issue without NullPool as poolclass in the "connect" method?
3. Why in the first example did I get only 1 connection to the DB for all queries, but in the second example I got a separate connection for each query? (Please correct me if I understood it wrong; I was checking it with "pgbouncer".)
4. What are the best practices for opening and closing connections (and/or working with a Session) when you use SQLAlchemy and a PostgreSQL DB for multiple instances of a script (or separate threads in a script) that listens for requests and has to have a separate session for each of them? (I mean raw SQLAlchemy, not Flask-SQLAlchemy or something like that.)
Working example of code without issue:
Making the connection to the DB:

import sqlalchemy
from sqlalchemy.pool import NullPool  # does not work without NullPool, why?

def connect(user, password, db, host='localhost', port=5432):
    """Returns a connection and a metadata object"""
    url = 'postgresql://{}:{}@{}:{}/{}'.format(user, password, host, port, db)
    temp_con = sqlalchemy.create_engine(url, client_encoding='utf8', poolclass=NullPool)
    temp_meta = sqlalchemy.MetaData(bind=temp_con, reflect=True)
    return temp_con, temp_meta
Function to get a session to work with the DB:

from contextlib import contextmanager
from sqlalchemy.orm import sessionmaker

@contextmanager
def session_scope():
    """Provide a transactional scope around a series of operations."""
    con_loc, meta_loc = connect(db_user, db_pass, db_instance, 'localhost')
    Session = sessionmaker(bind=con_loc)
    session = Session()
    try:
        yield session
        session.commit()
    except:
        session.rollback()
        raise
Query example:

with session_scope() as session:
    entity = session.query(SomeEntity).first()
Failing example of code:
Function to get a session to work with the DB:

def create_session():
    # the connect method is the same as in the first example
    con, meta = connect(db_user, db_pass, db_instance, 'localhost')
    Session = sessionmaker(bind=con)
    session = Session()
    return session
Query example:

session = create_session()
entity = session.query(SomeEntity).first()
Hope you got the main idea
First of all you should not create engines repeatedly in your connect() function. The usual practice is to have a single global Engine instance per database URL in your application. The same goes for the Session class created by the sessionmaker().
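A minimal sketch of that layout (the module name db.py and the URL are illustrative): create the Engine and the Session class once at module level and import them everywhere else.

# db.py - created once, imported everywhere else
import sqlalchemy
from sqlalchemy.orm import sessionmaker

engine = sqlalchemy.create_engine('postgresql://user:pass@localhost:5432/db')  # one Engine per database URL
Session = sessionmaker(bind=engine)  # one Session class per application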
What exactly does the context manager do with connections and sessions? Close the session and dispose of the connection, or what?
What you've programmed it to do, and if this seems unclear, read about context managers in general. In this case it commits the session, or rolls it back if an exception was raised within the block governed by the with statement. Both actions return the connection used by the session to the pool, which in your case is a NullPool, so the connection is simply closed.
Why does the first working example of code behave like the example with the issue without NullPool as poolclass in the "connect" method?
and
from sqlalchemy.pool import NullPool # does not work without NullPool, why?
Without NullPool the engines you repeatedly create also pool connections, so if they for some reason do not go out of scope, or their refcounts are otherwise not zeroed, they will hold on to the connections even if the sessions return them. It is unclear if the sessions go out of scope timely in the second example, so they might also be holding on to the connections.
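If for some reason a throwaway engine is unavoidable, its pooled connections can be released explicitly; a sketch using Engine.dispose() (the URL is illustrative):

import sqlalchemy

url = 'postgresql://user:pass@localhost:5432/db'  # illustrative URL
engine = sqlalchemy.create_engine(url)
try:
    with engine.connect() as conn:
        conn.execute(sqlalchemy.text("SELECT 1"))
finally:
    engine.dispose()  # closes every connection held by this engine's pool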
Why in the first example did I get only 1 connection to the DB for all queries, but in the second example I got a separate connection for each query? (Please correct me if I understood it wrong; I was checking it with "pgbouncer".)
The first example ends up closing the connection due to the use of the context manager that handles transactions properly and the NullPool, so the connection is returned to the bouncer, which is another pool layer.
The second example might never close the connections because it lacks the transaction handling, but that's unclear due to the example given. It also might be holding on to connections in the separate engines that you create.
The 4th point of your question set is pretty much covered by the official documentation in "Session Basics", especially "When do I construct a Session, when do I commit it, and when do I close it?" and "Is the session thread-safe?".
There's one exception: multiple instances of the script. You should not share an engine between processes, so in order to pool connections between them you need an external pool such as PgBouncer.
What exactly does the context manager do with connections and sessions? Close the session and dispose of the connection, or what?
The context manager in Python is used to create a runtime context for use with the with statement. Simply, when you run the code:
with session_scope() as session:
    entity = session.query(SomeEntity).first()
session is the yielded session. So, to your question of what the context manager does with the connections and sessions, all you have to do is look at what comes after the yield. In this case it's just:
try:
    yield session
    session.commit()
except:
    session.rollback()
    raise
If you trigger no exceptions, it will be session.commit(), which according to the SQLAlchemy docs will "Flush pending changes and commit the current transaction."
Why does the first working example of code behave like the example with the issue without NullPool as poolclass in the "connect" method?
The poolclass argument just tells SQLAlchemy which subclass of Pool to use. However, in the case where you pass NullPool here, you are telling SQLAlchemy not to use a pool. You're effectively disabling connection pooling when you pass in NullPool. From the docs: "to disable pooling, set poolclass to NullPool instead." I can't say for sure, but using NullPool is probably contributing to your max_connections issues.
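If the goal is to bound connection usage rather than disable pooling, the default QueuePool can be kept and sized explicitly; a sketch (the URL and values are illustrative):

import sqlalchemy

engine = sqlalchemy.create_engine(
    'postgresql://user:pass@localhost:5432/db',  # illustrative URL
    pool_size=5,       # connections the pool keeps open
    max_overflow=10,   # extra connections allowed under load
)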
Why in the first example did I get only 1 connection to the DB for all queries, but in the second example I got a separate connection for each query? (Please correct me if I understood it wrong; I was checking it with "pgbouncer".)
I'm not exactly sure. I think this has to do with how, in the first example, you are using a context manager, so everything within the with block uses the single session yielded by the generator. In your second example, you created a function that initializes a new Session and returns it, so you're not getting back a generator. I also think this has to do with your use of NullPool, which prevents connection pooling: with NullPool, each query execution acquires a connection on its own.
What are the best practices for opening and closing connections (and/or working with a Session) when you use SQLAlchemy and a PostgreSQL DB for multiple instances of a script (or separate threads in a script) that listens for requests and has to have a separate session for each of them? (I mean raw SQLAlchemy, not Flask-SQLAlchemy or something like that.)
See the section Is the session thread-safe? for this, but you need to take a "share nothing" approach to your concurrency. So in your case, each instance of the script needs to share nothing with the others.
You probably want to check out Working with Engines and Connections. I don't think messing with sessions is where you want to be if concurrency is what you're working on. There's more information about the NullPool and concurrency there:
For a multiple-process application that uses the os.fork system call, or for example the Python multiprocessing module, it's usually required that a separate Engine be used for each child process. This is because the Engine maintains a reference to a connection pool that ultimately references DBAPI connections - these tend to not be portable across process boundaries. An Engine that is configured not to use pooling (which is achieved via the usage of NullPool) does not have this requirement.
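A sketch of that per-child-process pattern (the worker function and URL are illustrative): each child creates its own engine after the fork and disposes of it when done.

import multiprocessing

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

def worker(url):
    engine = create_engine(url)  # created inside the child, after the fork
    Session = sessionmaker(bind=engine)
    session = Session()
    try:
        # ... query and commit via session ...
        session.commit()
    finally:
        session.close()
        engine.dispose()  # release this child's pooled connections

if __name__ == '__main__':
    url = 'postgresql://user:pass@localhost:5432/db'  # illustrative URL
    for _ in range(4):
        multiprocessing.Process(target=worker, args=(url,)).start()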
@Ilja Everilä's answer was mostly helpful.
I'll leave the edited code here; maybe it'll help someone.
The new code, which works as I expected, follows.
Making the connection to the DB:
import sqlalchemy
from sqlalchemy.pool import NullPool  # will work even without NullPool in the code

def connect(user, password, db, host='localhost', port=5432):
    """Returns a connection and a metadata object"""
    url = 'postgresql://{}:{}@{}:{}/{}'.format(user, password, host, port, db)
    temp_con = sqlalchemy.create_engine(url, client_encoding='utf8', poolclass=NullPool)
    temp_meta = sqlalchemy.MetaData(bind=temp_con, reflect=True)
    return temp_con, temp_meta
One engine and one sessionmaker instance per app, for example where your main function lives:

from sqlalchemy.orm import sessionmaker

# create one engine and one sessionmaker per instance of the app
# (to avoid creating them repeatedly)
con, meta = connect(db_user, db_pass, db_instance, db_host)
session_maker = sessionmaker(bind=con)
Function to get a session using a with statement:

from contextlib import contextmanager
from sqlalchemy.orm import Session

from some_place import session_maker

@contextmanager
def session_scope() -> Session:
    """Provide a transactional scope around a series of operations."""
    session = session_maker()  # create a session from the SQLAlchemy sessionmaker
    try:
        yield session
        session.commit()
    except:
        session.rollback()
        raise
Wrap the transaction and use the session:

with session_scope() as session:
    entity = session.query(SomeEntity).first()

Leaking database connections: PostgreSQL, SQLAlchemy, Flask

I'm running PostgreSQL 9.3 and SQLAlchemy 0.8.2 and am experiencing leaking database connections. After deploying, the app consumes around 240 connections. Over the next 30 hours this number gradually grows to 500, at which point PostgreSQL starts dropping connections.
I use SQLAlchemy thread-local sessions:
from sqlalchemy import orm, create_engine
engine = create_engine(os.environ['DATABASE_URL'], echo=False)
Session = orm.scoped_session(orm.sessionmaker(engine))
For the Flask web app, the .remove() call on the Session proxy object is sent during request teardown:
@app.teardown_request
def teardown_request(exception=None):
    if not app.testing:
        Session.remove()
This should be the same as what Flask-SQLAlchemy is doing.
I also have some periodic tasks that run in a loop, and I call .remove() for every iteration of the loop:
def run_forever():
    while True:
        do_stuff(Session)
        Session.remove()
What am I doing wrong which could lead to a connection leak?
If I remember correctly from my experiments with SQLAlchemy, the scoped_session() is used to create sessions that you can access from multiple places. That is, you create a session in one method and use it in another without explicitly passing the session object around.
It does that by keeping a list of sessions and associating them with a "scope ID". By default, to obtain a scope ID, it uses the current thread ID, so you have one session per thread. You can supply a scopefunc to provide, for example, one ID per request:
# This is (approx.) what flask-sqlalchemy does:
from flask import _request_ctx_stack as context_stack

Session = orm.scoped_session(orm.sessionmaker(engine),
                             scopefunc=context_stack.__ident_func__)
Also, take note of the other answers and comments about doing background tasks.
First of all, this is a really bad way to run background tasks. Try an async scheduler like Celery.
Not 100% sure so this is a bit of a guess based on the information provided, but I wonder if each page load is starting a new db connection which is then listening for notifications. If this is the case, I wonder if the db connection is effectively removed from the pool and so gets created on the next page load.
If this is the case, my recommendation would be to have a separate DBI database handle dedicated to listening for notifications so that these are not active in the queue. This might be done outside your workflow.
Also:
In particular, the leak happens when making more than one simultaneous request. At the same time, I could see that some of the requests were left with uncompleted query executions and timed out. You can write something to manage this yourself.
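One way to watch for a leak like this is to log the pool state between requests; a debugging sketch (log_pool_status is an illustrative helper; QueuePool.status() returns a one-line summary of the pool):

import logging

logging.basicConfig(level=logging.INFO)

def log_pool_status(engine):
    # if the number of checked-out connections keeps growing between
    # requests, something is not returning them to the pool
    logging.info("pool status: %s", engine.pool.status())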

How to turn off MySQL query cache while using SQLAlchemy?

I am working with a fairly large MySQL database via the SQLAlchemy library, and I'd love to turn off MySQL's query caching to debug performance issues on a per-session basis. It's difficult to debug slow queries when repeating them results in much faster execution. With the CLI MySQL client, I can execute SET SESSION query_cache_type = OFF; to achieve the result I'm looking for, and I would like to run this on every SQLAlchemy session (when I'm debugging).
But I can't figure out how to configure SQLAlchemy such that it runs SET SESSION query_cache_type = OFF when it instantiates a new database session.
I've looked at the Engine and Connection docs, but can't seem to find anything.
Is there something obvious that I'm missing, or a better way of doing this?
Use an event hook immediately after you define your engine:
from sqlalchemy import create_engine, event

def disable_query_cache(conn, record):
    conn.cursor().execute("SET SESSION query_cache_type = OFF")

# this is probably in your Pyramid setup code
engine = create_engine(...)
if DEBUGGING:
    event.listen(engine, 'connect', disable_query_cache)
You can do this globally by adding the hook to the Pool class itself, but (a) you probably want the Pyramid settings available anyway so you can decide whether or not to add the hook, and (b) global state is bad :)
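For reference, the global variant mentioned above looks like this; a sketch that reuses disable_query_cache from the block above and attaches it to the Pool class:

from sqlalchemy import event
from sqlalchemy.pool import Pool

# applies to every engine created afterwards, not just one
event.listen(Pool, 'connect', disable_query_cache)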

Proper scoping to instantiate a Connection() object in Pymongo

I'm running a Flask-based web app that uses MongoDB (with PyMongo for use in Python). Nearly every view accesses the database, so I want to make the most effective use of memory and CPU resources. I'm unsure what the most efficient method is for instantiating pymongo's Connection() object, which is used to access and manipulate the database. Right now, I declare from pymongo import Connection at the top of my file, and then at the beginning of each view function I have:
def sampleViewFunction():
    myCollection = Connection()['myDB']['myCollection']
    ## then use myCollection to manipulate the database
    ## more code...
The other way I could do it is declare at the top of my file:
from pymongo import Connection
myCollection = Connection()['myDB']['myCollection']
And then later on, your code would just read:
def sampleViewFunction():
    ## no declaration of myCollection since it's a global variable
    ## then use myCollection to manipulate the database
    ## more code...
So the only difference is the declaration scope of myCollection. How do these two methods differ in memory handling and CPU consumption? Since this is a web application, I'm thinking about scenarios where multiple users are on the site simultaneously. I imagine there's a difference in the lifespan of the connection to the database, which I'm guessing could impact performance.
You should use the second method. When you create a connection in pymongo, you create a connection pool by default. See the documentation for more details. This is the correct way of doing things. The default max_pool_size is 10, so this will give you 10 connections to your mongod instance(s). If you did it the other way and created a pool per function call, you would:
1. Be creating and destroying a connection with each function call, which wastes resources (both RAM and CPU).
2. Have no control over how many connections your code creates to the mongod; you could flood the mongod with connections.
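A sketch of the module-level pattern being recommended (this uses the legacy Connection API discussed above; current PyMongo replaces it with MongoClient):

from pymongo import Connection

connection = Connection(max_pool_size=10)  # one shared pool for the whole process
myCollection = connection['myDB']['myCollection']

def sampleViewFunction():
    # reuses pooled sockets instead of opening a new connection per request
    return myCollection.find_one()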

Multi-threaded use of SQLAlchemy

I want to make a database application programming interface written in Python and using SQLAlchemy (or another database connector, if using SQLAlchemy for this kind of task is not the right way to go). The setup is a MySQL server running on Linux or BSD, and the Python software running on a Linux or BSD machine (either remote or local).
Basically, what I want to do is spawn a new thread for each connection, and the protocol would be custom and quite simple; for each request I would like to open a new transaction (or session, as I have read) and then I need to commit the session. The problem I am facing right now is that there is a high probability that other sessions happen at the same time, from other connections.
My question here is what should I do to handle this situation?
Should I use a lock so only a single session can run at the same time?
Are sessions actually thread-safe, and am I wrong in thinking that they are not?
Is there a better way to handle this situation?
Is threading the wrong way to go?
Session objects are not thread-safe, but are thread-local. From the docs:
"The Session object is entirely designed to be used in a non-concurrent fashion, which in terms of multithreading means "only in one thread at a time" .. some process needs to be in place such that mutltiple calls across many threads don’t actually get a handle to the same session. We call this notion thread local storage."
If you don't want to do the work of managing threads and sessions yourself, SQLAlchemy has the ScopedSession object to take care of this for you:
The ScopedSession object by default uses threading.local() as storage, so that a single Session is maintained for all who call upon the ScopedSession registry, but only within the scope of a single thread. Callers who call upon the registry in a different thread get a Session instance that is local to that other thread.
Using this technique, the ScopedSession provides a quick and relatively simple way of providing a single, global object in an application that is safe to be called upon from multiple threads.
See the examples in Contextual/Thread-local Sessions for setting up your own thread-safe sessions:
# set up a scoped_session
from sqlalchemy.orm import scoped_session
from sqlalchemy.orm import sessionmaker
session_factory = sessionmaker(bind=some_engine)
Session = scoped_session(session_factory)
# now all calls to Session() will create a thread-local session
some_session = Session()
# you can now use some_session to run multiple queries, etc.
# remember to close it when you're finished!
Session.remove()
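Building on that scoped_session setup, a sketch of the per-connection-thread pattern the question describes (handle_request and the request payload are illustrative):

import threading

def handle_request(raw_request):
    session = Session()  # thread-local session from the scoped_session registry above
    try:
        # ... process raw_request and query/commit via session ...
        session.commit()
    except:
        session.rollback()
        raise
    finally:
        Session.remove()  # discard this thread's session when the work is done

threading.Thread(target=handle_request, args=("example request",)).start()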
