I'm running a suite of fairly straightforward test cases using Flask, SQLAlchemy, and PostgreSQL. Using an application factory, I've defined a base unit test class like so:
class BaseTestCase(unittest.TestCase):
    def setUp(self):
        self.app = create_app()
        self.app.config.from_object('app.config.test')
        self.api_base = '/api/v1'
        self.ctx = self.app.test_request_context()
        self.ctx.push()
        self.client = self.app.test_client()
        db.create_all()

    def tearDown(self):
        db.session.remove()
        db.drop_all(app=self.app)
        print db.engine.pool.status()
        if self.ctx is not None:
            self.ctx.pop()
All goes well for a few unit tests, up until:
OperationalError: (OperationalError) FATAL: remaining connection slots are reserved for non-replication superuser connections
It seems that there is only 1 connection in the pool (db.engine.pool.status() for each test shows: Pool size: 5, Connections in pool: 1, Current Overflow: -4, Current Checked out connections: 0), but somehow the app never disconnects. Clearly a new app instance is created for each test case, which seems fine according to the documentation. If I move the app creation to the module level, it works fine.
Does anyone know why this is happening?
Thanks
Add db.get_engine(self.app).dispose() after db.drop_all(). Dropping the tables does not close the engine's pooled connections, so each per-test app leaves its pool's connections open until the process exits; dispose() closes them.
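The effect of dispose() can be seen with a plain SQLAlchemy engine. This is a sketch using a throwaway SQLite file in place of PostgreSQL; the pool mechanics are the same:

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

# A file-backed SQLite database stands in for PostgreSQL here.
engine = create_engine("sqlite:///dispose_demo.db", poolclass=QueuePool)

conn = engine.connect()
conn.close()                           # returns the connection to the pool, still open
checked_in = engine.pool.checkedin()   # one idle connection kept alive for reuse

engine.dispose()                       # closes every pooled connection
after = engine.pool.checkedin()        # the fresh pool holds no connections
```

Without the dispose() call, every test's engine keeps its idle connections alive, which is exactly how the per-test apps exhaust the PostgreSQL connection slots.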
Related
I'm using SQLAlchemy in my Flask app, and after a few requests have been made the app fails, with this error emitted:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server at ..., port 5432 failed: FATAL: too many connections for role "...". I then have to restart the app to get it working again.
It seems as though with each web request, a database connection is being created but not removed. I can't figure out why that is, as I've added scoped_session.remove() to an @app.teardown_request function.
Here is my code...
app.py
...
@app.before_request
def before_request():
    global database
    database = PostgreSQL()

@app.teardown_request
def teardown(response):
    database.session.remove()
...
PostgreSQL.py
class PostgreSQL:
    def __init__(self):
        self.database_url = os.environ.get('DATABASE_URL')
        if self.database_url.startswith('postgres://'):
            self.database_url = self.database_url.replace('postgres://', 'postgresql://')
        self.engine = create_engine(self.database_url, echo=False)
        Base.metadata.create_all(self.engine, checkfirst=True)
        self.session = scoped_session(sessionmaker(bind=self.engine))
...
I guess I'd never have run into this problem if I used Flask-SQLAlchemy (which appears to handle things like database sessions for you) from the outset, but I'm keen to follow this problem through without resorting to that additional layer of abstraction - so please refrain from mentioning that module, unless it really is the only viable option here.
EDIT: Figured out the problem: I needed to call database.engine.dispose() in the @app.teardown_request handler. Since a new PostgreSQL() (and therefore a new engine with its own pool) is created on every request, scoped_session.remove() alone only returns the connection to a pool that is never reused and never closed.
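The leak can be reproduced outside Flask entirely: because PostgreSQL() builds a fresh engine per request, each request leaves behind a pool that still holds an open connection. A sketch of the pattern, with SQLite standing in for PostgreSQL:

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
from sqlalchemy.pool import QueuePool

engines = []
for _ in range(3):                     # simulate three web requests
    engine = create_engine("sqlite:///leak_demo.db", poolclass=QueuePool)
    session = scoped_session(sessionmaker(bind=engine))
    session.connection()               # force a connection checkout, as a query would
    session.remove()                   # returns the connection to THIS engine's pool only
    engines.append(engine)

# Each abandoned per-request engine still holds one idle, open connection:
leaked = sum(e.pool.checkedin() for e in engines)

for e in engines:
    e.dispose()                        # the fix: close the pool along with the request
```

With a real PostgreSQL server, those idle connections accumulate until the role's connection limit is hit, which matches the "too many connections" error above.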
After a working production app was connected to a local Mongo instance, we decided to move to Mongo Atlas. This caused production errors.
Our stack is docker -> alpine 3.6 -> python 2.7.13 -> Flask -> uwsgi (2.0.17) -> nginx, running on AWS, with:
flask-mongoengine-0.9.3 mongoengine-0.14.3 pymongo-3.5.1
When starting the app in staging/production, uwsgi logs: No replica set members found yet.
We don't know why.
We've tried different connection settings, e.g. connect: False, which makes the connection lazy: it is established not at initialization but on the first query.
That caused nginx to fail with a "resource temporarily unavailable" error on some of our apps; we had to restart multiple times before the app finally started serving requests.
I think the issue is with pymongo and the fact that it's not fork-safe
(http://api.mongodb.com/python/current/faq.html?highlight=thread#id3)
while uwsgi uses forked workers.
I suspect it might be related to the way my app is being initialized, and that it may go against "Using PyMongo with Multiprocessing".
here is the app init code:
from app import FlaskApp
from flask import current_app
app = None
from flask_mongoengine import MongoEngine
import logging

application, app = init_flask_app(app_instance, module_name='my_module')

def init_flask_app(app_instance, **kwargs):
    app_instance.init_instance(environment_config)
    application = app_instance.get_instance()
    app = application.app
    return application, app

# app_instance.py
import FlaskApp

def init_instance(env):
    global app
    app = FlaskApp(env)
    return app

def get_instance():
    if globals().get('app') is None:
        app = current_app.flask_app_object
    else:
        app = globals().get('app')
    assert app is not None
    return app

class FlaskApp(object):
    def __init__(self, env):
        .....
        # Initialize the DB
        self.db = Database(self.app)
        ....
        # used in app_instance.py to get the flask app object in case it's None
        self.app.flask_app_object = self

    def run_server(self):
        self.app.run(host=self.app.config['HOST'], port=self.app.config['PORT'], debug=self.app.config['DEBUG'])

class Database(object):
    def __init__(self, app):
        self.db = MongoEngine(app)

    def drop_all(self, database_name):
        logging.warn("Dropping database %s" % database_name)
        self.db.connection.drop_database(database_name)

if __name__ == '__main__':
    application.run_server()
Help in debugging this will be appreciated!
I am hosting a Flask application on PythonAnywhere, where I have to make a few queries to the database. While using MySQLdb I am able to close all the connections to the database and don't get any error, but while using SQLAlchemy the connections to the database somehow do not get closed.
This is my connection manager class which has a method defined to close the database connection.
class ConnectionManager:
    def __init__(self):
        self.base = declarative_base()

    def get_db_session(self):
        self.engine = create_engine(get_db_path())
        self.base.metadata.bind = self.engine
        self.session_maker = sessionmaker(bind=self.engine)
        self.session = self.session_maker()
        return self.session

    def persist_in_db(self, record):
        session = self.get_db_session()
        session.add(record)
        session.commit()
        session.close()

    def close_session(self):
        self.session.close()
        self.session_maker.close_all()
        del self.session
        del self.session_maker
        del self.engine
        #self.engine.dispose()
Before returning the response from the app I call the close_session method.
So my question basically is: where am I going wrong conceptually, and how can I completely close the database connections?
This is caused by connection pooling. You can disable connection pooling by using NullPool.
from sqlalchemy.pool import NullPool

self.engine = create_engine(get_db_path(), poolclass=NullPool)
Be careful though, this may not be a good idea in a web app if each web request needs a DB connection.
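The difference is easy to observe directly. A sketch with SQLite standing in for the real database: with the default QueuePool a closed connection is kept alive for reuse, while NullPool opens and closes a real DBAPI connection on every checkout:

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool, QueuePool

# Default-style pooling: close() only returns the connection to the pool.
pooled = create_engine("sqlite:///pool_demo.db", poolclass=QueuePool)
conn = pooled.connect()
conn.close()
kept_open = pooled.pool.checkedin()       # one connection still open, awaiting reuse
pooled.dispose()

# NullPool: close() really closes the underlying DBAPI connection.
unpooled = create_engine("sqlite:///pool_demo.db", poolclass=NullPool)
conn = unpooled.connect()
conn.close()                              # nothing is retained after this point
pool_name = type(unpooled.pool).__name__  # "NullPool"
```

This is why NullPool stops connections from piling up, at the cost of a fresh connect on every request.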
I have a REST API hosted using the Bottle web framework, and I would like to run integration tests for it. As part of the tests, I need to start a local instance of the bottle server, but bottle's run() blocks the calling thread. How do I create integration tests against a local instance of the server?
I want to start the server during setUp and stop it after running all my tests.
Is this possible with the bottle web framework?
I was able to do it using multithreading. If there is a better solution, I will consider it.
def setUp(self):
    from thread import start_new_thread
    start_new_thread(start_bottle, (), {})

def my_test():
    # Run the tests here, making HTTP calls to the resources hosted by the above bottle server instance
UPDATE
import time
from thread import start_new_thread

from bottle import run

class TestBottleServer(object):
    """
    Starts a local instance of bottle container to run the tests against.
    """
    is_running = False

    def __init__(self, app=None, host="localhost", port=3534, debug=False, reloader=False, server="tornado"):
        self.app = app
        self.host = host
        self.port = port
        self.debug = debug
        self.reloader = reloader
        self.server = server

    def ensured_bottle_started(self):
        if TestBottleServer.is_running is False:
            start_new_thread(self.__start_bottle__, (), {})
            # Sleep is required for the forked thread to initialise the app
            TestBottleServer.is_running = True
            time.sleep(1)

    def __start_bottle__(self):
        run(
            app=self.app,
            host=self.host,
            port=self.port,
            debug=self.debug,
            reloader=self.reloader,
            server=self.server)

    @staticmethod
    def restart():
        TestBottleServer.is_running = False
        TEST_BOTTLE_SERVER.ensured_bottle_started()

TEST_BOTTLE_SERVER = TestBottleServer()
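The same idea can be exercised with only the standard library. A sketch using wsgiref in place of bottle's run(); binding to port 0 lets the OS pick a free port, which avoids collisions on the fixed 3534:

```python
import threading
from urllib.request import urlopen
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # Stand-in for the real bottle app under test.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

server = make_server("localhost", 0, app)   # port 0: pick any free port
port = server.server_address[1]

# Serve in a background thread, as start_new_thread does above.
thread = threading.Thread(target=server.serve_forever, daemon=True)
thread.start()

body = urlopen("http://localhost:%d/" % port).read()
server.shutdown()                           # clean stop instead of killing the thread
```

Because make_server() binds the socket before serve_forever() runs, no sleep is needed: requests made immediately after thread.start() are queued and handled.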
I'm trying to learn the Flask technology stack, and for my application I'm using Flask-SQLAlchemy. Everything works perfectly, but I'm struggling with writing integration tests. I don't want to use SQLite, since in production I'm using PostgreSQL, and putting in tons of mocks would really test my own implementation rather than the logic itself.
So, after some research I decided to implement tests that write data to a test database and roll back the changes after each test (for performance's sake). I'm trying to implement something similar to this approach: http://sontek.net/blog/detail/writing-tests-for-pyramid-and-sqlalchemy.
My problem is creating the correct transaction and being able to roll it back. Here is the code of my base class:
from flask.ext.sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class MyAppIntegrationTestCase(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        app.config['TESTING'] = True
        app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql+psycopg2:///db_test'
        init_app()
        db.app = app
        db.create_all(app=app)

    @classmethod
    def tearDownClass(cls):
        db.drop_all(app=app)

    def setUp(self):
        db.session.rollback()
        self.trans = db.session.begin(subtransactions=True)

    def tearDown(self):
        self.trans.rollback()
When I'm trying to execute tests I got a following error:
Traceback (most recent call last):
File "myapp/src/core/tests/__init__.py", line 53, in tearDown
self.trans.rollback()
File "myapp/venv/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 370, in rollback
self._assert_active(prepared_ok=True, rollback_ok=True)
File "myapp/venv/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 203, in _assert_active
raise sa_exc.ResourceClosedError(closed_msg)
ResourceClosedError: This transaction is closed
I bet that this is a problem with scoped_session (when I'm running tests it reuses one global session for all tests), but my knowledge of SQLAlchemy is not deep enough yet.
Any help will be highly appreciated!
Thanks!
Your tearDownClass and setUpClass are causing the issues.
setUpClass is called once before all the tests in the class, and tearDownClass once after all of them.
So if you have 3 tests:
setUpClass is called
setUp is called
tearDown is called (you roll back, but you don't begin a new session, which throws an error)
setUp is called (another rollback that's going to error)
etc...
Add a db.session.begin() to your tearDown and you'll be fine.
from flask.ext.sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class MyAppIntegrationTestCase(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        app.config['TESTING'] = True
        app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql+psycopg2:///db_test'
        init_app()
        db.app = app
        db.create_all(app=app)

    @classmethod
    def tearDownClass(cls):
        db.drop_all(app=app)

    def setUp(self):
        db.session.rollback()
        self.trans = db.session.begin(subtransactions=True)

    def tearDown(self):
        self.trans.rollback()
        db.session.begin()
I wrote a blog post on how to set this up. In short, you have to create a nested transaction so that any session.commit() calls inside your application don't break your isolation, then apply a listener to the inner transaction to restart it any time someone tries to commit it or roll it back: Setup Flask-Sqlalchemy Transaction Test Case
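Stripped of Flask, the per-test rollback pattern looks like this. A sketch against a file-backed SQLite database for illustration; the nested-transaction/SAVEPOINT listener from the blog post sits on top of this same idea:

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///txdemo.db")

# setUpClass: create the schema.
with engine.begin() as conn:
    conn.execute(text("DROP TABLE IF EXISTS users"))
    conn.execute(text("CREATE TABLE users (name TEXT)"))

# setUp: open an outer transaction the test will run inside.
conn = engine.connect()
trans = conn.begin()

# The test writes data; it is visible inside the transaction...
conn.execute(text("INSERT INTO users VALUES ('alice')"))
seen_in_test = conn.execute(text("SELECT COUNT(*) FROM users")).scalar()

# tearDown: roll the whole transaction back.
trans.rollback()
conn.close()

# ...but nothing survives into the next test.
with engine.connect() as conn:
    seen_after = conn.execute(text("SELECT COUNT(*) FROM users")).scalar()
```

The nested-transaction trick is only needed because application code calling session.commit() would otherwise end this outer transaction early.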
A possible solution to your question:
If your database is not very large and you want to keep its data unchanged, you can back each table up in setUp by issuing plain SQL:
"CREATE TABLE {0}_backup AS SELECT * FROM {0}".format(table_name)
and restore it in tearDown:
"DROP TABLE {0}".format(table_name)
"ALTER TABLE {0}_backup RENAME TO {0}".format(table_name)
(Note: CREATE TABLE ... SELECT without AS and RENAME TABLE are MySQL syntax; the AS SELECT and ALTER TABLE ... RENAME TO forms above are what PostgreSQL accepts.)