I have a small app that uses FastAPI. The problem is that when I deploy it to my server and make a POST request to a route that contains some database operations, it just gets stuck and gives me a 504 error. On my local machine it works fine.
Here is how I connect to my db:
app.add_event_handler("startup", tasks.create_start_app_handler(app))
app.add_event_handler("shutdown", tasks.create_stop_app_handler(app))
To test, I tried moving the db connection out of the application startup and instead creating and closing it inside a separate route, and that worked. Like:
#app.get("/")
async def create_item():
engine = create_engine(
DB_URL
)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()
t = engine.execute('SELECT * FROM auth_user').fetchone()
engine.dispose()
return t
How does this depend on the startup/shutdown events? The PostgreSQL versions differ between local and server, but I don't think that's the cause.
Currently I have a deployment with 2 pods running in it. When I use the psql command I can connect normally, so it only gets stuck in the application, not in the pod.
If somebody runs into the same thing: I fixed it by updating Pgpool from 4.2.2 to the latest version.
Related
I am running into a problem where my Dash/Flask web app uses too many MySQL resources when used for a longer time. Eventually the server becomes incredibly slow because it tries to keep too many database connections alive. The project started based on this article and is still organised in a similar way: https://hackersandslackers.com/plotly-dash-with-flask/
Once I open a URL from the website, each Dash callback seems to open its own connection to the database. Apart from the callbacks, Flask opens a database connection as well to store the user session. The number of open connections at the same time isn't really a problem; the fact that the connections aren't closed once finished is.
I've tried different settings and ways to set up the database connection, but none of them solved the problem of database connections staying open after the request is finished. Eventually the database runs out of resources because it tries to keep too many connections open, and the web app becomes unusable.
I've tried
db.py
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session
from sqlalchemy.orm import sessionmaker
from sqlalchemy.pool import NullPool
from sqlalchemy.ext.declarative import declarative_base
app_engine = create_engine('databasestring', poolclass=NullPool)
db_app_session = scoped_session(sessionmaker(autocommit=False, autoflush=False, bind=app_engine))
Base = declarative_base()
Base.query = db_app_session.query_property()
def init_db():
    Base.metadata.create_all(bind=app_engine)
and
db.py
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session
from sqlalchemy.orm import sessionmaker
from sqlalchemy.pool import NullPool
app_engine = create_engine('databasestring', poolclass=NullPool)
Session = sessionmaker(bind=app_engine)
And then import the db.py session / connection into the dash app.
Depending on the contents of db.py, I use it like this in the Dash callback:
dash.py
@app.callback(
    # Callback input/output
    ....
)
def update_graph(rows):
    # ... Callback logic
    session = Session()
    domain: Domain = session.query(Domain).get(domain_id)
    # Do stuff
    session.close()
    session.bind.dispose()
I've tried to close the database connections in the __init__.py of the Flask app with @app.after_request or @app.teardown_request, but none of these seemed to work either.
__init__.py
@app.after_request
def after_request(response):
    session = Session()
    session.close()
    session.bind.dispose()
    return response
I am aware of the Flask-SQLAlchemy package and tried that one as well, with similar results. When using similar code outside of Flask/Dash, closing the connections after the code finishes does seem to work.
Adding the NullPool helped get the connections to close when the code is executed outside of Flask/Dash, but not within the web app itself. So something still goes wrong within Flask/Dash, but I am unable to find what.
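For reference, here is the teardown pattern I believe the SQLAlchemy docs recommend for Flask-style apps (a minimal sketch based on the first db.py above; the engine stays alive for the whole app and only the scoped session is removed per request):
from db import db_app_session

@app.teardown_appcontext
def remove_session(exception=None):
    # Returns the request's connection to the pool and discards the
    # thread-local session; the engine itself stays alive.
    db_app_session.remove()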
Who can point me into the right direction?
I've also found this issue and pinpointed it to the login_required decorator. Essentially, each Dash view route has this decorator, so any time the Dash app is opened in Flask it opens a new DB connection to query for the current user. I've brought it up in a GitHub post here.
I tried this out (in addition to the NullPool configuration) and it worked. Not sure if it's the right solution, since it disposes of the database engine. Try it out and let me know.
@login.user_loader
def load_user(id):
    user = db.session.query(User).filter(User.id == id).one_or_none()
    db.session.close()
    engine = db.get_engine(current_app)
    engine.dispose()
    return user
I'm using SQLAlchemy in my Flask app, and after a few requests have been made the app fails, with this error emitted:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server at ..., port 5432 failed: FATAL: too many connections for role "...". I then have to restart the app to get it working again.
It seems as though with each web request a database connection is created but not removed. I can't figure out why, as I've added scoped_session.remove() to an @app.teardown_request function.
Here is my code...
app.py
...
@app.before_request
def before_request():
    global database
    database = PostgreSQL()

@app.teardown_request
def teardown(response):
    database.session.remove()
...
PostgreSQL.py
class PostgreSQL:
    def __init__(self):
        self.database_url = os.environ.get('DATABASE_URL')
        if self.database_url.startswith('postgres://'):
            self.database_url = self.database_url.replace('postgres://', 'postgresql://')
        self.engine = create_engine(self.database_url, echo=False)
        Base.metadata.create_all(self.engine, checkfirst=True)
        self.session = scoped_session(sessionmaker(bind=self.engine))
    ...
I guess I'd never have run into this problem if I'd used Flask-SQLAlchemy (which appears to handle things like database sessions for you) from the outset, but I'm keen to follow this problem through without resorting to that additional layer of abstraction - so please refrain from mentioning that module, unless it really is the only viable option here.
EDIT: Figured out the problem - needed to call database.engine.dispose() in @app.teardown_request
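For anyone following along, a minimal sketch of that fix (assuming the PostgreSQL wrapper above):
@app.teardown_request
def teardown(response):
    database.session.remove()
    # Dispose the engine created in before_request; otherwise every request
    # leaves its pooled connection open until the role's connection limit is hit.
    database.engine.dispose()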
I am running an application with Flask and Flask-SQLAlchemy.
from config import FlaskDatabaseConfig
from flask import Flask
from flask import request
from flask_migrate import Migrate
from flask_sqlalchemy import SQLAlchemy
application = Flask(__name__)
application.config.from_object(FlaskDatabaseConfig())
db = SQLAlchemy(application)
@application.route("/queue/request", methods=["POST"])
def handle_queued_request():
    stuff()
    return ""

def stuff():
    # Includes database queries and updates and a call to db.session.commit()
    # db.session.begin() and db.session.close() are not called
    pass

if __name__ == "__main__":
    application.run(debug=False, port=5001)
Now, from my understanding, by using Flask-SQLAlchemy I do not need to manage sessions on my own. So why am I getting the following error if I send several requests one after another to my endpoint?
sqlalchemy.exc.TimeoutError: QueuePool limit of size 5 overflow 10 reached, connection timed out, timeout 30 (Background on this error at: http://sqlalche.me/e/3o7r)
I've tried using db.session.close() but then, instead of this error, my database updates are not committed properly. What am I doing incorrectly? Do I need to manually close connections with the database once a request has been handled?
I have found a solution to this. The issue was that I had a lot of processes that were "idle in transaction" because I did not call db.session.commit() after making certain database SELECT statements using Query.first().
To investigate this, I queried my (development) PostgreSQL database directly using:
SELECT * FROM pg_stat_activity
Just remove the session every time you finish a query to the db:
products = db.session.query(Product).limit(20).all()
db.session.remove()
I have a piece of code that used to work fine until very recently:
from sqlalchemy import create_engine

engine = create_engine(
    'mysql+mysqldb://{}:{}@{}/{}'.format(
        user,
        password,
        hostname,
        database),
    echo=False,
    pool_recycle=300)  # re-connect after 5 minutes

connection = engine.connect()
Now, it fails immediately with a segmentation fault. Has the syntax changed?
The server runs MySQL 5.7.19 and is definitely responding. My installation is sqlalchemy-1.2.4 and mysql-python-1.2.5. I'm using python 2.7.14.
Thanks for any help.
I have found a workaround using pymysql instead of mysqldb:
engine = create_engine(
    'mysql+pymysql://{}:{}@{}/{}'.format(
        user,
        password,
        hostname,
        database),
    echo=False,
    pool_recycle=300)
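Note that PyMySQL is a pure-Python driver, so it sidesteps the compiled mysql-python (MySQLdb) extension that was segfaulting; it has to be installed separately, e.g. with pip install PyMySQL.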
I have a requirement to develop a web app using Pyramid with Cassandra in the back end. I've googled enough to find out how to configure Cassandra in Pyramid (using the alchemy scaffold), but I could not find much detail on it. From my search, it seems it is not possible to configure NoSQL-class databases using SQLAlchemy. Is there any way to integrate Cassandra with Pyramid?
You just need to connect to your cassandra cluster on application start and register the session in the request:
app.models.__init__.py
from cassandra.cluster import Cluster

def includeme(config):
    def get_session():
        cluster = Cluster(['your.cluster.ip'])
        return cluster.connect()

    config.add_request_method(
        lambda request: get_session(),
        'dbsession',
        reify=True)
app.__init__:
def main(global_config, **settings):
    config = Configurator(settings=settings)
    config.include('app.models')
Then you can use the Cassandra session in your views by calling request.dbsession, for example like this:
request.dbsession.execute('SELECT name, email FROM users')
At the moment, using SQLAlchemy with Cassandra is not possible, because SQLAlchemy generates SQL code while Cassandra queries are written in CQL.
As for connecting Pyramid with the Cassandra database, I have an example similar to the one posted by @matino, but it also includes a finished callback so that all connections are closed at the end of the request.
Example of my app.__init__.py:
from cassandra.cluster import Cluster
from cassandra.io.libevreactor import LibevConnection
from cassandra.query import dict_factory

def main(global_config, **settings):
    """
    ... MORE CONFIG CODE ...
    """

    # Retrieves connection to Cassandra (Non SQL database)
    def get_cassandra(request):
        cluster = Cluster(['127.0.0.1'], port=9042)
        cluster.connection_class = LibevConnection

        def disconnect(request):
            cluster.shutdown()

        session = cluster.connect('app')
        session.row_factory = dict_factory
        request.add_finished_callback(disconnect)
        return session

    config.add_request_method(get_cassandra, 'cassandra', reify=True)

    """
    ... MORE CONFIG CODE ...
    """
It certainly works, although to be honest I don't know if this is the best approach, because every single time we execute a statement:
request.cassandra.execute('SELECT * FROM users')
it goes through the whole process: creating the cluster, defining the connection, connecting, executing the statement, and shutting down the cluster.
I wonder if there is a better approach...
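One alternative (just a sketch, untested against this exact app) is to create the Cluster and Session once at startup and hand the same Session to every request; the driver's Session does its own connection pooling and is designed to be long-lived, so the per-request setup and shutdown goes away:
from pyramid.config import Configurator
from cassandra.cluster import Cluster
from cassandra.query import dict_factory

def main(global_config, **settings):
    config = Configurator(settings=settings)

    # Connect once at startup; the driver's Session pools connections
    # internally and can be shared safely across requests.
    cluster = Cluster(['127.0.0.1'], port=9042)
    session = cluster.connect('app')
    session.row_factory = dict_factory

    config.add_request_method(lambda request: session, 'cassandra', reify=True)
    return config.make_wsgi_app()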