I am running an application with Flask and Flask-SQLAlchemy.
from config import FlaskDatabaseConfig
from flask import Flask
from flask import request
from flask_migrate import Migrate
from flask_sqlalchemy import SQLAlchemy

application = Flask(__name__)
application.config.from_object(FlaskDatabaseConfig())
db = SQLAlchemy(application)

@application.route("/queue/request", methods=["POST"])
def handle_queued_request():
    stuff()
    return ""

def stuff():
    # Includes database queries and updates and a call to db.session.commit()
    # db.session.begin() and db.session.close() are not called
    pass

if __name__ == "__main__":
    application.run(debug=False, port=5001)
Now, from my understanding, by using Flask-SQLAlchemy I do not need to manage sessions on my own. So why am I getting the following error if I run several requests one after another against my endpoint?
sqlalchemy.exc.TimeoutError: QueuePool limit of size 5 overflow 10 reached, connection timed out, timeout 30 (Background on this error at: http://sqlalche.me/e/3o7r)
I've tried using db.session.close(), but then, instead of this error, my database updates are not committed properly. What am I doing incorrectly? Do I need to manually close connections to the database once a request has been handled?
I have found a solution to this. The issue was that I had a lot of processes that were "idle in transaction" because I did not call db.session.commit() after making certain database SELECT statements using Query.first().
To investigate this, I queried my (development) PostgreSQL database directly using:
SELECT * FROM pg_stat_activity
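For illustration (a sketch, not quoted from the answer): applied to the question's route, the fix amounts to ending the transaction explicitly, even when stuff() only reads:

@application.route("/queue/request", methods=["POST"])
def handle_queued_request():
    stuff()
    # commit (or rollback) so the connection is not left "idle in transaction"
    db.session.commit()
    return ""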
Alternatively, just remove the session every time you finish a query against the db:
products = db.session.query(Product).limit(20).all()
db.session.remove()
I am using APScheduler in a Flask application, where I pull data from a site and insert it into the database every 5 minutes. The scheduler works fine up to this point. But the problem occurs whenever I hit the server address http://127.0.0.1:5000 through any API; I get an assertion error:
AssertionError: A setup function was called after the first request was handled. This usually indicates a bug in the application where a module was not imported and decorators or other functionality was called too late.
To fix this make sure to import all your view modules, database models, and everything related at a central place before the application starts serving requests.
After this the save API stops working and the database does not work anymore. From what I can tell, it happens due to a SQLAlchemy error.
This is the app.py file:
from apscheduler.schedulers.background import BackgroundScheduler
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from Config.config import Config
from Service.anyService import job
app = Flask(__name__)
app.config.from_object(Config)
db = SQLAlchemy(app)
db.init_app(app)
scheduler = BackgroundScheduler()
scheduler.add_job(func=job, trigger="interval", seconds=300, args=[app])
scheduler.start()
if __name__ == "__main__":
    from waitress import serve
    serve(app, host="0.0.0.0", port=5000)
This is the file for saving data:
from flask_sqlalchemy import SQLAlchemy
def job(app):
    db = SQLAlchemy(app)
    connection = db.engine.connect(close_with_result=True)
    # pull some data from a site and run a SQL query to insert the data
    db.session.add()
    db.session.commit()
I am using the Waitress server. The application seems to work fine if I run python -m flask run. But if I run python3 app.py and hit the server, it stops saving data. What am I doing wrong here?
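As an aside (not from the original post): a likely culprit in this layout is that job() constructs a brand-new SQLAlchemy(app) on every run, on top of the db already created in app.py. A minimal sketch of one way to restructure the job using plain SQLAlchemy instead, assuming Config defines the standard SQLALCHEMY_DATABASE_URI key:

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

def job(app):
    # build a short-lived engine/session from the app's configured URI
    engine = create_engine(app.config["SQLALCHEMY_DATABASE_URI"])
    Session = sessionmaker(bind=engine)
    session = Session()
    try:
        # ... pull some data from the site and map it to model instances ...
        session.add(record)  # `record` is a hypothetical model instance
        session.commit()
    finally:
        session.close()
        engine.dispose()  # release this run's connections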
I have a route in my Flask app that spawns a process (using multiprocessing.Process) to do some background work. That process needs to be able to write to the database.
__init__.py:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from project.config import Config

db = SQLAlchemy()

# app factory
def create_app(config_class=Config):
    app = Flask(__name__)
    app.config.from_object(Config)
    db.init_app(app)
    return app
And this is the relevant code that illustrates the way I'm spawning the process and using the db connection:
def worker(row_id):
    db_session = db.create_scoped_session()
    # Do stuff with db_session here
    db_session.close()

@app.route('/worker/<row_id>/start')
def start(row_id):
    p = Process(target=worker, args=(row_id,))
    p.start()
    return redirect('/')
The problem is that sometimes (not always) I get errors like this one:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) insufficient data in "D" message lost synchronization with server: got message type "a", length 1668573551
I assume this is related to the fact that there is another process accessing the database (because if I don't use a separate process, everything works fine), but I honestly can't find a way of fixing it. As you can see in my code, I tried using the create_scoped_session() method as an attempt to fix this, but the problem is the same.
Any help?
OK so, I followed @antont's hint and created a new SQLAlchemy session inside the worker function this way, and it worked flawlessly:
import os

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

def worker(row_id):
    db_url = os.environ['DATABASE_URL']
    db_engine = create_engine(db_url)
    Session = sessionmaker(bind=db_engine)
    db_session = Session()
    # Do stuff with db_session here
    db_session.close()
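This works because each spawned process now opens its own engine and connections. Pooled SQLAlchemy connections cannot safely be shared across a fork; when parent and child both talk over the same inherited socket, the interleaved protocol traffic produces exactly the "lost synchronization with server" errors quoted above.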
I am running into a problem where my Dash/Flask web app uses too many MySQL resources when used for a longer time. Eventually the server becomes incredibly slow because it tries to keep too many database connections alive. The project started based on this article and is still organised in a similar way: https://hackersandslackers.com/plotly-dash-with-flask/
Once I open a URL from the website, each Dash callback seems to open its own connection to the database. Apart from the callback, Flask opens a database connection as well to store the user session. The number of connections open at the same time isn't really a problem, but the fact that the connections aren't closed once finished is.
I've tried different settings and ways to setup the database connection, but none of them solved the problem of open database connections after the request is finished. Eventually the database runs out of resources because it tries to keep too many database connections open and the web app becomes unusable.
I've tried:
db.py
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session
from sqlalchemy.orm import sessionmaker
from sqlalchemy.pool import NullPool
from sqlalchemy.ext.declarative import declarative_base
app_engine = create_engine('databasestring', poolclass=NullPool)
db_app_session = scoped_session(sessionmaker(autocommit=False, autoflush=False, bind=app_engine))
Base = declarative_base()
Base.query = db_app_session.query_property()
def init_db():
    Base.metadata.create_all(bind=app_engine)
and
db.py
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session
from sqlalchemy.orm import sessionmaker
from sqlalchemy.pool import NullPool
app_engine = create_engine('databasestring', poolclass=NullPool)
Session = sessionmaker(bind=app_engine)
And then import the db.py session / connection into the dash app.
Depending on the contents of db.py, I use it like this in the Dash callback:
dash.py
@app.callback(
    # Callback input/output
    ...
)
def update_graph(rows):
    # ... Callback logic
    session = Session()
    domain: Domain = session.query(Domain).get(domain_id)
    # Do stuff
    session.close()
    session.bind.dispose()
I've tried to close the database connections in the __init__.py of the Flask app with @app.after_request or @app.teardown_request, but none of these seemed to work either.
__init__.py
@app.after_request
def after_request(response):
    session = Session()
    session.close()
    session.bind.dispose()
    return response
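As an aside (not from the original post): this after_request hook creates a brand-new Session just to close it, which never touches the session a callback actually used. The pattern the SQLAlchemy docs suggest for the scoped_session variant of db.py above is to call remove() on the scoped session itself at app-context teardown; a minimal sketch:

@app.teardown_appcontext
def cleanup(exception=None):
    # remove() closes the current request's session; with NullPool the
    # underlying connection is then closed instead of being pooled
    db_app_session.remove()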
I am aware of the Flask-SQLAlchemy package and tried that one as well, but with similar results. When using similar code outside of Flask/Dash, closing the connections after the code is finished does seem to work.
Adding NullPool helped to get the connections closed when the code is executed outside of Flask/Dash, but not within the web app itself. So something still goes wrong within Flask/Dash, but I am unable to find what.
Who can point me in the right direction?
I've also found this issue and pinpointed it to the login_required decorator. Essentially, each Dash view route has this decorator, so any time the Dash app is opened in Flask, it opens up a new DB connection, querying for the current user. I've brought it up on a GitHub post here.
I tried this out (in addition to the NullPool configuration) and it worked. Not sure if it's the right solution, since it disposes of the database engine. Try it out and let me know.
@login.user_loader
def load_user(id):
    user = db.session.query(User).filter(User.id == id).one_or_none()
    db.session.close()
    engine = db.get_engine(current_app)
    engine.dispose()
    return user
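One caveat on this design: engine.dispose() discards the entire connection pool, so every user lookup pays the cost of establishing a fresh connection. Combined with NullPool it is also largely redundant, since NullPool already closes each connection as soon as it is released.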
I'm using SQLAlchemy in my Flask app, and after a few requests have been made the app fails, with this error emitted:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server at ..., port 5432 failed: FATAL: too many connections for role "...". I then have to restart the app to get it working again.
It seems as though with each web request, a database connection is being created but not removed. I can't figure out why that is, as I've added scoped_session.remove() to an @app.teardown_request function.
Here is my code...
app.py
...

@app.before_request
def before_request():
    global database
    database = PostgreSQL()

@app.teardown_request
def teardown(response):
    database.session.remove()

...
PostgreSQL.py
class PostgreSQL:
    def __init__(self):
        self.database_url = os.environ.get('DATABASE_URL')
        if self.database_url.startswith('postgres://'):
            self.database_url = self.database_url.replace('postgres://', 'postgresql://')
        self.engine = create_engine(self.database_url, echo=False)
        Base.metadata.create_all(self.engine, checkfirst=True)
        self.session = scoped_session(sessionmaker(bind=self.engine))
    ...
I guess I'd never have run into this problem if I had used Flask-SQLAlchemy (which appears to handle things like database sessions for you) from the outset, but I'm keen to follow this problem through without resorting to that additional layer of abstraction - so please refrain from mentioning that module, unless it really is the only viable option here.
EDIT: Figured out the problem: I needed to call database.engine.dispose() in @app.teardown_request.
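For illustration, a sketch of the corrected teardown (assembled from the EDIT above, not quoted from the original post):

@app.teardown_request
def teardown(exception):
    database.session.remove()
    # each request built its own engine in before_request, so dispose it
    # here to close the connections it pooled
    database.engine.dispose()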
I'm using a Flask app with a long-running worker process, and this process cannot see changes in the DB.
main.py
In this file I create the Flask app and start the process.
from paas.app_factory import create_app
app = create_app()

from .controllers.resource_controller import *
from .worker import create_resource, delete_resource
from multiprocessing import Process
from flask import current_app

with app.app_context():
    create_proc = Process(target=create_resource, args=(current_app._get_current_object(),)).start()
worker.py
Every 20 seconds the worker checks whether new resources have appeared in the DB and, if so, processes them.
def create_resources(app):
    with app.app_context():
        while True:
            resource = get_resource_to_create()
            if not resource:
                print("Wait for resource to create...")
                time.sleep(20)
                continue
            create_resource.....
resource_controller.py
After the application started, I added a resource to the db. The problem is that through the second route I can get the resource from the db, but worker.py sees nothing in the DB.
from .main import app

@app.route('/resource/<resource>', methods=['POST'])
def create_resource(resource):
    write_resource_to_db(resource)

@app.route('/resources')
def get_resources():
    return select_all_resources_from_db()
I believe there is some misunderstanding of the application context on my side, but I cannot figure it out. Help would be really appreciated.
Thank you in advance.
P.S. Let's say get_resource_to_create() and select_all_resources_from_db() are the same function internally.
The problem was in:
def create_resources(app):
    with app.app_context():
        while True:
The loop was inside the app context, i.e. inside one transaction, so it worked like this:
START TRANSACTION;
SELECT ...;
SELECT ...;
SELECT ...;
...
So there was no chance to get fresh data.
If I switched the lines,
def create_resources(app):
    while True:
        with app.app_context():
then after each cycle ends there is a commit, and a new transaction starts with fresh data:
START TRANSACTION;
SELECT ... ;
COMMIT;
START TRANSACTION;
SELECT ... ;
COMMIT;
...
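Putting the fix together, a sketch of the corrected worker (assembled from the snippets above; the helper name comes from the question):

import time

def create_resources(app):
    while True:
        # a fresh app context per iteration means a fresh transaction per check
        with app.app_context():
            resource = get_resource_to_create()  # helper from the question
            if resource:
                # ... create/process the resource here ...
                pass
        if not resource:
            print("Wait for resource to create...")
            time.sleep(20)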