I've read many posts about this problem. My understanding is that the application has a setting that says how long to keep idle database connections before dropping them and creating new ones. MySQL has a setting that says how long to keep idle connections open. After a period of no site activity, MySQL times out the application's connections, but the application doesn't know this and still tries to use an existing connection, which fails. After the failure the application drops the connection, makes a new one, and then it is fine.
I have wait_timeout set to 10 seconds on my local MySQL server and pool_recycle set to 5 seconds on my locally running application. After 10 seconds of inactivity I make a request and still get this error. If I make another request within 10 seconds, it is fine; if I wait longer than 10 seconds, it gives this error again.
Any thoughts?
mysql> SELECT @@global.wait_timeout\G
*************************** 1. row ***************************
@@global.wait_timeout: 10
1 row in set (0.00 sec)
sqlalchemy.twelvemt.pool_recycle = 5
engine = engine_from_config(settings, 'sqlalchemy.twelvemt.')
DBSession.configure(bind=engine)
OperationalError: (OperationalError) (2006, 'MySQL server has gone away') 'SELECT beaker_cache.data \nFROM beaker_cache \nWHERE beaker_cache.namespace = %s' ('7cd57e290c294c499e232f98354a1f70',)
It looks like the error you're getting is being thrown by your Beaker connection, not your DBSession connection -- the pool_recycle option needs to be set for each connection.
Assuming you're configuring Beaker in your x.ini file, you can pass SQLAlchemy options via the session.sa.* prefix, e.g. session.sa.pool_recycle = 5
See http://docs.pylonsproject.org/projects/pylons-webframework/en/v0.9.7/sessions.html#sa
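For example, the relevant .ini lines might look roughly like this (the session backend and connection URL are assumptions about your setup; the session.sa.pool_recycle line is the actual fix):

session.type = ext:database
session.url = mysql://user:password@localhost/dbname
session.sa.pool_recycle = 5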
Try setting sqlalchemy.pool_recycle for your connection
I always add this to my config file when using MySQL:
sqlalchemy.pool_recycle = 3600
Without this I get MySQL server has gone away on the first request after any long pause in activity.
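For instance, with an engine_from_config setup like the one in the question, a minimal sketch might be (the URL is a placeholder):

from sqlalchemy import engine_from_config

settings = {
    'sqlalchemy.url': 'mysql://user:password@localhost/dbname',
    'sqlalchemy.pool_recycle': '3600',  # recycle connections before MySQL's wait_timeout
}
engine = engine_from_config(settings, 'sqlalchemy.')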
I fixed this by calling remove() on sessions after each request. You can do this by defining a global function:
def remove_session(request, response):
    # dispose of the session once the response has been sent
    request.dbsession.remove()
Afterwards, you set this function to be run by every class involving requests and a database session:
def __init__(self, request):
    request.dbsession = DBSession
    # ensure the session is removed when the request completes
    request.add_response_callback(remove_session)
This works because SQLAlchemy expects its users to handle the opening and closing of database sessions. More information can be found in the documentation.
Related
I am making a Flask app for a very simple game that I am hosting on Heroku, and it interacts with a Heroku-Postgres "hobby-dev" level database. When hosted locally, the app can interact with the database perfectly. However, when I deploy it on Heroku, it will usually crash not long after it tries to interact with the database. Pages that don't require this interaction work fine. Specifically, I receive the following two errors in Heroku's logs:
2021-05-19T19:23:20.943811+00:00 app[web.1]: cur = db.cursor()
2021-05-19T19:23:20.943812+00:00 app[web.1]: psycopg2.InterfaceError: connection already closed
and...
2021-05-19T19:20:35.682211+00:00 app[web.1]: cur.execute("select * from mock_table;")
2021-05-19T19:20:35.682211+00:00 app[web.1]: psycopg2.OperationalError: SSL error: decryption failed or bad record mac
I have a few theories for the cause based on research, but I am not sure:
The database connection is only created once at the very beginning of the Flask app.
I store the Python code for interacting with the database in a separate module. I make the original connection once using psycopg2.connect(...) in the Flask app module, but I pass it to another module and its methods to actually interact with the database.
Something else causes the database connection to end.
Are any of these close, and does anyone understand what I am missing? I can provide more info if it is required, but I didn't want to put too much.
According to this article, one possible reason for the errors I am receiving is a timed-out database connection. Thus, all I had to do was place my database-calling methods in a try-except block and re-establish the connection if an error occurred. As an example,
cur.execute("select * from mock_table;")
could be replaced with...
try:
    cur.execute("select * from mock_table;")
except (psycopg2.InterfaceError, psycopg2.OperationalError):
    # remake the connection and cursor here, then retry the query
    db = psycopg2.connect(host=host, database=database, user=user, password=password)
    cur = db.cursor()
    cur.execute("select * from mock_table;")
It isn't the most elegant solution, and it might have some flaws, but it has fixed the issue successfully so far, so I figured that I would share it.
I am trying to identify the number of connections to a Postgres database. This is in the context of the connection limit on heroku-postgres for dev and hobby plans, which is limited to 20. I have a Python Django application using the database. I want to understand what constitutes a connection. Will each instance of a user using the application count as one connection, or will the connection from the application to the database be counted as one?
To figure this out I tried the following.
Opened multiple instances of the application from different clients (3 separate machines).
Connected to the database using an online Adminer tool (https://adminer.cs50.net/)
Connected to the database using pgAdmin installed in my local system.
Created and ran dataclips (query reports) on the database from heroku.
Ran the following query from adminer and pgadmin to observe the number of records:
select * from pg_stat_activity where datname ='db_name';
Initially it seemed there was a new record for each instance of the application I opened and one record for the Adminer instance. After some time the query from Adminer was showing 6 records (2 connections for Adminer, 2 for pgAdmin and 2 for the web app).
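A grouped version of that query, sketched below, makes it easier to see which client each record belongs to (db_name is a placeholder, as above):

select usename, application_name, client_addr, state, count(*)
from pg_stat_activity
where datname = 'db_name'
group by usename, application_name, client_addr, state;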
Unfortunately I am still not sure whether each user of my web application is counted as a separate connection, or whether all connections to the database from the web app are counted as one.
Thanks in advance.
Best Regards!
Using the PostgreSQL parameters that log connections and disconnections (with the right log_line_prefix parameter to include client information) should help:
log_connections (boolean)
Causes each attempted connection to the server to be logged, as well as successful completion of client authentication. Only superusers can change this parameter at session start, and it cannot be changed at all within a session. The default is off.
log_disconnections (boolean)
Causes session terminations to be logged. The log output provides information similar to log_connections, plus the duration of the session. Only superusers can change this parameter at session start, and it cannot be changed at all within a session. The default is off.
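One way to turn these on, as a rough sketch assuming superuser access (the log_line_prefix format string is only an example):

-- in postgresql.conf, or via ALTER SYSTEM as a superuser:
ALTER SYSTEM SET log_connections = on;
ALTER SYSTEM SET log_disconnections = on;
ALTER SYSTEM SET log_line_prefix = '%m [%p] %u@%d from %h ';
SELECT pg_reload_conf();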
I know this issue is not a new one on SO but I'm unable to find a solution. Whenever I return to my desk after leaving my app running overnight, I get a MySQL server has gone away error that persists until I restart my uwsgi service. I've already done the following:
pool_recycle=some really large number in my create_engine() call
Added a ping_connection() listener with an @event.listens_for() decorator, sketched after this list (and I can't use pool_pre_ping; that breaks my create_engine() call)
in /etc/my.cnf I added wait_timeout and interactive_timeout params with large values
but nothing has had any effect.
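For reference, the checkout-event ping mentioned above usually looks roughly like this (adapted from the recipe in the SQLAlchemy docs; a sketch, not the poster's exact code):

from sqlalchemy import event, exc
from sqlalchemy.pool import Pool

@event.listens_for(Pool, "checkout")
def ping_connection(dbapi_connection, connection_record, connection_proxy):
    # run a cheap query; if it fails, tell the pool to discard this connection
    cursor = dbapi_connection.cursor()
    try:
        cursor.execute("SELECT 1")
    except Exception:
        raise exc.DisconnectionError()
    finally:
        cursor.close()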
From the sqlalchemy doc located here, the pool_recycle feature is what you are looking for.
from sqlalchemy import create_engine
engine = create_engine("mysql://scott:tiger@localhost/test", pool_recycle=28700)
Set pool_recycle to a value less than the wait_timeout set in your MySQL configuration file (my.cnf).
The MySQL default wait_timeout is 28800 seconds (8 hrs).
Don't forget to restart your services (i.e. mysql, etc.) if you do modify the conf files.
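For reference, the corresponding my.cnf section would look something like this (the values here are just examples):

[mysqld]
wait_timeout = 28800
interactive_timeout = 28800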
I use Flask and peewee. Sometimes peewee throws this error:
MySQL server has gone away (error(32, 'Broken pipe'))
Peewee database connection
db = PooledMySQLDatabase(database, **{
    "passwd": password, "user": user,
    "max_connections": None, "stale_timeout": None,
    "threadlocals": True
})
@app.before_request
def before_request():
    db.connect()

@app.teardown_request
def teardown_request(exception):
    db.close()
After the MySQL error "MySQL server has gone away (error(32, 'Broken pipe'))", SELECT queries work without a problem, but INSERT, UPDATE and DELETE queries don't.
The INSERT, UPDATE and DELETE queries do run behind the scenes (in MySQL), but peewee still throws this error:
(2006, "MySQL server has gone away (error(32, 'Broken pipe'))")
The peewee documentation talks about this problem; here is the link: Error 2006: MySQL server has gone away
This particular error can occur when MySQL kills an idle database connection. This typically happens with web apps that do not explicitly manage database connections. What happens is your application starts, a connection is opened to handle the first query that executes, and, since that connection is never closed, it remains open, waiting for more queries.
So you have some problems with managing your database connection.
Since I can't reproduce your problem, could you please try closing your database connection this way:
@app.teardown_appcontext
def close_database(error):
    db.close()
And you may get some info from the doc: Step 3: Database Connections
I know this is an old question, but since there's no accepted answer I thought I'd add my two cents.
I was having the same problem when committing largish amounts of data in Peewee objects (larger than the amount of data MySQL allows in a single packet by default). I fixed it by changing the max_allowed_packet size in my.cnf.
To do this, open my.cnf and add the following line under [mysqld]:
max_allowed_packet=50M
... or whatever size you need, and restart mysqld.
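To check what the server is currently using, a read-only query like this should work:

SHOW VARIABLES LIKE 'max_allowed_packet';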
I know this is an old question, but I also fixed the problem in another way which might be of interest. In my case, it was an insert_many which was too large.
To fix it, simply do the insert in batches, as described in the peewee documentation.
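A rough sketch of that batching pattern (MyModel, data_source and the batch size of 100 are placeholders):

from peewee import chunked

with db.atomic():
    for batch in chunked(data_source, 100):
        MyModel.insert_many(batch).execute()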
I have recently been exploring the Tornado web framework to serve lots of persistent connections from lots of different clients.
I have a request handler that basically takes an RSA encrypted string and decrypts it. The decrypted text is an XML string that gets parsed by a SAX document handler that I have written. Everything works perfectly fine and the execution time (per HTTP request) was roughly 100 milliseconds (with decryption and parsing).
The XML contains the Username and Password hash of the user. I want to connect to a MySQL server to verify that the username matches the password hash supplied by the application.
When I add basically the following code:
conn = MySQLdb.connect(host="192.168.1.12",
                       user="<useraccount>",
                       passwd="<Password>",
                       db="<dbname>")
cursor = conn.cursor()
safe_username = MySQLdb.escape_string(XMLLoginMessage.username)
safe_pass_hash = MySQLdb.escape_string(XMLLoginMessage.pass_hash)
sql = "SELECT * FROM `mrad`.`users` WHERE `username` = '" + safe_username + "' AND `password` = '" + safe_pass_hash + "' LIMIT 1;"
cursor.execute(sql)
cursor.close()
conn.close()
The time it takes to execute the HTTP request shoots up to 4-5 seconds! I believe this is incurred in the time it takes to connect to the MySQL database server itself.
My question is how can I speed this up? Can I declare the MySQL connection in the global scope and access it in the request handlers by creating a new cursor, or will that run into concurrency issues because of the asynchronous design of Tornado?
Basically, how can I avoid opening a new connection to the MySQL server on EVERY HTTP request, so a request only takes a fraction of a second instead of multiple seconds?
Also, please note that the SQL server is actually on the same physical machine as the Tornado web server instance.
Update
I just ran a simple MySQL query through a profiler, the same code below.
The call to 'connections.py' init function took 4.944 seconds to execute alone. That doesn't seem right, does it?
Update 2
I think that running with one connection (or even a few with a very simple DB conn pool) will be fast enough to handle the throughput I'm expecting per tornado web server instance.
If 1,000 clients need to access a query, with typical query times being in the thousandths of a second, the unluckiest client would only have to wait about one second to retrieve the data.
Consider SQLAlchemy, which provides a nicer abstraction over DBAPI and also provides connection pooling, etc. (You can happily ignore its ORM and just use the SQL-toolkit)
(Also, you're not doing blocking database calls in the asynchronous request handlers?)
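A minimal sketch of that suggestion, assuming a users table like the one in the question (the URL, pool settings and column names are placeholders):

from sqlalchemy import create_engine, text

engine = create_engine("mysql://scott:tiger@localhost/test",
                       pool_size=5, pool_recycle=3600)

def check_login(username, pass_hash):
    # connections are checked out of the pool and returned automatically
    with engine.connect() as conn:
        row = conn.execute(
            text("SELECT * FROM users WHERE username = :u AND password = :p LIMIT 1"),
            {"u": username, "p": pass_hash},
        ).first()
    return row is not None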
An SQL connection should not take 5 seconds. Try to not issue a query and see if that improves your performance - which it should.
The MySQLdb module has a threadsafety of "1", which means the module is thread safe, but connections cannot be shared amongst threads. You can implement a connection pool as an alternative.
Lastly, the DB-API has a parameter replacement form for queries which would not require manually concatenating a query and escaping parameters:
cur.execute("SELECT * FROM blach WHERE x = ? AND y = ?", (x,y))
Declare it in the base handler; it will be called once per application.
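A rough sketch of that idea, assuming the MySQLdb connection details from the question (sharing one connection like this still carries the concurrency caveats mentioned above):

import MySQLdb
import tornado.web

class BaseHandler(tornado.web.RequestHandler):
    _conn = None  # one connection shared by all handlers in this process

    @property
    def db(self):
        if BaseHandler._conn is None:
            BaseHandler._conn = MySQLdb.connect(host="192.168.1.12",
                                                user="<useraccount>",
                                                passwd="<Password>",
                                                db="<dbname>")
        return BaseHandler._conn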