I know this issue is not a new one on SO but I'm unable to find a solution. Whenever I return to my desk after leaving my app running overnight, I get a MySQL server has gone away error that persists until I restart my uwsgi service. I've already done the following:
pool_recycle=some really large number in my create_engine() call
Added a ping_connection() listener with an @event.listens_for() decorator (and I can't use pool_pre_ping; that breaks my create_engine() call)
in /etc/my.cnf I added wait_timeout and interactive_timeout params with large values
but nothing has had any effect.
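For reference, the ping_connection() listener follows the pessimistic-disconnect recipe from the SQLAlchemy 1.x docs, roughly like this (a sketch; engine is the object returned by create_engine()):
from sqlalchemy import event, exc, select

@event.listens_for(engine, "engine_connect")
def ping_connection(connection, branch):
    if branch:
        # sub-connection of an already-pinged connection; skip it
        return
    try:
        # cheap round-trip to test the connection
        connection.scalar(select([1]))
    except exc.DBAPIError as err:
        if err.connection_invalidated:
            # the pool discarded the dead connection; retrying the
            # ping transparently establishes a fresh one
            connection.scalar(select([1]))
        else:
            raise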
From the SQLAlchemy doc located here, the pool_recycle feature is what you are looking for.
from sqlalchemy import create_engine
engine = create_engine("mysql://scott:tiger@localhost/test", pool_recycle=28700)
Set pool_recycle to a value less than the wait_timeout in your MySQL configuration file, my.cnf.
MySQL's default wait_timeout is 28800 seconds (8 hours).
Don't forget to restart your services (i.e. mysql, etc.) if you do modify the conf files.
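As a quick sanity check (a sketch, reusing the example engine above), you can read the server-side timeout from Python and confirm pool_recycle sits below it:
from sqlalchemy import create_engine

engine = create_engine("mysql://scott:tiger@localhost/test", pool_recycle=28700)
with engine.connect() as conn:
    # on a default install this prints ('wait_timeout', '28800')
    print(conn.execute("SHOW VARIABLES LIKE 'wait_timeout'").fetchone())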
Related
A Python 3.6 script accesses the MySQL database using SQLAlchemy when it first starts. The script then continues running for several hours without touching the MySQL database. However, when it finally accesses MySQL again, we get an error:
sqlalchemy.exc.OperationalError: (mysql.connector.errors.OperationalError) MySQL Connection not available. [SQL: 'SELECT ........ ]
The engine was created using
create_engine("mysql+mysqlconnector://..., pool_pre_ping=True, pool_recycle=290)
The pool_recycle value has already been reduced to 290 seconds, much shorter than the recommended 3600 in other SO posts.
Enabling pool_pre_ping also did not help it reconnect to MySQL and avoid the mentioned error.
MySQL Variables
SHOW VARIABLES LIKE 'wait_%'; gave wait_timeout with value 28800
SHOW VARIABLES LIKE 'interactive_%'; gave interactive_timeout with value 28800
Software Versions
Python 3.5
SQLAlchemy 1.2.2
Ubuntu 16.04
How should we troubleshoot this?
You can restart the mysql server to test the reconnection.
My SQLAlchemy engine config in general is:
pool_recycle=3600,
pool_size=5,
max_overflow=10,
pool_timeout=30,
pool_pre_ping=True
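Put together, that corresponds to a create_engine() call along these lines (a sketch; the URL is a placeholder):
from sqlalchemy import create_engine

engine = create_engine(
    "mysql+mysqlconnector://user:password@localhost/dbname",  # placeholder URL
    pool_recycle=3600,   # recycle connections after an hour
    pool_size=5,         # keep up to 5 persistent connections
    max_overflow=10,     # allow 10 extra connections under load
    pool_timeout=30,     # wait up to 30s for a free connection
    pool_pre_ping=True,  # test each connection before handing it out
)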
I have an MS-SQL database deployed on AWS RDS that I'm writing a Flask front end for.
I've been following some intro Flask tutorials, all of which seem to pass the DB credentials in the connection string URI. I'm following the tutorial here:
https://medium.com/@rodkey/deploying-a-flask-application-on-aws-a72daba6bb80#.e6b4mzs1l
For deployment, do I prompt for the DB login info and add it to the connection string? If so, where? Using SQLAlchemy, I don't see any calls to create_engine (using the code in the tutorial); I just see an initialization using config.from_object, referencing the config.py where the SQLALCHEMY_DATABASE_URI is stored, which points to the DB location. Calling config.update(dict(UID='****', PASSWORD='******')) from my application has no effect, and the config dict doesn't seem to have any applicable entries to set for this purpose. What am I doing wrong?
Or should I be authenticating using Flask-User, and then get rid of the DB level authentication? I'd prefer authenticating at the DB layer, for ease of use.
The tutorial you are following uses Flask-SQLAlchemy to abstract away the database setup; that's why you don't see engine.connect().
Frameworks like Flask-SQLAlchemy are designed around the idea that you create a connection pool to the database on launch and share that pool amongst your various worker threads. You will not be able to use that for what you are doing: it takes care of initializing the session and engine early in the process.
Because of your requirements, I don't know that you'll be able to make any use of things like connection pooling. Instead, you'll have to handle that yourself. The actual connection isn't too hard...
from sqlalchemy import create_engine

engine = create_engine('dialect://username:password@host/db')
connection = engine.connect()
result = connection.execute("SOME SQL QUERY")
for row in result:
    pass  # do something with each row
connection.close()
The issue is that you're going to have to do that in every endpoint. A database connection isn't something you can store in the session; you'll have to store the credentials there and do a connect/disconnect cycle in every endpoint you write. Worse, you'll have to either figure out encrypted sessions or server-side sessions (without a DB connection!) to prevent keeping those credentials in the session from becoming a horrible security leak.
I promise you, it will be easier both now and in the long run to figure out a simple way to authenticate users so that they can share a connection pool that is abstracted out of your app endpoints. But if you HAVE to do it this way, this is how you will do it. (make sure you are closing those connections every time!)
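If you do go down that road, the per-request pattern might look something like this (a sketch; run_query, the session keys, and the mssql+pyodbc URL are illustrative, not from the tutorial):
from flask import Flask, session
from sqlalchemy import create_engine

app = Flask(__name__)
app.secret_key = "change-me"  # required before storing anything in the session

def run_query(sql):
    # Build a throwaway engine from credentials kept in the session,
    # run one query, and tear everything down again.
    url = "mssql+pyodbc://{}:{}@myhost/mydb?driver=SQL+Server".format(
        session["db_user"], session["db_password"])
    engine = create_engine(url)
    try:
        with engine.connect() as connection:
            return list(connection.execute(sql))
    finally:
        engine.dispose()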
I use Flask and peewee. Sometimes peewee throws this error:
MySQL server has gone away (error(32, 'Broken pipe'))
Peewee database connection
from playhouse.pool import PooledMySQLDatabase

db = PooledMySQLDatabase(database, **{
    "passwd": password, "user": user,
    "max_connections": None, "stale_timeout": None,
    "threadlocals": True,
})
@app.before_request
def before_request():
    db.connect()

@app.teardown_request
def teardown_request(exception):
    db.close()
After the "MySQL server has gone away (error(32, 'Broken pipe'))" error, select queries work without a problem, but insert, update, and delete queries don't.
The insert/update/delete statements do actually take effect behind the scenes (in MySQL), but peewee still throws this error:
(2006, "MySQL server has gone away (error(32, 'Broken pipe'))")
The peewee documentation discusses this problem; here is the link: Error 2006: MySQL server has gone away
This particular error can occur when MySQL kills an idle database connection. This typically happens with web apps that do not explicitly manage database connections. What happens is your application starts, a connection is opened to handle the first query that executes, and, since that connection is never closed, it remains open, waiting for more queries.
So you have some problems on managing your database connection.
Since I can't reproduce your problem, could you please try this: close your database connection this way instead:
@app.teardown_appcontext
def close_database(error):
    db.close()
And you may get some info from the doc: Step 3: Database Connections
I know this is an old question, but since there's no accepted answer I thought I'd add my two cents.
I was having the same problem when committing largeish amounts of data in Peewee objects (larger than the amount of data MySQL allows in a single commit by default). I fixed it by changing the max_allowed_packet size in my.cnf.
To do this, open my.cnf and add the following line under [mysqld]:
max_allowed_packet=50M
... or whatever size you need, and restart mysqld.
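If you want to confirm the new value took effect, you can also check it from Python (a sketch, reusing the peewee db object from the question):
# prints something like ('max_allowed_packet', '52428800') after setting 50M
cursor = db.execute_sql("SHOW VARIABLES LIKE 'max_allowed_packet'")
print(cursor.fetchone())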
I know this is an old question, but I also fixed the problem in another way which might be of interest. In my case, it was an insert_many which was too large.
To fix it, simply do the insert in batches, as described in the peewee documentation
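For reference, the batched pattern from the peewee docs looks roughly like this (a sketch; rows and MyModel are placeholders):
from peewee import chunked

# insert rows 100 at a time inside a single transaction
with db.atomic():
    for batch in chunked(rows, 100):
        MyModel.insert_many(batch).execute()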
I've read many posts about this problem. My understanding is that the application has a setting which says how long to keep idle database connections before dropping them and creating new ones, and MySQL has a setting that says how long to keep idle connections. After no site activity, MySQL times out the application's connections. But the application doesn't know this and still tries to use an existing connection, which fails. After the failure the application drops the connection and makes a new one, and then it is fine.
I have wait_timeout set to 10 seconds on my local mysql server. I have pool_recycle set to 5 seconds on my locally running application. After 10 seconds of inactivity, I make a request, and am still getting this error. Making another request afterwards within 10 seconds, it is then fine. Waiting longer than 10 seconds, it gives this error again.
Any thoughts?
mysql> SELECT @@global.wait_timeout\G
*************************** 1. row ***************************
@@global.wait_timeout: 10
1 row in set (0.00 sec)
sqlalchemy.twelvemt.pool_recycle = 5
engine = engine_from_config(settings, 'sqlalchemy.twelvemt.')
DBSession.configure(bind=engine)
OperationalError: (OperationalError) (2006, 'MySQL server has gone away') 'SELECT beaker_cache.data \nFROM beaker_cache \nWHERE beaker_cache.namespace = %s' ('7cd57e290c294c499e232f98354a1f70',)
It looks like the error you're getting is being thrown by your Beaker connection, not your DBSession connection; the pool_recycle option needs to be set for each connection.
Assuming you're configuring Beaker in your x.ini file, you can pass sqlalchemy options via session.sa.*, so session.sa.pool_recycle = 5
See http://docs.pylonsproject.org/projects/pylons-webframework/en/v0.9.7/sessions.html#sa
Try setting sqlalchemy.pool_recycle for your connection
I always add this to my config file when using MySQL:
sqlalchemy.pool_recycle = 3600
Without this I get MySQL server has gone away on the first request after any long pause in activity.
I fixed this by calling remove() on sessions after each request. You can do this by defining a global function:
def remove_session(request, response):
    request.dbsession.remove()
Afterwards, you set this function to be run by every class involving requests and a database session:
def __init__(self, request):
    request.dbsession = DBSession
    request.add_response_callback(remove_session)
This works because SQLAlchemy expects its users to handle the opening and closing of database sessions. More information can be found in the documentation.
I am facing a problem where I am trying to add data from a Python script to a MySQL database with the InnoDB engine; it works fine with the MyISAM engine. But the problem with the MyISAM engine is that it doesn't support foreign keys, so I'd have to add extra code everywhere I insert/delete records in the database.
Does anyone know why InnoDB doesn't work with my Python scripts, and possible solutions for this problem?
InnoDB is transactional. You need to call connection.commit() after inserts/deletes/updates.
Edit: you can call connection.autocommit(True) to turn on autocommit.
The Python DB-API disables autocommit by default.
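For illustration, a minimal sketch using the MySQLdb driver (the connection parameters and table are placeholders):
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="test")
cur = conn.cursor()
cur.execute("INSERT INTO mytable (col) VALUES (%s)", ("value",))
conn.commit()  # required with InnoDB: DB-API starts with autocommit disabled
# ... or enable autocommit instead: conn.autocommit(True)
conn.close()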
Pasted from Google (first page, 2nd result):
MySQL :: MySQL 5.0 Reference Manual :: 13.2.8 The InnoDB ...
By default, MySQL starts the session for each new connection with autocommit ...
dev.mysql.com/.../innodb-transaction-model.html
However
Apparently Python starts MySQL connections in non-autocommit mode; see:
http://www.kitebird.com/articles/pydbapi.html
From the article:
The connection object commit() method commits any outstanding changes in the current transaction to make them permanent in the database. In DB-API, connections begin with autocommit mode disabled, so you must call commit() before disconnecting or changes may be lost.
Bummer, dunno how to override that and I don't want to lead you astray by guessing.
I would suggest opening a new question titled:
How to enable autocommit mode in MySQL python DB-API?
Good luck.