Alternative for django.db.close_connection() - python

Is there a way to re-establish the MySQL connection in Django after it goes down?
I am getting the following error while trying to access MySQL through get_wsgi_application from django.core.wsgi:
(2006, 'MySQL server has gone away')

We need to close the stale connection to the database, which Django created automatically earlier.
We can do this by calling:
django.db.close_old_connections()
Django then opens a new connection automatically the next time a query runs.
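As a minimal sketch of how this might look in a long-running worker (the function name and task body are hypothetical, not from the original answer):

import django.db

def run_periodic_task():
    # Discard connections the server has dropped or that exceeded
    # CONN_MAX_AGE; Django reconnects lazily on the next query.
    django.db.close_old_connections()
    # ... run ORM queries here; a fresh connection is opened
    # automatically when the first query executes ...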

Related

Heroku App Can Interact with Heroku-Postgres Locally, but Not When Deployed to Heroku

I am making a Flask app for a very simple game that I am hosting on Heroku, and it interacts with a Heroku-Postgres "hobby-dev" level database. When run locally, the app interacts with the database perfectly. However, when I deploy it on Heroku, it usually crashes shortly after it tries to interact with the database. Pages that don't require this interaction work fine. Specifically, I receive the following two errors in Heroku's logs:
2021-05-19T19:23:20.943811+00:00 app[web.1]: cur = db.cursor()
2021-05-19T19:23:20.943812+00:00 app[web.1]: psycopg2.InterfaceError: connection already closed
and...
2021-05-19T19:20:35.682211+00:00 app[web.1]: cur.execute("select * from mock_table;")
2021-05-19T19:20:35.682211+00:00 app[web.1]: psycopg2.OperationalError: SSL error: decryption failed or bad record mac
I have a few theories about the cause based on my research, but I am not sure:
- The database connection is only created once, at the very beginning of the Flask app.
- I store the Python code for interacting with the database in a separate module. I make the original connection once using psycopg2.connect(...) in the Flask app module, but I pass it to another module and its methods to actually interact with the database.
- Something else causes the database connection to end.
Are any of these close, and does anyone see what I am missing? I can provide more info if needed, but I didn't want to include too much.
According to this article, one possible cause of the errors I am receiving is a timed-out database connection. So all I had to do was place my database-calling methods in a try-except block and re-establish the connection if an error occurred. For example,
cur.execute("select * from mock_table;")
could be replaced with...
try:
    cur.execute("select * from mock_table;")
except (psycopg2.OperationalError, psycopg2.InterfaceError):
    # remake the connection and cursor here; the old ones are unusable
    db = psycopg2.connect(host=host, database=database, user=user, password=password)
    cur = db.cursor()
    cur.execute("select * from mock_table;")
It isn't the most elegant solution, and it might have some flaws, but it has fixed the issue successfully so far, so I figured that I would share it.

How to manage MongoDB connections using mongoengine in Django

I am running a Django project and using mongoengine as my ORM for MongoDB. The project runs daemons, celery workers, crons, and custom Django management commands, all of which use the Django project and store massive amounts of data in the database (MongoDB Atlas).
Here's my settings.py file, where I build the MongoDB connection using mongoengine:
CONNECTION_STRING = 'mongodb://{user}:{pwd}@{host}/{db}?replicaSet={replicaset}&ssl={ssl}&authSource={auth_source}'.format(
    user=env('DB_USERNAME'), pwd=env('DB_PASSWORD'), host=env('DB_HOST'), db=env('DB_DATABASE'),
    replicaset=env.str('DB_REPLICASET'), ssl=env.str('DB_SSL'), auth_source=env.str('DB_AUTH_SOURCE')
)
DB_CONNECTION = mongoengine.connect(host=CONNECTION_STRING, connect=False)
Suddenly I'm getting alerts from MongoDB Atlas that connections have been overused, and I'm not sure how to handle connections in an optimised way, or whether my existing settings are already optimised.
My Atlas monitoring screenshot shows 450+ connections always open.
Do I have to close the connection in every request class? Any help is much appreciated.
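One knob worth knowing about (an assumption on my part, not something from the question) is pymongo's maxPoolSize client option, which mongoengine.connect forwards to the underlying MongoClient; it defaults to 100 sockets per process. A minimal sketch, reusing the CONNECTION_STRING above:

import mongoengine

# Sketch, not a verified fix: cap the driver's pool so each process
# holds at most 10 connections instead of the default 100.
DB_CONNECTION = mongoengine.connect(
    host=CONNECTION_STRING,
    connect=False,
    maxPoolSize=10,  # hypothetical cap; tune per workload
)

Since every daemon, celery worker, and cron process keeps its own pool, the Atlas total is roughly the per-process cap times the number of live processes, which may explain a count past 450.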

Peewee MySQL server has gone away

I use Flask and peewee. Sometimes peewee throws this error:
MySQL server has gone away (error(32, 'Broken pipe'))
Peewee database connection:
db = PooledMySQLDatabase(database, **{
    "passwd": password, "user": user,
    "max_connections": None, "stale_timeout": None,
    "threadlocals": True
})

@app.before_request
def before_request():
    db.connect()

@app.teardown_request
def teardown_request(exception):
    db.close()
After the "MySQL server has gone away (error(32, 'Broken pipe'))" error, select queries work without a problem, but insert, update, and delete queries do not.
The insert, update, and delete statements actually take effect in MySQL, but peewee still throws this error:
(2006, "MySQL server has gone away (error(32, 'Broken pipe'))")
The peewee documentation discusses this problem; here is the link: Error 2006: MySQL server has gone away
This particular error can occur when MySQL kills an idle database connection. This typically happens with web apps that do not explicitly manage database connections. What happens is your application starts, a connection is opened to handle the first query that executes, and, since that connection is never closed, it remains open, waiting for more queries.
So you have some problems managing your database connection.
Since I can't reproduce your problem, could you please try closing your database connection this way:

@app.teardown_appcontext
def close_database(error):
    db.close()
And you may get some info from the doc: Step 3: Database Connections
I know this is an old question, but since there's no accepted answer I thought I'd add my two cents.
I was having the same problem when committing largish amounts of data in Peewee objects (larger than the amount of data MySQL allows in a single packet by default). I fixed it by changing the max_allowed_packet size in my.cnf.
To do this, open my.cnf and add the following line under [mysqld]:
max_allowed_packet=50M
... or whatever size you need, then restart mysqld.
I know this is an old question, but I also fixed the problem in another way which might be of interest. In my case, it was an insert_many which was too large.
To fix it, simply do the insert in batches, as described in the peewee documentation and sketched below.
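A minimal sketch of that batching pattern, following the peewee docs (db, MyModel, and data_source are hypothetical stand-ins for your database, model, and iterable of row dicts):

from peewee import chunked

# Insert rows in batches of 100 inside one transaction, so no single
# INSERT statement exceeds MySQL's max_allowed_packet.
with db.atomic():
    for batch in chunked(data_source, 100):
        MyModel.insert_many(batch).execute()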

Python Pyramid SQLAlchemy, MySQL server has gone away

I've read many posts about this problem. My understanding is that the application has a setting which says how long to keep idle database connections before dropping them and creating new ones. MySQL has a setting that says how long to keep idle connections. After no site activity, MySQL times out the application's connections. But the application doesn't know this and still tries using an existing connection, which fails. After the failure the application drops the connection and makes a new one, and then it is fine.
I have wait_timeout set to 10 seconds on my local mysql server. I have pool_recycle set to 5 seconds on my locally running application. After 10 seconds of inactivity, I make a request, and am still getting this error. Making another request afterwards within 10 seconds, it is then fine. Waiting longer than 10 seconds, it gives this error again.
Any thoughts?
mysql> SELECT @@global.wait_timeout\G
*************************** 1. row ***************************
@@global.wait_timeout: 10
1 row in set (0.00 sec)

sqlalchemy.twelvemt.pool_recycle = 5

engine = engine_from_config(settings, 'sqlalchemy.twelvemt.')
DBSession.configure(bind=engine)

OperationalError: (OperationalError) (2006, 'MySQL server has gone away') 'SELECT beaker_cache.data \nFROM beaker_cache \nWHERE beaker_cache.namespace = %s' ('7cd57e290c294c499e232f98354a1f70',)
It looks like the error you're getting is thrown by your Beaker connection, not your DBSession connection; the pool_recycle option needs to be set for each connection.
Assuming you're configuring Beaker in your x.ini file, you can pass SQLAlchemy options via session.sa.*, so: session.sa.pool_recycle = 5
See http://docs.pylonsproject.org/projects/pylons-webframework/en/v0.9.7/sessions.html#sa
Try setting sqlalchemy.pool_recycle for your connection.
I always add this to my config file when using MySQL:
sqlalchemy.pool_recycle = 3600
Without this, I get "MySQL server has gone away" on the first request after any long pause in activity.
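Equivalently, if you build the engine in code rather than from the .ini file, you can pass the option directly to create_engine (a sketch; the URL is hypothetical):

from sqlalchemy import create_engine

# Recycle pooled connections after an hour so MySQL's wait_timeout
# never reaps a connection the pool still considers live.
engine = create_engine("mysql://user:pwd@localhost/mydb", pool_recycle=3600)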
I fixed this by calling remove() on sessions after each request. You can do this by defining a global function:
def remove_session(request, response):
    request.dbsession.remove()
Then register this callback in the __init__ of every class that handles requests and uses a database session:
def __init__(self, request):
    request.dbsession = DBSession
    request.add_response_callback(remove_session)
This works because SQLAlchemy expects its users to handle the opening and closing of database sessions. More information can be found in the documentation.

Getting "Invalid cursor state (0)" when running concurrent requests (SQLAlchemy & wsgi/python)

I'm using SQLAlchemy in a WSGI Python web app to query the database. If I make two concurrent requests, the second request invariably throws an exception, with SQL Server stating
[24000] [FreeTDS][SQL Server]Invalid cursor state (0) (SQLExecDirectW)
Unfortunately it looks like I can't use caching to prevent additional requests to the database. Is there another way to resolve this issue? Ideally using native Python libraries (i.e. not relying on another Python module)?
The only thing I can think of is using threads to put a lock on the function making the database queries, but I'm worried this will slow down the app.
Is there anything else that can be done? Is this a configuration issue?
I'm using FreeTDS v0.91 on a Centos 5.9 server, connecting to MS SQL Server 2008.
The webapp is based on Paste.
Are your two concurrent requests using different database connections? DBAPI connections are not generally threadsafe. At the ORM level, you'd make sure you're using a session per request, so that each request has its own Session and therefore a dedicated DBAPI connection.
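A minimal sketch of that session-per-request pattern using SQLAlchemy's scoped_session (the connection URL and handler body are hypothetical; adapt them to your Paste setup):

from sqlalchemy import create_engine, text
from sqlalchemy.orm import scoped_session, sessionmaker

# Hypothetical URL; substitute your FreeTDS/pyodbc DSN.
engine = create_engine("mssql+pyodbc://user:pwd@mydsn")
Session = scoped_session(sessionmaker(bind=engine))

def application(environ, start_response):
    session = Session()  # scoped_session is thread-local by default
    try:
        rows = session.execute(text("SELECT 1")).fetchall()
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [str(rows).encode("utf-8")]
    finally:
        Session.remove()  # release the session and its DBAPI connection

Because each thread gets its own Session, and therefore its own DBAPI connection, two concurrent requests no longer share a cursor, which is the likely trigger of the "Invalid cursor state" error.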
