MySQL Connector for Python is taking more than a minute to open a connection to the database.
I'm writing an AWS Lambda function that simply pulls the current secret from AWS Secrets Manager and then uses that secret to open a connection to a MySQL database and run a query to fetch some test data.
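For reference, responseDict below presumably comes from parsing the fetched secret; a minimal sketch of that step, assuming boto3 and a JSON-encoded SecretString (the secret name is a placeholder):
import json
import boto3

# Sketch only: fetch and parse the secret; 'my-db-secret' is a placeholder name.
client = boto3.client('secretsmanager')
response = client.get_secret_value(SecretId='my-db-secret')
responseDict = json.loads(response['SecretString'])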
secret = {
    'user': responseDict['User ID'],
    'password': responseDict['Password'],
    'host': responseDict['Data Source'],
    'database': responseDict['Initial Catalog'],
    'raise_on_warnings': True
}
def connect_database(secret):
    # Creates a client connection to the database, using the secret
    logger.info('Attempting to connect')
    try:
        dbClient = mysql.connector.connect(**secret)
    except mysql.connector.Error as err:
        if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
            logger.error('ERROR: Failed to auth with the SQL database.')
        elif err.errno == errorcode.ER_BAD_DB_ERROR:
            logger.error('ERROR: Database does not exist.')
        else:
            logger.error('ERROR: Unknown error when connecting to database')
            logger.error(err)
    logger.info('Connected successfully')
    return dbClient
When I first ran the Lambda function, it failed because it hit the timeout threshold of 3 seconds, so I increased the timeout to 60 seconds and placed a few info logs at various points in the script to find where it times out. To my surprise, with a 60-second threshold, it still times out.
The last log shown is the logger.info('Attempting to connect') just before it tries to open a connection to the SQL server. The 'Connected successfully' line never logs, and neither do any of the error branches.
Can anyone clarify whether the connection should take this long or, more likely, point out where I've gone wrong?
EDIT: This ended up being related to a networking issue that I was unaware of, thanks!
An AWS Lambda function itself takes some time to get ready and execute, even when the Lambda is in hot-standby mode. Please consider this fact when you're setting the timeout. Try increasing the timeout further; you can raise it to 300 seconds and try again.
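Independent of the Lambda timeout, it can also help to make the connect call itself fail fast so the underlying problem (here, a networking issue) surfaces instead of a silent hang. A minimal sketch using mysql.connector's connection_timeout option (the value is arbitrary):
import mysql.connector

# Sketch: fail fast instead of hanging until the Lambda timeout is hit.
secret['connection_timeout'] = 10  # seconds; arbitrary value

try:
    dbClient = mysql.connector.connect(**secret)
except mysql.connector.Error as err:
    # With a short timeout, a networking problem (e.g. a missing VPC route
    # or security-group rule) shows up here as a connection error.
    logger.error('Connection failed: %s', err)
    raise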
I'm running a Django project with Peewee in Python 3.6 and trying to track down what's wrong with the connection pooling. I keep getting the following error on the development server (for some reason I never experience this issue on my local machine):
Lost connection to MySQL server during query
The repro steps are reliable and are:
1. Restart Apache on the instance.
2. Go to my Django page and press a button which triggers a DB operation.
3. It works fine.
4. Wait exactly 10 minutes (I've tested enough to pin down the exact number).
5. Press another button to trigger another DB operation.
6. Get the lost-connection error above.
The code is structured such that I have all the DB operations inside an independent Python module which is imported into the Django module.
In the main class constructor I'm setting up the DB as such:
from playhouse.pool import PooledMySQLDatabase

def __init__(self, host, database, user, password, stale_timeout=300):
    self.mysql_db = PooledMySQLDatabase(host=host, database=database, user=user,
                                        password=password, stale_timeout=stale_timeout)
    db_proxy.initialize(self.mysql_db)
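(db_proxy here is presumably a peewee Proxy that the models are bound to; a minimal sketch of how it might be declared, with names assumed:)
from peewee import Model, Proxy

db_proxy = Proxy()  # placeholder until initialize() binds the real database

class BaseModel(Model):
    class Meta:
        database = db_proxy  # models resolve their database through the proxy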
Every call that needs to go out to the DB is done like this:
def get_user_by_id(self, user_id):
    db_proxy.connect(reuse_if_open=True)
    user = User.get(User.user_id == user_id)
    db_proxy.close()
    return {'id': user.user_id, 'first_name': user.first_name,
            'last_name': user.last_name, 'email': user.email}
I looked at the wait_timeout value on the MySQL instance, and its value is 3600, so that doesn't seem to be the issue (and I tried changing it anyway, just to see).
Any ideas on what I could be doing wrong here?
Update:
I found that the /etc/my.cnf configuration file for MySQL has the wait_timeout value set to 600, which matches what I'm experiencing. I don't know why this value doesn't show when I run SHOW VARIABLES LIKE 'wait_timeout'; on the MySQL DB (that returns 3600), but it does seem likely the issue is coming from the wait timeout.
Given this, I tried setting the stale timeout to 60, assuming that if it's less than the wait timeout it might fix the issue, but it didn't make a difference.
You need to be sure you're recycling the connections properly -- that means that when a request begins you open a connection, and when the response is delivered you close the connection. The pool is most likely not recycling the connection because you're never putting it back in the pool, so it looks like it's still "in use". This can easily be done with middleware and is described here:
http://docs.peewee-orm.com/en/latest/peewee/database.html#django
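A minimal sketch of that open-on-request/close-on-response pattern as Django middleware (function and variable names are assumptions; see the linked docs for the canonical version):
def peewee_connection_middleware(get_response):
    # Sketch: open a pooled connection per request, return it on response.
    def middleware(request):
        database.connect()
        try:
            return get_response(request)
        finally:
            if not database.is_closed():
                database.close()  # returns the connection to the pool
    return middleware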
I finally came up with a fix which works for my case, after trying numerous ideas. It's not ideal but it works. This post on Connection pooling pointed me in the right direction.
I created a Django middleware class and configured it to be the first in the list of Django middleware.
from peewee import OperationalError
from playhouse.pool import PooledMySQLDatabase

database = PooledMySQLDatabase(None)

class PeeweeConnectionMiddleware(object):
    CONN_FAILURE_CODES = [2006, 2013]

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if database.database:  # Is the DB initialized?
            response = None
            try:
                database.connect(reuse_if_open=True)
                with database.atomic() as transaction:
                    try:
                        response = self.get_response(request)
                    except:
                        transaction.rollback()
                        raise
            except OperationalError as exception:
                if exception.args[0] in self.CONN_FAILURE_CODES:
                    database.close_all()
                    database.connect()
                    response = None
                    with database.atomic() as transaction:
                        try:
                            response = self.get_response(request)
                        except:
                            transaction.rollback()
                            raise
                else:
                    raise
            finally:
                if not database.is_closed():
                    database.close()
            return response
        else:
            return self.get_response(request)
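To activate it, the class is registered first in the middleware list in settings.py (the module path below is a placeholder):
MIDDLEWARE = [
    'myapp.middleware.PeeweeConnectionMiddleware',  # placeholder path; listed first
    # ... Django's standard middleware follows ...
]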
I am developing an application using django with a PostgreSQL database. The application is designed to be used within an organization, so user-supplied SQL requests need to be executed. To deal with the possibility of malformed SQL requests, the database calls are wrapped in a try/except block:
import psycopg2
...
def djangoView(request):
    ...
    try:
        result = DBModel.objects.raw(sqlQuery)
        return getJSONResponse(result)  # Serializes result of query to JSON
    except psycopg2.Error:
        pass  # Handle error (no DB requests are made)
However, when I make a request to the view with malformed SQL, I am greeted with a 500 internal server error. The stack trace reveals that the cause of the 500 is a ProgrammingError, which is a subclass of psycopg2.Error. However, the except statement doesn't catch it correctly.
Django wraps all database exceptions with exceptions from its django.db package.
A correct way to catch the Error is:
import django.db
...
except django.db.Error:
If you want to access the underlying database exception:
except django.db.Error as e:
    dbException = e.__cause__  # subclass of psycopg2.Error
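Putting this together for the view above (a sketch; pulling the query from request.POST and the error response shape are assumptions):
import django.db
from django.http import JsonResponse

def djangoView(request):
    sqlQuery = request.POST.get('query', '')  # assumed source of the user SQL
    try:
        result = DBModel.objects.raw(sqlQuery)
        return getJSONResponse(result)
    except django.db.Error as e:
        # e.__cause__ is the underlying psycopg2.Error
        return JsonResponse({'error': str(e.__cause__)}, status=400)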
I am attempting to connect to a Solr server using this tutorial. At this point, I am confident that my Solr is set up correctly. I am able to run
> solr start -p 8983
and it appears to start something up.
Sure enough:
> solr status
Solr process 31421 running on port 8983
So now in my Python code, I try what I think should be a basic connection script.
import solr

host = "http://localhost:8983/solr"
# also tried:
# host = "http://localhost:8983"
# host = "http://127.0.0.1:8983/solr"
# host = "http://127.0.0.1:8983"

connection = solr.SolrConnection(host)
try:
    connection.add(
        id=1,
        title="Lucene in Action",
        author=['Zack', 'Hank Hill']
    )
except Exception as e:
    import pdb
    pdb.set_trace()
connection.commit()
My code never makes it to connection.commit(); instead, it hits the debug point in the exception handler. Looking at exception e:
HTTP code=404, Reason=Not Found, body=<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<title>Error 404 Not Found</title>
</head>
<body><h2>HTTP ERROR 404</h2>
<p>Problem accessing /solr/update. Reason:
<pre> Not Found</pre></p><hr><i><small>Powered by Jetty://</small></i><hr/>
</body>
</html>
So it looks like the Python client is not finding the Solr server, given the 404? This seems like it should be pretty simple, so I'm not sure where I messed up here. Can anyone point me in the right direction?
Edit: I added this script to check the various hosts; no go.
hosts = [
    'http://localhost:8983/solr',
    'http://localhost:8983',
    'http://127.0.0.1:8983/solr',
    'http://127.0.0.1:8983'
]

def connect(host):
    connection = solr.SolrConnection(host)
    try:
        connection.add(
            id=1,
            title='Lucene in Action',
            author=['Zack Botkin', 'Hank Hill']
        )
    except:
        raise

for host in hosts:
    try:
        connect(host)
    except Exception as e:
        import pdb
        pdb.set_trace()
Each exception is the same 404 error.
Edit 2: I was able to
> telnet localhost 8983
and it connected, so I'm pretty sure the solr server is running on that port?
To index with Solr you will also need to create a core and make sure to use that core in your URL. For example, once Solr has started, run this command to create a core named test:
solr create -c test
Once that has been created you should see it listed in the Solr admin page. To use it, simply add the core name to your connection URL. Simple example Python code:
import solr

# create a connection to a solr server
s = solr.SolrConnection('http://localhost:8983/solr/test')

# add 2 documents to the index
s.add(id=1, title='Lucene in Action', author=['bob', 'asdf'])
s.add(id=2, title='test2', author=['Joe', 'test'])
s.commit()

# do a search
response = s.query('joe')
for hit in response.results:
    print(hit['title'])
More information here: https://cwiki.apache.org/confluence/display/solr/Running+Solr
I get a lot of errors with the message:
"DatabaseError: current transaction is aborted, commands ignored until end of transaction block"
after changing from python-psycopg to python-psycopg2 as the Django project's database engine.
The code remains the same; I just don't know where these errors are coming from.
This is what postgres does when a query produces an error and you try to run another query without first rolling back the transaction. (You might think of it as a safety feature, to keep you from corrupting your data.)
To fix this, you'll want to figure out where in the code that bad query is being executed. It might be helpful to use the log_statement and log_min_error_statement options in your postgresql server.
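For example, something along these lines in postgresql.conf (a sketch; tune to your logging needs):
# postgresql.conf -- log every statement, and log the statement
# that triggered any error, so the offending query shows up in the logs.
log_statement = 'all'
log_min_error_statement = error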
To get rid of the error, roll back the last (erroneous) transaction after you've fixed your code:
from django.db import transaction
transaction.rollback()
You can use try-except to prevent the error from occurring:
from django.db import transaction, DatabaseError
try:
    a.save()
except DatabaseError:
    transaction.rollback()
Refer to the Django documentation.
In Flask you just need to write:
curs = conn.cursor()
curs.execute("ROLLBACK")
conn.commit()
P.S. The documentation is here: https://www.postgresql.org/docs/9.4/static/sql-rollback.html
So, I ran into this same issue. The problem I was having here was that my database wasn't properly synced. Simple problems always seem to cause the most angst...
To sync your Django DB, from within your app directory, in the terminal, type:
$ python manage.py syncdb
Edit: Note that if you are using django-south, running the '$ python manage.py migrate' command may also resolve this issue.
Happy coding!
In my experience, these errors happen this way:
try:
    code_that_executes_bad_query()
    # transaction on the DB is now bad
except:
    pass

# transaction on the DB is still bad
code_that_executes_working_query()  # raises transaction error
There's nothing wrong with the second query, but since the real error was caught and swallowed, the second query is the one that raises the (much less informative) error.
Edit: this only happens if the except clause catches IntegrityError (or any other low-level database exception). If you catch something like DoesNotExist, this error will not come up, because DoesNotExist does not corrupt the transaction.
The lesson here is don't do try/except/pass.
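If you do need to catch the database error, wrapping just the risky statement in its own atomic block keeps the outer transaction usable. A sketch using Django's transaction.atomic (available since Django 1.6):
from django.db import IntegrityError, transaction

try:
    with transaction.atomic():  # savepoint around just the risky query
        code_that_executes_bad_query()
except IntegrityError:
    pass  # the savepoint was rolled back; the outer transaction is intact

code_that_executes_working_query()  # now runs fine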
I think the pattern priestc mentions is more likely to be the usual cause of this issue when using PostgreSQL.
However I feel there are valid uses for the pattern and I don't think this issue should be a reason to always avoid it. For example:
try:
    profile = user.get_profile()
except ObjectDoesNotExist:
    profile = make_default_profile_for_user(user)

do_something_with_profile(profile)
If you do feel OK with this pattern, but want to avoid explicit transaction handling code all over the place then you might want to look into turning on autocommit mode (PostgreSQL 8.2+): https://docs.djangoproject.com/en/dev/ref/databases/#autocommit-mode
DATABASES['default'] = {
    # ... your usual options ...
    'OPTIONS': {
        'autocommit': True,
    }
}
I am unsure if there are important performance considerations (or of any other type).
Just use rollback.
Example code:
try:
    cur.execute("CREATE TABLE IF NOT EXISTS test2 (id serial, qa text);")
except:
    cur.execute("rollback")
    cur.execute("CREATE TABLE IF NOT EXISTS test2 (id serial, qa text);")
You only need to run
rollback;
in PostgreSQL and that's it!
If you get this while in the interactive shell and need a quick fix, do this:
from django.db import connection
connection._rollback()
Originally seen in this answer.
I encountered similar behavior while running a malfunctioning transaction on the Postgres terminal. Nothing went through after that, as the database was in a state of error. However, as a quick fix, if you can afford to discard the failed transaction, the following did the trick for me:
COMMIT;
I just got a similar error here. I found the answer at this link: https://www.postgresqltutorial.com/postgresql-python/transaction/
client = PsqlConnection(config)
connection = client.connection
cursor = client.cursor

try:
    for query in list_of_querys:
        # query format => "INSERT INTO <database.table> VALUES (<values>)"
        cursor.execute(query)
    connection.commit()
except BaseException as e:
    connection.rollback()
Doing this, the following queries you send to PostgreSQL will not return an error.
I had a similar problem. The solution was to migrate the DB (manage.py syncdb, or manage.py schemamigration --auto <table name> if you use South).
In the Flask shell, all I needed to do was a session.rollback() to get past this.
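A sketch of that, assuming a Flask-SQLAlchemy setup where db is your SQLAlchemy instance (the import path is a placeholder):
from myapp import db  # placeholder; e.g. db = SQLAlchemy(app) in your app factory

db.session.rollback()  # discards the aborted transaction on the session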
I have met this issue too; the error comes up because a failed transaction hasn't been ended properly. I found the PostgreSQL transaction-control commands here:
Transaction Control
The following commands are used to control transactions:
BEGIN TRANSACTION - to start a transaction.
COMMIT - to save the changes; alternatively, you can use the END TRANSACTION command.
ROLLBACK - to roll back the changes.
So I used END TRANSACTION to end the failed transaction, with code like this:
for key_of_attribute, command in sql_command.items():
    cursor = connection.cursor()
    g_logger.info("execute command :%s" % (command))
    try:
        cursor.execute(command)
        rows = cursor.fetchall()
        g_logger.info("the command:%s result is :%s" % (command, rows))
        result_list[key_of_attribute] = rows
        g_logger.info("result_list is :%s" % (result_list))
    except Exception as e:
        cursor.execute('END TRANSACTION;')
        g_logger.info("error command :%s and error is :%s" % (command, e))
return result_list
I just had this error too, but it was masking another, more relevant error message where the code was trying to store a 125-character string in a 100-character column:
DatabaseError: value too long for type character varying(100)
I had to debug through the code for the above message to show up; otherwise it displays
DatabaseError: current transaction is aborted
In response to @priestc and @Sebastian, what if you do something like this?
try:
    conn.commit()
except:
    pass

cursor.execute(sql)

try:
    return cursor.fetchall()
except:
    conn.commit()
    return None
I just tried this code and it seems to work, failing silently without having to care about any possible errors, and working when the query is good.
I believe @AnujGupta's answer is correct. However, the rollback can itself raise an exception, which you should catch and handle:
from django.db import transaction, DatabaseError

try:
    a.save()
except DatabaseError:
    try:
        transaction.rollback()
    except transaction.TransactionManagementError:
        pass  # Log or handle otherwise
If you find you're rewriting this code in various save() locations, you can extract-method:
import traceback

def try_rolling_back():
    try:
        transaction.rollback()
        log.warning('rolled back')  # example handling
    except transaction.TransactionManagementError:
        log.exception(traceback.format_exc())  # example handling
Finally, you can prettify it using a decorator that protects methods which use save():
from functools import wraps

def try_rolling_back_on_exception(fn):
    @wraps(fn)
    def wrapped(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except:
            traceback.print_exc()
            try_rolling_back()
    return wrapped
@try_rolling_back_on_exception
def some_saving_method():
    # ...
    model.save()
    # ...
Even if you implement the decorator above, it's still convenient to keep try_rolling_back() as an extracted method in case you need to use it manually for cases where specific handling is required, and the generic decorator handling isn't enough.
This is very strange behavior to me, and I'm surprised that no one has thought of savepoints. In my code, the failing query was expected behavior:
from django.db import transaction

@transaction.commit_on_success
def update():
    skipped = 0
    for old_model in OldModel.objects.all():
        try:
            Model.objects.create(
                group_id=old_model.group_uuid,
                file_id=old_model.file_uuid,
            )
        except IntegrityError:
            skipped += 1
    return skipped
I have changed the code this way to use savepoints:
from django.db import transaction

@transaction.commit_on_success
def update():
    skipped = 0
    for old_model in OldModel.objects.all():
        sid = transaction.savepoint()  # one savepoint per row
        try:
            Model.objects.create(
                group_id=old_model.group_uuid,
                file_id=old_model.file_uuid,
            )
        except IntegrityError:
            skipped += 1
            transaction.savepoint_rollback(sid)
        else:
            transaction.savepoint_commit(sid)
    return skipped
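(Note: commit_on_success was removed in Django 1.8; on newer versions the same savepoint idea can be written with transaction.atomic, roughly like this sketch:)
from django.db import IntegrityError, transaction

@transaction.atomic
def update():
    skipped = 0
    for old_model in OldModel.objects.all():
        try:
            with transaction.atomic():  # acts as a savepoint inside the outer transaction
                Model.objects.create(
                    group_id=old_model.group_uuid,
                    file_id=old_model.file_uuid,
                )
        except IntegrityError:
            skipped += 1
    return skipped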
I am using the Python package psycopg2 and I got this error while querying.
I kept re-running just the query and the execute call, but when I re-ran the connection setup (shown below), it resolved the issue. So re-run what is above your script, i.e. the connection, because, as someone said above, I think it had lost the connection or was out of sync.
connection = psycopg2.connect(user="##",
                              password="##",
                              host="##",
                              port="##",
                              database="##")
cursor = connection.cursor()
It is an issue with a bad SQL execution: the failed statement does not allow other queries to execute until the transaction is rolled back.
In pgAdmin 4 (4.24) there is a rollback option; one can try that.
You could disable transactions via set_isolation_level(0).
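A minimal sketch of that in psycopg2 (the connection string is a placeholder):
import psycopg2
from psycopg2 import extensions

conn = psycopg2.connect("dbname=test user=postgres")  # placeholder DSN

# Autocommit: each statement runs in its own implicit transaction,
# so one failed statement can no longer poison a long-lived transaction.
conn.set_isolation_level(extensions.ISOLATION_LEVEL_AUTOCOMMIT)
# equivalently, on psycopg2 >= 2.4.2:
conn.autocommit = True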