import sqlite3

location = 'data'
table_name = 'table_name'
try:
    conn = sqlite3.connect(location)
    c = conn.cursor()
    sql = 'insert into ' + table_name + ' (id) values (%d)' % (1)
    c.execute(sql)  # Some exception occurs here
    conn.commit()   # This line is not reached, so the database stays locked.
except Exception as e:
    pass
If we run the above code and an exception occurs, the database stays locked. So what should we do?
EDIT: I am aware that this is vulnerable to sql injection. I've posted this example simply so that I can quickly ask the question.
This is a great use case for with:
from contextlib import closing

conn = sqlite3.connect(location)
with closing(conn.cursor()) as c:
    # Do anything that may raise an exception here
with works on context managers: objects that define cleanup behavior for when control leaves the block, including when an exception occurs. In this case, the cursor will be released and the database will be unlocked if an exception occurs.
Note that we need closing from contextlib. This is because sqlite3 doesn't make its cursors context managers (even though arguably it should; the MySQL and Postgres bindings do). closing tells the with statement what to do with the cursor when control leaves the block.
A similar pattern is the correct way to handle files in Python:
with open('my_file.txt', 'r') as f:
    # Do anything you want here, even something that may
    # raise an exception.
    # The with will automatically f.close() whenever control
    # leaves the with statement (whether by natural flow or
    # exception).
As @IljaEverilä mentions in the comments, the connection itself is also a context manager and is worth using this way; note that for sqlite3 the connection's context manager commits the transaction on success and rolls it back on an exception, but does not close the connection:
with sqlite3.connect(location) as conn:
    # Do whatever you need with conn
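Putting the two together, a minimal sketch might look like this (the table name is hypothetical, and note that sqlite3's connection context manager handles the transaction but does not close the connection):

import sqlite3
from contextlib import closing

location = 'data'

# The connection context manager commits on success and rolls back on
# an exception; closing() makes sure the cursor is released either way.
with sqlite3.connect(location) as conn:
    with closing(conn.cursor()) as c:
        # 'my_table' and its 'id' column are placeholders
        c.execute('insert into my_table (id) values (?)', (1,))

# The connection is still open here; call conn.close() explicitly (or
# wrap the connect() call in closing() as well) if you want it closed.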
I am trying to create a login function, but it only works once. For example, when I give a wrong userid and password I correctly get the error message "Couldn't login", but after dismissing that message and entering the correct userid and password I get "pymysql.err.Error: Already closed". Below is the sample code.
import pymysql

# Connect to the database
connection = pymysql.connect(host='localhost',
                             user='root',
                             password='',
                             db='python_code',
                             charset='utf8mb4',
                             cursorclass=pymysql.cursors.DictCursor)

class LoginModel:
    def check_user(self, data):
        try:
            with connection.cursor() as cursor:
                # Read a single record
                sql = "SELECT `username` FROM `users` WHERE `username`=%s"
                cursor.execute(sql, (data.username))
                user = cursor.fetchone()
                print(user)
                if user:
                    if (user, data.password):
                        return user
                    else:
                        return False
                else:
                    return False
        finally:
            connection.close()
You have a mismatch with respect to the number of times you're creating the connection (once) and the number of times you're closing the connection (once per login attempt).
One fix would be to move your:
connection = pymysql.connect(host='localhost',
                             user='root',
                             password='',
                             db='python_code',
                             charset='utf8mb4',
                             cursorclass=pymysql.cursors.DictCursor)
into your def check_user(). It would work because you'd create and close the connection on each invocation (as others have pointed out, the finally clause always gets executed).
That's not a great design, though, because creating database connections tends to be relatively expensive. So keeping the connection creation outside of the method is preferred... which means you must remove the connection.close() call within the method.
I think you're mixing up connection.close() with cursor.close(). You want to do the latter, not the former. In your example you don't have to explicitly close the cursor because that happens automatically with your with connection.cursor() as cursor: line.
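Put together, the suggested shape is roughly this (a sketch reusing the question's connection parameters; the password check is omitted because the original comparison looked incomplete):

import pymysql

# The connection is created once, at module level, and reused across
# login attempts; it is never closed inside check_user.
connection = pymysql.connect(host='localhost',
                             user='root',
                             password='',
                             db='python_code',
                             charset='utf8mb4',
                             cursorclass=pymysql.cursors.DictCursor)

class LoginModel:
    def check_user(self, data):
        # The with block closes the cursor automatically when it exits;
        # the shared connection stays open for the next attempt.
        with connection.cursor() as cursor:
            sql = "SELECT `username` FROM `users` WHERE `username`=%s"
            cursor.execute(sql, (data.username,))
            user = cursor.fetchone()
        return user or False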
Change finally to except, or remove the try block completely.
This is the culprit code:
finally:
    connection.close()
Per the docs:
"A finally clause is always executed before leaving the try statement, whether an exception has occurred or not"
From: https://docs.python.org/2/tutorial/errors.html
You didn't describe alternative behavior for what you would like to see happen instead of this, but my answer addresses the crux of your question.
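As a rough sketch of that change against the question's method (assuming the same module-level connection; the handling inside except is only a placeholder):

def check_user(self, data):
    try:
        with connection.cursor() as cursor:
            sql = "SELECT `username` FROM `users` WHERE `username`=%s"
            cursor.execute(sql, (data.username,))
            user = cursor.fetchone()
        return user or False
    except pymysql.err.Error:
        # Handle or log the error; crucially, the shared connection is
        # not closed, so the next login attempt can still use it.
        return False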
Had the same issue. The finally clause is needed for Postgres with the psycopg2 driver: when used with a context manager (with clause), it closes the cursor but not the connection. The same does not apply to PyMySQL.
Original: I have recently started getting MySQL OperationalErrors from some of my old code and cannot seem to trace back the problem. Since it was working before, I thought it may have been a software update that broke something. I am using python 2.7 with django runfcgi with nginx. Here is my original code:
views.py
DBNAME = "test"
DBIP = "localhost"
DBUSER = "django"
DBPASS = "password"
db = MySQLdb.connect(DBIP,DBUSER,DBPASS,DBNAME)
cursor = db.cursor()
def list(request):
statement = "SELECT item from table where selected = 1"
cursor.execute(statement)
results = cursor.fetchall()
I have tried the following, but it still does not work:
views.py
class DB:
    conn = None
    DBNAME = "test"
    DBIP = "localhost"
    DBUSER = "django"
    DBPASS = "password"

    def connect(self):
        self.conn = MySQLdb.connect(self.DBIP, self.DBUSER, self.DBPASS, self.DBNAME)

    def cursor(self):
        try:
            return self.conn.cursor()
        except (AttributeError, MySQLdb.OperationalError):
            self.connect()
            return self.conn.cursor()

db = DB()
cursor = db.cursor()

def list(request):
    cursor = db.cursor()
    statement = "SELECT item from table where selected = 1"
    cursor.execute(statement)
    results = cursor.fetchall()
Currently, my only workaround is to do MySQLdb.connect() in each function that uses mysql. Also I noticed that when using django's manage.py runserver, I would not have this problem while nginx would throw these errors. I doubt that I am timing out with the connection because list() is being called within seconds of starting the server up. Were there any updates to the software I am using that would cause this to break/is there any fix for this?
Edit: I realized that I recently wrote a piece of middleware to daemonize a function, and this was the cause of the problem. However, I cannot figure out why. Here is the code for the middleware:
def process_request_handler(sender, **kwargs):
    t = threading.Thread(target=dispatch.execute,
                         args=[kwargs['nodes'], kwargs['callback']],
                         kwargs={})
    t.setDaemon(True)
    t.start()
    return

process_request.connect(process_request_handler)
Sometimes if you see "OperationalError: (2006, 'MySQL server has gone away')", it is because you are issuing a query that is too large. This can happen, for instance, if you're storing your sessions in MySQL, and you're trying to put something really big in the session. To fix the problem, you need to increase the value of the max_allowed_packet setting in MySQL.
The default value is 1048576.
To see the current value, run the following SQL:
SELECT @@max_allowed_packet;
To temporarily set a new value, run the following SQL:
set global max_allowed_packet=10485760;
To fix the problem more permanently, create a /etc/my.cnf file with at least the following:
[mysqld]
max_allowed_packet = 16M
After editing /etc/my.cnf, you'll need to restart MySQL or restart your machine if you don't know how.
As per the MySQL documentation, your error message is raised when the client can't send a question to the server, most likely because the server itself has closed the connection. In the most common case the server will close an idle connection after a (default) of 8 hours. This is configurable on the server side.
The MySQL documentation gives a number of other possible causes which might be worth looking into to see if they fit your situation.
An alternative to calling connect() in every function (which might end up needlessly creating new connections) would be to investigate using the ping() method on the connection object; this tests the connection with the option of attempting an automatic reconnect. I struggled to find some decent documentation for the ping() method online, but the answer to this question might help.
Note, automatically reconnecting can be dangerous when handling transactions as it appears the reconnect causes an implicit rollback (and appears to be the main reason why autoreconnect is not a feature of the MySQLdb implementation).
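A rough sketch of that approach against the question's setup (the exact behaviour of the reconnect flag can vary between MySQLdb versions, so treat it as an assumption to verify):

import MySQLdb

DBNAME, DBIP, DBUSER, DBPASS = "test", "localhost", "django", "password"
db = MySQLdb.connect(DBIP, DBUSER, DBPASS, DBNAME)

def get_cursor():
    # ping() checks the connection; with the reconnect flag set it asks
    # the client library to reconnect if the server has gone away.
    # Beware: an implicit reconnect rolls back any open transaction.
    db.ping(True)
    return db.cursor()

def list(request):
    cursor = get_cursor()
    cursor.execute("SELECT item from table where selected = 1")
    return cursor.fetchall()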
This might be due to DB connections getting copied from the main process into your child processes. I faced the same error when using Python's multiprocessing library to spawn different processes. The connection objects are copied between processes during forking, and this leads to MySQL OperationalErrors when making DB calls in the child process.
Here's a good reference to solve this: Django multiprocessing and database connections
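A hedged sketch of that idea, adapted to the question's middleware (dispatch and process_request come from the question; closing the inherited connection at the start of the worker makes Django open a fresh one there):

import threading
from django.db import connection

def process_request_handler(sender, **kwargs):
    def run(nodes, callback):
        # Discard the connection inherited from the parent; the next
        # query in this worker opens a fresh connection of its own.
        connection.close()
        dispatch.execute(nodes, callback)

    t = threading.Thread(target=run,
                         args=[kwargs['nodes'], kwargs['callback']])
    t.setDaemon(True)
    t.start()

process_request.connect(process_request_handler)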
For me this was happening in debug mode.
So I tried persistent connections in debug mode; check out the link: Django - Documentation - Databases - Persistent connections.
In settings:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'dbname',
        'USER': 'root',
        'PASSWORD': 'root',
        'HOST': 'localhost',
        'PORT': '3306',
        'CONN_MAX_AGE': None,
    },
}
Check whether you are allowed to create a MySQL connection object in one thread and then use it in another.
If that's forbidden, use threading.local for per-thread connections:
class Db(threading.local):
    """Thread-local db object."""
    con = None

    def __init__(self, ...options...):
        super(Db, self).__init__()
        self.con = MySQLdb.connect(...options...)

db = Db(...)

def test():
    """Safe to run from any thread."""
    cursor = db.con.cursor()
    cursor.execute(...)
This error is mysterious because MySQL doesn't report why it disconnects, it just goes away.
It seems there are many causes of this kind of disconnection. One I just found is that if the query string is too large, the server will disconnect. This probably relates to the max_allowed_packet setting.
I've been struggling with this issue too. I don't like the idea of increasing the timeout on the MySQL server. Autoreconnect with CONN_MAX_AGE doesn't work either, as was mentioned. Unfortunately I ended up wrapping every method that queries the database like this:
def do_db(callback, *args, **kwargs):
    try:
        return callback(*args, **kwargs)
    except (OperationalError, InterfaceError):
        # Connection has gone away; filter by message or error code if
        # you need to distinguish it from other errors.
        connection.close()
        return callback(*args, **kwargs)

do_db(User.objects.get, id=123)  # instead of User.objects.get(id=123)
As you can see, I prefer catching the exception to pinging the database before every query, because the exception is the rare case. I would expect Django to reconnect automatically, but they seem to have declined that feature request.
This error may occur when you try to use the connection after a time-consuming operation that doesn't go to the database. Since the connection is not used for some time, MySQL timeout is hit and the connection is silently dropped.
You can try calling close_old_connections() after the time-consuming non-DB operation so that a new connection is opened if the connection is unusable. Beware, do not use close_old_connections() if you have a transaction.
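A small sketch of that advice (the view, helper, and model names are all hypothetical):

from django.db import close_old_connections

def import_report(request):
    # Hypothetical slow work that never touches the database; MySQL may
    # silently drop the idle connection while it runs.
    data = crunch_numbers_for_a_long_time(request)

    # Discard connections that have become unusable or exceeded their
    # max age, so the next query opens a fresh one.  Avoid calling this
    # inside an open transaction.
    close_old_connections()

    Report.objects.create(payload=data)  # hypothetical model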
The most common cause of this warning is that your application has exceeded MySQL's wait_timeout value.
I had the same problem with a Flask app.
Here's how I solved:
$ grep timeout /etc/mysql/mysql.conf.d/mysqld.cnf
# https://support.rackspace.com/how-to/how-to-change-the-mysql-timeout-on-a-server/
# wait = timeout for application session (tdm)
# interactive = timeout for keyboard session (terminal)
# 7 days = 604800s / 4 hours = 14400s
wait_timeout = 604800
interactive_timeout = 14400
Observation: if you query the variables in MySQL batch mode, the values appear exactly as configured. But if you run SHOW VARIABLES LIKE 'wait%'; or SHOW VARIABLES LIKE 'interactive%'; in an interactive session, the value configured for interactive_timeout appears for both variables. I don't know why, but in practice the values configured for each variable in /etc/mysql/mysql.conf.d/mysqld.cnf are what MySQL respects.
How old is this code? Django has had databases defined in settings since at least .96. Only other thing I can think of is multi-db support, which changed things a bit, but even that was 1.1 or 1.2.
Even if you need a special DB for certain views, I think you'd probably be better off defining it in settings.
SQLAlchemy now has a great write-up on how you can use pinging to be pessimistic about your connection's freshness:
http://docs.sqlalchemy.org/en/latest/core/pooling.html#disconnect-handling-pessimistic
From there,
from sqlalchemy import exc
from sqlalchemy import event
from sqlalchemy.pool import Pool

@event.listens_for(Pool, "checkout")
def ping_connection(dbapi_connection, connection_record, connection_proxy):
    cursor = dbapi_connection.cursor()
    try:
        cursor.execute("SELECT 1")
    except:
        # optional - dispose the whole pool
        # instead of invalidating one at a time
        # connection_proxy._pool.dispose()

        # raise DisconnectionError - pool will try
        # connecting again up to three times before raising.
        raise exc.DisconnectionError()
    cursor.close()
And a test to make sure the above works:
from sqlalchemy import create_engine

e = create_engine("mysql://scott:tiger@localhost/test", echo_pool=True)
c1 = e.connect()
c2 = e.connect()
c3 = e.connect()
c1.close()
c2.close()
c3.close()
# pool size is now three.

print "Restart the server"
raw_input()

for i in xrange(10):
    c = e.connect()
    print c.execute("select 1").fetchall()
    c.close()
I had this problem and did not have the option to change my configuration. I finally figured out that the problem was occurring 49,500 records into my 50,000-record loop, because that was about the time I was trying again (after a long gap) to hit my second database.
So I changed my code so that every few thousand records, I touched the second database again (with a count() of a very small table), and that fixed it. No doubt "ping" or some other means of touching the database would work, as well.
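A rough sketch of that workaround (all names here are hypothetical; the count() on a tiny table just keeps the second database's connection from idling past its timeout):

for i, record in enumerate(records):
    process(record)
    if i % 2000 == 0:
        # Touch the other database every few thousand records so its
        # connection never sits idle long enough to be dropped.
        Keepalive.objects.using('second_db').count()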
First, check the MySQL session and global wait_timeout and interactive_timeout values. Second, your client should try to reconnect to the server before those timeouts expire.
I have a seemingly straight-forward situation, but can't find a straight-forward solution.
I'm using sqlalchemy to query postgres. If a client timeout occurs, I'd like to stop/cancel the long running postgres queries from another thread. The thread has access to the Session or Connection object.
At this point I've tried:
session.bind.raw_connection().close()
and
session.connection().close()
and
session.close
and
session.transaction.close()
But no matter what I try, the postgres query still continues until its end. I know this from watching pg in top. Shouldn't this be fairly easy to do? Am I missing something? Is this impossible without getting the pid and sending a stop signal directly?
This seems to work well, so far:
def test_close_connection(self):
    import threading
    from psycopg2.extensions import QueryCanceledError
    from sqlalchemy.exc import DBAPIError

    session = Session()
    conn = session.connection()
    sql = self.get_raw_sql_for_long_query()

    seconds = 5
    t = threading.Timer(seconds, conn.connection.cancel)
    t.start()

    try:
        conn.execute(sql)
    except DBAPIError, e:
        if type(e.orig) == QueryCanceledError:
            print 'Long running query was cancelled.'
    t.cancel()
source
For those MySQL folks that may have ended up here, a modified version of this answer that kills the query from a second connection can work. Essentially the following, assuming pymysql under the hood:
thread_id = conn1.connection.thread_id()
t = threading.Timer(seconds, lambda: conn2.execute("kill {}".format(thread_id)))
The original connection will raise pymysql.err.OperationalError. See this other answer for a neat way to create a long running query for testing.
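Filling that out along the lines of the earlier psycopg2 test (conn1, conn2, seconds, and the long-running SQL are assumed to exist already; SQLAlchemy 1.x connections accept a raw SQL string in execute()):

import threading
import pymysql

# conn1 runs the long query; conn2 is a second connection used only to
# kill it.
thread_id = conn1.connection.thread_id()
t = threading.Timer(seconds, lambda: conn2.execute("kill {}".format(thread_id)))
t.start()
try:
    conn1.execute(long_running_sql)
except pymysql.err.OperationalError:
    print('Long running query was killed.')
finally:
    t.cancel()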
MySQL lets you specify query optimizer hints.
One such hint is MAX_EXECUTION_TIME, which specifies how long a query may execute before being terminated.
You can add this in your app.py
@event.listens_for(engine, 'before_execute', retval=True)
def intercept(conn, clauseelement, multiparams, params):
    from sqlalchemy.sql.selectable import Select

    # check if it's a select statement
    if isinstance(clauseelement, Select):
        # 'froms' represents the list of tables the statement is querying
        table = clauseelement.froms[0]

        # Update the timeout here in ms (1s = 1000ms)
        timeout_ms = 4000

        # add the MAX_EXECUTION_TIME hint as a prefix to the statement
        clauseelement = clauseelement.prefix_with(f"/*+ MAX_EXECUTION_TIME({timeout_ms}) */", dialect="mysql")

    return clauseelement, multiparams, params
See also: SQLAlchemy query API not working correctly with hints, and the MySQL reference.
I got a lot of errors with the message:
"DatabaseError: current transaction is aborted, commands ignored until end of transaction block"
after changing from python-psycopg to python-psycopg2 as my Django project's database engine.
The code remains the same; I just don't know where those errors are coming from.
This is what postgres does when a query produces an error and you try to run another query without first rolling back the transaction. (You might think of it as a safety feature, to keep you from corrupting your data.)
To fix this, you'll want to figure out where in the code that bad query is being executed. It might be helpful to use the log_statement and log_min_error_statement options in your postgresql server.
To get rid of the error, roll back the last (erroneous) transaction after you've fixed your code:
from django.db import transaction
transaction.rollback()
You can use try-except to prevent the error from occurring:
from django.db import transaction, DatabaseError
try:
    a.save()
except DatabaseError:
    transaction.rollback()
Refer to the Django documentation.
In Flask you just need to write:
curs = conn.cursor()
curs.execute("ROLLBACK")
conn.commit()
P.S. Documentation goes here https://www.postgresql.org/docs/9.4/static/sql-rollback.html
So, I ran into this same issue. The problem I was having here was that my database wasn't properly synced. Simple problems always seem to cause the most angst...
To sync your django db, from within your app directory, within terminal, type:
$ python manage.py syncdb
Edit: Note that if you are using django-south, running the '$ python manage.py migrate' command may also resolve this issue.
Happy coding!
In my experience, these errors happen this way:
try:
    code_that_executes_bad_query()
    # transaction on DB is now bad
except:
    pass

# transaction on db is still bad

code_that_executes_working_query()  # raises transaction error
There is nothing wrong with the second query, but since the real error was caught, the second query is the one that raises the (much less informative) error.
Edit: this only happens if the except clause catches IntegrityError (or any other low-level database exception). If you catch something like DoesNotExist, this error will not come up, because DoesNotExist does not corrupt the transaction.
The lesson here is don't do try/except/pass.
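On newer Django versions (1.6+), a sketch of a safer way to write that pattern is to wrap only the risky query in an atomic block and catch a specific exception outside it:

from django.db import transaction, IntegrityError

try:
    # atomic() creates a savepoint when already inside a transaction,
    # so a failure here doesn't poison the outer transaction.
    with transaction.atomic():
        code_that_executes_bad_query()
except IntegrityError:
    pass  # only the savepoint is rolled back

code_that_executes_working_query()  # no longer raises the transaction error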
I think the pattern priestc mentions is more likely to be the usual cause of this issue when using PostgreSQL.
However I feel there are valid uses for the pattern and I don't think this issue should be a reason to always avoid it. For example:
try:
    profile = user.get_profile()
except ObjectDoesNotExist:
    profile = make_default_profile_for_user(user)

do_something_with_profile(profile)
If you do feel OK with this pattern, but want to avoid explicit transaction handling code all over the place then you might want to look into turning on autocommit mode (PostgreSQL 8.2+): https://docs.djangoproject.com/en/dev/ref/databases/#autocommit-mode
DATABASES['default'] = {
    # ...your usual options...
    'OPTIONS': {
        'autocommit': True,
    }
}
I am unsure if there are important performance considerations (or of any other type).
just use rollback
Example code
try:
    cur.execute("CREATE TABLE IF NOT EXISTS test2 (id serial, qa text);")
except:
    cur.execute("rollback")
    cur.execute("CREATE TABLE IF NOT EXISTS test2 (id serial, qa text);")
You only need to run
rollback;
in PostgreSQL and that's it!
If you get this while in interactive shell and need a quick fix, do this:
from django.db import connection
connection._rollback()
originally seen in this answer
I encountered similar behavior while running a malfunctioning transaction on the postgres terminal. Nothing went through after this, as the database stays in an error state. However, as a quick fix, if you can afford to, skip the rollback; the following did the trick for me:
COMMIT;
I've just got a similar error here. I've found the answer in this link https://www.postgresqltutorial.com/postgresql-python/transaction/
client = PsqlConnection(config)
connection = client.connection
cursor = client.cursor

try:
    for query in list_of_querys:
        # query format => "INSERT INTO <database.table> VALUES (<values>)"
        cursor.execute(query)
    connection.commit()
except BaseException as e:
    connection.rollback()
Doing this, the subsequent queries you send to PostgreSQL will not return this error.
I had a similar problem. The solution was to migrate the db (manage.py syncdb, or manage.py schemamigration --auto <table name> if you use South).
In Flask shell, all I needed to do was a session.rollback() to get past this.
I have met this issue; the error comes up because an erroring transaction hasn't been ended properly. I found the PostgreSQL transaction control commands here:
Transaction Control
The following commands are used to control transactions
BEGIN TRANSACTION − To start a transaction.
COMMIT − To save the changes, alternatively you can use END TRANSACTION command.
ROLLBACK − To rollback the changes.
So I use END TRANSACTION to end the failed transaction; the code looks like this:
for key_of_attribute, command in sql_command.items():
    cursor = connection.cursor()
    g_logger.info("execute command :%s" % (command))
    try:
        cursor.execute(command)
        rows = cursor.fetchall()
        g_logger.info("the command:%s result is :%s" % (command, rows))
        result_list[key_of_attribute] = rows
        g_logger.info("result_list is :%s" % (result_list))
    except Exception as e:
        cursor.execute('END TRANSACTION;')
        g_logger.info("error command :%s and error is :%s" % (command, e))

return result_list
I just had this error too, but it was masking another, more relevant error message where the code was trying to store a 125-character string in a 100-character column:
DatabaseError: value too long for type character varying(100)
I had to debug through the code for the above message to show up, otherwise it displays
DatabaseError: current transaction is aborted
In response to @priestc and @Sebastian, what if you do something like this?
try:
    conn.commit()
except:
    pass

cursor.execute(sql)

try:
    return cursor.fetchall()
except:
    conn.commit()
    return None
I just tried this code and it seems to work, failing silently without having to care about any possible errors, and working when the query is good.
I believe @AnujGupta's answer is correct. However the rollback can itself raise an exception, which you should catch and handle:
from django.db import transaction, DatabaseError
try:
    a.save()
except DatabaseError:
    try:
        transaction.rollback()
    except transaction.TransactionManagementError:
        # Log or handle otherwise
        pass
If you find you're rewriting this code in various save() locations, you can extract-method:
import traceback

def try_rolling_back():
    try:
        transaction.rollback()
        log.warning('rolled back')  # example handling
    except transaction.TransactionManagementError:
        log.exception(traceback.format_exc())  # example handling
Finally, you can prettify it using a decorator that protects methods which use save():
from functools import wraps

def try_rolling_back_on_exception(fn):
    @wraps(fn)
    def wrapped(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except:
            traceback.print_exc()
            try_rolling_back()
    return wrapped

@try_rolling_back_on_exception
def some_saving_method():
    # ...
    model.save()
    # ...
Even if you implement the decorator above, it's still convenient to keep try_rolling_back() as an extracted method in case you need to use it manually for cases where specific handling is required, and the generic decorator handling isn't enough.
This is very strange behavior to me; I'm surprised that no one has thought of savepoints. In my code the failing query was expected behavior:
from django.db import transaction

@transaction.commit_on_success
def update():
    skipped = 0
    for old_model in OldModel.objects.all():
        try:
            Model.objects.create(
                group_id=old_model.group_uuid,
                file_id=old_model.file_uuid,
            )
        except IntegrityError:
            skipped += 1
    return skipped
I have changed code this way to use savepoints:
from django.db import transaction

@transaction.commit_on_success
def update():
    skipped = 0
    for old_model in OldModel.objects.all():
        sid = transaction.savepoint()
        try:
            Model.objects.create(
                group_id=old_model.group_uuid,
                file_id=old_model.file_uuid,
            )
        except IntegrityError:
            skipped += 1
            transaction.savepoint_rollback(sid)
        else:
            transaction.savepoint_commit(sid)
    return skipped
I am using the python package psycopg2 and I got this error while querying.
I kept re-running just the query and the execute call, but when I re-ran the connection setup (shown below), it resolved the issue. So re-run what is above in your script, i.e. the connection, because, as someone said above, I think it lost the connection or was out of sync.
import psycopg2

connection = psycopg2.connect(user="##",
                              password="##",
                              host="##",
                              port="##",
                              database="##")
cursor = connection.cursor()
It is an issue with a bad SQL execution which does not allow other queries to execute until the previous one is rolled back.
In pgAdmin 4 (4.24) there is a rollback option; one can try this.
You could also disable transactions via set_isolation_level(0).
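A minimal psycopg2 sketch of that suggestion (connection parameters are placeholders):

import psycopg2
from psycopg2 import extensions

# Placeholder DSN; substitute your own connection parameters.
conn = psycopg2.connect("dbname=test user=postgres")

# Isolation level 0 is autocommit: each statement runs in its own
# transaction, so one failed query can no longer leave a long-lived
# transaction stuck in the aborted state.
conn.set_isolation_level(extensions.ISOLATION_LEVEL_AUTOCOMMIT)

cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS test2 (id serial, qa text);")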