InterfaceError (0, '') - python

I have built a site using Django and I am receiving this annoying error when I am trying to execute a query.
If I restart the Apache server, the error will go away for a short time.
Traceback:
File "/usr/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
100. response = callback(request, *callback_args, **callback_kwargs)
File "/home/fran/cron/views/set_caches.py" in set_caches
24. cursor.execute(query, [category['id']])
File "/usr/local/lib/python2.7/site-packages/django/db/backends/util.py" in execute
15. return self.cursor.execute(sql, params)
File "/usr/local/lib/python2.7/site-packages/django/db/backends/mysql/base.py" in execute
86. return self.cursor.execute(query, args)
File "build/bdist.linux-i686/egg/MySQLdb/cursors.py" in execute
155. charset = db.character_set_name()
Exception Type: InterfaceError at /blablabla/
Exception Value: (0, '')

This is caused by a global cursor. Try creating and closing the cursor within each method where a raw query is needed.
cursor = connection.cursor()
cursor.execute(query)
cursor.close()
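For the set_caches view in the traceback above, that means building and closing the cursor inside the function. A minimal sketch, assuming Django's default connection (the table and query here are illustrative):

from django.db import connection

def set_caches(request):
    # The cursor is created per call, so it is always bound to a live connection.
    cursor = connection.cursor()
    cursor.execute("SELECT id FROM category WHERE parent_id = %s", [1])
    rows = cursor.fetchall()
    cursor.close()
    # ... use rows to build the cache, then return a response ...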

You get this error when you have a db.close() call and later try to access the database without creating a new connection. Check whether you are closing the connection somewhere you don't mean to.
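A minimal reproduction of that failure mode, assuming MySQLdb (the driver in the traceback above; connection values are placeholders):

import MySQLdb

db = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
db.close()                    # the connection is gone from here on

cursor = db.cursor()          # still succeeds: no server round-trip yet
cursor.execute("SELECT 1")    # raises InterfaceError: (0, '')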

I agree with Moberg. This error is caused when we try to access the database after we have closed the connection. In my case it was caused by wrong indentation in the code. Below is my code.
conn = connect()
cur = conn.cursor()
tk = get_tickers(cur)
for t in tk:
    prices = read_price(t, cur)
    if prices != None:
        update_price(t, cur)
        print 'Price after update of ticker ', t, ':'
        p_open, p_high, p_low, p_close = read_price(t, cur)
        print p_open, p_high, p_low, p_close
    else:
        print 'Price for ', t, ' is not available'
conn.close()
I got the same error as reported by Marian. After dedenting conn.close(), everything worked well. Confirmed that global conn is not an issue.

I had the same problem as of April 2019, using Python 3.7 and MySQL 2.7.
At intermittent intervals my SQL statements would fail with (0, ''). I solved the issue by commenting out the closing of the database connection and only closing the cursors throughout my code.
import pymysql

def set_db():
    db = pymysql.connect(host='localhost',
                         user="root",
                         passwd="root",
                         db="DATABASE")
    return db

def execute_sql(cnx, sql_clause, fetch_all):
    if sql_clause and sql_clause is not None:
        try:
            cnx.execute(sql_clause)
        except Exception as e:
            print("Error in sql: " + sql_clause + str(e))
            return 0
        if fetch_all:
            result = cnx.fetchall()
        else:
            result = cnx.fetchone()
        return result
    else:
        print("Empty sql.")
        return 0

db = set_db()
cnx = db.cursor()
sql = "SELECT * FROM TABLE"
result = execute_sql(cnx, sql, 1)
cnx.close()   # close the cursor
# db.close()  # do not close the db connection
...

I had the same issue using threading with Python 3 and PyMySQL. I was getting deadlocks and then I would get hit with InterfaceError (0, '').
My issue was that I was trying to do a rollback on exception of the query. I believe this rollback was trying to use a connection that no longer existed, and that was giving me the interface error. I took the rollback out (because I am OK with not doing a rollback for this query) and just let things go. This fixed my issue.
def delete_q_msg(self, assetid, queuemsgtypeid, msgid):
    """
    Given the parameters below, remove items from the msg queue equal to
    or older than this. If appropriate, send them into a history table
    to be processed later.
    :param assetid:
    :param queuemsgtypeid:
    :param msgid:
    :return:
    """
    params = (assetid, queuemsgtypeid, msgid,)
    db_connection = self._connect_to_db()
    sp_sql = "{db}.ps_delete_q_msg".format(db=self._db_settings["database"])
    return_value = []

    try:
        with db_connection.cursor() as cursor:
            cursor.callproc(sp_sql, params)
            return_value = cursor.fetchall()
            db_connection.commit()
    except Exception as ex:
        # i think we dont want rollback here
        # db_connection.rollback()
        raise Exception(ex)
    finally:
        db_connection.close()

    return return_value

I can confirm this is caused by a global cursor which is then later used in some functions. My symptoms were exactly the same: intermittent interface errors that would temporarily be cleared up by an Apache restart.
from django.db import connection

cursor = connection.cursor()  # BAD: module-level (global) cursor

def foo():
    cursor.execute('select * from bar')
But I am using Django on top of Oracle 11.2, so I do not believe this is a bug in the MySQL/Python driver. It is probably due to the caching done by Apache/mod_wsgi.
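The fix is the same as in the accepted answer: create the cursor inside each function. A minimal sketch:

from django.db import connection

def foo():
    cursor = connection.cursor()  # GOOD: cursor bound to a live connection
    cursor.execute('select * from bar')
    rows = cursor.fetchall()
    cursor.close()
    return rows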

I had the same issue with Flask + PyMySQL; I was getting an empty tuple as the result in the except: block, something like "(\"(0, '')\",)" to be specific.
It turned out that the connection was getting closed and the code later tried to access it, which resulted in this error.
I solved it by referring to the solutions above and using a function for the connection, which guaranteed me a live connection every time I had to access the db.
You can recreate this issue by inserting conn.close() just before accessing the cursor.
For reference I used this site which helped me solve this issue.
https://hackersandslackers.com/python-mysql-pymysql/
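A minimal sketch of that connection helper, assuming PyMySQL (host and credentials are placeholders):

import pymysql

def get_connection():
    # Return a fresh connection for each unit of work, so no code
    # ever touches a connection that was closed elsewhere.
    return pymysql.connect(host='localhost',
                           user='user',
                           password='password',
                           db='mydb',
                           cursorclass=pymysql.cursors.DictCursor)

conn = get_connection()
try:
    with conn.cursor() as cursor:
        cursor.execute("SELECT 1")
        print(cursor.fetchone())
finally:
    conn.close()  # safe: the next caller gets its own connection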

For me, removing the conn.close() from my function worked. I was trying to access the database again after closing it.
I am using Flask with AWS.
You can also try restarting your Flask application if it has been running for a long time. And if, like me, you are using AWS RDS with MySQL Workbench, check whether your session has expired and update the access key and ID.
Hope this helps.

I had this same problem, and what worked for me in Django is described in this answer. It consists of replacing
'ENGINE': 'django.db.backends.mysql'
with
'ENGINE': 'mysql_server_has_gone_away'
in settings.DATABASES['default'] and installing the package below with pip:
mysql_server_has_gone_away==1.0.0
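For reference, a sketch of the resulting settings.py entry (all values other than ENGINE are placeholders):

# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'mysql_server_has_gone_away',  # was 'django.db.backends.mysql'
        'NAME': 'mydb',
        'USER': 'user',
        'PASSWORD': 'password',
        'HOST': 'localhost',
    }
}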

with connection.cursor() as cursor:
    res = cursor.execute(sql)

Related

How to properly organise the database calls with Python and MySQL?

I have a code like this:
import mysql.connector as mysql
from generate_records import generateRecords

devicesQuery = "CALL iot.sp_sensors_overview()"

try:
    db = mysql.connect(
        user="username",
        password="password",
        host="hostname",
        database="iot"
    )
    cursor = db.cursor(dictionary=True, buffered=True)
    cursor.execute(devicesQuery)
    for sensor in cursor:
        generateRecords(sensor, db)
    cursor.close()
except mysql.Error as error:  # the module is imported under the alias "mysql"
    print("Error:")
    print(error)
else:
    db.close()
The purpose of the generateRecords function is, obviously, to generate records and run INSERT queries against a different table.
It seems I am doing something wrong, because no matter what I try, I get different errors here, like mysql.connector.errors.OperationalError: MySQL Connection not available..
(upd) I also tried to change the code as suggested (see the example below), with no luck; I still receive the MySQL Connection not available. error.
rows = cursor.fetchall()
cursor.close()

for sensor in rows:
    cursor2 = db.cursor()
    generateRecords(sensor, cursor2)
So, should I create a new connection within generateRecords function, or pass something different within it, or use some kind of different approach here?
Thank you!
Finally I found what was wrong: I was using a plain query to call the stored procedure. Using cursor.callproc("sp_sensors_overview") instead fixed my issue, and now I can create the next cursor without errors.
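A sketch of that fix. Note that with mysql.connector, result sets produced by callproc are read through cursor.stored_results() rather than by iterating the cursor directly (connection values are the question's placeholders):

import mysql.connector as mysql
from generate_records import generateRecords

db = mysql.connect(user="username", password="password",
                   host="hostname", database="iot")
cursor = db.cursor(buffered=True)
cursor.callproc("sp_sensors_overview")

# callproc exposes each result set via stored_results()
for result in cursor.stored_results():
    for sensor in result.fetchall():
        generateRecords(sensor, db)

cursor.close()
db.close()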

How to make this Flask-mysql insert commit?

I'm still using Flask-MySQL.
I'm getting the database context (the mysql variable) just fine, and I can query the database and get results. It's only the insert that is not working: it doesn't complain (no exceptions are thrown), and the insert method returns True.
It should insert the record when it commits, but for some reason, as I watch the database with MySQL Workbench, nothing gets inserted into the table (and no exceptions are thrown from the insert method):
I'm passing in this to insertCmd:
"INSERT into user(username, password) VALUES ('test1','somepassword');"
I've checked the length of the column in the database, and copied the command into MySQL Workbench (where it successfully inserts the row into the table).
I'm at a loss. The examples I've seen all seem to follow this format, and I have a good database context. You can see other things I've tried in the comments.
def insert(mysql, insertCmd):
    try:
        #connection = mysql.get_db()
        cursor = mysql.connect().cursor()
        cursor.execute(insertCmd)
        mysql.connect().commit()
        #mysql.connect().commit
        #connection.commit()
        return True
    except Exception as e:
        print("Problem inserting into db: " + str(e))
        return False
You need to keep a handle to the connection; you keep overwriting it with each mysql.connect() call.
Here is a simplified example:
con = mysql.connect()
cursor = con.cursor()

def insert(mysql, insertCmd):
    try:
        cursor.execute(insertCmd)
        con.commit()
        return True
    except Exception as e:
        print("Problem inserting into db: " + str(e))
        return False
If mysql is your connection, then you can just commit on that, directly:
def insert(mysql, insertCmd):
    try:
        cursor = mysql.cursor()
        cursor.execute(insertCmd)
        mysql.commit()
        return True
    except Exception as e:
        print("Problem inserting into db: " + str(e))
        return False
Apparently, you MUST separate the connect and cursor calls, or it won't work.
To get a cursor, this will work: cursor = mysql.connect().cursor().
However, as Burchan Khalid so adeptly pointed out, any attempt after that to make a connection object in order to commit will wipe out the work you did using the cursor.
So you have to do the following (no shortcuts):
connection = mysql.connect()
cursor = connection.cursor()
cursor.execute(insertCmd)
connection.commit()

pymysql.err.Error: Already closed

I am trying to create a login function, but it only works once. For example, when I give a wrong userid and password, I get the correct error message "Couldn't login". But after cancelling that message and giving the correct userid and password, I get "pymysql.err.Error: Already closed". Below is the sample code.
import pymysql

# Connect to the database
connection = pymysql.connect(host='localhost',
                             user='root',
                             password='',
                             db='python_code',
                             charset='utf8mb4',
                             cursorclass=pymysql.cursors.DictCursor)

class LoginModel:
    def check_user(self, data):
        try:
            with connection.cursor() as cursor:
                # Read a single record
                sql = "SELECT `username` FROM `users` WHERE `username`=%s"
                cursor.execute(sql, (data.username))
                user = cursor.fetchone()
                print(user)
                if user:
                    if (user, data.password):
                        return user
                    else:
                        return False
                else:
                    return False
        finally:
            connection.close()
You have a mismatch with respect to the number of times you're creating the connection (once) and the number of times you're closing the connection (once per login attempt).
One fix would be to move your:
connection = pymysql.connect(host='localhost',
                             user='root',
                             password='',
                             db='python_code',
                             charset='utf8mb4',
                             cursorclass=pymysql.cursors.DictCursor)

into your def check_user(). It would work because you'd create and close the connection on each invocation (as others have pointed out, the finally clause always gets executed).
That's not a great design, because getting database connections tends to be relatively expensive. So keeping the connection creation outside of the method is preferred... which means you must remove the connection.close() from within the method.
I think you're mixing up connection.close() with cursor.close(). You want to do the latter, not the former. In your example you don't have to explicitly close the cursor because that happens automatically with your with connection.cursor() as cursor: line.
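A sketch of that preferred fix: keep the module-level connection and drop the close from the method, letting the with block close only the cursor:

class LoginModel:
    def check_user(self, data):
        # No finally: connection.close() here; the connection is reused
        # across login attempts, and the with block closes the cursor.
        with connection.cursor() as cursor:
            sql = "SELECT `username` FROM `users` WHERE `username`=%s"
            cursor.execute(sql, (data.username,))
            user = cursor.fetchone()
            return user if user else False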
Change finally to except, or remove the try block completely.
This is the culprit code:
finally:
    connection.close()
Per the docs:
"A finally clause is always executed before leaving the try statement, whether an exception has occurred or not"
From: https://docs.python.org/2/tutorial/errors.html
You didn't describe alternative behavior for what you would like to see happen instead of this, but my answer addresses the crux of your question.
Had the same issue. The finally clause is needed for Postgres with the psycopg2 driver: when used with a context manager (with clause), it closes the cursor but not the connection. The same does not apply to PyMySQL.

pymssql multi insert single commit

I am trying to make multiple insertions into the database, with a rollback occurring unless all insertions complete successfully. I can do this easily in T-SQL by wrapping the entire block like so:
BEGIN TRANSACTION
BEGIN TRY
    --INSERTIONS GO HERE
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION;
    SELECT Error = 1
END CATCH
Now if I try to replicate this behaviour in Python using pymssql, I try the following:

sql = """ SOME SQL CODE HERE """
try:
    cursor = DB.execute(sql)
except:
    DB.rollback()
    print('Fail')
    return False

sql = """ SOME DIFFERENT SQL CODE HERE """
try:
    cursor = DB.execute(sql)
except:
    DB.rollback()
    print('Fail')
    return False

DB.commit()
print('Success')
return True
This results in none of the transactions being committed, with no changes seen in the DB. Also, if I try to commit after a single insertion using this same method, the insertion is made in the DB; but due to some complex parent-child dependencies, the task requires that either all of the insertions are made, or none at all.
I should also mention that a persistent DB connection is kept open using a singleton, which simply overrides the regular connection methods but allows only a single connection to be open:
def __init__(self, connid='one'):
    self.ensure_conn(connid)

def ensure_conn(self, connid='one'):
    conn = getattr(self.connection_stack, connid, None)
    if conn is None:
        conn = pymssql.connect(
            self.server, self.user, self.password, self.database)
        self.connection_stack[connid] = conn

def conn(self, connid='one'):
    self.ensure_conn(connid)
    if connid in self.connection_stack:
        return self.connection_stack[connid]
    else:
        return None
I have tried to find examples of this online but the problem seems to be somewhat unique, so any input or suggestions would be greatly appreciated.
The solution I found was to leverage stored procedures in TSQL and use pymssql's rpc capabilities. This requires EXEC permissions at the SQL account level, but can be limited to set stored procs for safety concerns.
By doing this it allows you to leverage the transactional behaviour for multi-inserts on different tables.
cursor.callproc(name, args)
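A sketch of that approach, assuming a stored procedure dbo.ps_multi_insert that wraps its inserts in the TRY/CATCH transaction shown at the top of the question (all names, credentials, and arguments here are illustrative):

import pymssql

conn = pymssql.connect('hostname', 'user', 'password', 'database')
try:
    cursor = conn.cursor()
    # The proc owns the transaction: it commits on success and
    # rolls back internally if any insert fails.
    cursor.callproc('dbo.ps_multi_insert', ('parent_row', 'child_row'))
    conn.commit()
finally:
    conn.close()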

DatabaseError: current transaction is aborted, commands ignored until end of transaction block?

I got a lot of errors with the message:
"DatabaseError: current transaction is aborted, commands ignored until end of transaction block"
after changing from python-psycopg to python-psycopg2 as the Django project's database engine.
The code remains the same; I just don't know where those errors come from.
This is what postgres does when a query produces an error and you try to run another query without first rolling back the transaction. (You might think of it as a safety feature, to keep you from corrupting your data.)
To fix this, you'll want to figure out where in the code that bad query is being executed. It might be helpful to use the log_statement and log_min_error_statement options in your postgresql server.
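For reference, those two options live in postgresql.conf (a sketch; tune the values to taste):

# postgresql.conf
log_statement = 'all'              # log every statement
log_min_error_statement = error    # log the statement that triggered each error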
To get rid of the error, roll back the last (erroneous) transaction after you've fixed your code:
from django.db import transaction
transaction.rollback()
You can use try-except to prevent the error from occurring:
from django.db import transaction, DatabaseError
try:
    a.save()
except DatabaseError:
    transaction.rollback()

Refer: Django documentation
In Flask you just need to write:
curs = conn.cursor()
curs.execute("ROLLBACK")
conn.commit()
P.S. Documentation is here: https://www.postgresql.org/docs/9.4/static/sql-rollback.html
So, I ran into this same issue. The problem I was having here was that my database wasn't properly synced. Simple problems always seem to cause the most angst...
To sync your django db, from within your app directory, within terminal, type:
$ python manage.py syncdb
Edit: Note that if you are using django-south, running the '$ python manage.py migrate' command may also resolve this issue.
Happy coding!
In my experience, these errors happen this way:
try:
    code_that_executes_bad_query()
    # transaction on DB is now bad
except:
    pass

# transaction on db is still bad
code_that_executes_working_query()  # raises transaction error
There is nothing wrong with the second query, but since the real error was caught, the second query is the one that raises the (much less informative) error.
edit: this only happens if the except clause catches IntegrityError (or any other low-level database exception). If you catch something like DoesNotExist, this error will not come up, because DoesNotExist does not corrupt the transaction.
The lesson here is don't do try/except/pass.
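If you do need to catch database errors mid-transaction, newer Django versions let you confine the damage with an atomic block. A sketch (Django 1.6+):

from django.db import IntegrityError, transaction

try:
    with transaction.atomic():
        code_that_executes_bad_query()
except IntegrityError:
    pass  # only the atomic block is rolled back;
          # the outer transaction is still usable

code_that_executes_working_query()  # no longer raises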
I think the pattern priestc mentions is more likely to be the usual cause of this issue when using PostgreSQL.
However I feel there are valid uses for the pattern and I don't think this issue should be a reason to always avoid it. For example:
try:
    profile = user.get_profile()
except ObjectDoesNotExist:
    profile = make_default_profile_for_user(user)

do_something_with_profile(profile)
If you do feel OK with this pattern, but want to avoid explicit transaction handling code all over the place then you might want to look into turning on autocommit mode (PostgreSQL 8.2+): https://docs.djangoproject.com/en/dev/ref/databases/#autocommit-mode
DATABASES['default'] = {
    # .. your usual options ...
    'OPTIONS': {
        'autocommit': True,
    }
}
I am unsure if there are important performance considerations (or of any other type).
Just use rollback. Example code:

try:
    cur.execute("CREATE TABLE IF NOT EXISTS test2 (id serial, qa text);")
except:
    cur.execute("rollback")
    cur.execute("CREATE TABLE IF NOT EXISTS test2 (id serial, qa text);")
You only need to run
rollback;
in PostgreSQL and that's it!
If you get this while in interactive shell and need a quick fix, do this:
from django.db import connection
connection._rollback()
originally seen in this answer
I encountered similar behavior while running a malfunctioning transaction on the postgres terminal. Nothing went through after this, as the database was in a state of error. However, as a quick fix, if you can afford to skip the rollback, the following did the trick for me:
COMMIT;
I just got a similar error here. I found the answer at this link: https://www.postgresqltutorial.com/postgresql-python/transaction/

client = PsqlConnection(config)
connection = client.connection
cursor = client.cursor

try:
    for query in list_of_querys:
        # query format => "INSERT INTO <database.table> VALUES (<values>)"
        cursor.execute(query)
    connection.commit()
except BaseException as e:
    connection.rollback()

Doing this, the following queries you send to PostgreSQL will not return an error.
I've got a similar problem. The solution was to migrate the db (manage.py syncdb, or manage.py schemamigration --auto <table name> if you use south).
In Flask shell, all I needed to do was a session.rollback() to get past this.
I have met this issue too; the error comes up because an erroneous transaction has not been ended properly. I found the PostgreSQL transaction control commands here:
Transaction Control
The following commands are used to control transactions:
BEGIN TRANSACTION − to start a transaction.
COMMIT − to save the changes; alternatively you can use the END TRANSACTION command.
ROLLBACK − to roll back the changes.
So I use END TRANSACTION to end the erroneous transaction, with code like this:
for key_of_attribute, command in sql_command.items():
    cursor = connection.cursor()
    g_logger.info("execute command :%s" % (command))
    try:
        cursor.execute(command)
        rows = cursor.fetchall()
        g_logger.info("the command:%s result is :%s" % (command, rows))
        result_list[key_of_attribute] = rows
        g_logger.info("result_list is :%s" % (result_list))
    except Exception as e:
        cursor.execute('END TRANSACTION;')
        g_logger.info("error command :%s and error is :%s" % (command, e))
return result_list
I just had this error too, but it was masking another, more relevant error message where the code was trying to store a 125-character string in a 100-character column:
DatabaseError: value too long for type character varying(100)
I had to debug through the code for the above message to show up; otherwise it displays
DatabaseError: current transaction is aborted
In response to @priestc and @Sebastian, what if you do something like this?

try:
    conn.commit()
except:
    pass

cursor.execute(sql)

try:
    return cursor.fetchall()
except:
    conn.commit()
    return None
I just tried this code and it seems to work, failing silently without having to care about any possible errors, and working when the query is good.
I believe @AnujGupta's answer is correct. However, the rollback can itself raise an exception, which you should catch and handle:
from django.db import transaction, DatabaseError
try:
    a.save()
except DatabaseError:
    try:
        transaction.rollback()
    except transaction.TransactionManagementError:
        # Log or handle otherwise
        pass

If you find you're rewriting this code in various save() locations, you can extract-method:

import traceback

def try_rolling_back():
    try:
        transaction.rollback()
        log.warning('rolled back')  # example handling
    except transaction.TransactionManagementError:
        log.exception(traceback.format_exc())  # example handling

Finally, you can prettify it using a decorator that protects methods which use save():

from functools import wraps

def try_rolling_back_on_exception(fn):
    @wraps(fn)
    def wrapped(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except:
            traceback.print_exc()
            try_rolling_back()
    return wrapped

@try_rolling_back_on_exception
def some_saving_method():
    # ...
    model.save()
    # ...
Even if you implement the decorator above, it's still convenient to keep try_rolling_back() as an extracted method in case you need to use it manually for cases where specific handling is required, and the generic decorator handling isn't enough.
This is very strange behavior for me. I'm surprised that no one has thought of savepoints. In my code, a failing query was expected behavior:

from django.db import transaction

@transaction.commit_on_success
def update():
    skipped = 0
    for old_model in OldModel.objects.all():
        try:
            Model.objects.create(
                group_id=old_model.group_uuid,
                file_id=old_model.file_uuid,
            )
        except IntegrityError:
            skipped += 1
    return skipped

I have changed the code this way to use savepoints (one savepoint per attempted insert, created inside the loop):

from django.db import transaction

@transaction.commit_on_success
def update():
    skipped = 0
    for old_model in OldModel.objects.all():
        sid = transaction.savepoint()
        try:
            Model.objects.create(
                group_id=old_model.group_uuid,
                file_id=old_model.file_uuid,
            )
        except IntegrityError:
            skipped += 1
            transaction.savepoint_rollback(sid)
        else:
            transaction.savepoint_commit(sid)
    return skipped
I am using the Python package psycopg2, and I got this error while querying. I kept running just the query and then the execute function, but when I reran the connection (shown below), it resolved the issue. So rerun what is above your script, i.e. the connection, because as someone said above, I think it lost the connection or was out of sync or something.

connection = psycopg2.connect(user="##",
                              password="##",
                              host="##",
                              port="##",
                              database="##")
cursor = connection.cursor()
It is an issue with a bad SQL execution which does not allow other queries to execute until the previous one is rolled back.
In pgAdmin 4 (4.24) there is a rollback option; one can try that.
You can also disable transactions via set_isolation_level(0).
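With psycopg2 that looks like the sketch below; level 0 is autocommit, and newer code can set conn.autocommit = True instead (connection string is a placeholder):

import psycopg2
import psycopg2.extensions

conn = psycopg2.connect("dbname=mydb user=user")
# Autocommit: each statement commits immediately, so a failed
# statement cannot leave an aborted transaction hanging around.
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)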
