I'm using Python with MySQL and Django. I keep seeing this error and I can't figure out where the exception is being thrown:
Exception _mysql_exceptions.ProgrammingError: (2014, "Commands out of sync; you can't run this command now") in <bound method Cursor.__del__ of <MySQLdb.cursors.Cursor object at 0x20108150>> ignored
I have many try/except blocks in my code; if the exception occurred within one of those, I would see my own debugging messages. The exception above is obviously being caught somewhere, since my program does not abort when it is thrown.
I'm very puzzled, can someone help me out?
I had exactly that error (using MySQLdb and Django) and discovered that the reason it was "ignored" was that it occurred in a __del__ method. Exceptions in __del__ are categorically ignored:
See object.__del__ in the Python language reference (data model).
There doesn't seem to be any way to catch it from further up the stack (at least according to this thread), but you can edit MySQLdb/cursors.py or monkey-patch it to get your own __del__ in there that catches the exception and drops you into a pdb prompt or logs a full traceback.
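A minimal monkey-patching sketch of that idea (assuming the installed MySQLdb's Cursor actually defines __del__, which the error message above suggests; the logging setup is up to you):

import logging
import traceback

import MySQLdb.cursors

_original_del = MySQLdb.cursors.Cursor.__del__

def _noisy_del(self):
    try:
        _original_del(self)
    except Exception:
        # Exceptions raised in __del__ are normally swallowed; at least log
        # the full traceback so you can see where the 2014 error comes from.
        logging.error("Exception in Cursor.__del__:\n%s", traceback.format_exc())

MySQLdb.cursors.Cursor.__del__ = _noisy_del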
This is a Python Error.
See: http://eric.lubow.org/2009/python/pythons-mysqldb-2014-error-commands-out-of-sync/
It looks like there is a problem with your MySQLdb Query.
I believe this error can occur if you are using the same connection/cursor from multiple threads.
However, I don't think the creators of Django have made such a mistake, but if you are doing something yourself it can easily happen.
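If you do manage connections yourself, a hedged sketch of the safe pattern is to give each thread its own connection and cursor instead of sharing one (host/user/password/db below are placeholders):

import threading
import MySQLdb

def worker():
    # each thread opens its own connection instead of sharing a global one
    conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="appdb")
    cur = conn.cursor()
    try:
        cur.execute("SELECT 1")
        cur.fetchall()  # always consume the result set before reusing the connection
    finally:
        cur.close()
        conn.close()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()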
After printing out a bunch of stuff and debugging, I think I figured out the problem. One of the libraries I used didn't close the connection or the cursor. The problem only shows up if I iterate through a large amount of data, and it is very intermittent; I still don't know what's throwing the "Commands out of sync" exception. But now that we close both the connection and the cursor, I don't see the errors anymore.
Exceptions in object destructors (__del__) are ignored, which is what this message indicates. If you execute some MySQL command without fetching results from the cursor (e.g. 'create procedure' or 'insert'), the exception goes unnoticed until the cursor is destroyed.
If you want the exception to be raised where you can catch it, explicitly call cursor.close() somewhere before the cursor goes out of scope.
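A minimal sketch of that, assuming a connection object and a hypothetical statement that leaves results pending:

cursor = connection.cursor()
try:
    cursor.execute("CALL some_procedure()")  # placeholder statement; results not fetched
finally:
    # close() surfaces the "Commands out of sync" error here, where you can
    # catch it, instead of silently inside __del__
    cursor.close()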
Related
I'm running a telegram bot using python-telegram-bot==13.13 and psycopg==3.1.4 for database connections.
Yesterday this error was suddenly raised:
psycopg.transaction.OutOfOrderTransactionNesting: transaction commit at the wrong nesting level.
The function is more than a hundred lines long, so I'll include only the transaction where the error was raised:
According to the traceback, it happened on the first line here:
with conn.transaction():
    cur.execute(f'update testauth_profile set balance=balance-{price} where telegram_id={update.effective_chat.id}')
    cur.execute(f'update testauth_profile set balance=balance+{price-cost} where id=622')
    cur.execute(f"insert into testauth_log values ('{data['mdn']}','{name}',now(),{context.user_data['uid']},'{data['id']}')")
    cur.execute(f"insert into verifications values ('{data['id']}',now()+interval'15 minute',{update.effective_chat.id},0,'{data['mdn']}',{price-cost},{price},{context.user_data['lang']})")
where conn is a psycopg connection with autocommit=True, and cur is a cursor created from conn.
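For context, a simplified sketch of that setup (the connection string is a placeholder; the real code builds it elsewhere):

import psycopg

conn = psycopg.connect("dbname=testauth user=bot", autocommit=True)
cur = conn.cursor()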
After that, everything that happened on the database over the next 11 hours was rolled back, and I lost it permanently.
Any idea what caused this error, and how can I avoid it in the future?
I searched Google but found nothing about this error.
I had a look at the Postgres logs in /var/logs/postgres/ but there were no errors in the whole 11 hours, or even before.
The strange thing is that I never touched the code and it has been running for days without any problem.
The function where the error was raised had been executed many times before without problems.
I have an MLflow project that raises an exception. I execute that function using mlflow.run, but I get mlflow.exceptions.ExecutionException("Run (ID '<run_id>') failed").
Is there any way I could get the exception that is being raised where I am executing mlflow.run?
Or is it possible to send an mlflow.exceptions.ExecutionException with custom message set from within the project?
Unfortunately not at the moment. mlflow run starts a new process and there is no protocol for exception passing right now. In general the other project does not even have to be in the same language.
One workaround I can think of is to pass the exception via MLflow by setting a run tag, e.g.:
try:
    ...
except Exception as ex:
    mlflow.set_tag("exception", str(ex))
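A sketch of how the caller could then read that tag back afterwards (the project URI and the "exception" tag name are placeholders; synchronous=False keeps the SubmittedRun handle even when the run fails):

import mlflow
from mlflow.tracking import MlflowClient

submitted = mlflow.run("path/to/project", synchronous=False)
if not submitted.wait():  # wait() returns False if the run failed
    tags = MlflowClient().get_run(submitted.run_id).data.tags
    print(tags.get("exception"))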
I am debugging a Django application and want to suspend code execution at the point where an exception occurs, with the cursor pointing to the problematic place in the code. Django's pretty HTML error display would be helpful too, but it is not mandatory. My IDE is PyCharm.
If I set PyCharm to suspend on termination due to an exception, I never catch it, because Django handles the exception by rendering the HTML debug info, so exceptions never terminate the process. Setting DEBUG_PROPAGATE_EXCEPTIONS = True in settings.py makes the HTML debug info disappear, but execution still does not terminate.
If I set PyCharm to suspend on raise of an exception, I have to step past all the exceptions raised inside Python internals such as copy.py, decimal.py, gettext.py, etc., which is inconvenient (there are so many of them that I never reach the exceptions caused by my own code).
If I set "temporary" setup to suspend on raise of exception which occurs after given breakpoint (which I place at the last line of settings.py) then django server does not start.
Thanks in advance for your help.
This should happen automatically in PyCharm. What you need to do is set no breakpoints, but run as debug (click on the green bug icon). When the exception occurs, execution should automatically halt.
I'm running a Django application on Apache with mod_wsgi. The Apache error log is clean except for one mysterious error.
Exception exceptions.TypeError: "'NoneType' object is not callable" in <bound method SharedSocket.__del__ of <httplib.SharedSocket instance at 0x82fb138c>> ignored
I understand this to mean that some code is trying to call SharedSocket.__del__, but it is None. I have no idea what the reason for that is, or how to go about fixing it. Also, this error is marked as ignored, and it doesn't seem to be causing any actual problems other than filling the log file with error messages. Should I even bother trying to fix it?
It is likely that this comes about because, while handling an exception, a further exception occurs within the destructor of an object being destroyed, and the latter exception cannot be raised because of the state of the pending one. Within the Python C API, the details of such an exception can be written directly to the error log by PyErr_WriteUnraisable().
So, it isn't that the __del__ method is None; rather, some variable used by code executed within the __del__ method is None. You would need to look at the code for SharedSocket.__del__ to work out exactly what is going on.
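A contrived illustration of the kind of situation being described (not the actual SharedSocket code): a module-level name used inside __del__ may already have been cleared to None by the time the destructor runs at interpreter shutdown.

def helper():
    pass

class Example(object):
    def __del__(self):
        # If the module global `helper` has been set to None during interpreter
        # shutdown, this raises TypeError: 'NoneType' object is not callable,
        # which Python can only report as "... ignored".
        helper()

obj = Example()  # never deleted explicitly; destroyed at interpreter exit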
Note: this is more of a pointer than an answer, but I couldn't get this to work in a comment.
I did some googling on the error message and there seems to be a group of related problems that crop up in Apache + mod_wsgi + MySQL environments. The culprit may be that you are running out of simultaneous connections to MySQL because of process pooling, with each process maintaining an open connection to the DB server. There are also indications that some non-thread-safe code may be used in a multi-thread environment. Good luck.
I have a problem with Python + SQLAlchemy.
When something goes wrong (in my case an integrity error, due to a race condition) and the database error is raised, all subsequent requests result in this error being raised:
InvalidRequestError: The transaction is inactive due to a rollback in a subtransaction. Issue rollback() to cancel the transaction.
While I can prevent the original error (the race condition) from happening, I would like a more robust solution; I want to prevent a single error from crashing the entire application.
What is the best way to do this? Is there a way to tell Python to rollback the failed transaction?
The easiest thing is to make sure you are using a new SQLAlchemy Session when you start work in your controller. In /project/lib/base.py, add a method to BaseController:
def __before__(self):
    model.Session.close()
Session.close() will clear out the session and close any open transactions if there are any. You want to make sure that each time you use a session it's cleared when you're done with your work in the controller. Doing it at the start of the controller's handling of the request will make sure that it's always cleared, even if the thread's previous request had an exception and there is a rollback waiting.
Do you use yoursapp.lib.base.BaseController in your controllers?
You can look at
Handle mysql restart in SQLAlchemy
Also, you can catch the SQLAlchemy exception in BaseController with a try/except block and do a Session rollback() (see the sketch after the next point).
In BaseController the SA Session is removed: http://www.sqlalchemy.org/docs/05/session.html#lifespan-of-a-contextual-session
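Putting those two points together, a rough sketch of what that could look like in a Pylons-style base.py (WSGIController, model.Session and the import paths are assumptions about your project layout):

from pylons.controllers import WSGIController
from sqlalchemy.exc import SQLAlchemyError

from yourapp import model

class BaseController(WSGIController):
    def __call__(self, environ, start_response):
        try:
            return WSGIController.__call__(self, environ, start_response)
        except SQLAlchemyError:
            # Roll back the failed transaction so later requests don't keep
            # hitting "The transaction is inactive due to a rollback ...".
            model.Session.rollback()
            raise
        finally:
            # Remove the contextual session at the end of every request.
            model.Session.remove()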