I'm running a Django application on Apache with mod_wsgi. The Apache error log is clean except for one mysterious error.
Exception exceptions.TypeError: "'NoneType' object is not callable" in <bound method SharedSocket.__del__ of <httplib.SharedSocket instance at 0x82fb138c>> ignored
I understand that this means that some code is trying to call SharedSocket.__del__, but it is None. I have no idea what the reason for that is, or how to go about fixing it. Also, this error is marked as ignored, and it doesn't seem to be causing any actual problems other than filling the log file with error messages. Should I even bother trying to fix it?
This likely comes about because, while one exception is being handled, a further exception occurs within the destructor of an object being destroyed, and the latter exception cannot be raised because of the state of the pending one. Within the Python C API, the details of such an exception can be written directly to the error log by PyErr_WriteUnraisable().
So, it isn't that the __del__ method is None, but that some variable used by code executed within the __del__ method is None. You would need to look at the code for SharedSocket.__del__ to work out exactly what is going on.
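To see the mechanism in isolation, here is a minimal sketch of how an exception inside __del__ gets reported as "ignored". This mimics older Pythons, where module globals are set to None during interpreter shutdown; Demo and helper are made-up names:

def helper():
    pass

class Demo(object):
    def __del__(self):
        # During interpreter shutdown, module globals may already have
        # been cleared to None, so this call raises "TypeError:
        # 'NoneType' object is not callable". Because we are inside a
        # destructor, Python cannot propagate the exception and prints
        # "Exception ... ignored" instead.
        helper()

demo = Demo()  # lives until shutdown, so __del__ runs during teardown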
Note: this is more of a pointer than an answer, but I couldn't get this to work in a comment.
I did some googling on the error message and there seems to be a group of related problems that crop up in Apache + mod_wsgi + MySQL environments. The culprit may be that you are running out of simultaneous connections to MySQL because of process pooling, with each process maintaining an open connection to the DB server. There are also indications that some non-thread-safe code may be used in a multi-thread environment. Good luck.
When getting rows from Cassandra via Pycassa, a TApplicationException is sometimes raised: get_slice failed: unknown result.
I'm not able to reproduce this exception, nor can I find any documentation on this issue. Right now I'm running Cassandra as a single node (for development), and the exception is always raised in a context where a lot of concurrent requests are happening.
I'd like to know if the reason for this is something like a performance issue or if it is related to something else (and therefore whether it might disappear when running more than one node in production).
The Cassandra version is 1.0.7, and the output from the log is the following:
DEBUG 17:45:58,253 Thrift transport error occurred during processing of message.
org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
at org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2877)
at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
DEBUG 17:45:58,315 logged out: #<User allow_all groups=[]>
If you are using multiprocessing, pycassa.pool is not multiprocessing-safe.
I was receiving "TApplicationException: get_slice failed: unknown result" under similar circumstances. I created multiple pycassa.pools, one pool per process, and the problem appears solved.
The documentation has a special page advising about multiprocessing and pools, but it does not list what error you will get if you share one pool across processes. I think this is it!
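For illustration, here is a minimal sketch of the one-pool-per-process pattern; the keyspace, column family, server address, and keys are all made up:

import multiprocessing
import pycassa

def worker(keys):
    # Build the pool inside the child process, after the fork, so each
    # process owns its own connections; sharing a single pool across
    # forked processes is what appears to trigger the error.
    pool = pycassa.ConnectionPool('MyKeyspace', server_list=['localhost:9160'])
    cf = pycassa.ColumnFamily(pool, 'MyColumnFamily')
    for key in keys:
        print(cf.get(key))
    pool.dispose()

if __name__ == '__main__':
    chunks = [['key1', 'key2'], ['key3', 'key4']]
    procs = [multiprocessing.Process(target=worker, args=(c,)) for c in chunks]
    for p in procs:
        p.start()
    for p in procs:
        p.join()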
I am currently developing an application based on Flask. It runs fine when spawning the server manually using app.run(). I've now tried to run it through mod_wsgi. Strangely, I get a 500 error and nothing in the logs. I've investigated a bit; here are my findings.
Inserting a line like print >>sys.stderr, "hello" works as expected: the message shows up in the error log.
Calling a method without using a template works just fine; no 500 error.
Using a simple template works fine too.
But as soon as I trigger a database access inside the template (for example, looping over a query), I get the error.
My gut tells me that it's SQLAlchemy that emits an error, and that maybe some logging config causes the log to be discarded at some point in the application.
Additionally, for testing, I am using SQLite, which, as far as I can recall, can only be accessed from one thread. So if mod_wsgi spawns more threads, it may break the app.
I am a bit at a loss, because it only breaks when running behind mod_wsgi, which also seems to swallow my errors. What can I do to make the errors bubble up into the Apache error_log?
For reference, the code can be seen on this github permalink.
Turns out I was not completely wrong. The exception was indeed thrown by SQLAlchemy, and as it's streamed to stdout by default, mod_wsgi silently ignored it (as far as I can tell).
To answer my main question: How to see the errors produced by the WSGI app?
It's actually very simple: redirect your logs to stderr. The only thing you need to do is add the following to your WSGI script:
import logging, sys

# Route all log output to stderr, which mod_wsgi forwards to the Apache error log.
logging.basicConfig(stream=sys.stderr)
Now, this is the most mundane logging config. As I haven't put anything else in place for my application yet, it will do. But I guess once the application matures, you will have a more sophisticated logging config anyway, so this won't bite you.
But for quick and dirty debugging, this will do just fine.
I had a similar problem: occasional "Internal Server Error" responses without logs. When you use mod_wsgi, you should remove app.run(), because it always starts a local WSGI server, which we do not want when the application is deployed to mod_wsgi. See the docs. I do not know if this is your case, but I hope this helps.
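For reference, the usual pattern is to guard the call so the development server only starts when the script is run directly (a minimal sketch):

from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello'

if __name__ == '__main__':
    # Only reached when the script is executed directly; under
    # mod_wsgi this block never runs, so no second server starts.
    app.run()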
If you put this into your config.py, it will help dramatically in propagating errors up to the Apache error log:
PROPAGATE_EXCEPTIONS = True
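Assuming a typical setup where config.py is loaded onto the app object, that amounts to something like:

from flask import Flask

app = Flask(__name__)
app.config.from_object('config')  # picks up PROPAGATE_EXCEPTIONS from config.py

# or, equivalently, set it directly:
app.config['PROPAGATE_EXCEPTIONS'] = True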
I'm trying to run a Disco job using map and reduce functions that are deserialized after being passed over a TCP socket using the marshal library. Specifically, I'm unpacking them with
import marshal, types

code = marshal.loads(data_from_tcp)  # data_from_tcp holds the marshalled code object
func = types.FunctionType(code, globals(), "func")
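For context, the sending side produces this payload by marshalling the function's code object, along these lines (simplified):

import marshal

def my_map(entry):
    return entry.split()

# Only the code object is marshallable; the receiver rebuilds the
# function with types.FunctionType as shown above.
data_for_tcp = marshal.dumps(my_map.__code__)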
I've already tested plain Disco jobs (with locally defined functions) on the same system, and they work fine. However, when I run a Disco job with the new functions, the jobs keep failing and I keep getting the error message localhost WARNING: [map:0] Could not parse worker event: invalid_length
I've searched the documentation, and there is no mention that I could find of a "worker event", or of an invalid_length. After doing a grep on the source code, I find a single instance of the phrase "Could not parse worker event:", specifically in the file master/src/disco_worker.erl. I'm not familiar with Erlang, and have no idea how this works.
What is causing this problem? Should I do something else to circumvent it?
EDIT: After more debugging, I've realized that this error is tied to my use of the string.split() method inside my test-case function. Whenever it is used (even on strings that are not part of the input), this error is raised. I've verified that the method does exist on the object, but calling it seems to cause problems. Any thoughts?
EDIT 2: In addition, any use of the re.split function achieves the same effect.
EDIT 3: It appears that calling any string function on the input string in the map function creates this same error.
In my case, this warning always occurred when I printed something to sys.stderr in the map function (and the job failed in the end).
The documentation on the worker protocol says: "Workers should not write anything to stderr, except messages formatted as described below. stdout is also initially redirected to stderr."
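If I read the protocol description correctly, every event on stderr must be framed as "<name> <payload-length> <payload>\n" with a JSON payload, so any raw print is exactly what the master fails to parse. A hedged sketch of emitting a well-formed message:

import sys
import json

def send_msg(text):
    # Sketch based on my reading of the Disco worker protocol docs;
    # unframed output on stderr is what produces "Could not parse
    # worker event: invalid_length" on the master.
    payload = json.dumps(text)
    sys.stderr.write('MSG %d %s\n' % (len(payload), payload))
    sys.stderr.flush()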
I'm using Pylons in combination with the WMI module to do some basic system monitoring of a couple of machines. For POSIX-based systems everything is simple; for Windows, not so much.
I'm doing a request to the Pylons server to get the current CPU load, but it's not working well, or at least not with the WMI module. First I simply did (something like) this:
import wmi

c = wmi.WMI()
for cpu in c.Win32_Processor():
    value = cpu.LoadPercentage
However, that gave me an error when accessing this module via Pylons (GET http://ip:port/cpu):
raise x_wmi_uninitialised_thread ("WMI returned a syntax error: you're probably running inside a thread without first calling pythoncom.CoInitialize[Ex]")
x_wmi_uninitialised_thread: <x_wmi: WMI returned a syntax error: you're probably running inside a thread without first calling pythoncom.CoInitialize[Ex] (no underlying exception)>
Looking at http://timgolden.me.uk/python/wmi/tutorial.html, I wrapped the code according to the example under the topic "CoInitialize & CoUninitialize", which makes the code work, but it keeps throwing "Win32 exception occurred releasing IUnknown at..."
And then, looking at http://mail.python.org/pipermail/python-win32/2007-August/006237.html and the follow-up post, I tried to follow that; however, pythoncom._GetInterfaceCount() is always 20.
I'm guessing this is somehow related to Pylons spawning worker threads and crap like that, but I'm kinda lost here; advice would be nice.
Thanks in advance,
Anders
EDIT: If you are doing something similar, don't bother with the WMI module; simply use http://msdn.microsoft.com/en-us/library/aa394531%28VS.85%29.aspx , and you don't have to worry about threading crap like this.
Add "sys.coinit_flags = 0" after your "import sys" line and before the "import pythoncom" line. That worked for me, although I don't know why.
To me it sounds like Windows is not enjoying the way you are doing this kind of work on what are probably temporary worker threads (as you point out).
If this is the case, and you can't get things to work, one possible workaround would be to re-factor your application slightly so that there is a service thread running at all times which you can query for this information rather than setting everything up and asking for it on demand. It might not even need to be a thread, perhaps just a utility class instance which you get set up when the application starts, protected with a lock to prevent concurrent access.
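A rough sketch of that idea, with a dedicated poller thread owning the COM/WMI state (the class and method names here are made up):

import threading
import time

import pythoncom
import wmi

class CpuMonitor(object):
    # One background thread owns the WMI connection; request-handler
    # threads only read the cached value under a lock.
    def __init__(self, interval=5.0):
        self._lock = threading.Lock()
        self._loads = None
        t = threading.Thread(target=self._poll, args=(interval,))
        t.daemon = True
        t.start()

    def _poll(self, interval):
        pythoncom.CoInitialize()  # initialize COM for this thread only
        try:
            c = wmi.WMI()
            while True:
                loads = [cpu.LoadPercentage for cpu in c.Win32_Processor()]
                with self._lock:
                    self._loads = loads
                time.sleep(interval)
        finally:
            pythoncom.CoUninitialize()

    def current_loads(self):
        with self._lock:
            return self._loads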
I'm using Python with MySQL and Django. I keep seeing this error, and I can't figure out where the exception is being thrown:
Exception _mysql_exceptions.ProgrammingError: (2014, "Commands out of sync; you can't run this command now") in <bound method Cursor.__del__ of <MySQLdb.cursors.Cursor object at 0x20108150>> ignored
I have many try/except blocks in my code; if the exception occurred within one of those, I would see my own debugging messages. The above exception is obviously being caught somewhere, since my program does not abort when it is thrown.
I'm very puzzled, can someone help me out?
I had exactly that error (using MySQLdb and Django) and discovered that the reason it was "ignored" was that it occurred in a __del__ method. Exceptions in __del__ are categorically ignored; see object.__del__ in the Python data model documentation.
There doesn't seem to be any way to catch it from further up the stack (at least according to this thread), but you can edit MySQLdb/cursors.py or monkey-patch to get your own __del__ in there that catches the exception and drops you into a pdb prompt or logs a full traceback.
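A hedged sketch of that monkey-patch, assuming MySQLdb 1.2-style cursors where BaseCursor defines __del__:

import logging
import traceback

import MySQLdb.cursors

_original_del = MySQLdb.cursors.BaseCursor.__del__

def _logged_del(self):
    # Exceptions raised inside __del__ are normally just printed as
    # "ignored"; catching the exception here lets us log a full
    # traceback (or drop into pdb) before that happens.
    try:
        _original_del(self)
    except Exception:
        logging.error("Exception in Cursor.__del__:\n%s", traceback.format_exc())

MySQLdb.cursors.BaseCursor.__del__ = _logged_del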
This is a Python error.
See: http://eric.lubow.org/2009/python/pythons-mysqldb-2014-error-commands-out-of-sync/
It looks like there is a problem with your MySQLdb query.
I believe this error can occur if you are using the same connection/cursor from multiple threads.
However, I don't think the creators of Django have made such a mistake, but if you are doing something yourself, it can easily happen.
After printing out a bunch of stuff and debugging, I think I figured out the problem. One of the libraries I used didn't close the connection or the cursor. The problem only shows up if I iterate through a large amount of data, and it is very intermittent. I still don't know who's throwing the "commands out of sync" exception, but now that we close both the connection and the cursor, I don't see the errors anymore.
Exceptions in object destructors (__del__) are ignored, which is what this message indicates. If you execute some MySQL command without fetching results from the cursor (e.g. 'create procedure' or 'insert'), the exception goes unnoticed until the cursor is destroyed.
If you want to raise and catch the exception, call cursor.close() explicitly somewhere before it goes out of scope.
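For example (the connection parameters and table are placeholders):

import MySQLdb

conn = MySQLdb.connect(host='localhost', user='user', passwd='secret', db='mydb')
cursor = conn.cursor()
try:
    cursor.execute("INSERT INTO mytable (col) VALUES (%s)", ('value',))
    conn.commit()
finally:
    # Closing the cursor here surfaces any pending "Commands out of
    # sync" error where it can actually be caught, instead of letting
    # it escape (and be ignored) later in __del__.
    cursor.close()
    conn.close()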