I am currently developing an application based on Flask. It runs fine when spawning the server manually using app.run(). Now I've tried to run it through mod_wsgi. Strangely, I get a 500 error, and nothing in the logs. I've investigated a bit and here are my findings.
Inserting a line like print >>sys.stderr, "hello" works as expected. The message shows up in the error log.
When calling a method without using a template it works just fine. No 500 Error.
Using a simple template works fine too.
BUT as soon as I trigger a database access inside the template (for example looping over a query) I get the error.
My gut tells me that it's SQLAlchemy that emits the error, and maybe some logging configuration causes the log entry to be discarded at some point in the application.
Additionally, for testing, I am using SQLite. As far as I can recall, it can only be accessed from one thread, so if mod_wsgi spawns more threads, it may break the app.
I am a bit at a loss, because it only breaks running behind mod_wsgi, which also seems to swallow my errors. What can I do to make the errors bubble up into the apache error_log?
For reference, the code can be seen on this github permalink.
Turns out I was not completely wrong. The exception was indeed thrown by SQLAlchemy. And as it's streamed to stdout by default, mod_wsgi silently ignored it (as far as I can tell).
To answer my main question: How to see the errors produced by the WSGI app?
It's actually very simple. Redirect your logs to stderr. The only thing you need to do is add the following to your WSGI script:
import logging, sys
logging.basicConfig(stream=sys.stderr)
Now, this is the most mundane logging config. As I haven't put anything else in place yet for my application, this will do. But, I guess, once the application matures you will have a more sophisticated logging config anyway, so this won't bite you.
But for quick and dirty debugging, this will do just fine.
I had a similar problem: occasional "Internal Server Error" without logs. When you use mod_wsgi you should remove app.run(), because it always starts a local WSGI server, which we do not want when deploying the application under mod_wsgi. See the docs. I do not know if this is your case, but I hope it helps.
If you put this into your config.py it will help dramatically in propagating errors up to the apache error log:
PROPAGATE_EXCEPTIONS = True
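For context, this is the sort of config.py fragment meant here. PROPAGATE_EXCEPTIONS is a real Flask config key; the surrounding lines are just illustrative:

```python
# config.py -- loaded e.g. via app.config.from_object('config')
DEBUG = False

# Re-raise exceptions instead of swallowing them, so they reach the
# WSGI layer and ultimately show up in the apache error_log.
PROPAGATE_EXCEPTIONS = True
```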
Related
We have a Python MVC web application built using Werkzeug, Jinja2 and MongoEngine.
In production we have 4 nginx servers set up behind an nginx load balancer. All 4 servers share a common Mongo server, a Redis server and a Sphinx server. We are using uWSGI between nginx and the application.
Now to the curious case.
Whenever we deploy new code, we do a touch xyz.wsgi. For a few hours everything looks fine,
but after that we randomly get the error:
'module' object is not callable
I have seen this error before, in other python development scenarios. But what confuses me this time is the total random behavior.
For example: example.com/multimedia?keywords=sdf&s=title&c=21830.
If we refresh, the error is gone. Try another value for any parameter, like keywords=xzy, and there it is again. Refresh, and it's gone.
That 'multimedia' module is something we built just recently, so we can assume it's the root cause. But why does the error occur randomly?
My assumption is that it might have something to do with nginx caching, or with the existence of .pyc/.pyo files. Could a rogue global variable be the cause?
Could your expert hands help me out?
The error probably occurs randomly because it's a runtime error in your code. That is, it doesn't get fired until a user visits your site with the right conditions to follow the code path that results in this error.
It's unlikely to be an nginx caching issue. If nginx were caching, it would probably return the same result over and over rather than change on reload.
However, you can test this by removing nginx and running requests directly against Werkzeug. See if you observe the same behavior. There's no use debugging nginx unless you can prove that the underlying systems work the way you expect.
It's also probably worth the 30 seconds to search for module() in your code, since that's the most direct interpretation of that error message.
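For illustration, this is the classic shape of that error: a module gets called as if it were a class or function. The module here is a stand-in built with types.ModuleType so the snippet is self-contained; a real import behaves the same way:

```python
import types

# Stand-in for "import multimedia"; any imported module behaves the same.
multimedia = types.ModuleType('multimedia')

try:
    multimedia()  # calling the module itself instead of something inside it
except TypeError as exc:
    print(exc)    # 'module' object is not callable

# The usual fix is to call the attribute you actually meant, e.g.
# multimedia.Multimedia(...), or to import it directly:
# from multimedia import Multimedia
```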
Ok, here is my confusion/problem:
I develop on localhost, where I can raise exceptions and easily see the logs on the command line.
Then I deploy the code to the test, stage and production servers, and that is where the problem begins: it is not easy to see logs or debug errors and exceptions there. For normal errors I guess django-debug-toolbar could be enabled, but I also get silent exceptions which don't crash anything, yet steer the process toward failure. For example, I have a payment integration, and a few days ago payments were failing on return (callback) to our site. Nothing was crashing; only a "payment process failed" message appeared, while the payment gateway vendor was working fine. I had to look for failure instances that could lead to this problem, and figured out that one db save operation was not saving because a variable was missing.
Now my question, is Sentry (https://github.com/getsentry/sentry) an answer for that? Or is there any other option for this?
Please do ask if any further clarification is needed for my requirement.
Sentry is an option, but honestly it's too limited (I tried it a month or so ago); it's intended to track exceptions, but in the real world we should track important information and events too.
If you haven't set up application logging yet, I suggest you do so, following this example.
In my app I defined several loggers for different purposes. The Python logging configuration via dictionary (the one used by Django) is very powerful, and you have full control over how things get logged: for example, you can write logs to a file or a database, send an email, call a third-party API, or whatever else. If your app runs in a load-balanced environment (so you have several machines running it), you can use services like Loggly to aggregate the logs coming from your instances in a single place (and since it uses RSYSLOG, it aggregates not only your Django app logs but also all the logs of the underlying OS).
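A minimal sketch of such a dictionary config; Django picks this up from the LOGGING setting, and the handler and logger names here are made up for the example:

```python
import logging
import logging.config

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        # Writes to stderr; swap in logging.FileHandler, SMTPHandler,
        # or a syslog handler for Loggly-style aggregation.
        'console': {'class': 'logging.StreamHandler'},
    },
    'loggers': {
        # A dedicated logger for a critical flow, e.g. payments.
        'myapp.payments': {'handlers': ['console'], 'level': 'INFO'},
    },
}

logging.config.dictConfig(LOGGING)
logging.getLogger('myapp.payments').info('payment callback received')
```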
I also suggest using New Relic, which keeps track of a lot of things automatically: queries executed and their timings, template loading time, errors, and a lot of other useful statistics.
I have been working on an app for a while, but lately my Python SDK has decided to log things up to a certain point and then display nothing anymore.
The only way to change its mind is to stop and start the app again, and then it blocks at the same point (which makes debugging hard).
The last thing that I see is
GET /_ah/api/static/proxy.html?jsh=SOMERANDOMID with an answer 200
The app works fine; I can make calls to Cloud Endpoints, get my results, etc. Just the logging display is gone.
For some reason, when I upload my app, everything works fine and the logs work fine.
Is there anything I can do, like clearing some cache somewhere? I don't know.
Thanks
So I figured it out. It was due to some WSGI middleware I had set up to manage sessions.
The command that I've been using is:
pserve development.ini --reload
and every time I hit an error like SQLAlchemy's IntegrityError or something else,
I have to kill pserve and type the command again to restart the app.
Is there a way to restart the app from an exception view, like this?
from pyramid.view import view_config
from pyramid.response import Response

@view_config(context=Exception)
def error_view(exc, request):
    # restart waitress or apache...
    return Response("Sorry, there was an error. Wait a few seconds; we will fix it soon.")
Restarting your server is not a sensible response to an IntegrityError. This is something that is expected to happen, and you need to handle it. Restarting the server makes no sense in any context other than development.
If you run into exceptions in development, fix the code and save the file and the --reload will automatically restart your server for you.
If you have to restart the application after an exception (presumably because nothing works afterwards otherwise), it suggests your requests try to re-use the same transaction; in other words, your application is not configured properly.
You should be using a session configured with ZopeTransactionExtension, as Pyramid's scaffolds generate.
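That configuration looks roughly like what the Pyramid alchemy scaffold generated at the time; zope.sqlalchemy joins the session to the request's transaction so each request starts with a clean session instead of inheriting a broken one:

```python
from sqlalchemy.orm import scoped_session, sessionmaker
from zope.sqlalchemy import ZopeTransactionExtension

# One session per thread, joined to the transaction manager so it is
# committed or rolled back (and reset) at the end of every request.
DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))
```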
If you show us some code we may be able to pinpoint the exact cause of the problem.
I am trying to do some logging in Django (mod_wsgi) for a view. However, I want to do it without holding up the client, similar to the PerlCleanupHandler phase available in mod_perl. Notice the line "It is used to execute some code immediately after the request has been served (the client went away)". This is exactly what I want.
I want the client to be serviced first, and then I want to do the logging. Is there a good insertion point for this code in mod_wsgi or Django? I looked into the suggestions here and here. However, in both cases, when I put in a simple time.sleep(10) and do a curl/wget on the URL, curl doesn't return for 10 seconds.
I even tried putting the time.sleep in the __del__ method of the HttpResponse object, as suggested in one of the comments, but still no dice.
I am aware that I could put the logging data onto a queue and do some background processing to store the logs, but I would like to avoid that approach if there is a simpler/easier one.
Any suggestions ?
See documentation at:
http://code.google.com/p/modwsgi/wiki/RegisteringCleanupCode
for a WSGI specific (not mod_wsgi specific) way.
Django, as pointed out by others, may have its own ways of doing this as well, although I don't know whether it fires after all the response content has been written back to the client.
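The pattern on that wiki page boils down to wrapping the WSGI result iterable so a callback fires from close(), which the server calls only after the response has been delivered to the client. A sketch of that pattern, lightly adapted:

```python
class Generator:
    """Wraps a WSGI result iterable and fires a callback from close()."""

    def __init__(self, iterable, callback):
        self.iterable = iterable
        self.callback = callback

    def __iter__(self):
        for item in self.iterable:
            yield item

    def close(self):
        # close() is invoked by the WSGI server after the response has
        # been written to the client -- the right place for slow logging.
        try:
            if hasattr(self.iterable, 'close'):
                self.iterable.close()
        finally:
            self.callback()


class ExecuteOnCompletion:
    """Middleware that runs a callback once each request has completed."""

    def __init__(self, application, callback):
        self.application = application
        self.callback = callback

    def __call__(self, environ, start_response):
        result = self.application(environ, start_response)
        return Generator(result, self.callback)
```

You would then wrap your handler in the WSGI script, e.g. application = ExecuteOnCompletion(application, log_request), where log_request is your logging function.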