Catching uncaught exceptions through django development server - python

I am looking for some way to make Django's development server stop automatically at any uncaught exception, as it is done with pdb mode in the IPython console.
I know I can put import pdb; pdb.set_trace() lines into the code to make the application stop. But this doesn't help me, because the line where the exception is thrown is called too many times, so I can't work out the exact conditions for a conditional breakpoint.
Is this possible?
Thank you...

You can set sys.excepthook to a function that does import pdb; pdb.pm(), as per this recipe.
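A minimal sketch of that recipe (the hook name is arbitrary; it uses pdb.post_mortem() on the traceback handed to the hook, which is what pdb.pm() does with sys.last_traceback):
import pdb
import sys
import traceback

def debug_hook(exc_type, exc_value, exc_tb):
    # print the traceback, then drop into the post-mortem debugger
    traceback.print_exception(exc_type, exc_value, exc_tb)
    pdb.post_mortem(exc_tb)

sys.excepthook = debug_hook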

Related

Python pdb breakpoint after 10 seconds

On some occasions, my Python program won't respond because there seems to be a deadlock. Since I have no idea where this deadlock happens, I'd like to set a breakpoint or dump the stacks of all threads after 10 seconds in order to learn what my program is waiting for.
Use the logging module and put e.g. Logger.debug() calls in strategic places throughout your program. You can disable these messages with a single setting (Logger.setLevel) if you want to, and you can choose whether to write them to e.g. stderr or to a file.
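A minimal sketch of that setup (the logger name and messages are just placeholders):
import logging
import sys

# breadcrumbs go to stderr; raising the level silences them again
logging.basicConfig(stream=sys.stderr, level=logging.DEBUG)
log = logging.getLogger(__name__)

log.debug("about to acquire the lock")   # strategic breadcrumb
log.setLevel(logging.WARNING)            # one call disables all debug output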
import pdb
from your_test_module import TestCase

testcase = TestCase()
testcase.setUp()
# run the suspect test under the debugger, then Ctrl-C when it hangs
pdb.runcall(testcase.specific_test)
And then press Ctrl-C at your leisure. The KeyboardInterrupt will cause pdb to drop into the debugger prompt.
Well, as it turns out, it was because my database was locked (a connection wasn't closed) and when the tests were tearing down (and the database schema was being erased so that the database is clean for the next tests), psycopg2 just ignored the KeyboardInterrupt exception.
I solved my problem using the faulthandler module (for earlier Python versions there is a package on PyPI). faulthandler allows me to dump the stack traces to any file (including sys.stderr) after a period of time, repeatedly, using faulthandler.dump_traceback_later(3, repeat=True). That allowed me to see where my program stopped responding and track the issue down effectively.
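A minimal sketch of that approach (3 seconds is just an example interval):
import faulthandler
import sys

# dump the stack traces of all threads to stderr every 3 seconds
faulthandler.dump_traceback_later(3, repeat=True, file=sys.stderr)

# ... run the code that stops responding ...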

Suspend on exceptions caused inside Django app in PyCharm

I am debugging a Django application and want to suspend code execution at the point where an exception occurs, with the cursor pointing to the problematic place in the code. The pretty HTML display by Django would be helpful too, but it is not mandatory. My IDE is PyCharm.
If I set PyCharm to suspend on exception termination, then I never catch it, because Django handles the exception with the HTML debug info, so exceptions never terminate the process. Setting DEBUG_PROPAGATE_EXCEPTIONS = True inside settings.py makes the HTML debug info disappear, but execution still does not terminate.
If I set PyCharm to suspend on raise of exception, then I have to step past all the exceptions raised inside Python internals such as copy.py, decimal.py, gettext.py, etc., which is inconvenient (there are so many of them that I can never reach the exceptions caused by my own code).
If I set "temporary" setup to suspend on raise of exception which occurs after given breakpoint (which I place at the last line of settings.py) then django server does not start.
Thanks in advance for your help.
This should happen automatically in PyCharm. What you need to do is set no breakpoints, but run as debug (click on the green bug icon). When the exception occurs, execution should automatically halt.

Pyramid: restart the app in an exception view

The command that I've been using is:
pserve development.ini --reload
and every time I hit an error like SQLAlchemy's "IntegrityError" or something else,
I have to kill pserve and type the command again to restart the app.
Is there a way I can restart the app in an exception view, like this?
from pyramid.response import Response
from pyramid.view import view_config

@view_config(context=Exception)
def error_view(exc, request):
    # restart waitress or apache...
    return Response("Sorry there was an error, wait seconds, we will fix it soon.")
Restarting your server is not a sensible response to an IntegrityError. This is something that is expected to happen, and you need to handle it. Restarting the server makes no sense in any context other than development.
If you run into exceptions in development, fix the code and save the file, and --reload will automatically restart your server for you.
If you have to restart the application after an exception (presumably because nothing works after an exception otherwise), it suggests your requests try to re-use the same transaction - in other words, your application is not configured properly.
You should be using a session configured with ZopeTransactionExtension, as Pyramid's scaffolds generate.
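For reference, the session setup those scaffolds generate looks roughly like this (a sketch; DBSession is the scaffold's conventional name):
from sqlalchemy.orm import scoped_session, sessionmaker
from zope.sqlalchemy import ZopeTransactionExtension

# a thread-local session whose transaction is committed or aborted
# by the transaction machinery at the end of each request
DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))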
If you show us some code we may be able to pinpoint the exact cause of the problem.

Why is Twisted Manhole's ConnectionDone an error?

I'm using twisted manhole (https://github.com/HoverHell/pyaux/blob/master/pyaux/runlib.py#L126), and I also send errors caught by Twisted into python logging (https://github.com/HoverHell/pyaux/blob/master/pyaux/twisted_aux.py#L9).
However, as a result, the log gets ConnectionDone() errors, which isn't a very interesting thing as an error.
What would be appropriate to change to avoid getting this (and possibly some other) not-exactly-errors? Filtering for twisted.python.failure.Failure cases, perhaps? And where is the ConnectionDone() even raised from, and why?
A ConnectionDone() instance is given to the connectionLost() callback after the connection has been closed. You should be seeing this when the client side decides to close the connection.
You definitely don't want to filter the Failure out. You can think of the failure as an "asynchronous analogue" of an Exception. The usual way to ignore certain kinds of exceptions is something like:
from twisted.internet import error
...
def connectionLost(self, reason):
    if reason.check(error.ConnectionDone):
        # this is normal, ignore it
        pass
    else:
        # do whatever you have been doing for logging
        ...

500 Error without anything in the apache logs

I am currently developing an application based on Flask. It runs fine when spawning the server manually using app.run(). I've now tried to run it through mod_wsgi. Strangely, I get a 500 error and nothing in the logs. I've investigated a bit and here are my findings.
Inserting a line like print >>sys.stderr, "hello" works as expected. The message shows up in the error log.
When calling a method without using a template it works just fine. No 500 Error.
Using a simple template works fine too.
BUT as soon as I trigger a database access inside the template (for example looping over a query) I get the error.
My gut tells me that it's SQLAlchemy which emits an error, and maybe some logging config causes the log to be discarded at some point in the application.
Additionally, for testing, I am using SQLite. This, as far as I can recall, can only be accessed from one thread. So if mod_wsgi spawns more threads, it may break the app.
I am a bit at a loss, because it only breaks running behind mod_wsgi, which also seems to swallow my errors. What can I do to make the errors bubble up into the apache error_log?
For reference, the code can be seen on this github permalink.
Turns out I was not completely wrong. The exception was indeed thrown by sqlalchemy. And as it's streamed to stdout by default, mod_wsgi silently ignored it (as far as I can tell).
To answer my main question: How to see the errors produced by the WSGI app?
It's actually very simple. Redirect your logs to stderr. The only thing you need to do is add the following to your WSGI script:
import logging, sys

# send log output to stderr so it ends up in the Apache error log
logging.basicConfig(stream=sys.stderr)
Now, this is the most mundane logging config. As I haven't put anything else in place for my application yet, this will do. But, I guess, once the application matures you will have a more sophisticated logging config anyway, so this won't bite you.
But for quick and dirty debugging, this will do just fine.
I had a similar problem: occasional "Internal Server Error" without anything in the logs. When you use mod_wsgi, you should remove app.run(), because it always starts a local WSGI server, which we do not want when deploying the application to mod_wsgi. See the docs. I do not know if this is your case, but I hope this can help.
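A minimal sketch of the usual pattern (module and app names are placeholders): keep the dev server behind a __main__ guard so mod_wsgi never runs it, and have the WSGI script only import the app.
# yourapplication.py (placeholder module name)
from flask import Flask

app = Flask(__name__)

if __name__ == "__main__":
    # the built-in dev server only starts when this file is run directly,
    # never when mod_wsgi imports the module
    app.run()
The WSGI script then just imports the app, e.g. from yourapplication import app as application.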
If you put this into your config.py it will help dramatically in propagating errors up to the apache error log:
PROPAGATE_EXCEPTIONS = True
