Suspend on exceptions raised inside a Django app in PyCharm

I am debugging a Django application and want to suspend execution at the point where an exception occurs, with the cursor pointing at the problematic place in the code. Django's pretty HTML error page would be helpful too, but it is not mandatory. My IDE is PyCharm.
If I set PyCharm to suspend on termination of an exception, I never catch it, because Django handles the exception itself and renders the HTML debug page, so the exception never terminates the process. Setting DEBUG_PROPAGATE_EXCEPTIONS = True in settings.py makes the HTML debug page disappear, but execution still does not terminate.
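For reference, a minimal sketch of that settings.py tweak (both flags are standard Django settings; the combination is the one the attempt above describes):

# settings.py
DEBUG = True
DEBUG_PROPAGATE_EXCEPTIONS = True  # skip the HTML debug page, let exceptions propagate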
If I set PyCharm to suspend on raise of an exception, I have to step past all the exceptions raised inside Python internals such as copy.py, decimal.py, gettext.py, etc., which is inconvenient (there are so many of them that I can never reach the exceptions raised by my own code).
If I use the "temporary" setup to suspend on an exception raised after a given breakpoint (which I place on the last line of settings.py), the Django server does not start.
Thanks in advance for your help.

This should happen automatically in PyCharm. What you need to do is set no breakpoints and run the server in debug mode (click the green bug icon). When an unhandled exception occurs, execution should halt automatically.
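One way to check this behavior is a tiny view that raises on purpose (a hypothetical view, purely for testing the debugger):

# views.py
def boom(request):
    # with the debugger attached (green bug icon), PyCharm should suspend here
    raise RuntimeError("intentional test exception")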

Related

Python pdb breakpoint after 10 seconds

On some occasions, my Python program won't respond because there seems to be a deadlock. Since I have no idea where this deadlock happens, I'd like to set a breakpoint or dump the stack of all threads after 10 seconds, in order to learn what my program is waiting for.
Use the logging module and put logger.debug() calls at strategic places throughout your program. You can disable these messages with one single setting (logger.setLevel) if you want to, and you can choose whether to write them to stderr or to a file.
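A minimal sketch of that suggestion (the logger name and trace points are illustrative):

import logging
import sys

logging.basicConfig(stream=sys.stderr, level=logging.DEBUG)
logger = logging.getLogger("myapp")  # hypothetical logger name

def work():
    logger.debug("about to acquire lock")  # strategic trace point
    ...

# one call silences all the debug chatter again:
logger.setLevel(logging.WARNING)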
import pdb
from your_test_module import TestCase

testcase = TestCase()
testcase.setUp()
# run the suspected test under pdb's control
pdb.runcall(testcase.specific_test)
Then press Ctrl-C at your leisure. The KeyboardInterrupt will cause pdb to drop into its debugger prompt.
Well, as it turns out, it was because my database was locked (a connection wasn't closed), and when the tests were tearing down (the database schema was being erased so that the database would be clean for the next tests), psycopg2 just ignored the KeyboardInterrupt exception.
I solved my problem using the faulthandler module (for earlier Python versions, there is a PyPI backport). faulthandler lets me dump the stack trace of all threads to any file (including sys.stderr) after a period of time, repeatedly, using faulthandler.dump_traceback_later(3, repeat=True). That allowed me to see where my program stopped responding and track the issue down effectively.
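A minimal sketch of that approach (the 3-second timeout matches the call quoted above; the entry point is hypothetical):

import faulthandler
import sys

# every 3 seconds, dump the traceback of all threads to stderr
faulthandler.dump_traceback_later(3, repeat=True, file=sys.stderr)

run_possibly_deadlocking_code()  # hypothetical entry point

faulthandler.cancel_dump_traceback_later()  # stop dumping once finished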

How to prevent duplicate exception logging from a celery task

Second Edit: After a bit of digging, the question changed from how to log an exception with local variables to how to prevent Celery from sending the second log message, which does not have the local vars. After the attempt below, I noticed that I was always receiving two emails: one with local vars per frame, and the other without.
First Edit: I've managed to sort of get the local variables by adding a custom on_failure override, using annotations for all tasks like so:
import logging

def include_f_locals(self, exc, task_id, args, kwargs, einfo):
    # log the failure through the 'celery' logger, attaching the traceback
    logger = logging.getLogger('celery')
    logger.error(exc, exc_info=einfo)

CELERY_ANNOTATIONS = {'*': {'on_failure': include_f_locals}}
But the problem now is that the error arrives three times: once through the celery logger and twice through root (although I'm not propagating the 'celery' logger in my logging settings).
Original question:
I have a Django/Celery project to which I recently added a Sentry handler as the root logger with level ERROR. This works fine for most errors and exceptions occurring in Django, except the ones coming from Celery workers.
What happens is that Sentry receives an exception with the traceback and the locals of the daemon, but it does not include the f_locals (local vars) of each frame in the stack. These do appear in normal Python/Django exceptions.
I guess I could try to catch all exceptions and log with exc_info manually. But this is less than ideal.
Funnily enough, all my troubles went away when I simply upgraded raven to a version after 5.0 (specifically, after 5.1).
I'm not sure which change caused exceptions to be logged properly (with f_locals correctly appearing in Sentry), but the fact remains that raven < 5.0 did not work for me.
Also, there is no need for any fancy CELERY_ANNOTATIONS legwork (as above); simply defining a Sentry handler for the correct logger seems to be enough to catch exceptions, as well as messages at other logging levels.
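A minimal sketch of such a logging config, assuming raven's Django integration (the handler class is raven's raven_compat SentryHandler; the logger names and levels are illustrative):

# settings.py
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'sentry': {
            'level': 'ERROR',
            'class': 'raven.contrib.django.raven_compat.handlers.SentryHandler',
        },
    },
    'loggers': {
        # route celery's errors to Sentry, without duplicating them via root
        'celery': {
            'handlers': ['sentry'],
            'level': 'ERROR',
            'propagate': False,
        },
    },
    'root': {
        'handlers': ['sentry'],
        'level': 'ERROR',
    },
}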

Pyramid: restart the app in an exception view

The command that I've been using is:
pserve development.ini --reload
and every time I hit an error like SQLAlchemy's IntegrityError or something else,
I have to kill pserve and type the command again to restart the app.
Is there a way to restart the app from an exception view, like this?
from pyramid.response import Response
from pyramid.view import view_config

@view_config(context=Exception)
def error_view(exc, request):
    # restart waitress or apache here...
    return Response("Sorry there was an error, wait seconds, we will fix it soon.")
Restarting your server is not a sensible response to an IntegrityError. An IntegrityError is expected to happen, and you need to handle it; restarting the server makes no sense in any context other than development.
If you run into exceptions in development, fix the code and save the file; --reload will restart your server for you automatically.
If you have to restart the application after an exception (presumably because nothing works afterwards otherwise), it suggests your requests are trying to re-use the same transaction; in other words, your application is not configured properly.
You should be using a session configured with ZopeTransactionExtension, as Pyramid's scaffolds generate.
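A minimal sketch of that session setup, as the classic alchemy scaffold generated it (the module layout is illustrative):

# models.py
from sqlalchemy.orm import scoped_session, sessionmaker
from zope.sqlalchemy import ZopeTransactionExtension

# the transaction manager aborts the session when a request raises,
# so the next request starts with a fresh transaction
DBSession = scoped_session(
    sessionmaker(extension=ZopeTransactionExtension())
)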
If you show us some code we may be able to pinpoint the exact cause of the problem.

500 Error without anything in the Apache logs

I am currently developing an application based on Flask. It runs fine when spawning the server manually using app.run(). I've now tried to run it through mod_wsgi. Strangely, I get a 500 error and nothing in the logs. I've investigated a bit, and here are my findings.
Inserting a line like print >>sys.stderr, "hello" works as expected: the message shows up in the error log.
When calling a method without using a template, it works just fine; no 500 error.
Using a simple template works fine too.
BUT as soon as I trigger database access inside the template (for example, looping over a query), I get the error.
My gut tells me that it's SQLAlchemy that emits the error, and that maybe some logging config causes the log record to be discarded at some point in the application.
Additionally, for testing I am using SQLite, which, as far as I recall, can only be accessed from the thread that created it. So if mod_wsgi spawns more threads, it may break the app.
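If that threading suspicion is the cause, one possible workaround, assuming SQLAlchemy, is to relax SQLite's same-thread check (the database path is hypothetical):

from sqlalchemy import create_engine

engine = create_engine(
    "sqlite:////tmp/app.db",  # hypothetical path
    connect_args={"check_same_thread": False},  # allow cross-thread access
)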
I am a bit at a loss, because it only breaks when running behind mod_wsgi, which also seems to swallow my errors. What can I do to make the errors bubble up into the Apache error_log?
For reference, the code can be seen on this github permalink.
Turns out I was not completely wrong. The exception was indeed thrown by SQLAlchemy, and as it was streamed to stdout by default, mod_wsgi silently ignored it (as far as I can tell).
To answer my main question: How to see the errors produced by the WSGI app?
It's actually very simple. Redirect your logs to stderr. The only thing you need to do, is add the following to your WSGI script:
import logging, sys
# send log output to stderr, which mod_wsgi forwards to the Apache error log
logging.basicConfig(stream=sys.stderr)
Now, this is the most mundane logging config. As I haven't put anything else in place for my application yet, it will do. Once the application matures, you will have a more sophisticated logging config anyway, so this won't bite you.
But for quick-and-dirty debugging, it will do just fine.
I had a similar problem: occasional "Internal Server Error" without logs. When you use mod_wsgi, you should remove the app.run() call, because it always starts a local WSGI server, which we do not want when the application is deployed under mod_wsgi. See the docs. I do not know if this is your case, but I hope it helps.
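A minimal sketch of that fix (the module name is illustrative):

# myapp.py
from flask import Flask

app = Flask(__name__)

if __name__ == '__main__':
    # only start the built-in server when run directly,
    # never when the module is imported by mod_wsgi
    app.run()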
If you put this into your config.py, it will help dramatically in propagating errors up to the Apache error log:
PROPAGATE_EXCEPTIONS = True

Catching uncaught exceptions through django development server

I am looking for a way to make Django's development server stop automatically at any uncaught exception, the way pdb mode does in the IPython console.
I know I can put import pdb; pdb.set_trace() lines into the code to make the application stop. But that doesn't help me here, because the line where the exception is thrown is called too many times, and I can't work out the exact conditions for a conditional breakpoint.
Is this possible?
Thank you...
You can set sys.excepthook to a function that does import pdb; pdb.pm(), as per this recipe.
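A minimal sketch of that recipe (the hook name is illustrative; this variant calls pdb.post_mortem on the traceback handed to the hook, which amounts to the same thing as pdb.pm() here):

import pdb
import sys
import traceback

def debug_hook(exc_type, exc_value, tb):
    # print the traceback, then drop into the post-mortem debugger
    traceback.print_exception(exc_type, exc_value, tb)
    pdb.post_mortem(tb)

sys.excepthook = debug_hook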
