How to do post-mortem debugging within Django's runserver?

I am currently debugging a Django project which results in an exception. I would like to enter the ipdb post-mortem debugger. I've tried invoking ipdb as a script (cf. https://docs.python.org/3/library/pdb.html), but this just drops me at the first line of code:
> python -m ipdb manage.py runserver
> /Users/kurtpeek/myproject/manage.py(2)<module>()
1 #!/usr/bin/env python
----> 2 import os
3 import sys
ipdb>
If I press c to continue, I just run into the error, with no possibility of dropping into the debugger post-mortem. Presumably I could press n (next) repeatedly until I hit the error, but that would be quite cumbersome.
Is there a way to run python manage.py runserver with post-mortem debugging?

If you know of a line that causes the exception, but don't know how "deep" inside it the exception is caused, you can get a post-mortem debugger for it by catching the exception and calling ipdb.post_mortem() in the exception handler.
For example, change your code from this:
def index(request):
    output = function_that_causes_some_exception()
    return HttpResponse(output)
To this:
def index(request):
    try:
        output = function_that_causes_some_exception()
    except:
        import ipdb
        ipdb.post_mortem()
        # Let the framework handle the exception as usual:
        raise
    return HttpResponse(output)
By the way, for server frameworks that may be spewing output into the console from other threads, I highly recommend wdb, so that you can debug your Django app from the comfort of a browser:
def index(request):
    try:
        output = function_that_causes_some_exception()
    except:
        import wdb
        wdb.post_mortem()
        # Let the framework handle the exception as usual:
        raise
    return HttpResponse(output)
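If you'd rather not wrap each view, the same idea can be installed project-wide. Below is a minimal sketch (my own addition, not part of the original answer) of a Django middleware whose process_exception hook drops into ipdb's post-mortem for any unhandled view exception. The module path you register in MIDDLEWARE is hypothetical, and you should run the dev server with --nothreading --noreload so the debugger owns the console:

import sys

class PostMortemMiddleware:
    # Hypothetical name; register as e.g. "myapp.middleware.PostMortemMiddleware"
    # in settings.MIDDLEWARE (development settings only!).
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        return self.get_response(request)

    def process_exception(self, request, exception):
        import ipdb
        # Django invokes this hook while the exception is being handled,
        # so sys.exc_info() still carries the live traceback.
        ipdb.post_mortem(sys.exc_info()[2])
        return None  # returning None lets Django's usual 500 handling proceed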


What's the best way to display Exception in Flask?

I'm a newbie with Flask, and I am trying to display Python's built-in exceptions, but I can't seem to get them to display on my end.
NOTE:
set FLASK_DEBUG = 0
CODE:
def do_something():
    try:
        doing_something()
    except Exception as err:
        return f"{err}"
Expectation:
It will display one of the built-in exceptions:
KeyError
IndexError
NameError
Etc.
Reality:
It returns the line of code that didn't work, which is more ambiguous to the end user.
Also:
I have no problem seeing the errors when debug mode is on, but that's not something I want to expose in public.
Flask supplies you with a function that enables you to register an error handler throughout your entire app; you can do something as shown below:
def handle_exceptions(e):
    # Log the exception in your logs; get the traceback and sys
    # exception info and log as required, e.g.:
    #   app.logger.error(getattr(e, 'description', str(e)))
    # Then return your response using getattr(e, 'code', 500) etc.
    ...

# Exception is used to catch all exceptions
app.register_error_handler(Exception, handle_exceptions)
In my honest opinion, this is the way to go; following the structure found in werkzeug.exceptions.HTTPException as an example is a solid foundation. Having a unified exception handler that standardises your exception handling, visualisation, and logging will make your life a tad better. :)
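To make that concrete, here is a minimal runnable sketch (the route and messages are my own, not from the answer) that returns the exception class name to the client while keeping debug mode off:

from flask import Flask

app = Flask(__name__)

def handle_exceptions(e):
    # Log the full traceback server-side; expose only the class name.
    app.logger.exception(e)
    return f"{type(e).__name__}: {e}", getattr(e, "code", 500)

app.register_error_handler(Exception, handle_exceptions)

@app.route("/boom")
def boom():
    raise KeyError("missing key")  # client sees "KeyError: 'missing key'"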
Try with this:
def do_something():
    try:
        doing_something()
    except Exception as err:
        return f"{err.__class__.__name__}: {err}"

Raise exception if script fails

I have a python script, tutorial.py. I want to run this script from a file test_tutorial.py, which is within my python test suite. If tutorial.py executes without any exceptions, I want the test to pass; if any exceptions are raised during execution of tutorial.py, I want the test to fail.
Here is how I am writing test_tutorial.py, which does not produce the desired behavior:
from os import system

test_passes = False
try:
    system("python tutorial.py")
    test_passes = True
except:
    pass
assert test_passes
I find that the above control flow is incorrect: if tutorial.py raises an exception, then the assert line never executes.
What is the correct way to test if an external script raises an exception?
If there is no error, the exit status s will be 0:
from os import system
s=system("python tutorial.py")
assert s == 0
Or use subprocess:
from subprocess import PIPE, Popen

s = Popen(["python", "tutorial.py"], stderr=PIPE)
_, err = s.communicate()  # err will be empty if the program runs OK
assert not err
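On Python 3.5+, subprocess.run offers a more direct route (my addition, not from the answers above): check=True raises CalledProcessError on any non-zero exit status, which a test runner such as pytest reports as a failure with no assert needed:

import subprocess

def test_tutorial_runs_cleanly():
    # Raises subprocess.CalledProcessError if tutorial.py exits non-zero.
    subprocess.run(["python", "tutorial.py"], check=True)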
Your try/except catches nothing from the tutorial file; you can move everything out of it and it will behave the same:
from os import system
test_passes = False
s = system("python tutorial.py")
test_passes = True
assert test_passes
from os import system

test_passes = False
try:
    system("python tutorial.py")
    test_passes = True
except:
    pass
finally:
    assert test_passes
This is going to solve your problem. The finally block runs whether or not an exception is raised, so the assert always executes. It's commonly used for resource cleanup, for example to make sure a file is safely closed when it wasn't opened with a with open() block.

Sentry only shows <unknown>:None error

I want to detect errors in a standalone Python script with Sentry+Raven.
I tried to configure it, and raven test ... is working.
Then I place this on top of the script:
from raven import Client
client = Client('http://...#.../1')
client.captureException()
The exception is generated later, on this:
import django
django.setup()
from django.conf import settings
And I want to see the actual stack for this error:
ImportError: Could not import settings 'settings' (Is it on sys.path? Is there an import error in the settings file?): No module named 'settings'
But all I see in Sentry is an <unknown>:None entry (screenshot omitted), which is completely useless.
How can I change this to have a normal traceback?
You misunderstand how client.captureException() works; it's not a configuration parameter. You use it when you are catching an exception, and it will capture the exception type and message:
try:
    f = open('oogah-boogah.txt')
except IOError:
    client.captureException()
    # do something here
To capture any exceptions that could be generated in a block of code, you can use capture_exceptions:
@client.capture_exceptions
def load_django():
    import django
    django.setup()
    from django.conf import settings
Yes you're right, but is there a way to catch an exception without wrapping a block of code in a try-except? I can see the error in a terminal; can I see it in Sentry?
There is a default exception handler - and when an exception is not caught, this default handler catches it and then displays the exception. This is what you see in the terminal.
The function that generates this output is sys.excepthook and it will output to stderr by default.
So, in order to catch all exceptions globally, you'll have to create a global exception handler or assign your own function to sys.excepthook. I would strongly recommend against this, though, as you don't know what other side effects it may have.
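For completeness, here is a minimal sketch of the sys.excepthook approach described above, with the caveat already given; client is the raven Client from the question:

import sys

def sentry_excepthook(exc_type, exc_value, exc_traceback):
    # Report to Sentry, then defer to the default hook so the
    # traceback still prints to the terminal.
    client.captureException(exc_info=(exc_type, exc_value, exc_traceback))
    sys.__excepthook__(exc_type, exc_value, exc_traceback)

sys.excepthook = sentry_excepthook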

Catch mako runtime errors using Bottle

I'm looking for a way to catch mako runtime errors using Bottle.
Runtime errors in Python are caught using the following code:
# main.py
from lib import errors
import bottle

app = bottle.app()
app.error_handler = errors.handler
...
# lib/errors.py
from bottle import mako_template as template

def custom500(error):
    return template('error/500')

handler = {
    500: custom500
}
This works flawlessly, as exceptions are turned into 500 Internal Server Error.
I'd like to catch the mako runtime errors in a similar fashion, does anyone have a clue of how to achieve this?
You want to catch mako.exceptions.SyntaxException.
This code works for me:
import mako.exceptions
import bottle

@bottle.route('/hello')
def hello():
    try:
        return bottle.mako_template('hello')
    except mako.exceptions.SyntaxException as exx:
        return 'mako exception: {}\n'.format(exx)
EDIT: Per your comment, here are some pointers on how to install this globally. Install a bottle plugin that wraps your functions in the mako.exceptions.SyntaxException try block.
Something along these lines:
@bottle.route('/hello')
def hello():
    return bottle.mako_template('hello')

def catch_mako_errors(callback):
    def wrapper(*args, **kwargs):
        try:
            return callback(*args, **kwargs)
        except mako.exceptions.SyntaxException as exx:
            return 'mako exception: {}\n'.format(exx)
    return wrapper

bottle.install(catch_mako_errors)
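Note that the answer above targets template syntax errors; a mako runtime error propagates the original exception type instead. A broader variant of the wrapper (my own sketch, not from the answer) can catch Exception and return mako's built-in error report, which understands template line numbers:

import mako.exceptions

def catch_all_mako_errors(callback):
    def wrapper(*args, **kwargs):
        try:
            return callback(*args, **kwargs)
        except Exception:
            # Inside an except block, mako can render a template-aware
            # traceback of the current exception.
            return mako.exceptions.html_error_template().render()
    return wrapper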

How to make pdb recognize that the source has changed between runs?

From what I can tell, pdb does not recognize when the source code has changed between "runs". That is, if I'm debugging, notice a bug, fix that bug, and rerun the program in pdb (i.e. without exiting pdb), pdb will not recompile the code. I'll still be debugging the old version of the code, even if pdb lists the new source code.
So, does pdb not update the compiled code as the source changes? If not, is there a way to make it do so? I'd like to be able to stay in a single pdb session in order to keep my breakpoints and such.
FWIW, gdb will notice when the program it's debugging changes underneath it, though only on a restart of that program. This is the behavior I'm trying to replicate in pdb.
The following mini-module may help. If you import it in your pdb session, then you can use:
pdb> pdbs.r()
at any time to force-reload all non-system modules except __main__. The code skips __main__ because reloading it raises ImportError('Cannot re-init internal module __main__').
# pdbs.py - PDB support
from __future__ import print_function

def r():
    """Reload all non-system modules, to reload stuff on pdb restart."""
    import importlib
    import sys

    # This is likely to be OS-specific
    SYS_PREFIX = '/usr/lib'

    for k, v in list(sys.modules.items()):
        if (
            k == "__main__" or
            k.startswith("pdb") or
            not getattr(v, "__file__", None) or
            v.__file__.startswith(SYS_PREFIX)
        ):
            continue
        print("reloading %s [%s]" % (k, v.__file__), file=sys.stderr)
        importlib.reload(v)
Based on @pourhaus's answer (from 2014), this recipe augments the pdb++ debugger with a reload command (expected to work on both Linux & Windows, on any Python installation).
TIP: the new reload command accepts an optional list of module prefixes to reload (and to exclude), so as not to break already-loaded globals when resuming debugging.
Just insert the following Python-3.6 code into your ~/.pdbrc.py file:
## Augment `pdb++` with a `reload` command
#
# See https://stackoverflow.com/questions/724924/how-to-make-pdb-recognize-that-the-source-has-changed-between-runs/64194585#64194585
from pdb import Pdb


def _pdb_reload(pdb, modules):
    """
    Reload all non system/__main__ modules, without restarting debugger.

    SYNTAX:
        reload [<reload-module>, ...] [-x [<exclude-module>, ...]]

    * a dot(`.`) matches current frame's module `__name__`;
    * given modules are matched by prefix;
    * any <exclude-modules> are applied over any <reload-modules>.

    EXAMPLES:
        (Pdb++) reload                  # reload everything (brittle!)
        (Pdb++) reload myapp.utils      # reload just `myapp.utils`
        (Pdb++) reload myapp -x .       # reload `myapp` BUT current module
    """
    import importlib
    import sys

    ## Derive sys-lib path prefix.
    #
    SYS_PREFIX = importlib.__file__
    SYS_PREFIX = SYS_PREFIX[: SYS_PREFIX.index("importlib")]

    ## Parse args to decide prefixes to Include/Exclude.
    #
    has_excludes = False
    to_include = set()
    # Default prefixes to Exclude, or `pdb++` will break.
    to_exclude = {"__main__", "pdb", "fancycompleter", "pygments", "pyrepl"}
    for m in modules.split():
        if m == "-x":
            has_excludes = True
            continue
        if m == ".":
            m = pdb._getval("__name__")
        if has_excludes:
            to_exclude.add(m)
        else:
            to_include.add(m)

    to_reload = [
        (k, v)
        for k, v in sys.modules.items()
        if (not to_include or any(k.startswith(i) for i in to_include))
        and not any(k.startswith(i) for i in to_exclude)
        and getattr(v, "__file__", None)
        and not v.__file__.startswith(SYS_PREFIX)
    ]
    print(
        f"PDB-reloading {len(to_reload)} modules:",
        *[f"  +--{k:28s}:{getattr(v, '__file__', '')}" for k, v in to_reload],
        sep="\n",
        file=sys.stderr,
    )

    for k, v in to_reload:
        try:
            importlib.reload(v)
        except Exception as ex:
            print(
                f"Failed to PDB-reload module: {k} ({v.__file__}) due to: {ex!r}",
                file=sys.stderr,
            )


Pdb.do_reload = _pdb_reload
What do you mean by "rerun the program in pdb?" If you've imported a module, Python won't reread it unless you explicitly ask to do so, i.e. with reload(module). However, reload is far from bulletproof (see xreload for another strategy).
There are plenty of pitfalls in Python code reloading. To more robustly solve your problem, you could wrap pdb with a class that records your breakpoint info to a file on disk, for example, and plays them back on command.
(Sorry, ignore the first version of this answer; it's early and I didn't read your question carefully enough.)
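As a sketch of that record-and-replay idea (entirely hypothetical; the file name and helpers are my own): pdb.Pdb inherits get_all_breaks() and set_break() from bdb.Bdb, which is enough to persist breakpoints across sessions:

import json
import pdb

BP_FILE = "breakpoints.json"  # hypothetical location

def save_breakpoints(debugger):
    # get_all_breaks() maps filename -> list of breakpoint line numbers.
    with open(BP_FILE, "w") as f:
        json.dump(debugger.get_all_breaks(), f)

def load_breakpoints(debugger):
    with open(BP_FILE) as f:
        for filename, lines in json.load(f).items():
            for line in lines:
                debugger.set_break(filename, line)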
I decided to comment out some lines in my input script, and after
(Pdb) run
I got pdb to recognize that change. The bad thing: it runs the script from the beginning. The good things are below.
(Pdb) help run
run [args...]
        Restart the debugged python program. If a string is supplied
        it is split with "shlex", and the result is used as the new
        sys.argv. History, breakpoints, actions and debugger options
        are preserved. "restart" is an alias for "run".
This may not work for more complex programs, but here is a simple example using importlib.reload() with Python v3.5.3:
[user@machine ~] cat test.py
print('Test Message')
#
# start and run with debugger
#
[user@machine ~] python3 -m pdb test.py
> /home/user/test.py(1)<module>()
-> print('Test Message')
(Pdb) c
Test Message
The program finished and will be restarted
> /home/user/test.py(1)<module>()
-> print('Test Message')
#
# in another terminal, change test.py to say "Changed Test Message"
#
#
# back in PDB:
#
(Pdb) import importlib; import test; importlib.reload(test)
Changed Test Message
<module 'test' from '/home/user/test.py'>
(Pdb) c
Test Message
The program finished and will be restarted
> /home/user/test.py(1)<module>()
-> print('Changed Test Message')
(Pdb) c
Changed Test Message
The program finished and will be restarted
> /home/user/test.py(1)<module>()
-> print('Changed Test Message')
ipdb %autoreload extension
The IPython 6.2.0 docs document the %autoreload extension at http://ipython.readthedocs.io/en/stable/config/extensions/autoreload.html#module-IPython.extensions.autoreload :
In [1]: %load_ext autoreload
In [2]: %autoreload 2
In [3]: from foo import some_function
In [4]: some_function()
Out[4]: 42
In [5]: # open foo.py in an editor and change some_function to return 43
In [6]: some_function()
Out[6]: 43
