Is there a way to launch an IPython shell or prompt when my program runs a line that raises an exception?
I'm mostly interested in the context, variables, in the scope (and subscopes) where the exception was raised. Something like Visual Studio's debugging, when an exception is thrown but not caught by anyone, Visual Studio will halt and give me the call stack and the variables present at every level.
Do you think there's a way to get something similar using IPython?
EDIT: The -pdb option when launching IPython doesn't seem to do what I want (or maybe I don't know how to use it properly, which is entirely possible). I run the following script:
def func():
    z = 2
    g = 'b'
    raise NameError("This error will not be caught, but IPython still"
                    "won't summon pdb, and I won't be able to consult"
                    "the z or g variables.")

x = 1
y = 'a'
func()
Using the command:
ipython -pdb exceptionTest.py
This stops execution when the error is raised and gives me an IPython prompt where I have access to the global variables of the script, but not to the local variables of function func. pdb is only invoked when I directly type a command in IPython that causes an error, e.g. raise NameError("This, sent from the IPython prompt, will trigger pdb.").
I don't necessarily need to use pdb; I'd just like to have access to the variables inside func.
EDIT 2: It has been a while; IPython's -pdb option now works just as I want it to. That means that when I raise an exception, I can go back into the scope of func and read its variables z and g without any problem. Even without setting the -pdb option, one can run IPython in interactive mode and then call the magic function %debug after the program has exited with an error -- that will also drop you into an interactive ipdb prompt with all scopes accessible.
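For example, roughly (paths and output abbreviated):

In [1]: %run exceptionTest.py
...
NameError: This error will not be caught, ...

In [2]: %debug
> /path/to/exceptionTest.py(4)func()
ipdb> p z, g
(2, 'b')
ipdb> up
ipdb> p x, y
(1, 'a')
ipdb> q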
Update for IPython v0.13:
import sys
from IPython.core import ultratb
sys.excepthook = ultratb.FormattedTB(mode='Verbose',
                                     color_scheme='Linux', call_pdb=1)
Doing:
ipython --pdb -c "%run exceptionTest.py"
kicks off the script after IPython initialises and you get dropped into the normal IPython+pdb environment.
You can try this:
from ipdb import launch_ipdb_on_exception

def main():
    with launch_ipdb_on_exception():
        # The rest of the code goes here.
        [...]
ipdb integrates IPython features into pdb. I use the following code to throw my apps into the IPython debugger after an unhandled exception.
import sys, ipdb, traceback

def info(type, value, tb):
    traceback.print_exception(type, value, tb)
    ipdb.pm()

sys.excepthook = info
@snapshoe's answer does not work on newer versions of IPython. This does, however:
import sys
from IPython import embed

def excepthook(type, value, traceback):
    embed()

sys.excepthook = excepthook
@Adam's answer works like a charm, except that IPython loads a bit slowly (800 ms on my machine). Here is a trick to make the load lazy.
import sys

class ExceptionHook:
    instance = None

    def __call__(self, *args, **kwargs):
        if self.instance is None:
            from IPython.core import ultratb
            self.instance = ultratb.FormattedTB(mode='Verbose',
                                                color_scheme='Linux', call_pdb=1)
        return self.instance(*args, **kwargs)

sys.excepthook = ExceptionHook()
Now we don't have to wait at startup; IPython is only imported when the program crashes.
You can do something like the following:
import sys
from IPython.Shell import IPShellEmbed

ipshell = IPShellEmbed()

def excepthook(type, value, traceback):
    ipshell()

sys.excepthook = excepthook
See sys.excepthook and Embedding IPython.
If you want to both get the traceback and open an IPython shell with the environment at the point of the exception:
import sys

def exceptHook(*args):
    '''A routine to be called when an exception occurs. It prints the traceback
    with fancy formatting and then calls an IPython shell with the environment
    of the exception location.
    '''
    from IPython.core import ultratb
    ultratb.FormattedTB(call_pdb=False, color_scheme='LightBG')(*args)
    from IPython.terminal.embed import InteractiveShellEmbed
    import inspect
    frame = inspect.getinnerframes(args[2])[-1][0]
    msg = 'Entering IPython console at {0.f_code.co_filename} at line {0.f_lineno}'.format(frame)
    savehook = sys.excepthook  # save the exception hook
    InteractiveShellEmbed()(msg, local_ns=frame.f_locals, global_ns=frame.f_globals)
    sys.excepthook = savehook  # reset IPython's change to the exception hook

sys.excepthook = exceptHook
Note that it is necessary to pull the namespace information from the last frame referenced by the traceback (args[2]).
This man page says IPython has a --[no]pdb option to be passed at the command line to start the debugger on uncaught exceptions. Are you looking for more?
EDIT:
python -m pdb pythonscript.py can launch pdb. Not sure about a similar thing with IPython, though. If you are looking for the stack trace and a general post-mortem of the abnormal exit of the program, this should work.
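For example, with the exceptionTest.py script from the question, a session might look roughly like this (output abbreviated):

$ python -m pdb exceptionTest.py
> /path/to/exceptionTest.py(1)<module>()
-> def func():
(Pdb) c
Traceback (most recent call last):
  ...
NameError: This error will not be caught, ...
Uncaught exception. Entering post mortem debugging
> /path/to/exceptionTest.py(4)func()
(Pdb) p z, g
(2, 'b')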
Do you actually want to open a pdb session at every exception point? (I think a pdb session opened from IPython is the same as one opened in a normal shell.) If that's the case, here's the trick:
http://code.activestate.com/recipes/65287-automatically-start-the-debugger-on-an-exception/
My largest issue is the fancy IPython exceptions. I want them to look like normal Python exceptions, but when I try to reset sys.excepthook it does not work:
In [31]: import sys; sys.excepthook = sys.__excepthook__; (sys.excepthook, sys.__excepthook__)
Out[31]:
(<bound method TerminalInteractiveShell.excepthook of <IPython.terminal.interactiveshell.TerminalInteractiveShell object at 0x27c8e10>>,
<function sys.excepthook>)
IPython replaces sys.excepthook every time you execute a line of code, so changing it yourself has no lasting effect. In addition, IPython catches all exceptions and handles them itself without ever invoking sys.excepthook.
This answer to a related question provides a way of overriding this behaviour: basically, you have to override IPython's showtraceback method with one that formats and displays your exception the way you want it.
import sys
import traceback

import IPython

def showtraceback(self, *args, **kwargs):
    # accept (and ignore) whatever extra arguments IPython passes in
    traceback_lines = traceback.format_exception(*sys.exc_info())
    del traceback_lines[1]  # drop the frame pointing at the IPython input line
    message = ''.join(traceback_lines)
    sys.stderr.write(message)

IPython.core.interactiveshell.InteractiveShell.showtraceback = showtraceback
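With that in place, an exception raised in the shell prints roughly like a plain Python traceback; the del traceback_lines[1] above is what removes the File "<ipython-input-...>" line:

In [1]: raise ValueError('boom')
Traceback (most recent call last):
ValueError: boom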
I have a few long-running experiments in my Jupyter Notebooks. Because I don't know when they will finish, I add an email function to the last cell of the notebook, so I automatically get an email, when the notebook is done.
But when there is a random exception in one of the cells, the whole notebook stops executing and I never get any email. So I'm wondering if there is some magic function that could execute a function in case of an exception / kernel stop.
Like
def handle_exception(stacktrace):
    send_mail_to_myself(stacktrace)

%%in_case_of_notebook_exception handle_exception # <--- this is what I'm looking for
The other option would be to wrap every cell in a try/except, right? But that's soooo tedious.
Thanks in advance for any suggestions.
Such a magic command does not exist, but you can write it yourself.
from IPython.core.magic import register_cell_magic

@register_cell_magic('handle')
def handle(line, cell):
    try:
        exec(cell)
    except Exception as e:
        send_mail_to_myself(e)
        raise  # if you want the full traceback in the notebook
It is not possible to load the magic command for the entire notebook automatically; you have to add it to each cell where you need this feature.
%%handle
some_code()
raise ValueError('this exception will be caught by the magic command')
@show0k gave the correct answer to my question (in regards to magic methods). Thanks a lot! :)
That answer inspired me to dig a little deeper and I came across an IPython method that lets you define a custom exception handler for the whole notebook.
I got it to work like this:
from IPython.core.ultratb import AutoFormattedTB

# initialize the formatter for making the tracebacks into strings
itb = AutoFormattedTB(mode='Plain', tb_offset=1)

# this function will be called on exceptions in any cell
def custom_exc(shell, etype, evalue, tb, tb_offset=None):
    # still show the error within the notebook, don't just swallow it
    shell.showtraceback((etype, evalue, tb), tb_offset=tb_offset)
    # grab the traceback and make it into a list of strings
    stb = itb.structured_traceback(etype, evalue, tb)
    sstb = itb.stb2text(stb)
    print(sstb)  # <--- this is the variable with the traceback string
    print("sending mail")
    send_mail_to_myself(sstb)

# this registers a custom exception handler for the whole current notebook
get_ipython().set_custom_exc((Exception,), custom_exc)
So this can be put into a single cell at the top of any notebook and as a result it will do the mailing in case something goes wrong.
Note to self / TODO: make this snippet into a little python module that can be imported into a notebook and activated via line magic.
Be careful though. The documentation contains a warning for this set_custom_exc method: "WARNING: by putting in your own exception handler into IPython’s main execution loop, you run a very good chance of nasty crashes. This facility should only be used if you really know what you are doing."
Since notebook 5.1 you can use a new tag: raises-exception
This will indicate that an exception in that specific cell is expected, and Jupyter will not stop the execution.
(In order to set a tag you have to choose from the main menu: View -> Cell Toolbar -> Tags)
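If you prefer setting the tag programmatically rather than through the menu, here is a small sketch using the nbformat package (the notebook filename and cell index are made up):

import nbformat

nb = nbformat.read('experiments.ipynb', as_version=4)
# mark the first cell so that an exception in it does not stop execution
nb.cells[0].metadata.setdefault('tags', []).append('raises-exception')
nbformat.write(nb, 'experiments.ipynb')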
Why exec is not always the solution
It's some years later and I had a similar issue trying to handle errors with Jupyter magics. However, I needed variables to persist in the actual Jupyter notebook.
%%try_except print
a = 12
raise ValueError('test')
In this example, I want the error to print (but it could be anything, such as an e-mail as in the opening post), but also a == 12 to be true in the next cell. For that reason, the exec-based method suggested above does not work when you define the magic in a different file. The solution I found is to use IPython's own functionality.
How you can solve it
from IPython.core.magic import line_magic, cell_magic, line_cell_magic, Magics, magics_class

@magics_class
class CustomMagics(Magics):
    @cell_magic
    def try_except(self, line, cell):
        """This magic wraps a cell in try/except functionality."""
        how = line.strip()  # name of the handler given on the magic line, e.g. %%try_except print
        try:
            self.shell.ex(cell)  # this executes the cell in the current namespace
        except Exception as e:
            if self.shell.ev(f'callable({how})'):  # check we have a callable handler
                self.shell.user_ns['error'] = e    # add the error to the namespace
                self.shell.ev(f'{how}(error)')     # call the handler with the error
            else:
                raise e

# Register
from IPython import get_ipython
ip = get_ipython()
ip.register_magics(CustomMagics)
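With the magics registered, usage could look something like this (the handler name is just an example; each comment marks a separate notebook cell):

# cell 1: define any callable that accepts the exception
def handler(error):
    print('handled:', error)

# cell 2: wrap the risky code
%%try_except handler
a = 12
raise ValueError('test')

# cell 3: the assignment survived, because the cell ran in the notebook's own namespace
a  # -> 12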
I don't think there is an out-of-the-box way to do that without using a try..except statement in your cells. AFAIK, a four-year-old issue mentions this, but it is still open.
However, the runtools extension may do the trick.
Say I have an IPython session, from which I call some script:
> run my_script.py
Is there a way to induce a breakpoint in my_script.py from which I can inspect my workspace from IPython?
I remember reading that in previous versions of IPython one could do:
from IPython.Debugger import Tracer

def my_function():
    x = 5
    Tracer()
    print 5
but the submodule Debugger does not seem to be available anymore.
Assuming that I have an IPython session open already: how can I stop my program at a location of my choice and inspect my workspace with IPython?
In general, I would prefer solutions that do not require me to pre-specify line numbers, since I would like to possibly have more than one such call to Tracer() above and not have to keep track of the line numbers where they are.
Tracer() still exists in IPython, in a different module. You can do the following:
from IPython.core.debugger import Tracer

def my_function():
    x = 5
    Tracer()()
    print 5
Note the additional call parentheses after Tracer().
Edit: from IPython 6 onwards, Tracer is deprecated, so you should use set_trace() instead:
from IPython.core.debugger import set_trace

def my_function():
    x = 5
    set_trace()
    print 5
You can run it and set a breakpoint at a given line with:
run -d -b12 myscript
Where -b12 sets a breakpoint at line 12. When you run this, you'll immediately drop into pdb, and you'll need to enter c to continue execution up to that breakpoint.
This is the version using the set_trace() method instead of the deprecated Tracer() one.
from IPython.core.debugger import Pdb

def my_function():
    x = 5
    Pdb().set_trace()
    print 5
Inside the IPython shell, you can do
from IPython.core.debugger import Pdb
pdb = Pdb()
pdb.runcall(my_function)
for example, or do the normal pdb.set_trace() inside your function.
With Python 3 (v3.7+), there's the new breakpoint() function. You can modify its behaviour so it'll call IPython's debugger for you.
Basically you can set an environment variable that points to a debugger function. (If you don't set the variable, breakpoint() defaults to calling pdb.)
To make breakpoint() call IPython's debugger, set the environment variable (in your shell) like so:
# for bash/zsh users
export PYTHONBREAKPOINT='IPython.core.debugger.set_trace'
# powershell users
$env:PYTHONBREAKPOINT='IPython.core.debugger.set_trace'
(Note, obviously if you want to permanently set the environment variable, you'll need to modify your shell profile or system preferences.)
You can write:
def my_function():
    x = 5
    breakpoint()
    print(5)
And it'll break into IPython's debugger for you. I think it's handier than having to do from IPython.core.debugger import set_trace and call set_trace().
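If you'd rather not touch the environment at all, a rough equivalent (just a sketch; sys.breakpointhook is the hook that breakpoint() calls) is to point it at IPython's debugger from within the program:

import sys
from IPython.core.debugger import set_trace

# any later breakpoint() call now opens IPython's debugger instead of plain pdb
sys.breakpointhook = set_trace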
I have always had the same question, and the best workaround I have found, which is pretty hacky, is to add a line that will break my code, like so:
...
a = 1 + 2
STOP  # undefined name: raises a NameError and halts execution here
...
Then when I run that code it will break, and I can do %debug to go there and inspect. You can also turn on %pdb to always drop in at the point where your code breaks, but this can be bothersome if you don't want to inspect everywhere and every time your code breaks. I would love a more elegant solution.
I see a lot of options here, but maybe not the following simple option.
Fire up ipython in the directory where my_script.py is.
Turn the debugger on if you want the code to go into debug mode when it fails. Type %pdb.
In [1]: %pdb
Automatic pdb calling has been turned ON
Next type
In [2]: %run -d ./my_script.py
*** Blank or comment
*** Blank or comment
NOTE: Enter 'c' at the ipdb> prompt to continue execution.
> c:\users\c81196\lgd\mortgages-1\nmb\lgd\run_lgd.py(2)<module>()
1 # system imports
----> 2 from os.path import join
Now you can set a breakpoint wherever you want it.
Type b 100 to have a breakpoint at line 100, or b whatever.py:102 to have a breakpoint at line 102 in whatever.py.
For instance:
ipdb> b 100
Then continue running with c (continue).
ipdb> c
Once the code fails or reaches the breakpoint, you can start using the full power of the Python debugger, pdb.
Note that pdb also allows the setting of a breakpoint at a function.
b(reak) [([filename:]lineno | function) [, condition]]
So you do not necessarily need to use line numbers.
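For example, at the ipdb prompt (the function name here is hypothetical and must be visible from the current frame):

ipdb> b my_function
ipdb> b whatever.py:102, x > 3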
Suppose I have a Python program where assert has been used to define how things should be, and I would like to capture anomalies with a read-eval-print loop rather than having an AssertionError thrown.
Granted, I could have
if reality != expectation:
    print("assertion failed")
    import pdb; pdb.set_trace()
but that's far uglier in the code than a plain assert(reality == expectation).
I could have pdb.set_trace() called in an except: block at the top level, but then I'd have lost all the context of the failure, right? (I mean, the stack trace could be recovered from the exception object, but not argument values, etc.)
Is there anything like a --magic command-line flag that could turn the python3 interpreter into what I need?
Mainly taken from this great snippet:
import sys

def info(type, value, tb):
    if hasattr(sys, 'ps1') or not sys.stderr.isatty() or type != AssertionError:
        # we are in interactive mode or we don't have a tty-like
        # device, so we call the default hook
        sys.__excepthook__(type, value, tb)
    else:
        import traceback, pdb
        # we are NOT in interactive mode, print the exception...
        traceback.print_exception(type, value, tb)
        print()
        # ...then start the debugger in post-mortem mode.
        pdb.pm()

sys.excepthook = info
When you initialize your code with this, all AssertionErrors should invoke pdb.
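For instance, assuming the snippet above is saved as debug_asserts.py (the module and function names here are made up):

# demo.py
import debug_asserts  # installs the excepthook from the snippet above

def check(reality, expectation):
    assert reality == expectation

check(1, 2)  # prints the traceback, then drops into pdb's post-mortem prompt,
             # where e.g. `p reality, expectation` shows the failing values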
Have a look at the nose project. You can use it with the --pdb option to drop into the debugger on errors.
Is there a convenient way to get a more detailed stack trace on a Python exception? I'm hoping to find a wrapper utility/module or some other way to get a bit more info from the stack trace without having to actually modify the Python script that generates it. I'd like to be able to use this when running unit tests, or doctests, or when running utilities or inline scripts from the shell.
Specifically I think I'd like to have the values of local variables, or maybe just the values of the arguments passed to the innermost function in the stack trace. Some options to set the detail level would be nifty.
Not specifically related to your problem, but you might find this code useful -- it automatically starts up the Python debugger when a fatal exception occurs. Good for working with interactive code. It's originally from ActiveState.
# code snippet, to be included in 'sitecustomize.py'
import sys

def info(type, value, tb):
    if hasattr(sys, 'ps1') or not sys.stderr.isatty():
        # we are in interactive mode or we don't have a tty-like
        # device, so we call the default hook
        sys.__excepthook__(type, value, tb)
    else:
        import traceback, pdb
        # we are NOT in interactive mode, print the exception...
        traceback.print_exception(type, value, tb)
        print()
        # ...then start the debugger in post-mortem mode.
        pdb.pm()

sys.excepthook = info
Did you have a look at the traceback module?
http://docs.python.org/library/traceback.html
Also on SO:
Showing the stack trace from a running Python application
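For a quick start, the simplest thing the traceback module gives you is the formatted trace of the current exception from inside an except block; a minimal sketch:

import traceback

def boom():
    raise ValueError("something went wrong")

try:
    boom()
except Exception:
    traceback.print_exc()          # write the formatted traceback to stderr
    text = traceback.format_exc()  # or keep it as a string

Note that this only gives the trace itself, not local variables; the answer below walks the frames for that.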
As mentioned by pyfunc, you can use the functions in the traceback module, but you only get a stack trace.
If you want to inspect the stack, you have to use the sys.exc_info() function, walk the traceback member, and dump information from its frames (tb_frame). See the Python Reference Manual for further information on these types.
Here is an example:
def killit(a):
    a[10000000000000] = 1

def test(a):
    killit(a)

def iterate_traceback(tb):
    while tb is not None:
        yield tb
        tb = tb.tb_next

try:
    test(tuple())
except Exception as e:
    import sys
    exception_info = sys.exc_info()
    traceback = exception_info[2]
    for tb in iterate_traceback(traceback):
        print("-" * 10)
        print(tb.tb_frame.f_code)
        print(tb.tb_frame.f_locals)
        print(tb.tb_frame.f_globals)