I have a few long-running experiments in my Jupyter notebooks. Because I don't know when they will finish, I add an email function to the last cell of the notebook, so I automatically get an email when the notebook is done.
But when there is a random exception in one of the cells, the whole notebook stops executing and I never get any email. So I'm wondering if there is some magic function that could execute a function in case of an exception / kernel stop.
Like
def handle_exception(stacktrace):
    send_mail_to_myself(stacktrace)

%%in_case_of_notebook_exception handle_exception  # <--- this is what I'm looking for
The other option would be to wrap every cell in a try/except, right? But that's soooo tedious.
Thanks in advance for any suggestions.
Such a magic command does not exist, but you can write it yourself.
from IPython.core.magic import register_cell_magic

@register_cell_magic('handle')
def handle(line, cell):
    try:
        exec(cell)
    except Exception as e:
        send_mail_to_myself(e)
        raise  # if you want the full trace-back in the notebook
It is not possible to load the magic command for the entire notebook automatically; you have to add it to each cell where you need this feature.
%%handle
some_code()
raise ValueError('this exception will be caught by the magic command')
@show0k gave the correct answer to my question (in regards to magic methods). Thanks a lot! :)
That answer inspired me to dig a little deeper and I came across an IPython method that lets you define a custom exception handler for the whole notebook.
I got it to work like this:
from IPython.core.ultratb import AutoFormattedTB

# initialize the formatter for making the tracebacks into strings
itb = AutoFormattedTB(mode='Plain', tb_offset=1)

# this function will be called on exceptions in any cell
def custom_exc(shell, etype, evalue, tb, tb_offset=None):
    # still show the error within the notebook, don't just swallow it
    shell.showtraceback((etype, evalue, tb), tb_offset=tb_offset)

    # grab the traceback and make it into a list of strings
    stb = itb.structured_traceback(etype, evalue, tb)
    sstb = itb.stb2text(stb)

    print(sstb)  # <--- this is the variable with the traceback string
    print("sending mail")
    send_mail_to_myself(sstb)

# this registers a custom exception handler for the whole current notebook
get_ipython().set_custom_exc((Exception,), custom_exc)
This can be put into a single cell at the top of any notebook, and it will send the mail whenever something goes wrong.
Note to self / TODO: make this snippet into a little Python module that can be imported into a notebook and activated via a line magic.
Be careful though. The documentation contains a warning for this set_custom_exc method: "WARNING: by putting in your own exception handler into IPython’s main execution loop, you run a very good chance of nasty crashes. This facility should only be used if you really know what you are doing."
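If you do want to turn that TODO into something reusable, a minimal sketch could rely on IPython's standard extension hook (the module name notify_on_error is hypothetical, and send_mail_to_myself is assumed to exist, as above):
# notify_on_error.py -- hypothetical module, a sketch of the TODO above
from IPython.core.ultratb import AutoFormattedTB

itb = AutoFormattedTB(mode='Plain', tb_offset=1)

def _custom_exc(shell, etype, evalue, tb, tb_offset=None):
    # still show the error in the notebook
    shell.showtraceback((etype, evalue, tb), tb_offset=tb_offset)
    sstb = itb.stb2text(itb.structured_traceback(etype, evalue, tb))
    send_mail_to_myself(sstb)  # assumed to be defined or imported elsewhere

def load_ipython_extension(ipython):
    # activated in a notebook with:  %load_ext notify_on_error
    ipython.set_custom_exc((Exception,), _custom_exc)
With the module on the Python path, a single %load_ext notify_on_error in the first cell replaces the copy-pasted snippet.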
Since notebook 5.1 you can use the new tag raises-exception.
This indicates that an exception in that specific cell is expected, and Jupyter will not stop execution.
(In order to set a tag, choose from the main menu: View -> Cell Toolbar -> Tags.)
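The tag lives in the cell's metadata in the .ipynb file, so it can also be set programmatically or kept under version control. Roughly, the relevant fragment of a tagged code cell looks like this (a sketch, with unrelated fields omitted):
{
  "cell_type": "code",
  "metadata": {
    "tags": ["raises-exception"]
  },
  "source": ["raise ValueError('this failure is expected')"]
}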
Why exec is not always the solution
It's some years later and I had a similar issue trying to handle errors with Jupyter magics. However, I needed variables to persist in the actual Jupyter notebook.
%%try_except print
a = 12
raise ValueError('test')
In this example I want the error to be printed (but it could be anything, such as sending an email as in the opening post), and I also want a == 12 to be true in the next cell. For that reason, the exec approach suggested above does not work when you define the magic in a different file. The solution I found is to use IPython's own facilities.
How you can solve it
from IPython.core.magic import line_magic, cell_magic, line_cell_magic, Magics, magics_class

@magics_class
class CustomMagics(Magics):
    @cell_magic
    def try_except(self, line, cell):
        """ This magic wraps a cell in try/except functionality """
        how = line.strip()  # name of the handler passed on the %%try_except line
        try:
            self.shell.ex(cell)  # This executes the cell in the current namespace
        except Exception as e:
            if ip.ev(f'callable({how})'):  # check we have a callable handler
                self.shell.user_ns['error'] = e  # add error to namespace
                ip.ev(f'{how}(error)')  # call the handler with the error
            else:
                raise e

# Register
from IPython import get_ipython
ip = get_ipython()
ip.register_magics(CustomMagics)
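After running that registration cell, the example from the top of this answer works, and the assignment survives into the next cell because self.shell.ex() executes the code in the user namespace. A rough illustration:
%%try_except print
a = 12
raise ValueError('test')   # the handler (here just print) receives the error

# in the next cell:
print(a)                   # -> 12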
I don't think there is an out-of-the-box way to do that without using a try..except statement in your cells. AFAIK a four-year-old issue mentions this, but it is still open.
However, the runtools extension may do the trick.
Related
Situation
The xlwings package provides a convenient way to call python functions from an excel VBA module. The xlwings documentation gives the following basic example:
Write the code below into a VBA module.
Sub HelloWorld()
    RunPython ("import hello; hello.world()")
End Sub
This calls the following code in hello.py:
# hello.py
import numpy as np
import xlwings as xw

def world():
    wb = xw.Book.caller()
    wb.sheets[0].range('A1').value = 'Hello World!'
Trying to run the python function world() directly (instead of calling it from excel VBA) gives the following error message:
Exception: Book.caller() must not be called directly. Call through
Excel or set a mock caller first with Book.set_mock_caller().
Question
I would like to modify the world() function so that it raises a custom exception instead when being run directly. To achieve this I first need to determine programmatically whether world() is being run directly or being called from Excel VBA (at least that's what I'd think). How can I do this?
You can catch the exception and then raise your own:
def world():
    try:
        wb = xw.Book.caller()
    except Exception:
        raise CustomException(custom_message)
    wb.sheets[0].range('A1').value = 'Hello World!'
You are worried that Exception is too generic, and rightly so. But that's xlwings' fault, not yours. If it is raising a generic Exception that's all you are left with to catch. You could check the exception message to make sure that you are not catching the wrong exception, but that would be brittle. Error messages are usually undocumented and not to be regarded as public, stable API.
Alternatively, you can fix the problem where it actually lies, in xlwings' source code, and make it do what looks to me like the right thing: raise a more specific exception.
class NotFromExcelError(Exception):
    pass
And at the end of caller:
raise NotFromExcelError('Book.caller() must not be called directly. Call through Excel '
                        'or set a mock caller first with Book.set_mock_caller().')
I hope a pull request like this would be accepted because raising a bare Exception like it currently does looks really wrong.
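As an aside, the message itself points at the supported way to run the function directly during development: set a mock caller first. A sketch, added at the bottom of hello.py (the workbook name hello.xlsm is hypothetical):
if __name__ == '__main__':
    # makes xw.Book.caller() resolve to this workbook when run outside Excel
    xw.Book('hello.xlsm').set_mock_caller()
    world()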
The Python standard library and other libraries I use (e.g. PyQt) sometimes use exceptions for non-error conditions. Look at the following excerpt from the function os.get_exec_path(). It uses multiple try statements to catch exceptions that are thrown while trying to find some environment data.
try:
    path_list = env.get('PATH')
except TypeError:
    path_list = None

if supports_bytes_environ:
    try:
        path_listb = env[b'PATH']
    except (KeyError, TypeError):
        pass
    else:
        if path_list is not None:
            raise ValueError(
                "env cannot contain 'PATH' and b'PATH' keys")
        path_list = path_listb

    if path_list is not None and isinstance(path_list, bytes):
        path_list = fsdecode(path_list)
These exceptions do not signify an error and are thrown under normal conditions. When using exception breakpoints for one of these exceptions, the debugger will also break in these library functions.
Is there a way in PyCharm or in Python in general to have the debugger not break on exceptions that are thrown and caught inside a library without any involvement of my code?
In PyCharm, go to Run -> View Breakpoints, and check "On raise" and "Ignore library files".
The first option makes the debugger stop whenever an exception is raised, instead of just when the program terminates, and the second option tells PyCharm to ignore exceptions raised and handled inside library files, so it breaks mainly in your own code.
The solution was found thanks to CrazyCoder's link to the feature request, which has since been added.
For a while I had a complicated scheme which involved something like the following:
try( Closeable ignore = Debugger.newBreakSuppression() )
{
    ... library call which may throw ...
} <-- exception looks like it is thrown here
This allowed me to never be bothered by exceptions that were thrown and swallowed within library calls. If an exception was thrown by a library call and was not caught, then it would appear as if it occurred at the closing curly bracket.
The way it worked was as follows:
Closeable is an interface which extends AutoCloseable without declaring any checked exceptions.
ignore is just a name that tells IntelliJ IDEA to not complain about the unused variable, and it is necessary because silly java does not support try( Debugger.newBreakSuppression() ).
Debugger is my own class with debugging-related helper methods.
newBreakSuppression() was a method which would create a thread-local instance of some BreakSuppression class which would take note of the fact that we want break-on-exception to be temporarily suspended.
Then I had an exception breakpoint with a break condition that would invoke my Debugger class to ask whether it is okay to break, and the Debugger class would respond with a "no" if any BreakSuppression objects were instantiated.
That was extremely complicated, because the VM throws exceptions before my code has loaded, so the filter could not be evaluated during program startup, and the debugger would pop up a dialog complaining about that instead of ignoring it. (I am not complaining about that; I hate silent errors.) So I had to have a terrible, horrible, do-not-try-this-at-home hack where the break condition looked like this: java.lang.System.err.equals( this ). Normally this would never return true, because System.err is not equal to a thrown exception, so the debugger would never break. However, when my Debugger class got initialized, it would replace System.err with a class of its own, which provided an implementation of equals(Object) that returned true if the debugger should break. So, essentially, I was using System.err as an eternal global variable.
Eventually I ditched this whole scheme because it is overly complicated and it performs very badly: exceptions apparently get thrown very often in the Java software ecosystem, so evaluating an expression every time an exception is thrown tremendously slows down everything.
This feature is not implemented yet; you can vote for it:
add ability to break (add breakpoint) on exceptions only for my files
There is another SO answer with a solution:
Debugging with pycharm, how to step into project, without entering django libraries
It is working for me, except I still go into the "_pydev_execfile.py" file, but I haven't stepped into other files after adding them to the exclusion in the linked answer.
Is there a way to launch an IPython shell or prompt when my program runs a line that raises an exception?
I'm mostly interested in the context, variables, in the scope (and subscopes) where the exception was raised. Something like Visual Studio's debugging, when an exception is thrown but not caught by anyone, Visual Studio will halt and give me the call stack and the variables present at every level.
Do you think there's a way to get something similar using IPython?
EDIT: The -pdb option when launching IPython doesn't seem to do what I want (or maybe I don't know how to use it properly, which is entirely possible). I run the following script:
def func():
    z = 2
    g = 'b'
    raise NameError("This error will not be caught, but IPython still "
                    "won't summon pdb, and I won't be able to consult "
                    "the z or g variables.")

x = 1
y = 'a'
func()
Using the command:
ipython -pdb exceptionTest.py
This stops execution when the error is raised, but brings up an IPython prompt where I have access to the global variables of the script, but not to the local variables of the function func. pdb is only invoked when I directly type a command in IPython that causes an error, i.e. raise NameError("This, sent from the IPython prompt, will trigger pdb.").
I don't necessarily need to use pdb, I'd just like to have access to the variables inside func.
EDIT 2: It has been a while, and IPython's -pdb option now works just as I want it to. That means when I raise an exception I can go back into the scope of func and read its variables z and g without any problem. Even without setting the -pdb option, one can run IPython in interactive mode and then call the magic function %debug after the program has exited with an error -- that will also drop you into an interactive ipdb prompt with all scopes accessible.
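The same behaviour can also be switched on from inside an interactive IPython session with the %pdb magic, for example:
%pdb on                   # enter ipdb automatically on any uncaught exception
%run exceptionTest.py     # after the NameError you land in func()'s scope, where z and g are visible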
Update for IPython v0.13:
import sys
from IPython.core import ultratb
sys.excepthook = ultratb.FormattedTB(mode='Verbose',
                                     color_scheme='Linux', call_pdb=1)
Doing:
ipython --pdb -c "%run exceptionTest.py"
kicks off the script after IPython initialises and you get dropped into the normal IPython+pdb environment.
You can try this:
from ipdb import launch_ipdb_on_exception

def main():
    with launch_ipdb_on_exception():
        # The rest of the code goes here.
        [...]
ipdb integrates IPython features into pdb. I use the following code to throw my apps into the IPython debugger after an unhandled exception.
import sys, ipdb, traceback

def info(type, value, tb):
    traceback.print_exception(type, value, tb)
    ipdb.pm()

sys.excepthook = info
@snapshoe's answer does not work on newer versions of IPython.
This does however:
import sys
from IPython import embed

def excepthook(type, value, traceback):
    embed()

sys.excepthook = excepthook
@Adam's answer works like a charm, except that IPython loads a bit slowly (800 ms on my machine). Here is a trick to make the load lazy.
import sys

class ExceptionHook:
    instance = None

    def __call__(self, *args, **kwargs):
        if self.instance is None:
            from IPython.core import ultratb
            self.instance = ultratb.FormattedTB(mode='Verbose',
                                                color_scheme='Linux', call_pdb=1)
        return self.instance(*args, **kwargs)

sys.excepthook = ExceptionHook()
Now we don't need to wait at the very beginning; IPython is only imported when the program actually crashes.
You can do something like the following:
import sys
from IPython.Shell import IPShellEmbed

ipshell = IPShellEmbed()

def excepthook(type, value, traceback):
    ipshell()

sys.excepthook = excepthook
See sys.excepthook and Embedding IPython.
If you want to both get the traceback and open an IPython shell with the environment at the point of the exception:
def exceptHook(*args):
    '''A routine to be called when an exception occurs. It prints the traceback
    with fancy formatting and then calls an IPython shell with the environment
    of the exception location.
    '''
    from IPython.core import ultratb
    ultratb.FormattedTB(call_pdb=False, color_scheme='LightBG')(*args)
    from IPython.terminal.embed import InteractiveShellEmbed
    import inspect
    frame = inspect.getinnerframes(args[2])[-1][0]
    msg = 'Entering IPython console at {0.f_code.co_filename} at line {0.f_lineno}'.format(frame)
    savehook = sys.excepthook  # save the exception hook
    InteractiveShellEmbed()(msg, local_ns=frame.f_locals, global_ns=frame.f_globals)
    sys.excepthook = savehook  # reset IPython's change to the exception hook

import sys
sys.excepthook = exceptHook
Note that it is necessary to pull the namespace information from the last frame referenced by the traceback (args[2]).
This man page says IPython has a --[no]pdb option that can be passed on the command line to start IPython on uncaught exceptions. Are you looking for more?
EDIT:
python -m pdb pythonscript.py can launch pdb. Not sure about a similar thing with IPython, though. If you are looking for the stack trace and a general post-mortem of the abnormal exit of the program, this should work.
Do you actually want to open a pdb session at every exception point? (As far as I can tell, a pdb session opened from IPython is the same as one opened in the normal shell.) If that's the case, here's the trick:
http://code.activestate.com/recipes/65287-automatically-start-the-debugger-on-an-exception/
I'm trying to save myself just a few keystrokes for a command I type fairly regularly in Python.
In my python startup script, I define a function called load which is similar to import, but adds some functionality. It takes a single string:
def load(s):
    # Do some stuff
    return something
In order to call this function I have to type
>>> load('something')
I would rather be able to simply type:
>>> load something
I am running Python with readline support, so I know there exists some programmability there, but I don't know if this sort of thing is possible using it.
I attempted to get around this by using InteractiveConsole and creating an instance of it in my startup file, like so:
import code, re, traceback

class LoadingInteractiveConsole(code.InteractiveConsole):
    def raw_input(self, prompt=""):
        s = raw_input(prompt)
        match = re.match(r'^load\s+(.+)', s)
        if match:
            module = match.group(1)
            try:
                load(module)
                print "Loaded " + module
            except ImportError:
                traceback.print_exc()
            return ''
        else:
            return s

console = LoadingInteractiveConsole()
console.interact("")
This works with the caveat that I have to hit Ctrl-D twice to exit the python interpreter: once to get out of my custom console, once to get out of the real one.
Is there a way to do this without writing a custom C program and embedding the interpreter into it?
Edit
Out of channel, I had the suggestion of appending this to the end of my startup file:
import sys
sys.exit()
It works well enough, but I'm still interested in alternative solutions.
You could try IPython, which gives a Python shell that allows many things, including automatic parentheses, which gives you the function call as you requested.
I think you want the cmd module.
See a tutorial here:
http://wiki.python.org/moin/CmdModule
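For illustration, a minimal sketch with the cmd module (assuming the load() helper from the question is importable):
import cmd

class LoadShell(cmd.Cmd):
    prompt = '>>> '

    def do_load(self, arg):
        """load <name> -- call the custom load() helper."""
        load(arg)

    def do_EOF(self, arg):
        # exit cleanly on Ctrl-D
        return True

LoadShell().cmdloop()
Note that cmd gives you a command shell rather than a full Python REPL, so this fits best if load-style commands are the main interaction.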
Hate to answer my own question, but there hasn't been an answer that works for all the versions of Python I use. Aside from the solution I posted in my question edit (which is what I'm now using), here's another:
Edit .bashrc to contain the following lines:
alias python3='python3 ~/py/shellreplace.py'
alias python='python ~/py/shellreplace.py'
alias python27='python27 ~/py/shellreplace.py'
Then simply move all of the LoadingInteractiveConsole code into the file ~/py/shellreplace.py. Once the script finishes executing, Python will cease executing, and the improved interactive session will be seamless.
Is there a convenient way to get a more detailed stack trace on a Python exception? I'm hoping to find a wrapper utility/module or some other way to get a bit more info from the stack trace without having to actually modify the Python script that generates it. I'd like to be able to use this when running unit tests, or doctests, or when running utilities or inline scripts from the shell.
Specifically I think I'd like to have the values of local variables, or maybe just the values of the arguments passed to the innermost function in the stack trace. Some options to set the detail level would be nifty.
Not specifically related to your problem, but you might find this code useful -- it automatically starts up the Python debugger when a fatal exception occurs. Good for working with interactive code. It's originally from ActiveState.
# code snippet, to be included in 'sitecustomize.py'
import sys

def info(type, value, tb):
    if hasattr(sys, 'ps1') or not sys.stderr.isatty():
        # we are in interactive mode or we don't have a tty-like
        # device, so we call the default hook
        sys.__excepthook__(type, value, tb)
    else:
        import traceback, pdb
        # we are NOT in interactive mode, print the exception...
        traceback.print_exception(type, value, tb)
        print
        # ...then start the debugger in post-mortem mode.
        pdb.pm()

sys.excepthook = info
Did you have a look at the traceback module?
http://docs.python.org/library/traceback.html
Also on SO:
Showing the stack trace from a running Python application
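For the basic case, usage of the traceback module looks roughly like this:
import traceback

try:
    1 / 0
except ZeroDivisionError:
    traceback.print_exc()            # formatted traceback to stderr
    text = traceback.format_exc()    # or the same traceback as a string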
As mentioned by pyfunc, you can use the functions in the traceback module, but you only get a stack trace.
If you want to inspect the stack, you have to use the sys.exc_info() function, walk the traceback member, and dump information from its frame (tb_frame). See the Python Reference Manual for further information on these types.
Here is an example:
def killit(a):
    a[10000000000000] = 1

def test(a):
    killit(a)

def iterate_traceback(tb):
    while tb is not None:
        yield tb
        tb = tb.tb_next

try:
    test(tuple())
except Exception as e:
    import sys
    exception_info = sys.exc_info()
    traceback = exception_info[2]
    for tb in iterate_traceback(traceback):
        print "-" * 10
        print tb.tb_frame.f_code
        print tb.tb_frame.f_locals
        print tb.tb_frame.f_globals