I would like to handle one specific exception in my script in a single place, without resorting to a try/except every time*. I was hoping that the code below would do this:
import sys
def handle(exc_type, exc_value, exc_traceback):
    if issubclass(exc_type, ValueError):
        print("ValueError handled here and the script continues")
        return
    # follow default behaviour for the exception
    sys.__excepthook__(exc_type, exc_value, exc_traceback)

sys.excepthook = handle
print("hello")
raise ValueError("wazaa")
print("world")
a = 1/0
The idea was that ValueError would be handled "manually" and the script would continue running (return to the script). For any other error (ZeroDivisionError in the case above), the normal traceback and script crash would ensue.
What happens is
$ python scratch_13.py
hello
ValueError handled here and the script continues
Process finished with exit code 1
The documentation mentions that (emphasis mine)
When an exception is raised and uncaught, the interpreter calls
sys.excepthook with three arguments, the exception class, exception
instance, and a traceback object. In an interactive session this
happens just before control is returned to the prompt; in a Python
program this happens just before the program exits.
which would mean that by the time I am in handle() it is already too late: the script has decided to die anyway, and my only possibility is to influence what the traceback will look like.
Is there a way to ignore a specific exception globally in a script?
* this is for a debugging context where the exception would normally be raised and crash the script (in production), but in some specific cases (a dev platform, for instance) this specific exception should simply be discarded. Otherwise I would have to put a try/except clause everywhere the issue could arise.
One way to do it is to use contextlib.suppress and have a global tuple of suppressed Exceptions:
from contextlib import suppress

suppressed = (ValueError,)
Then, anywhere the error might occur, you just wrap the code in with suppress(*suppressed):
print("hello")
with suppress(*suppressed): # gets ignored
raise ValueError("wazaa")
print("world")
a = 1/0 # raise ZeroDivisionError
And then in production you just change suppressed to ():
suppressed = ()

print("hello")
with suppress(*suppressed):
    raise ValueError("wazaa")  # raises the error
print("world")
a = 1/0  # doesn't get executed
I think this is the best you can do. You can't ignore the exception completely globally, but you can make it so that you only have to change one place.
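If you want to avoid editing the source between environments, one variation (a sketch only; the DEBUG_SUPPRESS environment variable name is my own assumption, not part of the answer above) is to build the suppressed tuple from the environment:
import os
from contextlib import suppress

# Hypothetical toggle: set DEBUG_SUPPRESS=1 on the dev platform, leave it unset in production.
suppressed = (ValueError,) if os.environ.get("DEBUG_SUPPRESS") else ()

print("hello")
with suppress(*suppressed):
    raise ValueError("wazaa")  # discarded on the dev platform, fatal in production
print("world")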
Related
What is the difference between those two:
except:
    # do something
and
except BaseException as be:
    print(be)
I mean, in the first case all possible exceptions are caught, but is this true for the second?
Also, can the error message be printed using the first case?
The accepted answer is incomplete (at least for Python 3.6 and above).
By catching Exception you catch most errors - basically all the errors that any module you use might throw.
By catching BaseException, in addition to all the above exceptions, you also catch exceptions of the types SystemExit, KeyboardInterrupt, and GeneratorExit.
By catching KeyboardInterrupt, for example, you may stop your code from exiting after an exit initiated by the user (like pressing ^C in the console, or stopping a launched application in some interpreters). This could be a wanted behaviour (for example, to log an exit), but it should be used with extreme care!
In the above example, by catching BaseException, you may cause your application to hang when you want it to exit.
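To see the difference concretely, here is a small illustrative sketch (my own example, not part of the answer above) that raises SystemExit as a stand-in for a user-initiated exit:
def run(handler_base):
    try:
        raise SystemExit("time to go")  # stands in for ^C / sys.exit()
    except handler_base as exc:
        print("caught", type(exc).__name__, "with an except", handler_base.__name__, "handler")

run(BaseException)  # BaseException swallows SystemExit, so the program keeps running
try:
    run(Exception)  # Exception does NOT catch SystemExit, so it propagates
except SystemExit:
    print("SystemExit escaped the 'except Exception' handler, as expected")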
Practically speaking, there is no difference between except: and except BaseException:, for any current Python release.
That's because you can't just raise any type of object as an exception. The raise statement explicitly disallows raising anything else:
[...] raise evaluates the first expression as the exception object. It must be either a subclass or an instance of BaseException.
Bold emphasis mine. This has not always been the case, however; in older Python releases (2.4 and before) you could use strings as exceptions too.
The advantage of the latter form, then, is that you get easy access to the caught exception. In order to be able to add as targetname, you must catch a specific class of exceptions, and only BaseException is going to do that.
You can still access the currently active exception by using sys.exc_info() though:
except:
    be = sys.exc_info()[1]
Pick what you feel is more readable for your future self and for your colleagues.
I am writing a server which handles events, and uncaught exceptions raised while handling an event must not terminate the server.
The server is a single non-threaded python process.
I want to terminate on these error types:
KeyboardInterrupt
MemoryError
...
The list of built in exceptions is long: https://docs.python.org/2/library/exceptions.html
I don't want to re-invent this exception handling, since I guess it was done several times before.
How to proceed?
Have a white-list: a list of exceptions which are OK, where processing the next event is the right choice.
Have a black-list: a list of exceptions which indicate that terminating the server is the right choice.
Hint: This question is not about running a unix daemon in background. It is not about double fork and not about redirecting stdin/stdout :-)
I would do this in a similar way to what you're thinking of: use the 'you shall not pass' Gandalf exception handler except Exception to catch all non-system-exiting exceptions, while creating a black-listed set of exceptions that should pass through and be re-raised.
Using the Gandalf handler will make sure GeneratorExit, SystemExit and KeyboardInterrupt (all system-exiting exceptions) pass through and terminate the program if no other handlers are present higher in the call stack. Inside the handler you can check with type(e) whether the class of a caught exception e actually belongs to the set of black-listed exceptions and, if so, re-raise it.
As a small demonstration:
import exceptions # Py2.x only
# dictionary holding {exception_name: exception_class}
excptDict = vars(exceptions)
exceptionNames = ['MemoryError', 'OSError', 'SystemError'] # and others
# set containing black-listed exceptions
blackSet = {excptDict[exception] for exception in exceptionNames}
Now blackSet = {OSError, SystemError, MemoryError}, holding the classes of the non-system-exiting exceptions we do not want to handle ourselves.
A try-except block can now look like this:
try:
    ...  # calls that raise exceptions
except Exception as e:
    if type(e) in blackSet:
        raise e  # re-raise
    # else just handle it
An example which catches all exceptions using BaseException can help illustrate what I mean (this is done for demonstration purposes only, in order to see how this re-raising will eventually terminate your program). Do note: I'm not suggesting you use BaseException; I'm using it in order to demonstrate which exceptions will actually 'pass through' and cause termination (i.e. everything that BaseException catches):
for i, j in excptDict.iteritems():
    if i.startswith('__'):
        continue  # __doc__ and other dunders
    try:
        try:
            raise j
        except Exception as ex:
            # print "Handler 'Exception' caught " + str(i)
            if type(ex) in blackSet:
                raise ex
    except BaseException:
        print "Handler 'BaseException' caught " + str(i)
# prints exceptions that would cause the system to exit
Handler 'BaseException' caught GeneratorExit
Handler 'BaseException' caught OSError
Handler 'BaseException' caught SystemExit
Handler 'BaseException' caught SystemError
Handler 'BaseException' caught KeyboardInterrupt
Handler 'BaseException' caught MemoryError
Handler 'BaseException' caught BaseException
Finally, to make this Python 2/3 agnostic, you can try to import exceptions and, if that fails (which it does in Python 3), fall back to importing builtins, which contains all exceptions; we search the dictionary by name, so it makes no difference:
try:
    import exceptions
    excDict = vars(exceptions)
except ImportError:
    import builtins
    excDict = vars(builtins)
I don't know if there's a smarter way to actually do this. Another solution, instead of having a try-except with a single except, is to have two handlers, one for the black-listed exceptions and the other for the general case:
try:
    ...  # calls that raise exceptions
except tuple(blackSet) as be:  # must go first, of course
    raise be
except Exception as e:
    ...  # handle the rest
The top-most exception is BaseException. There are two groups under that:
Exception derived
everything else
Things like StopIteration, ValueError, TypeError, etc., are all examples of Exception.
Things like GeneratorExit, SystemExit and KeyboardInterrupt are not descended from Exception.
So the first step is to catch Exception and not BaseException, which still allows the system-exiting exceptions to terminate the program easily. I recommend also catching GeneratorExit, as (1) it should never actually be seen unless it is raised manually; (2) you can log it and restart the loop; and (3) it is intended to signal that a generator has exited and can be cleaned up, not that the program should exit.
The next step is to log each exception with enough detail that you have the possibility of figuring out what went wrong (when you later get around to debugging).
Finally, you have to decide for yourself which, if any, of the Exception derived exceptions you want to terminate on: I would suggest RuntimeError and MemoryError, although you may be able to get around those by simply stopping and restarting your server loop.
So, really, it's up to you.
If there is some other error (such as IOError when trying to load a config file) that is serious enough to quit on, then the code responsible for loading the config file should be smart enough to catch that IOError and raise SystemExit instead.
As far as whitelist/blacklist goes -- use a black list, as there should only be a handful of Exception-based exceptions, if any, that you need to actually terminate the server on.
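Putting that advice together, a server loop along these lines catches Exception, re-raises a small black list, and logs everything else (a sketch under my own assumptions; get_next_event and handle_event are hypothetical names for your event source and handler):
import logging

FATAL_EXCEPTIONS = (MemoryError, RuntimeError)  # the black list: terminate on these

def serve_forever(get_next_event, handle_event):
    while True:
        event = get_next_event()
        try:
            handle_event(event)
        except FATAL_EXCEPTIONS:
            raise  # propagate and let the server die
        except Exception:
            # anything else: log the full traceback and keep serving
            logging.exception("error while handling event %r", event)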
I'm using custom exceptions to distinguish my exceptions from Python's built-in exceptions.
Is there a way to define a custom exit code when I raise the exception?
class MyException(Exception):
    pass

def do_something_bad():
    raise MyException('This is a custom exception')

if __name__ == '__main__':
    try:
        do_something_bad()
    except:
        print('Oops')  # Do some exception handling
        raise
In this code, the main block runs a few functions inside a try block.
After I catch an exception I want to re-raise it to preserve the traceback.
The problem is that raise always makes the script exit with code 1.
I want to exit the script with a custom exit code (for my custom exception), and exit 1 in any other case.
I've looked at this solution but it's not what I'm looking for:
Setting exit code in Python when an exception is raised
This solution forces me to check in every script I use whether the exception is a default or a custom one.
I want my custom exception to be able to tell the raise function what exit code to use.
You can override sys.excepthook to do what you want yourself:
import sys

class ExitCodeException(Exception):
    "base class for all exceptions which shall set the exit code"
    def getExitCode(self):
        "meant to be overridden in subclass"
        return 3

def handleUncaughtException(exctype, value, trace):
    oldHook(exctype, value, trace)
    if isinstance(value, ExitCodeException):
        sys.exit(value.getExitCode())

sys.excepthook, oldHook = handleUncaughtException, sys.excepthook
This way you can put this code in a special module which all your code just needs to import.
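Usage would then look something like this (a sketch; exit_codes is a hypothetical name for the module holding the snippet above, and the custom exit status relies on the hook shown there):
# exit_codes.py is assumed to contain the excepthook snippet above
from exit_codes import ExitCodeException

class ConfigError(ExitCodeException):
    def getExitCode(self):
        return 12  # the exit status this exception should produce

raise ConfigError("bad configuration")  # uncaught: traceback is printed, process exits with status 12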
Some programmers use sys.exit, others use SystemExit.
What is the difference?
When do I need to use SystemExit or sys.exit inside a function?
Example:
ref = osgeo.ogr.Open(reference)
if ref is None:
    raise SystemExit('Unable to open %s' % reference)
or:
ref = osgeo.ogr.Open(reference)
if ref is None:
    print('Unable to open %s' % reference)
    sys.exit(-1)
No practical difference, but there's another difference in your example code - print goes to standard out, but the exception text goes to standard error (which is probably what you want).
sys.exit(s) is just shorthand for raise SystemExit(s), as described in the former's docstring; try help(sys.exit). So, instead of either one of your example programs, you can do
sys.exit('Unable to open %s' % reference)
There are 3 exit functions, in addition to raising SystemExit.
The underlying one is os._exit, which requires 1 int argument, and exits immediately with no cleanup. It's unlikely you'll ever want to touch this one, but it is there.
sys.exit is defined in sysmodule.c and just runs PyErr_SetObject(PyExc_SystemExit, exit_code);, which is effectively the same as directly raising SystemExit. In fine detail, raising SystemExit is probably faster, since sys.exit requires a LOAD_ATTR and a CALL_FUNCTION versus a single RAISE_VARARGS opcode. Also, raise SystemExit produces slightly smaller bytecode (4 bytes less; 1 byte extra if you use from sys import exit, since sys.exit is expected to return None and so includes an extra POP_TOP).
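If you want to check this on your own interpreter, the dis module shows the opcodes each form compiles to (a small sketch; the exact opcode names vary between Python versions):
import dis
import sys

def via_function():
    sys.exit(1)  # attribute lookup + call

def via_raise():
    raise SystemExit(1)  # build the exception and raise it directly

dis.dis(via_function)
dis.dis(via_raise)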
The last exit function is defined in site.py and aliased to exit or quit in the REPL. It's actually an instance of the Quitter class (so it can have a custom __repr__), so it is probably the slowest running. Also, it closes sys.stdin prior to raising SystemExit, so it's recommended for use only in the REPL.
As for how SystemExit is handled, it eventually causes the VM to call os._exit, but before that, it does some cleanup. It also runs atexit._run_exitfuncs() which runs any callbacks registered via the atexit module. Calling os._exit directly bypasses the atexit step.
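A quick way to see that cleanup difference for yourself (a minimal sketch; run it once as-is, then once with the os._exit line uncommented instead):
import atexit
import os
import sys

atexit.register(lambda: print("atexit callback ran"))

# os._exit(0)  # uncomment to exit immediately: the callback above never runs
sys.exit(0)  # raises SystemExit; the atexit callback runs during interpreter shutdown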
My personal preference is that at the very least SystemExit is raised (or, even better, a more meaningful and well-documented custom exception), and then caught as close to the "main" function as possible, which can then have a last chance to deem it a valid exit or not. Libraries/deeply embedded functions that call sys.exit are just plain nasty from a design point of view. (Generally, exiting should happen "as high up" as possible.)
According to documentation sys.exit(s) effectively does raise SystemExit(s), so it's pretty much the same thing.
While the difference has been answered by many answers, Cameron Simpson makes an interesting point in https://mail.python.org/pipermail/python-list/2016-April/857869.html:
TL;DR: It's better to just raise a "normal" exception, and use SystemExit or sys.exit only at the top levels of a script.
I'm on Python 2.7 and Linux. I have some simple code and need a suggestion on whether I could replace sys.exit(1) with raise SystemExit.
==Actual code==
def main():
    try:
        create_logdir()
        create_dataset()
        unittest.main()
    except Exception as e:
        logging.exception(e)
        sys.exit(EXIT_STATUS_ERROR)

if __name__ == '__main__': main()
==Changed Code==
def main():
    try:
        create_logdir()
        create_dataset()
        unittest.main()
    except Exception as e:
        logging.exception(e)
        raise SystemExit

if __name__ == '__main__':
    main()
I am against both of these personally. My preferred pattern is like this:
def main(argv):
    try:
        ...
    except Exception as e:
        logging.exception(e)
        return 1

if __name__ == '__main__':
    sys.exit(main(sys.argv))
Notice that main() is back to being a normal function with normal
returns.
Also, most of us would avoid the "except Exception" and just let a top
level except bubble out: that way you get a stack backtrace for
debugging. I agree it prevents logging the exception and makes for
uglier console output, but I think it is a win. And if you do want
to log the exception there is always this:
try:
    ...
except Exception as e:
    logging.exception(e)
    raise
to recite the exception into the log and still let it bubble out
normally.
The problem with the "except Exception" pattern is that it catches and hides every exception, not merely the narrow set of specific exceptions that you understand.
Finally, it is frowned upon to raise a bare Exception class. In Python 3 I believe it is actually forbidden, so it is nonportable anyway. But even in Python 2 it is best to supply an Exception instance, not the class:
raise SystemExit(1)
All the functions in the try block have exceptions bubbled out using raise.
As an example, here is the function definition for create_logdir():
def create_logdir():
    try:
        os.makedirs(LOG_DIR)
    except OSError as e:
        sys.stderr.write("Failed to create log directory...Exiting !!!")
        raise
    print "log file: " + corrupt_log
    return True

def main():
    try:
        create_logdir()
    except Exception as e:
        logging.exception(e)
        raise SystemExit
(a) In case create_logdir() fails we will get the error below; is this fine, or do I need to improve this code?
Failed to create log directory...Exiting !!!ERROR:root:[Errno 17] File exists: '/var/log/dummy'
Traceback (most recent call last):
File "corrupt_test.py", line 245, in main
create_logdir()
File "corrupt_test.py", line 53, in create_logdir
os.makedirs(LOG_DIR)
File "/usr/local/lib/python2.7/os.py", line 157, in makedirs
OSError: [Errno 17] File exists: '/var/log/dummy'
I prefer the bubble-out approach, perhaps with a log or warning message as you have done, e.g.:
logging.exception("create_logdir failed: makedirs(%r): %s" % (LOG_DIR, e))
raise
(Also note that that log message records more context: context is very useful when debugging problems.)
For very small scripts sys.stderr.write is OK, but in general any of your functions that turn out to be generally useful might migrate into a library in order to be reused; consider that stderr is not always the place for messages; instead, reach for the logging module with error() or warn() or exception() as appropriate. There is more scope for configuring where the output goes that way, without wiring it into your inner functions.
Can I have just raise, instead of SystemExit or sys.exit(1)? This looks wrong to me:
def main():
    try:
        create_logdir()
    except Exception as e:
        logging.exception(e)
        raise
This is what I would do, myself.
Think: has the exception been "handled", meaning has the situation
been dealt with because it was expected? If not, let the exception
bubble out so that the user knows that something not understood by
the program has occurred.
Finally, it is generally bad to raise SystemExit or call sys.exit() from inside
anything other than the outermost main() function. And I resist it
even there; the main function, if written well, may often be called
from somewhere else usefully, and that makes it effectively a library
function (it has been reused). Such a function should not
unilaterally abort the program. How rude! Instead, let the exception
bubble out: perhaps the caller of main() expects it and can handle
it. By aborting and not "raise"ing, you have deprived the caller of
the chance to do something appropriate, even though you yourself
(i.e. "main") do not know enough context to handle the exception.
So I am for "raise" myself. And then only because you want to log the
error. If you didn't want to log the exception you could avoid the
try/except entirely and have simpler code: let the caller worry
about unhandled exceptions!
SystemExit is an exception, which basically means that your program had a behaviour such that you want to stop it and raise an error. sys.exit is the function that you can call to exit from your program, possibly giving a return code to the system.
EDIT: they are indeed the same thing, so the only difference is in the logic behind it in your program. An exception is some kind of "unwanted" behaviour, whereas a call to a function is, from the programmer's point of view, more of a "standard" action.
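You can verify that they are the same thing by catching the exception that sys.exit raises (a minimal sketch):
import sys

try:
    sys.exit(3)
except SystemExit as exc:
    print("sys.exit raised", type(exc).__name__, "with code", exc.code)  # ... with code 3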
I want to know the best way of checking a condition in a Python function definition and preventing further execution if the condition is not satisfied. Right now I am following the scheme below, but it actually prints the whole stack trace. I want it to print only an error message and not execute the rest of the code. Is there any other, cleaner solution for doing this?
def Mydef(n1,n2):
    if (n1>n2):
        raise ValueError("Arg1 should be less than Arg2")
    # Some Code

Mydef(2,1)
That is what exceptions are created for. Your scheme of raising an exception is good in general; you just need to add some code to catch and process it:
try:
    Mydef(2,1)
except ValueError as e:
    # Do some stuff when the exception is raised; e.message will contain your message
    pass
In this case, execution of Mydef stops when it encounters the raise ValueError line, and control goes to the code block under except.
You can read more about exceptions processing in the documentation.
If you don't want to deal with exception processing, you can gracefully stop the function from executing further code with a return statement.
def Mydef(n1,n2):
    if (n1>n2):
        return
def Mydef(n1,n2):
    if (n1>n2):
        print "Arg1 should be less than Arg2"
        return None
    # Some Code

Mydef(2,1)
Functions stop executing when they reach a return statement or when they run until the end of the definition. You should read about flow control in general (not specific to Python).