Using sys.exit or SystemExit; when to use which? - python

Some programmers use sys.exit, others use SystemExit.
What is the difference?
When do I need to use SystemExit or sys.exit inside a function?
Example:
ref = osgeo.ogr.Open(reference)
if ref is None:
    raise SystemExit('Unable to open %s' % reference)
or:
ref = osgeo.ogr.Open(reference)
if ref is None:
    print('Unable to open %s' % reference)
    sys.exit(-1)

No practical difference, but there is another difference in your example code: print writes to standard output, while the uncaught exception's message goes to standard error (which is probably what you want).
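If you prefer the print/sys.exit style but still want the message on standard error, a minimal sketch (reusing osgeo.ogr and reference from the question) could look like this:
import sys
import osgeo.ogr

ref = osgeo.ogr.Open(reference)
if ref is None:
    # write the message to stderr explicitly, then exit with a non-zero status
    print('Unable to open %s' % reference, file=sys.stderr)
    sys.exit(1)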

sys.exit(s) is just shorthand for raise SystemExit(s), as described in the former's docstring; try help(sys.exit). So, instead of either one of your example programs, you can do
sys.exit('Unable to open %s' % reference)

There are 3 exit functions, in addition to raising SystemExit.
The lowest-level one is os._exit, which requires a single integer argument and exits immediately with no cleanup. It's unlikely you'll ever want to touch this one, but it is there.
sys.exit is defined in sysmodule.c and just runs PyErr_SetObject(PyExc_SystemExit, exit_code);, which is effectively the same as raising SystemExit directly. In fine detail, raising SystemExit is probably faster, since sys.exit requires LOAD_ATTR and CALL_FUNCTION opcodes versus a single RAISE_VARARGS. raise SystemExit also produces slightly smaller bytecode (4 bytes less; the call form is still 1 byte larger even with from sys import exit, since the call is expected to return None and so includes an extra POP_TOP).
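If you want to check the bytecode claim yourself, the dis module will show it (a small sketch; the exact opcodes and sizes vary between Python versions):
import dis
import sys

def call_exit():
    sys.exit(1)          # attribute lookup + call, plus a POP_TOP for the discarded return value

def raise_exit():
    raise SystemExit(1)  # load the class, call it, RAISE_VARARGS

dis.dis(call_exit)
dis.dis(raise_exit)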
The last exit function is defined in site.py and aliased to exit and quit in the REPL. It's actually an instance of the Quitter class (so it can have a custom __repr__), and so is probably the slowest running. Also, it closes sys.stdin prior to raising SystemExit, so it's intended for use only in the REPL.
As for how SystemExit is handled: it eventually causes the VM to call os._exit, but before that it does some cleanup. It also runs atexit._run_exitfuncs(), which runs any callbacks registered via the atexit module. Calling os._exit directly bypasses the atexit step.
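A quick way to see that cleanup difference is to register an atexit callback and try the different exits (a minimal sketch; run one variant at a time):
import atexit
import os
import sys

atexit.register(lambda: print("atexit callback ran"))

sys.exit(0)             # callback runs on the way out
# raise SystemExit(0)   # same: callback runs
# os._exit(0)           # callback (and all other cleanup) is skipped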

My personal preference is that at the very least SystemExit is raised (or, even better, a more meaningful and well-documented custom exception), and then caught as close to the "main" function as possible, which then has a last chance to deem it a valid exit or not. Libraries and deeply embedded functions that call sys.exit are just plain nasty from a design point of view. (Generally, exiting should happen "as high up" as possible.)
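A minimal sketch of that shape, with an assumed custom exception name (FatalAppError) standing in for whatever your application defines:
import sys

class FatalAppError(Exception):
    """Raised by lower layers when the program cannot usefully continue."""

def open_reference(path):
    ref = None  # stand-in for a real call such as osgeo.ogr.Open(path)
    if ref is None:
        raise FatalAppError('Unable to open %s' % path)
    return ref

def main():
    try:
        open_reference('/tmp/reference.shp')
    except FatalAppError as e:
        # the top level gets the last word on whether this really is a fatal exit
        sys.exit(str(e))

if __name__ == '__main__':
    main()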

According to the documentation, sys.exit(s) effectively does raise SystemExit(s), so it's pretty much the same thing.
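You can see that equivalence by catching what sys.exit raises (a small sketch):
import sys

try:
    sys.exit('something went wrong')
except SystemExit as e:
    # sys.exit just raised SystemExit; the argument is available as e.code
    print(repr(e.code))   # 'something went wrong'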

While the difference has been covered by other answers, Cameron Simpson makes an interesting point in https://mail.python.org/pipermail/python-list/2016-April/857869.html:
TL;DR: It's better to just raise a "normal" exception, and use SystemExit or sys.exit only at the top levels of a script.
I'm on Python 2.7 and Linux. I have some simple code and need a suggestion on whether I can replace sys.exit(1) with raise SystemExit.
==Actual code==
def main():
    try:
        create_logdir()
        create_dataset()
        unittest.main()
    except Exception as e:
        logging.exception(e)
        sys.exit(EXIT_STATUS_ERROR)

if __name__ == '__main__': main()
==Changed Code==
def main():
    try:
        create_logdir()
        create_dataset()
        unittest.main()
    except Exception as e:
        logging.exception(e)
        raise SystemExit

if __name__ == '__main__':
    main()
I am against both of these personally. My preferred pattern is like
this:
def main(argv):
    try:
        ...
    except Exception as e:
        logging.exception(e)
        return 1

if __name__ == '__main__':
    sys.exit(main(sys.argv))
Notice that main() is back to being a normal function with normal
returns.
Also, most of us would avoid the "except Exception" and just let the exception bubble out to the top level: that way you get a stack backtrace for debugging. I agree it prevents logging the exception and makes for uglier console output, but I think it is a win. And if you do want to log the exception there is always this:
try:
    ...
except Exception as e:
    logging.exception(e)
    raise
to record the exception in the log and still let it bubble out normally.
The problem with the "except Exception" pattern is that it catches and hides every exception, not merely the narrow set of specific exceptions that you understand.
Finally, it is frowned upon to raise a bare Exception class. In Python 3 I believe it is actually forbidden, so it is nonportable anyway. But even in Python 2 it is best to supply an Exception instance, not the class:
raise SystemExit(1)
All the functions in the try block let exceptions bubble out using raise. As an example, here is the definition of create_logdir():
def create_logdir():
    try:
        os.makedirs(LOG_DIR)
    except OSError as e:
        sys.stderr.write("Failed to create log directory...Exiting !!!")
        raise
    print "log file: " + corrupt_log
    return True
def main():
    try:
        create_logdir()
    except Exception as e:
        logging.exception(e)
        raise SystemExit
(a) In case create_logdir() fails we will get the error below; is this fine, or do I need to improve this code?
Failed to create log directory...Exiting !!!ERROR:root:[Errno 17] File exists: '/var/log/dummy'
Traceback (most recent call last):
  File "corrupt_test.py", line 245, in main
    create_logdir()
  File "corrupt_test.py", line 53, in create_logdir
    os.makedirs(LOG_DIR)
  File "/usr/local/lib/python2.7/os.py", line 157, in makedirs
OSError: [Errno 17] File exists: '/var/log/dummy'
I prefer the bubble-out approach, perhaps with a log or warning message as you have done, e.g.:
logging.exception("create_logdir failed: makedirs(%r): %s" % (LOG_DIR, e))
raise
(Also note that that log message records more context: context is very useful when debugging problems.)
For very small scripts sys.stderr.write is ok, but in general any of your functions that turn out to be generally useful might migrate into a library in order to be reused; consider that stderr is not always the place for messages. Instead, reach for the logging module with error() or warning() or exception() as appropriate. There is more scope for configuring where the output goes that way, without wiring it into your inner functions.
Can I just have raise, instead of SystemExit or sys.exit(1)? This looks wrong to me:
def main():
    try:
        create_logdir()
    except Exception as e:
        logging.exception(e)
        raise
This is what I would do, myself.
Think: has the exception been "handled", meaning has the situation
been dealt with because it was expected? If not, let the exception
bubble out so that the user knows that something not understood by
the program has occurred.
Finally, it is generally bad to raise SystemExit or call sys.exit() from inside anything other than the outermost main() function. And I resist it
even there; the main function, if written well, may often be called
from somewhere else usefully, and that makes it effectively a library
function (it has been reused). Such a function should not
unilaterally abort the program. How rude! Instead, let the exception
bubble out: perhaps the caller of main() expects it and can handle
it. By aborting and not "raise"ing, you have deprived the caller of
the chance to do something appropriate, even though you yourself
(i.e. "main") do not know enough context to handle the exception.
So I am for "raise" myself. And then only because you want to log the
error. If you didn't want to log the exception you could avoid the
try/except entirely and have simpler code: let the caller worry
about unhandled exceptions!

SystemExit is an exception, which basically means that your program reached a state where you want to stop it and raise an error. sys.exit is the function that you can call to exit from your program, possibly giving a return code to the system.
EDIT: they are indeed the same thing, so the only difference is in the logic behind it in your program. An exception signals some kind of "unwanted" behaviour, whereas a function call is, from a programmer's point of view, more of a "standard" action.

Related

Logging and exceptions in Python

I'm currently coding my first larger script, which is a console-based GUI where the user selects options by entering numbers to start several tasks.
I've recently implemented a lot of error handling to prevent the window from closing if something goes wrong in the background. I'm a little bit confused whether or not my approach is correct.
The basic structure of my code is as following:
There is a function read_excel() which loads some Excel files with pandas:
def read_excel(excel_path):
    try:
        df = pd.read_excel(excel_path, encoding="utf-8")
    except FileNotFoundError:
        raise FileNotFoundError('Unable to load assignment file, maybe choose custom')
        return
    # do stuff....
    if not all(len(x) == len(signalnames) for x in [frequencies, cans]):
        raise ValueError('Frequency and can number must be given for every signal in assignment file!')
    else:
        logging.info("Successfully loaded assignment file")
        return signalnames, frequencies, group_names, cans, can_paths
This function is then used together with others in the function ft_14() which is called by the GUI:
def ft_14(files, draft, assignement_path):
    try:
        signalnames, frequencies, group_names, cans, can_paths = read_excel(assignement_path)
    except (ValueError, FileNotFoundError) as e:
        logging.error(e)
        return
    # do stuff..
    try:
        wb.save(os.path.join(os.path.dirname(files[0]), "FT.14_results.xlsx"))
        wb.close()
    except Exception:
        logging.error('Unable to save report excel')
So my approach is to raise exceptions in the backend and then catch them in the functions called by the GUI, using logging to display them to the user. My question is whether this approach is the correct way to use exceptions and logging together, or if there is a smarter way, because calling:
try:
    ...  # some function()
except Exception as e:
    logging.error(e)
doesn't feel right to me.
The code you posted at the end, which makes you uncomfortable, is correct in that it does what it says it will do. If there is an exception, it logs the error.
The concern is that it also handles the exception. In many cases, you don't want to change anything else in the exception handling process -- you just want to log, and let the exception propagate normally.
https://docs.python.org/3/tutorial/errors.html#raising-exceptions
import logging

logger = logging.getLogger(__name__)

def fn(x):
    try:
        return x / 0
    except Exception as e:
        logger.error(str(e))
        raise  # re-raise the same exception, with its original traceback

print("let's do something risky")
try:
    fn(20)
except Exception as e:
    pass
print("it has been done")
Notice how fn(x) can detect and handle exceptions, then re-raise them as if it had not done anything to the exception at all? You do that with "raise" and no arguments.
In your first code example, you catch an exception, and then raise a completely different exception, which happens to be of the same type:
except FileNotFoundError:
    raise FileNotFoundError('Unable to load assignment file, maybe choose custom')
This can be fair game if you want to hide info, for example if the exception has internal details you do not want to expose to the outside. But you don't know what info you lost by replacing the exception entirely with a new instance. Neither of those is "right" or "wrong"; they're just choices you make about how to handle exceptions. (Edit: it is good practice to handle them eventually or document what the function can throw, but we're off topic.)
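If you want the cleaner message without throwing the original details away, one option (a sketch using the names from the read_excel snippet above, not necessarily what you want) is explicit exception chaining:
try:
    df = pd.read_excel(excel_path, encoding="utf-8")
except FileNotFoundError as e:
    # the new message is what callers see, but the original error is kept as __cause__
    raise FileNotFoundError('Unable to load assignment file, maybe choose custom') from e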
None of that matters to the logger. You could have logged it right there and re-raised. You could log the original exception, then raise the sanitized version. You could handle it there, log it, and not raise (as the example you posted). The logger doesn't care what you do with the exception. If you want it logged, log it immediately.

How to handle a specific uncaught exception?

I would like to handle one specific exception in my script in a single place, without resorting to a try/except every time*. I was hoping that the code below would do this:
import sys

def handle(exc_type, exc_value, exc_traceback):
    if issubclass(exc_type, ValueError):
        print("ValueError handled here and the script continues")
        return
    # follow default behaviour for the exception
    sys.__excepthook__(exc_type, exc_value, exc_traceback)

sys.excepthook = handle

print("hello")
raise ValueError("wazaa")
print("world")
a = 1/0
The idea was that ValueError would be handled "manually" and the script would continue running (return to the script). For any other error (ZeroDivisionError in the case above), the normal traceback and script crash would ensue.
What happens is
$ python scratch_13.py
hello
ValueError handled here and the script continues
Process finished with exit code 1
The documentation mentions that (emphasis mine)
When an exception is raised and uncaught, the interpreter calls
sys.excepthook with three arguments, the exception class, exception
instance, and a traceback object. In an interactive session this
happens just before control is returned to the prompt; in a Python
program this happens just before the program exits.
which would mean that when I am in handle() it is already too late: the script has decided to die anyway, and my only possibility is to influence what the traceback will look like.
Is there a way to ignore a specific exception globally in a script?
* this is for a debugging context where the exception would normally be raised and crash the script (in production), but in some specific cases (a dev platform for instance) this specific exception needs to just be discarded. Otherwise I would have put a try/except clause everywhere the issue could arise.
One way to do it is to use contextlib.suppress and have a global tuple of suppressed Exceptions:
from contextlib import suppress

suppressed = (ValueError,)
And then anywhere the error might occur you just wrap the code in with suppress(*suppressed):
print("hello")
with suppress(*suppressed): # gets ignored
raise ValueError("wazaa")
print("world")
a = 1/0 # raise ZeroDivisionError
And then in production you just change suppressed to ():
suppressed = ()
print("hello")
with suppress(*suppressed):
    raise ValueError("wazaa")  # raises the error
print("world")
a = 1/0 # doesn't get executed
I think this is the best you can do. You can't ignore the exception completely globally, but you can make it so you only have to change one place.

Python: Make exceptions 'exiting'

In Python, is there any (proper) way to change the default exception handling behaviour so that any uncaught exception will terminate/exit the program?
I don't want to wrap the entire program in a generic try-except block:
try:
    # write code here
    ...
except Exception:
    sys.exit(1)
For those asking for more specificity and/or claiming this is already the case, it's my understanding that not all Python exceptions are system-exiting: docs
Edit: It looks like I have forked processes complicating matters so won't be posting any specific details about my own mess.
If you're looking for an answer to the original question, Dmitry's comment is interesting and useful, references the 2nd answer to this question
You can use a specific exception instead of Exception, because Exception is the base class for all exceptions. For more details refer to the exceptions tutorial.
You can write your script like this:
try:
    # write code here
    ...
except OverflowError:
    raise SystemExit
except ArithmeticError:
    sys.exit()
except IOError:
    quit()
Try these different approaches to find what exactly you are missing.
Edit 1 - Maintain Program Execution
In order to maintain your program execution, try this: suppose an error_handler function raises SystemExit; then in your main method you need to add the code below so your program can keep running.
try:
    error_handler()
except SystemExit:
    print("sys.exit was called but I'm proceeding anyway (so there!).")

Robust endless loop for server written in Python

I am writing a server which handles events, and uncaught exceptions raised while handling an event must not terminate the server.
The server is a single non-threaded Python process.
I want to terminate on these error types:
KeyboardInterrupt
MemoryError
...
The list of built in exceptions is long: https://docs.python.org/2/library/exceptions.html
I don't want to re-invent this exception handling, since I guess it was done several times before.
How to proceed?
Have a white-list: A list of exceptions which are ok and processing the next event is the right choice
Have a black-list: A list of exceptions which indicate that terminating the server is the right choice.
Hint: This question is not about running a unix daemon in background. It is not about double fork and not about redirecting stdin/stdout :-)
I would do this in a similar way to what you're thinking of: use the 'you shall not pass' Gandalf exception handler except Exception to catch all non-system-exiting exceptions, while creating a black-listed set of exceptions that should pass and be re-raised.
Using the Gandalf handler will make sure GeneratorExit, SystemExit and KeyboardInterrupt (all system-exiting exceptions) pass through and terminate the program if no other handlers are present higher in the call stack. Here is where you can check with type(e) whether the class of a caught exception e actually belongs to the set of black-listed exceptions, and re-raise it.
As a small demonstration:
import exceptions  # Py2.x only

# dictionary holding {exception_name: exception_class}
excptDict = vars(exceptions)

exceptionNames = ['MemoryError', 'OSError', 'SystemError']  # and others

# set containing black-listed exceptions
blackSet = {excptDict[exception] for exception in exceptionNames}
Now blackSet = {OSError, SystemError, MemoryError} holds the classes of the non-system-exiting exceptions we do not want to handle.
A try-except block can now look like this:
try:
    pass  # calls that raise exceptions
except Exception as e:
    if type(e) in blackSet: raise e  # re-raise
    # else just handle it
An example which catches all exceptions using BaseException can help illustrate what I mean (this is done for demonstration purposes only, in order to see how this raising will eventually terminate your program). Do note: I'm not suggesting you use BaseException; I'm using it in order to demonstrate which exceptions will actually 'pass through' and cause termination (i.e. everything that BaseException catches):
for i, j in excptDict.iteritems():
    if i.startswith('__'): continue  # __doc__ and other dunders
    try:
        try:
            raise j
        except Exception as ex:
            # print "Handler 'Exception' caught " + str(i)
            if type(ex) in blackSet:
                raise ex
    except BaseException:
        print "Handler 'BaseException' caught " + str(i)
# prints exceptions that would cause the system to exit
Handler 'BaseException' caught GeneratorExit
Handler 'BaseException' caught OSError
Handler 'BaseException' caught SystemExit
Handler 'BaseException' caught SystemError
Handler 'BaseException' caught KeyboardInterrupt
Handler 'BaseException' caught MemoryError
Handler 'BaseException' caught BaseException
Finally, in order to make this Python 2/3 agnostic, you can try to import exceptions and, if that fails (which it does in Python 3), fall back to importing builtins, which contains all the exceptions; we search the dictionary by name, so it makes no difference:
try:
    import exceptions
    excDict = vars(exceptions)
except ImportError:
    import builtins
    excDict = vars(builtins)
I don't know if there's a smarter way to actually do this; another solution might be, instead of having a try-except with a single except, having two handlers, one for the black-listed exceptions and the other for the general case:
try:
    pass  # calls that raise exceptions
except tuple(blackSet) as be:  # must go first, of course
    raise be
except Exception as e:
    pass  # handle the rest
The top-most exception is BaseException. There are two groups under that:
Exception derived
everything else
Things like StopIteration, ValueError, TypeError, etc. are all examples of Exception.
Things like GeneratorExit, SystemExit and KeyboardInterrupt are not descended from Exception.
So the first step is to catch Exception and not BaseException which will allow you to easily terminate the program. I recommend also catching GeneratorExit as 1) it should never actually be seen unless it is raised manually; 2) you can log it and restart the loop; and 3) it is intended to signal a generator has exited and can be cleaned up, not that the program should exit.
The next step is to log each exception with enough detail that you have the possibility of figuring out what went wrong (when you later get around to debugging).
Finally, you have to decide for yourself which, if any, of the Exception derived exceptions you want to terminate on: I would suggest RuntimeError and MemoryError, although you may be able to get around those by simply stopping and restarting your server loop.
So, really, it's up to you.
If there is some other error (such as IOError when trying to load a config file) that is serious enough to quit on, then the code responsible for loading the config file should be smart enough to catch that IOError and raise SystemExit instead.
As far as whitelist/blacklist goes: use a black list, as there should only be a handful, if any, of Exception-derived exceptions that you need to actually terminate the server on.
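Putting that advice together, a minimal sketch of such an event loop might look like this (get_event, handle_event and the particular black-list are illustrative assumptions, not part of the question):
import logging

# Exception-derived errors we still treat as fatal for the server
BLACKLIST = (MemoryError, SystemError, RuntimeError)

def serve_forever(get_event, handle_event):
    while True:
        event = get_event()
        try:
            handle_event(event)
        except BLACKLIST:
            raise  # let these terminate the server
        except (Exception, GeneratorExit):
            # everything else: log the traceback and keep serving;
            # KeyboardInterrupt and SystemExit are not caught at all
            logging.exception("error while handling event %r", event)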

How to prevent "too broad exception" in this case?

I have a list of functions that may fail and, if one fails, I don't want the script to stop, but to continue with next function.
I am executing it with something like this :
import traceback

list_of_functions = [f_a, f_b, f_c]

for current_function in list_of_functions:
    try:
        current_function()
    except Exception:
        print(traceback.format_exc())
It's working fine, but it is not PEP8 compliant:
When catching exceptions, mention specific exceptions whenever
possible instead of using a bare except: clause.
For example, use:
try:
    import platform_specific_module
except ImportError:
    platform_specific_module = None
A bare except: clause will catch SystemExit and KeyboardInterrupt
exceptions, making it harder to interrupt a program with Control-C,
and can disguise other problems. If you want to catch all exceptions
that signal program errors, use except Exception: (bare except is
equivalent to except BaseException: ).
A good rule of thumb is to limit use of bare 'except' clauses to two
cases:
If the exception handler will be printing out or logging the traceback; at least the user will be aware that an error has occurred.
If the code needs to do some cleanup work, but then lets the exception propagate upwards with raise . try...finally can be a better
way to handle this case.
How can I do this the good way?
The PEP8 guide you quote suggests that it is okay to use a bare exception in your case provided you are logging the errors. I would think that you should cover as many exceptions as you can/know how to deal with and then log the rest and pass, e.g.
import logging

list_of_functions = [f_a, f_b, f_c]

for current_function in list_of_functions:
    try:
        current_function()
    except KnownException:
        raise
    except Exception as e:
        logging.exception(e)
Use this to cheat PEP8:
try:
    """code"""
except (Exception,):
    pass
I think in some rare cases catching a general exception is justified, and there is a way to trick the PEP8 inspection:
list_of_functions = [f_a, f_b, f_c]

for current_function in list_of_functions:
    try:
        current_function()
    except (ValueError, Exception):
        print(traceback.format_exc())
You can replace ValueError with any other exception. It works for me (at least in PyCharm).
You can just put a comment like except Exception as error:  # pylint: disable=broad-except; that has worked for me, actually. I hope it works for you too.
From issue PY-9715 on youtrack.jetbrains.com:
From pep-0348:
BaseException
The superclass that all exceptions must inherit from. It's name was
chosen to reflect that it is at the base of the exception hierarchy
while being an exception itself. "Raisable" was considered as a name,
it was passed on because its name did not properly reflect the fact
that it is an exception itself.
Direct inheritance of BaseException is not expected, and will be
discouraged for the general case. Most user-defined exceptions should
inherit from Exception instead. This allows catching Exception to
continue to work in the common case of catching all exceptions that
should be caught. Direct inheritance of BaseException should only be
done in cases where an entirely new category of exception is desired.
But, for cases where all exceptions should be caught blindly, except
BaseException will work.
You can avoid the error if you then re-raise the exception. This way you are able to do damage control and not risk losing track of its occurrence.
Do you perhaps mean that each function can raise different exceptions? When you name the exception type in the except clause it can be any name that refers to an exception, not just the class name.
eg.
def raise_value_error():
    raise ValueError

def raise_type_error():
    raise TypeError

def raise_index_error():
    doesnt_exist

func_and_exceptions = [(raise_value_error, ValueError), (raise_type_error, TypeError),
                       (raise_index_error, IndexError)]

for function, possible_exception in func_and_exceptions:
    try:
        function()
    except possible_exception as e:
        print("caught", repr(e), "when calling", function.__name__)
prints:
caught ValueError() when calling raise_value_error
caught TypeError() when calling raise_type_error
Traceback (most recent call last):
  File "run.py", line 14, in <module>
    function()
  File "run.py", line 8, in raise_index_error
    doesnt_exist
NameError: name 'doesnt_exist' is not defined
Of course that leaves you with not knowing what to do when each exception occurs. But since you just want to ignore it and carry on then that's not a problem.
First, generate the pylintrc using the below command
pylint --generate-rcfile > .pylintrc
For reference:
https://learn.microsoft.com/en-us/visualstudio/python/linting-python-code?view=vs-2022
Search for disable (uncomment if needed) in the generated pylintrc file and add the exception below.
broad-except
Rerun the pylint command and see the magic
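For illustration, the relevant part of the generated .pylintrc would end up looking roughly like this (a sketch; the real file contains many more options):
[MESSAGES CONTROL]
disable=broad-except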
