I have a method with the following signature
def read_a_file(file_name, line_number=False):
    if line_number:
        raise NotImplementedError
    # CODE TO READ THE FILE
The line_number argument has not been implemented yet, though I plan to do it soon. I would like to make this clear to end users when they try to call read_a_file() with some value for line_number greater than 0.
Would it be correct to raise a NotImplementedError, or is there some better way to notify the callers?
It's quite strange behaviour to have an argument on a function that you do not want people to use yet; why not just add it when it is implemented?
Nobody will miss something that isn't there, and leaving it in is likely to add confusion: the parameter will be suggested by autocomplete tools but only be identifiable as unsupported once the code is run.
If you still do want to do this, I would provide a bit more informative message for the exception, e.g.
def read_a_file(file_name, line_number=False):
    if line_number:
        raise NotImplementedError("line_number parameter is not yet supported.")
Related
This is a hard question to phrase, but here's a stripped-down version of the situation. I'm using some library code that accepts a callback. It has its own error-handling, and raises an error if anything goes wrong while executing the callback.
class LibraryException(Exception):
    pass

def library_function(callback, string):
    try:
        # (does other stuff here)
        callback(string)
    except:
        raise LibraryException('The library code hit a problem.')
I'm using this code inside an input loop. I know of potential errors that could arise in my callback function, depending on values in the input. If that happens, I'd like to reprompt, after getting helpful feedback from its error message. I imagine it looking something like this:
class MyException(Exception):
    pass

def my_callback(string):
    raise MyException("Here's some specific info about my code hitting a problem.")

while True:
    something = input('Enter something: ')
    try:
        library_function(my_callback, something)
    except MyException as e:
        print(e)
        continue
Of course, this doesn't work, because MyException will be caught within library_function, which will raise its own (much less informative) Exception and halt the program.
The obvious thing to do would be to validate my input before calling library_function, but that's a circular problem, because parsing is what I'm using the library code for in the first place. (For the curious, it's Lark, but I don't think my question is specific enough to Lark to warrant cluttering it with all the specific details.)
One alternative would be to alter my code to catch any error (or at least the type of error the library generates), and directly print the inner error message:
def my_callback(string):
    error_str = "Here's some specific info about my code hitting a problem."
    print(error_str)
    raise MyException(error_str)

while True:
    something = input('Enter something: ')
    try:
        library_function(my_callback, something)
    except LibraryException:
        continue
But I see two issues with this. One is that I'm throwing a wide net, potentially catching and ignoring errors other than in the scope I'm aiming at. Beyond that, it just seems... inelegant, and unidiomatic, to print the error message, then throw the exception itself into the void. Plus the command line event loop is only for testing; eventually I plan to embed this in a GUI application, and without printed output, I'll still want to access and display the info about what went wrong.
What's the cleanest and most Pythonic way to achieve something like this?
There seem to be many ways to achieve what you want, though I can't say which one is most robust. I'll try to explain all the methods that seemed apparent to me; perhaps you'll find one of them useful.
I'll be using the example code you provided to demonstrate these methods. Here's a refresher on how it looks:
class MyException(Exception):
    pass

def my_callback(string):
    raise MyException("Here's some specific info about my code hitting a problem.")

def library_function(callback, string):
    try:
        # (does other stuff here)
        callback(string)
    except:
        raise Exception('The library code hit a problem.')
The simplest approach - traceback.format_exc
import traceback

try:
    library_function(my_callback, 'boo!')
except:
    # NOTE: Remember to keep the `chain` parameter of `format_exc` set to `True` (default)
    tb_info = traceback.format_exc()
This does not require much know-how about exceptions and stack traces themselves, nor does it require you to pass any special frame/traceback/exception to the library function. But look at what this returns (as in, the value of tb_info)-
'''
Traceback (most recent call last):
  File "path/to/test.py", line 14, in library_function
    callback(string)
  File "path/to/test.py", line 9, in my_callback
    raise MyException("Here's some specific info about my code hitting a problem.")
MyException: Here's some specific info about my code hitting a problem.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "path/to/test.py", line 19, in <module>
    library_function(my_callback, 'boo!')
  File "path/to/test.py", line 16, in library_function
    raise Exception('The library code hit a problem.')
Exception: The library code hit a problem.
'''
That's a string: the same thing you'd see if you just let the exception propagate uncaught. Notice the exception chaining here; the exception at the top is the one that happened before the exception at the bottom. You could parse out all the exception names and messages:
import re

exception_list = re.findall(r'^(\w+): (.+)$', tb_info, flags=re.M)
With that, exception_list will be [('MyException', "Here's some specific info about my code hitting a problem."), ('Exception', 'The library code hit a problem.')]
Although this is the easiest way out, it's not very context aware: all you get are class names and messages in string form. Regardless, if that is what suits your needs, I don't particularly see a problem with it.
The "robust" approach - recursing through __context__/__cause__
Python itself keeps track of the exception history: the exception currently at hand, the exception that caused it, and so on. You can read about the intricate details of this concept in PEP 3134.
Whether or not you go through the entirety of the PEP, I urge you to at least familiarize yourself with implicitly chained exceptions and explicitly chained exceptions. Perhaps this SO thread will be useful for that.
As a small refresher: raise ... from is for explicitly chaining exceptions. The method you show in your example is implicit chaining.
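As a quick illustration of the difference (my own minimal snippet, not part of the question's code): raising inside an except block chains implicitly via __context__, while raise ... from chains explicitly via __cause__. The same pair of attributes also exists on the TracebackException objects used below.
# Implicit chaining: raising inside an except block sets __context__
try:
    try:
        int("not a number")                    # raises ValueError
    except ValueError:
        raise RuntimeError("wrapper")          # implicit: __context__ is the ValueError
except RuntimeError as exc:
    print(type(exc.__context__))               # <class 'ValueError'>
    print(exc.__cause__)                       # None, nothing was chained explicitly

# Explicit chaining: raise ... from sets __cause__ (and still records __context__)
try:
    try:
        int("not a number")
    except ValueError as original:
        raise RuntimeError("wrapper") from original
except RuntimeError as exc:
    print(type(exc.__cause__))                 # <class 'ValueError'>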
Now, you need to make a mental note - TracebackException#__cause__ is for explicitly chained exceptions and TracebackException#__context__ is for implicitly chained exceptions. Since your example uses implicit chaining, you can simply follow __context__ backwards and you'll reach MyException. In fact, since this is only one level of nesting, you'll reach it instantly!
import sys
import traceback

try:
    library_function(my_callback, 'boo!')
except:
    previous_exc = traceback.TracebackException(*sys.exc_info()).__context__
This first constructs the TracebackException from sys.exc_info. sys.exc_info returns a tuple of (exc_type, exc_value, exc_traceback) for the exception at hand (if any). Notice that those 3 values, in that specific order, are exactly what you need to construct TracebackException - so you can simply destructure it using * and pass it to the class constructor.
This returns a TracebackException object about the current exception. The exception that it is implicitly chained from is in __context__, the exception that it is explicitly chained from is in __cause__.
Note that both __cause__ and __context__ will return either a TracebackException object, or None (if you're at the end of the chain). This means, you can call __cause__/__context__ again on the return value and basically keep going till you reach the end of the chain.
Printing a TracebackException object just prints the message of the exception. If you want the class itself (the actual class, not a string), use .exc_type:
print(previous_exc)
# prints "Here's some specific info about my code hitting a problem."
print(previous_exc.exc_type)
# prints <class '__main__.MyException'>
Here's an example of recursing through .__context__ and printing the types of all exceptions in the implicit chain. (You can do the same for .__cause__)
def classes_from_excs(exc: traceback.TracebackException):
    print(exc.exc_type)
    if not exc.__context__:
        # chain exhausted
        return
    classes_from_excs(exc.__context__)
Let's use it!
try:
    library_function(my_callback, 'boo!')
except:
    classes_from_excs(traceback.TracebackException(*sys.exc_info()))
That will print:
<class 'Exception'>
<class '__main__.MyException'>
Once again, the point of this is to be context aware. Ideally, printing isn't what you'll want to do in a practical environment; you have the class objects themselves on your hands, with all the info!
NOTE: For implicitly chained exceptions, if the context has been explicitly suppressed, it'll be a bad day trying to recover the chain. Regardless, you might give __suppress_context__ a shot.
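To illustrate that caveat with a small sketch of my own (not from the original answer): raise ... from None sets __suppress_context__, which hides the implicit chain from the formatted traceback even though __context__ is still populated on the exception object.
import traceback

try:
    try:
        int("oops")                              # raises ValueError
    except ValueError:
        raise RuntimeError("wrapper") from None  # explicitly suppress the implicit chain
except RuntimeError as exc:
    print(exc.__suppress_context__)              # True
    print(type(exc.__context__))                 # <class 'ValueError'>, still set on the object
    print(traceback.format_exc())                # shows only the RuntimeError, not the ValueError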
The painful way - walking through traceback.walk_tb
This is probably the closest you can get to the low level stuff of exception handling. If you want to capture entire frames of information instead of just the exception classes and messages and such, you might find walk_tb useful....and a bit painful.
import sys
import traceback

try:
    library_function(my_callback, 'foo')
except:
    tb_gen = traceback.walk_tb(sys.exc_info()[2])
There is....entirely too much to discuss here. .walk_tb takes a traceback object; you may remember from the previous method that index 2 of the tuple returned by sys.exc_info is just that. It then returns a generator of tuples of frame object and int (Iterator[Tuple[FrameType, int]]).
These frame objects carry all kinds of intricate information, though whether you'll actually find exactly what you're looking for is another story. They may be complex, but they aren't exhaustive unless you do a lot of frame inspection. Regardless, this is what the frame objects represent.
What you do with the frames is up to you. They can be passed to many functions: you can pass the entire generator to StackSummary.extract to get FrameSummary objects, or iterate through each tuple and have a look at [0].f_locals (the [0] on Tuple[FrameType, int] is the actual frame object), and so on.
for tb in tb_gen:
    print(tb[0].f_locals)
That will give you a dict of the locals for each frame. Within the first tb from tb_gen, you'll see MyException as part of the locals....among a load of other stuff.
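If frame locals are more than you need, a lighter option (my own sketch, reusing the same setup) is to hand the walk_tb generator to StackSummary.extract and work with FrameSummary objects instead:
import sys
import traceback

try:
    library_function(my_callback, 'foo')
except:
    summary = traceback.StackSummary.extract(traceback.walk_tb(sys.exc_info()[2]))
    for frame_summary in summary:
        # Each FrameSummary records the file, line number, function name and source line
        print(frame_summary.filename, frame_summary.lineno, frame_summary.name, frame_summary.line)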
I have a creeping feeling I have overlooked some methods, most probably with inspect. But I hope the above methods will be good enough so that no one has to go through the jumble that is inspect :P
Chase's answer above is phenomenal. For completeness's sake, here's how I implemented their second approach in this situation. First, I made a function that can search the stack for the specified error type. Even though the chaining in my example is implicit, this should be able to follow implicit and/or explicit chaining:
import sys
import traceback
def find_exception_in_trace(exc_type):
    """Return latest exception of exc_type, or None if not present"""
    tb = traceback.TracebackException(*sys.exc_info())
    prev_exc = tb.__context__ or tb.__cause__
    while prev_exc:
        if prev_exc.exc_type == exc_type:
            return prev_exc
        prev_exc = prev_exc.__context__ or prev_exc.__cause__
    return None
With that, it's as simple as:
while True:
    something = input('Enter something: ')
    try:
        library_function(my_callback, something)
    except LibraryException as exc:
        if (my_exc := find_exception_in_trace(MyException)):
            print(my_exc)
            continue
        raise exc
That way I can access my inner exception (and print it for now, although eventually I may do other things with it) and continue. But if my exception wasn't in there, I simply reraise whatever the library raised. Perfect!
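(As an aside, and not something Chase covered: the caught exception instance carries the same __cause__/__context__ attributes, so a variant of the helper can walk the exception object directly and hand back the original MyException instance rather than a TracebackException. A minimal sketch of that idea:)
def find_exception_in_chain(exc, exc_type):
    """Walk exc.__cause__/__context__ and return the first exception of exc_type, or None."""
    exc = exc.__cause__ or exc.__context__
    while exc is not None:
        if isinstance(exc, exc_type):
            return exc
        exc = exc.__cause__ or exc.__context__
    return None
It would be used the same way inside the except LibraryException as exc: block, e.g. find_exception_in_chain(exc, MyException).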
Suppose I have a simple function. For example:
def if_a_float(string):
    try:
        float(string)
    except ValueError:
        return False
    else:
        return True
Should I include a Raises: ValueError entry in my docstring, or should I leave it out since the error is already handled in the code? Is it done for every error (caught or uncaught)? I understand that it probably depends on the style, so let's say I am using the Google docstring style (though I guess it doesn't matter that much).
You should document the exceptions that are raised explicitly, as well as those that are relevant to the interface, as per the Google Style Guidelines (the same document you mention yourself).
This code does not raise an exception explicitly (there is no raise), and you do not need to mention that you are catching one.
Actually, this code cannot even accidentally leak a ValueError to the caller (the only line that could raise one is inside the try block), and therefore it would be misleading to document that if_a_float() raises a ValueError.
You should only document the exceptions that callers need to be aware of and may want to catch. If the function catches an exception itself and doesn't raise it to the caller, it's an internal implementation detail that callers don't need to be aware of, so it doesn't need to be documented.
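For contrast, here's a hedged sketch (a hypothetical function, not from the question) of what a documented exception looks like in the Google docstring style when the function does let the error propagate:
def to_float(string):
    """Convert a string to a float.

    Args:
        string: The text to convert.

    Returns:
        The parsed float value.

    Raises:
        ValueError: If `string` cannot be parsed as a float.
    """
    return float(string)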
According to a given protocol (which I cannot change, only implement), some function initialize_foo() is supposed to be called only once:
def initialize_foo():
    """
    ...
    Note:
        You must call this function exactly once.
    """
I would like to recognize a protocol abuse where it is called twice, and raise an exception:
_foo_initialized = False

def initialize_foo():
    """
    ...
    Note:
        You must call this function exactly once.
    """
    global _foo_initialized
    if _foo_initialized:
        raise <what>?
    ...
    _foo_initialized = True
The problem is which exception class to raise. Looking at the standard exceptions, I can't find anything to subclass except Exception, which seems too general.
What is the general practice in this case?
I'd use RuntimeError.
It is often used for that sort of stuff, even in the standard library. You can find an example very similar to your use case in the warnings module:
if self._entered:
    raise RuntimeError("Cannot enter %r twice" % self)
Another example is in threading:
if self._started.is_set():
    raise RuntimeError("threads can only be started once")
You can also consider raising an ad-hoc exception (possibly a subclass of RuntimeError) if that error is supposed to be caught and if you feel that RuntimeError may be ambiguous.
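For instance, a minimal sketch of that ad-hoc subclass applied to the question's initialize_foo (the exception name here is my own invention):
class AlreadyInitializedError(RuntimeError):
    """Raised when initialize_foo() is called more than once."""

_foo_initialized = False

def initialize_foo():
    global _foo_initialized
    if _foo_initialized:
        raise AlreadyInitializedError("initialize_foo() must be called exactly once")
    # ... actual initialization goes here ...
    _foo_initialized = True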
I would recommend subclassing a warning instead of raising an exception, since I have a feeling that a lot of the time you'd rather continue running after this happens.
So this is a little bit of a strange question, but it could be fun!
I need to somehow reliably cause an exception in python. I would prefer it to be human triggered, but I am also willing to embed something in my code that will always cause an exception. (I have set up some exception handling and would like to test it)
I've been looking around, and one idea that keeps coming up is that division by zero or something along those lines will always cause an exception. Is there a better way? The most ideal would be to simulate a loss of internet connection while the program is running... any ideas would be great!
Have fun!
Yes, there is: You can explicitly raise your own exceptions.
raise Exception("A custom message as to why you raised this.")
You would want to raise an appropriate exception/error for loss of network connectivity.
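For example, one hedged way to simulate the loss-of-connection case in a test is to patch whatever call actually hits the network (the target and entry-point names below are placeholders, not real APIs) so that it raises ConnectionError:
from unittest import mock

# "myapp.fetch_data" is a placeholder for the function in your code that touches the network
with mock.patch("myapp.fetch_data", side_effect=ConnectionError("simulated loss of connectivity")):
    run_program_under_test()  # placeholder entry point; your exception handling should now trigger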
You can define your own Exceptions in Python, so you can create custom errors to suit your needs. You can test that certain conditions exist, and use the truthiness of that test to decide whether or not to raise your shiny, custom Exception:
class MyFancyException(Exception): pass

def do_something():
    if sometestFunction() is True:
        raise MyFancyException
    carry_on_theres_nothing_to_see()

try:
    do_something()
except MyFancyException:
    # This is entirely up to you!
    # What needs to happen if the exception is caught?
    pass
The documentation has some useful examples.
Yup, you can just plop
1 / 0
anywhere in your code for a runtime error to occur, specifically in this case a ZeroDivisionError: division by zero.
This is the simplest way to get an exception by embedding something in your code (as you mentioned in your post). You can of course raise your own exceptions too, depending on your specific needs.
I was wondering about the best practices for indicating invalid argument combinations in Python. I've come across a few situations where you have a function like so:
def import_to_orm(name, save=False, recurse=False):
    """
    :param name: Name of some external entity to import.
    :param save: Save the ORM object before returning.
    :param recurse: Attempt to import associated objects as well. Because you
        need the original object to have a key to relate to, save must be
        `True` for recurse to be `True`.
    :raise BadValueError: If `recurse and not save`.
    :return: The ORM object.
    """
    pass
The only annoyance with this is that every package has its own, usually slightly differing BadValueError. I know that in Java there exists java.lang.IllegalArgumentException -- is it well understood that everybody will be creating their own BadValueErrors in Python or is there another, preferred method?
I would just raise ValueError, unless you need a more specific exception.
def import_to_orm(name, save=False, recurse=False):
    if recurse and not save:
        raise ValueError("save must be True if recurse is True")
There's really no point in doing class BadValueError(ValueError): pass; your custom class is identical in use to ValueError, so why not use that?
I would inherit from ValueError
class IllegalArgumentError(ValueError):
    pass
It is sometimes better to create your own exceptions, but inherit from a built-in one, which is as close to what you want as possible.
If you need to catch that specific error, it is helpful to have a name.
I think the best way to handle this is the way python itself handles it. Python raises a TypeError. For example:
$ python -c 'print(sum())'
Traceback (most recent call last):
File "<string>", line 1, in <module>
TypeError: sum expected at least 1 arguments, got 0
Our junior dev just found this page in a google search for "python exception wrong arguments" and I'm surprised that the obvious (to me) answer wasn't ever suggested in the decade since this question was asked.
It depends on what the problem with the arguments is.
If the argument has the wrong type, raise a TypeError. For example, when you get a string instead of one of those Booleans.
if not isinstance(save, bool):
    raise TypeError(f"Argument save must be of type bool, not {type(save)}")
Note, however, that in Python we rarely make checks like this. If the argument really is invalid, some deeper function will probably do the complaining for us. And if we only checked the boolean value, perhaps some user of the code would later just feed it a string, knowing that non-empty strings are always truthy; it might save them a cast.
If the arguments have invalid values, raise ValueError. This seems more appropriate in your case:
if recurse and not save:
    raise ValueError("If recurse is True, save should be True too")
Or in this specific case, have a True value of recurse imply a True value of save. Since I would consider this a recovery from an error, you might also want to complain in the log.
if recurse and not save:
    logging.warning("Bad arguments in import_to_orm() - if recurse is True, so should save be")
    save = True
I've mostly just seen the builtin ValueError used in this situation.
You would most likely use ValueError (raise ValueError() in full) in this case, but it depends on the kind of bad value. For example, if you made a function that only allows strings and the user passed in an integer instead, you would use TypeError. If a user supplied input of the right type that doesn't satisfy certain conditions, ValueError would be your best choice. ValueError can also be used to keep the program from hitting other exceptions; for example, you could use a ValueError to stop the shell from raising a ZeroDivisionError, as in this function:
def function(number):
    if not type(number) == int and not type(number) == float:
        raise TypeError("number must be an integer or float")
    if number == 5:
        raise ValueError("number must not be 5")
    else:
        return 10/(5-number)
P.S. For a list of Python built-in exceptions, see the official documentation:
https://docs.python.org/3/library/exceptions.html
Agree with Markus' suggestion to roll your own exception, but the text of the exception should clarify that the problem is in the argument list, not the individual argument values. I'd propose:
class BadCallError(ValueError):
    pass
Use it when keyword arguments that are required for this specific call are missing, or when argument values are individually valid but inconsistent with each other. ValueError would still be right when a specific argument is of the right type but out of range.
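A short usage sketch of that idea, applied to the import_to_orm example from the question:
def import_to_orm(name, save=False, recurse=False):
    if recurse and not save:
        # Each value is fine on its own; the combination is what makes the call bad
        raise BadCallError("recurse=True requires save=True")
    ...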
Shouldn't this be a standard exception in Python?
In general, I'd like Python style to be a bit sharper in distinguishing bad inputs to a function (caller's fault) from bad results within the function (my fault). So there might also be a BadArgumentError to distinguish value errors in arguments from value errors in locals.
I'm not sure I agree with inheriting from ValueError; my interpretation of the documentation is that ValueError is only supposed to be raised by builtins. Inheriting from it or raising it yourself seems incorrect.
Raised when a built-in operation or function receives an argument that has the right type but an inappropriate value, and the situation is not described by a more precise exception such as IndexError.
-- ValueError documentation