Which exception should I raise on bad/illegal argument combinations in Python? - python

I was wondering about the best practices for indicating invalid argument combinations in Python. I've come across a few situations where you have a function like so:
def import_to_orm(name, save=False, recurse=False):
    """
    :param name: Name of some external entity to import.
    :param save: Save the ORM object before returning.
    :param recurse: Attempt to import associated objects as well. Because you
        need the original object to have a key to relate to, save must be
        `True` for recurse to be `True`.
    :raise BadValueError: If `recurse and not save`.
    :return: The ORM object.
    """
    pass
The only annoyance with this is that every package has its own, usually slightly differing BadValueError. I know that in Java there exists java.lang.IllegalArgumentException -- is it well understood that everybody will be creating their own BadValueErrors in Python or is there another, preferred method?

I would just raise ValueError, unless you need a more specific exception.
def import_to_orm(name, save=False, recurse=False):
    if recurse and not save:
        raise ValueError("save must be True if recurse is True")
There's really no point in doing `class BadValueError(ValueError): pass` - your custom class is identical in use to ValueError, so why not use that?
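Catching it is then just the standard try/except on the built-in - a minimal sketch, reusing the function above (the entity name is made up):

try:
    obj = import_to_orm("widget", save=False, recurse=True)
except ValueError as exc:
    print(f"Bad argument combination: {exc}")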

I would inherit from ValueError
class IllegalArgumentError(ValueError):
    pass
It is sometimes better to create your own exceptions, but inherit from a built-in one, which is as close to what you want as possible.
If you need to catch that specific error, it is helpful to have a name.
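For example - a minimal sketch of catching the named error specifically while other ValueErrors still propagate (reusing the question's function):

def import_to_orm(name, save=False, recurse=False):
    if recurse and not save:
        raise IllegalArgumentError("save must be True if recurse is True")

try:
    import_to_orm("widget", recurse=True)
except IllegalArgumentError:
    pass  # handle the bad call specifically; other ValueErrors pass through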

I think the best way to handle this is the way python itself handles it. Python raises a TypeError. For example:
$ python -c 'print(sum())'
Traceback (most recent call last):
File "<string>", line 1, in <module>
TypeError: sum expected at least 1 arguments, got 0
Our junior dev just found this page in a Google search for "python exception wrong arguments" and I'm surprised that the obvious (to me) answer wasn't ever suggested in the decade since this question was asked.
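Applied to the question's function, that convention would look like this - a sketch, one option among those suggested here:

def import_to_orm(name, save=False, recurse=False):
    if recurse and not save:
        raise TypeError("import_to_orm: recurse=True requires save=True")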

It depends on what the problem with the arguments is.
If the argument has the wrong type, raise a TypeError. For example, when you get a string instead of one of those Booleans.
if not isinstance(save, bool):
    raise TypeError(f"Argument save must be of type bool, not {type(save)}")
Note, however, that in Python we rarely make checks like this. If the argument really is invalid, some deeper function will probably do the complaining for us. And if we only check the boolean value, perhaps some user of the code will later just feed it a string, knowing that non-empty strings are always truthy. It might save him a cast.
If the arguments have invalid values, raise ValueError. This seems more appropriate in your case:
if recurse and not save:
    raise ValueError("If recurse is True, save should be True too")
Or in this specific case, have a True value of recurse imply a True value of save. Since I would consider this a recovery from an error, you might also want to complain in the log.
import logging

if recurse and not save:
    logging.warning("Bad arguments in import_to_orm() - if recurse is True, so should save be")
    save = True

I've mostly just seen the builtin ValueError used in this situation.

You would most likely use ValueError (raise ValueError() in full) in this case, but it depends on the kind of bad value. For example, if you made a function that only allows strings, and the user put in an integer instead, you would use TypeError instead. If a user provides a wrong input (meaning it has the right type but does not satisfy certain conditions), ValueError would be your best choice. ValueError can also be used to block the program from other exceptions; for example, you could use a ValueError to stop the shell from raising a ZeroDivisionError, as in this function:
def function(number):
    if not isinstance(number, (int, float)):
        raise TypeError("number must be an integer or float")
    if number == 5:
        raise ValueError("number must not be 5")
    else:
        return 10 / (5 - number)
P.S. For a list of Python built-in exceptions, see:
https://docs.python.org/3/library/exceptions.html (the official Python documentation)

Agree with Markus' suggestion to roll your own exception, but the text of the exception should clarify that the problem is in the argument list, not the individual argument values. I'd propose:
class BadCallError(ValueError):
    pass
Used when keyword arguments are missing that were required for the specific call, or argument values are individually valid but inconsistent with each other. ValueError would still be right when a specific argument is of the right type but out of range.
Shouldn't this be a standard exception in Python?
In general, I'd like Python style to be a bit sharper in distinguishing bad inputs to a function (caller's fault) from bad results within the function (my fault). So there might also be a BadArgumentError to distinguish value errors in arguments from value errors in locals.
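As a sketch of how the proposed exception would read at the definition site (using the question's function):

class BadCallError(ValueError):
    """The argument list of a call is inconsistent or incomplete."""

def import_to_orm(name, save=False, recurse=False):
    if recurse and not save:
        raise BadCallError("recurse=True requires save=True")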

I'm not sure I agree with inheritance from ValueError -- my interpretation of the documentation is that ValueError is only supposed to be raised by builtins... inheriting from it or raising it yourself seems incorrect.
Raised when a built-in operation or function receives an argument that has the right type but an inappropriate value, and the situation is not described by a more precise exception such as IndexError.
-- ValueError documentation

Related

Testing for None on a non-Optional input parameter

Let's say I have a python module with the following function:
def is_plontaria(plon: str) -> bool:
    if plon is None:
        raise RuntimeError("None found")
    return plon.find("plontaria") != -1
For that function, I have the unit test that follows:
def test_is_plontaria_null(self):
    with self.assertRaises(RuntimeError) as cmgr:
        is_plontaria(None)
    self.assertEqual(str(cmgr.exception), "None found")
Given the type hints in the function, the input parameter should always be a defined string. But type hints are... hints. Nothing prevents the user from passing whatever it wants, and None in particular is a quite common option when previous operations fail to return the expected results and those results are not checked.
So I decided to test for None in the unit tests and to check the input is not None in the function.
The issue is: the type checker (Pylance) warns me that I should not use None in that call:
Argument of type "None" cannot be assigned to parameter "plon" of type "str" in function "is_plontaria"
Type "None" cannot be assigned to type "str"
Well, I already know that, and that is the purpose of that test.
Which is the best way to get rid of that error? Telling pylance to ignore this kind of error in every test/file? Or assuming that the argument passed will be always of the proper type and remove that test and the None check in the function?
This is a good question. I think that silencing that type error in your test is not the right way to go.
Don't patronize the user
While I would not go so far as to say that this is universally the right way to do it, in this case I would definitely recommend getting rid of your None check from is_plontaria.
Think about what you accomplish with this check. Say a user calls is_plontaria(None) even though you annotated it with str. Without the check he causes an AttributeError: 'NoneType' object has no attribute 'find' with a traceback to the line return plon.find("plontaria") != -1. The user thinks to himself "oops, that function expects a str". With your check he causes a RuntimeError ideally telling him that plon is supposed to be a str.
What purpose did the check serve? I would argue none whatsoever. Either way, an error is raised because your function was misused.
What if the user passes a float accidentally? Or a bool? Or literally anything other than a str? Do you want to hold the user's hand for every parameter of every function you write?
And I don't buy the "None is a special case"-argument. Sure, it is a common type to be "lying around" in code, but that is still on the user, as you pointed out yourself.
If you are using properly type annotated code (as you should) and the user is too, such a situation should never happen. Say the user has another function foo that he wants to use like this:
def foo() -> str | None:
    ...

s = foo()
b = is_plontaria(s)
That last line should cause any static type checker worth its salt to raise an error, saying that is_plontaria only accepts str, but a union of str and None was provided. Even most IDEs mark that line as problematic.
The user should see that before he even runs his code. Then he is forced to rethink and either change foo or introduce his own type check before calling your function:
s = foo()
if isinstance(s, str):
    b = is_plontaria(s)
else:
    ...  # do something else
Qualifier
To be fair, there are situations where error messages are very obscure and don't properly tell the caller what went wrong. In those cases it may be useful to introduce your own. But aside from those, I would always argue in the spirit of Python that the user should be considered mature enough to do his own homework. And if he doesn't, that is on him, not you. (So long as you did your homework.)
There may be other situations, where raising your own type-errors makes sense, but I would consider those to be the exception.
If you must, use Mock
As a little bonus, in case you absolutely do want to keep that check in place and need to cover that if-branch in your test, you can simply pass a Mock as an argument, provided your if-statement is adjusted to check for anything other than str:
from unittest import TestCase
from unittest.mock import Mock

def is_plontaria(plon: str) -> bool:
    if not isinstance(plon, str):
        raise RuntimeError("None found")
    return plon.find("plontaria") != -1

class Test(TestCase):
    def test_is_plontaria(self) -> None:
        not_a_string = Mock()
        with self.assertRaises(RuntimeError):
            is_plontaria(not_a_string)
    ...
Most type checkers consider Mock to be a special case and don't complain about its type, assuming you are running tests. mypy for example is perfectly happy with such code.
This comes in handy in other situations as well. For example, when the function being tested expects an instance of some custom class of yours as its argument. You obviously want to isolate the function from that class, so you can just pass a mock to it that way. The type checker won't mind.
Hope this helps.
You can disable type checking on a specific line with a comment.
def test_is_plontaria_null(self):
    with self.assertRaises(RuntimeError) as cmgr:
        is_plontaria(None)  # type: ignore
    self.assertEqual(str(cmgr.exception), "None found")

Should a docstring contain a 'Raises' section if the error is handled in the code

Suppose I have a simple function. For example:
def if_a_float(string):
    try:
        float(string)
    except ValueError:
        return False
    else:
        return True
Should I include the Raises: ValueError statement in my docstring, or should I avoid it as the error is already handled in the code? Is it done for any error (caught/uncaught)? I do understand that it probably depends on the style, so let's say I am using the Google docstring style (though I guess it doesn't matter that much).
You should document the exception raised explicitly, as well as those that may be relevant to the interface, as per the Google Style Guidelines (the same document you mention yourself).
This code does not raise an exception explicitly (there is no raise), and you do not need to mention that you are catching one.
Actually, this code cannot even accidentally raise one (you are catching the only line that could) and therefore it would be misleading if you were to document that the if_a_float() was raising a ValueError.
You should only document the exceptions that callers need to be aware of and may want to catch. If the function catches an exception itself and doesn't raise it to the caller, it's an internal implementation detail that callers don't need to be aware of, so it doesn't need to be documented.
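For contrast, here is roughly what a Raises section looks like in the Google style for a function that does let the exception propagate - a sketch built around a hypothetical to_float helper:

def to_float(string):
    """Converts a string to a float.

    Args:
        string: The text to parse.

    Returns:
        The parsed float.

    Raises:
        ValueError: If `string` cannot be parsed as a float.
    """
    return float(string)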

Handling an unimplemented argument in Python

I have a method with the following signature
def read_a_file(file_name, line_number=False):
    if line_number:
        raise NotImplementedError
    # CODE TO READ THE FILE
The argument line_number has not been implemented yet though I plan to do it soon. I would like to make this clear to end users when they try to call read_a_file() with some value for line_number greater than 0.
Would it be correct to raise a NotImplementedError or is there some better way to notify the callers ?
It's quite strange behaviour to have an argument on a function that you do not want people to use — why not just add it when it is implemented?
Nobody will miss something which isn't there and it's likely to just add more confusion. The parameter will be suggested by autocomplete tools and only be identifiable as unsupported once code is run.
If you still do want to do this, I would provide a bit more informative message for the exception, e.g.
def read_a_file(file_name, line_number=False):
    if line_number:
        raise NotImplementedError("line_number parameter is not yet supported.")
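Callers would then see something like this (the file name is illustrative):

try:
    read_a_file("data.txt", line_number=True)
except NotImplementedError as exc:
    print(exc)  # line_number parameter is not yet supported.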

Should I always test the types of each function's parameters?

I have a function which has a parameter intTup. That function is being called by a user and hence the input might be erroneous, so I want to validate it.
In particular, intTup must:
be a tuple
be of length 3
contain only ints
So, as soon as the function is executed, those properties are checked:
def myFunc(intTup):
    if type(intTup) is not tuple:
        print("intTup = ", intTup)
        print("type(intTup) = ", type(intTup))
        raise Exception('In parameter intTup: should be a tuple of ints')
    elif len(intTup) != 3:
        print("intTup = ", intTup)
        print("len(intTup) = ", len(intTup))
        raise Exception('In parameter intTup: should contain three atoms')
    elif [int] * len(intTup) != list(map(type, intTup)):
        print("intTup = ", intTup)
        print("list(map(type, intTup)) = ", list(map(type, intTup)))
        raise Exception('In parameter intTup: all atoms should be ints')
    # Some actual code follows these tests
Of course this type of check is frequently encountered: anytime a function admits a parameter of type tuple containing n values of type t, a similar check must be run. Hence, should I define a function checkTuple(tupleToTest, shouldBeLen, shouldBeType) to automate these checks? Or should I leave it how it is? How do you proceed in your scripts?
No. In Python we use "duck typing": if something walks like a duck, quacks like a duck, and swims like a duck, it is treated as a duck.
In other words, with Python, we believe that it is what you do that defines what you are.
Which means you should rely on the properties of an object, rather than its type. In your case, just use the input the way a "3-tuple containing ints" is meant to be used, and handle exceptions.
See also: isinstance considered harmful.
Perhaps you should consider EAFP.
Easier to ask for forgiveness than permission. This common Python coding style assumes the existence of valid keys or attributes and catches exceptions if the assumption proves false. This clean and fast style is characterized by the presence of many try and except statements. The technique contrasts with the LBYL style common to many other languages such as C.
-- Python glossary
Try to process user input and if it fails for whatever reason, raise an exception. This is not how you would normally do it in most other languages, where the emphasis is on prevention of errors, but in Python it is common to use the EAFP principle.
In other words, do not validate.
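A minimal EAFP sketch of the asker's function (the body is illustrative) - tuple unpacking does the iterability and length checks for free:

def myFunc(intTup):
    # Raises TypeError if intTup is not iterable,
    # ValueError if it does not hold exactly three items.
    x, y, z = intTup
    # Arithmetic on the values will raise TypeError for most
    # non-numeric contents once the actual work happens.
    return x + y + z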
What's the point of using a language without static type checking if you are going to implement runtime type checking anyway? The answer to your question is: no, you should not always check the parameter constraints. If you want that kind of control, there are better tools for the job. There are cases where you may want to type check, such as when accepting input from users or outside processes, but you do not want to always do it.
If you really (really?) must break duck typing, you could assert the validity of the params as follows:
def myFunc(intTup):
    """intTup is a tuple of size 3"""
    assert isinstance(intTup, tuple), "not a tuple"
    assert len(intTup) == 3, "not length 3"
    assert all(isinstance(x, int) for x in intTup), "not all ints"
Easier to Ask Forgiveness than Permission :)
You could take a look at the warnings module.
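For example - a sketch that warns instead of raising, using the standard warnings.warn call:

import warnings

def myFunc(intTup):
    if not isinstance(intTup, tuple) or len(intTup) != 3:
        warnings.warn("intTup should be a tuple of three ints")
    # Some actual code follows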
You can make a decorator to check the arguments:
def validate(func):
    def f(*args, **kwargs):
        if not isinstance(args[0], tuple) or len(args[0]) != 3:
            raise Exception("invalid parameter")
        return func(*args, **kwargs)
    return f
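Usage would then look like this (the decorated function body is illustrative):

@validate
def myFunc(intTup):
    return sum(intTup)

myFunc((1, 2, 3))  # returns 6
myFunc([1, 2, 3])  # raises Exception("invalid parameter")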

Python: return a default value if function or expression fails

Does Python have a feature that allows one to evaluate a function or expression and, if the evaluation fails (an exception is raised), return a default value?
Pseudo-code:
evaluator(function/expression, default_value)
The evaluator will try to execute the function or expression and return the result if the execution is successful; otherwise the default_value is returned.
I know I can create a user-defined function using try and except to achieve this, but I want to know if the batteries are already included before going off and creating a custom solution.
In order to reuse code, you can create a decorating function (that accepts a default value) and decorate your functions with it:
def handle_exceptions(default):
    def wrap(f):
        def inner(*a):
            try:
                return f(*a)
            except Exception:
                return default
        return inner
    return wrap
Now let's see an example:
@handle_exceptions("Invalid Argument")
def test(num):
    return 15 / num

@handle_exceptions("Input should be Strings only!")
def test2(s1, s2):
    return s2 in s1

print(test(0))            # Invalid Argument
print(test(15))           # 1.0
print(test2("abc", "b"))  # True
print(test2("abc", 1))    # Input should be Strings only!
No, the standard way to do this is with try... except.
There is no mechanism to hide or suppress any generic exception within a function. I suspect many Python users would consider indiscriminate use of such a function to be un-Pythonic for a couple reasons:
It hides information about what particular exception occurred. (You might not want to handle all exceptions, since some could come from other libraries and indicate conditions that your program can't recover from, like running out of disk space.)
It hides the fact that an exception occurred at all; the default value returned in case of an exception might coincide with a valid non-default value. (Sometimes reasonable, sometimes not really so.)
One of the principles of the Pythonic philosophy, I believe, is that "explicit is better than implicit," so Python generally avoids automatic type casting and error recovery, which are features of more "implicit-friendly" languages like Perl.
Although the try... except form can be a bit verbose, in my opinion it has a lot of advantages in terms of clearly showing where an exception may occur and what the control flow is around that exception.
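For illustration, the explicit version of the "default on failure" pattern (the input value is made up):

user_input = "not a number"  # illustrative

try:
    value = int(user_input)
except ValueError:  # catch only the failure we expect to recover from
    value = 0       # the default
print(value)  # prints 0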
