Let's say I have a python module with the following function:
def is_plontaria(plon: str) -> bool:
    if plon is None:
        raise RuntimeError("None found")
    return plon.find("plontaria") != -1
For that function, I have the unit test that follows:
def test_is_plontaria_null(self):
    with self.assertRaises(RuntimeError) as cmgr:
        is_plontaria(None)
    self.assertEqual(str(cmgr.exception), "None found")
Given the type hints in the function, the input parameter should always be a defined string. But type hints are... hints. Nothing prevents the user from passing whatever they want, and None in particular is quite common when previous operations fail to return the expected results and those results go unchecked.
So I decided to test for None in the unit tests and to check the input is not None in the function.
The issue is: the type checker (pylance) warns me that I should not use None in that call:
Argument of type "None" cannot be assigned to parameter "plon" of type "str" in function "is_plontaria"
Type "None" cannot be assigned to type "str"
Well, I already know that, and that is the purpose of that test.
What is the best way to get rid of that error? Telling pylance to ignore this kind of error in every test/file? Or assuming that the argument passed will always be of the proper type, and removing both that test and the None check from the function?
This is a good question. I think that silencing that type error in your test is not the right way to go.
Don't patronize the user
While I would not go so far as to say that this is universally the right way to do it, in this case I would definitely recommend getting rid of your None check from is_plontaria.
Think about what you accomplish with this check. Say a user calls is_plontaria(None) even though you annotated it with str. Without the check, he causes an AttributeError: 'NoneType' object has no attribute 'find' with a traceback pointing to the line return plon.find("plontaria") != -1. The user thinks to himself, "oops, that function expects a str". With your check, he causes a RuntimeError, ideally one telling him that plon is supposed to be a str.
What purpose did the check serve? I would argue none whatsoever. Either way, an error is raised because your function was misused.
What if the user passes a float accidentally? Or a bool? Or literally anything other than a str? Do you want to hold the user's hand for every parameter of every function you write?
And I don't buy the "None is a special case" argument. Sure, it is a common type to be "lying around" in code, but that is still on the user, as you pointed out yourself.
If you are using properly type-annotated code (as you should) and the user is too, such a situation should never happen. Say the user has another function foo that he wants to use like this:
def foo() -> str | None:
    ...

s = foo()
b = is_plontaria(s)
That last line should cause any static type checker worth its salt to raise an error, saying that is_plontaria only accepts str, but a union of str and None was provided. Even most IDEs mark that line as problematic.
The user should see that before he even runs his code. Then he is forced to rethink and either change foo or introduce his own type check before calling your function:
s = foo()
if isinstance(s, str):
    b = is_plontaria(s)
else:
    ...  # do something else
Qualifier
To be fair, there are situations where error messages are very obscure and don't properly tell the caller what went wrong. In those cases it may be useful to introduce your own. But aside from those, I would always argue in the spirit of Python that the user should be considered mature enough to do his own homework. And if he doesn't, that is on him, not you. (So long as you did your homework.)
There may be other situations, where raising your own type-errors makes sense, but I would consider those to be the exception.
If you must, use Mock
As a little bonus, in case you absolutely do want to keep that check in place and need to cover that if-branch in your test, you can simply pass a Mock as an argument, provided your if-statement is adjusted to check for anything other than str:
from unittest import TestCase
from unittest.mock import Mock


def is_plontaria(plon: str) -> bool:
    if not isinstance(plon, str):
        raise RuntimeError("None found")
    return plon.find("plontaria") != -1


class Test(TestCase):
    def test_is_plontaria(self) -> None:
        not_a_string = Mock()
        with self.assertRaises(RuntimeError):
            is_plontaria(not_a_string)
        ...
Most type checkers consider Mock to be a special case and don't complain about its type, assuming you are running tests. mypy for example is perfectly happy with such code.
This comes in handy in other situations as well. For example, when the function being tested expects an instance of some custom class of yours as its argument. You obviously want to isolate the function from that class, so you can just pass a mock to it that way. The type checker won't mind.
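For example, something like this (Widget and process_widget are made-up names, not from the question):

from unittest.mock import Mock

class Widget:
    def __init__(self, name: str) -> None:
        self.name = name

def process_widget(widget: Widget) -> str:
    return widget.name.upper()

def test_process_widget() -> None:
    # A Mock stands in for Widget, so the test never constructs
    # the real class, and the type checker accepts the argument.
    fake_widget = Mock()
    fake_widget.name = "gizmo"
    assert process_widget(fake_widget) == "GIZMO"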
Hope this helps.
You can disable type checking on a specific line with a comment.
def test_is_plontaria_null(self):
    with self.assertRaises(RuntimeError) as cmgr:
        is_plontaria(None)  # type: ignore
    self.assertEqual(str(cmgr.exception), "None found")
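If you'd rather not switch off all type checking on that line, here are two narrower options; note that the bracketed pyright rule name varies between versions, so treat reportArgumentType as an assumption to verify against your setup:

from typing import cast

# Suppress only one pyright/pylance diagnostic instead of everything:
is_plontaria(None)  # pyright: ignore[reportArgumentType]

# Or lie to the checker explicitly; cast has no runtime effect:
is_plontaria(cast(str, None))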
Related
In Django I am writing a test that checks whether or not a value is null. This seems to be a pretty standard procedure, but for some reason, when I pass a value to the method
assert self.assertIsNotNone(foo)
even when the value is most certainly not None, the assertion still fails.
Another odd thing is that the following if block passes, even though it has the same intended behavior as the assertIsNotNone function.
foo = Load.objects.all().first()
if foo is not None:
    print("Passed")
For context, the foo variable in the first example is an instance of a Django model. And I repeat, the object is most definitely not None.
Does anybody have any idea what would be causing something like this?
And if my understanding of the function is incorrect please let me know.
Full code:
foo = Load.objects.all().first()

if foo is not None:  # passes
    print("Passed")

assert self.assertIsNotNone(foo)  # fails
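For what it's worth, assertIsNotNone returns None on success, which would explain the behavior above: the outer assert then evaluates assert None and fails even when foo is set. A minimal illustration:

foo = Load.objects.all().first()

# assertIsNotNone does the checking itself; its return value is None.
self.assertIsNotNone(foo)         # passes when foo is not None

# Wrapping it in assert tests the *return value*, i.e. `assert None`,
# which raises AssertionError regardless of foo.
assert self.assertIsNotNone(foo)  # always fails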
I've created the following example:
from typing import List, Sequence


class Circle:
    pass


def foo(circle: Circle) -> Sequence[Circle]:
    return_value: List[Circle] = [circle]
    return return_value


def bar(circle: Circle) -> List[Sequence[Circle]]:
    # Incompatible return value type (got "List[List[Circle]]", expected "List[Sequence[Circle]]")
    return_value: List[List[Circle]] = [[circle]]
    return return_value
Why is it okay to return a List[Circle] when it's expecting a Sequence[Circle], but not a List[List[Circle]] when it's expecting a List[Sequence[Circle]]?
More specifically, why is this not okay when the value is a return value? I think I understand why it's not okay as a parameter, but I don't get why this value is not accepted as a return value.
The docs give a great example displaying why Lists are invariant:
class Shape:
    pass


class Circle(Shape):
    def rotate(self):
        ...


def add_one(things: List[Shape]) -> None:
    things.append(Shape())


my_things: List[Circle] = []
add_one(my_things)     # This may appear safe, but...
my_things[0].rotate()  # ...this will fail
Here, the idea is that if you take your List[Subclass] and pass it to something that thinks it is a List[Superclass], the function can add Superclass elements to your List[Subclass], so that it effectively becomes a List[Superclass] after the function is run.
However, as a return value, I don't see why this is an issue. Once it exits that function, everyone will treat it as a List[Sequence[Circle]], which it is, so there should be no issues.
Once again, while typing up this question, I think I have figured out an answer to it.
Consider the following case:
from typing import List, Sequence


class Circle:
    pass


def baz(circle_list_matrix: List[List[Circle]]) -> List[Sequence[Circle]]:
    # Incompatible return value type (got "List[List[Circle]]", expected "List[Sequence[Circle]]")
    return circle_list_matrix
Here, Mypy is absolutely right to raise the error, because other functions that use circle_list_matrix may depend on it being a List[List[Circle]], while code that receives the return value may treat it as a List[Sequence[Circle]] and, for example, append a tuple of Circles to it - breaking anything that still holds the original List[List[Circle]] reference.
In order to determine which case we're in, Mypy would have to keep track of when our variables were declared, and ensure that nothing ever depends on treating the return value as a List[List[Circle]] after the function returns (even though it is typed as such) before allowing us to use it as a return value.
(Note that treating it like a List[List[Circle]] before the function returns shouldn't be a bad thing, since it is a List[List[Circle]] at those points. Also if it was always treated like it was a List[Sequence[Circle]], then we could just type it as such with no problem. The question arises when something treats it like a List[List[Circle]], for example with circle_list_matrix[0].append(Circle()), so we have to type it as a List[List[Circle]] in order to do that operation, but then it's treated as a List[Sequence[Circle]] every single time after the function returns.)
The bottom line is that Mypy doesn't do that sort of analysis. So, in order to let Mypy know that this is okay, we should just cast it.
In other words, we know that the return value will never be used as a List[List[Circle]] again, so baz should be written as:
def baz(circle_list_matrix: List[List[Circle]]) -> List[Sequence[Circle]]:
    # works fine
    return cast(List[Sequence[Circle]], circle_list_matrix)
where cast is imported from typing.
The same casting technique can be applied to bar in the question code.
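Applied to bar from the question, that would look like:

from typing import List, Sequence, cast

def bar(circle: Circle) -> List[Sequence[Circle]]:
    return_value: List[List[Circle]] = [[circle]]
    # The cast promises Mypy that nothing will treat the result
    # as a List[List[Circle]] again after it is returned.
    return cast(List[Sequence[Circle]], return_value)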
I am just starting with Python and have a question: is it a good idea to design a function that returns multiple types of value? I read some information on site and totally understand that it is better to raise an exception when an error is encountered or a precondition is unsatisfied. But what if there is no error, just multiple possible return types? The function below is a dummy, but with the multi_value function I do not need to write something like multi_value()[0] if I need to access the value from the function.
refer:https://docs.quantifiedcode.com/python-anti-patterns/maintainability/returning_more_than_one_variable_type_from_function_call.html
from typing import Union


def multi_value(para: Union[list, int]):
    return para[0] if len(para) == 1 else para


def fun(para: Union[list, int]):
    return para


print(type(multi_value([1, 2, 3])))  # --> <class 'list'>
print(type(multi_value(['1'])))      # --> <class 'str'>
print(type(multi_value([1])))        # --> <class 'int'>
The idea of not having different return types is to make your functions easier to use. If the caller has to check the return type, he gets unnecessarily complicated, hard-to-read and error-prone code. Even worse: the caller might not do the check at all and then get caught by surprise. You show a nice example: returning a list or a scalar value. If you do as shown, the caller has to write something like
res = multi_value(x)
try:
    for i in res:
        do_something_with_res(i)
except TypeError:
    do_something_with_res(res)
Given your function really does not throw anything, this would all collapse to

for i in multi_value(x):
    do_something_with_res(i)

if you returned single (or no) values as lists, too. The advantage should be obvious. You may think you are doing the caller a favor - but that is just not true.
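A sketch of that uniform version, reusing the placeholder do_something_with_res from above:

from typing import List

def multi_value(para: List[int]) -> List[int]:
    # Always return a list, even for zero or one elements,
    # so every caller can iterate unconditionally.
    return para

for i in multi_value([1]):  # a single value is still a list
    do_something_with_res(i)
for i in multi_value([]):   # no values: the loop body never runs
    do_something_with_res(i)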
One remark on the linked article: I think it gives a sub-optimal example on the matter. That example is more about returning error codes vs. raising exceptions, which is a slightly different issue.
This one's a structure design problem, I guess. Back for some advice.
To start: I'm writing a module. Hence the effort of making it as usable to potential developers as possible.
Inside an object (let's call it Swoosh) I have a method which, when called, may result in either success (a new object is returned -- for insight: it's an httplib.HTTPResponse) or failure (surprising, isn't it?).
I'm having trouble deciding how to handle failures. There are two main cases here:
user supplied data that was incorrect
data was okay, but user interaction will be needed - I need to pass back to the user a string that he or she will need to use in some way.
In (1) I decided to raise ValueError() with an appropriate description.
In (2), as I need to actually pass a str back to the user, I'm not sure whether it would be best to just return a string and leave it to the user to check what the function returned (httplib.HTTPResponse or str), or to raise a custom exception. Is passing data through raised exceptions a good idea? I don't think I've seen this done anywhere, but on the other hand - I haven't seen much.
What would you, as a developer, expect from an object/function like this?
Or perhaps you find the whole design ridiculous - let me know, I'll happily learn.
As much as I like the approach of handling both cases with specifically-typed exceptions, I'm going to offer a different approach in case it helps: callbacks.
Callbacks tend to work better if you're already using an asynchronous framework like Twisted, but that's not their only place. So you might have a method that takes a function for each outcome, like this:
def do_request(on_success, on_interaction_needed, on_failure):
    """
    Submits the swoosh request, and awaits a response.

    If no user interaction is needed, calls on_success with a
    httplib.HTTPResponse object.

    If user interaction is needed, on_interaction_needed is
    called with a single string parameter.

    If the request failed, a ValueError is passed to on_failure.
    """
    response = submit_request()
    if response.is_fine():
        on_success(response)
    elif response.is_partial():
        on_interaction_needed(response.message)
    else:
        on_failure(ValueError(response.message))
Being Python, there are a million ways to do this. You might not like passing an exception to a function, so you maybe just take a callback for the user input scenario. Also, you might pass the callbacks in to the Swoosh initialiser instead.
But there are drawbacks to this too, such as:
Carelessness may result in spaghetti code
You're allowing your caller to inject logic into your function (eg. exceptions raised in the callback will propagate out of Swoosh)
My example here is simple, your actual function might not be
As usual, careful consideration and good documentation should avoid these problems. In theory.
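For illustration, a hypothetical call site (the handler names are made up):

def handle_success(response):
    print("request ok:", response)

def handle_interaction(message):
    # The string the user needs is simply shown here;
    # a real caller would act on it.
    print("action required:", message)

def handle_failure(error):
    raise error  # or log it and recover

do_request(handle_success, handle_interaction, handle_failure)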
I think raising an exception may actually be a pretty good idea in this case. Squashing multiple signals into a single return value of a function isn't ideal in Python, due to duck typing. It's not very Pythonic; every time you need to do something like:
result = some_function(...)
if isinstance(result, TypeA):
    do_something(result)
elif isinstance(result, TypeB):
    do_something_else(result)
you should be thinking about whether it's really the best design (as you're doing).
In this case, if you implement a custom exception, then the code that calls your function can just treat the returned value as an HTTPResponse. Any path where the function is unable to return something its caller can treat that way is handled by raising an exception.
Likewise, the code that catches the exception and prompts the user with the message doesn't have to worry about the exact type of the thing it's getting. It just knows that it's been explicitly instructed (by the exception) to show something to the user.
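A minimal sketch of that approach, assuming Swoosh's method raises the custom exception with the user-facing string as its message (the names here are illustrative):

class UserInteractionNeeded(Exception):
    """Raised when the request cannot complete without user input."""

try:
    response = swoosh.method()    # httplib.HTTPResponse on success
except UserInteractionNeeded as exc:
    show_to_user(str(exc))        # hypothetical UI hook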
If the user interaction case means the calling code has to show a prompt, get some input and then pass control back to your function, it might be ugly trying to handle that with an exception. E.g.,
try:
    Swoosh.method()
except UserInteraction as ex:
    # do some user interaction stuff
    # pass it back to Swoosh.method()?
    # did Swoosh need to save some state from the last call?
    pass
except ValueError:
    pass  # whatever
If this user interaction is a normal part of the control flow, it might be cleaner to pass a user-interaction function into your method in the first place - then it can return a result to the Swoosh code. For example:
# in Swoosh
def method(self, userinteractor):
    if more_info_needed:
        more_info = userinteractor.prompt("more info")
    ...

ui = MyUserInteractor(self)  # or other state
Swoosh.method(ui)
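MyUserInteractor is left undefined above; a minimal console-based sketch (purely illustrative) might be:

class MyUserInteractor:
    def __init__(self, context):
        self.context = context  # whatever state the caller wants kept

    def prompt(self, question):
        # Simplest possible implementation: ask on the console.
        return input(question + ": ")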
You can return a tuple of (httplib.HTTPResponse, str) with the str being optionally None.
Definitely raise an exception for 1).
If you don't like returning a tuple, you can also create a "response object", i.e. an instance of a new class (let's say SomethingResponse) that encapsulates the HTTPResponse along with an optional message to the end user (in the simplest case, just a str).
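A minimal sketch of such a response object, using the names suggested above:

class SomethingResponse:
    def __init__(self, http_response, user_message=None):
        self.http_response = http_response  # the httplib.HTTPResponse
        self.user_message = user_message    # str, or None if nothing is needed

    def needs_interaction(self):
        return self.user_message is not None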
I was wondering about the best practices for indicating invalid argument combinations in Python. I've come across a few situations where you have a function like so:
def import_to_orm(name, save=False, recurse=False):
    """
    :param name: Name of some external entity to import.
    :param save: Save the ORM object before returning.
    :param recurse: Attempt to import associated objects as well. Because you
        need the original object to have a key to relate to, save must be
        `True` for recurse to be `True`.
    :raise BadValueError: If `recurse and not save`.
    :return: The ORM object.
    """
    pass
The only annoyance with this is that every package has its own, usually slightly differing BadValueError. I know that in Java there exists java.lang.IllegalArgumentException -- is it well understood that everybody will be creating their own BadValueErrors in Python or is there another, preferred method?
I would just raise ValueError, unless you need a more specific exception.
def import_to_orm(name, save=False, recurse=False):
    if recurse and not save:
        raise ValueError("save must be True if recurse is True")
There's really no point in doing class BadValueError(ValueError): pass - your custom class is identical in use to ValueError, so why not use that?
I would inherit from ValueError
class IllegalArgumentError(ValueError):
    pass
It is sometimes better to create your own exceptions, but inherit from a built-in one, which is as close to what you want as possible.
If you need to catch that specific error, it is helpful to have a name.
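For example, assuming import_to_orm raises it, the caller can single this error out without also swallowing every other ValueError:

try:
    obj = import_to_orm("thing", save=False, recurse=True)
except IllegalArgumentError:
    # Specifically the bad argument combination; other ValueErrors
    # from deeper code still propagate.
    obj = None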
I think the best way to handle this is the way python itself handles it. Python raises a TypeError. For example:
$ python -c 'print(sum())'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
TypeError: sum expected at least 1 arguments, got 0
Our junior dev just found this page in a google search for "python exception wrong arguments" and I'm surprised that the obvious (to me) answer wasn't ever suggested in the decade since this question was asked.
It depends on what the problem with the arguments is.
If the argument has the wrong type, raise a TypeError. For example, when you get a string instead of one of those Booleans.
if not isinstance(save, bool):
    raise TypeError(f"Argument save must be of type bool, not {type(save)}")
Note, however, that in Python we rarely make any checks like this. If the argument really is invalid, some deeper function will probably do the complaining for us. And if we only check the boolean value, perhaps some code user will later just feed it a string knowing that non-empty strings are always True. It might save him a cast.
If the arguments have invalid values, raise ValueError. This seems more appropriate in your case:
if recurse and not save:
    raise ValueError("If recurse is True, save should be True too")
Or in this specific case, have a True value of recurse imply a True value of save. Since I would consider this a recovery from an error, you might also want to complain in the log.
if recurse and not save:
    logging.warning("Bad arguments in import_to_orm() - if recurse is True, so should save be")
    save = True
I've mostly just seen the builtin ValueError used in this situation.
You would most likely use ValueError (raise ValueError() in full) in this case, but it depends on the type of bad value. For example, if you made a function that only allows strings and the user passed in an integer instead, you would use TypeError. If a user provided input of the right type that nevertheless fails to satisfy certain conditions, a ValueError would be your best choice. A ValueError can also be used to shield the program from other exceptions; for example, you could use a ValueError to stop the shell from raising a ZeroDivisionError, as in this function:
def function(number):
    if not type(number) == int and not type(number) == float:
        raise TypeError("number must be an integer or float")
    if number == 5:
        raise ValueError("number must not be 5")
    else:
        return 10 / (5 - number)
P.S. For a list of Python built-in exceptions, see https://docs.python.org/3/library/exceptions.html (the official Python documentation).
Agree with Markus' suggestion to roll your own exception, but the text of the exception should clarify that the problem is in the argument list, not the individual argument values. I'd propose:
class BadCallError(ValueError):
pass
Used when keyword arguments are missing that were required for the specific call, or argument values are individually valid but inconsistent with each other. ValueError would still be right when a specific argument is of the right type but out of range.
Shouldn't this be a standard exception in Python?
In general, I'd like Python style to be a bit sharper in distinguishing bad inputs to a function (caller's fault) from bad results within the function (my fault). So there might also be a BadArgumentError to distinguish value errors in arguments from value errors in locals.
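A sketch of the distinction (the function is hypothetical):

def schedule(job, start=None, end=None, duration=None):
    if start is None:
        raise BadCallError("schedule() requires a start argument")
    if end is not None and duration is not None:
        raise BadCallError("pass either end or duration, not both")
    if duration is not None and duration < 0:
        # An individually bad value, not a bad call shape:
        raise ValueError("duration must be non-negative")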
I'm not sure I agree with inheritance from ValueError -- my interpretation of the documentation is that ValueError is only supposed to be raised by builtins... inheriting from it or raising it yourself seems incorrect.
Raised when a built-in operation or function receives an argument that has the right type but an inappropriate value, and the situation is not described by a more precise exception such as IndexError.
-- ValueError documentation