I find myself writing the same argument checking code all the time for number-crunching:
def myfun(a, b):
    if a < 0:
        raise ValueError('a cannot be < 0 (was a=%s)' % a)
    # more if... raise exception stuff here ...
    return a + b
Is there a better way? I was told not to use 'assert' for these things (though I don't see the problem, apart from not knowing the value of the variable that caused the error).
edit: To clarify, the arguments are usually just numbers and the error checking conditions can be complex, non-trivial and will not necessarily lead to an exception later, but simply to a wrong result. (unstable algorithms, meaningless solutions etc)
assert gets optimized away if you run with python -O (modest optimizations, but sometimes nice to have). One preferable alternative, if you have patterns that often repeat, may be to use decorators -- a great way to factor out repetition. E.g., say you have a zillion functions that must be called with arguments by-position (not by-keyword) and must have a non-negative first argument; then...:
def firstargpos(f):
    def wrapper(first, *args):
        if first < 0:
            raise ValueError('first argument cannot be < 0 (was %s)' % first)
        return f(first, *args)
    return wrapper
then you say something like:
@firstargpos
def myfun(a, b):
    ...
and the checks are performed in the decorator (or rather, in the wrapper closure it returns) once and for all. So, the only tricky part is figuring out exactly what checks your functions need and how best to call the decorator(s) to express those (hard to say, without seeing the set of functions you're defining and the set of checks each needs!-). Remember, DRY ("Don't Repeat Yourself") is close to the top spot among guiding principles in software development, and Python has reasonable support to allow you to implement DRY and avoid boilerplatey, repetitious code!-)
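For checks that vary per function, the same idea generalizes to a decorator factory that takes the checks as predicates. A minimal sketch (check_args and non_negative are hypothetical names, not from the answer above):

from functools import wraps

def check_args(*predicates):
    def decorator(f):
        @wraps(f)
        def wrapper(*args):
            # pair each positional argument with its predicate and validate
            for pred, arg in zip(predicates, args):
                if not pred(arg):
                    raise ValueError('bad argument: %r' % (arg,))
            return f(*args)
        return wrapper
    return decorator

def non_negative(x):
    return x >= 0

@check_args(non_negative)
def myfun(a, b):
    return a + b

myfun(3, 4)   # returns 7
myfun(-1, 4)  # raises ValueError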
You don't want to use assert because your code can be run (and is by default on some systems) in such a way that assert statements are not checked and do not raise errors (the -O command-line flag).
If you're using a lot of variables that are all supposed to have those same properties, why not subclass whatever type you're using and add that check to the class itself? Then when you use your new class, you know you never have an invalid value, and don't have to go checking for it all over the place.
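A minimal sketch of that idea, assuming the values are floats (NonNegative is a hypothetical name, not anything standard):

class NonNegative(float):
    def __new__(cls, value):
        # validate once, at construction time
        if value < 0:
            raise ValueError('value cannot be < 0 (was %s)' % value)
        return super().__new__(cls, value)

a = NonNegative(3.5)   # fine
b = NonNegative(-1.0)  # raises ValueError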
I'm not sure if this will answer your question, but it strikes me that checking a lot of arguments at the start of a function isn't very pythonic.
What I mean by this is that it is the assumption of most pythonistas that we are all consenting adults, and we trust each other not to do something stupid. Here's how I'd write your example:
def myfun(a, b):
    '''a cannot be < 0'''
    return a + b
This has three distinct advantages. First, it's concise: there's really no extra code doing anything unrelated to what you're actually trying to get done. Second, it puts the information exactly where it belongs, in help(myfun), where pythonistas are expected to look for usage notes. Finally, is a negative value for a really an error? Although you might think so, unless something definitely will break if a is negative (here it probably won't), then maybe letting it slip through and cause an error up the call stack is wiser. After all, if a + b is in error, it raises an exception which gets passed up the call stack, and behavior is still pretty much the same.
I'm asking about situations where if a wrong type of argument is passed to the function, it could:
Blow up the whole thing.
Return unexpected results
Return nothing
For instance, the function below expects the argument name to be a string. It would throw an exception for any other type that doesn't have a startswith method.
def fruits(name):
    if name.startswith('O'):
        print('Is it Orange?')
There are other cases where a function could halt or cause damage to the system if execution proceeds without type-checking. Whenever there are a lot of functions or functions with a lot of arguments, type checking is tedious and makes the code unreadable. So, is there a standard for doing this? As to 'how to type check' - there are plenty of examples here on stackexchange, but I couldn't find any about where it would be appropriate to do so.
Another example would be:
def fruits(names):
    with open('important_file.txt', 'r+') as fil:
        for name in names:
            if name in fil:
                ...  # Edit the file
Here, if names is a string, each character in it will influence the editing of the file; if it is any other iterable, each element it yields will. The two cases could produce very different results.
So, when should we type-check an argument and should we not?
The answer off the top of my head would be: it depends where the input comes from.
If the functions are class methods that get invoked internally, or things like that, you can assume the inputs are valid, because you wrote them!
For example
def add(x, y):
    return x + y

def multiply(a, b):
    product = 0
    for i in range(a):
        product = add(product, b)
    return product
In my add function, I could check that there is a + operator for the parameters x and y. But since I wrote the multiply function, and that is the only function that uses add, it is safe to assume the inputs will be int because that's how I wrote it. Now that argument stands on shaky ground for large code bases where you (hopefully) have shared code, so you can't be sure people don't misuse your functions. But that's why you comment them well to describe the correct use of said function.
If it has to read from a file, get user input, etc, then you may want to do some validation first.
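For instance, a sketch of validating user input at the boundary so the internal functions can stay trusting (the prompt and messages here are made up for illustration):

raw = input('How many times? ')
try:
    times = int(raw)
except ValueError:
    raise SystemExit('please enter an integer, got %r' % raw)
if times < 0:
    raise SystemExit('please enter a non-negative integer')
print(multiply(times, 7))  # multiply() from above can now trust its inputs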
I almost never do type checking in Python. In accordance with the Pythonic philosophy, I assume that I and other programmers are adults capable of reading the code (or at least the documentation) and using it properly. I assume that we test our code before we let it destroy something important. After all, in most cases if you do something wrong, you'll just see an error, and Python's error messages are quite informative most of the time.
The only occasion when I sometimes check types is when I want my function to behave differently depending on the argument's type. But although I sometimes feel compelled to do this, I don't consider it a good practice.
Most often it happens when my function iterates over a list of strings and I fear (or want) that it could get a single string passed in by accident; this won't throw an error right away because, unfortunately, a string is an iterable too.
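When I do check, it looks something like this (process_names is a made-up example of the kind of function I mean):

def process_names(names):
    # a single string would iterate character by character, so reject it
    if isinstance(names, str):
        raise TypeError('expected an iterable of strings, got a single string')
    for name in names:
        print(name)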
This has probably been asked before, but I don't know how to look up the answer, because I'm not sure what's a good way to phrase the question.
The topic is function arguments that can semantically be expressed in many different ways. For example, to give a file to a function, you could either give the file directly, or you could give a string, which is the path to the file. To specify a number, you might allow an integer as an argument, or maybe you might allow a string (the numeral), or you might even allow a string such as "one". Another example might be a function that takes a list (of numbers, say), but as a convenience, it will convert a number into a list containing one element: that number.
Is there a more or less standard approach in Python to allowing this sort of flexibility? It certainly complicates the code for a program if you're not certain what types the arguments are, so my guess would be to try to factor the convenience handling out into just one place, instead of scattering it everywhere, but I don't really know how best to do that kind of factoring.
No, there is not more or less a "standard" approach to this.
Don't do that! ;)
But if you still want to, I'd suggest that you have an intermediate class or function handling this for you:
Pseudocode:
def printTheNumber(num):
    print(num)

def intermediatePrintTheNumber(value):
    num_int_dict = {'one': 1, 'two': 2}  # ... and so on
    if isinstance(value, str):
        printTheNumber(num_int_dict[value])
    elif isinstance(value, int):
        printTheNumber(value)
    else:
        print("Sorry Dave, I don't understand you")
Whether this is pythonic I don't know, but that's how I'd solve it if I had to, of course with some more checking of the input to deem it valid.
In your comment you mention semantic similarity, i.e. "one" and 1 might mean the same thing.
You ask where this kind of conversion should be made.
Well, that depends on the design of your system, but I can tell you that it should not be done in the same function that I call printTheNumber, for one very simple reason: that would give that function way too much responsibility.
Depending on the complexity of the input, it could be the integer 1 or the string "1" or, in the worst case, "one", or maybe even worse, "uno"|"one"|"yxi"|"ett" and so on. This should be handled by a function that has only that responsibility, maybe with a database handling the mapping.
I would split it up so that I have one function handling the strings "one", "two", etc., one handling integers, and a third function that checks the input to see whether it can be converted to an integer or not.
As I see it, having to take measures against this kind of complexity is a warning sign of a fundamental flaw in the design, but you seem to be aware of that, so I won't go on about it.
A good way to factor out code common to several functions is through decorators. For example,
from functools import wraps

def takes_list(func):
    @wraps(func)
    def wrapper(arg):
        if not isinstance(arg, list):
            arg = [arg]
        return func(arg)
    return wrapper

@takes_list
def my_func(x):
    "Does something with list x."
I should note that for cases such as files, you don't want to get in the way of Python's duck typing: doing a check isinstance(arg, file) has the problem that it won't allow file-like things such as io.StringIO. Instead, check against str (or basestring) or even let open do the checking for you, with try-except.
However, it's generally better practice just to let the caller pass what they like into the function, and fail if it isn't valid.
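A sketch of that approach (read_data is a hypothetical function): check only against str, and otherwise assume the object is file-like:

import io

def read_data(source):
    if isinstance(source, str):
        # looks like a path: let open() do any further checking
        with open(source) as f:
            return f.read()
    return source.read()  # assume file-like; duck typing does the rest

print(read_data(io.StringIO('hello')))  # file-like objects still work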
Since you cannot add methods at runtime to built-in classes such as int or str, you would build a switch-case-like structure, as mentioned by Daniel Figueroa.
Another way would be to just convert:
def func(i):
    if not isinstance(i, int):
        i = int(i)  # objects can override __int__ if needed
If you have your own classes that you can add methods to, you may use double dispatch to do the same thing for you. Smalltalk uses this for the Integer-Float-... conversion.
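A minimal sketch of double dispatch on classes you control (MyInt and MyFloat are hypothetical names, echoing Smalltalk's Integer/Float coercion):

class MyInt:
    def __init__(self, v):
        self.v = v
    def add(self, other):
        return other.add_int(self)  # second dispatch, on the argument's type
    def add_int(self, other):
        return MyInt(other.v + self.v)
    def add_float(self, other):
        return MyFloat(other.v + self.v)

class MyFloat:
    def __init__(self, v):
        self.v = v
    def add(self, other):
        return other.add_float(self)
    def add_int(self, other):
        return MyFloat(other.v + self.v)
    def add_float(self, other):
        return MyFloat(other.v + self.v)

print(MyInt(1).add(MyFloat(2.5)).v)  # 3.5, and the result is a MyFloat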
Another way would be to use subject-oriented programming, for which I have not found an implementation yet, but I made an attempt: https://gist.github.com/niccokunzmann/4971938
[Edit] Changed return 0 to return. Side effects of being a Python n00b. :)
I'm defining a function that does some 20 lines of processing. Before processing, I need to check if a certain condition is met; if so, I should bypass all processing. I have defined the function this way:
def test_function(self, inputs):
    if inputs == 0:
        <Display Message box>
        return
    <20 line logic here>
Note that the 20-line logic does not return any value, and I'm not using the value returned by the first 'if'.
I want to know if this is better than using the below type of code (in terms of performance, readability, or any other matter), because the above method looks good to me as it uses one less level of indentation:
def test_function(self, inputs):
    if inputs == 0:
        <Display Message box>
    else:
        <20 line logic here>
In general, it improves code readability to handle failure conditions as early as possible. Then the meat of your code doesn't have to worry about these, and the reader of your code doesn't have to consider them any more. In many cases you'd be raising exceptions, but if you really want to do nothing, I don't see that as a major problem even if you generally hew to the "single exit point" style.
But why return 0 instead of just return, since you're not using the value?
First, you can use return without any value after it; you don't have to force a return 0.
As for performance, this question seems to show you won't notice any difference (except if you're really unlucky ;) )
In this context, I think it's important to ask why inputs can't be zero. Typically, I think the way most programs will handle this is to raise an exception if a bad value is passed. Then the exception can be handled (or not) in the calling routine.
You'll often see it written "Better to ask forgiveness" as opposed to "Look before you leap". Of course, if you're often passing 0 into the function, then the try/except clause could get expensive (try is cheap, except is not).
If you're set on "looking before you leap", I would probably use the first form to keep indentation down.
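For contrast, a sketch of the "ask forgiveness" version, assuming the zero input would actually blow up inside the logic (the division and the show_message_box helper are made up for illustration):

def test_function(self, inputs):
    try:
        result = 100 / inputs  # stand-in for the 20-line logic
        print(result)
    except ZeroDivisionError:
        show_message_box('inputs cannot be zero')  # hypothetical helper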
I doubt the performance is going to be significantly different in either case. Like you I would tend to lean more toward the first method for readability.
In addition to the smaller indentation (which doesn't really matter much, IMO), it removes the need to read further once inputs == 0.
In the second method one might assume that there is additional processing after the if/else statement, whereas the first one makes it obvious that the method is complete upon that condition.
It really just comes down to personal preference though, you will see both methods used in practice.
Your second example will return after it displays the message box in this case.
I prefer to "return early" as in my opinion it leads to better readability. But then again, most of my returns that happen prior to the actual end of the function tend to be more around short circuiting application logic if certain conditions are not met.
Specifically the ":int" part...
I assumed it somehow checked the type of the parameter at the time the function is called and perhaps raised an exception in the case of a violation. But the following run without problems:
def some_method(param:str):
    print("blah")

some_method(1)

def some_method(param:int):
    print("blah")

some_method("asdfaslkj")
In both cases "blah" is printed - no exception raised.
I'm not sure what the name of the feature is so I wasn't sure what to google.
EDIT: OK, so it's http://www.python.org/dev/peps/pep-3107/. I can see how it'd be useful in frameworks that utilize metadata. It's not what I assumed it was. Thanks for the responses!
FOLLOW-UP QUESTION - Any thoughts on whether it's a good idea or bad idea to define my functions as def some_method(param:int) if I really only can handle int inputs - even if, as pep 3107 explains, it's just metadata - no enforcement as I originally assumed? At least the consumers of the methods will see clearly what I intended. It's an alternative to documentation. Think this is good/bad/waste of time? Granted, good parameter naming (unlike my contrived example) usually makes it clear what types are meant to be passed in.
It's not used for anything much -- it's just there for experimentation (you can read the annotations from within Python if you want, for example). They are called "function annotations" and are described in PEP 3107.
I wrote a library that builds on them to do things like type checking (and more -- for example, you can map more easily from JSON to Python objects) called pytyp (more info), but it's not very popular... (I should also add that the type checking part of pytyp is not at all efficient -- it can be useful for tracking down a bug, but you wouldn't want to use it across an entire program).
[Update: I would not recommend using function annotations in general (i.e. with no particular use in mind, just as docs) because (1) they might eventually get used in a way you didn't expect and (2) the exact type of things is often not that important in Python (more exactly, it's not always clear how best to specify the type of something in a useful way -- objects can be quite complex, and often only "parts" are used by any one function, with multiple classes implementing those parts in different ways...). This is a consequence of duck typing -- see the "more info" link for related discussion on how Python's abstract base classes could be used to tackle this...]
Function annotations are what you make of them.
They can be used for documentation:
def kinetic_energy(mass: 'in kilograms', velocity: 'in meters per second'):
    ...
They can be used for pre-condition checking:
def validate(func, locals):
    for var, test in func.__annotations__.items():
        value = locals[var]
        msg = 'Var: {0}\tValue: {1}\tTest: {2.__name__}'.format(var, value, test)
        assert test(value), msg

def is_int(x):
    return isinstance(x, int)

def between(lo, hi):
    def _between(x):
        return lo <= x <= hi
    return _between

def f(x: between(3, 10), y: is_int):
    validate(f, locals())
    print(x, y)
>>> f(8, 31.1)
Traceback (most recent call last):
...
AssertionError: Var: y Value: 31.1 Test: is_int
Also see http://www.python.org/dev/peps/pep-0362/ for a way to implement type checking.
Not experienced in python, but I assume the point is to annotate/declare the parameter type that the method expects. Whether or not the expected type is rigidly enforced at runtime is beside the point.
For instance, consider:
intToHexString(param:int)
Although the language may technically allow you to call intToHexString("Hello"), it's not semantically meaningful to do so. Having the :int as part of the method declaration helps to reinforce that.
It's basically just used for documentation. When someone examines the method signature, they'll see that param is labelled as an int, which tells them the author of the method expected them to pass an int.
Because Python programmers use duck typing, this doesn't mean you have to pass an int, but it tells you the code is expecting something "int-like". So you'll probably have to pass something basically "numeric" in nature, that supports arithmetic operations. Depending on the method it may have to be usable as an index, or it may not.
However, because it's syntax and not just a comment, the annotation is visible to any code that wants to introspect it. This opens up the possibility of writing a typecheck decorator that can enforce strict type checking on arbitrary functions; this allows you to put the type checking logic in one place, and have each method declare which parameters it wants strictly type checked (by attaching a type annotation) with a minimum on syntax, in a way that is visible to client programmers who are browsing method definitions to find out the interface.
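One possible shape of such a typecheck decorator, as a sketch rather than any standard feature; it treats annotations that are types as isinstance checks and ignores all other annotations:

from functools import wraps
import inspect

def typecheck(func):
    sig = inspect.signature(func)
    @wraps(func)
    def wrapper(*args, **kwargs):
        # map the actual arguments onto parameter names
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = func.__annotations__.get(name)
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError('%s must be %s, got %r'
                                % (name, expected.__name__, value))
        return func(*args, **kwargs)
    return wrapper

@typecheck
def mul(a: int, b: int) -> None:
    print(a * b)

mul(2, 3)    # prints 6
mul(2, 'x')  # raises TypeError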
Or you could do other things with those annotations. No standardized meaning has yet been developed. Maybe if someone comes up with a killer feature that uses them and has huge adoption, then it'll one day become part of the Python language, but I suspect the flexibility of using them however you want will be too useful to ever do that.
You might also use the "-> returnValue" notation to indicate what type the function might return.
def mul(a:int, b:int) -> None:
    print(a*b)
My workplace has imposed a rule against raising exceptions (catching is allowed). If I have code like this:
def f1():
    if bad_thing_happen():
        raise Exception('bad stuff')
    ...
    return something
I could change it to
def f1():
    if bad_thing_happen():
        return [-1, None]
    ...
    return [0, something]
f1's caller would look like this:
def f1_caller():
    code, result = f1(param1)
    if code < 0:
        return code
    actual_work1()
    # call f1 again
    code, result = f1(param2)
    if code < 0:
        return code
    actual_work2()
    ...
Are there more elegant ways than this in Python ?
Exceptions in python are not something to be avoided, and are often a straightforward way to solve problems. Additionally, an exception carries a great deal of information with it that can help quickly locate (via stack trace) and identify problems (via exception class or message).
Whoever has come up with this blanket policy was surely thinking of another language (perhaps C++?) where throwing exceptions is a more expensive operation (and will reduce performance if your code is executing on a 20 year old computer).
To answer your question: the alternative is to return an error code. This means that you are mixing function results with error handling, which raises (ha!) its own problems. However, returning None is often a perfectly reasonable way to indicate function failure.
Returning None is reasonably common and works well conceptually. If you are expecting a return value, and you get none, that is a good indication that something went wrong.
Another possible approach, if you are expecting to return a list (or dictionary, etc.) is to return an empty list or dict. This can easily be tested for using if, because an empty container evaluates to False in Python, and if you are going to iterate over it, you may not even need to check for it (depending on what you want to do if the function fails).
Of course, these approaches don't tell you why the function failed. So you could return an exception instance, such as return ValueError("invalid index"). Then you can test for particular exceptions (or Exceptions in general) using isinstance() and print them to get decent error messages. (Or you could provide a helper function that tests a return code to see if it's derived from Exception.) You can still create your own Exception subclasses; you would simply be returning them rather than raising them.
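A quick sketch of what that looks like in use (get_item is a made-up example):

def get_item(seq, index):
    if not 0 <= index < len(seq):
        # return, rather than raise, the exception instance
        return ValueError('invalid index: %d' % index)
    return seq[index]

result = get_item([1, 2, 3], 7)
if isinstance(result, Exception):
    print('failed:', result)
else:
    print('got:', result)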
Finally, I would work toward getting this ridiculous policy changed, as exceptions are an important part of how Python works, have low overhead, and will be expected by anyone using your functions.
You have to use return codes. Other alternatives would involve mutable global state (think C's errno) or passing in a mutable object (such as a list), but you almost always want to avoid both in Python. Perhaps you could try explaining to them how exceptions let you write better post-conditions instead of adding complication to return values, but are otherwise equivalent.