Is it pythonic to call one function after the other? I have two functions, and one depends on the result of the other:
function1()  # if something goes wrong, this will raise an error; if not, it will return None
function2()
And I was thinking about using:
function1() is None and function2()
Is this pythonic?
You shouldn't think of the return value of None as indicating success, but rather the absence of an exception. Simply call the functions one after the other; if function1() raises, the exception propagates up the call chain and function2() never runs:
function1()
function2()
If you want, you can use a try statement to make it explicit that you are aware of the possibility of an exception, but are intentionally letting it pass up the call chain should one be raised:
try:
    function1()
except Exception:
    raise
else:
    function2()
I would be tempted to use a try...except block instead of two separate functions:
def MyFunction():
    try:
        # your first action goes here
    except:
        # what you want to happen if an error occurs goes here
You might need two except clauses, one for the specific failure you expect and one for any others. There is plenty of useful information in the documentation: https://wiki.python.org/moin/HandlingExceptions
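A minimal sketch of that two-clause shape (the exception types and the bodies are just placeholders, not from the question):
def my_function():
    try:
        value = int(input("enter a number: "))    # the action that may fail
    except ValueError:
        print("that was not a number")            # the specific failure you expect
    except Exception as exc:
        print("something else went wrong:", exc)  # any other failure
    else:
        print("got", value)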
Related
Both these snippets do the same thing:
Try/except in function declaration:
def something():
    try:
        # code goes here
    except:
        print("Error")
        sys.exit(1)

something()
Try/except in function call:
def something():
    # code goes here

try:
    something()
except:
    print("Error")
    sys.exit(1)
Is there one that is better/more Pythonic/recommended by PEP8 or is it just up to personal preference? I understand that the second method would get tedious and repetitive if the function needs to be called more than once, but assuming the function is only called once, which one should I use?
The general rule is "only catch exceptions you can handle"; see here for an explanation.
Note that an uncaught exception (in most languages) will cause the program to exit with an unsuccessful status code (i.e. your sys.exit(1)), and it will usually also print a message saying that an exception occurred. Your demo is therefore emulating the default behaviour, but doing it worse.
Further, you're catching every exception, and this is generally bad style: e.g. you'll implicitly catch SystemExit and other internal exceptions that you probably shouldn't be interacting with.
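To illustrate that last point: KeyboardInterrupt and SystemExit derive from BaseException, so a bare except: swallows them too, whereas catching only the exception you can actually handle lets Ctrl-C and sys.exit() behave normally. A hedged sketch (load_config and the filename are made up for the example):
def load_config(path):
    try:
        with open(path) as f:      # only protect the call that can fail in an expected way
            return f.read()
    except FileNotFoundError:      # the one failure we know how to handle
        print("no config at", path)
        return None
    # a bare `except:` here would also swallow KeyboardInterrupt and SystemExit

load_config("settings.ini")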
I am working on a function which takes different kinds of date_formats as an argument and dispatches it to a function which is in charge of parsing that format.
In other words:
def parse_format(date_format):
    # decision making happens here
    try:
        from_timestamp(date_format)
    except:
        pass
    try:
        from_dateformat(date_format)
    except:
        pass

def from_timestamp(format):
    # raise if not in charge

def from_dateformat(format):
    # raise if not in charge

def from_custom_format(format):
    # raise if not in charge
Currently, parse_format has multiple try/except blocks. Is this the way to go, or is there a more obvious way to do it? Furthermore, how do I handle the case where every function call fails?
I would do something like this:
class UnrecognizedFormatError(Exception):
    pass

def parse_format(date_format):
    methods = (from_timestamp, from_dateformat)
    for method in methods:
        try:
            return method(date_format)
        except:
            pass
    raise UnrecognizedFormatError
But note some key points:
except without a specific exception is bad, because an exception can be thrown from unexpected places, such as running out of memory or a keyboard interrupt in a script. So please use the except SomeException as e form, with a specific exception type.
If every function fails, this code will raise an UnrecognizedFormatError, allowing the function's caller to respond appropriately.
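Applying the first point to the loop above: assuming, purely for illustration, that the parsers signal "not in charge" by raising ValueError (substitute whatever exception they actually raise), the loop becomes:
class UnrecognizedFormatError(Exception):
    pass

def parse_format(date_format):
    for method in (from_timestamp, from_dateformat, from_custom_format):
        try:
            return method(date_format)
        except ValueError:            # only swallow the "not my format" signal
            continue
    raise UnrecognizedFormatError(date_format)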
Well, I would look at this as a great place for try/except/else/finally: the except clauses catch the cases where a particular parser raises, the else runs only if no exception was raised, and the finally runs whatever happens in your try/except statements. If your exceptions are appropriately chosen, then it will pick the right function for you.
Also, I'm guessing that this is a learning exercise, as the activity you're describing would be better done with datetime.strptime() (a short sketch follows the code below).
def from_timestamp(format):
    # raise if not in charge

def from_dateformat(format):
    # raise if not in charge

def from_custom_format(format):
    # raise if not in charge

def parse_format(date_format):
    # decision making happens here
    try:
        from_timestamp(date_format)
    except FirstException:
        from_dateformat(date_format)
    except SecondException:
        from_custom_format(date_format)
    else:
        whatever_you_do_if_nothing_went_wrong()
    finally:
        thing_that_happens_regardless()
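As an aside on the strptime suggestion: datetime.strptime raises ValueError when the string does not match the format, which is exactly the "raise if not in charge" behaviour the stubs above describe. A minimal sketch (the format strings here are only illustrative):
from datetime import datetime

print(datetime.strptime("2014-07-01", "%Y-%m-%d"))   # parses: 2014-07-01 00:00:00

try:
    datetime.strptime("2014-07-01", "%d.%m.%Y")      # wrong format for this string
except ValueError as e:
    print(e)                                         # "time data ... does not match format ..."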
What is the point of using an else clause if there is a return statement in the except clause?
def foo():
    try:
        # Some code
    except:
        # Some code
        return
    else:
        # Some code
I'm asking this question because the Django documentation does it at some point, in the vote() function. Considering that the return statement in the except clause will stop execution of the function anyway, why did they use an else clause to isolate the code that should only be executed if no exception was raised? They could have just omitted the else clause entirely.
If there is no exception in the try: suite, then the else: suite is executed. In other words, only if there is an actual exception is the except: suite reached and the return statement used.
In my view, the return statement is what is redundant here; a pass would have sufficed. I'd use an else: suite to a try when there is additional code that should only be executed if no exception is raised, but could raise exceptions itself that should not be caught.
You are right that a return in the except clause makes using an else: for that section of code somewhat redundant. The whole suite could be de-dented and the else: line removed:
def foo():
    try:
        # Some code
    except:
        # Some code
        return
    # Some code
From the docs:
The use of the else clause is better than adding additional code to the try clause because it avoids accidentally catching an exception that wasn’t raised by the code being protected by the try ... except statement.
http://docs.python.org/2/tutorial/errors.html#handling-exceptions
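A minimal sketch of the docs' point, with made-up names: only the call we want to guard sits inside the try block, and the follow-up work lives in else so that its own failures are not mistaken for the guarded one:
def first_line(path):
    try:
        f = open(path)            # only the open() call is protected
    except OSError:
        return None
    else:
        # f.readline() can also raise OSError; keeping it out of the try block
        # means such an error propagates instead of being mistaken for a failed open()
        with f:
            return f.readline()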
I had a function that returned a random member of several groups in order of preference. It went something like this:
def get_random_foo_or_bar():
    "I'd rather have a foo than a bar."
    if there_are_foos():
        return get_random_foo()
    if there_are_bars():
        return get_random_bar()
    raise IndexError("No foos, no bars")
However, the first thing get_random_foo does is verify there are foos and raise an IndexError if not, so there_are_foos is redundant. Moreover, a database is involved and using separate functions creates a concurrency issue. Accordingly, I rewrote it something like this:
def get_random_foo_or_bar():
    "Still prefer foos."
    try:
        return get_random_foo()
    except IndexError:
        pass
    try:
        return get_random_bar()
    except IndexError:
        pass
    raise IndexError("No foos, no bars")
But I find this much less readable, and as I've never had reason to use pass before, it feels instinctively wrong.
Is there a neater efficient pattern, or should I learn to accept pass?
Note: I'd like to avoid any nesting since other types may be added later.
Edit
Thanks everyone who said that pass is fine - that's reassuring!
Also thanks to those who suggested replacing the exception with a return value of None. I can see how this is a useful pattern, but I would argue it's semantically wrong in this situation: the functions have been asked to perform an impossible task, so they should raise an exception. I prefer to follow the behaviour of the random module (e.g. random.choice([])).
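For reference, that is indeed how the random module behaves: choosing from an empty sequence raises IndexError rather than returning None:
import random

try:
    random.choice([])
except IndexError as e:
    print(e)    # e.g. "Cannot choose from an empty sequence"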
That is exactly how I would write it. It's simple and it makes sense. I see no problem with the pass statements.
If you want to reduce the repetition and you anticipate adding future types, you could roll this up into a loop. Then you could change the pass to a functionally-equivalent continue statement, if that's more pleasing to your eyes:
for getter in (get_random_foo, get_random_bar):
    try:
        return getter()
    except IndexError:
        continue  # Ignore the exception and try the next type.
raise IndexError("No foos, no bars")
Using try/except/pass is acceptable, but there is a cleaner way to write this using contextlib.suppress(), available in Python 3.4+.
from contextlib import suppress

def get_random_foo_or_bar():
    "Still prefer foos."
    with suppress(IndexError):
        return get_random_foo()
    with suppress(IndexError):
        return get_random_bar()
    raise IndexError("No foos, no bars")
pass is fine (there's a reason it's in the language!-), but a pass-free alternative just takes a bit more nesting:
try: return get_random_foo()
except IndexError:
    try: return get_random_bar()
    except IndexError:
        raise IndexError("no foos, no bars")
Python's Zen (import this from the interactive interpreter prompt) says "flat is better than nested", but nesting is also in the language, for you to use when you decide (presumably being enlightened) that you can do better than that wise koan!-) (As in, "if you meet the Buddha on the road"...).
It looks a little weird to me that get_random_foo() is raising an IndexError when it doesn't take an index as a param (but it might make more sense in context). Why not have get_random_foo(), or a wrapper, catch the error and return None instead?
def get_random_foo_wrapper():
    try:
        return get_random_foo()
    except IndexError:
        return None

def get_random_foo_or_bar():
    "I'd rather have a foo than a bar."
    return get_random_foo_wrapper() or get_random_bar_wrapper() or None
Edit: I should mention that if foo & bar are objects that may evaluate to False (0 or '', say), then the or comparison will skip over them, which is BAD.
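A tiny illustration of that pitfall, with hypothetical getters: a perfectly valid falsy result (here 0) gets skipped by or, and the fallback wins instead:
def get_foo_wrapper():
    return 0          # a valid foo that happens to be falsy

def get_bar_wrapper():
    return "bar"

print(get_foo_wrapper() or get_bar_wrapper() or None)   # prints "bar", not 0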
If it's just those two, could always just...
try:
    return get_random_foo()
except IndexError:
    try:
        return get_random_bar()
    except IndexError:
        raise IndexError("No foos, no bars")
If it's more than two, what you have written seems perfectly acceptable.
Building on Peter Gibson's suggestion, you could create a generic wrapper function that swallows a given exception. And then you could write a function that returns such a generic wrapper for a provided exception. Or heck, for a provided list of exceptions.
def maketrap(*exceptions):
    def trap(func, *args, **kwargs):
        try:
            return func(*args, **kwargs)
        except exceptions:
            return None
    return trap

def get_random_foo_or_bar():
    mytrap = maketrap(IndexError)
    return mytrap(get_random_foo) or mytrap(get_random_bar) or None
If you don't really need the exception message (just the type):
def get_random_foo_or_bar():
    try:
        return get_random_foo()
    except IndexError:
        return get_random_bar()  # if failing at this point,
                                 # the whole function will raise IndexError
Is it necessary that get_random_foo/bar() raise an IndexError if it's unable to succeed?
If they returned None, you could do:
def get_random_foo_or_bar():
    return get_random_foo() or get_random_bar()
I have some test cases. The test cases rely on data which takes time to compute. To speed up testing, I've cached the data so that it doesn't have to be recomputed.
I now have foo(), which looks at the cached data. I can't tell ahead of time what it will look at, as that depends a lot on the test case.
If a test case fails because it doesn't find the right cached data, I don't want it to fail; I want it to compute the data and then try again. I also don't know which exception in particular it will throw because of the missing data.
My code right now looks like this:
if cacheExists:
    loadCache()
    dataComputed = False
else:
    calculateData()
    dataComputed = True

try:
    foo()
except:
    if not dataComputed:
        calculateData()
        dataComputed = True
        try:
            foo()
        except:
            # error handling code
    else:
        # the same error handling code
What's the best way to re-structure this code?
I disagree with the key suggestion in the existing answers, which basically boils down to treating exceptions in Python as you would in, say, C++ or Java -- that's NOT the preferred style in Python, where the good old idea that "it's better to ask forgiveness than permission" often applies (attempt an operation and deal with the exception, if any, rather than obscuring your code's main flow and incurring overhead with thorough preliminary checks). I do agree with Gabriel that a bare except is hardly ever a good idea (unless all it does is some form of logging followed by a raise to let the exception propagate). So, say you have a tuple with all the exception types that you do expect and want to handle the same way, say:
expected_exceptions = KeyError, AttributeError, TypeError
and always use except expected_exceptions: rather than bare except:.
So, with that out of the way, one slightly less-repetitious approach to your needs is:
try:
    foo1()
except expected_exceptions:
    try:
        if condition:
            foobetter()
        else:
            raise
    except expected_exceptions:
        handleError()
A different approach is to use an auxiliary function to wrap the try/except logic:
def may_raise(expected_exceptions, somefunction, *a, **k):
    try:
        return False, somefunction(*a, **k)
    except expected_exceptions:
        return True, None
Such a helper may often come in useful in several different situations, so it's pretty common to have something like this somewhere in a project's "utilities" modules. Now, for your case (no arguments, no results) you could use:
failed, _ = may_raise(expected_exceptions, foo1)
if failed and condition:
    failed, _ = may_raise(expected_exceptions, foobetter)
if failed:
    handleError()
which I would argue is more linear and therefore simpler. The only issue with this general approach is that an auxiliary function such as may_raise does not FORCE you to deal in some way or other with exceptions, so you might just forget to do so (just like the use of return codes, instead of exceptions, to indicate errors, is prone to those return values mistakenly being ignored); so, use it sparingly...!-)
Using blanket exceptions isn't usually a great idea. What kind of Exception are you expecting there? Is it a KeyError, AttributeError, TypeError...
Once you've identified what type of error you're looking for you can use something like hasattr() or the in operator or many other things that will test for your condition before you have to deal with exceptions.
That way you can clean up your logic flow and save your exception handling for things that are really broken!
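A minimal sketch of that "look before you leap" style, using made-up data rather than anything from the question:
config = {"timeout": 30}

# test for the key with `in` instead of catching KeyError
if "timeout" in config:
    timeout = config["timeout"]

# test for the attribute with hasattr() instead of catching AttributeError
if hasattr(config, "items"):
    pairs = list(config.items())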
Sometimes there's no nice way to express a flow, it's just complicated. But here's a way to call foo() in only one place, and have the error handling in only one place:
if cacheExists:
    loadCache()
    dataComputed = False
else:
    calculateData()
    dataComputed = True

while True:
    try:
        foo()
        break
    except:
        if not dataComputed:
            calculateData()
            dataComputed = True
            continue
        else:
            # the error handling code
            break
You may not like the loop, YMMV...
Or:
if cacheExists:
    loadCache()
    dataComputed = False
else:
    calculateData()
    dataComputed = True

done = False
while not done:
    try:
        foo()
        done = True
    except:
        if not dataComputed:
            calculateData()
            dataComputed = True
            continue
        else:
            # the error handling code
            done = True
I like the alternative approach proposed by Alex Martelli.
What do you think about using a list of functions as the argument to may_raise? The functions would be executed until one succeeds.
Here is the code:
def foo(x):
    raise Exception("Arrrgh!")
    return 0

def foobetter(x):
    print("Hello", x)
    return 1

def try_many(functions, expected_exceptions, *a, **k):
    ret = None
    for f in functions:
        try:
            ret = f(*a, **k)
        except expected_exceptions as e:
            print(e)
        else:
            break
    return ret

print(try_many((foo, foobetter), Exception, "World"))
The result is:
Arrrgh!
Hello World
1
Is there a way to tell if you want to do foobetter() before making the call? If you get an exception it should be because something unexpected (exceptional!) happened. Don't use exceptions for flow control.
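A hedged sketch of that "decide before calling" idea; should_use_foobetter() is hypothetical and stands in for whatever condition actually distinguishes the two cases:
def foo(value):
    return ("foo", value)

def foobetter(value):
    return ("foobetter", value)

def should_use_foobetter(value):
    # hypothetical predicate: decide up front which path applies
    return isinstance(value, str)

def run(value):
    # branch on the predicate instead of calling foo() and catching its failure
    if should_use_foobetter(value):
        return foobetter(value)
    return foo(value)

print(run("hello"))   # ('foobetter', 'hello')
print(run(42))        # ('foo', 42)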