Python: trying not to repeat code with try/except and else

I have this:
try:
    if session.var:
        otherVar = session.var
    else:
        util = db.utility[1]
        otherVar = session.var = util.freshOutTheBank
except AttributeError:
    util = db.utility[1]
    otherVar = session.var = util.freshOutTheBank
...do stuff with otherVar
The issue is that session.var might not exist, or could be None. This code is also run more than once by a user during a session.
How do I avoid repeating the code? I basically want an 'except and else', or am I looking at this incorrectly?

Assuming this is a web2py session object, note that it is an instance of gluon.Storage, which is like a dictionary with two exceptions: (1) keys can be accessed like properties, and (2) accessing a non-existent key/property returns None rather than raising an exception. So, you can simply do something like:
otherVar = session.var = session.var if session.var else db.utility[1].freshOutTheBank
Note, if you want to distinguish between non-existent keys and keys with an explicit value of None, you cannot use hasattr(session, 'var'), as that will return True even if there is no 'var' key. Instead, check session.has_key('var') (or, equivalently, 'var' in session, since Storage is a dict subclass), which will return False if there is no 'var' key.
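To make the distinction concrete, here is a minimal sketch of a Storage-like class; this is NOT the real gluon.storage.Storage, just enough code to reproduce the behaviour the answer describes:

```python
# Minimal sketch of a Storage-like dict: attribute access on a missing key
# returns None instead of raising AttributeError.
class Storage(dict):
    def __getattr__(self, key):
        return self.get(key)       # missing key -> None, no exception
    def __setattr__(self, key, value):
        self[key] = value

session = Storage()
assert session.var is None         # non-existent key reads as None
assert hasattr(session, 'var')     # hasattr is True even with no 'var' key

session.var = None                 # now the key exists, explicitly None
assert 'var' in session            # membership testing tells the cases apart
```

Because __getattr__ never raises, hasattr() is always True here; only membership testing distinguishes a missing key from an explicit None.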

You can avoid using session.var if it doesn't exist by checking for it first, using hasattr. This avoids the need for the try/except block altogether.
if hasattr(session, 'var') and session.var is not None:
    ...
else:
    ...
An alternative might be to have the else in your original code just raise an exception to get to the except block, but it's sort of ugly:
try:
    if session.var:
        ...
    else:
        raise AttributeError
except AttributeError:
    ...
In this situation, I think the "Look Before you Leap" style of programming (using hasattr) is nicer than the usually more Pythonic style of "Easier to Ask Forgiveness than Permission" (which uses exceptions as part of flow control). But either one can work.
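The two styles can be seen side by side on a plain object with no special attribute behaviour (the names here are hypothetical, not the web2py session):

```python
# LBYL vs EAFP on an ordinary object whose 'var' attribute may not exist.
class Session(object):
    pass

session = Session()
DEFAULT = 'freshOutTheBank'

# LBYL ("Look Before You Leap"): test first, then act.
if hasattr(session, 'var') and session.var is not None:
    other_var = session.var
else:
    other_var = DEFAULT

# EAFP ("Easier to Ask Forgiveness than Permission"): act, catch the failure.
try:
    other_var2 = session.var
except AttributeError:
    other_var2 = DEFAULT

assert other_var == other_var2 == DEFAULT
```

Both arrive at the same value; the EAFP version as written handles only the missing-attribute case, not an explicit None.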
If your code was compartmentalized into smaller functions, it might be even easier to deal with the issue. For instance, if you wrote a get_session_var function, it could return from the successful case (inside the try and if blocks), and the two error cases could be resolved later:
def get_session_var(session):
    try:
        if session.var:
            return session.var
    except AttributeError:
        pass
    util = db.utility[1]
    session.var = util.freshOutTheBank
    return session.var


What is the elegant/Pythonic way to keep variables in scope, yet also catch exceptions?

I've been using Python for a few months now, and I love it so far. I also like how there is usually a single, "Pythonic" way to handle common programming problems.
In code I've been writing, as I make calls to network functions and have to handle exceptions, I keep running into this template of sorts that I end up writing:
proxyTest = None
try:
    proxyTest = isProxyWorking(proxy)
except TimeoutError:
    break
if proxyTest:
    ...
This is my way of declaring proxyTest so that it is in scope when I need to use it, yet also calling the function that will return a value for it inside of the proper exception handling structure. If I declare proxyTest inside of the try block, it will be out of scope for the rest of my program.
I feel like there has to be a more elegant way to handle this, but I'm not sure. Any suggestions?
You have a couple of better options. Continue your flow control in the else block:
try:
    proxyTest = isProxyWorking(proxy)
except TimeoutError:
    break
else:
    # proxyTest is guaranteed to be bound here
Or handle the failure case in the except block.
try:
    proxyTest = isProxyWorking(proxy)
except TimeoutError:
    proxyTest = None
# proxyTest is guaranteed to be bound here
Whichever is better depends on context, I think.
The obvious alternative would be to do the 'false' initialization in the except block:
try:
    proxyTest = isProxyWorking(proxy)
except TimeoutError:
    proxyTest = None
Whether this is easier/more appropriate than your constructions depends on how complicated the logic is in the middle, though.
I would put the code after "if proxyTest:" in the try block, just after binding proxyTest.

Returning error string from a function in python

I have a class function in Python that either returns a success or a failure, but in case of a failure I want it to send a specific error string back. I have 3 approaches in mind:
Pass in a variable error_msg to the function, originally set to None; in case of an error, it gets set to the error string. e.g.:
if not foo(self, input, error_msg):
    print "no error"
else:
    print error_msg
Return a tuple containing a bool and error_msg from the function.
Raise an exception in case of an error and catch it in the calling code. But since I don't see exceptions used often in the codebase I'm working on, I wasn't sure about taking this approach.
What is the Pythonic way of doing this?
Create your own exception and raise that instead:
class MyValidationError(Exception):
    pass

def my_function():
    if not foo():
        raise MyValidationError("Error message")
    return 4
You can then call your function as:
try:
    result = my_function()
except MyValidationError as exception:
    # handle exception here and get error message
    print exception.message
This style is called EAFP ("Easier to ask for forgiveness than permission") which means that you write the code as normal, raise exceptions when something goes wrong and handle that later:
This common Python coding style assumes the existence of valid keys or attributes and catches exceptions if the assumption proves false. This clean and fast style is characterized by the presence of many try and except statements. The technique contrasts with the LBYL style common to many other languages such as C.
Raise an error:
if foo(self, input, error_msg):
    raise SomethingError("You broke it")
And handle it:
try:
    something()
except SomethingError as e:
    print str(e)
It's the Pythonic approach and the most readable.
Returning a tuple like (12, None) may seem like a good solution, but it's hard to keep track of what each method returns if you're not consistent. Returning two different data types is even worse, as it will probably break code that assumes a constant data type.
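For contrast, here is a minimal sketch of the tuple-returning style (option 2) with a hypothetical parser, showing the bookkeeping every caller takes on:

```python
# Hypothetical tuple-returning style: every caller must remember the shape
# (value, error_msg) and check both slots.
def parse_port(text):
    try:
        return int(text), None
    except ValueError:
        return None, "not a number: %r" % text

value, err = parse_port("8080")
assert (value, err) == (8080, None)

value, err = parse_port("oops")
assert value is None and "oops" in err
```

Nothing forces a caller to check err, which is exactly the weakness the paragraph above describes; an exception, by contrast, cannot be silently ignored.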

Is there an analogue of this function in Python standard modules?

I'm looking for a standard function (operator, decorator) that would be equivalent to the following hand-written function:
def defaulted(func, defaultVal):
    try:
        result = func()
    except:
        result = defaultVal
    return result
Thanks!
There's nothing like that in the stdlib (to the best of my knowledge). For one thing, it's bad practice: you should never use a bare except. (Instead, specify the exceptions you want to catch; that way, you won't catch everything!)
Here's a decorator:
>>> def defaultval(error, value):
...     def decorator(func):
...         def defaulted(*args, **kwargs):
...             try:
...                 return func(*args, **kwargs)
...             except error:
...                 return value
...         return defaulted
...     return decorator
...
>>> @defaultval(NameError, "undefined")
... def get_var():
...     return name
...
>>> get_var()
'undefined'
No. Python's philosophy is that explicit is better than implicit. Most Python functions that are expected to throw exceptions regularly, like dict.__getitem__, provide equivalent versions that return a default value, like dict.get.
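The stdlib pairing described above, shown directly: an exception-raising accessor next to its default-returning twin.

```python
# dict.__getitem__ raises on a missing key; dict.get returns a default.
d = {'a': 1}

try:
    d['missing']                     # __getitem__ raises KeyError
    raised = False
except KeyError:
    raised = True
assert raised

assert d.get('missing') is None      # dict.get defaults to None...
assert d.get('missing', 42) == 42    # ...or to any default you pass
assert d['a'] == d.get('a') == 1
```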
I've used the new with contexts for something like this before; the code looked like:
with ignoring(IOError, OSError):
    # some non-critical file operations
But I didn't need to return anything either. Generally, the whole-function level is a bad place for this, and by using decorators you'd block any chance at actually getting the error should you want to handle it more gracefully elsewhere.
Also, except: is extremely dangerous; you probably mean (at least) except Exception:, and probably something even more tightly scoped, like IOError or KeyError. Otherwise you will also catch things like Ctrl-C, SystemExit, and misspelled variable names.
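The Ctrl-C/SystemExit point follows from the exception hierarchy: SystemExit and KeyboardInterrupt subclass BaseException but not Exception, so a bare except catches them while except Exception: does not.

```python
# SystemExit derives from BaseException, not Exception.
assert not issubclass(SystemExit, Exception)
assert issubclass(SystemExit, BaseException)

def bare_except():
    try:
        raise SystemExit
    except:                          # catches BaseException subclasses too
        return 'swallowed'

assert bare_except() == 'swallowed'

escaped = False
try:
    try:
        raise SystemExit
    except Exception:                # does NOT match SystemExit
        pass
except SystemExit:
    escaped = True
assert escaped
```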

Is there a neater alternative to `except: pass`?

I had a function that returned a random member of several groups in order of preference. It went something like this:
def get_random_foo_or_bar():
    "I'd rather have a foo than a bar."
    if there_are_foos():
        return get_random_foo()
    if there_are_bars():
        return get_random_bar()
    raise IndexError, "No foos, no bars"
However, the first thing get_random_foo does is verify there are foos and raise an IndexError if not, so there_are_foos is redundant. Moreover, a database is involved and using separate functions creates a concurrency issue. Accordingly, I rewrote it something like this:
def get_random_foo_or_bar():
    "Still prefer foos."
    try:
        return get_random_foo()
    except IndexError:
        pass
    try:
        return get_random_bar()
    except IndexError:
        pass
    raise IndexError, "No foos, no bars"
But I find this much less readable, and as I've never had reason to use pass before it feels instinctively wrong.
Is there a neater efficient pattern, or should I learn to accept pass?
Note: I'd like to avoid any nesting since other types may be added later.
Edit
Thanks everyone who said that pass is fine - that's reassuring!
Also thanks to those who suggested replacing the exception with a return value of None. I can see how this is a useful pattern, but I would argue it's semantically wrong in this situation: the functions have been asked to perform an impossible task so they should raise an exception. I prefer to follow the behaviour of the random module (eg. random.choice([])).
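For reference, the random-module behaviour the author wants to mirror:

```python
import random

# random.choice on an empty sequence raises IndexError: asking for a random
# element of nothing is an impossible task, not a None result.
try:
    random.choice([])
    raised = False
except IndexError:
    raised = True
assert raised
```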
That is exactly how I would write it. It's simple and it makes sense. I see no problem with the pass statements.
If you want to reduce the repetition and you anticipate adding future types, you could roll this up into a loop. Then you could change the pass to a functionally-equivalent continue statement, if that's more pleasing to your eyes:
for getter in (get_random_foo, get_random_bar):
    try:
        return getter()
    except IndexError:
        continue  # Ignore the exception and try the next type.
raise IndexError, "No foos, no bars"
Using try, except, pass is acceptable, but there is a cleaner way to write this using contextlib.suppress(), available in Python 3.4+.
from contextlib import suppress

def get_random_foo_or_bar():
    "Still prefer foos."
    with suppress(IndexError):
        return get_random_foo()
    with suppress(IndexError):
        return get_random_bar()
    raise IndexError("No foos, no bars")
pass is fine (there's a reason it's in the language!-), but a pass-free alternative just takes a bit more nesting:
try: return get_random_foo()
except IndexError:
    try: return get_random_bar()
    except IndexError:
        raise IndexError, "no foos, no bars"
Python's Zen (import this from the interactive interpreter prompt) says "flat is better than nested", but nesting is also in the language, for you to use when you decide (presumably being enlightened) that you can do better than that wise koan!-) (As in, "if you meet the Buddha on the road"...).
It looks a little weird to me that get_random_foo() is raising an IndexError when it doesn't take an index as a param (but it might make more sense in context). Why not have get_random_foo(), or a wrapper, catch the error and return None instead?
def get_random_foo_wrapper():
    try:
        return get_random_foo()
    except IndexError:
        return None

def get_random_foo_or_bar():
    "I'd rather have a foo than a bar."
    return get_random_foo_wrapper() or get_random_bar_wrapper() or None
Edit: I should mention that if foo & bar are objects that may evaluate to False (0 or '' say) then the or comparison will skip over them which is BAD
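The falsy-value pitfall, and one common fix, can be sketched with hypothetical getters; the sentinel object is my addition, not part of the answer above:

```python
# If a valid result can be falsy (0 or '', say), chaining with `or` silently
# replaces it. A unique sentinel object sidesteps that.
_MISSING = object()

def get_foo():
    return 0                         # valid result, but falsy

def get_bar():
    return 'bar'

# BAD: `or` discards the falsy foo and falls through to the bar.
assert (get_foo() or get_bar()) == 'bar'

# Sentinel-based wrapper keeps the falsy foo.
def get_foo_wrapper():
    try:
        return get_foo()
    except IndexError:
        return _MISSING

result = get_foo_wrapper()
if result is _MISSING:
    result = get_bar()
assert result == 0
```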
If it's just those two, could always just...
try:
    return get_random_foo()
except IndexError:
    try:
        return get_random_bar()
    except IndexError:
        raise IndexError, "No foos, no bars"
If it's more than two, what you have written seems perfectly acceptable.
Building on Peter Gibson's suggestion, you could create a generic wrapper function that swallows a given exception. And then you could write a function that returns such a generic wrapper for a provided exception. Or heck, for a provided list of exceptions.
def maketrap(*exceptions):
    def trap(func, *args, **kwargs):
        try:
            return func(*args, **kwargs)
        except exceptions:
            return None
    return trap

def get_random_foo_or_bar():
    mytrap = maketrap(IndexError)
    return mytrap(get_random_foo) or mytrap(get_random_bar) or None
If you don't really need the exception message (just the type):
def get_random_foo_or_bar():
    try:
        return get_random_foo()
    except IndexError:
        return get_random_bar()  # if failing at this point,
                                 # the whole function will raise IndexError
Is it necessary that get_random_foo/bar() raise an IndexError if it's unable to succeed?
If they returned None, you could do:
def get_random_foo_or_bar():
    return get_random_foo() or get_random_bar()

Nice exception handling when re-trying code

I have some test cases. The test cases rely on data which takes time to compute. To speed up testing, I've cached the data so that it doesn't have to be recomputed.
I now have foo(), which looks at the cached data. I can't tell ahead of time what it will look at, as that depends a lot on the test case.
If a test case fails because it doesn't find the right cached data, I don't want it to fail - I want it to compute the data and then try again. I also don't know what exception in particular it will throw because of the missing data.
My code right now looks like this:
if cacheExists:
    loadCache()
    dataComputed = False
else:
    calculateData()
    dataComputed = True

try:
    foo()
except:
    if not dataComputed:
        calculateData()
        dataComputed = True
        try:
            foo()
        except:
            # error handling code
    else:
        # the same error handling code
What's the best way to re-structure this code?
I disagree with the key suggestion in the existing answers, which basically boils down to treating exceptions in Python as you would in, say, C++ or Java. That's NOT the preferred style in Python, where the good old idea that "it's better to ask forgiveness than permission" often applies: attempt an operation and deal with the exception, if any, rather than obscuring your code's main flow and incurring overhead with thorough preliminary checks. I do agree with Gabriel that a bare except is hardly ever a good idea (unless all it does is some form of logging followed by a raise to let the exception propagate). So, say you have a tuple with all the exception types that you do expect and want to handle the same way, say:
expected_exceptions = KeyError, AttributeError, TypeError
and always use except expected_exceptions: rather than bare except:.
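A tuple of exception types behaves like several except clauses at once; a small sketch with throwaway lambdas:

```python
# One tuple guards several failure modes in a single except clause.
expected_exceptions = (KeyError, AttributeError, TypeError)

def attempt(thunk):
    try:
        return thunk()
    except expected_exceptions:
        return 'handled'

assert attempt(lambda: {}['missing']) == 'handled'   # KeyError
assert attempt(lambda: None.missing) == 'handled'    # AttributeError
assert attempt(lambda: len(42)) == 'handled'         # TypeError
assert attempt(lambda: 'fine') == 'fine'             # no exception
```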
So, with that out of the way, one slightly less-repetitious approach to your needs is:
try:
    foo1()
except expected_exceptions:
    try:
        if condition:
            foobetter()
        else:
            raise
    except expected_exceptions:
        handleError()
A different approach is to use an auxiliary function to wrap the try/except logic:
def may_raise(expected_exceptions, somefunction, *a, **k):
    try:
        return False, somefunction(*a, **k)
    except expected_exceptions:
        return True, None
Such a helper may often come in useful in several different situations, so it's pretty common to have something like this somewhere in a project's "utilities" modules. Now, for your case (no arguments, no results) you could use:
failed, _ = may_raise(expected_exceptions, foo1)
if failed and condition:
    failed, _ = may_raise(expected_exceptions, foobetter)
if failed:
    handleError()
which I would argue is more linear and therefore simpler. The only issue with this general approach is that an auxiliary function such as may_raise does not FORCE you to deal in some way or other with exceptions, so you might just forget to do so (just like the use of return codes, instead of exceptions, to indicate errors, is prone to those return values mistakenly being ignored); so, use it sparingly...!-)
Using blanket exceptions isn't usually a great idea. What kind of Exception are you expecting there? Is it a KeyError, AttributeError, TypeError...
Once you've identified what type of error you're looking for you can use something like hasattr() or the in operator or many other things that will test for your condition before you have to deal with exceptions.
That way you can clean up your logic flow and save your exception handling for things that are really broken!
Sometimes there's no nice way to express a flow, it's just complicated. But here's a way to call foo() in only one place, and have the error handling in only one place:
if cacheExists:
    loadCache()
    dataComputed = False
else:
    calculateData()
    dataComputed = True

while True:
    try:
        foo()
        break
    except:
        if not dataComputed:
            calculateData()
            dataComputed = True
            continue
        else:
            # the error handling code
            break
You may not like the loop, YMMV...
Or:
if cacheExists:
    loadCache()
    dataComputed = False
else:
    calculateData()
    dataComputed = True

done = False
while not done:
    try:
        foo()
        done = True
    except:
        if not dataComputed:
            calculateData()
            dataComputed = True
            continue
        else:
            # the error handling code
            done = True
I like the alternative approach proposed by Alex Martelli.
What do you think about passing a list of functions as an argument to may_raise? The functions would be executed until one succeeds.
Here is the code
def foo(x):
    raise Exception("Arrrgh!")
    return 0

def foobetter(x):
    print "Hello", x
    return 1

def try_many(functions, expected_exceptions, *a, **k):
    ret = None
    for f in functions:
        try:
            ret = f(*a, **k)
        except expected_exceptions, e:
            print e
        else:
            break
    return ret

print try_many((foo, foobetter), Exception, "World")
result is
Arrrgh!
Hello World
1
Is there a way to tell if you want to do foobetter() before making the call? If you get an exception it should be because something unexpected (exceptional!) happened. Don't use exceptions for flow control.
