I'm currently making a game in Python using a library called pygame. I have a method called getTile() that returns the tile every time the player moves. The lvlDict makes up the world.
def getTile(self, pos):
    x = pos[0]
    y = pos[1]
    return self.lvlDict[x, y].kind
and I'm thinking about changing it to this:
def getTile(self, pos):
    try:
        x = pos[0]
        y = pos[1]
        return self.lvlDict[x, y].kind
    except KeyError:
        return None
where the caller would then decide what to do with a None result, most likely just moving the player back to where it previously was. Is this inherently bad, or is it acceptable?
Even if it's just a matter of opinion, I'd like to know what people think about it.
In this case, I think asking for forgiveness is better than asking for permission on the off chance that something happens to your dictionary between check and indexing, but in general, I tend to prefer not using exceptions for logic, entirely because exceptions are bloody slow.
In [1]: d = {i:i for i in xrange(10000)}

In [4]: def f(d):
            try:
                d["blue"]
            except KeyError:
                pass

In [5]: def g(d):
            if "blue" in d: d["blue"]
#Case: Key not in Dict
In [7]: timeit f(d)
1000000 loops, best of 3: 950 ns per loop
In [8]: timeit g(d)
10000000 loops, best of 3: 135 ns per loop
#Case: Key in Dict
In [9]: d["blue"] = 8
In [10]: timeit f(d)
10000000 loops, best of 3: 151 ns per loop
Exceptions are slow, so it's best to save them for truly exceptional circumstances. If it's an unexpected situation that doesn't normally crop up, then an exception is sensible. If an empty tile is a common occurrence, it would be better to use an if check instead.
Another way to look at it is whether the user can trigger it or not. If the player can cause this condition merely by moving around the map, it probably shouldn't be an exception. Exceptions are more appropriate for when you encounter unexpected cases that aren't the user's fault, such as "oops, I forgot to put a wall here and the player wandered off the map" or "I expected this file to be there and it wasn't."
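For the common-case route, here is a minimal sketch of an if-based getTile, assuming (as in the question) that lvlDict is keyed by (x, y) tuples; the method name and attributes are taken from the question:

def getTile(self, pos):
    # LBYL sketch: treat a missing tile as an ordinary, expected case
    key = (pos[0], pos[1])
    if key in self.lvlDict:
        return self.lvlDict[key].kind
    return None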
Although it is true that exceptions will often be slower than checking up front for possible failures, the difference rarely matters in practical applications, and it is usually insignificant in the kinds of applications for which Python is the language of choice. As the Python glossary entry for EAFP explains:
Easier to ask for forgiveness than permission. This common Python
coding style assumes the existence of valid keys or attributes and
catches exceptions if the assumption proves false. This clean and fast
style is characterized by the presence of many try and except
statements. The technique contrasts with the LBYL style common to many
other languages such as C.
Indeed, it is common in Python to employ exceptions (and ask for forgiveness) to handle errors, even if those errors are expected to occur often and on a regular basis.
Now, in your particular case it seems equally elegant to simply check whether the key is in the dictionary (as suggested by other answers). Still, you should not be afraid to use exceptions; their use for error handling, even for expected errors, is totally Pythonic.
Using exceptions to implement normal flow through your code is not bad form in Python. Such an approach is used by some language constructs, including generators, which signal that there are no more elements by raising StopIteration.
Other answers have pointed out that relying on the exception may make the code slower, and not relying on it may introduce a race condition. You can avoid both problems by using both an if construction and an except block. But that introduces another problem in that it makes your code more complicated and thus more error prone.
I would say that correctness is in general more important than performance (and if this piece of code was truly performance critical, you probably wouldn't write it in python to begin with). So if there is any possibility for the dict to change between checking for existence of the key and accessing it, I would definitely go with the exception. If you know such a race cannot happen, then I suggest you take the approach you find most readable.
There is another aspect of your code that does appear to be bad form. Inside your try block there are three different places where a KeyError could be raised, and I am guessing you only intended to catch the exception from one of them. By also catching the exceptions the other two locations could raise, you make the code harder to debug: a bug that would have crashed with a descriptive traceback instead turns into subtly incorrect behavior.
So only put the lines that really need to be there inside the try block.
I am guessing this construction is closer to what you intended:
def getTile(self, pos):
    x = pos[0]
    y = pos[1]
    try:
        return self.lvlDict[x, y].kind
    except KeyError:
        return None
If you have a try block in which the first line could raise an exception, and the remaining lines are inside the try block because you don't want them to be executed in case of an exception, then you should be using a try-except-else construction.
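A minimal sketch of that construction, applied to the same lookup (the names are the ones from the question):

def getTile(self, pos):
    x = pos[0]
    y = pos[1]
    try:
        tile = self.lvlDict[x, y]   # only the dictionary lookup is guarded
    except KeyError:
        return None
    else:
        return tile.kind            # runs only when no exception was raised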
From a non-Python perspective, I would say that exceptions should be used to handle invalid cases only, and ask the question: are we discussing a nominal case, an edge case, or a case of invalid input/result? If your "None" case is valid, but an edge case, I would say that it should be handled by program logic (do not throw an exception). If the "None" could happen, but really never should, then it should be handled by exception.
Not being familiar with Python, but assuming parallels with other programming environments e.g. C#:
Wrapping code blocks with try/catches has negligible to nil performance impact; as soon as an exception is thrown, stuff happens behind the scenes which has a non-trivial performance cost.
The prototypical case for throwing an exception is when you can't logically proceed from where you are, have to discard the current program state and find your way back to some earlier known state from which you can recover. That's what exception mechanisms are designed for, even if they find broader use.
Used appropriately, exceptions help you write cleaner code by not cluttering with extra if-then logic, leaving only true "business" logic. They also get you away from awkward function signatures with cobbled-up return values for signaling error conditions. Used to excess, however, exceptions can end up hiding "business" logic, obfuscating your code.
Related
I'm sitting in front of this problem and not sure which path I should choose.
I get string inputs representing an ID.
Most of the time it will be a meaningful key name used to look up the actual ID in a table/dict.
Direct integer ID input should be possible as well, but these too will arrive as strings.
I'm not sure if this is more a style question or if there is a huge leaning towards one option.
Option 1:
Use a regexp to check whether the string is an integer; if it matches, convert it.
Option 2:
try
    ID = string.tointeger()
catch
    ID = table[string]
I'm leaning towards the try...catch option as it looks cleaner, but I'm not sure whether avoiding error handling via the regexp would actually be the cleaner way deep down and should be preferred.
I have no deeper knowledge of try...catch: is it actually pretty smooth ('use it if you like') or a hiccup ('avoid it if you can')?
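For concreteness, a minimal Python sketch of Option 2 (resolve_id, raw, and table are hypothetical names standing in for the pseudocode above):

def resolve_id(raw, table):
    # EAFP sketch: attempt the integer conversion first and fall back to the
    # lookup table only when the conversion fails
    try:
        return int(raw)
    except ValueError:
        return table[raw]

A key that is missing from table would still raise a KeyError here, which may or may not be what you want.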
"He who would sacrifice correctness for performance deserves neither[!?]."
TL;DR:
try...catch can be very fast and looks cleaner than code that tries to route around errors. On the other hand, it can be harder for others to follow when the blocks are large and things happen behind the scenes that you possibly don't want.
This is, of course, language dependent.
A regexp can be very slow, and putting a cheap filter in front of it can save you.
--
So trying to answer this myself now:
I hoped for a general answer, but in the end it is of course language dependent.
While doing research I often found the argument that it is expensive: creating the exception object, collecting the call stack... Sure, that makes sense. But often there was no further explanation, and it felt more like a stigma or superstition: try...catch is bad.
From my tests (C++): the try...catch method was the fastest overall, a mere 3% drop in execution speed. Two simple "does the string start with digits?" regexps ("^-?\d+") for integer and float cost a 50% drop, and since I do some substring analysis after them, that 50% applied repeatedly (50%^x) became noticeable.
In the end I stumbled over Bjarne Stroustrup's (the creator of C++) own FAQ:
http://www.stroustrup.com/bs_faq2.html#exceptions-why
What good can using exceptions do for me? The basic answer is: Using
exceptions for error handling makes your code simpler, cleaner, and
less likely to miss errors. But what's wrong with "good old errno and
if-statements"? The basic answer is: Using those, your error handling
and your normal code are closely intertwined. That way, your code gets
messy and it becomes hard to ensure that you have dealt with all
errors (think "spaghetti code" or a "rat's nest of tests").
[...]
Common objections to the use of exceptions:
but exceptions are expensive!: Not really. Modern C++ implementations
reduce the overhead of using exceptions to a few percent (say, 3%) and
that's compared to no error handling. Writing code with error-return
codes and tests is not free either. As a rule of thumb, exception
handling is extremely cheap when you don't throw an exception. It
costs nothing on some implementations. All the cost is incurred when
you throw an exception: that is, "normal code" is faster than code
using error-return codes and tests. You incur cost only when you have
an error.
Well, in the end I decided to use a simple manual filter (is the first character in [-, 0-9]?) before the regexp. That might be 10% slower than try...catch, but it does not throw errors 80% of the time. Still good performance and nice code :)
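Sketched in Python for consistency with the rest of this thread, a hypothetical version of that first-character pre-filter might look like this:

def looks_like_int(raw):
    # Cheap pre-filter sketch (hypothetical name): inspect only the first
    # character before paying for a regexp match or a raised exception
    return bool(raw) and (raw[0].isdigit() or raw[0] == "-")

Only strings that pass this check would then be handed to the regexp or to the integer conversion.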
From the Python glossary: https://docs.python.org/3/glossary.html#term-eafp
EAFP
Easier to ask for forgiveness than permission. This common Python
coding style assumes the existence of valid keys or attributes and
catches exceptions if the assumption proves false. This clean and fast
style is characterized by the presence of many try and except
statements. The technique contrasts with the LBYL style common to many
other languages such as C.
LBYL
Look before you leap. This coding style explicitly tests for pre-conditions before making calls or lookups. This style contrasts
with the EAFP approach and is characterized by the presence of many if
statements.
In a multi-threaded environment, the LBYL approach can risk introducing a race condition between “the looking” and “the leaping”.
For example, the code, if key in mapping: return mapping[key] can fail
if another thread removes key from mapping after the test, but before
the lookup. This issue can be solved with locks or by using the EAFP
approach.
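As a sketch, the EAFP version of the glossary's example removes that window entirely (lookup is a hypothetical name; this essentially re-implements what dict.get already does):

def lookup(mapping, key, default=None):
    # EAFP sketch: there is no gap between a membership test and the lookup
    # that another thread could exploit
    try:
        return mapping[key]
    except KeyError:
        return default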
Whenever I chain conditions in Python (or any other language, to be honest), I stumble upon asking myself this, and it kicks me out of the productive "Zone".
When I chain conditions, I can, by ordering them correctly, check conditions that would produce an error if the preceding conditions had not been checked first.
As an example, let's assume the following snippet:
if "attr" in some_dictionary and some_value in some_dictionary["attr"]:
    print("whooohooo")
If the first condition weren't in first place, or were absent entirely, the second condition might produce a KeyError.
I do this pretty often simply to save space in the code, but I've always wondered whether it is good style, whether it comes with a risk, or whether it's simply "pythonic".
A more Pythonic way is to "ask for forgiveness rather than permission". In other words, use a try-except block:
try:
    if some_value in some_dictionary["attr"]:
        print("Woohoo")
except KeyError:
    pass
Python is a late-binding language, which is reflected in this kind of check. The behavior relied on here is called short-circuiting. One thing I often do is:
def do(condition_check=None):
    if condition_check is not None and condition_check():
        pass  # do stuff
Now, many people will argue that try: except: is more appropriate. This really depends on the use case!
if expressions are faster when the check is likely to fail, so use them when you know what is happening.
try expressions are faster when the check is likely to succeed, so use them to safeguard against exceptional circumstances.
if is explicit, so you know precisely what you are checking. Use it if you know what is happening, i.e. strongly typed situations.
try is implicit, so you only have to care about the outcome of a call. Use it when you don't care about the details, i.e. in weakly typed situations.
if works in a well-defined scope - namely right where you are performing the check. Use it for nested relations, where you want to check the top-most one.
try works on the entire contained call stack - an exception may be thrown several function calls deeper. Use it for flat or well-defined calls.
Basically, if is a precision tool, while try is a hammer - sometimes you need precision, and sometimes you just have nails.
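A small sketch of that last distinction (hypothetical names; the KeyError starts two calls below the handler):

def innermost(mapping, key):
    return mapping[key]                # the KeyError can originate down here

def middle(mapping, key):
    return innermost(mapping, key)     # no handling at this level

# try catches the failure no matter how deep in the call stack it started
try:
    middle({}, "missing")
except KeyError:
    print("handled at the top")

# if only guards the single lookup right where the check is written
data = {}
if "missing" in data:
    print(data["missing"])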
I have been working on a Python project that has grown somewhat large and has several layers of functions. Due to some boneheaded early decisions I'm finding that I have to go and fix a lot of crashers because the lower-level functions are returning a type I did not expect in the higher-level functions (usually None).
Before I go through and clean this up, I got to wondering what is the most pythonic way of indicating error conditions and handling them in higher functions?
What I have been doing for the most part is if a function can not complete and return its expected result, I'll return None. This gets a little gross, as you end up having to always check for None in all the functions that call it.
def lowLevel():
    ## some error occurred
    return None

    ## processing was good, return normal result string
    return resultString

def highLevel():
    resultFromLow = lowLevel()
    if not resultFromLow:
        return None

    ## some processing error occurred
    return None

    ## processing was good, return normal result string
    return resultString
I guess another solution might be to throw exceptions. With that you still get a lot of code in the calling functions to handle the exception.
Nothing seems super elegant. What do other people use? In obj-c a common pattern is to return an error parameter by reference, and then the caller checks that.
It really depends on what you want to do about the fact this processing can't be completed. If these are really exceptions to the ordinary control flow, and you need to clean up, bail out, email people etc. then exceptions are what you want to throw. In this case the handling code is just necessary work, and is basically unavoidable, although you can rethrow up the stack and handle the errors in one place to keep it tidier.
If these errors can be tolerated, then one elegant solution is to use the Null Object pattern. Rather than returning None, you return an object that mimics the interface of the real object that would normally be returned, but just doesn't do anything. This allows all code downstream of the failure to continue to operate, oblivious to the fact there was a failure. The main downside of this pattern is that it can make it hard to spot real errors, since your code will not crash, but may not produce anything useful at the end of its run.
A common example of the Null Object pattern in Python is returning an empty list or dict when your lower-level function has come up empty. Any subsequent function using this returned value and iterating through its elements will just fall through silently, without the need for error checking. Of course, if you require the list to have at least one element for some reason, then this won't work, and you're back to handling an exceptional situation again.
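A minimal sketch of that idea, with hypothetical names (find_records and cache are made up for illustration):

def find_records(query, cache):
    # Null Object sketch: return an empty list instead of None when nothing is
    # found, so callers can iterate without special-casing a failure sentinel
    rows = cache.get(query)
    return rows if rows is not None else []

# the loop body simply never runs when the result is empty
for row in find_records("missing-key", {}):
    print(row)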
On the bright side, you have discovered exactly the problem with using a return value to indicate an error condition.
Exceptions are the pythonic way to deal with problems. The question you have to answer (and I suspect you already did) is: is there a useful default that can be returned by low_level functions? If so, take it and run with it; otherwise, raise an exception (ValueError, TypeError, or even a custom error).
Then, further up the call stack, where you know how to deal with the problem, catch the exception and deal with it. You don't have to catch exceptions immediately -- if high_level calls mid_level, which calls low_level, it's okay to not have any try/except in mid_level and let high_level deal with it. It may be that all you can do is have a try/except at the top of your program to catch and log all uncaught and undealt-with errors, and that can be okay.
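A minimal sketch of that layering, with hypothetical names (low_level raises, mid_level stays out of the way, high_level decides):

def low_level(data):
    # raise instead of returning None when the work cannot be done
    if not data:
        raise ValueError("no data to process")
    return data.upper()

def mid_level(data):
    # no try/except here: a ValueError from low_level simply propagates upward
    return low_level(data) + "!"

def high_level(data):
    try:
        return mid_level(data)
    except ValueError:
        return None   # or log it, substitute a default, and so on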
This is not necessarily Pythonic as such, but experience has taught me to let exceptions "lie where they lie".
That is to say: don't unnecessarily hide them or re-raise a different exception.
It's sometimes good practice to let the callee fail rather than trying to capture and hide all kinds of error conditions.
Obviously this topic is and can be a little subjective; but if you don't hide exceptions or raise different ones, then it's much easier to debug your code and much easier for the callers of your functions or API to understand what went wrong.
My case right now:
try:
    try:
        condition
    catch
        try:
            condition
        catch
catch
    major failure
Is it bad to have the code like that? Does it clutter too much, or what are the implications of something like that?
No, that's somewhat common (except the keyword is except rather than catch). It depends on what you need to do and the design.
What IS bad, that I see too much of, is catching top-level Exception class, rather than something more specific (e.g. KeyError). Or raising the same.
I wouldn't just hand down a verdict and claim "it's bad", because sometimes you may need it. Python sometimes deliberately throws exceptions instead of letting you ask first ("does this ...?") [the EAFP motto], and in some cases nesting try/except blocks is useful, when it makes sense with the logical flow of the code.
But my guess is that most times you don't. So a better question in your case would be to present a specific use case where you think you need such code.
I read in an earlier answer that exception handling is cheap in Python so we shouldn't do pre-conditional checking.
I have not heard of this before, but I'm relatively new to Python. Exception handling means a dynamic call and a static return, whereas an if statement is static call, static return.
How can doing the checking be bad and the try-except be good? It seems like it should be the other way around. Can someone explain this to me?
Don't sweat the small stuff. You've already picked one of the slower scripting languages out there, so trying to optimize down to the opcode is not going to help you much. The reason to choose an interpreted, dynamic language like Python is to optimize your time, not the CPU's.
If you use common language idioms, then you'll see all the benefits of fast prototyping and clean design and your code will naturally run faster as new versions of Python are released and the computer hardware is upgraded.
If you have performance problems, then profile your code and optimize your slow algorithms. But in the mean time, use exceptions for exceptional situations since it will make any refactoring you ultimately do along these lines a lot easier.
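A minimal sketch of that advice, profiling before deciding whether the exception handling is actually a bottleneck (hot_path is a hypothetical workload made up for illustration):

import cProfile

def hot_path():
    # hypothetical workload: half the lookups miss
    d = {i: i for i in range(10000)}
    total = 0
    for i in range(100000):
        try:
            total += d[i % 20000]
        except KeyError:
            pass
    return total

# profile first; only worry about the try/except if it actually shows up here
cProfile.run("hot_path()")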
You might find this post helpful: Try / Except Performance in Python: A Simple Test where Patrick Altman did some simple testing to see what the performance is in various scenarios pre-conditional checking (specific to dictionary keys in this case) and using only exceptions. Code is provided as well if you want to adapt it to test other conditionals.
The conclusions he came to:
From these results, I think it is fair to quickly determine a number of conclusions:
1. If there is a high likelihood that the element doesn't exist, then you are better off checking for it with has_key.
2. If you are not going to do anything with the exception if it is raised, then you are better off not wrapping the lookup in a try/except at all.
3. If it is likely that the element does exist, then there is a very slight advantage to using a try/except block instead of using has_key; however, the advantage is very slight.
Putting aside the performance measurements that others have said, the guiding principle is often structured as "it is easier to ask forgiveness than ask permission" vs. "look before you leap."
Consider these two snippets:
# Look before you leap
if not os.path.exists(filename):
    raise SomeError("Cannot open configuration file")
f = open(filename)
vs.
# Ask forgiveness ...
try:
    f = open(filename)
except IOError:
    raise SomeError("Cannot open configuration file")
Equivalent? Not really. OSes are multi-tasking systems. What happens if the file was deleted between the test for 'exists' and the 'open' call?
What happens if the file exists but it's not readable? What if it's a directory name instead of a file? There can be many possible failure modes, and checking all of them is a lot of work, especially since the 'open' call already checks and reports all of those possible failures.
The guideline should be to reduce the chance of inconsistent state, and the best way for that is to use exceptions instead of test/call.
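As a sketch of that guideline (SomeError and filename are the hypothetical names from the snippets above), EAFP combined with a context manager keeps the state consistent even when later processing fails:

class SomeError(Exception):
    pass

filename = "config.ini"   # hypothetical path

def load_config():
    try:
        # open() itself checks and reports every failure mode, and the with
        # block closes the file even if the read raises
        with open(filename) as f:
            return f.read()
    except OSError as exc:
        raise SomeError("Cannot open configuration file") from exc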
"Can someone explain this to me?"
Depends.
Here's one explanation, but it's not helpful. Your question stems from your assumptions. Since the real world conflicts with your assumptions, it must mean your assumptions are wrong. Not much of an explanation, but that's why you're asking.
"Exception handling means a dynamic call and a static return, whereas an if statement is static call, static return."
What does "dynamic call" mean? Searching stack frames for a handler? I'm assuming that's what you're talking about. And a "static call" is somehow locating the block after the if statement.
Perhaps this "dynamic call" is not the most costly part of the operation. Perhaps the if-statement expression evaluation is slightly more expensive than the simpler "try-it-and-fail".
Turns out that Python's internal integrity checks are almost the same as your if-statement, and have to be done anyway. Since Python's always going to check, your if-statement is (mostly) redundant.
You can read about low-level exception handling in http://docs.python.org/c-api/intro.html#exceptions.
Edit
More to the point: The if vs. except debate doesn't matter.
Since exceptions are cheap, do not label them as a performance problem.
Use what makes your code clear and meaningful. Don't waste time on micro-optimizations like this.
With Python, it is easy to check different possibilities for speed: get to know the timeit module:
... example session (using the command line) that compares the cost of using hasattr() vs. try/except to test for missing and present object attributes:
% timeit.py 'try:' ' str.__nonzero__' 'except AttributeError:' ' pass'
100000 loops, best of 3: 15.7 usec per loop
% timeit.py 'if hasattr(str, "__nonzero__"): pass'
100000 loops, best of 3: 4.26 usec per loop
% timeit.py 'try:' ' int.__nonzero__' 'except AttributeError:' ' pass'
1000000 loops, best of 3: 1.43 usec per loop
% timeit.py 'if hasattr(int, "__nonzero__"): pass'
100000 loops, best of 3: 2.23 usec per loop
These timing results show that in the hasattr() case, raising an exception is slow, but performing a test is slower than not raising the exception. So, in terms of running time, using an exception for handling exceptional cases makes sense.
EDIT: The command line option -n will default to a large enough count so that the run time is meaningful. A quote from the manual:
If -n is not given, a suitable number of loops is calculated by trying successive powers of 10 until the total time is at least 0.2 seconds.
I am a Python beginner as well. While I cannot say exactly why exception handling has been called cheap in the context of that answer, here are my thoughts:
Note that checking with if-elif-else has to evaluate a condition every time. Exception handling, including the search for an exception handler, occurs only in an exceptional condition, which is likely to be rare in most cases. That is a clear efficiency gain.
As pointed out by Jay, it is better to use conditional logic rather than exceptions when there is a high likelihood of the key being absent. This is because if the key is absent most of the time, it is not an exceptional condition.
That said, I suggest that you don't worry about efficiency and rather about meaning. Use exception handling to detect exceptional cases and checking conditions when you want to decide upon something. I was reminded about the importance of meaning by S.Lott just yesterday.
Case in point:
def xyz(key):
    dictOb = {'x': 1, 'y': 2, 'z': 3}
    # Condition evaluated every time
    if dictOb.has_key(key):   # Access 1 to dict
        print dictOb[key]     # Access 2
Versus
# Exception mechanism is in play only when the key isn't found.
def xyz(key):
    dictOb = {'x': 1, 'y': 2, 'z': 3}
    try:
        print dictOb[key]     # Access 1
    except KeyError:
        print "Not Found"
Overall, having some code that handles something like a missing key "just in case" calls for exception handling. But in situations where the key isn't present most of the time, what you really want to do is decide whether the key is present, which calls for if-else. Python emphasizes and encourages saying what you mean.
Why exceptions are preferred to if-elif:
It expresses the meaning more clearly when you are looking for exceptional, i.e. unusual/unexpected, conditions in your code.
It is cleaner and a whole lot more readable.
It is more flexible.
It can be used to write more concise code.
Avoids a lot of nasty checking.
It is more maintainable.
Note
When we avoid using try-except, Exceptions continue being raised. Exceptions which aren't handled simply go to the default handler. When you use try-except, you can handle the error yourself. It might be more efficient because if-else requires condition evaluation, while looking for an exception handler may be cheaper. Even if this is true, the gain from it will be too minor to bother thinking about.
I hope my answer helps.
What are static versus dynamic calls and returns, and why do you think that calls and returns are any different in Python depending on whether you are doing them in a try/except block? Even if you aren't catching an exception, Python still has to handle the call possibly raising something, so it doesn't make a difference to Python in regards to how the calls and returns are handled.
Every function call in Python involves pushing the arguments onto the stack and invoking the callable. Every single function termination is followed by the caller, in the internal wiring of Python, checking for a successful or exceptional termination and handling it accordingly. In other words, if you think that there is some additional handling when you are in a try/except block that is somehow skipped when you are not in one, you are mistaken. I assume that is what your "static" versus "dynamic" distinction was about.
Further, it is a matter of style, and experienced Python developers come to read exception catching well, so that when they see the appropriate try/except around a call, it is more readable than a conditional check.
The general message, as S.Lott said, is that try/except doesn't hurt so you should feel free to use it whenever it seems appropriate.
This debate is often called "LBYL vs EAFP" – that's "look before you leap" vs "easier to ask forgiveness than permission". Alex Martelli weighs in on the subject here: http://mail.python.org/pipermail/python-list/2003-May/205182.html This debate is almost six years old, but I don't think the basic issues have changed very much.