Right now I'm doing this:
try:
    while True:
        s = client.recv_into(buff, buff_size)
        roll += buff.decode()
I repeatedly call client.recv_into until it raises an exception. Now, I know eventually, without a doubt, it will raise an exception...
Is there a cleaner way to do this? Is there some sort of loop-until-exception construct or a common way to format this?
There are two ways to do this:
Like you did
try:
    while True:
        s = client.recv_into(buff, buff_size)
        roll += buff.decode()
except YourException:
    pass
or
while True:
    try:
        s = client.recv_into(buff, buff_size)
        roll += buff.decode()
    except YourException:
        break
Personally, I would prefer the second solution as the break keyword makes clear what is happening in case of the exception.
Secondly, you should only catch the exception you're expecting (YourException in my example). Otherwise IOError, KeyboardInterrupt, SystemExit, etc. will also be caught, hiding "real" errors and potentially preventing your program from exiting properly.
That appears to be the (pythonic) way to do it. Indeed, the Python itertools page gives the following recipe for iter_except (calls a function func until the desired exception occurs):
def iter_except(func, exception, first=None):
    try:
        if first is not None:
            yield first()
        while 1:
            yield func()
    except exception:
        pass
which is almost exactly what you've done here (although you probably do want to add an except [ExceptionName] clause to your try block, as Nils mentioned).
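For illustration, the recipe could be plugged into the original snippet roughly like this (recv_chunk is a made-up helper, and socket.error is only a guess at the exception your client actually raises):

import socket

def recv_chunk():
    # hypothetical helper wrapping the two calls from the question
    client.recv_into(buff, buff_size)
    return buff.decode()

# consume chunks until the socket raises, then stop quietly
roll = "".join(iter_except(recv_chunk, socket.error))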
Related
I'm attempting an operation in Python, and if it fails in certain ways I'd like to retry up to 10 times. If it fails in any other way I'd like it to fail immediately. After 10 retries I'd like all failures to propagate to the caller.
I've been unable to code the flow control in a satisfying way. Here's one example of the behavior (but not the style!) I want:
def run():
    max_retries = 10
    for retry_index in range(max_retries):
        try:
            result = run_operation()
        except OneTypeOfError:
            if retry_index < max_retries - 1:
                continue
            else:
                raise
        if result.another_type_of_error:
            if retry_index < max_retries - 1:
                continue
            else:
                raise AnotherTypeOfError()
        try:
            result.do_a_followup_operation()
        except AThirdTypeOfError:
            if retry_index < max_retries - 1:
                continue
            else:
                raise
        return result
    raise Exception("We don't expect to end up here")
At first I thought I could just refactor this so the retry logic is separate from the error handling logic. The problem is that if, for example, OneTypeOfError is raised by result.do_a_followup_operation(), I don't want to retry at all in that case. I only want to retry in the specific circumstances coded above.
So I thought perhaps I could refactor this into a function which returns the result, a raised Exception (if any), and a flag indicating whether it's a retryable exception. Somehow that felt less elegant than the above to me.
I'm wondering if there are any flow control patterns in Python which might help here.
You can use a specific exception class and recursion to be a little more DRY. Something along these lines:
class Retry(Exception):
    def __init__(self, e):
        super().__init__()
        self.e = e

def run(max_retries=10, exc=None):
    if max_retries <= 0:
        raise exc or Exception('Whatever')
    try:
        try:
            result = run_operation()
        except OneTypeOfError as e:
            raise Retry(e)
        if result.another_type_of_error:
            raise Retry(AnotherTypeOfError())
        try:
            result.do_a_followup_operation()
        except AThirdTypeOfError as e:
            raise Retry(e)
    except Retry as r:
        return run(max_retries=max_retries - 1, exc=r.e)
    else:
        return result
An iterative solution with a given number of iterations seems semantically questionable. After all, you want the whole thing to succeed and a retry is a fallback. And having run out of retries sounds like a base case to me.
If I understand this correctly, you could do it as follows by changing two things:
Counting tries_left down from 10 instead of retry_index up from 0 reads more naturally and lets you exploit that positive numbers are truthy.
If you changed (or wrapped) run_operation() such that it would already raise AnotherTypeOfError if result.another_error is true, you could combine the first two except blocks.
The code can optionally be made slightly more dense by omitting the else after raise (or after continue, if you choose to test if tries_left instead), since the control flow is diverted at that point anyway, and by putting a simple statement on the same line as a bare if without else.
for tries_left in range(10, -1, -1):
    try:
        result = run_operation()
    except (OneTypeOfError, AnotherTypeOfError):
        if not tries_left: raise
        continue
    try:
        result.do_a_followup_operation()
    except AThirdTypeOfError:
        if not tries_left: raise
        continue
    return result
Edit: Ah I didn't realize that your code indicated you didn't use multiple except blocks, as pointed out by JETM
Here's your quick primer on exception handling:
try:
    pass  # do something
except OneTypeOfError as e:
    # handle one type of error one way
    print(e)  # if you want to see the exception raised but not re-raise it
except AnotherTypeOfError as e:
    # handle another type of error another way
    raise AnotherTypeOfError('your own message') from e
except (ThirdTypeOfError, FourthTypeOfError) as e:
    # handle error types 3 & 4 the same way
    print(e)  # if you want to see the exception raised but not re-raise it
except:  # DON'T DO THIS!!!
    '''
    Catches any and all exceptions raised.
    DON'T DO THIS. It makes it hard to figure out what went wrong.
    '''
else:
    '''
    If the try block succeeds and no error is raised, this block runs.
    '''
finally:
    '''
    Whether the try block succeeds, or it fails and one of the except
    blocks runs, this block runs last. It runs even if your program is
    about to crash, which makes it great for cleaning up, for example.
    '''
I did this once a long time ago, and recursively called the same function for one kind of exception but not another. I also passed the retry index and max_retries variable to the function, which meant adding those as parameters.
The other way would be to place the entire thing in a for loop of max_retries and add a break in all except blocks for the exceptions where you don't want a retry.
Finally, instead of a for loop, you can put the entire thing in a while loop, insert an increment condition in except block for one type of exception and make the while condition false in except block for other exceptions.
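A minimal sketch of that last while-loop pattern, reusing the hypothetical error names from the question (it only shows the loop mechanics, not the per-step retry rules):

tries = 0
max_retries = 10
succeeded = False
while not succeeded:
    try:
        result = run_operation()
        succeeded = True          # success: make the while condition false
    except OneTypeOfError:
        tries += 1                # retryable error: count the attempt
        if tries >= max_retries:
            raise                 # out of retries: let it propagate
    except AThirdTypeOfError:
        raise                     # non-retryable error: stop immediately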
I want to know what is the most elegant way of writing try..except statements in python. Assume I have this code:
with open(sys.argv[1]) as f:
    for line in f:
        try:
            do_1(line)
        except:
            pass
        try:
            do_2(line)
        except:
            pass
        try:
            do_3(line)
        except:
            pass
        ...
        ...
What is the best way of writing this? My actions are sequential. However, if do_1 fails I still want to perform do_2. If all of them are in one try..except block, then if do_1 fails, I will never reach do_2. Is this the right way, or can I have one except for all of the do_i actions?
It's simple enough to write this as a loop:
for action in [do_1, do_2, do_3, ...]:
    try:
        action(line)
    except AppropriateExceptionType:
        pass
I would factor out the common code which is your try/except statements. Something like:
def run_safely(f, *args):
    try:
        f(*args)
    except SpecificException:
        # handle appropriately here
        pass

with open(sys.argv[1]) as f:
    for line in f:
        run_safely(do_1, line)
        run_safely(do_2, line)
        run_safely(do_3, line)
Essentially, you need each do_<Step> function to run inside the finally block of the previous one, like so:
try:
    do_1(line)
except:
    # Handle failure
    pass
finally:
    # Run regardless
    try:
        do_2(line)
    except:
        # Handle failure
        pass
    finally:
        # Run regardless
        try:
            do_3(line)
            ...
This chains the functions together through the finally block. Notice that in the event of an exception at any step, the exception is handled before starting the next step, which is guaranteed to run regardless of whether an exception is generated or not.
Since your functions all have the same shape (taking the same number and type of arguments), you can abstract out this pattern into a function, like tryChain below:
def tryChain(functions, *sharedArgs):
    if not functions:                  # base case: nothing left to run
        return
    f, *rest = functions               # take the next function in order
    try:
        f(*sharedArgs)
    finally:
        tryChain(rest, *sharedArgs)    # run the remaining functions regardless
try:
    tryChain([do_1, do_2, ...], line, arg2, ...)
except SpecificException:
    # Handle exception here, which may be propagated from any of the actions
    pass
Note that in this case, only the last exception is thrown back to the caller; the others are hidden. (You could handle the exceptions inside tryChain as well, with an except block inserted there; or, you could pass in an error handler for each step; or a map from exception types to the appropriate handler, and re-throw the error if none of them matches — but at that point, you're practically reinventing exception handling.)
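For example, the per-step-handler idea could be sketched roughly like this (the pairing of each step with its own handler function is my own illustration, not the only way to wire it up):

def try_chain(steps, *shared_args):
    # steps is a list of (function, handler) pairs; each handler decides
    # what to do with an exception raised by its own step.
    if not steps:
        return
    (func, handler), *rest = steps
    try:
        func(*shared_args)
    except Exception as e:
        handler(e)                      # per-step handling
    finally:
        try_chain(rest, *shared_args)   # the remaining steps still run

def ignore(e):
    pass  # this step's failures are simply swallowed

try_chain([(do_1, ignore), (do_2, ignore), (do_3, ignore)], line)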
I have two objectives with this try/except statement.
It needs to return a value of 1 if no problems occurred, or 0 if any problems occurred.
It needs to raise an exception and end the script.
I have the return value working. I also have the SystemExit() working. But together, they aren't working.
My Python script (the relevant part):
except IOError:
    value_to_return = 0
    return value_to_return
    raise SystemExit("FOOBAR")
With this, it ignores the raise SystemExit("FOOBAR") line completely. How do I go about getting a returned value and still raise SystemExit("FOOBAR")? This may be elementary to some, but I'm actually having quite a bit of difficulty with it.
Returning and raising are mutually exclusive.
Raising SystemExit will end the script. A few cleanup routines get to run, and if the caller really, really wants to, they can catch the SystemExit and cancel it, but mostly, you can think of it as stopping execution right there. The caller will never get a chance to see a return value or do anything meaningful with it.
Returning means you want the script to continue. Continuing might mean having the caller raise SystemExit, or it might mean ignoring the error, or it might mean something else. Whatever it means is up to you, as you're the one writing the code.
Finally, are you sure you should be handling this error at all? Catching an exception only to turn it into a system shutdown may not be the most useful behavior. It's not a user-friendly way to deal with problems, and it hides all the useful debugging information you'd get from a stack trace.
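To illustrate the point above about SystemExit being catchable, a caller can do something like this (script_main is a made-up stand-in for your function):

def script_main():
    raise SystemExit("FOOBAR")   # would normally end the script

try:
    script_main()
except SystemExit as e:
    # the caller intercepts the exit and decides to keep running
    print("script tried to exit with:", e)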
You can raise the error with a 'returning value' argument and use it after the call.
In other words, another Pythonic answer to your problem is to pass arguments when you raise, then catch the error at the call site and read the value back out of the exception's args to get your 'return-ish'.
def your_f():
    try:
        some_io_thingy_ok()
        return 1
    except IOError:
        raise SystemExit("FOOBAR", 0)

try:
    my_returning_value = your_f()
except SystemExit as err:
    my_returning_value = err.args[1]

print(my_returning_value)
From the Python 3 docs:
When an exception occurs, it may have an associated value, also known as the exception's argument. The presence and type of the argument depend on the exception type.
The except clause may specify a variable after the exception name. The variable is bound to an exception instance with the arguments stored in instance.args. For convenience, the exception instance defines __str__() so the arguments can be printed directly without having to reference .args. One may also instantiate an exception first before raising it and add any attributes to it as desired.
To exit a script and return an exit status, use sys.exit():
import sys
sys.exit(value_to_return)
I think what you may be looking for is something more like this:
def some_function():
    # this function should probably do some stuff, then return 1 if
    # it was successful or 0 otherwise.
    pass

def calling_function():
    a = some_function()
    if a == 0:
        raise SystemExit('Get the heck outta here!')
    else:
        # Everything worked!
        pass
You can't "raise" and "return" in the same time, so you have to add a special variable to the return value (e.g: in tuple) in case of error.
E.g:
I have a function (named "func") which counts something and I need the (partial) result even if an exception happened during the counting. In my example I will use KeyboardInterrupt exception (the user pressed CTRL-C).
Without exception handling in the function (it's wrong, in case of any exception the function doesn't give back anything):
def func():
    s = 0
    for i in range(10):
        s = s + 1
        time.sleep(0.1)
    return s

x = 0
try:
    for i in range(10):
        s = func()
        x = x + s
except KeyboardInterrupt:
    print(x)
else:
    print(x)
And now I introduce a boolean return value (in a tuple, next to the original return value) to indicate whether an exception happened. Because in the function I handle only the KeyboardInterrupt exception, I can be sure that's what happened, so I can raise the same exception where I called the function:
def func():
    try:
        s = 0
        for i in range(10):
            s = s + 1
            time.sleep(0.1)
    except KeyboardInterrupt:  # <- the trick is here
        return s, True         # <- the trick is here
    return s, False            # <- the trick is here

x = 0
try:
    for i in range(10):
        s, e = func()
        x = x + s
        if e:                        # <- and here
            raise KeyboardInterrupt  # <- and here
except KeyboardInterrupt:
    print(x)
else:
    print(x)
Note: my example is Python 3. The time module is used (in both snippets above), but I haven't imported it, just to keep them shorter. If you really want to try it, put this at the beginning:
import time
I was looking for an answer without using try; if anyone knows one, fill me in. Meanwhile, here is an answer to your problem using the finally keyword:
try:
    9 / 0  # sample error ("don't know how to make IOError")
except ZeroDivisionError:
    value_to_return = 0
    raise SystemExit("FOOBAR")
finally:
    return value_to_return  # note: returning from finally swallows the SystemExit
I've got a function that often throws an exception (SSH over 3g).
I'd like to keep trying to run function() every 10 seconds until it succeeds (doesn't throw an exception).
As I see it, there are two options:
Nesting:
def nestwrapper():
    try:
        output = function()
    except SSHException as e:
        # Try again
        sleep(10)
        return nestwrapper()
    return output
Looping: (updated)
It's been pointed out that the previous looping code was pretty unnecessary.
def loopwrapper():
    while True:
        try:
            return function()
        except SSHException as e:
            sleep(10)
Is there a preferred method of doing this?
Is there an issue with nesting and the exception stack?
I would find a loop to be cleaner and more efficient here. If this is an automation job, the recursive method could hit Python's recursion limit (the default is 1000, IIRC; you can check with sys.getrecursionlimit()).
Don't use status is False for your expression, because this is an identity comparison. Use while not status.
I would probably implement it slightly differently too, because I don't see any need for the two different functions here:
def function_with_retries():
    while True:
        try:
            output = function()
        except SSHException:
            sleep(10)
        else:
            return output
I'm not sure it makes a heck of a lot of sense to specially wrap the function call twice. The exception is probably reasonable, and you're going to the extra step of retrying on that particular exception. What I mean is that the try/except is rather tightly involved with the retrying loop.
This is the way I normally do this:
def retry_something():
    while True:
        try:
            return something()
        except SomeSpecialError:
            sleep(10)
The while True: is really exactly what you're up to: you're going to loop forever, or rather, until you actually manage to call something(), and then return. There's no further need for a boolean flag of success; that's indicated by the normal case of the return statement (which politely escapes the loop).
Keep it simple.
def looper(f):
    while 1:
        try:
            return f()
        except SSHException:
            sleep(10)
output = looper(<function to call>)
I know this is a weird question, and probably there is no answer.
I'm trying to execute the rest of the try block after an exception was caught and the except block was executed.
Example:
[...]
try:
    do.this()
    do.that()
    [...]
except:
    foo.bar()
[...]
do.this() raises an exception that is handled by foo.bar(); then I would like to execute the code from do.that() onward. I know there is no GOTO statement, but maybe there's some kind of hack or workaround!
Thanks!
A try... except... block catches one exception. That's what it's for. It executes the code inside the try, and if an exception is raised, handles it in the except. You can't raise multiple exceptions inside the try.
This is deliberate: the point of the construction is that you need explicitly to handle the exceptions that occur. Returning to the end of the try violates this, because then the except statement handles more than one thing.
You should do:
try:
    do.this()
except FailError:
    clean.up()

try:
    do.that()
except FailError:
    clean.up()
so that any exception you raise is handled explicitly.
Use a finally block? Am I missing something?
[...]
try:
    do.this()
except:
    foo.bar()
    [...]
finally:
    do.that()
[...]
If you always need to execute do.that(), why not just move it after the try/except block? Or maybe even into a finally: block.
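A minimal sketch of that suggestion, using the names from the question (SomeError stands in for whatever specific exception do.this() raises):

# Variant 1: run do.that() after the try/except, so it runs whether or not
# do.this() failed.
try:
    do.this()
except SomeError:
    foo.bar()
do.that()

# Variant 2: use finally, which also runs even if foo.bar() itself raises.
try:
    do.this()
except SomeError:
    foo.bar()
finally:
    do.that()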
One possibility is to write a code in such a way that you can re-execute it all when the error condition has been solved, e.g.:
while 1:
    try:
        complex_operation()
    except X:
        solve_problem()
        continue
    break
fcts = [do.this, do.that]

for fct in fcts:
    try:
        fct()
    except:
        foo.bar()
You need two try blocks, one for each statement in your current try block.
This doesn't scale up well, but for smaller blocks of code you could use a classic finite-state-machine:
states = [do.this, do.that]
state = 0
while state < len(states):
    try:
        states[state]()
    except:
        foo.bar()
    state += 1
Here's another alternative. Handle the error condition with a callback, so that after fixing the problem you can continue. The callback would basically contain exactly the same code you would put in the except block.
As a silly example, let's say that the exception you want to handle is a missing file, and that you have a way to deal with that problem (a default file or whatever). fileRetriever is the callback that knows how to deal with the problem. Then you would write:
def myOp(fileRetriever):
    f = acquireFile()
    if not f:
        f = fileRetriever()

    # continue with your stuff...

    f2 = acquireAnotherFile()
    if not f2:
        f2 = fileRetriever()

    # more stuff...

myOp(magicalCallback)
Note: I've never seen this design used in practice, but in specific situations I guess it might be usable.