Let's say I have three functions that do different things but should react to a set of exceptions in the same way. One of them might look like:
def get_order_stat(self, Order_id):
    status_returned = False
    error_count = 0
    while status_returned == False:
        try:
            stat_get = client.queryOrder(orderId=Order_id)
        except MalformedRequest:
            print('Order ID not yet findable, keep trying')
            error_count += 1
            time.sleep(1)
        except InternalError:
            print('Order check returned InternalError, keep trying')
            error_count += 1
            time.sleep(1)
        except StatusUnknown:
            print('Order check returned StatusUnknown, keep trying')
            error_count += 1
            time.sleep(1)
        else:
            status = stat_get['status']
            status_returned = True
        finally:
            if error_count >= 10:
                print('Error loop, give up')
                break
    return status
The vast majority of the code is exception handling, and I'd like to avoid repeating it in every function that needs it. Is there a way to define something like an exception-handling function containing the handling code? Ideally my function would effectively end up as:
def get_order_stat(self, Order_id):
    status_returned = False
    while status_returned == False:
        try:
            stat_get = client.queryOrder(orderId=Order_id)
        except:
            handler_function()
        else:
            status = stat_get['status']
            status_returned = True
    return status
You practically already did it. Just define handler_function() somewhere, and it gets called whenever an exception is raised in the try block.
Maybe helpful: you can bind the Exception to a variable and use it for exception handling in the handler function:
except Exception as e:
    handler_function(e)
Then you can, for example, do `print(e)` to output the exception, or do different handling for different exception types inside the function. Hope that helps!
You can also specify several exceptions in one line if you don't want to catch everything, but still want to handle all the specific exceptions with one statement:
except (ExceptionType1, ExceptionType2, ExceptionType3) as e:
    handler_function(e)
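As a sketch of what such a shared handler_function might contain (the exception classes here are illustrative stand-ins for the real API's, not names from the original code), it can dispatch on the type of the exception it receives:

```python
import time

# Illustrative stand-ins for the API's exception types
class MalformedRequest(Exception): pass
class InternalError(Exception): pass

def handler_function(e):
    # Shared handling: print a message based on the exception type, then back off
    if isinstance(e, MalformedRequest):
        print('Order ID not yet findable, keep trying')
    elif isinstance(e, InternalError):
        print('Order check returned InternalError, keep trying')
    time.sleep(1)
```

Each calling function then only needs the one-line `except ... as e: handler_function(e)` clause.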
I might write a decorator function for the exception handling, for instance using functools.wraps.
from functools import wraps
import time

def retry(f):
    @wraps(f)
    def wrapper(*args, **kwargs):
        error_count = 0
        while error_count < 10:
            try:
                return f(*args, **kwargs)
            except MalformedRequest:
                print('Order ID not yet findable, keep trying')
            except InternalError:
                print('Order check returned InternalError, keep trying')
            except StatusUnknown:
                print('Order check returned StatusUnknown, keep trying')
            error_count += 1
            time.sleep(1)
        print('Error loop, give up')
        return None
    return wrapper
Then you can write a very simple API call function, and wrap it with the retry wrapper:
@retry
def get_order(order_id):
    stat_get = client.queryOrder(orderId=order_id)
    return stat_get['status']
In your original function, notice that you can move the contents of the else block into the main try block without affecting the logic (extracting the value from the query result won't raise one of the network-related exceptions), and then you can simply return out of the try block instead of arranging to stop the loop. The contents of the try block are then what I've broken out here into get_order(). I restructured the remaining loop a little and turned it into decorator form.
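To see the whole pattern run end to end, here is a self-contained version with stand-in exception classes and a fake query function (all names below are illustrative; the sleep is shortened to keep the demo fast):

```python
import time
from functools import wraps

# Stand-ins for the API's exception types
class MalformedRequest(Exception): pass
class InternalError(Exception): pass

def retry(f):
    @wraps(f)
    def wrapper(*args, **kwargs):
        error_count = 0
        while error_count < 10:
            try:
                return f(*args, **kwargs)
            except (MalformedRequest, InternalError) as e:
                print('Query failed (%s), keep trying' % type(e).__name__)
                error_count += 1
                time.sleep(0)  # use a real delay, e.g. 1 second, in practice
        print('Error loop, give up')
        return None
    return wrapper

calls = {'n': 0}

@retry
def get_order(order_id):
    # Fake query: fails twice, then succeeds
    calls['n'] += 1
    if calls['n'] < 3:
        raise InternalError()
    return 'FILLED'
```

Calling `get_order(1)` retries through the two simulated failures and returns `'FILLED'` on the third attempt.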
Related
As you can see, the connect function converts the _connect function into a lambda via convert, and the result gets passed to the run_api function. The exception thrown in _connect is not being caught by the except in run_api. Does anything need to be done differently with respect to the lambda?
The code looks right to me, but I still can't figure out why the exception isn't caught in case of failure.
Here is my code.
def run_api(function, retry_count):
    count = 0
    while count < retry_count:
        count += 1
        try:
            function()
            return True
        except (BleTestFail, BleTestError):
            if count == retry_count:
                return False

def convert(func):
    return lambda: func

def _connect(self, target_id):
    result = self.device.ble_central.connect(target_id)
    self.logger.debug('Connect output %s', result)
    if result['op'] != 'ok':
        self.logger.error('Connect command execution failed')
        raise ble_utils.BleTestFail('Failed to connect')
    return True

def connect(self, target_ids, retry_count=1):
    connected = []
    unconnected = []
    if not isinstance(target_ids, list):
        target_ids = [target_ids]
    for target_id in target_ids:
        connect_function = ble_utils.convert(self._connect(target_id))
        connect_status = ble_utils.run_api(connect_function, retry_count,
                                           'connecting device %s' % target_id,
                                           self.logger)
        if connect_status:
            connected.append(target_id)
        else:
            unconnected.append(target_id)
    if connected:
        self.logger.info('Connected to %s devices: %s', len(connected), connected)
    if unconnected:
        self.logger.error('Unable to connect %s devices: %s', len(unconnected),
                          unconnected)
    return connected, unconnected
So, to clarify, we have the example "API" to which we want to provide a callback:
def run_api(function, retry_count):
    count = 0
    while count < retry_count:
        count += 1
        try:
            function()
            return True
        except (BleTestFail, BleTestError):
            if count == retry_count:
                return False
And a method that we want to be called in that API, with a specific argument:
class Example:
# other stuff omitted...
def _connect(self, target_id):
result = self.device.ble_central.connect(target_id)
self.logger.debug('Connect output %s', result)
if result['op'] != 'ok':
self.logger.error('Connect command execution failed')
raise ble_utils.BleTestFail('Failed to connect')
return True
connection = Example()
So now we want to call run_api with connection._connect, but somehow provide the target_id information.
This is called binding, and the most elegant way to do it is with the standard library functools.partial:
from functools import partial

# This is how we can make the `convert` function from before:
def convert(func, param):
    return partial(func, param)

# But there is no point to this, since we can just use `partial` directly.
# There was no hope for the original approach, because you were calling the
# function ahead of time and passing the returned result to `convert`.

# So, the process looks like this:
# target_id = 1, retry_count = 2
run_api(partial(connection._connect, 1), 2)
You can make it work with a lambda, but I don't recommend it: functools.partial is more explicit, and it elegantly handles more advanced use cases that have some unexpected gotchas with lambdas (in particular, if you create multiple callbacks in a loop, you may find they all unexpectedly bind to the same value, or else you have to use a very ugly workaround). But for the sake of completeness, that looks like so:
def convert(func, param):
    return lambda: func(param)
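The loop gotcha mentioned above is easy to demonstrate: a lambda closes over the variable itself, not its value at the time of creation, whereas partial captures the value immediately (double is just an illustrative helper):

```python
from functools import partial

def double(x):
    return 2 * x

# Build three callbacks each way inside a loop
lambdas = [lambda: double(i) for i in range(3)]
partials = [partial(double, i) for i in range(3)]

print([f() for f in lambdas])   # [4, 4, 4] -- every lambda sees the final value of i
print([f() for f in partials])  # [0, 2, 4] -- each partial captured its own value
```

This is why partial is the safer choice when binding arguments for callbacks created in a loop.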
I would like to write a Python decorator so that a function raising an exception will be run again until either it succeeds or it reaches the maximum number of attempts, at which point it gives up. Like so:
def tryagain(func):
    def retrier(*args, attempts=MAXIMUM, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            if attempts > 0:
                logging.error("Failed. Trying again")
                return retrier(*args, attempts=attempts - 1, **kwargs)
            else:
                logging.error("Tried %d times and failed, giving up" % MAXIMUM)
                raise e
    return retrier
My problem is that I want a guarantee that no matter what names the kwargs contain, there cannot be a collision with the name used to denote the number of attempts made.
However, this does not work when the function itself takes attempts as a keyword argument:
@tryagain
def other(a, b, attempts=c):
    ...
    raise Exception

other(x, y, attempts=z)
In this example, if other is run, it will retry z times rather than MAXIMUM times (note that for this bug to occur, the keyword argument must be passed explicitly in the call!).
You can give the decorator a parameter, something along the lines of this:
import logging

MAXIMUM = 5

def tryagain(attempts=MAXIMUM):
    def __retrier(func):
        def retrier(*args, **kwargs):
            remaining = attempts  # per-call copy; a nonlocal counter would leak state between calls
            while True:
                try:
                    return func(*args, **kwargs)
                except Exception as e:
                    remaining -= 1
                    if remaining > 0:
                        print('Failed, attempts left=', remaining)
                        continue
                    else:
                        print('Giving up')
                        raise
        return retrier
    return __retrier

@tryagain(5)  # <-- this specifies the number of attempts
def fun(attempts='This is my parameter'):  # <-- the function's own `attempts` parameter, unrelated to the decorator
    raise Exception(attempts)

fun()
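The point of this layout is that the retry budget lives in the decorator's closure, so the wrapped function's own attempts parameter is untouched. A self-contained sketch of the same idea (simplified, with the counter kept local to each call, and a list to record what the function actually received):

```python
def tryagain(attempts=3):
    def deco(func):
        def retrier(*args, **kwargs):
            remaining = attempts  # per-call copy of the decorator's budget
            while True:
                try:
                    return func(*args, **kwargs)
                except Exception:
                    remaining -= 1
                    if remaining <= 0:
                        raise
        return retrier
    return deco

seen = []

@tryagain(5)
def fun(attempts='my own parameter'):
    # `attempts` here is the function's own kwarg, independent of the decorator's 5
    seen.append(attempts)
    raise ValueError(attempts)
```

Calling `fun()` raises after exactly five tries, and every call saw the function's own default `'my own parameter'`, so no collision occurs.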
Instead of an argument, get the number of retry attempts from a function attribute.
def tryagain(func):
    def retrier(*args, **kwargs):
        # The attribute is set on the decorated function, i.e. on retrier itself
        retries = getattr(retrier, "attempts", MAXIMUM)
        while retries + 1 > 0:
            try:
                return func(*args, **kwargs)
            except Exception as e:
                logging.error("Failed. Trying again")
                last_exception = e
                retries -= 1
        else:
            logging.error("Tried %d times and failed, giving up",
                          getattr(retrier, "attempts", MAXIMUM))
            raise last_exception
    return retrier

@tryagain
def my_func(...):
    ...

my_func.attempts = 10
my_func()  # Will try it 10 times
To make MAXIMUM something you can specify when you decorate the function, change the definition to:
def tryagain(maximum=10):
    def _(f):
        def retrier(*args, **kwargs):
            retries = getattr(retrier, "attempts", maximum)
            while retries + 1 > 0:
                try:
                    return f(*args, **kwargs)
                except Exception as e:
                    logging.error("Failed. Trying again")
                    last_exception = e
                    retries -= 1
            else:
                logging.error("Tried %d times and failed, giving up",
                              getattr(retrier, "attempts", maximum))
                raise last_exception
        return retrier
    return _
Although there's still a risk of a name collision with the attempts attribute, the fact that function attributes are rarely used makes it reasonable to simply document that tryagain does not work with functions that have a pre-existing attempts attribute.
(I leave it as an exercise to modify tryagain to take an attribute name to use as an argument:
@tryagain(15, 'max_retries')
def my_func(...):
    ...
so that you can choose an unused name at decoration time. For that matter, you can also use an argument to tryagain as the name of a keyword argument to add to my_func.)
I have a scenario where a transaction.commit_on_success() block is being rolled back after the code finishes execution without raising an error.
The DatabaseError is caught and retried explicitly within that transaction.
If there are 10 requests coming in simultaneously, then for the ones where the retry executes, a rollback happens for those objects even though a voucher is served on the first retry.
Assuming I already have brand, amount and currency, below is my code:
def get_coupon_code(brand, currency, amount, retry=4):
    try:
        qs = VoucherCode.select_for_update().filter(
            brand=brand, currency=currency, amount=amount
        ).values('id', 'voucher_code').order_by('id')[:1]
        try:
            return qs[0]
        except IndexError:
            return None
    except DatabaseError as err:
        if retry and hasattr(err, 'args') and err.args[0] == 1213:
            retry -= 1
            time.sleep(4)
            return self.get_coupon_code(
                brand=brand, currency=currency, amount=amount, retry=retry)
        else:
            raise err

try:
    with transaction.commit_on_success():
        object = Coupon.objects.create(brand=brand)
        voucher_code = get_coupon_code(brand, currency, amount)
        code = voucher_code.get('voucher_code')
        object.voucher_code = code
        try:
            voucher_obj = VoucherCode.objects.get(voucher_code=code)
            voucher_obj.delete()
        except VoucherCode.DoesNotExist:
            pass
        return object
except Exception as error:
    raise MyCustomException()
I need the program to continue after the except block when a variable is True, but exit when the variable is False.
I think an if/else is involved, but I'm not sure how to use it here.
For example:
var = True
try:
    print 2/0
except:
    exit(1)
# ... continue executing

var = False
try:
    print 2/0
except:
    exit(1)
# ... exit
Thanks for the comments.
This ought to do the trick; by the way, you should probably use raise.
var = True
try:
    print 2/0
except:
    if not var:
        # I recommend using raise instead, as it would show you the error
        exit(1)
If you have a number of except clauses that all need to use var, try this.
Note that you can extend myexcept using decorators or closures to set up additional processing within the exception handler as well. Since a function is an object, you can use a different specialfunc() for every except: clause that you write. You can also have myexcept call specialfunc() with arguments, using the variable-arguments mechanism as shown below:
def specialfunc1():
    # put the special function code here
    pass

def specialfunc2(arg1):
    # put the processing here
    pass

def specialfunc3(arg1, arg2):
    # put the processing here
    pass

def myexcept(var, e, specialfunc, *args):
    print 'Exception caught for ', e
    if var:
        specialfunc(*args)
    else:
        raise  # re-raises the current exception to force an exit

try:
    # code you are testing
    2/0
except Error1 as e:
    myexcept(var, e, specialfunc1)
except Error2 as e:
    myexcept(var, e, specialfunc2, arg1)  # pass the function and its arguments separately
except Error3 as e:
    myexcept(var, e, specialfunc3, arg1, arg2)
except:
    # this default forces the regular exception handler
    raise
# remaining code
How can I rewrite this code in a more Pythonic way?
tried = 0
while tried < 3:
    try:
        function()
        break
    except Exception as e:
        print e
        tried += 1
Is there a built-in function I could use?
A more Pythonic way to do something N times is to use xrange with the throwaway variable _:
for _ in xrange(3):
    try:
        function()
        break
    except Exception as e:
        print e
Also, consider catching a more specific exception instead of the root Exception class.
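In Python 3 the same idea reads with range instead of xrange, and the loop's else clause gives a natural place to react when every attempt failed. A small self-contained sketch (flaky is an illustrative stand-in for the real function):

```python
def flaky(state={'n': 0}):
    # Stand-in for function(): fails twice, then succeeds
    # (mutable default used as a simple call counter)
    state['n'] += 1
    if state['n'] < 3:
        raise ValueError('not yet')
    return 'ok'

result = None
for _ in range(5):
    try:
        result = flaky()
        break
    except ValueError as e:
        print(e)
else:
    # runs only if the loop finished without break, i.e. every attempt failed
    raise RuntimeError('all attempts failed')
```

Here the first two attempts print `not yet`, the third succeeds and breaks out, and the else clause never runs.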
You can use a retry decorator:
@retries(3)
def execTask():
    f()
A simpler one than the decorator in the provided link could look like this:
def retry(times=3):
    def do_retry(f):
        def wrapper(*args, **kwargs):
            cnt = 0
            while cnt < times:
                try:
                    f(*args, **kwargs)
                    return
                except Exception:
                    cnt += 1
        return wrapper
    return do_retry
And could be used like this:
@retry(3)
def test():
    print("Calling function")
    raise Exception("Some exception")
tried = 0
while tried < 3:
    try:
        function()
        break
    except Exception as e:
        print e
        tried += 1
It's almost exactly how you wrote it, except you need a colon at the end of your while line and move the break to the "try" block.
Motivation
Given that the OP's motivation was to limit the number of failed attempts to run function(), the following code does not add any artificial gymnastics; it both limits the number of tries and retains the actual number of attempts for ex-post analysis (if needed down the road):
tried = 0
while tried < 3:
    try:
        function()   # may throw an Exception
        break        # on success, pass through / return from function()
    except Exception as e:
        print e
        tried += 1
# retains the value of tried for ex-post review(s) if needed