I am hitting an external API that allows 1 request/sec, so to stay under the rate limit I am using the retrying module in Python. I want to retry on a generic exception, waiting a random amount of time between attempts.
@retry(retry_on_exception=exceptions.Exception, wrap_exception=True, wait_random_min=1000, wait_random_max=2500)
def do_some_stuff(info):
    # some stuff that can throw an exception
Does this seem like a robust solution to my problem? I am not sure. Also, is this the correct way to handle a generic exception in the retry decorator? I am using it exactly as in the code above and it does not throw any errors, but I am not sure. All the examples I have seen use specific exceptions.
Update: this is the exception I am getting:
reject |= self._retry_on_exception(attempt.value[1])
TypeError: unsupported operand type(s) for |=: 'bool' and 'exceptions.Exception'
You're not using the retry_on_exception parameter correctly. It expects a callable, and that callable must return a boolean. You can see an example of this in the package docs:
def retry_if_io_error(exception):
    """Return True if we should retry (in this case when it's an IOError), False otherwise"""
    return isinstance(exception, IOError)

@retry(retry_on_exception=retry_if_io_error)
def might_io_error():
    print("Retry forever with no wait if an IOError occurs, raise any other errors")
So in your case you could check for Exception instances instead of IOError, with the test isinstance(exception, Exception). But notice that this will always be true, so it's pointless: @retry's default behavior is to retry regardless of which exception was raised, so you actually have nothing to add:
@retry(wait_random_min=1000, wait_random_max=2500)
def do_some_stuff(info):
    # some stuff that can throw an exception
But it's generally a bad idea to treat all types of exceptions the same way. In your case, it looks like you should only retry on an HTTPError (or whatever similar error your code raises when you hit the API limit).
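To illustrate the idea without depending on the retrying package, here is a minimal stdlib-only sketch of a decorator that retries only on one exception type, with a random wait between attempts. The names (`retry_on`, `RateLimitError`) are made up for the example; the retrying package itself does more.

```python
import functools
import random
import time

class RateLimitError(Exception):
    """Hypothetical error raised when the API rate limit is hit."""

def retry_on(exc_type, attempts=3, wait_min=1.0, wait_max=2.5):
    """Retry the wrapped function only when exc_type is raised."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return func(*args, **kwargs)
                except exc_type:
                    if attempt == attempts - 1:
                        raise  # out of retries: re-raise the last error
                    time.sleep(random.uniform(wait_min, wait_max))
        return wrapper
    return decorator

calls = []

@retry_on(RateLimitError, wait_min=0.0, wait_max=0.0)
def do_some_stuff(info):
    calls.append(info)
    if len(calls) < 3:
        raise RateLimitError("slow down")  # other exception types propagate immediately
    return "ok"

print(do_some_stuff("x"))  # -> ok (after two retried RateLimitErrors)
```

Any exception other than the one named in the decorator propagates on the first attempt, which is the behavior the answer recommends over retrying everything.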
Related
Let’s say I have this function:
def a():
    try:
        b()
    except Error:
        raise Error
    return True
Is it considered good practice to return True only on success, and otherwise stop execution of the block by raising an error?
Think about your function and error processing as separate concerns. A function should either return an object as its result (or implicitly None, with no return statement) or perform some action without returning an explicit result. Avoid mixing the two.
If an error that your function's code can detect occurs, raise an exception at the appropriate abstraction level, so that the client code can wrap the call in a try block and handle the error case properly, if it wishes to do so.
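A minimal sketch of that separation, with illustrative names (`ConfigError`, `load_port` are made up for the example): the function returns its real result or raises at its own abstraction level, and the caller decides whether to handle the error.

```python
class ConfigError(Exception):
    """Domain-level error at the abstraction level the caller cares about."""

def load_port(config):
    """Return the port number; raise ConfigError on bad input (no True/False status flags)."""
    try:
        return int(config["port"])
    except (KeyError, ValueError) as exc:
        # translate the low-level error into a domain-level one
        raise ConfigError("invalid or missing 'port' setting") from exc

# the caller chooses whether and how to handle the error
try:
    port = load_port({"port": "8080"})
except ConfigError:
    port = 8000  # fall back to a default
print(port)  # -> 8080
```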
How to retry a function if the exception is NOT of a certain type using Python's tenacity?
retry_if_exception_type retries if an exception of a certain type is raised. `not` does not seem to work, whether placed before the method or before its arguments.
retry_unless_exception_type, on the other hand, loops forever, even if no error is raised, until an error of the given type is raised.
retry_if_not_exception_type is available since version 8.0.0
Retries unless an exception of one of the given types has been raised.
So if you use retry_if_not_exception_type((ValueError, OSError)), it will retry for any exception, except ValueError or OSError.
Using retry_unless_exception_type() combined with stop_after_attempt() worked for me to accomplish this; stop_after_attempt() prevents the infinite looping.
I had to create my own class for that:
class retry_if_exception_unless_type(retry_unless_exception_type):
    """Retries until a successful outcome, or until an exception of one of the given types is raised."""

    def __call__(self, retry_state):
        # don't retry if no exception was raised
        if not retry_state.outcome.failed:
            return False
        return self.predicate(retry_state.outcome.exception())
Breaking it down a bit, what you want is to retry if:
an exception is raised
(and) unless the exception is of a certain type
So write that:
retry_if_exception_type() & retry_unless_exception_type(ErrorClassToNotRetryOn)
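The composition works because each retry_* object is a predicate over the call's outcome, and `&` combines two predicates with a logical AND. A stdlib-only sketch of the same idea (hypothetical simplified names; tenacity's real classes operate on a retry_state, not a bare exception):

```python
def retry_if_exception(exc):
    """Retry whenever any exception was raised (exc is None on success)."""
    return exc is not None

def retry_unless_exception_type(types):
    """Return a predicate that refuses to retry for the given exception types."""
    def predicate(exc):
        return exc is not None and not isinstance(exc, types)
    return predicate

def combine(*predicates):
    """Logical AND of predicates, like tenacity's `&` operator."""
    def combined(exc):
        return all(p(exc) for p in predicates)
    return combined

should_retry = combine(retry_if_exception,
                       retry_unless_exception_type((ValueError, OSError)))

print(should_retry(KeyError("boom")))   # -> True: retry on other errors
print(should_retry(ValueError("bad")))  # -> False: don't retry on ValueError
print(should_retry(None))               # -> False: success, nothing to retry
```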
According to a given protocol (which I cannot change, only implement), some function initialize_foo() is supposed to be called only once:
def initialize_foo():
    """
    ...
    Note:
        You must call this function exactly once.
    """
I would like to recognize a protocol abuse where it is called twice, and raise an exception:
_foo_initialized = False

def initialize_foo():
    """
    ...
    Note:
        You must call this function exactly once.
    """
    if _foo_initialized:
        raise <what>?
    ...
    _foo_initialized = True
The problem is which exception class to raise. Looking at the standard exceptions, I can't find anything to subclass except Exception, which seems too general.
What is the general practice in this case?
I'd use RuntimeError.
It is often used for that sort of stuff, even in the standard library. You can find an example very similar to your use case in the warnings module:
if self._entered:
    raise RuntimeError("Cannot enter %r twice" % self)
Another example is in threading:
if self._started.is_set():
    raise RuntimeError("threads can only be started once")
You can also consider raising an ad-hoc exception (possibly a subclass of RuntimeError) if that error is supposed to be caught and if you feel that RuntimeError may be ambiguous.
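A minimal sketch combining both suggestions: an ad-hoc subclass of RuntimeError guarding the one-call-only protocol. The name ProtocolError is made up for the example.

```python
class ProtocolError(RuntimeError):
    """Raised when the call-exactly-once protocol is violated."""

_foo_initialized = False

def initialize_foo():
    """You must call this function exactly once."""
    global _foo_initialized  # the guard flag lives at module level
    if _foo_initialized:
        raise ProtocolError("initialize_foo() may only be called once")
    _foo_initialized = True

initialize_foo()        # first call succeeds
try:
    initialize_foo()    # second call raises
except ProtocolError as err:
    print(err)          # -> initialize_foo() may only be called once
```

Because ProtocolError subclasses RuntimeError, callers that only know the standard hierarchy can still catch it as a RuntimeError.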
I would recommend subclassing a warning instead of raising an exception, since I have a feeling that much of the time you'd rather continue running after this happens.
I am writing a class in Python and part of the code deals with a server. Therefore I need to deal with exceptions like ExpiredSession or ConnectionError.
Instead of writing exception-handling code in every try/except block, I have a single function in the class that deals with the exceptions, something like this (inside the class definition):
def job_a(self):
    try:
        # do something
        ...
    except Exception as e:
        # maybe print the error on screen
        self.exception_handling(e)

def job_b(self):
    try:
        # do something else
        ...
    except Exception as e:
        # maybe print the error on screen
        self.exception_handling(e)

def exception_handling(self, e):
    if isinstance(e, ExpiredSession):
        # deal with expired session
        self.reconnect()
    elif isinstance(e, ConnectionError):
        # deal with connection error
        ...
    else:
        # other exceptions
        ...
I am not sure whether this kind of code can cause any problems, because I haven't seen code that does this. A possible memory leak, for example? (I notice that memory usage grows, though slowly, as more and more errors/exceptions occur, and eventually I have to restart the process before it eats all my memory. I am not sure this is the cause.)
Is it a good practice to pass exceptions to a single function?
This is a good use case for a context manager. You can see some examples of using context managers for error handling here. The contextmanager decorator allows you to write context managers concisely in the form of a single function. Here's a simple example:
import contextlib

class Foo(object):
    def meth1(self):
        with self.errorHandler():
            1/0

    def meth2(self):
        with self.errorHandler():
            2 + ""

    def meth3(self):
        with self.errorHandler():
            # an unhandled ("unexpected") kind of exception
            ""[3]

    @contextlib.contextmanager
    def errorHandler(self):
        try:
            yield
        except TypeError:
            print("A TypeError occurred")
        except ZeroDivisionError:
            print("Divide by zero occurred")
Then:
>>> x = Foo()
>>> x.meth1()
Divide by zero occurred
>>> x.meth2()
A TypeError occurred
The with statement allows you to "offload" the error handling into a separate function where you catch the exceptions and do what you like with them. In your "real" functions (i.e., the functions that do the work but may raise the exceptions), you just need a with statement instead of an entire block of complicated try/except statements.
An additional advantage of this approach is that if an unforeseen exception is raised, it will propagate up normally with no extra effort:
>>> x.meth3()
Traceback (most recent call last):
File "<pyshell#394>", line 1, in <module>
x.meth3()
File "<pyshell#389>", line 12, in meth3
""[3]
IndexError: string index out of range
In your proposed solution, on the other hand, the exception is already caught in each function and the actual exception object is passed to the handler. If the handler gets an unexpected error, it has to manually re-raise it (and can't even use a bare raise to do so). Using a context manager, unexpected exceptions keep their ordinary behavior with no extra work required.
I've recently started using Ludibrio for mocking objects in unit testing. So far it seems to be pretty streamlined, but I seem to have hit a snag when testing some failure scenarios and can't seem to find a solution online.
Some of the method calls I'm working with raise exceptions which I want to trap. So I want my mock object to simulate these conditions by raising an exception on a particular call. I tried doing it like this:
from ludibrio import *

with Mock() as myMock:
    def raiseException():
        raise Exception('blah')
    myMock.test() >> raiseException()

try:
    print myMock.test()
except Exception, e:
    print 'Error: %s' % e

myMock.validate()
The trouble is, raiseException() is evaluated when the mock object is built, rather than when myMock.test() is called. So clearly this isn't the correct way to do this.
Is there a way to get the mock object to raise an exception at runtime? Or would the exception be intercepted as a failure and not get outside of the mock object anyway?
Further Googling eventually yielded the answer: simply tell the mock object to pass back an exception instance. It then gets raised on the outside:
myMock.test() >> Exception('blah')
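For comparison, the standard library's unittest.mock expresses the same idea with the side_effect attribute: assigning an exception class or instance makes the mock raise it when called, rather than return it.

```python
from unittest import mock

myMock = mock.Mock()
myMock.test.side_effect = Exception('blah')  # raise instead of returning a value

try:
    myMock.test()
except Exception as e:
    print('Error: %s' % e)  # -> Error: blah
```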