I've recently started using Ludibrio for mocking objects in unit tests. So far it seems pretty streamlined, but I've hit a snag when testing some failure scenarios and can't find a solution online.
Some of the method calls I'm working with raise exceptions which I want to trap. So I want my mock object to simulate these conditions by raising an exception on a particular call. I tried doing it like this:
from ludibrio import *

with Mock() as myMock:
    def raiseException():
        raise Exception('blah')
    myMock.test() >> raiseException()

try:
    print(myMock.test())
except Exception as e:
    print('Error: %s' % e)

myMock.validate()
The trouble is, raiseException() is evaluated when the mock object is built, rather than when myMock.test() is called. So clearly this isn't the correct way to do this.
Is there a way to get the mock object to raise an exception at runtime? Or would the exception be intercepted as a failure and not get outside of the mock object anyway?
Further Googling has eventually yielded the answer. Simply tell the mock object to pass back an exception. This appears to then be raised on the outside:
myMock.test() >> Exception('blah')
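For comparison (not part of the original question), the standard library's `unittest.mock` expresses the same idea with `side_effect`, which likewise raises at call time rather than when the mock is configured:

```python
from unittest.mock import Mock

# side_effect holds an exception instance; the mock raises it
# each time the mocked method is called.
m = Mock()
m.test.side_effect = Exception('blah')

try:
    m.test()
except Exception as e:
    print('Error: %s' % e)  # prints: Error: blah
```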
I have code that raises exceptions from other exceptions so that we can see details about everything that went wrong. In the example below we include information about what we are processing and what specific thing in the processing went wrong.
def process(thing):
    try:
        process_widgets(thing)
    except Exception as e:
        raise CouldNotProcess(thing) from e

def process_widgets(thing):
    for widget in get_widgets(thing):
        raise CouldNotProcessWidget(widget)

def do_processing():
    for thing in things_to_process():
        process(thing)
I am trying to change this so that process_widgets can raise a specific type of exception and do_processing can change its behaviour based on this exception. However, the raise ... from in process is masking this exception, which makes this impossible. Is there a good way to let do_processing know about what went wrong in process_widgets while also doing raise ... from?
Ideas:
Python 3.11 has exception groups, so perhaps there is a way of adding exceptions to a group and catching them with the (likely confusing) except* syntax.
There is a dirty trick where I do raise e from CouldNotProcess(thing) to get both the helpful logging and a catchable specific exception.
Apparently exception chaining works internally by adding a __cause__ attribute (forming a linked list), so I could manually look through the causes on the topmost exception to implement behaviour like except* with exception groups:
def raised_from(e, exc_type):
    while e is not None:
        if isinstance(e, exc_type):
            return True
        e = e.__cause__
    return False
...

try:
    do_processing()
except CouldNotProcess as e:
    if raised_from(e, CouldNotProcessWidget):
        do_stuff()
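Put together as a runnable sketch (the exception classes and the 'widget' detail below are invented for the demo):

```python
class CouldNotProcess(Exception): pass
class CouldNotProcessWidget(Exception): pass

def raised_from(e, exc_type):
    # Walk the __cause__ chain (a linked list) looking for exc_type.
    while e is not None:
        if isinstance(e, exc_type):
            return True
        e = e.__cause__
    return False

def do_processing():
    try:
        raise CouldNotProcessWidget('widget 7')
    except Exception as e:
        raise CouldNotProcess('thing A') from e

try:
    do_processing()
except CouldNotProcess as e:
    if raised_from(e, CouldNotProcessWidget):
        print('widget-specific handling')  # reached: the cause chain matches
```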
You see this pattern quite a lot with status codes from HTTP libraries.
I could use logging rather than adding information to exceptions. This hides information from the exception-handling code, but works for logging. I think this is the workaround that I'll use for the moment.
It's noticeable that the PEP (PEP 3134) says that exception chaining isn't quite designed for adding information to exceptions.
Update
Python 3.11 has an add_note() method and a __notes__ attribute which can be used to add information to an exception, which works for some use cases.
For this use case exception groups might be the way to go, though I am concerned that this might be a little confusing.
Let's assume there is a method that I want to be called only under certain conditions of a class variable; otherwise I want to raise an exception:
def foo(self):
    if not self.somecondition:
        raise ????
    else:
        # do the thing it's supposed to do
Now I am looking for the correct exception for that. I went through them all in the Python built-in exceptions docs and found nothing suitable. The classic ValueError is not quite suitable because there is not necessarily a problem with the values, but rather with the method being called in these circumstances.
What exception should I use then? Do I need to create my own in this case, or does Python have a built-in answer?
I have a function that has try/except as follows:
def func_A():
    try:
        ...  # do some stuff
    except Exception as e:
        log.error("there was an exception %s", str(e))
I want to write a unit test for this func_A()
More importantly, I want to ensure that no exception was caught inside A. I have the try/except just for safety.
Unless there is a bug, there should be no exception thrown inside A (although it would be caught by the try/except), and that's what I want to validate with my unit test.
What is the best way for unit test to catch the case where there was an exception thrown and caught?
If you really need this, one possible way is to mock out the log.error object. After invoking the func_A function, you can make an assertion that your mock wasn't called.
Note that you should not catch exceptions at all if you don't intend to actually handle them. For proper test coverage, you should provide two tests here: one for each branch of the try/except.
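A sketch of that approach, assuming func_A logs through a module-level log object (the int() call stands in for the real work):

```python
import logging
from unittest.mock import patch

log = logging.getLogger(__name__)

def func_A(data):
    try:
        return int(data)  # stand-in for "do some stuff"
    except Exception as e:
        log.error("there was an exception %s", str(e))

# Happy path: the error logger must never be touched.
with patch.object(log, 'error') as mock_error:
    func_A('42')
    mock_error.assert_not_called()

# Failure path: the exception is swallowed but logged.
with patch.object(log, 'error') as mock_error:
    func_A('not a number')
    mock_error.assert_called_once()
```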
Another possible solution is to split the implementation into two functions:

1. foo(), with the logic and no try statement. This way you can make sure that no exception is thrown in your implementation.
2. safe_foo(), which wraps foo() in a try statement. Then you can mock foo() to simulate it throwing an exception and make sure every exception is caught.

The drawback is that either foo() will be part of the public interface or you will write tests for a private function.
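A sketch of that split, with int() as a placeholder for the real logic:

```python
import logging

log = logging.getLogger(__name__)

def foo(data):
    # The real logic, with no try/except: a test can call this
    # directly and assert that it does or does not raise.
    return int(data)

def safe_foo(data):
    # The only place where exceptions are swallowed.
    try:
        return foo(data)
    except Exception as e:
        log.error("there was an exception %s", str(e))
        return None
```

A test can now assert that foo('x') raises ValueError, and separately that safe_foo('x') returns None instead of raising.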
You can have a variable that tracks whether the function executed properly or ended in an exception.
def func_A():
    function_state = True
    try:
        ...  # do some stuff
    except Exception as e:
        log.error("there was an exception %s", str(e))
        function_state = False
    return function_state
Use assertTrue to validate function_state.
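For example, in a unittest-style test (the int() call stands in for "do some stuff"; the logger setup is assumed):

```python
import logging
import unittest

log = logging.getLogger(__name__)

def func_A(data):
    function_state = True
    try:
        int(data)  # stand-in for "do some stuff"
    except Exception as e:
        log.error("there was an exception %s", str(e))
        function_state = False
    return function_state

class TestFuncA(unittest.TestCase):
    def test_clean_run(self):
        self.assertTrue(func_A('1'))

    def test_swallowed_exception(self):
        self.assertFalse(func_A('oops'))
```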
Option 1: Don't. This is testing an implementation detail. Try to write your test suite so that you verify the function does everything you need it to do. If it does what you want with the inputs you want, you're good.
Option 2: You can modify the function to take a logger as a parameter. Then in the test case, pass in a mock object and check that the logging method gets called.
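A sketch of the injected-logger variant (again with int() standing in for the real work):

```python
from unittest.mock import Mock

def func_A(data, logger):
    # The logger is a parameter, so a test can pass in a Mock
    # instead of patching a module-level object.
    try:
        return int(data)  # stand-in for the real work
    except Exception as e:
        logger.error("there was an exception %s", str(e))

mock_logger = Mock()
func_A('42', mock_logger)
mock_logger.error.assert_not_called()   # nothing was swallowed

func_A('oops', mock_logger)
mock_logger.error.assert_called_once()  # the failure was logged
```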
I am hitting an external API that allows 1 request/sec, so to avoid request collisions I am using the retrying module in Python. I want to retry on a generic exception and wait a random amount of time between attempts.
@retry(retry_on_exception=exceptions.Exception, wrap_exception=True,
       wait_random_min=1000, wait_random_max=2500)
def do_some_stuff(info):
    ...  # some stuff that can throw an exception
Does this seem like a robust solution to my problem? I am not sure. Also, is this the correct way to handle a generic exception in the retry decorator? I am using it exactly like the above code and it does not throw any errors, but I am not sure. All the examples I have seen use specific exceptions.
Update: the exception I am getting:

    reject |= self._retry_on_exception(attempt.value[1])
    TypeError: unsupported operand type(s) for |=: 'bool' and 'exceptions.Exception'
You're not using the retry_on_exception parameter correctly. It expects a callable and that callable must return a boolean. You can see an example of this in the package doc:
def retry_if_io_error(exception):
    """Return True if we should retry (in this case when it's an IOError), False otherwise"""
    return isinstance(exception, IOError)

@retry(retry_on_exception=retry_if_io_error)
def might_io_error():
    print("Retry forever with no wait if an IOError occurs, raise any other errors")
So in your case you could check for Exception instances instead of IOError, with the test isinstance(exception, Exception). But notice that this will always be true, so it's sort of pointless. @retry's default behavior is to retry regardless of what the exception is, so you actually have nothing to add:
@retry(wait_random_min=1000, wait_random_max=2500)
def do_some_stuff(info):
    ...  # some stuff that can throw an exception
But it's generally a bad idea to treat all types of exceptions the same way. In your case it looks like you should only retry on some HttpError or whatever similar error your code raises when you hit the API limit.
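A minimal sketch of that advice without the retrying package, retrying only on one invented RateLimitError type and re-raising everything else (all names here are illustrative, not part of any real API):

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for whatever your code raises on the 1 request/sec limit."""

def retry_on(exc_type, attempts=3, wait_min=1.0, wait_max=2.5):
    # Retry only on exc_type; any other exception propagates immediately.
    def wrap(f):
        def wrapped(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return f(*args, **kwargs)
                except exc_type:
                    if attempt == attempts - 1:
                        raise  # out of attempts: give up
                    time.sleep(random.uniform(wait_min, wait_max))
        return wrapped
    return wrap

@retry_on(RateLimitError, wait_min=0.0, wait_max=0.0)
def do_some_stuff(info):
    ...  # call the external API here
```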
I have a python module containing functions and a few classes. This module is basically used as a tool-set by several of my co-workers.
I want to set up a sort of bug-reporting system where, any time someone generates an exception that I don't handle, an email is sent with information on the exception. This way I can continually improve the robustness of my code and the helpfulness of my own error messages. Is the best way to do this just to put a try/except block around the entire module?
There are several reasons I think your approach might not be the best.
Sometimes exceptions should be thrown. For example, if I pass some stupid argument to a function, it should complain by throwing an exception. You don't want to get an email every time someone passes a string instead of an integer, etc. do you?
Besides, wrapping the entire thing in a try...except won't work, as that will only catch exceptions that occur while the classes/functions are being defined (when your module is loaded/imported). For example:
# Your python library
try:
    def foo():
        raise Exception('foo exception')
        return 42
except Exception as e:
    print('Handled: ', e)

# A consumer of your library
foo()
The exception is still uncaught.
I guess you can make your own SelfMailingException and subclass it. Not that I would recommend this approach.
Another option:
def raises(*exception_list):
    def wrap(f):
        def wrapped_f(*args, **kwargs):
            try:
                return f(*args, **kwargs)
            except Exception as e:
                if not isinstance(e, tuple(exception_list)):
                    print('send mail')
                    # send mail
                raise
        return wrapped_f
    return wrap
usage:
@raises(MyException)
def foo():
    ...