Decorated generator function - python

I have a decorator:
def remediation_decorator(dec_mthd):
    def new_func(*args, **kwargs):
        try:
            return dec_mthd(*args, **kwargs)
        except (KeyError, HTTPError) as err:
            print(f'error = {err}... call the remediation function')
    return new_func
Inside the generator function, another function is called to raise specific exceptions under certain conditions:
def check(number):
    if number == 1:
        raise HTTPError
    if number == 2:
        raise KeyError
This generator function is decorated like so:
@remediation_decorator
def dec_mthd_b(number):
    check(number)
    for i in range(0, 3):
        yield i + 1
When an exception is raised by the check function, the decorator's except clause is never reached.
[ins] In [16]: dec_mthd_b(1)
Out[16]: <generator object dec_mthd_b at 0x10e79cc80>
It appears to behave like this because it's a generator function. From the docs on yield expressions:
When a generator function is called, it returns an iterator known as a generator.
(I wonder whether to take this literally: does it return the iterator first, irrespective of any other logic in the function, and is that why check() does not raise the exception?)
and,
By suspended, we mean that all local state is retained, including the current bindings of local variables, the instruction pointer, the internal evaluation stack, and the state of any exception handling.
Have I understood this correctly? Please can anyone explain this further?

Yes, you got it.
@remediation_decorator is syntactic sugar in Python for applying a decorator. I'm going to use the verbose form instead:
def dec_mthd_b(number):
    check(number)
    for i in range(0, 3):
        yield i + 1

dec_mthd_b = remediation_decorator(dec_mthd_b)
What does this last line do? remediation_decorator is your decorator; it gives you back the inner function, in your case new_func.
What is new_func? It is a normal function; when you call it, its body runs.
What does new_func return? dec_mthd(*args, **kwargs).
Here dec_mthd points to dec_mthd_b, which is again a function. But because dec_mthd_b has the yield keyword inside, calling it gives you back a generator object.
Now here is the point: the body of your inner function, new_func, executes without any problem. You get your generator object back, and no error is raised...
# this is the result of calling the inner function, which gives you the generator object
gen = dec_mthd_b(1)

# here is where you are going to face the exceptions
for i in gen:
    print(i)
What happens in the for loop? Python runs the body of dec_mthd_b, and the error is raised from there...
So in order to catch the exceptions, you have two options: either catch them inside dec_mthd_b itself, or around the final for loop. (A third, decorator-based option is sketched below.)
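A third option, not in the original answer, is to make the decorator itself generator-aware: have new_func iterate over the wrapped generator inside its try block and re-yield the values. A minimal sketch, assuming HTTPError is the one from the requests library (which can be raised without arguments):

from requests.exceptions import HTTPError  # assumption: the HTTPError being raised

def remediation_decorator(dec_mthd):
    def new_func(*args, **kwargs):
        try:
            # iterating here forces the wrapped generator's body to run
            # inside the try block, so its exceptions are caught
            yield from dec_mthd(*args, **kwargs)
        except (KeyError, HTTPError) as err:
            print(f'error = {err}... call the remediation function')
    return new_func

Since new_func now contains a yield itself, it is also a generator function, so for i in dec_mthd_b(1) prints the error message instead of letting the exception propagate.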

You have understood it correctly. When a generator is created, nothing in its body gets executed; you need to fetch the next element, and only then will the exception be raised. The code below throws:
g = dec_mthd_b(1)
next(g)  # raises HTTPError
In fact, that is how iteration works: next() is called repeatedly until a StopIteration exception is raised.
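As a rough illustration, a for loop over the generator behaves much like this manual version:

gen = dec_mthd_b(3)  # 3 passes check() without raising

while True:
    try:
        i = next(gen)   # runs the generator body up to the next yield
    except StopIteration:
        break           # generator exhausted: this is where a for loop stops
    print(i)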

Related

How to catch StopIteration from subgenerator

I'd like to write a generator which can accept a limited number of inputs via yield and then gracefully handle further inputs. What's the best way of catching StopIteration?
I've tried wrapping my inner generator with an outer generator using a yield from expression inside a try-except block, but StopIteration gets raised anyway...
def limited_writer(max_writes):
    for i in range(max_writes):
        x = yield
        print(x)

def graceful_writer(l):
    try:
        yield from l
    except StopIteration:
        # Ideally will have additional handling logic here
        raise Exception("Tried to write too much")

l_w = limited_writer(4)
g_w = graceful_writer(l_w)
g_w.send(None)
for i in range(5):
    g_w.send(i)
I'd like the above to raise Exception (or, more generally, to provide a nice way of handling too much data being sent), but in fact it still raises StopIteration. What's the best solution?
If you want graceful_writer to keep accepting data that is sent to it via its .send() method, it needs to keep on yielding indefinitely. The try/except block you currently have doesn't actually do anything; the yield from statement already absorbs the StopIteration from limited_writer. The one you are seeing at the top level comes from graceful_writer itself, when it reaches the end of its code.
To avoid that, try using an infinite loop, like this:
def graceful_writer(gen):
    yield from gen  # send values to the wrapped generator for as long as it will take them
    while True:
        yield  # then loop forever, discarding any additional values sent in
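With that version, the driver code from the question runs to completion; a quick check, assuming the corrected wiring graceful_writer(l_w):

l_w = limited_writer(4)
g_w = graceful_writer(l_w)
g_w.send(None)       # prime the generator to the first yield
for i in range(10):
    g_w.send(i)      # prints 0 to 3; the remaining sends are silently discarded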

Python Define IFERROR function

I'm trying to define my own IFERROR function in python like in Excel. (Yes, I know I can write try/except. I'm just trying to create an inline shorthand for a try/except pattern I often use.) The current use case is trying to get several attributes of some remote tables. The module used to connect to them gives a variety of errors and if that happens, I simply want to record that an error was hit when attempting to get that attribute.
What I've tried:
A search revealed a number of threads, the most helpful of which were:
Frequently repeated try/except in Python
Python: try-except as an Expression?
After reading these threads, I tried writing the following:
>>> def iferror(success, failure, *exceptions):
...     try:
...         return success
...     except exceptions or Exception:
...         return failure
...
>>> iferror(1/0, 0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ZeroDivisionError: division by zero
I also tried using a context manager (new to me):
>>> from contextlib import contextmanager as cm
>>> @cm
... def iferror(failure, *exceptions):
...     try:
...         yield
...     except exceptions or Exception:
...         return failure
...
>>> with iferror(0, ZeroDivisionError) as x:
...     x = 1/0
...
>>> print(x)
None
Is there a way to define a function which will perform a predefined try/except pattern like IFERROR?
The problem with iferror(1/0,0) is that function arguments are evaluated before the function is entered (this is the case in most programming languages, the one big exception being Haskell). No matter what iferror does, 1/0 runs first and throws an error.
We must somehow delay the evaluation of 1/0 so it happens inside the function, in the context of a try block. One way is to use a string (iferror('1/0', 1)) that iferror can then eval. But eval should be avoided where possible, and there is a more lightweight alternative: function bodies are not evaluated until the function is called, so we can just wrap our expression in a function and pass that:
def iferror(success, failure, *exceptions):
    try:
        return success()
    except exceptions or Exception:
        return failure

def my_expr():
    return 1/0

print(iferror(my_expr, 42))
42
The crucial part here is that we don't call my_expr directly. We pass it as a function into iferror, which then invokes success(), which ends up executing return 1/0.
The only problem is that we had to pull the function argument (1/0) out of the normal flow of code and into a separate function definition, which we had to give a name (even though it's only used once).
These shortcomings can be avoided by using lambda, which lets us define single-expression functions inline:
def iferror(success, failure, *exceptions):
    try:
        return success()
        #             ^^
    except exceptions or Exception:
        return failure

print(iferror(lambda: 1/0, 42))
#             ^^^^^^^
42
Compared to your original attempt, only two changes were necessary: wrap the expression in a lambda, which delays evaluation, and add () in return success() to call the lambda, which triggers evaluation of the function body.
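The *exceptions parameter also works as intended with this version. For example, to catch only KeyError (the names here are illustrative):

d = {'a': 1}

# KeyError is listed, so the fallback is returned
print(iferror(lambda: d['missing'], 'n/a', KeyError))   # n/a

# ZeroDivisionError is not listed, so it propagates
# print(iferror(lambda: 1/0, 'n/a', KeyError))          # would raise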
Well, I found a way, but I'm not quite sure if it's what you are looking for.
First of all, the error occurs as soon as the argument expression is evaluated, so your function never starts at all! So I made the first parameter of the function a string.
The function then takes your "test" and evaluates it with eval(). Here is my code:
def iferror(success: str, failure, *exceptions):
    try:
        # evaluate the "test" expression string
        return eval(success)
    except exceptions or Exception:
        return failure

iferror("1/0", "Hi there!")
I hope I could help you.
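One caveat with the eval() approach, worth noting: the string is evaluated in iferror's scope, not the caller's, so an expression that references the caller's local variables fails with a NameError (which this iferror then silently catches). A hypothetical example:

def compute():
    x = 1
    # 'x' is not visible inside iferror, so eval() raises NameError,
    # which is caught and the fallback is returned instead of 2
    return iferror("x + 1", "fallback")

print(compute())  # fallback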

How to check whether an exception was raised in the current scope?

I use the following code to call an arbitrary callable f() with appropriate number of parameters:
try:
    res = f(arg1)
except TypeError:
    res = f(arg1, arg2)
If f() is a two parameter function, calling it with just one parameter raises TypeError, so the function can be called properly in the except branch.
The problem is when f() is a one-parameter function and the exception is raised in the body of f() (possibly because of a bad call to some function), for example:
def f(arg):
    map(arg)  # raises TypeError
The control flow goes to the except branch because of the internal error of f(). Of course, calling f() with two arguments raises a new TypeError. Then, instead of a traceback to the original error, I get a traceback to the call of f() with two parameters, which is much less helpful when debugging.
How can my code recognize exceptions that were not raised in the current scope, so it can re-raise them?
I want to write the code like this:
try:
    res = f(arg1)
except TypeError:
    if exceptionRaisedNotInTheTryBlockScope():  # <-- subject of the question
        raise
    res = f(arg1, arg2)
I know I can use a workaround by adding exc_info = sys.exc_info() in the except block.
One of the assumptions is that I have no control over f(), since it is supplied by the user of my module. Its __name__ attribute may also be something other than 'f'. The internal exception may even be raised by a bad recursive call to f().
The workaround is unsuitable since it complicates debugging by the author of f().
You can capture the exception object and examine it.
try:
    res = f(arg1)
except TypeError as e:
    if "f() missing 1 required positional argument" in e.args[0]:
        res = f(arg1, arg2)
    else:
        raise
Frankly, though, not going the extra length to classify the exception should work fine as you should be getting both the original traceback and the secondary traceback if the error originated inside f() -- debugging should not be a problem.
Also, if you have control over f() you can make the second argument optional and not have to second-guess it:
def f(a, b=None):
    pass
Now you can call it either way.
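For example, a sketch of such a signature (the body is purely illustrative):

def f(a, b=None):
    if b is None:
        return a      # one-argument call
    return a + b      # two-argument call

f(1)     # works
f(1, 2)  # also works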
How about this:
import inspect

if len(inspect.getargspec(f).args) == 1:
    res = f(arg1)
else:
    res = f(arg1, arg2)
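Note that inspect.getargspec was long deprecated and removed in Python 3.11; on modern Python the same check can be written with inspect.signature (a sketch, assuming f is a plain function):

import inspect

# count f's parameters (this includes keyword-only parameters, if any)
if len(inspect.signature(f).parameters) == 1:
    res = f(arg1)
else:
    res = f(arg1, arg2)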
Finally figured it out:
import sys

def exceptionRaisedNotInTheTryBlockScope():
    return sys.exc_info()[2].tb_next is not None
sys.exc_info() returns a 3-element tuple whose last element is the traceback of the last exception. If that traceback object is the only one in the traceback chain, then the exception was raised in the scope of the try block (https://docs.python.org/2/reference/datamodel.html).
According to https://docs.python.org/2/library/sys.html#sys.exc_info, storing the traceback value should be avoided.
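A quick demonstration of the helper above, with a hypothetical one-argument f that fails internally:

def f(arg):
    map(arg)  # TypeError is raised one frame deeper, inside f()

try:
    res = f(1)
except TypeError:
    if exceptionRaisedNotInTheTryBlockScope():
        raise  # internal error of f(): re-raised with the original traceback
    res = f(1, 2)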

How do you assert two functions throw the same error without knowing the error?

I have an outer function that calls an inner function by passing the arguments along. Is it possible to test that both functions throw the same exception/error without knowing the exact error type?
I'm looking for something like:
def test_invalidInput_throwsSameError(self):
    arg = 'invalidarg'
    self.assertRaisesSameError(
        innerFunction(arg),
        outerFunction(arg)
    )
Assuming you're using unittest (and python2.7 or newer) and that you're not doing something pathological like raising old-style class instances as errors, you can get the exception from the error context if you use assertRaises as a context manager.
with self.assertRaises(Exception) as err_context1:
    innerFunction(arg)

with self.assertRaises(Exception) as err_context2:
    outerFunction(arg)

# Or some other measure of "sameness"
self.assertEqual(
    type(err_context1.exception),
    type(err_context2.exception))
First call the first function and capture what exception it raises:
try:
    innerfunction(arg)
except Exception as err:
    e = err  # keep a reference: the 'as' name is cleared when the except block ends in Python 3
else:
    e = None
Then assert that the other function raises the same exception (assertRaises expects the exception class, not an instance):
self.assertRaises(type(e), outerfunction, arg)
I believe this will do what you want; just add it as another method of your TestCase class:
def assertRaisesSameError(self, *funcs, bases=(Exception,)):
    exceptions = []
    for func in funcs:
        with self.assertRaises(bases) as error_context:
            func()
        exceptions.append(error_context.exception)
    for exc in exceptions:
        self.assertEqual(type(exc), type(exceptions[-1]))
I haven't tested this, but it should work. For simplicity this only takes functions with no arguments, so if your function does have arguments, you'll have to make a wrapper (even just with a lambda). It would not be hard at all to expand this to allow for passing in the *args and **kwargs.
Let me know of any changes anyone thinks should be made.
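Usage would then look like the test from the question, with the calls wrapped in lambdas to delay them (innerFunction and outerFunction being the question's hypothetical functions):

def test_invalidInput_throwsSameError(self):
    arg = 'invalidarg'
    self.assertRaisesSameError(
        lambda: innerFunction(arg),
        lambda: outerFunction(arg),
    )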

How to unit test a function that does not return anything?

Is it possible to find the values of the local variables in a function by mocking?
class A:
    def f(self):
        a = 5000 * 10
        B().someFunction(a)
How do I write a unit test for this function? I have mocked someFunction as I do not want the testing scope to go outside the block. The only way I can test the rest of the function is by checking if the value of variable a is 50000 at the end of the function. How do I do this?
A function that does not return anything, does not modify anything, and does not raise any error is a function that basically has no reason to exist.
- If your function is supposed to assert something and raise an error, give it wrong information and check that it raises the right error.
- If your function takes an object and modifies it, test that the new state of your object is as expected.
- If your function outputs something to the console, you can temporarily redirect the input/output stream and test what is written/read.
- If none of the above, just delete your function and forget about it :)
With interaction testing, you could check what value someFunction was called with. What happens inside that function should be tested in the unit test of that function.
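A minimal sketch with unittest.mock, assuming A and B live in a module named mymodule (A, B, and someFunction are from the question; the module name is an assumption):

from unittest import TestCase
from unittest.mock import patch

from mymodule import A  # hypothetical module containing A and B

class TestA(TestCase):
    @patch('mymodule.B')  # replace B inside mymodule with a mock
    def test_f_passes_computed_value(self, MockB):
        A().f()
        # someFunction should receive the locally computed value 5000 * 10
        MockB.return_value.someFunction.assert_called_once_with(50000)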
Put the body of the function in a try/except block and return True if it succeeds; this does not break the existing code. For example:
def fun():
    try:
        # function code
        return True
    except Exception as e:
        raise Exception(str(e))
