I have a function that has a return value. I'd like the function to behave differently when the return value of the function is assigned to something.
So something like this:
def func():
    if return_value_assigned():
        print("yes")
    else:
        print("no")
    return 0
# this should print "no"
func()
# this should print "yes"
a = func()
I've tried to use this method:
import inspect
import re

frame_stack = inspect.getouterframes(inspect.currentframe())
code_context = frame_stack[2].code_context[0]
is_return_value_assigned = re.match(r"[\w+ ,]*=", code_context) is not None
However, apparently, code_context only stores the last line of the function call. So for example, this wouldn't work:
a = func(*args,
         **kwargs)
Is there a better way of doing this? Is it possible to get the code context of the entire function call instead of just the last line?
The only option that makes any sense, with regard to every programming standard on caller vs. callee responsibilities I've ever seen, is something like this:
def foo(return_value=True):
    if return_value:
        print("yes")
    else:
        print("no")
    return 0
Otherwise you're asking the called function to extract information from the caller, which seems like a really bad idea for many reasons.
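For completeness, here is how the explicit flag reads at the call site (a minimal sketch using the foo above):

foo(return_value=False)        # prints "no"
a = foo(return_value=True)     # prints "yes"; a == 0

The caller states its intent explicitly instead of being introspected, which keeps the responsibility on the right side of the call.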
I wrote my own C module for Python, and for a custom table in its documentation I need the number of parameters of the built-in functions at runtime.
There are functions in Python 2 like inspect.getargspec and in Python 3 like inspect.signature which support normal Python functions but leave built-in functions unsupported.
There are two other community solutions so far:
Parsing the doc-strings
Parsing the original *.c file
See the answer below for a third approach.
In some cases the docstrings are outdated and/or it's hard to extract the argument count since the docstring can be any plain string. Parsing the original *.c file is a good approach as well, but you might not have access to it.
Below is the working solution I came up with for Python 2 and 3.
What does it do?
At runtime a list of 999 None objects gets passed to the function in question. One of the first checks in the internal argument-parsing function PyArg_ParseTuple verifies that the number of passed arguments matches the number of expected parameters - if not, it fails. That means we do call the function, but we can be sure its body never really executes.
Technical background:
Why is it so hard to get the count of parameters of built-in functions? The problem is that the parameter list is evaluated during runtime, not compile time. A very simple example of a built-in function in C looks like this:
static PyObject* example(PyObject *self, PyObject *args)
{
    int myFirstParam;
    if(!PyArg_ParseTuple(args, "i", &myFirstParam))
        return NULL;
    ...
}
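For reference, this is the kind of TypeError message the solution below parses; the exact wording varies across Python versions, which is why several regex variants are needed. A minimal sketch:

import os

try:
    os.chdir(*([None] * 999))  # far too many arguments on purpose
except TypeError as e:
    print(e)  # e.g. "chdir() takes exactly 1 argument (999 given)"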
Copy and Paste Solution:
import inspect
import re
import types
import sys

def get_parameter_count(func):
    """Count the parameters of a function.

    Supports Python functions (and built-in functions).
    If a function takes *args, then -1 is returned.

    Example:
        import os
        arg = get_parameter_count(os.chdir)
        print(arg)  # Output: 1

    -- For C devs:
    In CPython, some built-in functions defined in C provide
    no metadata about their arguments. That's why we pass a
    list with 999 None objects (an arbitrarily chosen size) to
    them and expect the underlying PyArg_ParseTuple to fail
    with a corresponding error message.
    """
    # If the function is a built-in function we use our
    # approach; if it's an ordinary Python function we
    # fall back to the built-in inspection functions
    # (see the else branch).
    if isinstance(func, types.BuiltinFunctionType):
        try:
            arg_test = 999
            s = [None] * arg_test
            func(*s)
        except TypeError as e:
            message = str(e)
            found = re.match(
                r"[\w]+\(\) takes ([0-9]{1,3}) positional argument[s]* but " +
                str(arg_test) + " were given", message)
            if found:
                return int(found.group(1))

            if "takes no arguments" in message:
                return 0
            elif "takes at most" in message:
                found = re.match(
                    r"[\w]+\(\) takes at most ([0-9]{1,3}).+", message)
                if found:
                    return int(found.group(1))
            elif "takes exactly" in message:
                # The message can contain 'takes 1' or 'takes one',
                # depending on the Python version.
                found = re.match(
                    r"[\w]+\(\) takes exactly ([0-9]{1,3}|[\w]+).+", message)
                if found:
                    return 1 if found.group(1) == "one" \
                        else int(found.group(1))
        return -1  # *args
    else:
        try:
            if sys.version_info >= (3, 0):
                argspec = inspect.getfullargspec(func)
            else:
                argspec = inspect.getargspec(func)
        except TypeError:
            raise TypeError("unable to determine parameter count")
        return -1 if argspec.varargs else len(argspec.args)

def print_get_parameter_count(mod):
    for x in dir(mod):
        e = mod.__dict__.get(x)
        if isinstance(e, types.BuiltinFunctionType):
            print("{}.{} takes {} argument(s)".format(
                mod.__name__, e.__name__, get_parameter_count(e)))

import os
print_get_parameter_count(os)
Output:
os._exit takes 1 argument(s)
os.abort takes 0 argument(s)
os.access takes 2 argument(s)
os.chdir takes 1 argument(s)
os.chmod takes 2 argument(s)
os.close takes 1 argument(s)
...
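As a quick sanity check (hedged: the message formats this parser relies on have changed across Python versions, so the results may differ on other releases):

print(get_parameter_count(len))        # expected: 1
print(get_parameter_count(os.getcwd))  # expected: 0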
At the core, what I'm trying to do is take a number of functions that look like this undecorated validation function:
def f(k: bool):
    def g(n):
        # check that n is valid
        return n
    return g
And make them look like this decorated validation function:
@k
def f():
    def g(n):
        # check that n is valid
        return n
    return g
The idea here is that k describes the same functionality across all of the implementing functions.
Specifically, these functions are all returning 'validation' functions for use with the voluptuous validation framework. So all the functions of type f() are returning a function that is later executed by Schema(). k is actually allow_none, which is to say a flag that determines if a None value is ok. A very simple example might be this sample use code:
x = "Some input value."
y = None
input_validator = Schema(f(allow_none=True))
x = input_validator(x) # succeeds, returning x
y = input_validator(y) # succeeds, returning None
input_validator_no_none = Schema(f(allow_none=False))
x = input_validator(x) # succeeds, returning x
y = input_validator(y) # raises an Invalid
Without changing the sample use code I am attempting to achieve the same result by changing the undecorated validation functions to decorated validation functions. To give a concrete example, changing this:
def valid_identifier(allow_none: bool=True):
    min_range = Range(min=1)
    validator = Any(All(int, min_range), All(Coerce(int), min_range))
    return Any(validator, None) if allow_none else validator
To this:
@allow_none(default=True)
def valid_identifier():
    min_range = Range(min=1)
    return Any(All(int, min_range), All(Coerce(int), min_range))
The function returned from these two should be equivalent.
What I've tried to write is this, utilizing the decorator library:
from decorator import decorator

@decorator
def allow_none(default: bool=True):
    def decorate_validator(wrapped_validator, allow_none: bool=default):
        @wraps(wrapped_validator)
        def validator_allowing_none(*args, **kwargs):
            if allow_none:
                return Any(None, wrapped_validator)
            else:
                return wrapped_validator(*args, **kwargs)
        return validator_allowing_none
    return decorate_validator
And I have a unittest.TestCase in order to test if this works as expected:
@allow_none()
def test_wrapped_func():
    return Schema(str)

class TestAllowNone(unittest.TestCase):
    def test_allow_none__success(self):
        test_string = "blah"
        validation_function = test_wrapped_func(allow_none=False)
        self.assertEqual(test_string, validation_function(test_string))
        self.assertEqual(None, validation_function(None))
But my test returns the following failure:
    def validate_callable(path, data):
        try:
>           return schema(data)
E           TypeError: test_wrapped_func() takes 0 positional arguments but 1 was given
I tried debugging this, but couldn't get the debugger to actually enter the decoration. I suspect that because of naming issues, such as raised in this (very lengthy) blog post series, test_wrapped_func isn't getting its argument list properly set, and so the decorator is never even executed, but it may also be something else entirely.
I tried some other variations. By removing the function parentheses from #allow_none:
@allow_none
def test_wrapped_func():
    return Schema(str)
I get a different error:
> validation_function = test_wrapped_func(allow_none=False)
E TypeError: test_wrapped_func() got an unexpected keyword argument 'allow_none'
Dropping the @decorator fails with:
> validation_function = test_wrapped_func(allow_none=False)
E TypeError: decorate_validator() missing 1 required positional argument: 'wrapped_validator'
Which makes sense because @allow_none takes an argument, and so the parentheses would logically be needed. Replacing them gives the original error.
Decorators are subtle, and I'm clearly missing something here. This is similar to currying a function, but it's not quite working. What am I missing about how this should be implemented?
I think you are putting your allow_none=default argument at the wrong nesting level. It should be on the innermost function (the wrapper), rather than the decorator (the middle level).
Try something like this:
def allow_none(default=True):  # this is the decorator factory
    def decorator(validator):  # this is the decorator
        @wraps(validator)
        def wrapper(*args, allow_none=default, **kwargs):  # this is the wrapper
            if allow_none:
                # build the wrapped validator, then allow None alongside it
                return Any(None, validator(*args, **kwargs))
            else:
                return validator(*args, **kwargs)
        return wrapper
    return decorator
If you don't need the default to be settable, you can get rid of the outermost layer of nesting and just make the default value a constant in the wrapper function (or omit it if your callers will always pass a value). Note that as I wrote it above, the allow_none argument to the wrapper is a keyword-only argument. If you want to pass it as a positional parameter, you can move it ahead of *args, but that requires that it be the first positional argument, which may not be desirable from an API standpoint. More sophisticated solutions are probably possible, but overkill for this answer.
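As a usage sketch (hedged: assumes voluptuous's Schema, Any, All, Range and Coerce are imported, plus from functools import wraps for the decorator above):

@allow_none(default=True)
def valid_identifier():
    min_range = Range(min=1)
    return Any(All(int, min_range), All(Coerce(int), min_range))

# The flag is now supplied at call time, as in the question's sample code:
Schema(valid_identifier(allow_none=True))(None)   # returns None
Schema(valid_identifier(allow_none=False))(5)     # returns 5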
Let's say I have the following function/method, which calculates a bunch of stuff and then sets a lot of variables/attributes: calc_and_set(obj).
Now what I would like to do is call the function several times with different objects, and if one or more calls fails, nothing should be set at all.
I thought I could do it like this:
try:
    calc_and_set(obj1)
    calc_and_set(obj2)
    calc_and_set(obj3)
except:
    pass
But this obviously doesn't work. If for instance the error happens in the third call to the function, then the first and second call will already have set the variables.
Can anyone think of a "clean" way of doing what I want? The only solutions I can think of are rather ugly workarounds.
I see a few options here.
A. Have a "reverse function" that robustly undoes the changes. So if
def calc_and_set(obj):
    obj.A = 'a'

def unset(obj):
    if hasattr(obj, 'A'):
        del obj.A
and
try:
    calc_and_set(obj1)
    calc_and_set(obj2)
except:
    unset(obj1)
    unset(obj2)
Notice that in this case unset doesn't care whether calc_and_set completed successfully or not.
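A slightly more general rollback might record which attributes calc_and_set creates; a sketch, assuming a known attribute list (the ATTRS list here is hypothetical):

ATTRS = ['A', 'B', 'C']  # hypothetical: every attribute calc_and_set may create

def unset(obj):
    # Safe to call whether or not calc_and_set ran to completion.
    for name in ATTRS:
        if hasattr(obj, name):
            delattr(obj, name)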
B. Separate calc_and_set into try_calc_and_set, which tests whether it would work, and set, which won't throw errors and is called only if none of the try_calc_and_set calls failed.
try:
    try_calc_and_set(obj1)
    try_calc_and_set(obj2)
    calc_and_set(obj1)
    calc_and_set(obj2)
except:
    pass
C. (my favorite) - have calc_and_set return a new object instead of operating in place. If successful, replace the original reference with the new one. This could easily be done by making a copy the first statement in calc_and_set, and then returning that copy.
try:
    obj1_t = calc_and_set(obj1)
    obj2_t = calc_and_set(obj2)

    obj1 = obj1_t
    obj2 = obj2_t
except:
    pass
The mirror image of that one is of course to save copies of your objects beforehand:
obj1_c = deepcopy(obj1)
obj2_c = deepcopy(obj2)
try:
    calc_and_set(obj1)
    calc_and_set(obj2)
except:
    obj1 = obj1_c
    obj2 = obj2_c
And as a general comment (if this is just sample code, forgive me): don't write bare excepts without specifying an exception type.
You can also try caching the actions you want to take and then doing them all in one go if every object passes:
from functools import partial

def do_something(obj):
    pass  # magic here: the deferred action for obj

def validate(obj):
    if obj.is_what_you_want():
        return partial(do_something, obj)  # bind the action now, run it later
    else:
        raise ValueError("unable to process %s" % obj)

instructions = [validate(item) for item in your_list_of_objects]
for each_partial in instructions:
    each_partial()
The operations will only get fired if the list comprehension completes without any exceptions. You could wrap that for exception safety:
try:
    instructions = [validate(item) for item in your_list_of_objects]
    for each_partial in instructions:
        each_partial()
    print("succeeded")
except ValueError:
    print("failed")
If there is no "built-in" way of doing this, I think after all the "cleanest" solution is to divide the function in two parts. Something Like this:
try:
    res1 = calc(obj1)
    res2 = calc(obj2)
    res3 = calc(obj3)
except:
    pass
else:
    set_values(obj1, res1)
    set_values(obj2, res2)
    set_values(obj3, res3)
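A minimal sketch of that split (the attribute names are illustrative; the point is that calc may raise while set_values only assigns):

def calc(obj):
    # all the failure-prone computation happens here
    return {"area": obj.width * obj.height}

def set_values(obj, results):
    # pure attribute assignment; does not raise for plain objects
    for name, value in results.items():
        setattr(obj, name, value)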
This is a follow-up to Handle an exception thrown in a generator and discusses a more general problem.
I have a function that reads data in different formats. All formats are line- or record-oriented and for each format there's a dedicated parsing function, implemented as a generator. So the main reading function gets an input and a generator, which reads its respective format from the input and delivers records back to the main function:
def read(stream, parsefunc):
    for record in parsefunc(stream):
        do_stuff(record)
where parsefunc is something like:
def parsefunc(stream):
    while not eof(stream):
        rec = read_record(stream)
        # do some stuff
        yield rec
The problem I'm facing is that while parsefunc can throw an exception (e.g. when reading from a stream), it has no idea how to handle it. The function responsible for handling exceptions is the main read function. Note that exceptions occur on a per-record basis, so even if one record fails, the generator should continue its work and yield records back until the whole stream is exhausted.
In the previous question I tried to put next(parsefunc) in a try block, but as it turned out, this is not going to work. So I have to add try-except to parsefunc itself and then somehow deliver exceptions to the consumer:
def parsefunc(stream):
    while not eof(stream):
        try:
            rec = read_record()
            yield rec
        except Exception as e:
            ?????
I'm rather reluctant to do this because
it makes no sense to use try in a function that isn't intended to handle any exceptions
it's unclear to me how to pass exceptions to the consuming function
there are going to be many formats and many parsefuncs; I don't want to clutter them with too much helper code.
Has anyone suggestions for a better architecture?
A note for googlers: in addition to the top answer, pay attention to senderle's and Jon's posts - very smart and insightful stuff.
You can return a tuple of record and exception in the parsefunc and let the consumer function decide what to do with the exception:
import random

def get_record(line):
    num = random.randint(0, 3)
    if num == 3:
        raise Exception("3 means danger")
    return line

def parsefunc(stream):
    for line in stream:
        try:
            rec = get_record(line)
        except Exception as e:
            yield (None, e)
        else:
            yield (rec, None)

if __name__ == '__main__':
    with open('temp.txt') as f:
        for rec, e in parsefunc(f):
            if e:
                print("Got an exception %s" % e)
            else:
                print("Got a record %s" % rec)
Thinking more deeply about what would happen in a more complex case rather vindicates the Python choice of not letting a generator be resumed after an exception has bubbled out of it.
If I got an I/O error from a stream object the odds of simply being able to recover and continue reading, without the structures local to the generator being reset in some way, would be low. I would somehow have to reconcile myself with the reading process in order to continue: skip garbage, push back partial data, reset some incomplete internal tracking structure, etc.
Only the generator has enough context to do that properly. Even if you could keep the generator context, having the outer block handle the exceptions would totally flout the Law of Demeter. All the important information that the surrounding block needs to reset and move on is in local variables of the generator function! And getting or passing that information, though possible, is disgusting.
The resulting exception would almost always be thrown after cleaning up, in which case the reader-generator will already have an internal exception block. Trying very hard to maintain this cleanliness in the brain-dead-simple case only to have it break down in almost every realistic context would be silly. So just have the try in the generator, you are going to need the body of the except block anyway, in any complex case.
It would be nice if exceptional conditions could look like exceptions, though, and not like return values. So I would add an intermediate adapter to allow for this: the generator would yield either data or exceptions, and the adapter would re-raise the exception if applicable. The adapter should be called first thing inside the for loop, so that we have the option of catching it within the loop and cleaning up to continue, or breaking out of the loop to catch it and abandon the process. And we should put some kind of lame wrapper around the setup to indicate that tricks are afoot, and to force the adapter to get called if the function is adapting.
That way each layer is presented errors that it has the context to handle, at the expense of the adapter being a tiny bit intrusive (and perhaps also easy to forget).
So we would have:
def read(stream, parsefunc):
    try:
        for source in frozen(parsefunc(stream)):
            try:
                record = source.thaw()
                do_stuff(record)
            except Exception, e:
                log_error(e)
                if not is_recoverable(e):
                    raise
                recover()
    except Exception, e:
        properly_give_up()
    wrap_up()
(Where the two try blocks are optional.)
The adapter looks like:
class Frozen(object):
    def __init__(self, item):
        self.value = item

    def thaw(self):
        if isinstance(self.value, Exception):
            raise self.value
        return self.value

def frozen(generator):
    for item in generator:
        yield Frozen(item)
And parsefunc looks like:
def parsefunc(stream):
    while not eof(stream):
        try:
            rec = read_record(stream)
            do_some_stuff()
            yield rec
        except Exception, e:
            properly_skip_record_or_prepare_retry()
            yield e
To make it harder to forget the adapter, we could also change frozen from a function to a decorator on parsefunc.
def frozen_results(func):
    def freezer(*args, **kw):
        for item in func(*args, **kw):
            yield Frozen(item)
    return freezer
In which case we would declare:
@frozen_results
def parsefunc(stream):
    ...
And we would obviously not bother to declare frozen, or wrap it around the call to parsefunc.
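To make the pattern concrete, a toy end-to-end sketch (Python 3 syntax, with a trivial parser standing in for a real parsefunc):

def toy_parse(items):
    for x in items:
        try:
            yield float(x)
        except ValueError as e:
            yield e  # hand the exception over as a value

for source in frozen(toy_parse(['1', 'oops', '3'])):
    try:
        print(source.thaw())
    except ValueError as e:
        print('recovered from:', e)
# prints 1.0, then the recovery message, then 3.0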
Without knowing more about the system, I think it's difficult to tell what approach will work best. However, one option that no one has suggested yet would be to use a callback. Given that only read knows how to deal with exceptions, might something like this work?
def read(stream, parsefunc):
    some_closure_data = {}
    def error_callback_1(e):
        manipulate(some_closure_data, e)
    def error_callback_2(e):
        transform(some_closure_data, e)
    for record in parsefunc(stream, error_callback_1):
        do_stuff(record)
Then, in parsefunc:
def parsefunc(stream, error_callback):
    while not eof(stream):
        try:
            rec = read_record()
            yield rec
        except Exception as e:
            error_callback(e)
I used a closure over a mutable local here; you could also define a class. Note also that you can access the traceback info via sys.exc_info() inside the callback.
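For instance, a callback that captures the full traceback might look like this sketch (log is a hypothetical placeholder for whatever read wants to do with the information):

import sys

def error_callback_1(e):
    # The except block in parsefunc is still active while the callback runs,
    # so sys.exc_info() still holds the type, value and traceback.
    exc_type, exc_value, tb = sys.exc_info()
    log(exc_type, exc_value, tb)  # hypothetical handler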
Another interesting approach might be to use send. This would work a little differently; basically, instead of defining a callback, read could check the result of yield, do a lot of complex logic, and send a substitute value, which the generator would then re-yield (or do something else with). This is a bit more exotic, but I thought I'd mention it in case it's useful:
>>> def parsefunc(it):
... default = None
... for x in it:
... try:
... rec = float(x)
... except ValueError as e:
... default = yield e
... yield default
... else:
... yield rec
...
>>> parsed_values = parsefunc(['4', '6', '5', '5h', '22', '7'])
>>> for x in parsed_values:
... if isinstance(x, ValueError):
... x = parsed_values.send(0.0)
... print x
...
4.0
6.0
5.0
0.0
22.0
7.0
On its own this is a bit useless ("Why not just print the default directly from read?" you might ask), but you could do more complex things with default inside the generator, resetting values, going back a step, and so on. You could even wait to send a callback at this point based on the error you receive. But note that sys.exc_info() is cleared as soon as the generator yields, so the generator will have to pass along everything from sys.exc_info() if you need access to the traceback.
Here's an example of how you might combine the two options:
import string

digits = set(string.digits)

def digits_only(v):
    return ''.join(c for c in v if c in digits)

def parsefunc(it):
    for x in it:
        try:
            rec = float(x)
        except ValueError as e:
            callback = yield e
            yield float(callback(x))
        else:
            yield rec

parsed_values = parsefunc(['4', '6', '5', '5h', '22', '7'])
for x in parsed_values:
    if isinstance(x, ValueError):
        x = parsed_values.send(digits_only)
    print x
An example of a possible design:
from StringIO import StringIO
import csv

blah = StringIO('this,is,1\nthis,is\n')

def parse_csv(stream):
    for row in csv.reader(stream):
        try:
            yield int(row[2])
        except (IndexError, ValueError) as e:
            pass  # the row wasn't parsable - don't yield, but we might need to act
        # All other exceptions have to go up a level - so if it's an IOError
        # you know why. This may need to catch more exception types, but the
        # major ones should just propagate.

for record in parse_csv(blah):
    print record
I like the given answer with the Frozen stuff. Based on that idea I came up with the following, addressing two aspects I did not yet like. The first was the boilerplate needed to write it all down. The second was the loss of the stack trace when yielding an exception. I tried to solve the first by using decorators as well as possible. I tried to keep the stack trace by using sys.exc_info() instead of the exception alone.
My generator normally (i.e. without my stuff applied) would look like this:
def generator():
    def f(i):
        return float(i) / (3 - i)
    for i in range(5):
        yield f(i)
If I can transform it into using an inner function to determine the value to yield, I can apply my method:
def generator():
    def f(i):
        return float(i) / (3 - i)
    for i in range(5):
        def generate():
            return f(i)
        yield generate()
This doesn't yet change anything and calling it like this would raise an error with a proper stack trace:
for e in generator():
    print e
Now, applying my decorators, the code would look like this:
@excepterGenerator
def generator():
    def f(i):
        return float(i) / (3 - i)
    for i in range(5):
        @excepterBlock
        def generate():
            return f(i)
        yield generate()
Not much change optically. And you still can use it the way you used the version before:
for e in generator():
    print e
And you still get a proper stack trace when calling. (Just one more frame is in there now.)
But now you also can use it like this:
it = generator()
while it:
    try:
        for e in it:
            print e
    except Exception as problem:
        print 'exc', problem
This way you can handle in the consumer any exception raised in the generator without too much syntactic hassle and without losing stack traces.
The decorators are spelled out like this:
import sys

def excepterBlock(code):
    def wrapper(*args, **kwargs):
        try:
            return (code(*args, **kwargs), None)
        except Exception:
            return (None, sys.exc_info())
    return wrapper
class Excepter(object):
    def __init__(self, generator):
        self.generator = generator
        self.running = True
    def next(self):
        try:
            v, e = self.generator.next()
        except StopIteration:
            self.running = False
            raise
        if e:
            raise e[0], e[1], e[2]
        else:
            return v
    def __iter__(self):
        return self
    def __nonzero__(self):
        return self.running

def excepterGenerator(generator):
    return lambda *args, **kwargs: Excepter(generator(*args, **kwargs))
(I answered the other question linked in the OP but my answer applies to this situation as well)
I have needed to solve this problem a couple of times and came upon this question after a search for what other people have done.
One option - which will probably require refactoring things a little bit - would be to create an error-handling generator, and throw exceptions from the parsing generator into it rather than raising them in place.
Here is what the error handling generator function might look like:
def err_handler():
    # a generator for processing errors
    while True:
        try:
            # errors are thrown to this point in the function
            yield
        except Exception1:
            handle_exc1()
        except Exception2:
            handle_exc2()
        except Exception3:
            handle_exc3()
        except Exception:
            raise
An additional handler argument is provided to the parsefunc function so it has a place to put the errors:
def parsefunc(stream, handler):
    # the handler argument fixes errors/problems separately
    while not eof(stream):
        try:
            rec = read_record(stream)
            # do some stuff
            yield rec
        except Exception as e:
            handler.throw(e)
    handler.close()
Now just use almost the original read function, but with an error handler:
def read(stream, parsefunc):
    handler = err_handler()
    next(handler)  # prime the handler so throw() lands inside its try block
    for record in parsefunc(stream, handler):
        do_stuff(record)
This isn't always going to be the best solution, but it's certainly an option, and relatively easy to understand.
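A minimal self-contained demonstration of the throw mechanics (a sketch in Python 3 syntax):

def handler_gen():
    while True:
        try:
            yield
        except ValueError as e:
            print("handled:", e)

h = handler_gen()
next(h)  # prime the generator so it is paused at the yield
h.throw(ValueError("bad record"))   # caught inside; the generator keeps running
h.throw(ValueError("another one"))  # still alive, handles it again

Note the priming call: throwing into a generator that has not started yet would simply re-raise the exception in the caller.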
Regarding your point about propagating an exception from the generator to the consuming function: you can use an error code (or a set of error codes) to indicate the error. Though not elegant, that is one approach you can consider. For example, in the code below, yielding a value like -1 where you were expecting a set of positive integers signals to the calling function that there was an error.
In [1]: def f():
...: yield 1
...: try:
...: 2/0
...: except ZeroDivisionError,e:
...: yield -1
...: yield 3
...:
In [2]: g = f()
In [3]: next(g)
Out[3]: 1
In [4]: next(g)
Out[4]: -1
In [5]: next(g)
Out[5]: 3
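A hedged refinement of the same idea: using a unique sentinel object instead of a magic number like -1 avoids any collision with legitimate data values.

PARSE_ERROR = object()  # unique marker that can never equal real data

def f():
    yield 1
    try:
        1 / 0
    except ZeroDivisionError:
        yield PARSE_ERROR
    yield 3

for v in f():
    if v is PARSE_ERROR:
        print("record failed")
    else:
        print(v)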
Actually, generators are quite limited in several respects. You found one of them: delivering an exception to the consumer and then carrying on is not part of their API.
You could have a look at the Stackless Python stuff like greenlets or coroutines which offer a lot more flexibility; but diving into that is a bit out of scope here.