Partial asynchronous functions are not detected as asynchronous - Python

I have a function which accepts both regular and asynchronous functions (not coroutines, but functions returning coroutines).
Internally it uses asyncio.iscoroutinefunction() test to see which type of function it got.
Recently it broke down when I attempted to create a partial async function.
In this demonstration, ptest is not recognized as a coroutine function, even though it returns a coroutine, i.e. ptest() is a coroutine.
import asyncio
import functools
async def test(arg): pass
print(asyncio.iscoroutinefunction(test)) # True
ptest = functools.partial(test, None)
print(asyncio.iscoroutinefunction(ptest)) # False!!
print(asyncio.iscoroutine(ptest())) # True
The problem cause is clear, but the solution is not.
How to dynamically create a partial async func which passes the test?
OR
How to test the func wrapped inside a partial object?
Either answer would solve the problem.

On Python versions < 3.8 you can't make a partial() object pass that test, because the test requires a __code__ object attached directly to the object you pass to inspect.iscoroutinefunction().
You should instead test the function object that partial wraps, accessible via the partial.func attribute:
>>> asyncio.iscoroutinefunction(ptest.func)
True
If you also need to test for partial() objects, then test against functools.partial:
import functools
import inspect

def iscoroutinefunction_or_partial(obj):
    # Unwrap (possibly nested) partial() objects before testing
    while isinstance(obj, functools.partial):
        obj = obj.func
    return inspect.iscoroutinefunction(obj)
In Python 3.8 (and newer), the relevant code in the inspect module (that asyncio.iscoroutinefunction() delegates to) was updated to handle partial() objects, and you no longer have to unwrap partial() objects yourself. The implementation uses the same while isinstance(..., functools.partial) loop.
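For illustration, a quick sanity check of both approaches might look like this (a sketch; it assumes the iscoroutinefunction_or_partial helper defined above is in scope):

import asyncio
import functools

async def test(arg): pass

ptest = functools.partial(test, None)
print(iscoroutinefunction_or_partial(ptest))  # True on both old and new versions
print(asyncio.iscoroutinefunction(ptest))     # True on 3.8+, False on older versions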

I solved this by replacing all instances of partial with async_partial:
def async_partial(f, *args):
    async def f2(*args2):
        result = f(*args, *args2)
        if asyncio.iscoroutinefunction(f):
            result = await result
        return result
    return f2
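A quick usage sketch (my own illustration, not part of the original answer): because f2 is defined with async def, the partial produced this way passes the iscoroutinefunction() test, and plain synchronous callables work too:

import asyncio

async def add(a, b):
    return a + b

padd = async_partial(add, 1)              # assumes async_partial from above
print(asyncio.iscoroutinefunction(padd))  # True: f2 is an `async def` function
print(asyncio.run(padd(2)))               # 3 (asyncio.run requires Python 3.7+)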


Why was Python decorator chaining designed to work backwards? What is the logic behind this order?

To start with, my question here is about the semantics and the logic behind why the Python language was designed to apply chained decorators in this order. Please notice how this is different from the question linked here: How decorators chaining work? It seems quite a number of other users had the same doubts about the call order of chained Python decorators. It is not like I can't add a __call__ and see the order for myself. I get this; my point is: why was it designed to start from the bottom when it comes to chained Python decorators?
E.g.
def first_func(func):
    def inner():
        x = func()
        return x * x
    return inner

def second_func(func):
    def inner():
        x = func()
        return 2 * x
    return inner

@first_func
@second_func
def num():
    return 10

print(num())  # 400, i.e. (2 * 10) ** 2: second_func is applied first
Quoting the documentation on decorators:
The decorator syntax is merely syntactic sugar, the following two function definitions are semantically equivalent:
def f(arg):
    ...
f = staticmethod(f)

@staticmethod
def f(arg):
    ...
From this it follows that the decoration in
@a
@b
@c
def fun():
    ...
is equivalent to
fun = a(b(c(fun)))
IOW, it was designed like that because it's just syntactic sugar.
For proof, let's just decorate an existing function and not return a new one:
def dec1(f):
    print(f"dec1: got {vars(f)}")
    f.dec1 = True
    return f

def dec2(f):
    print(f"dec2: got {vars(f)}")
    f.dec2 = True
    return f

@dec1
@dec2
def foo():
    pass

print(f"Fully decked out: {vars(foo)}")
prints out
dec2: got {}
dec1: got {'dec2': True}
Fully decked out: {'dec2': True, 'dec1': True}
TL;DR
g(f(x)) means applying f to x first, then applying g to the output.
Omit the parentheses, add @ before and a line break after each function name:
@g
@f
x
(Syntax only valid if x is the definition of a function/class.)
Abstract explanation
The reasoning behind this design decision becomes fairly obvious IMHO, if you remember what the decorator syntax - in its most abstract and general form - actually means. So I am going to try the abstract approach to explain this.
It is all about syntax
To be clear here, the distinguishing factor in the concept of the "decorator" is not the object underneath it (so to speak) nor the operation it performs. It is the special syntax and the restrictions for it. Thus, a decorator at its core is nothing more than a feature of Python grammar.
The decorator syntax requires a target to be decorated. Initially (see PEP 318) the target could only be function definitions; later class definitions were also allowed to be decorated (see PEP 3129).
Minimal valid syntax
Syntactically, this is valid Python:
def f(): pass

@f
class Target: pass  # or `def target(): pass`
However, this will (perhaps unsurprisingly) cause a TypeError upon execution. As has been reiterated multiple times here and in other posts on this platform, the above is equivalent to this:
def f(): pass
class Target: pass
Target = f(Target)
Minimal working decorator
The TypeError stems from the fact that f lacks a positional argument. This is the obvious logical restriction imposed by what a decorator is supposed to do. Thus, to achieve not only syntactically valid code, but also have it run without errors, this is sufficient:
def f(x): pass

@f
class Target: pass
This is still not very useful, but it is enough for the most general form of a working decorator.
Decoration is just application of a function to the target and assigning the output to the target's name.
Chaining functions ⇒ Chaining decorators
We can ignore the target and what it is or does and focus only on the decorator. Since it merely stands for applying a function, the order of operations comes into play, as soon as we have more than one. What is the order of operation, when we chain functions?
def f(x): pass
def g(x): pass
class Target: pass
Target = g(f(Target))
Well, just like in the composition of purely mathematical functions, this implies that we apply f to Target first and then apply g to the result of f. Despite g appearing first (i.e. further left), it is not what is applied first.
Since stacking decorators is equivalent to nesting functions, it seems obvious to define the order of operation the same way. This time, we just skip the parentheses, add an @ symbol in front of the function name and a line break after it.
def f(x): pass
def g(x): pass

@g
@f
class Target: pass
But, why though?
If after the explanation above (and reading the PEPs for historic background), the reasoning behind the order of operation is still not clear or still unintuitive, there is not really any good answer left, other than "because the devs thought it made sense, so get used to it".
PS
I thought I'd add a few things for additional context based on all the comments around your question.
Decoration vs. calling a decorated function
A source of confusion seems to be the distinction between what happens when applying the decorator versus calling the decorated function.
Notice that in my examples above I never actually called the target itself (the class or function being decorated). Decoration is itself a function call. Adding @f above the target calls f, passing the target to it as the first positional argument.
A "decorated function" might not even be a function
The distinction is very important because nowhere does it say that a decorator actually needs to return a callable (function or class). f being just a function means it can return whatever it wants. This is again valid and working Python code:
def f(x): return 3.14

@f
def target(): return "foo"

try:
    target()
except Exception as e:
    print(repr(e))

print(target)
Output:
TypeError("'float' object is not callable")
3.14
Notice that the name target does not even refer to a function anymore. It just holds the 3.14 returned by the decorator. Thus, we cannot even call target. The entire function behind it is essentially lost immediately, before it ever becomes available in the global namespace. That is because f just completely ignores its first positional argument x.
Replacing a function
Expanding this further, if we want, we can have f return a function. Not doing that seems very strange, considering it is used to decorate a function. But it doesn't have to be related to the target at all. Again, this is fine:
def bar(): return "bar"
def f(x): return bar

@f
def target(): return "foo"

print(target())
print(target is bar)
Output:
bar
True
It comes down to convention
The way decorators are actually overwhelmingly used out in the wild, is in a way that still keeps a reference to the target being decorated around somewhere. In practice it can be as simple as this:
def f(x):
    print(f"applied `f({x.__name__})`")
    return

@f
def target(): return "foo"
Just running this piece of code outputs applied `f(target)`. Again, notice that we don't call target here; we only call f. But now, the decorated function is still target, so we could add the call print(target()) at the bottom and that would output foo after the other output produced by f.
The fact that most decorators don't just throw away their target comes down to convention. You (as a developer) would not expect your function/class to simply be thrown away completely, when you use a decorator.
Decoration with wrapping
This is why real-life decorators typically either return the reference to the target at the end outright (like in the last example) or they return a different callable, but that callable itself calls the target, meaning a reference to the target is kept in that new callable's local namespace. These functions are what is usually referred to as wrappers:
def f(x):
    print(f"applied `f({x.__name__})`")
    def wrapper():
        print(f"wrapper executing with {locals()=}")
        return x()
    return wrapper

@f
def target(): return "foo"

print(f"{target()=}")
print(f"{target.__name__=}")
Output:
applied `f(target)`
wrapper executing with locals()={'x': <function target at 0x7f1b2f78f250>}
target()='foo'
target.__name__='wrapper'
As you can see, what the decorator left us is wrapper, not what we originally defined as target. And the wrapper is what we call, when we write target().
Wrapping wrappers
This is the kind of behavior we typically expect when we use decorators. And therefore it is not surprising that multiple decorators stacked together behave the way they do. They are called from the inside out (as explained above) and each adds its own wrapper around what it receives from the one applied before:
def f(x):
    print(f"applied `f({x.__name__})`")
    def wrapper_from_f():
        print(f"wrapper_from_f executing with {locals()=}")
        return x()
    return wrapper_from_f

def g(x):
    print(f"applied `g({x.__name__})`")
    def wrapper_from_g():
        print(f"wrapper_from_g executing with {locals()=}")
        return x()
    return wrapper_from_g

@g
@f
def target(): return "foo"

print(f"{target()=}")
print(f"{target.__name__=}")
Output:
applied `f(target)`
applied `g(wrapper_from_f)`
wrapper_from_g executing with locals()={'x': <function f.<locals>.wrapper_from_f at 0x7fbfc8d64f70>}
wrapper_from_f executing with locals()={'x': <function target at 0x7fbfc8d65630>}
target()='foo'
target.__name__='wrapper_from_g'
This shows very clearly the difference between the order in which the decorators are called and the order in which the wrapped/wrapping functions are called.
After the decoration is done, we are left with wrapper_from_g, which is referenced by our target name in global namespace. When we call it, wrapper_from_g executes and calls wrapper_from_f, which in turn calls the original target.

inspect.getsource on lambda in class decorator returns entire class in PY3

We are finally migrating from Python 2.7.13 to Python 3.6.6 and I have found some strange behaviour of inspect.getcode that behaves very differently in Python 3.6.6 compared to Python 2.7.13.
We have a number of classes that have a decorator that defines if an instance of this class is allowed to do a certain operation. This check is done by a separate module that manages these decorators rather than by the class itself.
To avoid unintentional changes, we use unit testing to record the code inside the decorator and compare it against a benchmark. The code is recorded with inspect.getsource().
If the decorator uses an external method, then the result captured by PY2 and PY3 is identical - the code of the method.
However, if the decorator uses a lambda expression, the result is quite different:
PY2 returns the entire decorator, not just the lambda expression, but that's not really a problem for the unit test.
PY3, however, returns the decorator and the entire class - and this is a problem because now any change in the class would break this unit test.
I could not find any documented difference in inspect.getsource() that would explain this. Am I missing something here or is this a bug? I know that there are easy workarounds, e.g. I could just check the returned string and clip everything below the lambda - but I'd rather understand why this is happening.
Here is some example code that you can directly run in PY2 and PY3 (needs six).
This code is written from scratch to demonstrate the problem - please do not worry about the actual implementation of the decorator or other ways of implementing this.
checks = dict()  # a mapping of class to check function

# the class decorator
def check(check_function):
    def wrap(wrapped):
        checks[wrapped] = check_function
        return wrapped
    return wrap

# example 1: check with lambda. This demonstrates the problem
@check(lambda cls: True)
class checked_with_lambda(object):
    def run(self):
        pass

# example 2: check with external helper. This works as expected
def my_checker(obj):
    return True

@check(my_checker)
class checked_with_function(object):
    def run(self):
        pass

# this is how the check would be used. This works fine
c = checked_with_lambda()
if checks[type(c)](c):
    c.run()

# this is what a unit test would do
# (I have added the looks-like line to show that the check function is actually what I am expecting)
import six, inspect
for cls, check in six.iteritems(checks):
    print('Check for "{}":\n'
          'looks like: "{}"\n'
          'code: "{}"\n'.format(cls.__name__,
                                repr(check),
                                inspect.getsource(check)))
If I run this code with Python 2.7.13, the result is as expected:
Check for "checked_with_lambda":
looks like: "<function <lambda> at 0x0000025967FB2A58>"
code: "#check(lambda cls: True)
"
Check for "checked_with_function":
looks like: "<function my_checker at 0x0000025967FB2AC8>"
code: "def my_checker(obj):
return True
"
In PY3, this looks quite different - note how the check-function code for the first class includes the entire class code:
Check for "checked_with_lambda":
looks like: "<function <lambda> at 0x000002A2B295BC80>"
code: "#check(lambda cls: True)
class checked_with_lambda(object):
def run(self):
pass
"
Check for "checked_with_function":
looks like: "<function my_checker at 0x000002A2B295BD08>"
code: "def my_checker(obj):
return True
"
AFAICT this is a deliberate change in Python 3. inspect.getsource is line-based, and in Python 3 the inspect.BlockFinder helper class detects that the line containing the lambda definition is a decorator line, and so it returns the source of the whole decorated object. Python 2 did not do this.
You will just have to work around this.
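One possible workaround (my own sketch, not part of the original answer): since in the example the lambda lives entirely on the decorator line, you can clip the source returned by getsource to that first line before comparing it against the benchmark:

import inspect

def check_source(check_function):
    """Best-effort: keep only the decorator line, dropping the class body
    that Python 3's getsource appends. Assumes the lambda fits on one line."""
    return inspect.getsource(check_function).splitlines()[0] + '\n'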

Does python allow me to pass dynamic variables to a decorator at runtime?

I am attempting to integrate a very old system and a newer system at work. The best I can do is to utilize an RSS firehose-type feed that the old system provides. The goal is to use this RSS feed to make the other system perform certain actions when certain people do things.
My idea is to wrap a decorator around certain functions to check if the user (a user ID provided in the RSS feed) has permissions in the new system.
My current solution has a lot of functions that look like this, which are called based on an action field in the feed:
actions_dict = {
    ...
    'action1': function1,
}

actions_dict[RSSFEED['action_taken']](RSSFEED['user_id'])

def function1(user_id):
    if has_permissions(user_id):
        # Do this function
        ...
I want to create a has_permissions decorator that takes the user_id so that I can remove this redundant has_permissions check in each of my functions.
@has_permissions(user_id)
def function1():
    # Do this function
    ...
Unfortunately, I am not sure how to write such a decorator. All the tutorials I see have the #has_permissions() line with a hardcoded value, but in my case it needs to be passed at runtime and will be different each time the function is called.
How can I achieve this functionality?
In your question, you've named both the check of the user_id and the wanted decorator has_permissions, so I'm going with an example where the names are clearer: let's make a decorator that calls the underlying (decorated) function only when the color (a string) is 'green'.
Python decorators are function factories
The decorator itself (if_green in my example below) is a function. It takes a function to be decorated as argument (named function in my example) and returns a function (run_function_if_green in the example). Usually, the returned function calls the passed function at some point, thereby "decorating" it with other actions it might run before or after it, or both.
Of course, it might only conditionally run it, as you seem to need:
def if_green(function):
    def run_function_if_green(color, *args, **kwargs):
        if color == 'green':
            return function(*args, **kwargs)
    return run_function_if_green

@if_green
def print_if_green():
    print('what a nice color!')

print_if_green('red')    # nothing happens
print_if_green('green')  # => what a nice color!
What happens when you decorate a function with the decorator (as I did with print_if_green, here), is that the decorator (the function factory, if_green in my example) gets called with the original function (print_if_green as you see it in the code above). As is its nature, it returns a different function. Python then replaces the original function with the one returned by the decorator.
So in the subsequent calls, it's the returned function (run_function_if_green with the original print_if_green as function) that gets called as print_if_green and which conditionally calls further to that original print_if_green.
Function factories can produce functions that take arguments
The call to the decorator (if_green) only happens once for each decorated function, not every time the decorated functions are called. But as the function returned by the decorator that one time permanently replaces the original function, it gets called instead of the original function every time that original function is invoked. And it can take arguments, if we allow it.
I've given it an argument color, which it uses itself to decide whether to call the decorated function. Further, I've given it the idiomatic vararg arguments, which it uses to call the wrapped function (if it calls it), so that I'm allowed to decorate functions taking an arbitrary number of positional and keyword arguments:
@if_green
def exclaim_if_green(exclamation):
    print(exclamation, 'that IS a nice color!')

exclaim_if_green('red', 'Yay')    # again, nothing
exclaim_if_green('green', 'Wow')  # => Wow that IS a nice color!
The result of decorating a function with if_green is that a new first argument gets prepended to its signature, which will be invisible to the original function (as run_function_if_green doesn't forward it). As you are free in how you implement the function returned by the decorator, it could also call the original function with less, more or different arguments, do any required transformation on them before passing them to the original function or do other crazy stuff.
Concepts, concepts, concepts
Understanding decorators requires knowledge and understanding of various other concepts of the Python language. (Most of which aren't specific to Python, but one might still not be aware of them.)
For brevity's sake (this answer is long enough as it is), I've skipped or glossed over most of them. For a more comprehensive speedrun through (I think) all relevant ones, consult e.g. Understanding Python Decorators in 12 Easy Steps!.
The inputs to decorators (arguments, wrapped function) are rather static in Python. There is no way to dynamically pass an argument like you're asking. If the user id can be extracted from somewhere at runtime inside the decorator function, however, you can achieve what you want.
In Django for example, things like @login_required expect that the function they're wrapping has request as the first argument, and Request objects have a user attribute that they can utilize. Another, uglier option is to have some sort of global object you can get the current user from (see thread-local storage).
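A minimal sketch of that thread-local approach (hypothetical names; has_permissions is the check from the question and is assumed to exist): the dispatcher stores the user id from the feed in a threading.local() before invoking the action, and the decorator reads it back:

import threading

_context = threading.local()  # hypothetical per-thread context object

def requires_permissions(func):
    def wrapper(*args, **kwargs):
        user_id = getattr(_context, 'user_id', None)  # set by the dispatcher
        if user_id is not None and has_permissions(user_id):
            return func(*args, **kwargs)
    return wrapper

# Dispatcher side, before calling the action:
# _context.user_id = RSSFEED['user_id']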
The short answer is no: you cannot pass dynamic parameters to decorators.
But... you can certainly invoke them programmatically:
First let's create a decorator that can perform a permission check before executing a function:
import functools

def check_permissions(user_id):
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kw):
            if has_permissions(user_id):
                return f(*args, **kw)
            else:
                # what do you want to do if there aren't permissions?
                ...
        return wrapper
    return decorator
Now, when extracting an action from your dictionary, wrap it using the decorator to create a new callable that does an automatic permission check:
checked_action = check_permissions(RSSFEED['user_id'])(
    actions_dict[RSSFEED['action_taken']])
Now, when you call checked_action it will first check the permissions corresponding to the user_id before executing the underlying action.
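Putting it together with hypothetical stand-ins for the pieces from the question (has_permissions, function1, actions_dict and RSSFEED are all placeholders here):

def has_permissions(user_id):  # stand-in for the real permission check
    return user_id == 42

def function1():
    print('action executed')

actions_dict = {'action1': function1}
RSSFEED = {'action_taken': 'action1', 'user_id': 42}

checked_action = check_permissions(RSSFEED['user_id'])(
    actions_dict[RSSFEED['action_taken']])
checked_action()  # prints 'action executed' because user 42 passes the check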
You may easily work around it, example:
from functools import wraps

def some_function():
    print("some_function executed")

def some_decorator(decorator_arg1, decorator_arg2):
    def decorate(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            print(decorator_arg1)
            ret = func(*args, **kwargs)
            print(decorator_arg2)
            return ret
        return wrapper
    return decorate

arg1 = "pre"
arg2 = "post"
decorated = some_decorator(arg1, arg2)(some_function)
In [4]: decorated()
pre
some_function executed
post

Is there a way to access the original function in a mocked method/function such that I can modify the arguments and pass it to the original functions?

I'd like to modify the arguments passed to a method in a module, as opposed to replacing its return value.
I've found a way around this, but it seems like something useful and has turned into a lesson in mocking.
module.py
from third_party import ThirdPartyClass
ThirdPartyClass.do_something('foo', 'bar')
ThirdPartyClass.do_something('foo', 'baz')
tests.py
@mock.patch('module.ThirdPartyClass.do_something')
def test(do_something):
    # Instead of directly overriding its return value
    # I'd like to modify the arguments passed to this function.

    # change return value, no matter the inputs
    do_something.return_value = 'foo'

    # change return value based on inputs, but with no access to the original function
    do_something.side_effect = lambda x, y: (y, x)

    # how can I wrap do_something, so that I can modify its inputs and pass them to the original function?
    # much like a decorator?
I've tried something like the following, but not only is it repetitive and ugly, it doesn't work. After some PDB introspection, I'm wondering if it's simply due to how this third-party library works, as I do see the original functions being called successfully when I drop a pdb inside the side_effect.
Either that, or some auto mocking magic I'm just not following that I'd love to learn about.
def test():
    from third_party import ThirdPartyClass
    original_do_something = ThirdPartyClass.do_something
    with mock.patch('module.ThirdPartyClass.do_something') as mocked_do_something:
        def side_effect(arg1, arg2):
            return original_do_something(arg1, 'overridden')
        mocked_do_something.side_effect = side_effect
        # execute module.py
Any guidance is appreciated!
You may want to use the wraps parameter for the mock call. (See the docs for reference.) This way the original function will be called, but it will have everything from the Mock interface.
So to change the parameters passed to the original function, you may want to try it like this:
org.py:
def func(x):
    print(x)
main.py:
from unittest import mock
import org

of = org.func

def wrapped(a):
    of('--{}--'.format(a))

with mock.patch('org.func', wraps=wrapped):
    org.func('x')
    org.func.assert_called_with('x')
result:
--x--
The trick is to pass the original underlying function that you still want to access as a default argument of the replacement function.
E.g., for race-condition testing, have tempfile.mktemp return an existing pathname:
import tempfile

def mock_mktemp(*, orig_mktemp=tempfile.mktemp, **kwargs):
    """Ensure mktemp returns an existing pathname."""
    temp = orig_mktemp(**kwargs)
    open(temp, 'w').close()
    return temp
Above, orig_mktemp is evaluated when the function is declared, not when it is called, so all invocations will have access to the original method of tempfile.mktemp via orig_mktemp.
I used it as follows:
@unittest.mock.patch('tempfile.mktemp', side_effect=mock_mktemp)
def test_retry_on_existing_temp_path(self, mock_mktemp):
    # Simulate race condition: creation of temp path after tempfile.mktemp
    ...
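As a side note, that definition-time binding is easy to demonstrate in isolation (my own sketch, separate from the answer above):

import tempfile

original = tempfile.mktemp

def probe(*, saved=tempfile.mktemp):
    # `saved` was bound when `probe` was defined
    return saved is original

tempfile.mktemp = lambda: 'patched'  # simulate a patch
print(probe())                       # True: `saved` still refers to the original
tempfile.mktemp = original           # restore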

Replacing parts of the function code on-the-fly

Here I came up with a solution to another question of mine: how to remove all costly calls to a debug output function scattered over the function code (the slowdown was 25x even with an empty function lambda *p: None).
The solution is to edit the function code dynamically and prefix all such function calls with the comment sign #.
from __future__ import print_function

DEBUG = False

def dprint(*args, **kwargs):
    '''Debug print'''
    print(*args, **kwargs)

def debug(on=False, string='dprint'):
    '''Decorator to comment out all the lines of the function code starting with string'''
    def helper(f):
        if not on:
            import inspect
            source = inspect.getsource(f)
            source = source.replace(string, '#' + string)  # Beware! Switches off the whole line after the dprint statement
            with open('temp_f.py', 'w') as file:
                file.write(source)
            from temp_f import f as f_new
            return f_new
        else:
            return f  # return f intact
    return helper

def f():
    dprint('f() started')
    print('Important output')
    dprint('f() ended')

f = debug(DEBUG, 'dprint')(f)  # If the decorator syntax @debug(True) is used above f(), inspect.getsource somehow includes @debug(True) inside the code.

f()
The problems I see now are these:
The # comments out the line to its end; but there may be other statements separated by ;. This may be addressed by deleting all dprint calls in f instead of commenting them out, though that is not trivial either, as there may be nested parentheses.
temp_f.py is created, and then the new f code is loaded from it. There should be a better way to do this without writing to the hard drive. I found this recipe, but haven't managed to make it work.
If the decorator is applied with the special syntax @debug, then inspect.getsource includes the decorator line in the function code. This line can be manually removed from the string, but that may lead to bugs if more than one decorator is applied to f. I solved it by resorting to the old-style decorator application f = decorator(f).
What other problems do you see here?
How can all these problems be solved?
What are upsides and downsides of this approach?
What can be improved here?
Is there any better way to do what I try to achieve with this code?
I think it's a very interesting and contentious technique to preprocess function code before compilation to bytecode. It's strange, though, that nobody has taken an interest in it. I think the code I gave may have a lot of shaky points.
A decorator can return either a wrapper, or the decorated function unaltered. Use it to create a better debugger:
from functools import wraps

def debug(enabled=False):
    if not enabled:
        return lambda x: x  # No-op, returns the decorated function unaltered
    def debug_decorator(f):
        @wraps(f)
        def print_start(*args, **kw):
            print('{0}() started'.format(f.__name__))
            try:
                return f(*args, **kw)
            finally:
                print('{0}() completed'.format(f.__name__))
        return print_start
    return debug_decorator
The debug function is a decorator factory; when called, it produces a decorator function. If debugging is disabled, it simply returns a lambda that returns its argument unchanged, a no-op decorator. When debugging is enabled, it returns a debugging decorator that prints when a decorated function has started and prints again when it returns.
The returned decorator is then applied to the decorated function.
Usage:
DEBUG = True

@debug(DEBUG)
def my_function_to_be_tested():
    print('Hello world!')
To reiterate: when DEBUG is set to False, my_function_to_be_tested remains unaltered, so runtime performance is not affected at all.
Here is the solution I came up with after composing answers from other questions I asked here on Stack Overflow.
This solution doesn't comment anything out; it simply deletes standalone dprint statements. It uses the ast module and works on the abstract syntax tree, which avoids fragile text-level manipulation of the source code. This idea was suggested in a comment here.
Writing to temp_f.py is replaced with executing the transformed code in the necessary environment. This solution was offered here.
Also, the last solution addresses the problem of recursive decorator application. It's solved by using the _blocked global variable.
This code solves the problem asked to be solved in the question. But still, it's suggested not to be used in real projects:
You are correct, you should never resort to this, there are so many ways it can go wrong. First, Python is not a language designed for source-level transformations, and it's hard to write a transformer such as comment_1 without gratuitously breaking valid code. Second, this hack would break in all kinds of circumstances - for example, when defining methods, when defining nested functions, when used in Cython, or when inspect.getsource fails for whatever reason. Python is dynamic enough that you really don't need this kind of hack to customize its behavior.
from __future__ import print_function

DEBUG = False

def dprint(*args, **kwargs):
    '''Debug print'''
    print(*args, **kwargs)

_blocked = False

def nodebug(name='dprint'):
    '''Decorator to remove all calls to the function `name` that form standalone expressions'''
    def helper(f):
        global _blocked
        if _blocked:
            return f
        import inspect, ast, sys
        source = inspect.getsource(f)
        a = ast.parse(source)  # get the AST of f

        class Transformer(ast.NodeTransformer):
            '''Deletes all top-level expressions that call a function named `name`'''
            def visit_Expr(self, node):  # visit all expression statements
                try:
                    if node.value.func.id == name:  # if the expression is a call to `name`
                        return None  # delete it
                except AttributeError:  # not a plain function call
                    pass
                return node  # return the node unchanged

        transformer = Transformer()
        a_new = transformer.visit(a)
        f_new_compiled = compile(a_new, '<string>', 'exec')
        env = sys.modules[f.__module__].__dict__
        _blocked = True  # prevent the decorator from re-applying itself during exec
        try:
            exec(f_new_compiled, env)
        finally:
            _blocked = False
        return env[f.__name__]
    return helper

@nodebug('dprint')
def f():
    dprint('f() started')
    print('Important output')
    dprint('f() ended')
    print('Important output2')

f()
Other relevant links:
Switching off debug prints
