I have a file that contains several python functions, each with some statements.
def func1():
    codeX...
def func2():
    codeY...
codeX and codeY can be multiple statements. I want to be able to parse the file, find a function by name, then evaluate the code in that function.
With the ast module, I can parse the file, find the FunctionDef objects, and get the list of Stmt objects, but how do I turn this into bytecode that I can pass to eval? Should I use the compile() built-in, or the parser module instead?
Basically, the function defs are just used to create separate blocks of code. I want to be able to grab any block of code given the name and then execute that code in eval (providing my own local/global scope objects). If there is a better way to do this than what I described that would be helpful too.
I want to be able to grab any block of code given the name and then execute that code ... (providing my own local/global scope objects).
A naive solution looks like this. This is based on the assumption that the functions don't all depend on global variables.
from file_that_contains_several_python_functions import *
Direction = some_value
func1()
func2()
func3()
That should do exactly what you want.
However, if all of your functions rely on global variables -- a design that calls to mind 1970's-era FORTRAN -- then you have to do something slightly more complex.
from file_that_contains_several_python_functions import *
Direction = some_value
func1( globals() )
func2( globals() )
func3( globals() )
And you have to rewrite all of your global-using functions like this.
def func1( context ):
    globals().update( context )
    # Now you have access to all kinds of global variables
This seems ugly because it is. Functions which rely entirely on global variables are not really the best idea.
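For contrast, here is a minimal sketch of the explicit-argument style; the names direction and some_value are illustrative, not from the question:

some_value = "north"   # illustrative value

def func1(direction):
    # The dependency is visible in the signature instead of hidden
    # in module-level state.
    print("moving", direction)

func1(some_value)   # prints: moving north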
Using Python 2.6.4:
text = """
def fun1():
print 'fun1'
def fun2():
print 'fun2'
"""
import ast
tree = ast.parse(text)
# tree.body[0] contains FunctionDef for fun1, tree.body[1] for fun2
wrapped = ast.Interactive(body=[tree.body[1]])
code = compile(wrapped, 'yourfile', 'single')
eval(code)
fun2() # prints 'fun2'
Take a look at the grammar in the ast docs: http://docs.python.org/library/ast.html#abstract-grammar. The top-level node must be either Module, Interactive or Expression, so you need to wrap the function def in one of those.
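For instance, a minimal sketch of the same trick with a Module wrapper and 'exec' mode instead of Interactive (note that this is modern Python 3; ast.Module requires a type_ignores argument since 3.8, unlike the 2.6.4 code above):

import ast

tree = ast.parse("def fun2():\n    print('fun2')")
# Wrap the single FunctionDef in a Module node.
wrapped = ast.Module(body=[tree.body[0]], type_ignores=[])
ast.fix_missing_locations(wrapped)
code = compile(wrapped, 'yourfile', 'exec')
namespace = {}
exec(code, namespace)    # defines fun2 inside namespace
namespace['fun2']()      # prints 'fun2'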
If you're using Python 2.6 or later, then the compile() function accepts AST objects in addition to source code.
>>> import ast
>>> a = ast.parse("print('hello world')")
>>> x = compile(a, "(none)", "exec")
>>> eval(x)
hello world
These modules have all been rearranged for Python 3.
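Putting the pieces together for the original question, here is a sketch that grabs a block by function name and runs it with caller-supplied scopes; the helper name run_named_block is made up for illustration, and it assumes the function body contains no return statements (the body runs at module level):

import ast

def run_named_block(source, name, global_scope, local_scope):
    # Find the FunctionDef with the requested name and execute its
    # body (not the def itself) in the supplied scopes.
    tree = ast.parse(source)
    for node in tree.body:
        if isinstance(node, ast.FunctionDef) and node.name == name:
            block = ast.Module(body=node.body, type_ignores=[])
            code = compile(block, '<block>', 'exec')
            exec(code, global_scope, local_scope)
            return
    raise KeyError(name)

scope = {}
run_named_block("def func1():\n    x = 1\n    print(x)", 'func1', scope, scope)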
I am attempting to implement a decorator that receives a function, parses it into an AST, eventually will do something to the AST, then reconstruct the original (or modified) function from the AST and return it. My current approach is: once I have the AST, compile it to a <module> code object, get the constant in it with the name of the function, convert that to a FunctionType, and return it. I have the following:
import ast, inspect, types

def as_ast(f):
    source = inspect.getsource(f)
    # Remove the as_ast decoration; pretend there can be no other decorations for now
    source = '\n'.join(source.splitlines()[1:])
    tree = ast.parse(source)
    print(ast.dump(tree, indent=4))  # Debugging log
    # I would modify the AST somehow here
    filename = f.__code__.co_filename
    code = compile(tree, filename, 'exec')
    func_code = next(
        filter(
            lambda x: isinstance(x, types.CodeType) and x.co_name == f.__name__,
            code.co_consts))  # Get the function's code object
    func = types.FunctionType(func_code, {})
    return func

@as_ast
def test(arg: int=4):
    print(f'{arg=}')
Now, I would expect that calling test later in this source code would behave exactly as if the decorator were absent, which is what I observe, so long as I pass an argument for arg. However, if I pass no argument, instead of using the default I gave (4), it throws a TypeError for the missing argument. This makes it pretty clear that my approach for getting a callable function from the AST is not quite correct, as the default argument is not applied, and there may be other details that would slip through as it is now. How might I correctly recreate the function from the AST? The way I currently go from the module code object to the function code object also seems... off intuitively, but I do not know how else one might achieve this.
The root node of the AST is a Module. Calling compile() on the AST results in a code object for a module. Disassembling the returned code object with dis.dis() from the standard library shows that the module-level code builds the function and stores it in the global namespace. So the easiest thing to do is exec the compiled code and then get the function from the 'global' environment of the exec call.
The AST node for the function includes a list of the decorators to be applied to the function. Any decorators that haven't been applied yet should be deleted from the list so they don't get applied twice (once when this decorator compiles the code, and once after this decorator returns). And delete this decorator from the list or you'll get an infinite recursion. The question is what to do with any decorators that came before this one. They have already run, but their result is tossed out because this decorator (as_ast) goes back to the source code. You can leave them in the list so they get rerun, or delete them if they don't matter.
In the code below, all the decorators are deleted from the parse tree, under the assumption that the as_ast decorator is applied first. The call to exec() uses a copy of globals() so the decorator has access to any other globally visible names (variables, functions, etc.). See the docs for exec() for other considerations. Uncomment the print statements to see what is going on.
import ast
import dis
import inspect
import types
def as_ast(f):
    source = inspect.getsource(f)
    #print(f"=== source ===\n{source}")
    tree = ast.parse(source)
    #print(f"\n=== original ===\n{ast.dump(tree, indent=4)}")
    # Remove the decorators from the AST, because the modified function will
    # be passed to them anyway and we don't want them to be called twice.
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            node.decorator_list.clear()
    # Make modifications to the AST here
    #print(f"\n=== revised ===\n{ast.dump(tree, indent=4)}")
    name = f.__code__.co_name
    code = compile(tree, name, 'exec')
    #print("\n=== byte code ===")
    #dis.dis(code)
    #print()
    temp_globals = dict(globals())
    exec(code, temp_globals)
    return temp_globals[name]
Note: this decorator has not been tested much and has not been tested at all on methods or nested functions.
An interesting idea would be for as_ast to return the AST. Then subsequent decorators could manipulate the AST. Lastly, a from_ast decorator could compile the modified AST into a function.
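A minimal, untested sketch of that idea; from_ast, the _original attribute, and the decorator chain are assumptions, not existing code:

import ast
import inspect

def as_ast(f):
    # Return the AST instead of a function; stash the original function
    # so from_ast can recover its name and globals later.
    tree = ast.parse(inspect.getsource(f))
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            node.decorator_list.clear()
    tree._original = f
    return tree

def from_ast(tree):
    # Compile the (possibly modified) AST back into a function.
    f = tree._original
    code = compile(tree, f.__code__.co_filename, 'exec')
    temp_globals = dict(f.__globals__)
    exec(code, temp_globals)
    return temp_globals[f.__code__.co_name]

@from_ast
@as_ast
def test(arg: int = 4):
    print(f'{arg=}')

test()   # prints arg=4

AST-manipulating decorators could then be stacked between from_ast and as_ast.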
Is it possible to use with statement in Python anonymous functions? For example, I have a function that writes 1 to a file:
def write_one(filename):
    with open(filename, 'wt') as fp:
        fp.write('1')
But this function is to be organized in a dict:
my_functions = {
    ....
}
Obviously I can write this statement to add this function to the dict:
my_functions['write_one'] = write_one
But the problem is the name write_one still exists in the current scope. How can I introduce an anonymous function without polluting the current namespace?
For simple functions, I can use lambda. For slightly complicated functions, I can return a tuple to execute multiple statements (to be precise, expressions). But I didn't find a way to use lambda cleverly enough to make it work with with statements. If this is impossible, where does the documentation say so?
The solution with a del write_one doesn't look good to me. I don't want this name to be introduced at all in the current namespace.
In a word, what I want is something like this:
my_functions['write_one'] = def(filename):
    with open(filename, 'wt') as fp:
        fp.write('1')
This is kind of awkward with Python's indentation-based rules, I know. But it does its job.
Lambda expressions are quite restricted in what they can do. From the docs:
Note that functions created with lambda expressions cannot contain statements or annotations.
Just use a full function definition. If you really want to avoid polluting the namespace, just del the name afterwards.
Or if you simply want to keep the module namespace from being cluttered with these small functions for code-completion purposes, prefix the function name with _.
If you truly want to avoid it, you could use the function constructor and dynamically compile code, etc. Or use some other kind of dynamic code execution, e.g. using eval or exec. But that is almost certainly not worth the trouble.
Especially if you can just del the name after you are done using it.
Or perhaps the best approach is to put all these functions in another namespace, like another module.
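A minimal sketch of the del approach, using the leading-underscore convention mentioned above:

my_functions = {}

def _write_one(filename):
    with open(filename, 'wt') as fp:
        fp.write('1')

my_functions['write_one'] = _write_one
del _write_one   # the name is no longer in the module namespace

my_functions['write_one']('out.txt')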
There are not many kinds of namespaces in Python; a function is basically the local one. That gives a good hint at a possible solution:
def gen_functions():
    def f1(i):
        print(i)
    def f2(i):
        print(i+1)
    return f1, f2

my_funcs = dict(zip(('a', 'b'), gen_functions()))
my_funcs['a'](2)
my_funcs['b'](7)
f1(3)
Run this:
>py bla.py
2
8
Traceback (most recent call last):
File "bla.py", line 11, in <module>
f1(3)
NameError: name 'f1' is not defined
So you can make arbitrarily complex functions - to hide them from the global namespace you just enclose them in another function.
I've tried to develop a "module expander" tool for Python 3, but I have some issues.
The idea is the following: for a given Python script main.py, the tool generates a functionally equivalent Python script expanded_main.py, by replacing each import statement with the actual code of the imported module; this assumes that the Python source code of the imported module is accessible. To do the job the right way, I'm using the builtin ast module of Python as well as astor, a third-party tool for dumping the AST back into Python source. The motivation of this import expander is to be able to compile a script into one single bytecode chunk, so that the Python VM does not have to take care of importing modules (this could be useful for MicroPython, for instance).
The simplest case is the statement:
from my_module1 import *
To transform this, my tool looks for a file my_module1.py and replaces the import statement with the content of this file. Then expanded_main.py can access any name defined in my_module1, as if the module had been imported the normal way. I don't care about subtle side effects that may reveal the trick. Also, to simplify, I treat from my_module1 import a, b, c like the previous import (with asterisk), without caring about possible side effects. So far so good.
Now here is my point. How could you handle this flavor of import:
import my_module2
My first idea was to mimic this by creating a class having the same name as the module and copying the content of the Python file indented:
class my_module2:
    # content of my_module2.py
    ...
This actually works in many cases but, sadly, I discovered that it has several glitches: one of them is that it fails for functions whose body refers to a global variable defined in the module. For example, consider the following two Python files:
# my_module2.py
g = "Hello"
def greetings():
    print(g + " World!")
and
# main.py
import my_module2
print(my_module2.g)
my_module2.greetings()
At execution, main.py prints "Hello" and "Hello World!". Now, my expander tool shall generate this:
# expanded_main.py
class my_module2:
    g = "Hello"
    def greetings():
        print(g + " World!")

print(my_module2.g)
my_module2.greetings()
At execution of expanded_main.py, the first print statement is OK ("Hello") but the greetings function raises an exception: NameError: name 'g' is not defined.
What actually happens is that:
- in the module my_module2, g is a global variable;
- in the class my_module2, g is a class variable, which must be referred to as my_module2.g.
Other similar side effects happen when you define functions, classes, etc. in my_module2.py and then want to refer to them in other functions or classes of the same my_module2.py.
Any idea how these problems could be solved?
Apart from classes, are there other Python constructs that can mimic a module?
Final note: I'm aware that the tool should take care (1) of nested imports (recursion) and (2) of possible multiple imports of the same module. I don't expect to discuss these topics here.
You can execute the source code of a module in the scope of a function, specifically an instance method. The attributes can then be made available by defining __getattr__ on the corresponding class and keeping a copy of the initial function's locals(). Here is some sample code:
class Importer:
    def __init__(self):
        g = "Hello"
        def greetings():
            print(g + " World!")
        self._attributes = locals()

    def __getattr__(self, item):
        return self._attributes[item]

module1 = Importer()
print(module1.g)
module1.greetings()
Nested imports are handled naturally by replacing them the same way with an instance of Importer. Duplicate imports shouldn't be a problem either.
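For instance, a sketch of a nested case; the Inner and Outer names stand in for generated module wrappers and are made up for illustration:

class Inner:
    def __init__(self):
        msg = "from inner"
        self._attributes = locals()
    def __getattr__(self, item):
        return self._attributes[item]

class Outer:
    def __init__(self):
        inner = Inner()   # stands in for "import inner"
        def hello():
            print(inner.msg)
        self._attributes = locals()
    def __getattr__(self, item):
        return self._attributes[item]

Outer().hello()   # prints: from inner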
For a test-driven pedagogical module, I need to check doctests in a precise order.
Is there a way to grab all callables in the current module, in their order of definition?
What I tried:
Loop on globals and check if the object is a callable. The problem is that globals is a dict and thus not ordered.
Using the doctests directly is not convenient because the "stop at first error" won't work for me as I have several functions to test.
Each function object has a code object which stores the first line number, so you can use:
import inspect
ordered = sorted(inspect.getmembers(moduleobj, inspect.isfunction),
                 key=lambda kv: kv[1].__code__.co_firstlineno)
to get a sorted list of (name, function) pairs. For Python 2.5 and older, you'll need to use .func_code instead of .__code__.
You may need to further filter on functions that were defined in the module itself and have not been imported; func.__module__ == moduleobj.__name__ should suffice there.
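A sketch combining both steps; moduleobj stands for whatever module object you are inspecting, and functions_in_definition_order is a made-up helper name:

import inspect

def functions_in_definition_order(moduleobj):
    # Keep only functions actually defined in the module, then sort
    # by the first line number recorded in each code object.
    pairs = [(name, func)
             for name, func in inspect.getmembers(moduleobj, inspect.isfunction)
             if func.__module__ == moduleobj.__name__]
    return sorted(pairs, key=lambda kv: kv[1].__code__.co_firstlineno)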
Thanks to Martijn, I eventually found the answer. Here is a complete snippet for Python 3.
import sys
import inspect
def f1():
    "f1!"
    pass

def f3():
    "f3!"
    pass

def f2():
    "f2!"
    pass

funcs = [elt[1] for elt in inspect.getmembers(sys.modules[__name__],
                                              inspect.isfunction)]
ordered_funcs = sorted(funcs, key=lambda f: f.__code__.co_firstlineno)
for f in ordered_funcs:
    print(f.__doc__)
The Problem
I want to have an instance of an imported class be able to view things in the scope (globals, locals) of the importer. Since I'm not sure of the exact mechanism at work here, I can describe it much better with snippets than words.
## File 1
def f1(): print "go f1!"

class C1(object):
    def do_eval(self, x):  # maybe this should be do_evil, given what happens
        print "evaling"
        eval(x)
        eval(x, globals(), locals())
Then run this code from an interactive session; there will be lots of NameErrors:
## interactive
class C2(object):
    def do_eval(self, x):  # maybe this should be do_evil, given what happens
        print "evaling"
        eval(x)
        eval(x, globals(), locals())

def f2():
    print "go f2!"
from file1 import C1
import file1
C1().do_eval('file1.f1()')
C1().do_eval('f1()')
C1().do_eval('f2()')
file1.C1().do_eval('file1.f1()')
file1.C1().do_eval('f1()')
file1.C1().do_eval('f2()')
C2().do_eval('f2()')
C2().do_eval('file1.f1()')
C2().do_eval('f1()')
Is there a common idiom / pattern for this sort of task? Am I barking up the wrong tree entirely?
In this example, you can simply hand over functions as objects to the methods in C1:
>>> class C1(object):
...     def eval(self, x):
...         x()
...
>>> def f2(): print "go f2"
...
>>> c = C1()
>>> c.eval(f2)
go f2
In Python, you can pass functions and classes to other methods and invoke/create them there.
If you want to actually evaluate a code string, you have to specify the environment, as already mentioned by Thomas.
Your module from above, slightly changed:
## File 1
def f1(): print "go f1!"
class C1(object):
    def do_eval(self, x, e_globals=globals(), e_locals=locals()):
        eval(x, e_globals, e_locals)
Now, in the interactive interpreter:
>>> def f2():
...     print "go f2!"
...
>>> from file1 import *                         # 1
>>> C1().do_eval("f2()")                        # 2
NameError: name 'f2' is not defined
>>> C1().do_eval("f2()", globals(), locals())   # 3
go f2!
>>> C1().do_eval("f1()", globals(), locals())   # 4
go f1!
Some annotations:
1. Here, we insert all objects from file1 into this module's namespace.
2. f2 is not in the namespace of file1, therefore we get a NameError.
3. Now we pass the environment explicitly, and the code can be evaluated.
4. f1 is in the namespace of this module, because we imported it.
Edit: Added code sample on how to explicitly pass environment for eval.
Functions are always executed in the scope they are defined in, as are methods and class bodies. They are never executed in another scope. Because importing is just another assignment statement, and everything in Python is a reference, the functions, classes and modules don't even know where they are imported to.
You can do two things: explicitly pass the 'environment' you want them to use, or use stack hackery to access their caller's namespace. The former is vastly preferred over the latter, as it's not as implementation-dependent and fragile as the latter.
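For completeness, a sketch of the stack-hackery variant, which is exactly as fragile and implementation-dependent as warned above:

import inspect

def do_eval(expr):
    # Reach one frame up and evaluate in the caller's namespaces.
    caller = inspect.currentframe().f_back
    return eval(expr, caller.f_globals, caller.f_locals)

def f2():
    print("go f2!")

do_eval("f2()")   # finds f2 in the caller's globals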
You may wish to look at the string.Template class, which tries to do something similar.
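For reference, string.Template only does textual substitution from a mapping, not code execution:

from string import Template

t = Template('go $name!')
print(t.substitute(name='f2'))   # go f2!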