Convert Python objects to Python AST nodes

I need to dump a modified Python object back into source code. So I am trying to find something that converts a real Python object into a Python ast.Node (to later use the astor library to dump the source).
Example of the usage I want (Python 2):
import ast
import importlib
import astor
m = importlib.import_module('something')
# modify an object
m.VAR.append(123)
ast_nodes = some_magic(m)
source = astor.dump(ast_nodes)
Please help me find that some_magic.

There's no way to do what you want, because that's not how ASTs work.
When the interpreter runs your code, it will generate an AST out of the source files, and interpret that AST to generate python objects.
What happens to those objects once they've been generated has nothing to do with the AST.
It is however possible to get the AST of what generated the object in the first place.
The module inspect lets you get the source code of some python objects:
import ast
import importlib
import inspect
m = importlib.import_module('pprint')
s = inspect.getsource(m)
a = ast.parse(s)
print(ast.dump(a))
# Prints the AST of the pprint module
But getsource() is aptly named.
If I were to change the value of some variable (or any other object) in m, it wouldn't change its source code.
Even if it was possible to regenerate an AST out of an object, there wouldn't be a single solution some_magic() could return.
Imagine I have a variable x in some module, that I reassign in another module:
# In some_module.py
x = 0
# In __main__.py
m = importlib.import_module('some_module')
m.x = 1 + 227
Now, the value of m.x is 228, but there's no way to know what kind of expression led to that value (well, without reading the AST of __main__.py but this would quickly get out of hand). Was it a mere literal? The result of a function call?
If you really have to get a new AST after modifying some value of a module, the best solution would be to transform the original AST by yourself.
You can find where your identifier got its value, and replace the value of the assignment with whatever you want.
For instance, in my small example x = 0 is represented by the following AST:
Assign(targets=[Name(id='x', ctx=Store())], value=Num(n=0))
And to get the AST matching the reassignment I did in __main__.py, I would have to change the value of the above Assign node as the following:
value=BinOp(left=Num(n=1), op=Add(), right=Num(n=227))
If you'd like to go that way, I recommend you check Python's documentation of ast.NodeTransformer, as well as Green Tree Snakes - the missing Python AST docs, an excellent manual that documents all the nodes you can meet in Python ASTs.
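To make that concrete, here is a minimal sketch of such a transformer (my own illustration; the class name ReplaceAssign is made up, and ast.unparse requires Python 3.9+):
import ast

class ReplaceAssign(ast.NodeTransformer):
    """Replace the value of simple assignments to a given name."""
    def __init__(self, name, new_value):
        self.name = name
        self.new_value = new_value

    def visit_Assign(self, node):
        # Only rewrite simple assignments like `x = ...`
        if any(isinstance(t, ast.Name) and t.id == self.name for t in node.targets):
            node.value = ast.Constant(value=self.new_value)
        return node

tree = ast.parse("x = 0")
tree = ReplaceAssign("x", 228).visit(tree)
ast.fix_missing_locations(tree)
print(ast.unparse(tree))  # x = 228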

What Vladimir is asking about is certainly useful for compiler optimizations. Indeed, there are ways to accomplish that using the ast library. Here is a simple example demonstrating evaluation of constant functions:
from ast import *
import numpy as np
PURE_FUNS = {'arange' : np.arange}
PROG = '''
A=arange(5)
B=[0, 1, 2, 3, 4]
A[2:3] = 1
C = [A[1], 2, m]
'''
def py_to_ast(o):
    if type(o) == np.ndarray:
        return List(elts=[py_to_ast(e) for e in o], ctx=Load())
    elif type(o) == np.int64:
        return Constant(value=o)
    # Add elifs for more types here
    else:
        assert False

class EvalPureFuns(NodeTransformer):
    def visit_Call(self, node):
        is_const_args = all(type(a) == Constant for a in node.args)
        if node.func.id in PURE_FUNS and is_const_args:
            res = eval(unparse(node), PURE_FUNS)
            return py_to_ast(res)
        return node

node = parse(PROG)
node = EvalPureFuns().visit(node)
print(unparse(node))

Getting the name of a variable as a string

I already read How to get a function name as a string?.
How can I do the same for a variable? As opposed to functions, Python variables do not have the __name__ attribute.
In other words, if I have a variable such as:
foo = dict()
foo['bar'] = 2
I am looking for a function/attribute, e.g. retrieve_name() in order to create a DataFrame in Pandas from this list, where the column names are given by the names of the actual dictionaries:
# List of dictionaries for my DataFrame
list_of_dicts = [n_jobs, users, queues, priorities]
columns = [retrieve_name(d) for d in list_of_dicts]
With Python 3.8, one can simply use the f-string debugging feature:
>>> foo = dict()
>>> f'{foo=}'.split('=')[0]
'foo'
One drawback of this method is that in order to get 'foo' printed you have to add f'{foo=}' yourself; that is, you already have to know the name of the variable. Which makes the above code snippet exactly the same as just
>>> 'foo'
Even if variable values don't point back to the name, you have access to the list of every assigned variable and its value, so I'm astounded that only one person suggested looping through there to look for your var name.
Someone mentioned on that answer that you might have to walk the stack and check everyone's locals and globals to find foo, but if foo is assigned in the scope where you're calling this retrieve_name function, you can use inspect's current frame to get you all of those local variables.
My explanation might be a little bit too wordy (maybe I should've used a "foo" less words), but here's how it would look in code (Note that if there is more than one variable assigned to the same value, you will get both of those variable names):
import inspect
x, y, z = 1, 2, 3
def retrieve_name(var):
    callers_local_vars = inspect.currentframe().f_back.f_locals.items()
    return [var_name for var_name, var_val in callers_local_vars if var_val is var]
print(retrieve_name(y))
If you're calling this function from another function, something like:
def foo(bar):
    return retrieve_name(bar)
foo(baz)
And you want the baz instead of bar, you'll just need to go back a scope further. This can be done by adding an extra .f_back in the caller_local_vars initialization.
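To make that concrete, here is a minimal sketch of the extra .f_back hop (my own illustration; the helper name retrieve_name_2up is made up):
import inspect

def retrieve_name_2up(var):
    # Skip one extra frame: past foo() and into *its* caller's locals.
    callers_local_vars = inspect.currentframe().f_back.f_back.f_locals.items()
    return [name for name, val in callers_local_vars if val is var]

def foo(bar):
    return retrieve_name_2up(bar)

baz = [1, 2, 3]
print(foo(baz))  # ['baz']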
The only objects in Python that have canonical names are modules, functions, and classes, and of course there is no guarantee that this canonical name has any meaning in any namespace after the function or class has been defined or the module imported. These names can also be modified after the objects are created so they may not always be particularly trustworthy.
What you want to do is not possible without recursively walking the tree of named objects; a name is a one-way reference to an object. A common or garden-variety Python object contains no references to its names. Imagine if every integer, every dict, every list, every Boolean needed to maintain a list of strings that represented names that referred to it! It would be an implementation nightmare, with little benefit to the programmer.
TL;DR
Use the Wrapper helper from python-varname:
from varname.helpers import Wrapper
foo = Wrapper(dict())
# foo.name == 'foo'
# foo.value == {}
foo.value['bar'] = 2
For the list-comprehension part, you can do:
n_jobs = Wrapper(<original_value>)
users = Wrapper(<original_value>)
queues = Wrapper(<original_value>)
priorities = Wrapper(<original_value>)
list_of_dicts = [n_jobs, users, queues, priorities]
columns = [d.name for d in list_of_dicts]
# ['n_jobs', 'users', 'queues', 'priorities']
# REMEMBER that you have to access the <original_value> by d.value
I am the author of the python-varname package. Please let me know if you have any questions or you can submit issues on Github.
The long answer
Is it even possible?
Yes and No.
We are retrieving the variable names at runtime, so we need a function to be called to enable us to access the previous frames to retrieve the variable names. That's why we need a Wrapper there. In that function, at runtime, we are parsing the source code/AST nodes in the previous frames to get the exact variable name.
However, the source code/AST nodes in the previous frames are not always available, or they could be modified by other environments (e.g. pytest's assert statement). One simple example is code run via exec(). Even though we are still able to retrieve some information from the bytecode, it needs too much effort and is also error-prone.
How to do it?
First of all, we need to identify which frame the variable is given. It's not always simply the direct previous frame. For example, we may have another wrapper for the function:
from varname import varname
def func():
    return varname()

def wrapped():
    return func()
x = wrapped()
In the above example, we have to skip the frame inside wrapped to get to the right frame x = wrapped() so that we are able to locate x. The arguments frame and ignore of varname allow us to skip some of these intermediate frames. See more details in the README file and the API docs of the package.
Then we need to parse the AST nodes to locate where the variable is assigned the value (the function call). It's not always just a simple assignment. Sometimes there could be complex AST nodes, for example, x = [wrapped()]. We need to identify the correct assignment by traversing the AST tree.
How reliable is it?
Once we identify the assignment node, it is reliable.
varname depends entirely on the executing package to look up the node. The node that executing detects is ensured to be the correct one.
It partially works with environments where other AST magics apply, including pytest, ipython, macropy, birdseye, reticulate with R, etc. Neither executing nor varname works 100% with those environments.
Do we need a package to do it?
Well, yes and no, again.
If your scenario is simple, the code provided by @juan Isaza or @scohe001 is probably enough to work with the case where a variable is defined in the directly previous frame and the AST node is a simple assignment. You just need to go one frame back and retrieve the information there.
However, if the scenario becomes complicated, or we need to handle different application scenarios, you probably need a package like python-varname to deal with them. These scenarios may include the need to:
present more friendly messages when the source code is not available or AST nodes are not accessible
skip intermediate frames (allows the function to be wrapped or called in other intermediate frames)
automatically ignore calls from built-in functions or libraries. For example: x = str(func())
retrieve multiple variable names on the left-hand side of the assignment
etc.
How about the f-string?
Like the answer provided by @Aivar Paalberg, it's definitely fast and reliable. However, it's not resolved at runtime, meaning that you have to know the name is foo before you print it out. But with varname, you don't have to know what the variable is called in advance:
from varname import varname
def func():
    return varname()
# In external uses
x = func() # 'x'
y = func() # 'y'
Finally
python-varname is not only able to detect the variable name from an assignment, but also:
Retrieve variable names directly, using nameof
Detect next immediate attribute name, using will
Fetch argument names/sources passed to a function using argname
Read more from its documentation.
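For example, a quick taste of nameof (a minimal sketch, assuming python-varname is installed):
from varname import nameof

foo = dict()
print(nameof(foo))  # 'foo'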
However, the final word I want to say is: try to avoid using it whenever you can,
because you can't make sure the client code will run in an environment where the source code is available or the AST nodes are accessible. And of course, it costs resources to parse the source code, identify the environment, retrieve the AST nodes and evaluate them when needed.
On Python 3, this function will get the outermost name in the stack:
import inspect
def retrieve_name(var):
    """
    Gets the name of var. Searches from the outermost frame inwards.
    :param var: variable to get name from.
    :return: string
    """
    for fi in reversed(inspect.stack()):
        names = [var_name for var_name, var_val in fi.frame.f_locals.items() if var_val is var]
        if len(names) > 0:
            return names[0]
It is useful anywhere in the code: it traverses the reversed stack looking for the first match.
I don't believe this is possible. Consider the following example:
>>> a = []
>>> b = a
>>> id(a)
140031712435664
>>> id(b)
140031712435664
Both a and b point to the same object, but the object can't know what variables point to it.
def name(**variables):
    return [x for x in variables]
It's used like this:
name(variable=variable)
>>> my_var = 5
>>> my_var_name = [k for k, v in locals().items() if v == my_var][0]
>>> my_var_name
'my_var'
In case you get an error because my_var happens to share its value with another variable, try this identity-based version (suggested by @mherzog):
>>> my_var = 5
>>> my_var_name = [k for k, v in locals().items() if v is my_var][0]
>>> my_var_name
'my_var'
locals() - Return a dictionary containing the current scope's local variables.
By iterating through this dictionary, we can find the key whose value equals the defined variable; extracting that key gives us the variable's name as a string.
Adapted (with minor changes) from:
https://www.tutorialspoint.com/How-to-get-a-variable-name-as-a-string-in-Python
I wrote the package sorcery to do this kind of magic robustly. You can write:
from sorcery import dict_of
columns = dict_of(n_jobs, users, queues, priorities)
and pass that to the dataframe constructor. It's equivalent to:
columns = dict(n_jobs=n_jobs, users=users, queues=queues, priorities=priorities)
Here's one approach. I wouldn't recommend this for anything important, because it'll be quite brittle. But it can be done.
Create a function that uses the inspect module to find the source code that called it. Then you can parse the source code to identify the variable names that you want to retrieve. For example, here's a function called autodict that takes a list of variables and returns a dictionary mapping variable names to their values. E.g.:
x = 'foo'
y = 'bar'
d = autodict(x, y)
print d
Would give:
{'x': 'foo', 'y': 'bar'}
Inspecting the source code itself is better than searching through the locals() or globals() because the latter approach doesn't tell you which of the variables are the ones you want.
At any rate, here's the code:
import inspect

def autodict(*args):
    get_rid_of = ['autodict(', ',', ')', '\n']
    calling_code = inspect.getouterframes(inspect.currentframe())[1][4][0]
    calling_code = calling_code[calling_code.index('autodict'):]
    for garbage in get_rid_of:
        calling_code = calling_code.replace(garbage, '')
    var_names, var_values = calling_code.split(), args
    dyn_dict = {var_name: var_value for var_name, var_value in
                zip(var_names, var_values)}
    return dyn_dict
The action happens in the line with inspect.getouterframes, which returns the string within the code that called autodict.
The obvious downside to this sort of magic is that it makes assumptions about how the source code is structured. And of course, it won't work at all if it's run inside the interpreter.
>>> locals()['foo']
{}
>>> globals()['foo']
{}
If you wanted to write your own function, it could be done such that you first check for the variable in locals(), then check globals(). If nothing is found, you could compare on id() to see if the variable points to the same location in memory.
If your variable is in a class, you could use className.__dict__.keys() or vars(self) to see if your variable has been defined.
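A rough sketch of that locals-then-globals lookup (my own illustration; find_name is a hypothetical helper, and like the other answers it compares by identity):
import inspect

def find_name(obj):
    # Check the caller's locals first, then its globals.
    frame = inspect.currentframe().f_back
    for scope in (frame.f_locals, frame.f_globals):
        for name, value in scope.items():
            if value is obj:
                return name
    return None

foo = dict()
print(find_name(foo))  # 'foo'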
This function will print variable name with its value:
import inspect

def print_this(var):
    callers_local_vars = inspect.currentframe().f_back.f_locals.items()
    print(str([k for k, v in callers_local_vars if v is var][0]) + ': ' + str(var))

Input & function call:
my_var = 10
print_this(my_var)

Output:
my_var: 10
I have a method, and while not the most efficient...it works! (and it doesn't involve any fancy modules).
Basically it compares your Variable's ID to globals() Variables' IDs, then returns the match's name.
def getVariableName(variable, globalVariables=globals().copy()):
    """ Get Variable Name as String by comparing its ID to globals() Variables' IDs
    args:
        variable(var): Variable to find name for (Obviously this variable has to exist)
    kwargs:
        globalVariables(dict): Copy of the globals() dict (Adding to Kwargs allows this function to work properly when imported from another .py)
    """
    for globalVariable in globalVariables:
        if id(variable) == id(globalVariables[globalVariable]):  # If our Variable's ID matches this Global Variable's ID...
            return globalVariable  # Return its name from the Globals() dict
In Python, the def and class keywords will bind a specific name to the object they define (function or class). Similarly, modules are given a name by virtue of being called something specific in the filesystem. In all three cases, there's an obvious way to assign a "canonical" name to the object in question.
However, for other kinds of objects, such a canonical name may simply not exist. For example, consider the elements of a list. The elements in the list are not individually named, and it is entirely possible that the only way to refer to them in a program is by using list indices on the containing list. If such a list of objects was passed into your function, you could not possibly assign meaningful identifiers to the values.
Python doesn't save the name on the left hand side of an assignment into the assigned object because:
It would require figuring out which name was "canonical" among multiple conflicting objects,
It would make no sense for objects which are never assigned to an explicit variable name,
It would be extremely inefficient,
Literally no other language in existence does that.
So, for example, functions defined using lambda will always have the "name" <lambda>, rather than a specific function name.
The best approach would be simply to ask the caller to pass in an (optional) list of names. If typing the '...','...' is too cumbersome, you could accept e.g. a single string containing a comma-separated list of names (like namedtuple does).
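For instance, a minimal sketch of that namedtuple-style interface (make_columns is a hypothetical helper):
def make_columns(names, *values):
    # `names` is a single comma-separated string, namedtuple-style.
    return dict(zip([n.strip() for n in names.split(',')], values))

n_jobs, users = [1, 2], [3, 4]
print(make_columns('n_jobs, users', n_jobs, users))
# {'n_jobs': [1, 2], 'users': [3, 4]}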
I think it's so difficult to do this in Python because of the simple fact that you will never not know the name of the variable you're using. So, in the OP's example, instead of:
list_of_dicts = [n_jobs, users, queues, priorities]
you could do:
dict_of_dicts = {"n_jobs": n_jobs, "users": users, "queues": queues, "priorities": priorities}
Many of the answers return just one variable name. But that won't work well if more than one variable have the same value. Here's a variation of Amr Sharaki's answer which returns multiple results if more variables have the same value.
def getVariableNames(variable):
    results = []
    globalVariables = globals().copy()
    for globalVariable in globalVariables:
        if id(variable) == id(globalVariables[globalVariable]):
            results.append(globalVariable)
    return results
a = 1
b = 1
getVariableNames(a)
# ['a', 'b']
Just another way to do this, based on the content of the input variable:
(It returns the name of the first variable that matches the input variable, otherwise None. One can modify it to get all variable names that have the same content as the input variable.)
def retrieve_name(x, Vars=vars()):
    for k in Vars:
        if isinstance(x, type(Vars[k])):
            if x is Vars[k]:
                return k
    return None
If the goal is to help you keep track of your variables, you can write a simple function that labels the variable and returns its value and type. For example, suppose i_f=3.01 and you round it to an integer called i_n to use in a code, and then need a string i_s that will go into a report.
def whatis(string, x):
    print(string+' value=', repr(x), type(x))
    return string+' value='+repr(x)+repr(type(x))
i_f=3.01
i_n=int(i_f)
i_s=str(i_n)
i_l=[i_f, i_n, i_s]
i_u=(i_f, i_n, i_s)
## make report that identifies all types
report='\n'+20*'#'+'\nThis is the report:\n'
report+= whatis('i_f ',i_f)+'\n'
report+=whatis('i_n ',i_n)+'\n'
report+=whatis('i_s ',i_s)+'\n'
report+=whatis('i_l ',i_l)+'\n'
report+=whatis('i_u ',i_u)+'\n'
print(report)
This prints to the window at each call for debugging purposes and also yields a string for the written report. The only downside is that you have to type the variable twice each time you call the function.
I am a Python newbie and found this very useful way to log my efforts as I program and try to cope with all the objects in Python. One flaw is that whatis() fails if it calls a function described outside the procedure where it is used. For example, int(i_f) was a valid function call only because the int function is known to Python. You could call whatis() using int(i_f**2), but if for some strange reason you choose to define a function called int_squared it must be declared inside the procedure where whatis() is used.
Maybe this could be useful:
def Retriever(bar):
    return (list(globals().keys()))[list(map(lambda x: id(x), list(globals().values()))).index(id(bar))]
The function goes through the list of IDs of values from the global scope (the namespace could be edited), finds the index of the wanted/required var or function based on its ID, and then returns the name from the list of global names based on the acquired index.
Whenever I have to do it, mostly while communicating JSON schemas and constants with the frontend, I define a class as follows:
class Param:
    def __init__(self, name, value):
        self.name = name
        self.value = value
Then define the variable with name and value:
frame_folder_count = Param(name='frame_folder_count', value=10)
Now you can access the name and value using the object.
>>> frame_folder_count.name
'frame_folder_count'
>>> def varname(v, scope=None):
...     d = globals() if not scope else vars(scope); return [k for k in d if d[k] == v]
...
>>> d1 = {'a': 'ape'}; d2 = {'b': 'bear'}; d3 = {'c': 'cat'}
>>> ld = [d1, d2, d3]
>>> [varname(d) for d in ld]
[['d1'], ['d2'], ['d3']]
>>> d5 = d3
>>> [varname(d) for d in ld]
[['d1'], ['d2'], ['d3', 'd5']]
>>> def varname(v, scope=None):
...     d = globals() if not scope else vars(scope); return [k for k in d if d[k] is v]
...
>>> [varname(d) for d in ld]
[['d1'], ['d2'], ['d3', 'd5']]
As you see and is noted here, there can be multiple variables with the same value or even address, so using a wrapper to keep the names with the data is best.
The following method will not return the name of the variable, but it lets you create a data frame easily if the variable is available in the global scope.
import pandas as pd

class CustomDict(dict):
    def __add__(self, other):
        return CustomDict({**self, **other})

class GlobalBase(type):
    def __getattr__(cls, key):
        return CustomDict({key: globals()[key]})
    def __getitem__(cls, keys):
        return CustomDict({key: globals()[key] for key in keys})

class G(metaclass=GlobalBase):
    pass

x, y, z = 0, 1, 2
print('method 1:', G['x', 'y', 'z'])  # Outcome: method 1: {'x': 0, 'y': 1, 'z': 2}
print('method 2:', G.x + G.y + G.z)  # Outcome: method 2: {'x': 0, 'y': 1, 'z': 2}
A = [0, 1]
B = [1, 2]
pd.DataFrame(G.A + G.B)  # It will return a data frame with A and B columns
Some of the previous approaches would fail if there are two variables with the same value, so it is convenient to warn about it:
Defining the function:
# Variable to string of variable name
def var_name(variable, i=0):
    results = []
    for name in globals():
        if eval(name) == variable:
            results.append(name)
    if len(results) > 1:
        print('Warning:')
        print(' var_name() has found', len(results), 'possible outcomes.')
        print(' Please choose the suitable parameter "i". Where "i" is the index')
        print(' that matches your choice from the list below.')
        print(' ', results)
        print('')
    return results[i]
Use:
var_1 = 10
var_name(var_1) # Output will be "var_1"
If you have 2 variables with the same value like var_1 = 8 and var_2 = 8, then a warning will appear.
var_1 = 8
var_2 = 8
var_name(var_2) # Output will be "var_1" too but Warning will appear
You can pass your variable as a keyword argument and return its name as a string:
var = 2

def getVarName(**kwargs):
    return list(kwargs.keys())[0]

print(getVarName(var=var))
Note: you must pass the variable as a keyword argument named after itself (i.e., var=var).
I tried to get the name from inspect's locals, but it can't handle variables like a[1] or b.val.
After that, I had a new idea: get the variable name from the source code. I tried it, and it worked!
The code is below:
import inspect

# get the argument text directly from the calling function's source code
def retrieve_name_ex(var):
    stacks = inspect.stack()
    try:
        func = stacks[0].function
        code = stacks[1].code_context[0]
        s = code.index(func)
        s = code.index("(", s + len(func)) + 1
        e = code.index(")", s)
        return code[s:e].strip()
    except:
        return ""
You can try the following to retrieve the name of a function you defined (does not work for built-in functions though):
import re
def retrieve_name(func):
    return re.match(r"<function\s+(\w+)\s+at.*", str(func)).group(1)

def foo(x):
    return x**2

print(retrieve_name(foo))
# foo
When finding the name of a variable from its value,
you may have several variables equal to the same value,
for example var1 = 'hello' and var2 = 'hello'.
My solution:
def find_var_name(val):
    dict_list = []
    global_dict = dict(globals())
    for k, v in global_dict.items():
        dict_list.append([k, v])
    return [item[0] for item in dict_list if item[1] == val]
var1 = 'hello'
var2 = 'hello'
find_var_name('hello')
Outputs
['var1', 'var2']
Compressed version of iDilip's answer:
import inspect
def varname(x):
    return [k for k, v in inspect.currentframe().f_back.f_locals.items() if v is x][0]
hi = 123
print(varname(hi))
It's totally possible to get the name of an instance variable, so long as it is the property of a class.
I got this from Effective Python by Brett Slatkin. Hope it helps someone:
The class must implement the __get__, __set__, and __set_name__ dunder methods, which are part of the "Descriptor Protocol".
This worked when I ran it:
class FieldThatKnowsItsName():
    def __init__(self):
        self.name = None
        self._value = None
        self.owner = None

    def __set_name__(self, owner, name):
        self.name = name
        self.owner = owner
        self.owner.fields[self.name] = self

    def __get__(self, instance, instance_type):
        return self

    def __set__(self, instance, value):
        self._value = value  # store the assigned value

class SuperTable:
    fields = {}
    field_1 = FieldThatKnowsItsName()
    field_2 = FieldThatKnowsItsName()

table = SuperTable()
print(table.field_1.name)
print(table.field_2.name)
You can then add methods and or extend your datatype as you like.
As a bonus, the __set_name__(self, owner, name) dunder also receives the owner class, so the field instance can register itself with its parent.
I got this from Effective Python by Brett Slatkin. It took a while to figure out how to implement.
How can I do the same for a variable? As opposed to functions, Python variables do not have the __name__ attribute.
The problem comes up because you are confused about terminology, semantics or both.
"variables" don't belong in the same category as "functions". A "variable" is not a thing that takes up space in memory while the code is running. It is just a name that exists in your source code - so that when you're writing the code, you can explain which thing you're talking about. Python uses names in the source code to refer to (i.e., give a name to) values. (In many languages, a variable is more like a name for a specific location in memory where the value will be stored. But Python's names actually name the thing in question.)
In Python, a function is a value. (In some languages, this is not the case; although there are bytes of memory used to represent the actual executable code, it isn't a discrete chunk of memory that your program logic gets to interact with directly.) In Python, every value is an object, meaning that you can assign names to it freely, pass it as an argument, return it from a function, etc. (In many languages, this is not the case.) Objects in Python have attributes, which are the things you access using the . syntax. Functions in Python have a __name__ attribute, which is assigned when the function is created. Specifically, when a def statement is executed (in most languages, creation of a function works quite differently), the name that appears after def is used as a value for the __name__ attribute, and also, independently, as a variable name that will get the function object assigned to it.
But most objects don't have an attribute like that.
In other words, if I have a variable such as:
That's the thing: you don't "have" the variable in the sense that you're thinking of. You have the object that is named by that variable. Anything else depends on the information incidentally being stored in some other object - such as the locals() of the enclosing function. But it would be better to store the information yourself. Instead of relying on a variable name to carry information for you, explicitly build the mapping between the string name you want to use for the object, and the object itself.
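Applied to the original DataFrame use case, that explicit mapping is just (a minimal sketch; the sample values are made up):
import pandas as pd

n_jobs = [1, 2]
users = [3, 4]

# Build the name -> object mapping yourself instead of trying to recover it.
data = {'n_jobs': n_jobs, 'users': users}
df = pd.DataFrame(data)
print(list(df.columns))  # ['n_jobs', 'users']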

Dynamically creating variables without using a dict

Use case - I am taking python code created in another system, and breaking it up into individual functions and connecting them together. The entire point of this work is to break up large python functions that we did not write into smaller python functions for many business reasons.
I COULD take the code, parse for variables, and arbitrarily put them in a dict when doing this, but that is more than a teeny bit of work, and I'd like to run this to ground before I do.
I understand we should almost never do this, but I need to, because I am code-generating wrappers for functions I did not write, and I need to dynamically create variables inside a function. I also can't use exec, because the value could be a complex structure (e.g., a dict).
So, the point of what we're doing is to ask the original authors to make no changes to the incoming code while still executing it across several independent entities.
Just like in the example listed here - we're capturing as much state as we can with the first exit (ideally functions, lambdas and all variables), and re-instating them in the second function so that two functions which formerly had the same scope and context can execute with no changes.
Here is a single block of reproducible code (everything not related to b is code that I can use to wrap the assignment):
Original:
def original_function():
    b = 100
    b = b + 20
Resulting generated function:
def fun_1() -> str:
    import dill
    from base64 import urlsafe_b64decode, urlsafe_b64encode
    from types import ModuleType
    b = 100
    locals_keys = frozenset(locals().keys())
    global_keys = frozenset(globals().keys())
    __context_export = {}
    for val in locals_keys:
        if not val.startswith("_") and not isinstance(val, ModuleType):
            __context_export[val] = dill.dumps(locals()[val])
    for val in global_keys:
        if not val.startswith("_") and not isinstance(val, ModuleType):
            __context_export[val] = dill.dumps(globals()[val])
    b64_string = str(urlsafe_b64encode(dill.dumps(__context_export)), encoding="ascii")
    from collections import namedtuple
    output = namedtuple("FuncOutput", ["context"])
    return output(b64_string)

def fun_2(context):
    import dill
    from base64 import urlsafe_b64encode, urlsafe_b64decode
    from types import ModuleType
    __base64_decode = urlsafe_b64decode(context)
    __context_import_dict = dill.loads(__base64_decode)
    for k in __context_import_dict:
        val = dill.loads(__context_import_dict[k])
        if globals().get(k) is None and not isinstance(val, ModuleType):
            globals()[k] = val
    b = b + 20

output = fun_1()
fun_2(output[0])
The error I get when I run this is:
UnboundLocalError: local variable 'b' referenced before assignment
Thank you all for the help!
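The cause of that error: the assignment b = b + 20 makes b local to fun_2 for the entire function body at compile time, so the read on the right-hand side refers to a not-yet-assigned local, even though globals()[k] = val created a global b. A minimal reproduction (my own sketch):
x = 100

def broken():
    # `x` is assigned somewhere in this function, so the compiler treats it
    # as local for the whole body; the read happens before any assignment.
    x = x + 20

broken()  # UnboundLocalError: local variable 'x' referenced before assignment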
OK, this was a pretty easy solve once I understood the issues. To be honest, this makes even more sense: since I'm getting the code externally (as a string), it makes sense that I should mount in the necessary global variables and exec inside a closed environment.
TO BE CLEAR - this is executing inside the USER'S environment, so security is not an issue. And this works!
def fun_1() -> str:
    import dill
    from base64 import urlsafe_b64decode, urlsafe_b64encode
    from types import ModuleType, FunctionType
    # CODE FROM EXTERNAL
    b = 100
    # END CODE
    locals_keys = frozenset(locals().keys())
    global_keys = frozenset(globals().keys())
    __context_export = {}
    for val in locals_keys:
        if (
            not val.startswith("_")
            and not isinstance(val, ModuleType)
            and not isinstance(val, FunctionType)
        ):
            __context_export[val] = dill.dumps(locals()[val])
    for val in global_keys:
        if (
            not val.startswith("_")
            and not isinstance(val, ModuleType)
            and not isinstance(val, FunctionType)
        ):
            __context_export[val] = dill.dumps(globals()[val])
    b64_string = str(urlsafe_b64encode(dill.dumps(__context_export)), encoding="ascii")
    from collections import namedtuple
    output = namedtuple("FuncOutput", ["context"])
    return output(b64_string)

def fun_2(context):
    import dill
    from base64 import urlsafe_b64encode, urlsafe_b64decode
    from types import ModuleType, FunctionType
    from pprint import pprint as pp
    __base64_decode = urlsafe_b64decode(context)
    __context_import_dict = dill.loads(__base64_decode)
    variables = {}
    for k in __context_import_dict:
        variables[k] = dill.loads(__context_import_dict[k])
    loc = {}
    # CODE FROM EXTERNAL
    inner_code_to_execute = "b = b + 20"
    # END CODE
    exec(inner_code_to_execute, variables, loc)
    print(loc["b"])

output = fun_1()
fun_2(output[0])

How to tell normal Python function from generator by looking at the AST?

I need to detect whether an ast.FunctionDef in a Python 3 AST is a normal function definition or a generator definition.
Do I need to traverse the body and look for ast.Yield-s or is there a simpler way?
There's a sneaky way to do this if you compile the AST instance with compile. The code object has a couple of flags attached to it, one of them being 'GENERATOR', which you can use to distinguish these. Of course, this depends on certain compilation flags, so it isn't really portable across CPython versions or implementations.
For example, with a non-generator function:
func = """
def spam_func():
print("spam")
"""
# Create the AST instance for it
m = ast.parse(func)
# get the function code
# co_consts[0] is used because `m` is
# compiled as a module and we want the
# function object
fc = compile(m, '', 'exec').co_consts[0]
# get a string of the flags and
# check for membership
from dis import pretty_flags
'GENERATOR' in pretty_flags(fc.co_flags) # False
Similarly, for a spam_gen generator, you'd get:
gen = """
def spam_gen():
yield "spammy"
"""
m = ast.parse(gen)
gc = compile(m, '', 'exec').co_consts[0]
'GENERATOR' in pretty_flags(gc.co_flags) # True
This might be more sneaky than what you need though, traversing the AST is another viable option that's probably more understandable and portable.
If you have a function object instead of an AST you can always perform the same check by using func.__code__.co_flags:
def spam_gen():
    yield "spammy"
from dis import pretty_flags
print(pretty_flags(spam_gen.__code__.co_flags))
# 'OPTIMIZED, NEWLOCALS, GENERATOR, NOFREE'
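As an aside (not part of the original answer), if you have the function object, the standard library already wraps this flag check for you:
import inspect

def spam_gen():
    yield "spammy"

def spam_func():
    print("spam")

print(inspect.isgeneratorfunction(spam_gen))   # True
print(inspect.isgeneratorfunction(spam_func))  # False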
Traversing the AST would be harder than it seems -- using the compiler is probably the way to go. Here's an example of why looking for a Yield node isn't as simple as it sounds.
>>> s1 = 'def f():\n yield'
>>> any(isinstance(node, ast.Yield) for node in ast.walk(ast.parse(s1)))
True
>>> dis.pretty_flags(compile(s1, '', 'exec').co_consts[0].co_flags)
'OPTIMIZED, NEWLOCALS, GENERATOR, NOFREE'
>>> s2 = 'def f():\n def g():\n yield'
>>> any(isinstance(node, ast.Yield) for node in ast.walk(ast.parse(s2)))
True
>>> dis.pretty_flags(compile(s2, '', 'exec').co_consts[0].co_flags)
'OPTIMIZED, NEWLOCALS, NOFREE'
The AST approach would probably require using NodeVisitor to exclude functions and lambda bodies.
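A rough sketch of that pruning visitor idea (my own illustration; is_generator_def is a made-up helper):
import ast

def is_generator_def(funcdef):
    # True if this function definition itself contains a yield,
    # ignoring yields that belong to nested functions or lambdas.
    def contains_yield(node):
        for child in ast.iter_child_nodes(node):
            if isinstance(child, (ast.FunctionDef, ast.AsyncFunctionDef, ast.Lambda)):
                continue  # nested scope: its yields are not ours
            if isinstance(child, (ast.Yield, ast.YieldFrom)):
                return True
            if contains_yield(child):
                return True
        return False
    return contains_yield(funcdef)

tree = ast.parse('def f():\n    def g():\n        yield')
print(is_generator_def(tree.body[0]))  # False: the yield belongs to g, not f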

python - lxml: enforcing a specific order for attributes

I have an XML writing script that outputs XML for a specific 3rd party tool.
I've used the original XML as a template to make sure that I'm building all the correct elements, but the final XML does not appear like the original.
I write the attributes in the same order, but lxml is writing them in its own order.
I'm not sure, but I suspect that the 3rd party tool expects attributes to appear in a specific order, and I'd like to resolve this issue so I can see whether it's the attribute order that makes it fail, or something else.
Source element:
<FileFormat ID="1" Name="Development Signature" PUID="dev/1" Version="1.0" MIMEType="text/x-test-signature">
My source script:
sig.fileformat = etree.SubElement(sig.fileformats, "FileFormat", ID = str(db.ID), Name = db.name, PUID="fileSig/{}".format(str(db.ID)), Version = "", MIMEType = "")
My resultant XML:
<FileFormat MIMEType="" PUID="fileSig/19" Version="" Name="Printer Info File" ID="19">
Is there a way of constraining the order they are written?
It looks like lxml serializes attributes in the order you set them:
>>> from lxml import etree as ET
>>> x = ET.Element("x")
>>> x.set('a', '1')
>>> x.set('b', '2')
>>> ET.tostring(x)
'<x a="1" b="2"/>'
>>> y= ET.Element("y")
>>> y.set('b', '2')
>>> y.set('a', '1')
>>> ET.tostring(y)
'<y b="2" a="1"/>'
Note that when you pass attributes using the ET.SubElement() constructor, Python collects the keyword arguments into a dictionary and passes that dictionary to lxml. Historically this lost any ordering you had in the source file, since Python's dictionaries were unordered before Python 3.7 (their order was determined by string hash values, which could differ from platform to platform or, in fact, from execution to execution).
OrderedDict of attributes
As of lxml 3.3.3 (perhaps also in earlier versions) you can pass an OrderedDict of attributes to the lxml.etree.(Sub)Element constructor and the order will be preserved when using lxml.etree.tostring(root):
sig.fileformat = etree.SubElement(sig.fileformats, "FileFormat", OrderedDict([("ID",str(db.ID)), ("Name",db.name), ("PUID","fileSig/{}".format(str(db.ID))), ("Version",""), ("MIMEType","")]))
Note that the ElementTree API (xml.etree.ElementTree) does not preserve attribute order even if you provide an OrderedDict to the xml.etree.ElementTree.(Sub)Element constructor!
UPDATE: Also note that using the **extra parameter of the lxml.etree.(Sub)Element constructor for specifying attributes does not preserve attribute order:
>>> from lxml.etree import Element, tostring
>>> from collections import OrderedDict
>>> root = Element("root", OrderedDict([("b","1"),("a","2")])) # attrib parameter
>>> tostring(root)
b'<root b="1" a="2"/>' # preserved
>>> root = Element("root", b="1", a="2") # **extra parameter
>>> tostring(root)
b'<root a="2" b="1"/>' # not preserved
Attribute ordering and readability
As the commenters have mentioned, attribute order has no semantic significance in XML, which is to say it doesn't change the meaning of an element:
<tag attr1="val1" attr2="val2"/>
<!-- means the same thing as: -->
<tag attr2="val2" attr1="val1"/>
There is an analogous characteristic in SQL, where column order doesn't change
the meaning of a table definition. XML attributes and SQL columns are a set
(not an ordered set), and so all that can "officially" be said about either
one of those is whether the attribute or column is present in the set.
That said, it definitely makes a difference to human readability which order these things appear in, and in situations where constructs like this are authored as text (e.g. source code) and must be interpreted, careful ordering makes a lot of sense to me.
Typical parser behavior
Any XML parser that treated attribute order as significant would be out of compliance with the XML standard. That doesn't mean it can't happen, but in my experience it is certainly unusual. Still, depending on the provenence of the tool you mention, it's a possibility that may be worth testing.
As far as I know, lxml has no mechanism for specifying the order attributes appear in serialized XML, and I would be surprised if it did.
In order to test the behavior I'd be strongly inclined to just write a text-based template to generate enough XML to test it out:
id = 1
name = 'Development Signature'
puid = 'dev/1'
version = '1.0'
mimetype = 'text/x-test-signature'
template = ('<FileFormat ID="%d" Name="%s" PUID="%s" Version="%s" '
            'MIMEType="%s">')
xml = template % (id, name, puid, version, mimetype)
I have seen order matter where the consumer of the XML is expecting canonicalized XML. Canonical XML specifies that the attributes be sorted:
in increasing lexicographic order with namespace URI as the primary
key and local name as the secondary key (an empty namespace URI is
lexicographically least). (section 2.6 of https://www.w3.org/TR/xml-c14n2/)
So if your application is expecting the kind of order you would get out of canonical XML, lxml does support output in canonical form via the method= argument to tostring() (see the heading C14N at https://lxml.de/api.html).
For example:
from lxml import etree as ET
element = ET.Element('Test', B='beta', Z='omega', A='alpha')
val = ET.tostring(element, method="c14n")
print(val)
lxml uses libxml2 under the hood. It preserves attribute order, which means for an individual element you can sort them like this:
x = etree.XML('<x a="1" b="2" d="4" c="3"><y></y></x>')
sorted_attrs = sorted(x.attrib.items())
x.attrib.clear()
x.attrib.update(sorted_attrs)
Not very helpful if you want them all sorted though. If you want them all sorted you can use the c14n2 output method (XML Canonicalisation Version 2):
>>> x = etree.XML('<x a="1" b="2" d="4" c="3"><y></y></x>')
>>> etree.tostring(x, method="c14n2")
b'<x a="1" b="2" c="3" d="4"><y></y></x>'
That will sort the attributes. Unfortunately it has the downside of ignoring pretty_print, which isn't great if you want human-readable XML.
If you use c14n2, then lxml will use custom Python serialisation code to write the XML, which calls sorted(x.attrib.items()) itself for all attributes. If you don't, then it will instead call into libxml2's xmlNodeDumpOutput() function, which doesn't support sorting attributes but does support pretty-printing.
Therefore the only solution is to manually walk the XML tree and sort all the attributes, like this:
from lxml import etree
x = etree.XML('<x a="1" b="2" d="4" c="3"><y z="1" a="2"><!--comment--></y></x>')
for el in x.iter(etree.Element):
sorted_attrs = sorted(el.attrib.items())
el.attrib.clear()
el.attrib.update(sorted_attrs)
etree.tostring(x, pretty_print=True)
# b'<x a="1" b="2" c="3" d="4">\n <y a="2" z="1">\n <!--comment-->\n </y>\n</x>\n'
You can wrap each value in a new string-like class that compares by a supplied order index but prints and behaves as the underlying string.
Here is an example:
class S:
    def __init__(self, _idx, _obj):
        self._obj = (_idx, _obj)

    def get_idx(self):
        return self._obj[0]

    def __le__(self, other):
        return self._obj[0] <= other.get_idx()

    def __lt__(self, other):
        return self._obj[0] < other.get_idx()

    def __str__(self):
        return self._obj[1].__str__()

    def __repr__(self):
        return self._obj[1].__repr__()

    def __eq__(self, other):
        if isinstance(other, str):
            return self._obj[1] == other
        elif isinstance(other, S):
            return self._obj[0] == other.get_idx() and self.__str__() == other.__str__()
        else:
            return self._obj[0] == other.get_idx() and self._obj[1] == other

    def __add__(self, other):
        return self._obj[1] + other

    def __hash__(self):
        return self._obj[1].__hash__()

    def __getitem__(self, item):
        return self._obj[1].__getitem__(item)

    def __radd__(self, other):
        return other + self._obj[1]

list_sortable = ['c', 'b', 'a']
list_not_sortable = [S(0, 'c'), S(0, 'b'), S(0, 'a')]

print("list_sortable ---- Before sort ----")
for ele in list_sortable:
    print(ele)

print("list_not_sortable ---- Before sort ----")
for ele in list_not_sortable:
    print(ele)

list_sortable.sort()
list_not_sortable.sort()

print("list_sortable ---- After sort ----")
for ele in list_sortable:
    print(ele)

print("list_not_sortable ---- After sort ----")
for ele in list_not_sortable:
    print(ele)
running result:
list_sortable ---- Before sort ----
c
b
a
list_not_sortable ---- Before sort ----
c
b
a
list_sortable ---- After sort ----
a
b
c
list_not_sortable ---- After sort ----
c
b
a

Find dependencies in a python source/script

I have a bunch of simple Python scripts with simple expressions [1] like:
C = A+B
D = C * 4
I need to execute them, but most importantly I need to know which objects they depend on; in the previous case, the objects A and B are outer dependencies. E.g., given that I have the former code in a variable called source, I want to be able to do:
deps = { "A" : 1 , "B": 2}
exec source in deps
so it's strictly necessary to know how to build the dict deps.
I've looked into Python's ast module but had no clue.
[1] simple math aggregations, to some extent for loops, nothing more.
You can tokenize Python source code using the tokenize module from the standard library. This will allow you to find all variable names used in the script.
Now suppose we define a "non-dependency" as any variable name that comes immediately before an = sign. Then, depending on how simple your script code really is (see the Caveats below), you may be able to determine the variable names which are not non-dependencies this way:
import tokenize
import io
import token
import collections
import keyword
kwset = set(keyword.kwlist)
class Token(collections.namedtuple('Token', 'num val start end line')):
    @property
    def name(self):
        return token.tok_name[self.num]
source = '''
C = A+B
D = C * 4
'''
lastname = None
names = set()
not_dep = set()
# generate_tokens() expects a text-mode readline, so use StringIO on Python 3
for tok in tokenize.generate_tokens(io.StringIO(source).readline):
    tok = Token(*tok)
    print(tok.name, tok.val)
    if tok.name == 'NAME':
        names.add(tok.val)
        lastname = tok.val
    if tok.name == 'OP' and tok.val == '=':
        not_dep.add(lastname)

print(names)
# {'A', 'B', 'C', 'D'}
print(not_dep)
# {'C', 'D'}

deps = dict.fromkeys(names - not_dep - kwset, 1)
print(deps)
# {'A': 1, 'B': 1}
Caveats:
If your scripts contain statements other than simple
assignments, then names may become populated with undesired
variable names. For example,
import numpy
would add both 'import' and 'numpy' to the set names.
If your script contains an assignment that makes use of left-hand
side tuple unpacking, such as
E, F = 1, 2
then the naive code above will only recognize that F is not a
dependency.
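Since the question mentions the ast module: here is a comparable sketch using ast instead of tokenize (my addition; it has the mirror-image caveat that a name which is both read and assigned, like A in A = A + 1, would wrongly be dropped from the dependencies):
import ast

source = '''
C = A+B
D = C * 4
'''

assigned, loaded = set(), set()
for node in ast.walk(ast.parse(source)):
    if isinstance(node, ast.Name):
        if isinstance(node.ctx, ast.Store):
            assigned.add(node.id)
        else:
            loaded.add(node.id)

# Outer dependencies: names read but never assigned in the script.
deps = dict.fromkeys(loaded - assigned, 1)
print(deps)  # {'A': 1, 'B': 1}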
