creating a list of functions using globals() - python

I am trying to define 3 functions.
one = 'one'
two = 'two'
three = 'three'
l = [one, two, three]
for item in l:
    def _f(): return '::' + item
    globals()[item] = _f
    del _f

print(one(), two(), three())
However, the three functions are all the same; each of them behaves like the last one. Am I using globals() in the wrong way?

Since item is just a name in the body of _f, you should define _f in a scope where item will have the value you want when you call the function.
You should also not try to inject values into the global namespace like this; just use an ordinary dict.
def make_function(x):
    def _():
        return '::' + x
    return _

d = {item: make_function(item) for item in ['one', 'two', 'three']}
for f in d.values():
    print(f())
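Another common idiom for this, not shown above but worth mentioning (a small sketch of my own under the same setup), is to bind the loop variable as a default argument; default values are evaluated once at definition time, so each function keeps its own copy:

one = 'one'
two = 'two'
three = 'three'

funcs = {}
for item in [one, two, three]:
    def _f(item=item):  # the default captures the current value of item
        return '::' + item
    funcs[item] = _f

print(funcs['one'](), funcs['two'](), funcs['three']())  # ::one ::two ::three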

Related

Create/Initialize a series of functions from data

I'm trying to initialize a collection of function calls based on input data, but I'm not sure how to do this. The idea is shown below; I expect local_v to stay unchanged once it has been defined. Can anyone shed some light on how to deal with this, or where to start?
ls = [1,2]
for v in ls:
    def fn():
        local_v = v
        print(local_v)
    locals()[f'print{v}'] = fn

print1()  # output: 2, expected 1
print2()  # output: 2
What you're trying to create is called a "closure", which is a special type of function that retains the values of its free variables. A closure is created automatically when free variables go out of scope. In your example, since you never leave the scope in which v is defined, your fn objects are ordinary functions, not closures (you can verify that by inspecting their __closure__ property; there is a short sketch of this check at the end of this answer).
To actually create a closure you need to construct a new scope, inject your loop variable into that scope and then leave that scope. In Python, a new scope is created for defs, lambdas, comprehensions and classes. You can use any of these; for example, given
ls = [1, 2, 3, 4, 5]
fns = []
you can use a helper function:
def make_fn(arg):
    def f():
        print(arg)
    return f

for item in ls:
    fns.append(make_fn(item))

for fn in fns:
    fn()
or a lambda:
for item in ls:
    fn = (lambda x: lambda: print(x))(item)
    fns.append(fn)
or a (pointless) list comprehension:
for item in ls:
    fn = [
        lambda: print(x)
        for x in [item]
    ][0]
    fns.append(fn)
As a side note, modifying locals is not a good idea.
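To make the __closure__ check mentioned at the start of this answer concrete, here is a small sketch of my own (not part of the original answer):

def make_fn(arg):
    def f():
        print(arg)
    return f

for v in [1, 2]:
    def plain():   # defined in the loop at module scope: not a closure
        print(v)

closed = make_fn(1)   # defined inside make_fn: a closure over arg

print(plain.__closure__)                     # None
print(closed.__closure__)                    # (<cell ...>,)
print(closed.__closure__[0].cell_contents)   # 1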
If you want v to be tied to the local context, you'll have to pass it into the function. You will also need to return a callable that prints, rather than using the print function directly, or else you will find that print1 and print2 are None. And at that point, there is no need to redefine fn at each iteration of the loop. Something like this will work:
ls = [1, 2]

def fn(v):
    return lambda: print(v)

for v in ls:
    locals()[f'print{v}'] = fn(v)

print1()  # output: 1
print2()  # output: 2
Just to be clearer: invoking print1() does not call fn(1); it calls the lambda that was returned by fn(1).
Better to store the functions in a dictionary rather than changing locals, which is likely to get confusing. You could use a nested function like this:
def function_maker(v):
    def fn():
        print(v)
    return fn

print_functions = {v: function_maker(v) for v in [1, 2]}
print_functions[1]()
print_functions[2]()
Alternatively, use functools.partial:
from functools import partial

def fn(v):
    print(v)

print_functions = {v: partial(fn, v) for v in [1, 2]}
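For completeness, a small usage sketch (my addition): the partial objects are called the same way, and each one keeps its bound argument.

print_functions[1]()            # prints 1
print_functions[2]()            # prints 2
print(print_functions[1].args)  # (1,) -- the argument bound by partial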

Python: how to filter out duplicates in lists by object attributes?

Say I have
class a:
    b = 1

thing = a()
thing2 = a()
thing3 = a()
thing3.b = 2
lst = [thing, thing2, thing3]
And I want something like this:
lst = filter_out(obj.b==obj2.b,lst)
Result (objects with the same attribute are filtered out so that only one of them is left):
[thing2,thing3]
How can this be achieved? As far as I know, a lambda passed to filter wouldn't work for comparing two objects within a list.
Use a dictionary to map objects by their b attribute. Since a dict cannot contain the same key twice, the dict's values will be your unique elements.
>>> unique = {}
>>> for x in lst:
...     unique[x.b] = x
...
>>> list(unique.values())
[<__main__.a object at 0xb724cfcc>, <__main__.a object at 0xb724cfec>]
Depending on whether you want to keep the first or the last item for each value, either overwrite existing values (as in the code above) or add an if x.b not in unique check before adding items.
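For instance, a keep-the-first version of that check might look like this (a small sketch of my own, reusing lst from the question):

unique = {}
for x in lst:
    if x.b not in unique:   # keep the first object seen for each b value
        unique[x.b] = x
first_seen = list(unique.values())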
You can also use groupby from the itertools module, as in this example:
from itertools import groupby

class A:
    b = 1

thing = A()
thing2 = A()
thing3 = A()
thing3.b = 2
lst = [thing, thing2, thing3]

# For testing purposes
dct_id = {id(thing): "thing", id(thing2): "thing2", id(thing3): "thing3"}

# grouping based on each object's b value
sub = [list(v)[-1] for _, v in groupby(lst, lambda x: x.b)]

# check the grouped objects by their id
for k in sub:
    print(dct_id[id(k)])
Output:
thing2
thing3
Edit: thanks to @tobias_k's comment.
If we have objects like this:
thing = A()
thing2 = A()
thing3 = A()
thing2.b = 2
lst = [thing, thing2, thing3]
To avoid bad results, we need to sort lst by each object's b value. So sub will become:
sub = [list(v)[-1] for _, v in groupby(sorted(lst, key=lambda x: x.b), lambda x: x.b)]
And repeating the same test we'll have:
thing3
thing2
P.S.: it's better to sort the input list in any case to avoid bad results.

Names of objects in a list?

I'm trying to iteratively write dictionaries to file, but am having issues creating the unique filenames for each dict.
def variable_to_value(value):
    for n, v in globals().items():
        if v == value:
            return n
    else:
        return None

a = {'a': [1,2,3]}
b = {'b': [4,5,6]}
c = {'c': [7,8,9]}

for obj in [a, b, c]:
    name = variable_to_value(obj)
    print(name)
This prints:
a
obj
obj
How can I access the name of the original object itself instead of obj?
The problem is that obj, your iteration variable, is also in globals. Whether you get a or obj is just down to luck. You can't solve the problem in general because an object can have any number of assignments in globals. You could update your code to exclude known references, but that is very fragile.
For example:
a = {'a': [1,2,3]}
b = {'b': [4,5,6]}
c = {'c': [7,8,9]}

print("'obj' is also in globals")

def variable_to_value(value):
    return [n for n, v in globals().items() if v == value]

for obj in [a, b, c]:
    name = variable_to_value(obj)
    print(name)

print("you can update your code to exclude it")

def variable_to_value(value, exclude=None):
    return [n for n, v in globals().items() if v == value and n != exclude]

for obj in [a, b, c]:
    name = variable_to_value(obj, 'obj')
    print(name)

print("but you'll still see other assignments")

foo = a
bar = b
bax = c

for obj in [a, b, c]:
    name = variable_to_value(obj, 'obj')
    print(name)
When run
'obj' is also in globals
['a', 'obj']
['b', 'obj']
['c', 'obj']
you can update your code to exclude it
['a']
['b']
['c']
but you'll still see other assignments
['a', 'foo']
['b', 'bar']
['c', 'bax']
The function returns the first name it finds referencing an object in your globals(). However, at each iteration, the name obj also references each of the objects. So either the name a, b or c is returned, or obj, depending on which one was reached first in globals().
You can avoid returning obj by excluding that name from the search in your function - sort of hackish:
def variable_to_value(value):
    for n, v in globals().items():
        if v == value and n != 'obj':
            return n
    else:
        return None
Python doesn't actually work like this.
Objects in Python don't have innate names. It's the names that belong to an object, not the other way around: an object can have many names (or no names at all).
You're getting two copies of "obj" printed because, at the time you call variable_to_value, both the name b and the name obj refer to the same object (the dictionary {'b': [4,5,6]}). So when you search the global namespace for any value which is equal to obj (note that you should be checking using is rather than ==), it's effectively random whether you get b or obj.
So you want to find the name of any object available in the globals()?
Inside the for loop, the globals() dict is being mutated, adding obj to its namespace. So on your second pass, you have two references to the same object (originally only referenced by the name 'a').
Danger of using globals(), I suppose.
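Coming back to the original goal of writing each dict to its own file: a more robust pattern is to keep an explicit name-to-object mapping rather than recovering names from globals(). A minimal sketch (the JSON format and filenames are just assumptions for illustration):

import json

data = {
    'a': {'a': [1, 2, 3]},
    'b': {'b': [4, 5, 6]},
    'c': {'c': [7, 8, 9]},
}

for name, obj in data.items():
    with open(f'{name}.json', 'w') as fh:   # one uniquely named file per dict
        json.dump(obj, fh)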

How to make list variable names into strings

I think this should be simple but I'm not sure how to do it. I have a tuple of list variables:
A = (a,b,c)
where
a = [1,2,3,...]
b = [2,4,6,4,...]
c = [4,6,4,...]
And I want to make a tuple or list where it is the names of the variables. So,
A_names = ('a','b','c')
How could I do this? My tuple will have more variables and it is not always the same variables. I tried something like
A_names = tuple([str(var) for var in A])
but this did not work.
My connection was messed up so I couldn't post this earlier, but I believe this solves your problem without using a dictionary.
import inspect

def retrieve_name(var):
    # look up names in the caller's local namespace that refer to this object
    local_vars = inspect.currentframe().f_back.f_locals.items()
    return [var_name for var_name, var_val in local_vars if var_val is var]

a = [1,2,3]
b = [2,4,6,4]
c = [4,6,4]
a_list = (a, b, c)

a_names = []
for x in a_list:
    a_names.append(retrieve_name(x)[0])
print(a_names)
outputs ['a', 'b', 'c']
The problem with what you are asking is that doing A = (a, b, c) does not assign the variables "a", "b" and "c" to the tuple A. Rather, you are creating a new reference to each of the objects referred to by those names.
For example, if I did A = (a,), creating a tuple with a single element, I haven't assigned the variable "a" anywhere. Instead, a reference is created at position 0 in the tuple object, and that reference points to the same object referred to by the name a.
>>> a = 1
>>> b = 2
>>> A = (a, b)
>>> A
(1, 2)
>>> a = 3
>>> A
(1, 2)
Notice that assigning a new value to a does not change the value in the tuple at all.
Now, you could use the locals() or globals() dictionaries and look for values that match those in A, but there's no guarantee of accuracy since you can have multiple names referring to the same value and you won't know which is which.
>>> for key, val in locals().items():
...     if val in A:
...         print(key, val)
...
('a', 1)
('b', 2)
Assuming you want dynamic/accessible names, you need to use a dictionary.
Here is an implementation with a dictionary:
my_variables = {'a': [1,2,3,...],
                'b': [2,4,6,4,...],
                'c': [4,6,4,...]}

my_variable_names = my_variables.keys()

for name in my_variable_names:
    print(my_variables[name])
Just out of academic interest:
dir() will give you a list of the variables currently visible,
locals() gives the list of local variables
globals() (guess)
Note that some unexpected variables will show up (starting and ending in __), which are already defined by Python.
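For example, a quick way to hide those double-underscore names (a small sketch, not part of the original answer):

user_names = [n for n in dir()
              if not (n.startswith('__') and n.endswith('__'))]
print(user_names)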
A = {'a': [1,2,3,...],
     'b': [2,4,6,4,...],
     'c': [4,6,4,...]}

A_names = A.keys()

for name in A_names:
    print(A[name])
Then you can always add a new value to the dictionary by saying:
A.update({'d' : [3,6,3,8,...], 'e' : [1,7,2,2,...]})
Alternatively, you can change the value of an item by going:
A.update({'a' : [1,3,2,...]})
To print a specific value, you can just type:
print(A['c'])

Dynamical output in Python Functions

When we use def, we can use **kwargs and *args to define dynamic inputs to the function. Is there anything similar for return tuples? I've been looking for something that behaves like this:
def foo(data):
    return 2, 1

a, b = foo(5)
# a == 2
# b == 1

a = foo(5)
# what I want: a == 2

However, if I only declare one value to unpack, it sends the whole tuple over there:

a = foo(5)
# what I actually get: a == (2, 1)
I could use 'if' statements, but I was wondering if there was something less cumbersome. I could also use a temporary variable to hold the value, but my return value might be fairly large for just a placeholder like that.
Thanks
If you need to fully generalize the return value, you could do something like this:
def function_that_could_return_anything(data):
    # do stuff
    return_args = ['list', 'of', 'return', 'values']
    return_kwargs = {'dict': 0, 'of': 1, 'return': 2, 'values': 3}
    return return_args, return_kwargs

a, b = function_that_could_return_anything(...)
for thing in a:
    pass  # do stuff
for item in b.items():
    pass  # do stuff
In my opinion it would be simpler to just return a dictionary, then access parameters with get():
dict_return_value = foo()
a = dict_return_value.get('key containing a', None)
if a:
    pass  # do stuff with a
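Since foo is not defined in the snippet above, here is a self-contained sketch of the same dictionary-return idea (the keys are purely illustrative):

def foo(data):
    # return named results instead of a bare tuple
    return {'a': 2, 'b': 1}

dict_return_value = foo(5)
a = dict_return_value.get('a', None)        # 2
missing = dict_return_value.get('z', None)  # None when the key is absent
if a:
    print(a)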
I couldn't quite understand exactly what you're asking, so I'll take a couple of guesses.
If you want to use a single value sometimes, consider a namedtuple:
from collections import namedtuple

AAndB = namedtuple('AAndB', 'a b')

def foo(data):
    return AAndB(2, 1)

# Unpacking all items.
a, b = foo(5)

# Using a single value.
foo(5).a
Or, if you're using Python 3.x, there's extended iterable unpacking to easily unpack only some of the values:
def foo(data):
    return 3, 2, 1

a, *remainder = foo(5)        # a == 3, remainder == [2, 1]
a, *remainder, c = foo(5)     # a == 3, remainder == [2], c == 1
a, b, c, *remainder = foo(5)  # a == 3, b == 2, c == 1, remainder == []
Sometimes the name _ is used to indicate that you are discarding the value:
a, *_ = foo(5)
