This question already has answers here:
Function closure vs. callable class
(6 answers)
Closed 2 years ago.
Is either of the two styles preferred or more "pythonic" for creating closures (edit: "closure-like objects")?
def make_doer(closed_var):
    def doer(foo):
        pass  # Do something with closed_var and foo
    return doer

class Doer:
    def __init__(self, closed_var):
        self._closed_var = closed_var

    def __call__(self, foo):
        pass  # Do something with self._closed_var and foo
The only differences I can tell are that the former is a tiny bit shorter, while the second has the advantage that the docstring for the resulting callable (__call__ in the second case) is less nested/hidden. Neither seems like a huge deal; is there anything else that would tip the balance?
Closures have to do with maintaining references to objects from enclosing scopes, including scopes that have already finished executing.
Here is the simplest example of the form a Closure takes:
def outer():
    x = 7
    def inner(y):
        return x + y
    return inner

i = outer()
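Calling the returned function shows that inner still sees x even though outer has already returned; a quick self-contained check:

```python
def outer():
    x = 7
    def inner(y):
        return x + y
    return inner

i = outer()
print(i(3))  # 10: the captured x (7) plus y (3)

# The captured variable is stored in a closure cell on the function object
print(i.__closure__[0].cell_contents)  # 7
```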
This question already has answers here:
Local variables in nested functions
(4 answers)
Closed 2 years ago.
I have a Python class MyObject (a subclass of tuple) and another class for a set of these objects, MyObjectSet (a subclass of set). I’d like that, for any non-builtin method that I define for MyObject, a method of the same name be defined for MyObjectSet whose value is the sum of that method over the contents of the MyObjectSet.
I had thought that something like the code below would work, but the result doesn’t match my intended outcome. In practice MyObject and MyObjectSet have a lot more to them and are justified.
class MyObject(tuple):
    def stat_1(self):
        return len(self)

    def stat_2(self):
        return sum(self)

class MyObjectSet(set):
    pass

for stat_name in dir(MyObject):
    if not stat_name.startswith("__"):
        stat_func = getattr(MyObject, stat_name)
        if callable(stat_func):
            setattr(MyObjectSet, stat_name, lambda S: sum(stat_func(p) for p in S))
if __name__ == "__main__":
    S = MyObjectSet(MyObject(t) for t in [(1,2), (3,4)])
    result, expected = S.stat_1(), sum(p.stat_1() for p in S)
    print(f"S.size() = {result}, expected {expected}")
    result, expected = S.stat_2(), sum(p.stat_2() for p in S)
    print(f"S.sum() = {result}, expected {expected}")
Is there any way to achieve this functionality?
replace your lambda with this:
lambda S, f=stat_func: sum(f(p) for p in S)
The default argument binds the current value of stat_func to f at the moment the lambda is defined, instead of capturing a reference to the stat_func variable, which is what happened in your original code (so all the lambdas ended up calling the last value assigned to stat_func in the for loop).
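Applied to the loop from the question, the fix looks like this (a minimal self-contained sketch using the same class names):

```python
class MyObject(tuple):
    def stat_1(self):
        return len(self)

    def stat_2(self):
        return sum(self)

class MyObjectSet(set):
    pass

for stat_name in dir(MyObject):
    if not stat_name.startswith("__"):
        stat_func = getattr(MyObject, stat_name)
        if callable(stat_func):
            # f=stat_func freezes the current function at definition time
            setattr(MyObjectSet, stat_name,
                    lambda S, f=stat_func: sum(f(p) for p in S))

S = MyObjectSet(MyObject(t) for t in [(1, 2), (3, 4)])
print(S.stat_1())  # 4: len((1, 2)) + len((3, 4))
print(S.stat_2())  # 10: sum((1, 2)) + sum((3, 4))
```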
You can simply override __getattr__ to treat any possible method call as a summing wrapper around the objects' method of the same name. This simple example will just raise an AttributeError if the underlying method doesn't exist; you may want to catch the exception and raise another error of your own. Note that it needs methodcaller from the operator module:

from operator import methodcaller

class MyObjectSet(set):
    def __getattr__(self, mn):
        return lambda: sum(methodcaller(mn)(x) for x in self)
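A quick check of this approach, reusing the MyObject class from the question:

```python
from operator import methodcaller

class MyObject(tuple):
    def stat_1(self):
        return len(self)

    def stat_2(self):
        return sum(self)

class MyObjectSet(set):
    def __getattr__(self, mn):
        # Any attribute not found normally becomes a zero-argument
        # wrapper that sums the named method over the elements
        return lambda: sum(methodcaller(mn)(x) for x in self)

S = MyObjectSet(MyObject(t) for t in [(1, 2), (3, 4)])
print(S.stat_1())  # 4
print(S.stat_2())  # 10
```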
This question already has answers here:
Alternatives for returning multiple values from a Python function [closed]
(14 answers)
Closed 4 years ago.
When I write functions in Python, I typically need to pass quite a few variables to the function, and the output of such functions also contains more than a few variables. To manage this variable I/O, I resort to the dictionary datatype: I pack all input variables into a dictionary to pass into a function, then build another dictionary at the end of the function to return to the main program. This of course requires another unpacking of the output dictionary.
dict_in = {'var1': var1,
           'var2': var2,
           'varn': varn}

def foo(dict_in):
    var1 = dict_in['var1']
    var2 = dict_in['var2']
    varn = dict_in['varn']
    """ my code """
    dict_out = {'op1': op1,
                'op2': op2,
                'op_m': op_m}
    return dict_out
As the list of variables grows, I suspect that this will be an inefficient approach to deal with the variables I/O.
Can someone suggest a better, more efficient and less error-prone approach to this practice?
You could take advantage of kwargs to unpack named variables
def foo(**kwargs):
    kwargs['var1'] = do_something(kwargs['var1'])
    ...
    return kwargs
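For instance, with a trivial stand-in for do_something (its body here is purely illustrative):

```python
def do_something(x):
    return x * 10  # stand-in for real work

def foo(**kwargs):
    # kwargs collects all keyword arguments into one dict
    kwargs['var1'] = do_something(kwargs['var1'])
    return kwargs

out = foo(var1=1, var2=2)
print(out)  # {'var1': 10, 'var2': 2}
```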
If you find yourself writing a lot of functions that act on the same data, one better way would be using classes to contain your data.
class Thing:
    def __init__(self, a, b, c):
        self.var_1 = a
        self.var_2 = b
        self.var_3 = c

    # you can then define methods on it
    def foo(self):
        self.var_1 *= self.var_2

# and use it
t = Thing(1, 2, 3)
t.foo()
print(t.var_1)
There are a number of methods of creating these in an easier way. Some of them include:
attrs:
>>> import attr
>>> @attr.s
... class SomeClass(object):
...     a_number = attr.ib(default=42)
...     list_of_numbers = attr.ib(default=attr.Factory(list))
...
...     def hard_math(self, another_number):
...         return self.a_number + sum(self.list_of_numbers) * another_number
namedtuples
>>> from collections import namedtuple
>>> Point = namedtuple('Point', ['x', 'y'])
>>> p = Point(11, y=22)  # instantiate with positional or keyword arguments
>>> p.x + p.y            # fields accessible by name
33
Dataclasses
These are not in Python yet, but will be added in 3.7. I am adding them here because they will likely be the tool of choice in the future.
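For completeness, a minimal sketch of what a dataclass version of the Thing class above might look like (requires Python 3.7+; the field names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Thing:
    # The decorator generates __init__, __repr__, __eq__, etc.
    var_1: int
    var_2: int
    var_3: int

    def foo(self):
        self.var_1 *= self.var_2

t = Thing(1, 2, 3)
t.foo()
print(t.var_1)  # 2
```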
This question already has answers here:
Should I, and how to, add methods to int in python?
(1 answer)
Can I add custom methods/attributes to built-in Python types?
(8 answers)
Closed 5 years ago.
If I have a class such as this
class foo():
    def __init__(self, value = 0):
        self.__value = value

    def __set__(self, instance, value):
        self.__value = value

    def calc(self):
        return self.__value * 3

    def __repr__(self):
        return str(self.__value)
I can now make an instance of the class foo and use its methods.
n = foo(3)
print(n.calc())
No problems there, but if I keep going with something like this:
n = 5
print(n.calc())
I will get an error, because I have now bound n to an int object with the value 5, which does not have a calc() method.
I normally work with C++, so I'm confused: I thought that the __set__ function was supposed to override the = operator and set __value to 5, just as if I had defined
operator=(int value)
in C++. I have looked for an explanation but have not found one.
All help appreciated.
As stated here.
The following methods only apply when an instance of the class
containing the method (a so-called descriptor class) appears in an
owner class (the descriptor must be in either the owner’s class
dictionary or in the class dictionary for one of its parents).
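In other words, __set__ only fires when the descriptor instance lives in an owner class's dictionary and you assign through an attribute of that owner's instances; a bare name rebinding like n = 5 never invokes it. A minimal sketch (the class names here are illustrative):

```python
class Tripler:
    """Descriptor: only works when placed in an owner class's dict."""
    def __set_name__(self, owner, name):
        self._name = "_" + name  # where the value is stored on the instance

    def __set__(self, instance, value):
        setattr(instance, self._name, value)

    def __get__(self, instance, owner):
        if instance is None:
            return self
        return getattr(instance, self._name) * 3

class Owner:
    n = Tripler()  # descriptor in the owner's class dictionary

o = Owner()
o.n = 5           # attribute assignment -> Tripler.__set__ runs
print(o.n)        # 15: Tripler.__get__ triples the stored value

n = 5             # plain name rebinding: no descriptor is involved
```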
This question already has answers here:
True dynamic and anonymous functions possible in Python?
(8 answers)
Closed 6 years ago.
This kind of problem is driving me crazy:
I have a list of strings representing functions (for eval); I first need to replace the variables with generic x[0], x[1], ....
Some time ago I discovered that I can do this using subs(). Then I need to generate a list of functions (to define constraints in SciPy minimize).
I am trying something like:
el = [budget.v.values()[0], silly.v.values()[0]]  # my list of string/equations
fl = []
for i in range(len(el)):
    def sos(v):
        vdict = dict(zip(q.v.values(), v))
        return eval(el[i]).subs(vdict)
    fl.append(sos)
    del sos  # this may be unnecessary
The result for fl is:
[<function sos at 0x19a26aa28>, <function sos at 0x199e3f398>]
but the two functions always give the same result (corresponding to the last 'sos' definition). How can I retain different function definitions?
Your comment:
but the two functions always give the same result (corresponding to the last 'sos' definition)
Is a big clue that you've likely run into this common gotcha!
Your code isn't in a runnable form so I can't verify this but it clearly has this bug. There are various ways to fix this including using functools.partial as explained in the first link.
For example (untested as your code isn't runnable as-is):
import functools

for i in range(len(el)):
    def sos(i, v):
        vdict = dict(zip(q.v.values(), v))
        return eval(el[i]).subs(vdict)
    fl.append(functools.partial(sos, i))
Given this you can now refactor this code to avoid redefining the function inside the loop:
def sos(i, v):
    vdict = dict(zip(q.v.values(), v))
    return eval(el[i]).subs(vdict)

for i in range(len(el)):
    fl.append(functools.partial(sos, i))
To give you a complete and runnable example:
import functools

def add_x(x, v):
    return x + v

add_5 = functools.partial(add_x, 5)
print(add_5(1))
Produces:
6
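To see the gotcha and the fix side by side, a self-contained sketch with plain arithmetic standing in for the eval/subs machinery:

```python
import functools

# Late binding: every closure reads i at call time, after the loop has
# finished, so they all see the final value of i
broken = []
for i in range(3):
    broken.append(lambda v: v + i)
print([f(0) for f in broken])  # [2, 2, 2]

# functools.partial binds the current i at definition time instead
fixed = []
for i in range(3):
    fixed.append(functools.partial(lambda i, v: v + i, i))
print([f(0) for f in fixed])   # [0, 1, 2]
```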
This question already has answers here:
Python: Reference to a class from a string?
(4 answers)
Closed 7 years ago.
So I have a set of classes and a string with one of the class names. How do I instantiate a class based on that string?
class foo:
    def __init__(self, left, right):
        self.left = left
        self.right = right

str = "foo"
x = Init(str, A, B)
I want x to be an instantiation of class foo.
In your case you can use something like:
get_class = lambda x: globals()[x]
c = get_class("foo")
And it's even easier to get the class from the module:
import somemodule
getattr(somemodule, "SomeClass")
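For instance, with a standard-library module (collections here is purely illustrative):

```python
import collections

# Look the class up by name on the module object
cls = getattr(collections, "OrderedDict")
print(cls is collections.OrderedDict)  # True

# The looked-up class can be instantiated like any other
d = cls(a=1, b=2)
print(list(d.items()))  # [('a', 1), ('b', 2)]
```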
If you know the namespace involved, you can use it directly -- for example, if all classes are in module zap, the dictionary vars(zap) is that namespace; if they're all in the current module, globals() is probably the handiest way to get that dictionary.
If the classes are not all in the same namespace, then building an "artificial" namespace (a dedicated dict with class names as keys and class objects as values), as #Ignacio suggests, is probably the simplest approach.
classdict = {'foo': foo}
x = classdict['foo'](A, B)
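A runnable sketch of this registry approach, reusing the foo class from the question (the "A" and "B" arguments are placeholders):

```python
class foo:
    def __init__(self, left, right):
        self.left = left
        self.right = right

# Map class names to class objects explicitly
classdict = {'foo': foo}

x = classdict['foo']("A", "B")
print(type(x).__name__)  # foo
print(x.left, x.right)   # A B
```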
classname = "Foo"
foo = vars()[classname](Bar, 0, 4)
Or perhaps
def mkinst(cls, *args, **kwargs):
    try:
        return globals()[cls](*args, **kwargs)
    except KeyError:
        raise NameError("Class %s is not defined" % cls)

x = mkinst("Foo", bar, 0, 4, disc="bust")
y = mkinst("Bar", foo, batman="robin")
Miscellaneous notes on the snippet:
*args and **kwargs are special parameters in Python: they mean «a tuple of extra positional args» and «a dict of keyword args» respectively.
PEP-8 (the official Python style guide) recommends the name cls for variables that refer to a class.
vars(), called without arguments, returns the dict of variables defined in the local scope (like locals()).
globals() returns the dict of names defined at module level.
Try this (note that __import__ imports a module by name, not a class):
cls = __import__('cls_name')
This article may also be helpful: http://effbot.org/zone/import-confusion.htm
You might consider usage of the metaclass type as well (the class __dict__ is a mappingproxy, so it must be copied into a plain dict first):
Cls = type('foo', (), dict(foo.__dict__))
x = Cls(A, B)
Note that this creates a new, similar class rather than a reference to the original foo.