Python Lambda Mutability

class TestClass(object):
    def __init__(self):
        self.value = 100
        self.x = lambda: self.value.__add__(100)
        self.run()

    def run(self):
        self.x()
        print(self.value)

t = TestClass()
# Output: 100
I would like to be able to define a lambda function such as the one in TestClass and have it alter an instance variable. It would seem that the way the lambda is constructed means that it does not modify the original value. I suspect this is to do with Python's reference strategy, which I more or less understand.
So, accepting the flaws in what I have done, is there a similar way to get similar functionality? I ultimately need to define many methods like x and intend to keep them in a dictionary, as they will form a simple instruction set. As far as I can tell, I need either lambdas or exec to do what I want.

__add__ is not in-place, so self.x returns self.value + 100, but self.value itself is not altered. Try this:
import random

HATE_LAMBDAS = random.choice([True, False])

class TestClass(object):
    def __init__(self):
        self.value = 100
        if HATE_LAMBDAS:
            def x():
                self.value += 100
            self.x = x
        else:
            self.x = lambda: setattr(self, "value", self.value + 100)
        self.run()

    def run(self):
        self.x()
        print(self.value)

t = TestClass()
# Output: 200
Use setattr to increment the value while still using a lambda. Beware, however: lambdas are one of Python's weaker features. Still, both methods work.
Edit
Just remembered something that you might find useful! The standard library has a module called operator which implements the standard operators as functions. If you plan on using lambdas a lot, you might like to investigate it.
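For example, a sketch of how operator functions could stand in for hand-written lambdas in an instruction-set dictionary (the dictionary name and the bound constants are illustrative, not from the question):

```python
import operator
from functools import partial

# A hypothetical instruction set built from operator functions
# instead of hand-written lambdas.
instructions = {
    "add": partial(operator.add, 100),  # add 100 to its argument
    "mul": partial(operator.mul, 3),    # triple its argument
}

print(instructions["add"](5))  # 105
print(instructions["mul"](5))  # 15
```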

I'm just guessing at what you want to accomplish, but no lambdas are necessary.
class TestClass(object):
    def __init__(self):
        self.value = 100
        self.methods = {
            'add': self.w,
            'subtract': self.x,
            'mult': self.y,
            'div': self.z,
        }
        self.run()

    def w(self): self.value += 100
    def x(self): self.value -= 100
    def y(self): self.value *= 100
    def z(self): self.value /= 100

    def run(self):
        self.x()
        print(self.value)
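A brief sketch, condensed from the class above, of dispatching through the methods dictionary by instruction name:

```python
class TestClass(object):
    def __init__(self):
        self.value = 100
        # instruction name -> bound method
        self.methods = {'add': self.w, 'subtract': self.x}

    def w(self): self.value += 100
    def x(self): self.value -= 100

t = TestClass()
t.methods['add']()       # dispatch by instruction name
t.methods['subtract']()
print(t.value)           # 100
```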

Related

Return from __call__ immediately if a value gets changed in the class

I want to return to the main function whenever the value of a is set to 1 by any of the functions (func1, func2, func3) called in the class. Can anyone help me solve this? I need to return immediately, before calling the rest of the functions, when the value is set. I have attached a code snippet for reference.
class Property:
    def __init__(self):
        ## initializing the attribute
        self.a = 0

    def __call__(self, val):
        self.func1(val)
        self.func2(val)
        self.func3(val)

    def func1(self, value):
        if value == 20:
            self.a = 1
            return
        print("all okay in func1")

    def func2(self, value):
        if value == 40:
            self.a = 1
            return
        print("all okay in func2")

    def func3(self, value):
        if value == 60:
            self.a = 1
            return
        print("all okay in func3")

def main():
    obj = Property()
    obj(20)
How's this?
class MyCustomException(Exception):
    pass

class Property:
    def __init__(self):
        ## initializing the attribute
        self.a = 0

    def set_a_to_1_and_raise(self):
        self.a = 1
        raise MyCustomException

    def __call__(self, val):
        try:
            self.func1(val)
            self.func2(val)
            self.func3(val)
        except MyCustomException:
            return

    def func1(self, value):
        if value == 20:
            self.set_a_to_1_and_raise()
        print("all okay in func1")

    def func2(self, value):
        if value == 40:
            self.set_a_to_1_and_raise()
        print("all okay in func2")

    def func3(self, value):
        if value == 60:
            self.set_a_to_1_and_raise()
        print("all okay in func3")
Edit: Added @tdelaney's nice suggestion regarding the name of the function self.set_a_to_1_and_raise(), to make it obvious that it raises an exception when you call it.
In contrast to my other answer, @Copperfield had the suggestion of using Python's @property decorator to do this, which could be handy for some applications. This saves you from having to change the code where you assign self.a = 1, and it will just "automatically" notice whenever you happen to make such an assignment. It's a bit more "under the hood", i.e. less transparent to readers of the code, but could still be useful:
class MyCustomException(Exception):
    pass

class Property:
    def __init__(self):
        ## initializing the attribute (Note change to _a, not a)
        self._a = 0

    def __call__(self, val):
        try:
            self.func1(val)
            self.func2(val)
            self.func3(val)
        except MyCustomException:
            return

    def func1(self, value):
        if value == 20:
            self.a = 1
        print("all okay in func1")

    def func2(self, value):
        if value == 40:
            self.a = 1
        print("all okay in func2")

    def func3(self, value):
        if value == 60:
            self.a = 1
        print("all okay in func3")

    @property
    def a(self):
        return self._a

    @a.setter
    def a(self, new_value):
        self._a = new_value
        if self._a == 1:
            raise MyCustomException
What is happening here is that we have specified a as a property. On the surface it works just like a normal member variable in how you assign and access it, but with the special @a.setter decorator we can define a custom function that runs whenever you set its value. So here we can make it check what value is being assigned, and raise a MyCustomException if that value is 1, which we can then catch after skipping the functions that you didn't want to run.
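Distilled to its core (the names here are illustrative, not from the answer), the mechanism is a setter that raises as soon as the sentinel value is assigned, plus a caller that catches the exception:

```python
class Watched:
    """Minimal sketch: a setter that raises when a sentinel value is assigned."""
    class Stop(Exception):
        pass

    def __init__(self):
        self._a = 0

    @property
    def a(self):
        return self._a

    @a.setter
    def a(self, new_value):
        self._a = new_value
        if new_value == 1:
            raise Watched.Stop  # interrupt the caller immediately

w = Watched()
try:
    w.a = 1          # the assignment itself triggers the exception
except Watched.Stop:
    print("assignment of 1 detected, remaining work skipped")
print(w.a)           # 1
```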

Override attribute get behaviour but only by external methods in Python

Consider the following code:
class A():
    def __init__(self, thing):
        self.thing = thing

    def do_something(self):
        if self.thing > 0:
            print('thing is positive')
        else:
            print('thing is not positive')

def some_function(a):
    if a.thing > 0:
        print('this thing is positive')
    else:
        print('this thing is not positive')

class B(A):
    @property
    def thing(self):
        return 0

    @thing.setter
    def thing(self, val):
        pass

    # Purposely don't want to override A.do_something

a = A(5)
print(a.thing)      # 5
a.do_something()    # thing is positive
some_function(a)    # this thing is positive
isinstance(a, A)    # True

b = B(5)
print(b.thing)      # 0
b.do_something()    # thing is not positive (!!! - not what we want - see below)
some_function(b)    # this thing is not positive
isinstance(b, A)    # True
Suppose that do_something is a complicated function which we don't want to override. This could be because it is in an external package and we want to be able to keep using the latest version of this package containing A without having to update B each time. Now suppose that an outside function accesses a.thing by referencing it directly. We want B to extend A so that this external function always sees b.thing == 0. However, we want to do this without modifying the behaviour of internal methods. In the example above, we want to modify the behaviour of some_function, but we do this at the cost of also changing the behaviour of the internal method b.do_something.
The obvious way to fix this would be to have external functions like some_function use a get_thing() method. But if these external functions have already been written in another package, modifying them is not possible.
Another way would be to have B update the value of self.thing before calling the parent class' method,
class B(A):
    def __init__(self, thing):
        self.thing = 0
        self._thing = thing

    def do_something(self):
        self.thing = self._thing
        rval = super().do_something()
        self.thing = 0
        return rval
however, this seems clunky, and if the developer of A adds new methods, then B would change the behaviour of those methods unless it is updated.
Is there a best practice for extending a class like this, which allows us to override __getattribute__ when called by an external function, but without changing any internal behaviour?
I think you can set class B like the following to achieve what you want:
class B:
    def __init__(self, thing):
        self.thing = 0
        self._a = A(thing)

    def __getattr__(self, name):
        return getattr(self._a, name)
The full code is below:
class A:
    def __init__(self, thing):
        self.thing = thing

    def do_something(self):
        if self.thing > 0:
            print('thing is positive')
        else:
            print('thing is not positive')

def some_function(a):
    if a.thing > 0:
        print('this thing is positive')
    else:
        print('this thing is not positive')

class B:
    def __init__(self, thing):
        self.thing = 0
        self._a = A(thing)

    def __getattr__(self, name):
        return getattr(self._a, name)

if __name__ == '__main__':
    a = A(5)
    print(a.thing)      # 5
    a.do_something()    # thing is positive
    some_function(a)    # this thing is positive

    b = B(5)
    print(b.thing)      # 0
    b.do_something()    # thing is positive
    some_function(b)    # this thing is not positive
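One trade-off of this composition approach worth noting (my observation, not from the answer): B no longer subclasses A, so the isinstance(b, A) check from the question now returns False. A minimal sketch:

```python
class A:
    def __init__(self, thing):
        self.thing = thing

class B:
    def __init__(self, thing):
        self.thing = 0
        self._a = A(thing)

    def __getattr__(self, name):
        # __getattr__ is only called when normal lookup fails,
        # so B's own attributes (like thing) shadow the wrapped A's
        return getattr(self._a, name)

b = B(5)
print(b.thing)            # 0, B's shadowing attribute
print(b._a.thing)         # 5, the wrapped instance still holds the real value
print(isinstance(b, A))   # False, composition loses the subclass relationship
```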

elegant way to pass operator to attribute in python

Basically my question: if I have a given Container class
class Container:
    def __init__(self, value):
        self.value = value
is there an elegant way to pass any operator through to the value attribute of the Container class?
So one obvious solution would be to overload every single operator on its own. For example, for + I could do:
class Container:
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        return self.value + other
so that for example
c1 = Container(1)
c1 += 1
print(c1)
would result in
2
and
c2 = Container("ab")
c2 += "c"
print(c2)
would result in
abc
So what I could do is overload all of Python's arithmetic dunder methods. But my question is: is there a more elegant (shorter) way to do this for any operator?
You can factor out some of the more repetitive boilerplate:
import operator

class Container:
    def __init__(self, value):
        self.value = value

    def _generic_op(f):
        def _(self, other):
            return f(self.value, other)
        return _

    __add__ = _generic_op(operator.add)
    __sub__ = _generic_op(operator.sub)
    __mul__ = _generic_op(operator.mul)
    # etc.

    # No need for _generic_op as a class attribute
    del _generic_op
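For an even shorter variant (a sketch of mine, not part of the answer), the dunder methods can be attached in a loop after the class body; the list of operator names here is illustrative:

```python
import operator

class Container:
    def __init__(self, value):
        self.value = value

# Attach one dunder method per operator name in a loop.
for _name in ("add", "sub", "mul", "truediv"):
    _op = getattr(operator, _name)
    def _method(self, other, _op=_op):   # default arg binds _op now
        return _op(self.value, other)
    setattr(Container, f"__{_name}__", _method)

c = Container(10)
print(c + 5)   # 15
print(c * 3)   # 30
```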
If the number of overloads is smaller than the number of different classes you want to overload them in, you can define a separate class for each overload and inherit from it. So:
class Adder:
    def __add__(self, val):
        self.value += val
        return self

class Subtractor:
    def __sub__(self, val):
        self.value -= val
        return self

class Foo(Adder, Subtractor):
    def __init__(self, value):
        self.value = value

foo = Foo(5)
foo -= 3
foo += 7
print(foo.value)
But I am against doing such operations between custom data types and native data types; consider this just a simple example that doesn't follow good coding principles in that sense.
In case you still want to allow adding an int/float directly, you might also want to handle Foo objects inside those methods too.

how to implement @property

I was comparing three slightly different implementations of @property in Python. The Python documentation and "Source 1" initialize the private variable, _var_name. Furthermore, the code from Source 1 has a bug: it doesn't go through .setter when initializing. By contrast, Source 2 correctly initializes the public attribute x.
Is there a good reason to initialize _x in place of x in __init__? Are there any additional differences between these that I haven't described?
From the docs:
class C:
    def __init__(self):
        self._x = None

    @property
    def x(self):
        """I'm the 'x' property."""
        return self._x

    @x.setter
    def x(self, value):
        self._x = value

    @x.deleter
    def x(self):
        del self._x
Source 1:
class Celsius:
    def __init__(self, temperature=0):
        self._temperature = temperature

    def to_fahrenheit(self):
        return (self.temperature * 1.8) + 32

    @property
    def temperature(self):
        print("Getting value")
        return self._temperature

    @temperature.setter
    def temperature(self, value):
        if value < -273:
            raise ValueError("Temperature below -273 is not possible")
        print("Setting value")
        self._temperature = value
Source 2:
class P:
    def __init__(self, x):
        self.x = x

    @property
    def x(self):
        return self.__x

    @x.setter
    def x(self, x):
        if x < 0:
            self.__x = 0
        elif x > 1000:
            self.__x = 1000
        else:
            self.__x = x
Is there a good reason to initialize __x or _x in place of x in __init__?
Properties are often used to transform the input in some way. An internal method (including __init__) often already has the transformed data, and doesn't want it to get transformed again. For example, consider this somewhat silly but obvious example:
class C:
    # ...
    def __init__(self):
        f = open(C.default_filename, 'rb')
        # ...
        self._file = f

    @property
    def file(self):
        return self._file.name

    @file.setter
    def file(self, filename):
        self._file = open(filename, 'rb')
Even when you're not doing anything that would be wrong to pass through the setter, internal code often knows about the class invariants, so the checks done by setters may be extra overhead for no benefit. For example, if you wanted a method to reset the temperature to 0, it could just set self._temperature = 0 instead of self.temperature = 0, because you know that 0 doesn't need to be checked.
On the other hand, some internal methods may want to see x the same way the public does. In that case, they should use the property rather than the underlying attribute. In fact, your Source 1 is a perfect example: __init__ just saves its parameter directly to _temperature, allowing you to construct temperatures below absolute zero (which is bad, because that's actually hotter than infinity, and CPUs like to be cold). And it would be silly to repeat the same precondition test in __init__ that you already wrote in temperature.setter; in this case, just set self.temperature instead.
There is an additional difference in whether a single or double underscore is used.
A single underscore makes the attribute "private by convention"; a double underscore goes further and mangles the name, which means it can't be accidentally accessed from outside your class's code.
Using obj._x works on your instances; obj.__x raises AttributeError. But mangling only prevents accidental access: callers can still use obj._C__x if they really want to get at it. The main reason to do this is to protect subclasses or superclasses from accidentally using the same names.
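A minimal sketch of the single- vs double-underscore difference described above (class and attribute names are illustrative):

```python
class C:
    def __init__(self):
        self._x = 1    # "private" by convention only
        self.__y = 2   # name-mangled: stored as _C__y

c = C()
print(c._x)      # 1, accessible from outside, just discouraged
print(c._C__y)   # 2, the mangled name is still reachable
# c.__y would raise AttributeError: mangling only applies inside class bodies
```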

Is it true that once property is used in Python, it'd always be better initializing via property?

I used to initialize private attributes in __init__ like below (this way of initializing is also commonly seen),
class Duck():
    def __init__(self, input_name):
        self.__name = input_name

    @property
    def name(self):
        return self.__name

    @name.setter
    def name(self, input_name):
        self.__name = input_name

    # Use private attribute __name internally for other purposes below...
But I just want to make sure whether it is actually safer to use the property right from the start, in __init__. For example, in the next class, for input greater than 1000 or less than 0 I want to store 1000 and 0 respectively, rather than the original input value; it seems I can't use self.__x = x:
class P:
    def __init__(self, x):
        # If self.__x = x, not desirable
        self.x = x

    @property
    def x(self):
        return self.__x

    @x.setter
    def x(self, x):
        if x < 0:
            self.__x = 0
        elif x > 1000:
            self.__x = 1000
        else:
            self.__x = x
I assume you work with Python 2. Properties are not supported for old-style classes; just change the first line to
class P(object):
And whether you use self._x or self.__x for the attribute behind the property does not matter; just be consistent, i.e. change the constructor line back to
self.__x = x
Just don't call it self.x, the same name as the property.
Edit:
There was a misunderstanding I think. Here the complete code I propose:
class P(object):
    def __init__(self, x):
        self.__x = x

    @property
    def x(self):
        return self.__x

    @x.setter
    def x(self, x):
        if x < 0:
            self.__x = 0
        elif x > 1000:
            self.__x = 1000
        else:
            self.__x = x

p = P(3)
p.x = 1001
print(p.x)
Edit 2 - The actual question:
I apologize: I simply did not grasp the heading and actual question here, but was focused on making your class work. My distraction was that, in Python 2 with old-style classes, the setter would not really get called.
Then, as indicated in the comment conversation below, I don't have a definite answer on whether to initialize the attribute or the property, but I (personally) would say:
a. If the initialisation deserves the same validation as performed in the setter, then use the property, as otherwise you'd need to copy the setter code.
b. If however the value to initialise doesn't need validation (for instance, you set it to an a priori validated constant default value), then there is no reason to use the property.
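To illustrate point (a), a condensed sketch where __init__ assigns through the property, so the setter's clamping also runs at construction time (the clamp is written with min/max but is equivalent to the if/elif chain above):

```python
class P(object):
    def __init__(self, x):
        self.x = x  # goes through the setter below, so input is clamped

    @property
    def x(self):
        return self.__x

    @x.setter
    def x(self, x):
        # clamp to [0, 1000], same rule as the setter in the question
        self.__x = max(0, min(1000, x))

p = P(5000)
print(p.x)  # 1000 (already clamped during __init__)
```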
