Pythonic Mangling - python

I stumbled on mangling by accident - I put two underscores instead of one in a class function name - but I've found it to be quite useful. For example, I have various objects that need some air traffic control between them, so I can call their parent objects with the same function, e.g. parentobject.__remove(). It's not much different from using parentobject._remove_myclass(), but I kinda like the mangling!
Mangling seems designed to protect a parent class's attributes from being accidentally overridden, so is exploiting this a) "pythonic" and, more importantly, b) reliable/a good idea?
class myClass():
    def __mc_func(self):
        print('Hello!')
    def _yetAnotherClass__mc_func(self):
        print('Mangled from yetAnotherClass!')
    def new_otherClass(self):
        return otherClass(self)
    def new_yetAnotherClass(self):
        return yetAnotherClass(self)

class otherClass():
    def __init__(self, myClass_instance):
        self.mci = myClass_instance
    def func(self):
        self.mci.__mc_func()

class yetAnotherClass():
    def __init__(self, myClass_instance):
        self.mci = myClass_instance
    def func(self):
        self.mci.__mc_func()

g = myClass()
h = g.new_otherClass()
try:
    h.func()
except AttributeError as e:
    print(e)
# 'myClass' object has no attribute '_otherClass__mc_func'

j = g.new_yetAnotherClass()
j.func()
# Mangled from yetAnotherClass!
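As a quick sanity check of what actually ends up in the class (my addition, not part of the original question): the double-underscore names are rewritten at class-definition time, so you can see the mangled forms directly in myClass's namespace.
print([name for name in vars(myClass) if name.endswith('__mc_func')])
# ['_myClass__mc_func', '_yetAnotherClass__mc_func']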


Inheritance method overwrite in some conditions [duplicate]

When creating a simple object hierarchy in Python, I'd like to be able to invoke methods of the parent class from a derived class. In Perl and Java, there is a keyword for this (super). In Perl, I might do this:
package Foo;
sub frotz {
    return "Bamf";
}

package Bar;
@ISA = qw(Foo);
sub frotz {
    my $str = SUPER::frotz();
    return uc($str);
}
In Python, it appears that I have to name the parent class explicitly from the child.
In the example above, I'd have to do something like Foo::frotz().
This doesn't seem right since this behavior makes it hard to make deep hierarchies. If children need to know what class defined an inherited method, then all sorts of information pain is created.
Is this an actual limitation in python, a gap in my understanding or both?
Use the super() function:
class Foo(Bar):
    def baz(self, **kwargs):
        return super().baz(**kwargs)
For Python < 3, you must explicitly opt in to using new-style classes and use:
class Foo(Bar):
    def baz(self, arg):
        return super(Foo, self).baz(arg)
Python has super as well:
super(type[, object-or-type])
Return a proxy object that delegates method calls to a parent or sibling class of type. This is useful for accessing inherited methods that have been overridden in a class. The search order is the same as that used by getattr() except that the type itself is skipped.
Example:
class A(object):    # deriving from 'object' declares A as a 'new-style class'
    def foo(self):
        print "foo"

class B(A):
    def foo(self):
        super(B, self).foo()    # calls 'A.foo()'

myB = B()
myB.foo()
ImmediateParentClass.frotz(self)
will be just fine, whether the immediate parent class defined frotz itself or inherited it. super is only needed for proper support of multiple inheritance (and then it only works if every class uses it properly). In general, AnyClass.whatever is going to look up whatever in AnyClass's ancestors if AnyClass doesn't define/override it, and this holds true for "child class calling parent's method" as for any other occurrence!
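To illustrate the multiple-inheritance point, here is a small sketch of my own (not from the original answer): with a diamond hierarchy, cooperative super() calls visit each class along the MRO exactly once, which hard-coding the parent's name does not guarantee.
class Base:
    def greet(self):
        print("Base")

class Left(Base):
    def greet(self):
        print("Left")
        super().greet()

class Right(Base):
    def greet(self):
        print("Right")
        super().greet()

class Child(Left, Right):
    def greet(self):
        print("Child")
        super().greet()

Child().greet()   # prints Child, Left, Right, Base -- Base runs only once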
Python 3 has a different and simpler syntax for calling parent method.
If class Foo inherits from Bar, then Bar.__init__ can be invoked from Foo via super().__init__():
class Foo(Bar):
    def __init__(self, *args, **kwargs):
        # invoke Bar.__init__
        super().__init__(*args, **kwargs)
Many answers have explained how to call a method from the parent which has been overridden in the child.
However
"how do you call a parent class's method from child class?"
could also just mean:
"how do you call inherited methods?"
You can call methods inherited from a parent class just as if they were methods of the child class, as long as they haven't been overridden.
e.g. in python 3:
class A():
    def bar(self, string):
        print("Hi, I'm bar, inherited from A" + string)

class B(A):
    def baz(self):
        self.bar(" - called by baz in B")

B().baz()  # prints out "Hi, I'm bar, inherited from A - called by baz in B"
Yes, this may be fairly obvious, but I feel that without pointing this out, people may leave this thread with the impression that you have to jump through ridiculous hoops just to access inherited methods in Python. Especially as this question rates highly in searches for "how to access a parent class's method in Python", and the OP is written from the perspective of someone new to Python.
I found:
https://docs.python.org/3/tutorial/classes.html#inheritance
to be useful in understanding how you access inherited methods.
Here is an example of using super():
# New-style classes inherit from object, or from another new-style class
class Dog(object):

    name = ''
    moves = []

    def __init__(self, name):
        self.name = name

    def moves_setup(self):
        self.moves.append('walk')
        self.moves.append('run')

    def get_moves(self):
        return self.moves

class Superdog(Dog):

    # Let's try to append a new fly ability to our Superdog
    def moves_setup(self):
        # Set default moves by calling the method of the parent class
        super(Superdog, self).moves_setup()
        self.moves.append('fly')

dog = Superdog('Freddy')
print dog.name          # Freddy
dog.moves_setup()
print dog.get_moves()   # ['walk', 'run', 'fly']
# As you can see, our Superdog has all moves defined in the base Dog class
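One caveat about the Dog example (my note, not part of the original answer): moves is a class attribute, so every Dog and Superdog instance appends to the same shared list. If that's not what you want, create the list per instance in __init__:
class Dog(object):
    def __init__(self, name):
        self.name = name
        self.moves = []   # each instance now gets its own list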
There's a super() in Python too. It's a bit wonky, because of Python's old- and new-style classes, but is quite commonly used e.g. in constructors:
class Foo(Bar):
    def __init__(self):
        super(Foo, self).__init__()
        self.baz = 5
I would recommend using CLASS.__bases__, something like this:
class A:
    def __init__(self):
        print "I am Class %s" % self.__class__.__name__
        for parentClass in self.__class__.__bases__:
            print "    I am inherited from:", parentClass.__name__
            # parentClass.foo(self) <- call parent's function with self as first param

class B(A): pass
class C(B): pass

a, b, c = A(), B(), C()
If you don't know how many arguments you might get, and want to pass them all through to the parent as well:
class Foo(Bar):
    def baz(self, arg, *args, **kwargs):
        # ... Do your thing
        return super(Foo, self).baz(arg, *args, **kwargs)
(From: Python - Cleanest way to override __init__ where an optional kwarg must be used after the super() call?)
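A quick usage sketch of the pass-through (my own example; Bar here is just a minimal stand-in parent):
class Bar(object):
    def baz(self, arg, *args, **kwargs):
        print("Bar.baz got", arg, args, kwargs)

class Foo(Bar):
    def baz(self, arg, *args, **kwargs):
        # ... Do your thing
        return super(Foo, self).baz(arg, *args, **kwargs)

Foo().baz(1, 2, three=3)   # Bar.baz got 1 (2,) {'three': 3}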
There is a super() in python also.
Example of how a superclass method is called from a subclass method:
class Dog(object):

    name = ''
    moves = []

    def __init__(self, name):
        self.name = name

    def moves_setup(self, x):
        self.moves.append('walk')
        self.moves.append('run')
        self.moves.append(x)

    def get_moves(self):
        return self.moves

class Superdog(Dog):

    # Let's try to append a new fly ability to our Superdog
    def moves_setup(self):
        # Set default moves by calling the method of the parent class
        super().moves_setup("hello world")
        self.moves.append('fly')

dog = Superdog('Freddy')
print(dog.name)
dog.moves_setup()
print(dog.get_moves())
This example is similar to the one explained above. The one difference is that super is called without any arguments (zero-argument super() is a Python 3 feature). The code above runs under Python 3.4.
In this example cafec_param is a base class (parent class) and abc is a child class. abc calls the AWC method in the base class.
class cafec_param:
    def __init__(self, precip, pe, awc, nmonths):
        self.precip = precip
        self.pe = pe
        self.awc = awc
        self.nmonths = nmonths

    def AWC(self):
        if self.awc < 254:
            Ss = self.awc
            Su = 0
            self.Ss = Ss
        else:
            Ss = 254
            Su = self.awc - 254
            self.Ss = Ss + Su
        AWC = Ss + Su
        return self.Ss

    def test(self):
        return self.Ss
        # return self.Ss * 4

class abc(cafec_param):
    def rr(self):
        return self.AWC()

ee = cafec_param('re', 34, 56, 2)
dd = abc('re', 34, 56, 2)
print(dd.rr())
print(ee.AWC())
print(ee.test())
Output
56
56
56
In Python 2, I didn't have a lot of luck with super(). I used the answer from jimifiki in this SO thread, "how to refer to a parent method in python?".
Then I added my own little twist to it, which I think is an improvement in usability (especially if you have long class names).
Define the base class in one module:
# myA.py
class A():
    def foo(self):
        print "foo"
Then import the class into another module as the parent:
# myB.py
from myA import A as parent

class B(parent):
    def foo(self):
        parent.foo(self)    # calls 'A.foo()'
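A quick usage check of the aliased-import trick (my own sketch, assuming the two modules above are importable):
# e.g. in a third module or an interactive session
from myB import B
B().foo()   # prints "foo" via parent.foo(self)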
class department:
    campus_name = "attock"

    def printer(self):
        print(self.campus_name)

class CS_dept(department):
    def overr_CS(self):
        department.printer(self)
        print("i am child class1")

c = CS_dept()
c.overr_CS()
If you want to call the method of any class, you can simply call Class.method on any instance of the class. If your inheritance is relatively clean, this will work on instances of a child class too:
class Foo:
    def __init__(self, var):
        self.var = var

    def baz(self):
        return self.var

class Bar(Foo):
    pass

bar = Bar(1)
assert Foo.baz(bar) == 1
class a(object):
    def my_hello(self):
        print "hello ravi"

class b(a):
    def my_hello(self):
        super(b, self).my_hello()
        print "hi"

obj = b()
obj.my_hello()
This is a more abstract way to write it:
super(self.__class__, self).baz(arg)
Be careful with this pattern, though: self.__class__ is always the most-derived class of the instance, so if someone subclasses your class, super(self.__class__, self) can resolve back to the same method and recurse forever. Naming the class explicitly (or using zero-argument super() in Python 3) avoids that.
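A minimal sketch of that failure mode (my own example, not from the answer):
class Foo:
    def baz(self):
        return "Foo.baz"

class Bar(Foo):
    def baz(self):
        # looks like "call my parent", but for a Baz instance self.__class__ is Baz,
        # so super(Baz, self) resolves to Bar.baz again -> infinite recursion
        return super(self.__class__, self).baz()

class Baz(Bar):
    pass

Bar().baz()   # fine: returns 'Foo.baz'
Baz().baz()   # RecursionError: maximum recursion depth exceeded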

How to initialize a class member using a classmethod

I have a class, which holds some member x (say, some data that is needed by all instances, but independent of them):
class Foo(object):
    x = 23
    # some more code goes here
Now, the procedure of determining x became more complex plus I wanted to be able to "refresh" x at certain times, so I decided to write an extra function for it
class Foo(object):
    @classmethod
    def generate_x(cls):
        cls.x = 23
    # some more code goes here
However, this class definition lacks an initialization call of generate_x.
What I tried so far:
This does not work:
class Foo(object):
    # generate_x()      # NameError: name 'generate_x' is not defined
    # Foo.generate_x()  # NameError: name 'Foo' is not defined

    @classmethod
    def generate_x(cls):
        cls.x = 23
This works but is less clear, because the code sits outside the class definition:
class Foo(object):
    @classmethod
    def generate_x(cls):
        cls.x = 23
    # ...

Foo.generate_x()
Are there better alternatives to this? Is using @classmethod the best approach here? What I'm searching for is a class-level equivalent of __init__.
Considering code clarity, is there a better way than the latter to initialize Foo.x automatically using a function?
One way to achieve this is by using a decorator:
def with_x(cls):
    cls.generate_x()
    return cls

@with_x
class Foo(object):
    @classmethod
    def generate_x(cls):
        cls.x = 23
(That said, I personally would just call Foo.generate_x explicitly after the class declaration, and avoid all the magic altogether.)
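A quick check of what the decorator buys you (my addition): by the time the class statement finishes, with_x has already run generate_x, so the attribute exists without any extra call at import sites.
print(Foo.x)       # 23 -- set by with_x when the class object was created
Foo.generate_x()   # can still be called later to "refresh" the value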
Use a descriptor.
class Complicated:
    def __init__(self, location, get_value):
        self.location = location
        self.get_value = get_value   # plain function stored on the instance, so it is not bound

    def __get__(self, obj, owner):
        try:
            a = getattr(owner, self.location)
        except AttributeError:
            a = self.get_value()
            setattr(owner, self.location, a)   # cache the computed value on the owner class
        return a

class MyClass:
    x = Complicated('_x', get_x)   # get_x is whatever function computes the value
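A minimal usage sketch for the descriptor (my own example; get_x stands in for whatever function computes the value):
def get_x():
    print("computing x...")
    return 23

class MyClass:
    x = Complicated('_x', get_x)

print(MyClass.x)   # prints "computing x..." then 23 -- computed on first access
print(MyClass.x)   # prints 23 -- cached in MyClass._x, get_x is not called again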

conditional class inheritance in python

I am trying to dynamically create classes in Python and am relatively new to classes and class inheritance. Basically I want my final object to have different types of history depending on different needs. I have a solution but I feel there must be a better way. I dreamed up something like this.
class A:
    def __init__(self):
        self.history = {}
    def do_something(self):
        pass

class B:
    def __init__(self):
        self.history = []
    def do_something_else(self):
        pass

class C(A, B):
    def __init__(self, a=False, b=False):
        if a:
            A.__init__(self)
        elif b:
            B.__init__(self)

use1 = C(a=True)
use2 = C(b=True)
You probably don't really need that, and this is probably an XY problem, but those happen regularly when you are learning a language. You should be aware that you typically don't need to build huge class hierarchies with Python like you do with some other languages. Python employs "duck typing" -- if a class has the method you want to use, just call it!
Also, by the time __init__ is called, the instance already exists. You can't (easily) change it out for a different instance at that time (though, really, anything is possible).
If you really want to be able to instantiate a class and receive what are essentially instances of completely different objects depending on what you passed to the constructor, the simple, straightforward thing to do is use a function that returns instances of different classes.
However, for completeness, you should know that classes can define a __new__ method, which gets called before __init__. This method can return an instance of the class, or an instance of a completely different class, or whatever the heck it wants. So, for example, you can do this:
class A(object):
    def __init__(self):
        self.history = {}
    def do_something(self):
        print("Class A doing something", self.history)

class B(object):
    def __init__(self):
        self.history = []
    def do_something_else(self):
        print("Class B doing something", self.history)

class C(object):
    def __new__(cls, a=False, b=False):
        if a:
            return A()
        elif b:
            return B()

use1 = C(a=True)
use2 = C(b=True)
use3 = C()

use1.do_something()
use2.do_something_else()
print(use3 is None)
This works with either Python 2 or 3. With 3 it returns:
Class A doing something {}
Class B doing something []
True
I'm assuming that for some reason you can't change A and B, and you need the functionality of both.
Maybe what you need are two different classes:
class CAB(A, B):
    '''uses A's __init__'''

class CBA(B, A):
    '''uses B's __init__'''

use1 = CAB()
use2 = CBA()
The goal is to dynamically create a class.
I don't really recommend dynamically creating a class. You can use a function to do this, and you can easily do things like pickle the instances because they're available in the global namespace of the module:
def make_C(a=False, b=False):
    if a:
        return CAB()
    elif b:
        return CBA()
But if you insist on "dynamically creating the class"
def make_C(a=False, b=False):
    if a:
        return type('C', (A, B), {})()
    elif b:
        return type('C', (B, A), {})()
And usage either way is:
use1 = make_C(a=True)
use2 = make_C(b=True)
I was thinking about the very same thing and came up with a helper function that defines and returns a class inheriting from the type provided as an argument.
The solution presented itself when I was working on a named value class: I wanted a value that could have its own name but otherwise behave like a regular variable. The idea is mostly useful for debugging, I think. Here is the code:
def getValueClass(thetype):
    """Helper function for getting the `Value` class

    Getting the named value class, based on `thetype`.
    """
    # if thetype not in (int, float, complex):  # if needed
    #     raise TypeError("The type is not numeric.")

    class Value(thetype):
        __text_signature__ = "(value, name: str = '')"
        __doc__ = f"A named value of type `{thetype.__name__}`"

        def __init__(self, value, name: str = ""):
            """Value(value, name) -- a named value"""
            self._name = name

        def __new__(cls, value, name: str = ""):
            instance = super().__new__(cls, value)
            return instance

        def __repr__(self):
            return f"{super().__repr__()}"

        def __str__(self):
            return f"{self._name} = {super().__str__()}"

    return Value
Some examples:
IValue = getValueClass(int)
FValue = getValueClass(float)
CValue = getValueClass(complex)
iv = IValue(3, "iv")
print(f"{iv!r}")
print(iv)
print()
fv = FValue(4.5, "fv")
print(f"{fv!r}")
print(fv)
print()
cv = CValue(7 + 11j, "cv")
print(f"{cv!r}")
print(cv)
print()
print(f"{iv + fv + cv = }")
The output:
3
iv = 3
4.5
fv = 4.5
(7+11j)
cv = (7+11j)
iv + fv + cv = (14.5+11j)
When working in IDLE, the variables seem to behave as built-in types, except when printing:
>>> vi = IValue(4, "vi")
>>> vi
4
>>> print(vi)
vi = 4
>>> vf = FValue(3.5, 'vf')
>>> vf
3.5
>>> vf + vi
7.5
>>>

Method Inheritance in Python

I have a parent class and two child classes. The parent class is an abstract base class that has a method combine that gets inherited by the child classes. But each child implements combine differently from a parameter perspective, so each of their methods takes a different number of parameters. In Python, when a child inherits a method and needs to re-implement it, that newly re-implemented method must match the original parameter for parameter. Is there a way around this? I.e. can the inherited method have a dynamic parameter composition?
This code demonstrates that the signature of an overridden method can easily change.
class Parent(object):
    def foo(self, number):
        for _ in range(number):
            print "Hello from parent"

class Child(Parent):
    def foo(self, number, greeting):
        for _ in range(number):
            print greeting

class GrandChild(Child):
    def foo(self):
        super(GrandChild, self).foo(1, "hey")

p = Parent()
p.foo(3)

c = Child()
c.foo(2, "Hi")

g = GrandChild()
g.foo()
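For reference (my addition), running this under Python 2 prints:
Hello from parent
Hello from parent
Hello from parent
Hi
Hi
hey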
As the other answer demonstrates for plain classes, the signature of an overridden inherited method can be different in the child than in the parent.
The same is true even if the parent is an abstract base class:
import abc

class Foo:
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def bar(self, x, y):
        return x + y

class ChildFoo(Foo):
    def bar(self, x):
        return super(self.__class__, self).bar(x, 3)

class DumbFoo(Foo):
    def bar(self):
        return "derp derp derp"

cf = ChildFoo()
print cf.bar(5)

df = DumbFoo()
print df.bar()
Inappropriately complicated detour
It is an interesting exercise in Python metaclasses to try to restrict the ability to override methods, such that their argument signature must match that of the base class. Here is an attempt.
Note: I'm not endorsing this as a good engineering idea, and I did not spend time tying up loose ends, so there are likely small caveats in the code below that could be handled more cleanly.
import types
import inspect

def strict(func):
    """Add some info for functions having strict signature.
    """
    arg_sig = inspect.getargspec(func)
    func.is_strict = True
    func.arg_signature = arg_sig
    return func

class StrictSignature(type):
    def __new__(cls, name, bases, attrs):
        func_types = (types.MethodType,)  # include types.FunctionType?

        # Check each attribute in the class being created.
        for attr_name, attr_value in attrs.iteritems():
            if isinstance(attr_value, func_types):
                # Check every base for @strict functions.
                for base in bases:
                    base_attr = base.__dict__.get(attr_name)
                    base_attr_is_function = isinstance(base_attr, func_types)
                    base_attr_is_strict = hasattr(base_attr, "is_strict")
                    # Assert that inspected signatures match.
                    if base_attr_is_function and base_attr_is_strict:
                        assert (inspect.getargspec(attr_value) ==
                                base_attr.arg_signature)

        # If everything passed, create the class.
        return super(StrictSignature, cls).__new__(cls, name, bases, attrs)

# Make a base class to try out strictness
class Base:
    __metaclass__ = StrictSignature

    @strict
    def foo(self, a, b, c="blah"):
        return a + b + len(c)

    def bar(self, x, y, z):
        return x

#####
# Now try to make some classes inheriting from Base.
#####
class GoodChild(Base):
    # Was declared strict, better match the signature.
    def foo(self, a, b, c="blah"):
        return c

    # Was never declared as strict, so no rules!
    def bar(im_a_little, teapot):
        return teapot / 2

# These below can't even be created. Uncomment and try to run the file
# and see. It's not just that you can't instantiate them, you can't
# even get the *class object* defined at class creation time.
#
# class WrongChild(Base):
#     def foo(self, a):
#         return super(self.__class__, self).foo(a, 5)
#
# class BadChild(Base):
#     def foo(self, a, b, c="halb"):
#         return super(self.__class__, self).foo(a, b, c)
# Instance level
gc = GoodChild()
gc.foo = lambda self=gc: "Haha, I changed the signature!"
# Class level
GoodChild.foo = lambda self: "Haha, I changed the signature!"
And even if you add more complexity to the metaclass, so that it checks whenever any method-type attribute is updated in the class's __dict__ and re-runs the assert when the class is modified, you can still use type.__setattr__ to bypass the customized behavior and set an attribute anyway.
In these cases, I imagine Jeff Goldblum as Ian Malcolm from Jurassic Park, looking at you blankly and saying "Consenting adults, uhh, find a way.."

python: multiple inheritance and __add__() in base class

I've got a base class where I want to handle __add__(), and I want to support adding two subclass instances - that is, the resulting instance should have the methods of both subclasses.
import copy

class Base(dict):
    def __init__(self, **data):
        self.update(data)

    def __add__(self, other):
        result = copy.deepcopy(self)
        result.update(other)
        # how do I now join the methods?
        return result

class A(Base):
    def a(self):
        print "test a"

class B(Base):
    def b(self):
        print "test b"

if __name__ == '__main__':
    a = A(a=1, b=2)
    b = B(c=1)
    c = a + b
    c.b()  # should work
    c.a()  # should work
Edit: To be more specific: I've got a class Hosts that holds a dict(host01=.., host02=..) (hence the subclassing of dict) - this offers some base methods such as run_ssh_command_on_all_hosts().
Now I've got a subclass HostsLoadbalancer that holds some special methods such as drain(), and I've got a class HostsNagios that holds some nagios-specific methods.
What I'm doing then, is something like:
nagios_hosts = nagios.gethosts()
lb_hosts = loadbalancer.gethosts()
hosts = nagios_hosts + lb_hosts
hosts.run_ssh_command_on_all_hosts('uname')
hosts.drain() # method of HostsLoadbalancer - drains just the loadbalancer-hosts
hosts.acknowledge_downtime() # method of HostsNagios - does this just for the nagios hosts, is overlapping
What is the best solution for this problem?
I think I can somehow "copy all methods" - like this:
for x in dir(other):
    setattr(self, x, getattr(other, x))
Am I on the right track? Or should I use Abstract Base Classes?
In general this is a bad idea. You're trying to inject methods into a type. That being said, you can certainly do this in python, but you'll have to realize that you want to create a new type each time you do this. Here's an example:
import copy

class Base(dict):
    global_class_cache = {}

    def __init__(self, **data):
        self.local_data = data

    def __add__(self, other):
        new_instance = self._new_type((type(self), type(other)))()
        new_instance.update(copy.deepcopy(self).__dict__)
        new_instance.update(copy.deepcopy(other).__dict__)
        return new_instance

    def _new_type(self, parents):
        parents = tuple(parents)
        if parents not in Base.global_class_cache:
            name = '_'.join(cls.__name__ for cls in parents)
            Base.global_class_cache[parents] = type(name, parents, {})
        return Base.global_class_cache[parents]

class A(Base):
    def a(self):
        print "test a"

class B(Base):
    def b(self):
        print "test b"

if __name__ == '__main__':
    a = A(a=1, b=2)
    b = B(c=1)
    c = a + b
    c.b()  # should work
    c.a()  # should work
    print c.__class__.__name__
UPDATE
I've updated the example to remove manually moving the methods -- we're using mixins here.
It is difficult to answer your question without more information. If Base is supposed to be a common interface to all classes, then you could use simple inheritance to implement the common behavior while preserving the methods of the subclasses. For instance, imagine that you need a Base class where all the objects have a say_hola() method, but subclasses can have arbitrary additional methods in addition to say_hola():
class Base(object):
    def say_hola(self):
        print "hola"

class C1(Base):
    def add(self, a, b):
        return a + b

class C2(Base):
    def say_bonjour(self):
        return 'bon jour'
This way all instances of C1 and C2 have say_hola() in addition to their specific methods.
A more general pattern is to create a Mixin. From Wikipedia:
In object-oriented programming languages, a mixin is a class that provides a certain functionality to be inherited by a subclass, while not meant for instantiation (the generation of objects of that class). Inheriting from a mixin is not a form of specialization but is rather a means of collecting functionality. A class may inherit most or all of its functionality from one or more mixins through multiple inheritance.
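To make the mixin idea concrete for the hosts example, here is a small sketch of my own (the class and method names are placeholders for the real ones):
class HostsBase(dict):
    def run_ssh_command_on_all_hosts(self, cmd):
        for name in self:
            print("ssh %s %s" % (name, cmd))

class LoadbalancerMixin(object):
    def drain(self):
        print("draining %d hosts" % len(self))

class NagiosMixin(object):
    def acknowledge_downtime(self):
        print("acknowledging downtime on %d hosts" % len(self))

# Compose exactly the behavior you need through multiple inheritance
class Hosts(NagiosMixin, LoadbalancerMixin, HostsBase):
    pass

hosts = Hosts(host01={}, host02={})
hosts.run_ssh_command_on_all_hosts('uname')
hosts.drain()
hosts.acknowledge_downtime()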
