I am trying to dynamically create classes in Python and am relatively new to classes and class inheritance. Basically I want my final object to have different types of history depending on different needs. I have a solution but I feel there must be a better way. I dreamed up something like this.
class A:
    def __init__(self):
        self.history = {}

    def do_something(self):
        pass

class B:
    def __init__(self):
        self.history = []

    def do_something_else(self):
        pass

class C(A, B):
    def __init__(self, a=False, b=False):
        if a:
            A.__init__(self)
        elif b:
            B.__init__(self)

use1 = C(a=True)
use2 = C(b=True)
You probably don't really need that, and this is probably an XY problem, but those happen regularly when you are learning a language. You should be aware that you typically don't need to build huge class hierarchies with Python like you do with some other languages. Python employs "duck typing" -- if a class has the method you want to use, just call it!
Also, by the time __init__ is called, the instance already exists. You can't (easily) change it out for a different instance at that time (though, really, anything is possible).
If you really want to be able to instantiate a class and receive what are essentially instances of completely different objects depending on what you passed to the constructor, the simple, straightforward thing to do is use a function that returns instances of different classes.
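For instance, here is a minimal sketch of that factory approach, reusing simplified versions of the A and B classes from the question (the function name make_instance is invented for illustration):

```python
class A:
    def __init__(self):
        self.history = {}

class B:
    def __init__(self):
        self.history = []

def make_instance(a=False, b=False):
    """Factory: return an instance of A or B depending on the flags."""
    if a:
        return A()
    if b:
        return B()
    raise ValueError("specify a=True or b=True")

use1 = make_instance(a=True)   # an A instance with a dict history
use2 = make_instance(b=True)   # a B instance with a list history
```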
However, for completeness, you should know that classes can define a __new__ method, which gets called before __init__. This method can return an instance of the class, or an instance of a completely different class, or whatever the heck it wants. Note that if __new__ returns something that is not an instance of the class being constructed, Python skips calling __init__ on the result. So, for example, you can do this:
class A(object):
    def __init__(self):
        self.history = {}

    def do_something(self):
        print("Class A doing something", self.history)

class B(object):
    def __init__(self):
        self.history = []

    def do_something_else(self):
        print("Class B doing something", self.history)

class C(object):
    def __new__(cls, a=False, b=False):
        if a:
            return A()
        elif b:
            return B()

use1 = C(a=True)
use2 = C(b=True)
use3 = C()

use1.do_something()
use2.do_something_else()

print(use3 is None)
This works with either Python 2 or 3. With 3 it returns:
Class A doing something {}
Class B doing something []
True
I'm assuming that for some reason you can't change A and B, and you need the functionality of both.
Maybe what you need are two different classes:
class CAB(A, B):
    '''uses A's __init__'''

class CBA(B, A):
    '''uses B's __init__'''

use1 = CAB()
use2 = CBA()
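A quick check that the MRO picks the expected __init__ (A and B simplified here to just their history attributes):

```python
class A:
    def __init__(self):
        self.history = {}

class B:
    def __init__(self):
        self.history = []

class CAB(A, B):
    '''uses A's __init__ (A comes first in the MRO)'''

class CBA(B, A):
    '''uses B's __init__'''

print(type(CAB().history))  # <class 'dict'>
print(type(CBA().history))  # <class 'list'>
```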
The goal is to dynamically create a class.
I don't really recommend dynamically creating a class. You can use a function to do this, and you can easily do things like pickle the instances because they're available in the global namespace of the module:
def make_C(a=False, b=False):
    if a:
        return CAB()
    elif b:
        return CBA()
But if you insist on "dynamically creating the class":
def make_C(a=False, b=False):
    if a:
        return type('C', (A, B), {})()
    elif b:
        return type('C', (B, A), {})()
And usage either way is:
use1 = make_C(a=True)
use2 = make_C(b=True)
I was thinking about the very same thing and came up with a helper function that defines and returns a class inheriting from the type provided as an argument.
The solution presented itself when I was working on a named value class. I wanted a value that could have its own name but would otherwise behave like a regular variable. I think the idea is mostly useful for debugging. Here is the code:
def getValueClass(thetype):
    """Helper function for getting the `Value` class.

    Gets the named value class, based on `thetype`.
    """
    # if thetype not in (int, float, complex):  # if needed
    #     raise TypeError("The type is not numeric.")

    class Value(thetype):
        __text_signature__ = '(value, name: str = "")'
        __doc__ = f"A named value of type `{thetype.__name__}`"

        def __new__(cls, value, name: str = ""):
            return super().__new__(cls, value)

        def __init__(self, value, name: str = ""):
            """Value(value, name) -- a named value"""
            self._name = name

        def __repr__(self):
            return super().__repr__()

        def __str__(self):
            return f"{self._name} = {super().__str__()}"

    return Value
Some examples:
IValue = getValueClass(int)
FValue = getValueClass(float)
CValue = getValueClass(complex)
iv = IValue(3, "iv")
print(f"{iv!r}")
print(iv)
print()
fv = FValue(4.5, "fv")
print(f"{fv!r}")
print(fv)
print()
cv = CValue(7 + 11j, "cv")
print(f"{cv!r}")
print(cv)
print()
print(f"{iv + fv + cv = }")
The output:
3
iv = 3
4.5
fv = 4.5
(7+11j)
cv = (7+11j)
iv + fv + cv = (14.5+11j)
When working in IDLE, the variables seem to behave as built-in types, except when printing:
>>> vi = IValue(4, "vi")
>>> vi
4
>>> print(vi)
vi = 4
>>> vf = FValue(3.5, 'vf')
>>> vf
3.5
>>> vf + vi
7.5
>>>
I have a case, where I have an instance of a class in python which holds instances of other classes. For my use case, I would like a way to use the methods of the "inner" classes from the outer class without referencing the attribute holding the inner class.
I have made a simplistic example here:
class A:
    def __init__(self):
        pass

    def say_hi(self):
        print("Hi")

    def say_goodbye(self):
        print("Goodbye")

class C:
    def __init__(self, other_instance):
        self.other_instance = other_instance

    def say_good_night(self):
        print("Good night")

my_a = A()
my_c = C(other_instance=my_a)

# How to make this possible:
my_c.say_hi()
# Instead of
my_c.other_instance.say_hi()
Class inheritance is not possible, as the object passed to C may be an instance of a range of classes. Is this possible in Python?
I think this is the simplest solution, although it is also possible with metaprogramming.
class A:
    def __init__(self):
        pass

    def say_hi(self):
        print("Hi")

    def say_goodbye(self):
        print("Goodbye")

class C:
    def __init__(self, other_class):
        self.other_class = other_class
        C._add_methods(other_class)

    def say_good_night(self):
        print("Good night")

    @classmethod
    def _add_methods(cls, obj):
        type_ = type(obj)
        for k, v in type_.__dict__.items():
            if not k.startswith('__'):
                setattr(cls, k, v)

my_a = A()
my_c = C(other_class=my_a)
my_c.say_hi()
Output:
Hi
First we get the type of the passed instance, then we iterate through its attributes (because methods are attributes of the class, not the instance).
If self.other_class is only needed for this purpose, you can omit it as well.
So, because you have done my_a = A() and my_c = C(other_class=my_a), my_c.other_class is the same object as my_a; they point to the same location in memory.
Therefore, just as you can do my_a.say_hi(), you can also do my_c.other_class.say_hi().
Also, just a note: since you are calling A() before you store it into other_class, I would probably rename the variable other_class to class_instance.
Personally, I think that would make more sense, as the class has already been instantiated.
Being new to OOP, I wanted to know if there is any way of inheriting one of multiple classes based on how the child class is called in Python. The reason I am trying to do this is because I have multiple methods with the same name but in three parent classes which have different functionality. The corresponding class will have to be inherited based on certain conditions at the time of object creation.
For example, I tried to make Class C inherit A or B based on whether any arguments were passed at the time of instantiating, but in vain. Can anyone suggest a better way to do this?
class A:
    def __init__(self, a):
        self.num = a

    def print_output(self):
        print('Class A is the parent class, the number is 7', self.num)

class B:
    def __init__(self):
        self.digits = []

    def print_output(self):
        print('Class B is the parent class, no number given')

class C(A if kwargs else B):  # fails: kwargs does not exist here
    def __init__(self, **kwargs):
        if kwargs:
            super().__init__(kwargs['a'])
        else:
            super().__init__()

temp1 = C(a=7)
temp2 = C()
temp1.print_output()
temp2.print_output()
The required output would be 'Class A is the parent class, the number is 7' followed by 'Class B is the parent class, no number given'.
Thanks!
Whether you're just starting out with OOP or have been doing it for a while, I would suggest you get a good book on design patterns. A classic is Design Patterns by Gamma, Helm, Johnson, and Vlissides.
Instead of using inheritance, you can use composition with delegation. For example:
class A:
    def do_something(self):
        ...  # some implementation

class B:
    def do_something(self):
        ...  # some implementation

class C:
    def __init__(self, use_A):
        # assign an instance of A or B depending on whether argument use_A is True
        self.instance = A() if use_A else B()

    def do_something(self):
        # delegate to the A or B instance:
        self.instance.do_something()
Update
In response to a comment made by Lev Barenboim, the following demonstrates how composition with delegation can be made to look more like regular inheritance. If class C has assigned an instance of class A to self.instance, then an attribute of A such as x can be accessed internally as self.x as well as self.instance.x (assuming class C does not define attribute x itself). Likewise, if you create an instance of C named c, you can refer to that attribute as c.x, as if class C had inherited from class A.
The basis for doing this lies with builtin methods __getattr__ and __getattribute__. __getattr__ can be defined on a class and will be called whenever an attribute is referenced but not defined. __getattribute__ can be called on an object to retrieve an attribute by name.
Note that in the following example, class C no longer even has to define method do_something if all it does is delegate to self.instance:
class A:
    def __init__(self, x):
        self.x = x

    def do_something(self):
        print('I am A')

class B:
    def __init__(self, x):
        self.x = x

    def do_something(self):
        print('I am B')

class C:
    def __init__(self, use_A, x):
        # assign an instance of A or B depending on whether argument use_A is True
        self.instance = A(x) if use_A else B(x)

    # called when an attribute is not found:
    def __getattr__(self, name):
        # assume it is implemented by self.instance
        return self.instance.__getattribute__(name)

    # something unique to class C:
    def foo(self):
        print('foo called: x =', self.x)

c = C(True, 7)
print(c.x)
c.foo()
c.do_something()
# This will throw an Exception:
print(c.y)
Prints:
7
foo called: x = 7
I am A
Traceback (most recent call last):
File "C:\Ron\test\test.py", line 34, in <module>
print(c.y)
File "C:\Ron\test\test.py", line 23, in __getattr__
return self.instance.__getattribute__(name)
AttributeError: 'A' object has no attribute 'y'
I don't think you can pass values to the condition of the class from inside itself.
Rather, you can define a factory method like this:
class A:
    def sayClass(self):
        print("Class A")

class B:
    def sayClass(self):
        print("Class B")

def make_C_from_A_or_B(make_A):
    class C(A if make_A else B):
        def sayClass(self):
            super().sayClass()
            print("Class C")
    return C()

make_C_from_A_or_B(True).sayClass()
which outputs:
Class A
Class C
Note: You can find information about the factory pattern, with an example I found good enough, in this article (about a parser factory).
I have a class that can be constructed via alternative constructors using class methods.
class A:
    def __init__(self, a, b):
        self.a = a
        self.b = b

    @classmethod
    def empty(cls, b):
        return cls(0, b)
So let's say instead of constructing A like A() I can now also do A.empty().
For user convenience, I would like to extend this empty method even further, so that I can initialize A via A.empty() as well as the more specialized but closely-related A.empty.typeI() and A.empty.typeII().
My naive approach did not quite do what I wanted:
class A:
    def __init__(self, a, b):
        self.a = a
        self.b = b

    @classmethod
    def empty(cls, b):
        def TypeI(b):
            return cls(0, b - 1)

        def TypeII(b):
            return cls(0, b - 2)

        return cls(0, b)
Can anyone tell me how that could be done (or at least convince me why it would be a terrible idea)? I want to stress that for usage I imagine such an approach to be very convenient and clear for the users, as the functions are grouped intuitively.
You can implement what you want by making Empty a nested class of A rather than a class method. More than anything else this provides a convenient namespace — instances of it are never created — in which to place various alternative constructors and can easily be extended.
class A(object):
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def __repr__(self):
        return 'A({}, {})'.format(self.a, self.b)

    class Empty(object):  # nested class
        def __new__(cls, b):
            return A(0, b)  # ignore cls & return instance of enclosing class

        @staticmethod
        def TypeI(b):
            return A(0, b - 1)

        @staticmethod
        def TypeII(b):
            return A(0, b - 2)

a = A(1, 1)
print('a: {}'.format(a))      # --> a: A(1, 1)

b = A.Empty(2)
print('b: {}'.format(b))      # --> b: A(0, 2)

bi = A.Empty.TypeI(4)
print('bi: {}'.format(bi))    # --> bi: A(0, 3)

bii = A.Empty.TypeII(6)
print('bii: {}'.format(bii))  # --> bii: A(0, 4)
You can’t really do that because A.empty.something would require the underlying method object to be bound to the type, so you can actually call it. And Python simply won’t do that because the type’s member is empty, not TypeI.
So what you would need to do is to have some object empty (for example a SimpleNamespace) in your type which returns bound classmethods. The problem is that we cannot yet access the type as we define it with the class structure. So we cannot access its members to set up such an object. Instead, we would have to do it afterwards:
from types import SimpleNamespace

class A:
    def __init__(self, a, b):
        self.a = a
        self.b = b

    @classmethod
    def _empty_a(cls, b):
        return cls(1, b)

    @classmethod
    def _empty_b(cls, b):
        return cls(2, b)

A.empty = SimpleNamespace(a=A._empty_a, b=A._empty_b)
Now, you can access that member’s items and get bound methods:
>>> A.empty.a
<bound method type._empty_a of <class '__main__.A'>>
>>> A.empty.a('foo').a
1
Of course, that isn't really that pretty. Ideally, we want to set this up when we define the type. We could use metaclasses for this, but we can actually solve it easily with a class decorator. For example, this one:
from types import SimpleNamespace

def delegateMember(name, members):
    def classDecorator(cls):
        mapping = {m: getattr(cls, '_' + m) for m in members}
        setattr(cls, name, SimpleNamespace(**mapping))
        return cls
    return classDecorator

@delegateMember('empty', ['empty_a', 'empty_b'])
class A:
    def __init__(self, a, b):
        self.a = a
        self.b = b

    @classmethod
    def _empty_a(cls, b):
        return cls(1, b)

    @classmethod
    def _empty_b(cls, b):
        return cls(2, b)
And magically, it works:
>>> A.empty.empty_a
<bound method type._empty_a of <class '__main__.A'>>
Now that we got it working somehow, of course we should discuss whether this is actually something you want to do. My opinion is that you shouldn’t. You can already see from the effort it took that this isn’t something that’s usually done in Python. And that’s already a good sign that you shouldn’t do it. Explicit is better than implicit, so it’s probably a better idea to just expect your users to type the full name of the class method. My example above was of course structured in a way that A.empty.empty_a would have been longer than just a A.empty_a. But even with your name, there isn’t a reason why it couldn’t be just an underscore instead of a dot.
And also, you can simply add multiple default paths inside a single method. Provide default argument values, or use sensible fallbacks, and you probably don’t need many class methods to create alternative versions of your type.
It is generally better to have uniform class interfaces, meaning the different usages should be consistent with each other. I consider A.empty() and A.empty.type1() to be inconsistent with each other, because the prefix A.empty intuitively means different things in each of them.
A better interface would be:
class A:
    @classmethod
    def empty_default(cls, ...): ...

    @classmethod
    def empty_type1(cls, ...): ...

    @classmethod
    def empty_type2(cls, ...): ...
Or:
class A:
    @classmethod
    def empty(cls, empty_type, ...): ...
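As a rough sketch of that second interface (the parameter names and the 0/-1/-2 offsets are invented here, echoing the earlier examples):

```python
class A:
    def __init__(self, a, b):
        self.a = a
        self.b = b

    @classmethod
    def empty(cls, b, empty_type=None):
        """Single alternative constructor; empty_type selects the variant."""
        if empty_type == "type1":
            return cls(0, b - 1)
        if empty_type == "type2":
            return cls(0, b - 2)
        return cls(0, b)

print(A.empty(5).b)           # 5
print(A.empty(5, "type1").b)  # 4
```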
Here's an enhanced implementation of my other answer that makes it — as one commenter put it — "play well with inheritance". You may not need this, but others doing something similar might.
It accomplishes this by using a metaclass to dynamically create and add a nested Empty class similar to the one shown in the other answer. The main difference is that the default Empty class in derived classes will now return Derived instances instead of instances of A, the base class.
Derived classes can override this default behavior by defining their own nested Empty class (it can even be derived from the one in the base class). Also note that for Python 3, metaclasses are specified using different syntax:
class A(object, metaclass=MyMetaClass):
Here's the revised implementation using Python 2 metaclass syntax:
class MyMetaClass(type):
    def __new__(metaclass, name, bases, classdict):
        # create the class normally
        MyClass = super(MyMetaClass, metaclass).__new__(metaclass, name, bases,
                                                        classdict)
        # add a default nested Empty class if one wasn't defined
        if 'Empty' not in classdict:
            class Empty(object):
                def __new__(cls, b):
                    return MyClass(0, b)

                @staticmethod
                def TypeI(b):
                    return MyClass(0, b - 1)

                @staticmethod
                def TypeII(b):
                    return MyClass(0, b - 2)

            setattr(MyClass, 'Empty', Empty)
        return MyClass
class A(object):
    __metaclass__ = MyMetaClass

    def __init__(self, a, b):
        self.a = a
        self.b = b

    def __repr__(self):
        return '{}({}, {})'.format(self.__class__.__name__, self.a, self.b)

a = A(1, 1)
print('a: {}'.format(a))      # --> a: A(1, 1)

b = A.Empty(2)
print('b: {}'.format(b))      # --> b: A(0, 2)

bi = A.Empty.TypeI(4)
print('bi: {}'.format(bi))    # --> bi: A(0, 3)

bii = A.Empty.TypeII(6)
print('bii: {}'.format(bii))  # --> bii: A(0, 4)
With the above, you can now do something like this:
class Derived(A):
    pass  # inherits everything, except it will get its own default Empty

d = Derived(1, 2)
print('d: {}'.format(d))      # --> d: Derived(1, 2)

e = Derived.Empty(3)
print('e: {}'.format(e))      # --> e: Derived(0, 3)

ei = Derived.Empty.TypeI(5)
print('ei: {}'.format(ei))    # --> ei: Derived(0, 4)

eii = Derived.Empty.TypeII(7)
print('eii: {}'.format(eii))  # --> eii: Derived(0, 5)
I have a parent class and two child class. The parent class is an abstract base class that has method combine that gets inherited by the child classes. But each child implements combine differently from a parameter perspective therefore each of their own methods take different number of parameters. In Python, when a child inherits a method and requires re-implementing it, that newly re-implemented method must match parameter by parameter. Is there a way around this? I.e. the inherited method can have dynamic parameter composition?
This code demonstrates that signature of overridden method can easily change.
class Parent(object):
    def foo(self, number):
        for _ in range(number):
            print "Hello from parent"

class Child(Parent):
    def foo(self, number, greeting):
        for _ in range(number):
            print greeting

class GrandChild(Child):
    def foo(self):
        super(GrandChild, self).foo(1, "hey")

p = Parent()
p.foo(3)

c = Child()
c.foo(2, "Hi")

g = GrandChild()
g.foo()
As the other answer demonstrates for plain classes, the signature of an overridden inherited method can be different in the child than in the parent.
The same is true even if the parent is an abstract base class:
import abc

class Foo:
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def bar(self, x, y):
        return x + y

class ChildFoo(Foo):
    def bar(self, x):
        return super(self.__class__, self).bar(x, 3)

class DumbFoo(Foo):
    def bar(self):
        return "derp derp derp"

cf = ChildFoo()
print cf.bar(5)

df = DumbFoo()
print df.bar()
Inappropriately complicated detour
It is an interesting exercise in Python metaclasses to try to restrict the ability to override methods, such that their argument signature must match that of the base class. Here is an attempt.
Note: I'm not endorsing this as a good engineering idea, and I did not spend time tying up loose ends so there are likely little caveats about the code below that could make it more efficient or something.
import types
import inspect

def strict(func):
    """Add some info for functions having strict signature.
    """
    arg_sig = inspect.getargspec(func)
    func.is_strict = True
    func.arg_signature = arg_sig
    return func

class StrictSignature(type):
    def __new__(cls, name, bases, attrs):
        # Class bodies hold plain functions, not bound methods.
        func_types = (types.FunctionType, types.MethodType)

        # Check each attribute in the class being created.
        for attr_name, attr_value in attrs.iteritems():
            if isinstance(attr_value, func_types):

                # Check every base for @strict functions.
                for base in bases:
                    base_attr = base.__dict__.get(attr_name)
                    base_attr_is_function = isinstance(base_attr, func_types)
                    base_attr_is_strict = hasattr(base_attr, "is_strict")

                    # Assert that inspected signatures match.
                    if base_attr_is_function and base_attr_is_strict:
                        assert (inspect.getargspec(attr_value) ==
                                base_attr.arg_signature)

        # If everything passed, create the class.
        return super(StrictSignature, cls).__new__(cls, name, bases, attrs)

# Make a base class to try out strictness
class Base:
    __metaclass__ = StrictSignature

    @strict
    def foo(self, a, b, c="blah"):
        return a + b + len(c)

    def bar(self, x, y, z):
        return x

#####
# Now try to make some classes inheriting from Base.
#####

class GoodChild(Base):
    # Was declared strict, better match the signature.
    def foo(self, a, b, c="blah"):
        return c

    # Was never declared as strict, so no rules!
    def bar(im_a_little, teapot):
        return teapot / 2

# These below can't even be created. Uncomment and try to run the file
# and see. It's not just that you can't instantiate them, you can't
# even get the *class object* defined at class creation time.
#
# class WrongChild(Base):
#     def foo(self, a):
#         return super(self.__class__, self).foo(a, 5)
#
# class BadChild(Base):
#     def foo(self, a, b, c="halb"):
#         return super(self.__class__, self).foo(a, b, c)
# Instance level
gc = GoodChild()
gc.foo = lambda self=gc: "Haha, I changed the signature!"

# Class level
GoodChild.foo = lambda self: "Haha, I changed the signature!"
And even if you add more complexity to the metaclass, checking whenever any method-type attribute is updated in the class's __dict__ and re-running the assertion when the class is modified, you can still use type.__setattr__ to bypass the customized behavior and set an attribute anyway.
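A small sketch of that bypass (Python 3 syntax; the class names here are made up): even a metaclass that rejects attribute assignment can be sidestepped by calling type.__setattr__ directly.

```python
class NoOverride(type):
    """Metaclass that blocks attribute assignment on its classes."""
    def __setattr__(cls, name, value):
        raise AttributeError("class {} is locked".format(cls.__name__))

class Locked(metaclass=NoOverride):
    def foo(self):
        return "original"

try:
    Locked.foo = lambda self: "patched"  # goes through NoOverride.__setattr__
except AttributeError:
    print("assignment blocked")

# Call type's own implementation directly, bypassing the metaclass hook:
type.__setattr__(Locked, "foo", lambda self: "patched anyway")
print(Locked().foo())  # patched anyway
```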
In these cases, I imagine Jeff Goldblum as Ian Malcolm from Jurassic Park, looking at you blankly and saying "Consenting adults, uhh, find a way.."
I've got a base class where I want to handle __add__(), and I want to support adding two subclass instances - that is, the resulting instance should have the methods of both subclasses.
import copy

class Base(dict):
    def __init__(self, **data):
        self.update(data)

    def __add__(self, other):
        result = copy.deepcopy(self)
        result.update(other)
        # how do I now join the methods?
        return result

class A(Base):
    def a(self):
        print "test a"

class B(Base):
    def b(self):
        print "test b"

if __name__ == '__main__':
    a = A(a=1, b=2)
    b = B(c=1)
    c = a + b
    c.b()  # should work
    c.a()  # should work
Edit: To be more specific: I've got a class Hosts that holds a dict(host01=.., host02=..) (hence the subclassing of dict) - this offers some base methods such as run_ssh_command_on_all_hosts().
Now I've got a subclass HostsLoadbalancer that holds some special methods such as drain(), and I've got a class HostsNagios that holds some nagios-specific methods.
What I'm doing then, is something like:
nagios_hosts = nagios.gethosts()
lb_hosts = loadbalancer.gethosts()
hosts = nagios_hosts + lb_hosts
hosts.run_ssh_command_on_all_hosts('uname')
hosts.drain() # method of HostsLoadbalancer - drains just the loadbalancer-hosts
hosts.acknowledge_downtime() # method of HostsNagios - does this just for the nagios hosts, is overlapping
What is the best solution for this problem?
I think I can somehow "copy all methods" - like this:
for x in dir(other):
    setattr(self, x, getattr(other, x))
Am I on the right track? Or should I use Abstract Base Classes?
In general this is a bad idea. You're trying to inject methods into a type. That being said, you can certainly do this in python, but you'll have to realize that you want to create a new type each time you do this. Here's an example:
import copy

class Base(dict):
    global_class_cache = {}

    def __init__(self, **data):
        self.local_data = data

    def __add__(self, other):
        new_instance = self._new_type((type(self), type(other)))()
        new_instance.update(copy.deepcopy(self).__dict__)
        new_instance.update(copy.deepcopy(other).__dict__)
        return new_instance

    def _new_type(self, parents):
        parents = tuple(parents)
        if parents not in Base.global_class_cache:
            name = '_'.join(cls.__name__ for cls in parents)
            Base.global_class_cache[parents] = type(name, parents, {})
        return Base.global_class_cache[parents]

class A(Base):
    def a(self):
        print "test a"

class B(Base):
    def b(self):
        print "test b"

if __name__ == '__main__':
    a = A(a=1, b=2)
    b = B(c=1)
    c = a + b
    c.b()  # should work
    c.a()  # should work
    print c.__class__.__name__
UPDATE
I've updated the example to remove manually moving the methods -- we're using mixins here.
It is difficult to answer your question without more information. If Base is supposed to be a common interface to all classes, then you could use simple inheritance to implement the common behavior while preserving the methods of the subclasses. For instance, imagine that you need a Base class where all the objects have a say_hola() method, but subclasses can have arbitrary additional methods in addition to say_hola():
class Base(object):
    def say_hola(self):
        print "hola"

class C1(Base):
    def add(self, a, b):
        return a + b

class C2(Base):
    def say_bonjour(self):
        return 'bon jour'
This way all instances of C1 and C2 have say_hola() in addition to their specific methods.
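A quick usage check of that idea (classes redefined here in Python 3 syntax, with say_hola returning rather than printing so the results are easy to inspect):

```python
class Base:
    def say_hola(self):
        return "hola"

class C1(Base):
    def add(self, a, b):
        return a + b

class C2(Base):
    def say_bonjour(self):
        return "bon jour"

c1, c2 = C1(), C2()
print(c1.say_hola(), c1.add(2, 3))      # hola 5
print(c2.say_hola(), c2.say_bonjour())  # hola bon jour
```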
A more general pattern is to create a Mixin. From Wikipedia:
In object-oriented programming languages, a mixin is a class that provides a certain functionality to be inherited by a subclass, while not meant for instantiation (the generation of objects of that class). Inheriting from a mixin is not a form of specialization but is rather a means of collecting functionality. A class may inherit most or all of its functionality from one or more mixins through multiple inheritance.
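To make the definition concrete, here is a small mixin sketch (the class names are invented for illustration): each mixin contributes behavior, and a concrete class collects them through multiple inheritance.

```python
class GreetingMixin:
    """Provides a greeting; not meant to be instantiated on its own."""
    def say_hola(self):
        return "hola"

class DescribeMixin:
    """Provides a simple self-description."""
    def describe(self):
        return self.__class__.__name__

class Service(GreetingMixin, DescribeMixin):
    """Collects its functionality from the two mixins."""

s = Service()
print(s.say_hola())  # hola
print(s.describe())  # Service
```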