I'm simulating a distributed system in which all nodes follow some protocol. This includes assessing some small variations in the protocol. A variation means alternative implementation of a single method. All nodes always follow the same variation, which is determined by experiment configuration (only one configuration is active at any given time). What is the clearest way to do it, without sacrificing performance?
As an experiment can be quite extensive, I clearly don't want any per-call conditionals. Previously I've just used inheritance, like:
class Node(object):
    def dumb_method(self, argument):
        # ...
    def slow_method(self, argument):
        # ...
    # A lot more methods

class SmarterNode(Node):
    def dumb_method(self, argument):
        # A somewhat smarter variant ...

class FasterNode(SmarterNode):
    def slow_method(self, argument):
        # A faster variant ...
But now I need to test all possible variants and don't want an exponential number of classes cluttering the source. I also want other people peeping at the code to understand it with minimal effort. What are your suggestions?
Edit: One thing I failed to emphasize enough: for all envisioned use cases, it seems that patching the class upon configuration is good enough. I mean, it can be made to work by a simple Node.dumb_method = smart_method. But somehow it didn't feel right. Would this kind of solution cause a major headache to a random smart reader?
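Concretely, the patching I have in mind would look something like this (a minimal sketch; smart_method and the config dict are just illustrative names):

def smart_method(self, argument):
    # alternative implementation of dumb_method
    ...

def configure(variant_methods):
    """Patch Node in place for the active experiment configuration."""
    for name, func in variant_methods.items():
        setattr(Node, name, func)

configure({'dumb_method': smart_method})  # every Node now uses the variant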
You can create new subtypes with the type function. You just have to give it the subclass's namespace as a dict.
# these are supposed to overwrite methods
def foo(self):
    return "foo"

def bar(self):
    return "bar"

def variants(base, methods):
    """
    Given a base class and an iterable of dicts like {'foo': <function foo>},
    yields types T(base) where the named methods are overridden.
    """
    for d in methods:
        yield type('NodeVariant', (base,), d)
from itertools import combinations

def subdicts(**fulldict):
    """Yields all dicts that are subsets of `fulldict`."""
    items = list(fulldict.items())
    for i in range(len(items) + 1):
        for subset in combinations(items, i):
            yield dict(subset)
# a list of method variants
combos = subdicts(slow_method=foo, dumb_method=bar)

# base class
class Node(object):
    def dumb_method(self):
        return "dumb"
    def slow_method(self):
        return "slow"

# use the base and our variants to make a number of types
types = variants(Node, combos)

# instantiate each type and call both methods on it for demonstration
print([(var.dumb_method(), var.slow_method()) for var
       in (cls() for cls in types)])
# [('dumb', 'slow'), ('dumb', 'foo'), ('bar', 'slow'), ('bar', 'foo')]
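One small refinement worth considering (my addition, not part of the original answer): give each generated class a distinguishing name, so that reprs and tracebacks identify the active variant:

def variants(base, methods):
    for d in methods:
        # e.g. 'NodeVariant_dumb_method_slow_method' for the full override set
        name = 'NodeVariant_' + '_'.join(sorted(d)) if d else base.__name__
        yield type(name, (base,), d)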
You could use the __slots__ mechanism and a factory class. You would need to instantiate a NodeFactory for each experiment, but it would handle creating Node instances for you from there on. Example:
class Node(object):
    __slots__ = ["slow", "dumb"]

class NodeFactory(object):
    def __init__(self, slow_method, dumb_method):
        self.slow = slow_method
        self.dumb = dumb_method
    def makenode(self):
        n = Node()
        n.dumb = self.dumb
        n.slow = self.slow
        return n
An example run:
>>> def foo():
...     print("foo")
...
>>> def bar():
...     print("bar")
...
>>> nf = NodeFactory(foo, bar)
>>> n = nf.makenode()
>>> n.dumb()
bar
>>> n.slow()
foo
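One caveat worth adding (my note, not part of the original answer): functions assigned to instance attributes are not descriptors, so they are never bound, which is why foo and bar above take no self. If a variant needs access to the node, makenode can bind it explicitly, for example:

import functools

def slow_method(node, argument):
    # a variant that actually needs the node it runs on
    print("slow on", node, "with", argument)

# inside NodeFactory.makenode, bind explicitly instead of plain assignment:
#     n.slow = functools.partial(self.slow, n)
n = Node()
n.slow = functools.partial(slow_method, n)
n.slow(42)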
I'm not sure if you're trying to do something akin to this (allows swap-out runtime "inheritance"):
class Node(object):
    __methnames = ('method',)

    def __init__(self, variant):
        # bind self.method to the chosen variant, e.g. self.dumb_method
        for name in self.__methnames:
            setattr(self, name, getattr(self, variant + "_" + name))

    def dumb_method(self, argument):
        # ...

    def slow_method(self, argument):
        # ...

n = Node('dumb')
n.method()  # calls dumb_method
n = Node('slow')
n.method()  # calls slow_method
Or if you're looking for something like this (allows running (and therefore testing) of all methods of the class):
class Node(object):
    # do something
    pass

class NodeTest(Node):
    def run_tests(self, ending=''):
        for i in dir(self):
            if i.endswith(ending):
                meth = getattr(self, i)
                if callable(meth):
                    meth()  # needs some default args
                    # or yield meth if you can
You can use a metaclass for this. It will let you create a class on the fly, with method names chosen according to each variation.
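For example, a minimal sketch of that idea (VariantMeta and the overrides keyword are hypothetical names, just to show the shape of it):

class VariantMeta(type):
    def __new__(mcls, name, bases, ns, overrides=None, **kwargs):
        # splice the variant methods into the class namespace before creation
        ns.update(overrides or {})
        return super().__new__(mcls, name, bases, ns, **kwargs)

    def __init__(cls, name, bases, ns, overrides=None, **kwargs):
        super().__init__(name, bases, ns, **kwargs)

def smart_method(self):
    return "smart"

class Node(metaclass=VariantMeta, overrides={'dumb_method': smart_method}):
    def dumb_method(self):
        return "dumb"

print(Node().dumb_method())  # smart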
Should the method to be called be decided when the class is instantiated or after? Assuming it is when the class is instantiated, how about the following:
class Node(object):
    def Fast(self):
        print("Fast")
    def Slow(self):
        print("Slow")

class NodeFactory(object):
    def __init__(self, method):
        self.method = method
    def SetMethod(self, method):
        self.method = method
    def New(self):
        n = Node()
        n.Run = getattr(n, self.method)
        return n

nf = NodeFactory("Fast")
nf.New().Run()
# Prints "Fast"
nf.SetMethod("Slow")
nf.New().Run()
# Prints "Slow"
Imagine you have the following code:
class A:
pass
NewA = ... # copy A
NewA.__init__ = decorator(A.__init__) # but don't change A's init function, just NewA's
I am looking for a way to change some of the attributes/methods in the cloned class while the rest stay similar to the base class object (preferably even through MappingProxyType, so that when A changes, the unchanged logic of NewA reflects those changes as well).
I came across this ancient thread, where there are some suggestions which don't fully work:
Repurposing inheritance (class NewA(A): pass), which doesn't exactly result in what I am looking for
Dynamically generating a new class using type, and somehow keeping an eye out for the tons of cases that might happen (mutable attributes/descriptors/calls to globals ...)
Using copy.deepcopy, which is totally wrong (since the class object's internal data representation is a MappingProxyType, which we cannot copy/deepcopy)
Is there a way to still achieve this without manually handling every corner case, especially considering the fact that the base class we intend to copy could be anything (with metaclasses and custom __init_subclass__ parents, a mixture of mutable attributes and what not, and potentially with __slots__)?
Here is a humble attempt to get you started. I've tested it out with a class with slots and it seems to work. I am not very sure about that aspect of it though.
import types
import copy

def clone_class(klass):
    def _exec_body(ns):
        # don't add in slots descriptors, it will fail!
        ns_no_slots = {
            k: v for k, v in vars(klass).items()
            if not isinstance(v, types.MemberDescriptorType)
        }
        ns |= copy.deepcopy(ns_no_slots)
        return ns

    return types.new_class(
        name=klass.__name__,
        bases=klass.__bases__,
        kwds={"metaclass": type(klass)},
        exec_body=_exec_body,
    )
Now, this seems to work with classes that have __slots__. The one thing that might trip things up is if the metaclass has slots (which must be empty). But that would be really weird.
Here is a test script (using clone_class as defined above):
class Meta(type):
    def meta_magic(cls):
        print("magical!")

class Foo(metaclass=Meta):
    __slots__ = ('x', 'y')

    @property
    def size(self):
        return 42

class Bar(Foo):
    state = []
    __slots__ = ('z',)

    def __init__(self, x=1, y=2, z=3):
        self.x = x
        self.y = y
        self.z = z

    @property
    def closed(self):
        return False

BarClone = clone_class(Bar)
bar = BarClone()
BarClone.state.append('foo')
print(BarClone.state, Bar.state)
BarClone.meta_magic()
This prints:
['foo'] []
magical!
I have a class factory method that is used to instantiate an object. When multiple objects are created through this method, I want to be able to compare the classes of the objects. When using isinstance, the comparison is False, as can be seen in the simple example below. Also, running id(a.__class__) and id(b.__class__) gives different ids.
Is there a simple way of achieving this? I know that this does not exactly conform to duck-typing, however this is the easiest solution for the program I am writing.
def factory():
    class MyClass(object):
        def compare(self, other):
            print('Comparison Result: {}'.format(isinstance(other, self.__class__)))
    return MyClass()

a = factory()
b = factory()
a.compare(b)
The reason is that MyClass is created dynamically every time you run factory. If you print(id(MyClass)) inside factory you get different results:
>>> a = factory()
140465711359728
>>> b = factory()
140465712488632
This is because they are actually different classes, dynamically created and locally scoped at the time of the call.
One way to fix this is to return (or yield) multiple instances:
>>> def factory(n):
...     class MyClass(object):
...         def compare(self, other):
...             print('Comparison Result: {}'.format(isinstance(other, self.__class__)))
...     for i in range(n):
...         yield MyClass()
>>> a, b = factory(2)
>>> a.compare(b)
Comparison Result: True
EDIT: If the instances are created dynamically, then the above solution is invalid. One way to do it is to create a superclass outside, then inside the factory function subclass from that superclass:
>>> class MyClass(object):
...     pass
>>> def factory():
...     class SubClass(MyClass):
...         def compare(self, other):
...             print('Comparison Result: {}'.format(isinstance(other, self.__class__)))
...     return SubClass()
However, this does not work because they are still different classes. So you need to change your comparison method to check against the first superclass:
isinstance(other, self.__class__.__mro__[1])
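Put together (the same factory as above, with only the isinstance line changed):

>>> def factory():
...     class SubClass(MyClass):
...         def compare(self, other):
...             print('Comparison Result: {}'.format(
...                 isinstance(other, self.__class__.__mro__[1])))
...     return SubClass()
>>> a, b = factory(), factory()
>>> a.compare(b)
Comparison Result: True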
If your class definition is inside the factory function, then each instance of the class you create will be an instance of a separate class. That's because the class definition is a statement that's executed just like any other statement. The name and contents of the different classes will be the same, but their identities will be distinct.
I don't think there's any simple way to get around that without changing the structure of your code in some way. You've said that your actual factory function is a method of a class, which suggests that you might be able to move the class definition somewhere else so that it can be shared by multiple calls to the factory method. Depending on what information you expect the inner class to use from the outer class, you might define it at class level (so there'd be only one class definition used everywhere), or you could define it in another method, like __init__ (which would create a new inner class for every instance of the outer class).
Here's what that last approach might look like:
class Outer(object):
    def __init__(self):
        class Inner(object):
            def compare(self, other):
                print('Comparison Result: {}'.format(isinstance(other, self.__class__)))
        self.Inner = Inner

    def factory(self):
        return self.Inner()

f = Outer()
a = f.factory()
b = f.factory()
a.compare(b)  # True

g = Outer()  # create another instance of the outer class
c = g.factory()
a.compare(c)  # False
It's not entirely clear what you're asking. It seems to me you want a simpler version of the code you already posted. If that's incorrect, this answer is not relevant.
You can create classes dynamically by explicitly constructing a new instance of the type type.
def compare(self, other):
    ...

def factory():
    return type("MyClass", (object,), {'compare': compare})()
type takes three arguments: the name, the bases (parent classes), and a dict giving the class namespace. So this will behave the same way as your previous code.
Working off the answer from @rassar, and adding some more detail to represent the actual implementation (e.g. the factory method existing in a parent class), I have come up with a working example below.
From @rassar's answer, I realised that the class is dynamically created each time, so defining it within the parent object (or even above that) means that it will be the same class definition each time it is called.
class Parent(object):
    class MyClass(object):
        def __init__(self, parent):
            self.parent = parent

        def compare(self, other):
            print('Comparison Result: {}'.format(isinstance(other, self.__class__)))

    def factory(self):
        return self.MyClass(self)

a = Parent()
b = a.factory()
c = a.factory()
b.compare(c)
print(id(b.__class__))
print(id(c.__class__))
I have a parent class and two child classes. The parent class is an abstract base class that has a method combine that gets inherited by the child classes. But each child implements combine differently from a parameter perspective, so each child's method takes a different number of parameters. In Python, when a child inherits a method and re-implements it, must the new implementation match the original parameter for parameter? Is there a way around this, i.e. can the inherited method have a dynamic parameter composition?
This code demonstrates that the signature of an overridden method can easily change:
class Parent(object):
    def foo(self, number):
        for _ in range(number):
            print("Hello from parent")

class Child(Parent):
    def foo(self, number, greeting):
        for _ in range(number):
            print(greeting)

class GrandChild(Child):
    def foo(self):
        super(GrandChild, self).foo(1, "hey")

p = Parent()
p.foo(3)
c = Child()
c.foo(2, "Hi")
g = GrandChild()
g.foo()
As the other answer demonstrates for plain classes, the signature of an overridden inherited method can be different in the child than in the parent.
The same is true even if the parent is an abstract base class:
import abc

class Foo(abc.ABC):
    @abc.abstractmethod
    def bar(self, x, y):
        return x + y

class ChildFoo(Foo):
    def bar(self, x):
        return super(ChildFoo, self).bar(x, 3)

class DumbFoo(Foo):
    def bar(self):
        return "derp derp derp"

cf = ChildFoo()
print(cf.bar(5))

df = DumbFoo()
print(df.bar())
Inappropriately complicated detour
It is an interesting exercise in Python metaclasses to try to restrict the ability to override methods, such that their argument signature must match that of the base class. Here is an attempt.
Note: I'm not endorsing this as a good engineering idea, and I did not spend time tying up loose ends, so there are likely small caveats about the code below that could make it more efficient or more robust.
import types
import inspect

def strict(func):
    """Mark a function as having a strict signature."""
    arg_sig = inspect.getfullargspec(func)
    func.is_strict = True
    func.arg_signature = arg_sig
    return func

class StrictSignature(type):
    def __new__(cls, name, bases, attrs):
        # At class-creation time the namespace holds plain functions.
        func_types = (types.FunctionType,)
        # Check each attribute in the class being created.
        for attr_name, attr_value in attrs.items():
            if isinstance(attr_value, func_types):
                # Check every base for @strict functions.
                for base in bases:
                    base_attr = base.__dict__.get(attr_name)
                    base_attr_is_function = isinstance(base_attr, func_types)
                    base_attr_is_strict = hasattr(base_attr, "is_strict")
                    # Assert that inspected signatures match.
                    if base_attr_is_function and base_attr_is_strict:
                        assert (inspect.getfullargspec(attr_value) ==
                                base_attr.arg_signature)
        # If everything passed, create the class.
        return super(StrictSignature, cls).__new__(cls, name, bases, attrs)
# Make a base class to try out strictness
class Base(metaclass=StrictSignature):
    @strict
    def foo(self, a, b, c="blah"):
        return a + b + len(c)

    def bar(self, x, y, z):
        return x

#####
# Now try to make some classes inheriting from Base.
#####

class GoodChild(Base):
    # Was declared strict, better match the signature.
    def foo(self, a, b, c="blah"):
        return c

    # Was never declared as strict, so no rules!
    def bar(im_a_little, teapot):
        return teapot / 2
# These below can't even be created. Uncomment and try to run the file
# and see. It's not just that you can't instantiate them, you can't
# even get the *class object* defined at class creation time.
#
#class WrongChild(Base):
#    def foo(self, a):
#        return super(self.__class__, self).foo(a, 5)
#
#class BadChild(Base):
#    def foo(self, a, b, c="halb"):
#        return super(self.__class__, self).foo(a, b, c)
Note, like with most "strict" or "private" type ideas in Python, that you are still free to monkey-patch functions onto even a "good class" and those monkey-patched functions don't have to satisfy the signature constraint.
# Instance level
gc = GoodChild()
gc.foo = lambda self=gc: "Haha, I changed the signature!"
# Class level
GoodChild.foo = lambda self: "Haha, I changed the signature!"
Even if you add more complexity to the metaclass, checking whenever any method-type attribute is updated in the class's __dict__ and re-running the assertion when the class is modified, you can still use type.__setattr__ to bypass the customized behavior and set an attribute anyway.
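For instance (assuming the hypothetical checking metaclass defines such a __setattr__):

# type.__setattr__ invokes type's own implementation directly, skipping
# whatever __setattr__ the metaclass defined, so no signature check runs:
type.__setattr__(GoodChild, 'foo', lambda self: "bypassed!")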
In these cases, I imagine Jeff Goldblum as Ian Malcolm from Jurassic Park, looking at you blankly and saying "Consenting adults, uhh, find a way.."
Whenever I define a class whose instances create objects of other classes, I like defining the types of those other objects as class members:
from collections import OrderedDict

class Foo(object):
    DICT_TYPE = dict  # just a trivial example

    def __init__(self):
        self.mydict = self.DICT_TYPE()

class Bar(Foo):
    DICT_TYPE = OrderedDict  # no need to override __init__ now
The idea is to allow potential subclasses to easily override it.
I've just found a problem with this habit, when the "type" I use is not really a type but a factory function. For example, RLock is confusingly not a class:
def RLock(*args, **kwargs):
    return _RLock(*args, **kwargs)
Thus using it the same way is no good:
class Foo(object):
    LOCK_TYPE = threading.RLock  # alas, RLock() is a function...

    def __init__(self):
        self.lock = self.LOCK_TYPE()
The problem here is that since RLock is a function, self.LOCK_TYPE gets bound to self, resulting in a bound method and consequently leading to an error.
Here's a quick demonstration of how things go wrong when a function is used instead of a class (for a case simpler than RLock above):
def dict_factory():
    return {}

class Foo(object):
    DICT_TYPE1 = dict          # a real type: no binding happens
    DICT_TYPE2 = dict_factory  # a plain function: gets bound to the instance

f = Foo()
f.DICT_TYPE1()
=> {}
f.DICT_TYPE2()
=> TypeError: dict_factory() takes 0 positional arguments but 1 was given
Does anybody have a good solution for this problem? Is my habit of defining those class members fundamentally wrong?
I guess I could replace it with a factory method. Would that be a better approach?
class Foo(object):
    def __init__(self):
        self.lock = self._make_lock()

    def _make_lock(self):
        return threading.RLock()
You could use the staticmethod decorator to ensure your factory function does not get the instance passed in:
>>> class Foo(object):
...     DICT_TYPE = staticmethod(dict_factory)
...
>>> f = Foo()
>>> f.DICT_TYPE()
{}
The problem can be bypassed by using a classproperty (e.g. as defined in this answer):
class Foo(object):
    @classproperty
    def DICT_TYPE(cls):
        return dict_factory
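For completeness, here is a minimal sketch of such a classproperty (along the lines of the linked answer; a read-only, non-data descriptor):

class classproperty(object):
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, obj, owner=None):
        # receives the class whether accessed on the class or an instance
        return self.fget(owner)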
I've read What are Class methods in Python for? but the examples in that post are complex. I am looking for a clear, simple, bare-bones example of a particular use case for classmethods in Python.
Can you name a small, specific example use case where a Python classmethod would be the right tool for the job?
Helper methods for initialization:
class MyStream(object):
    @classmethod
    def from_file(cls, filepath, ignore_comments=False):
        with open(filepath, 'r') as fileobj:
            for obj in cls(fileobj, ignore_comments):
                yield obj

    @classmethod
    def from_socket(cls, socket, ignore_comments=False):
        raise NotImplementedError  # placeholder until implemented

    def __init__(self, iterable, ignore_comments=False):
        ...
Well, __new__ is a pretty important construction hook: it's where instances usually come from (strictly speaking it's an implicit static method that receives the class as its first argument, rather than a true classmethod).
So dict() calls dict.__new__, of course, but there is another handy way to make dicts sometimes, which is the classmethod dict.fromkeys(), e.g.:
>>> dict.fromkeys("12345")
{'1': None, '3': None, '2': None, '5': None, '4': None}
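Because fromkeys is a classmethod, it also constructs whichever subclass it is called on, e.g.:

>>> from collections import OrderedDict
>>> OrderedDict.fromkeys("ab")
OrderedDict([('a', None), ('b', None)])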
I don't know, something like named constructor methods?
class UniqueIdentifier(object):
    value = 0

    def __init__(self, name):
        self.name = name

    @classmethod
    def produce(cls):
        instance = cls(cls.value)
        cls.value += 1
        return instance

class FunkyUniqueIdentifier(UniqueIdentifier):
    @classmethod
    def produce(cls):
        instance = super(FunkyUniqueIdentifier, cls).produce()
        instance.name = "Funky %s" % instance.name
        return instance
Usage:
>>> x = UniqueIdentifier.produce()
>>> y = FunkyUniqueIdentifier.produce()
>>> x.name
0
>>> y.name
'Funky 1'
The biggest reason for using a @classmethod is for an alternate constructor that is intended to be inherited. This can be very useful in polymorphism. An example:
class Shape(object):
    # This is an abstract class that is primarily used for inheritance defaults.
    # Here is where you would define classmethods that can be overridden by
    # inherited classes.
    @classmethod
    def from_square(cls, square):
        # return a default instance of cls
        return cls()
Notice that Shape is an abstract class that defines a classmethod from_square; since Shape is not really defined, it does not really know how to derive itself from a Square, so it simply returns a default instance of the class.
Inherited classes are then allowed to define their own versions of this method:
class Square(Shape):
    def __init__(self, side=10):
        self.side = side

    @classmethod
    def from_square(cls, square):
        return cls(side=square.side)

class Rectangle(Shape):
    def __init__(self, length=10, width=10):
        self.length = length
        self.width = width

    @classmethod
    def from_square(cls, square):
        return cls(length=square.side, width=square.side)

class RightTriangle(Shape):
    def __init__(self, a=10, b=10):
        self.a = a
        self.b = b
        self.c = ((a*a) + (b*b))**(.5)

    @classmethod
    def from_square(cls, square):
        return cls(a=square.side, b=square.side)

class Circle(Shape):
    def __init__(self, radius=10):
        self.radius = radius

    @classmethod
    def from_square(cls, square):
        return cls(radius=square.side / 2)
This usage lets you treat all of these classes polymorphically, without instantiating anything up front:
square = Square(3)
for polymorphic_class in (Square, Rectangle, RightTriangle, Circle):
    this_shape = polymorphic_class.from_square(square)
This is all fine and dandy, you might say, but why couldn't I just use a @staticmethod to accomplish the same polymorphic behavior:
class Circle(Shape):
    def __init__(self, radius=10):
        self.radius = radius

    @staticmethod
    def from_square(square):
        return Circle(radius=square.side / 2)
The answer is that you could, but you do not get the benefits of inheritance because Circle has to be called out explicitly in the method. Meaning if I call it from an inherited class without overriding, I would still get Circle every time.
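To see the difference concretely (a hypothetical subclass added for illustration):

# With the @staticmethod version above, a subclass that does not override
# from_square still produces a Circle:
class Wheel(Circle):
    pass

square = Square(3)
print(type(Wheel.from_square(square)).__name__)  # Circle, not Wheel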
Notice what is gained when I define another shape class that does not really have any custom from_square logic:
class Hexagon(Shape):
    def __init__(self, side=10):
        self.side = side

    # Note the absence of a from_square classmethod here; this class will
    # use the one it inherits from Shape.
Here you can leave from_square undefined and it will use the logic from Shape.from_square while retaining who cls is, returning the appropriate shape.
square = Square(3)
for polymorphic_class in (Square, Rectangle, RightTriangle, Circle, Hexagon):
    this_shape = polymorphic_class.from_square(square)
I find that I most often use @classmethod to associate a piece of code with a class, to avoid creating a global function, for cases where I don't require an instance of the class to use the code.
For example, I might have a data structure which only considers a key valid if it conforms to some pattern. I may want to use this from inside and outside of the class. However, I don't want to create yet another global function:
def foo_key_is_valid(key):
    # code for determining validity here
    return valid
I'd much rather group this code with the class it's associated with:
class Foo(object):
    @classmethod
    def is_valid(cls, key):
        # code for determining validity here
        return valid

    def add_key(self, key, val):
        if not Foo.is_valid(key):
            raise ValueError()
        ...

# lets me reuse that method without an instance, and signals that
# the code is closely associated with the Foo class
Foo.is_valid('my key')
Another useful example of classmethod is in extending enumerated types. A classic Enum provides symbolic names which can be used later in the code for readability, grouping, type-safety, etc. This can be extended to add useful features using a classmethod. In the example below, Weekday is an enumerated type for the days of the week. It has been extended with a classmethod so that, instead of keeping track of the weekday ourselves, the enumerated type can extract the day from a date and return the related enum member.
from enum import Enum
from datetime import date

class Weekday(Enum):
    MONDAY = 1
    TUESDAY = 2
    WEDNESDAY = 3
    THURSDAY = 4
    FRIDAY = 5
    SATURDAY = 6
    SUNDAY = 7

    @classmethod
    def from_date(cls, date):
        return cls(date.isoweekday())

>>> Weekday.from_date(date.today())
<Weekday.TUESDAY: 2>
Source: https://docs.python.org/3/howto/enum.html
An example:

class MyClass(object):
    '''
    classdocs
    '''
    obj = 0
    x = classmethod  # note: this merely stores the classmethod builtin itself

    def __init__(self):
        '''
        Constructor
        '''
        self.nom = 'lamaizi'
        self.prenom = 'anas'
        self.age = 21
        self.ville = 'Casablanca'

if __name__ == '__main__':
    ob = MyClass()
    print(ob.nom)
    print(ob.prenom)
    print(ob.age)
    print(ob.ville)