In a framework, I often want to provide a base class that the framework user subclasses. The base class provides controlled access to the subclass's implementation: its public method wraps the user's code in the framework's own pre- and post-processing. One way to accomplish this is to give the unimplemented method a different name, for example by adding an underscore as a prefix:
class Base:
    def method(self, arg):
        # ...
        result = self._method(arg)
        # ...
        return result

    def _method(self, arg):
        raise NotImplementedError
However, this scheme only works for one level of inheritance. With more levels, the different method names make it hard to keep an overview of what's going on. Moreover, the framework user has to override a different method depending on which base class they choose:
class Base:
    def method(self, arg):
        # ...
        result = self._method_sub(arg)
        # ...
        return result

    def _method_sub(self, arg):
        raise NotImplementedError

class Intermediate(Base):
    def _method_sub(self, arg):
        # ...
        result = self._method_sub_sub(arg)
        # ...
        return result

    def _method_sub_sub(self, arg):
        raise NotImplementedError
Calling super methods does not help when the base method needs to access the return value of the child method. I feel object orientation is slightly flawed here, missing a child keyword that would forward calls to the child class. What solutions exist for this problem?
Does this give you what you want?
import abc

class Base(object):
    __metaclass__ = abc.ABCMeta

    def calculate(self):
        result = self.doCalculate()
        if 3 < result < 7:  # do whatever validation you want
            return result
        else:
            raise ValueError()

    @abc.abstractmethod
    def doCalculate(self):
        pass

class Intermediate(Base):
    __metaclass__ = abc.ABCMeta

class Leaf(Intermediate):
    def doCalculate(self):
        return 5

leaf = Leaf()
print leaf.calculate()
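Note that __metaclass__ only has an effect in Python 2; Python 3 ignores it. A minimal sketch of the Python 3 equivalent of the class above, using abc.ABC:

import abc

class Base(abc.ABC):  # Python 3: abc.ABC sets ABCMeta as the metaclass
    def calculate(self):
        result = self.doCalculate()
        if 3 < result < 7:  # do whatever validation you want
            return result
        raise ValueError()

    @abc.abstractmethod
    def doCalculate(self):
        ...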
I think the question focuses on the different points at which behavior extension can happen in an intermediate class. The intermediate class obviously needs to refine the "control" part here.
1st Solution
Mostly this can be done the classical way, by just overriding the "safe" method - particularly when both Base and Intermediate are abstract classes provided by the framework, things can be organized accordingly.
The final "silly" implementation class which does the spade work overrides the unsafe method.
Think of this example:
class DoublePositive:
    def double(self, x):
        assert x > 0
        return self._double(x)

    def _double(self, x):
        raise NotImplementedError

class DoubleIntPositive(DoublePositive):
    def double(self, x):
        assert isinstance(x, int)
        return DoublePositive.double(self, x)

class DoubleImplementation(DoubleIntPositive):
    def _double(self, x):
        return 2 * x
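For example, only the final class is instantiated, and each level contributes its own check on the way down:

print DoubleImplementation().double(3)  # both asserts pass; prints 6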
2nd Solution
Calling virtual child-class methods - extending behavior at "inner" execution points in a non-classical manner - can be done via introspection in Python, by stepping down the class __bases__ or the method resolution order __mro__ with a helper function.
Example:
def child_method(cls, meth, _scls=None):
    scls = _scls or meth.__self__.__class__
    for base in scls.__bases__:
        if base is cls:
            cmeth = getattr(scls, meth.__name__, None)
            if cmeth is not None and cmeth.__func__ is getattr(cls, meth.__name__, None).__func__:
                return child_method(scls, meth)  # not overridden here: recurse to next child
            if cmeth:
                return cmeth.__get__(meth.__self__)
    for base in scls.__bases__:
        r = child_method(cls, meth, base)  # walk further down the bases
        if r is not None:
            return r
    if _scls is None:
        raise AttributeError("child method %r missing" % meth.__name__)
    return None

class Base(object):
    def double(self, x):
        assert x > 0
        return Base._double(self, x)

    def _double(self, x):
        return child_method(Base, self._double)(x)

class Inter(Base):
    def _double(self, x):
        assert isinstance(x, float)
        return child_method(Inter, self._double)(x)

class Impl(Inter):
    def _double(self, x):
        return 2.0 * x
The helper function child_method() here is thus a kind of opposite of Python's super().
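For example, Impl().double(2.5) runs Base's positivity check, then Inter's float check, and finally Impl's implementation:

print Impl().double(2.5)  # 5.0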
3rd Solution
If calls should be chainable flexibly, things can be organized as an explicit handler chain. Think of self.addHandler(self.__privmeth) in the __init__() chain - or even via a tricky metaclass. Study e.g. the urllib2 handler chains.
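A minimal sketch of such an explicit chain (the add_handler name and the doubling handler are made up for illustration, not taken from urllib2):

class Base(object):
    def __init__(self):
        self._handlers = []  # the explicit chain of callables

    def add_handler(self, handler):
        self._handlers.append(handler)

    def process(self, x):
        assert x > 0  # the "control" part stays in one place
        for handler in self._handlers:  # run the chain in registration order
            x = handler(x)
        return x

class Impl(Base):
    def __init__(self):
        Base.__init__(self)
        self.add_handler(lambda x: 2 * x)  # subclasses register their behavior

print Impl().process(3)  # 6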
I would like to create a class member that can be assigned a user-specified value by the constructor, but could not be changed afterwards. Is there a way to do this?
So far I have gotten the following code, which mostly works but is not "idiot proof".
def constantify(f):
    def fset(self, value):
        raise SyntaxError('Not allowed to change value')
    def fget(self):
        return f(self)
    return property(fget, fset)

class dummy(object):
    def __init__(self, constval):
        self.iamvar = None
        self._CONST = constval

    @constantify
    def SOMECONST(self):
        return self._CONST

dum = dummy(42)
print 'Original Val:', dum.SOMECONST
This prints "Original Val: 42".
dum.SOMECONST = 24
This gives the correct SyntaxError
But, enter an idiot,
dum._CONST = 0
print 'Current Val:', dum.SOMECONST
gives "Current Val: 0"
Is there a better idiot-proof way of achieving this?
Or is it the case that a class member that is initializable but remains constant afterwards is somehow not "pythonic"? (I'm still a newbie learning the pythonic way.)
In that case, what would be a pythonic way of creating a class for which each instance is "configurable" at instantiation time only?
Update:
I don't want to create a class for which all the members are immutable. I only want some members to be constant, and others variable at any time.
The simplest way I can think of is to override __setattr__ and raise an error whenever the particular attribute is set, like this:
class dummy(object):
    def __init__(self, arg):
        super(dummy, self).__setattr__("data", arg)

    def __setattr__(self, name, value):
        if name == "data":
            raise AttributeError("Can't modify data")
        else:
            super(dummy, self).__setattr__(name, value)

a = dummy(5)
print a.data
# 5
a.data = "1"
# AttributeError: Can't modify data
One nice thing about collections.namedtuple is that you can derive a subclass from the class it creates:
from collections import namedtuple
class Foo(namedtuple('Foo', ['a', 'b'])):
    def __new__(cls, a, b, *args, **kwargs):
        return super(Foo, cls).__new__(cls, a, b)

    def __init__(self, a, b, c):
        # a & b are immutable and handled by __new__
        self.c = c
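A quick usage sketch of the class above: the namedtuple fields reject assignment, while c remains an ordinary mutable attribute.

foo = Foo(1, 2, 3)
foo.c = 4
# fine: c is a plain, mutable instance attribute
foo.a = 9
# AttributeError: can't set attribute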
I am using the python mock framework for testing (http://www.voidspace.org.uk/python/mock/) and I want to mock out a superclass and focus on testing the subclasses' added behavior.
(For those interested I have extended pymongo.collection.Collection and I want to only test my added behavior. I do not want to have to run mongodb as another process for testing purposes.)
For this discussion, A is the superclass and B is the subclass. Furthermore, I define direct and indirect superclass calls as shown below:
class A(object):
    def method(self):
        ...
    def another_method(self):
        ...

class B(A):
    def direct_superclass_call(self):
        ...
        A.method(self)

    def indirect_superclass_call(self):
        ...
        super(B, self).another_method()
Approach #1
Define a mock class for A called MockA and use mock.patch to substitute it for the test at runtime. This handles direct superclass calls. Then manipulate B.__bases__ to handle indirect superclass calls. (see below)
The issue that arises is that I have to write MockA and in some cases (as in the case for pymongo.collection.Collection) this can involve a lot of work to unravel all of the internal calls to mock out.
Approach #2
The desired approach is to somehow use a mock.Mock() instance to handle calls on the mock just in time, defining return_value or side_effect in place in the test. This way I have to do less work by avoiding the definition of MockA.
The issue I am having is that I cannot figure out how to alter B.__bases__ so that an instance of mock.Mock() can be put in place as a superclass (I must need to do some direct binding here). Thus far I have determined that super() examines the MRO and then calls the first class that defines the method in question. I cannot figure out how to make that lookup succeed when it comes across a mock class; __getattr__ does not seem to be used in this case. I want super() to think that the method is defined at that point and then use the mock.Mock() functionality as usual.
How does super() discover what attributes are defined within the class in the MRO sequence? And is there a way for me to interject here and to somehow get it to utilize a mock.Mock() on the fly?
import mock

class A(object):
    def __init__(self, value):
        self.value = value
    def get_value_direct(self):
        return self.value
    def get_value_indirect(self):
        return self.value

class B(A):
    def __init__(self, value):
        A.__init__(self, value)
    def get_value_direct(self):
        return A.get_value_direct(self)
    def get_value_indirect(self):
        return super(B, self).get_value_indirect()

# approach 1 - use a defined MockA
class MockA(object):
    def __init__(self, value):
        pass
    def get_value_direct(self):
        return 0
    def get_value_indirect(self):
        return 0

B.__bases__ = (MockA, )  # mock superclass
with mock.patch('__main__.A', MockA):
    b2 = B(7)
    print '\nApproach 1'
    print 'expected result = 0'
    print 'direct =', b2.get_value_direct()
    print 'indirect =', b2.get_value_indirect()
B.__bases__ = (A, )  # restore original superclass

# approach 2 - use mock module to mock out superclass
# what does XXX need to be below to use mock.Mock()?
# B.__bases__ = (XXX, )
with mock.patch('__main__.A') as mymock:
    b3 = B(7)
    mymock.get_value_direct.return_value = 0
    mymock.get_value_indirect.return_value = 0
    print '\nApproach 2'
    print 'expected result = 0'
    print 'direct =', b3.get_value_direct()
    print 'indirect =', b3.get_value_indirect()  # FAILS HERE as the old superclass is called
# B.__bases__ = (A, )  # restore original superclass
is there a way for me to interject here and to somehow get it to utilize a mock.Mock() on the fly?
There may be better approaches, but you can always write your own super() and inject it into the module that contains the class you're mocking. Have it return whatever it should based on what's calling it.
You can either just define super() in the current namespace (in which case the redefinition only applies to the current module after the definition), or you can import __builtin__ and apply the redefinition to __builtin__.super, in which case it will apply globally in the Python session.
You can capture the original super function (if you need to call it from your implementation) using a default argument:
def super(type, obj=None, super=super):
    # inside the function, the name super refers to the built-in
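A minimal sketch of that idea, assuming B is the class under test and that returning a blanket mock.Mock() is acceptable for every super() call made on it (both assumptions are illustrative):

import mock

def super(type, obj=None, super=super):  # the default argument captures the real built-in
    if type is B:                        # hypothetical: B is the class under test
        return mock.Mock()               # any super(B, self).method() now hits a Mock
    return super(type, obj) if obj is not None else super(type)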
I played around with mocking out super() as suggested by kindall. Unfortunately, after a great deal of effort it became quite complicated to handle complex inheritance cases.
After some work I realized that super() accesses the __dict__ of classes directly when resolving attributes through the MRO (it does not do a getattr type of call). The solution is to extend a mock.MagicMock() object and wrap it with a class to accomplish this. The wrapped class can then be placed in the __bases__ variable of a subclass.
The wrapped object reflects all defined attributes of the target class to the __dict__ of the wrapping class so that super() calls resolve to the properly patched in attributes within the internal MagicMock().
The following code is the solution that I have found to work thus far. Note that I actually implement this within a context handler. Also, care has to be taken to patch in the proper namespaces if importing from other modules.
This is a simple example illustrating the approach:
from mock import MagicMock
import inspect

class _WrappedMagicMock(MagicMock):
    def __init__(self, *args, **kwds):
        object.__setattr__(self, '_mockclass_wrapper', None)
        super(_WrappedMagicMock, self).__init__(*args, **kwds)

    def wrap(self, cls):
        # get defined attributes of the spec class that need to be preset
        base_attrs = dir(type('Dummy', (object,), {}))
        attrs = inspect.getmembers(self._spec_class)
        new_attrs = [a[0] for a in attrs if a[0] not in base_attrs]
        # preset mocks for attributes in the target mock class
        for name in new_attrs:
            setattr(cls, name, getattr(self, name))
        # eat up any attempts to initialize the target mock class
        setattr(cls, '__init__', lambda *args, **kwds: None)
        object.__setattr__(self, '_mockclass_wrapper', cls)

    def unwrap(self):
        object.__setattr__(self, '_mockclass_wrapper', None)

    def __setattr__(self, name, value):
        super(_WrappedMagicMock, self).__setattr__(name, value)
        # be sure to reflect changes to the wrapper class if it is activated
        if self._mockclass_wrapper is not None:
            setattr(self._mockclass_wrapper, name, value)

    def _get_child_mock(self, **kwds):
        # child mocks need only be plain MagicMocks
        return MagicMock(**kwds)
class A(object):
    x = 1
    def __init__(self, value):
        self.value = value
    def get_value_direct(self):
        return self.value
    def get_value_indirect(self):
        return self.value

class B(A):
    def __init__(self, value):
        super(B, self).__init__(value)
    def f(self):
        return 2
    def get_value_direct(self):
        return A.get_value_direct(self)
    def get_value_indirect(self):
        return super(B, self).get_value_indirect()

# nominal behavior
b = B(3)
assert b.get_value_direct() == 3
assert b.get_value_indirect() == 3
assert b.f() == 2
assert b.x == 1

# using the mock class
MockClass = type('MockClassWrapper', (), {})
mock = _WrappedMagicMock(A)
mock.wrap(MockClass)

# patch the mock in
B.__bases__ = (MockClass, )
A = MockClass

# set values within the mock
mock.x = 0
mock.get_value_direct.return_value = 0
mock.get_value_indirect.return_value = 0

# mocked behavior
b = B(7)
assert b.get_value_direct() == 0
assert b.get_value_indirect() == 0
assert b.f() == 2
assert b.x == 0
How can I declare a few methods with the same name, but with different numbers of parameters or different types in one class?
What must I change in the following class?
class MyClass:
    """"""
    #----------------------------------------------------------------------
    def __init__(self):
        """Constructor"""

    def my_method(self, parameter_A_that_Must_Be_String):
        print parameter_A_that_Must_Be_String

    def my_method(self, parameter_A_that_Must_Be_String, parameter_B_that_Must_Be_String):
        print parameter_A_that_Must_Be_String
        print parameter_B_that_Must_Be_String

    def my_method(self, parameter_A_that_Must_Be_String, parameter_A_that_Must_Be_Int):
        print parameter_A_that_Must_Be_String * parameter_A_that_Must_Be_Int
You can have a function that takes in a variable number of arguments.
def my_method(*args, **kwds):
    # Do something
    pass

# When you call the method
my_method(a1, a2, k1=a3, k2=a4)
# You get:
#   args = (a1, a2)
#   kwds = {'k1': a3, 'k2': a4}
So you can modify your function as follows:
def my_method(*args):
    if len(args) == 1 and isinstance(args[0], str):
        pass  # Case 1
    elif len(args) == 2 and isinstance(args[1], int):
        pass  # Case 2
    elif len(args) == 2 and isinstance(args[1], str):
        pass  # Case 3
You can't. There are no overloads, multimethods, or similar things; one name refers to one thing, as far as the language is concerned. You can always emulate them yourself: you could check types with isinstance (but please do it properly - e.g. in Python 2, use basestring to detect both str and unicode), but it's ugly, generally discouraged, and rarely useful. If the methods do different things, give them different names. Consider polymorphism as well.
Using Python 3.5 or higher, you can use @typing.overload to provide type annotations for overloaded functions/methods. Note that the overloaded signatures exist only for the type checker; at runtime the single final implementation handles all calls.
From the docs:
@overload
def process(response: None) -> None:
    ...
@overload
def process(response: int) -> tuple[int, str]:
    ...
@overload
def process(response: bytes) -> str:
    ...
def process(response):
    <actual implementation>
Short answer: you can't (see this previous discussion). Typically you'd use something like (you could add more type checking and reorder):
def my_method(self, parameter_A, parameter_B=None):
    if isinstance(parameter_B, int):
        print parameter_A * parameter_B
    else:
        print parameter_A
        if parameter_B is not None:
            print parameter_B
You can try multimethods in Python:
http://www.artima.com/weblogs/viewpost.jsp?thread=101605
But I don't believe multimethods are the way to go. Rather, the objects that you pass to a method should share a common interface. You are trying to achieve something like method overloading in C++, but that is very rarely required in Python. One way to do it is a cascade of ifs using isinstance, but that's ugly.
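A sketch of the common-interface alternative (the class and method names here are made up): instead of switching on types, let every argument implement the same method and call it unconditionally.

class Text(object):
    def render(self):
        return "some text"

class Number(object):
    def render(self):
        return "42"

def my_method(item):
    # no isinstance cascade: anything with a render() method works
    return item.render()

print my_method(Text())    # some text
print my_method(Number())  # 42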
Python is nothing like Java.
There are no declared types, just objects with methods.
There is a way to test whether a passed object is an instance of a class, but it is mainly bad practice.
However, the code you want for the first two methods would be something like:
class MyClass(object):
    def my_method(self, str1, str2=None):
        print str1
        if str2:
            print str2
For the third, well... Use a different name...
This cannot work. No matter how many arguments you have, the name m will be overridden by the second m method.
class C:
    def m(self):
        print('m first')
    def m(self, x):
        print(f'm second {x}')

ci = C()
# ci.m()  # will not work: TypeError: m() missing 1 required positional argument: 'x'
ci.m(1)   # works
The output will simply be:
m second 1
You probably want a pattern similar to the following:
Note that adding '_' to the beginning of a method name is the convention for marking a private method.
class MyClass:
    """"""
    #----------------------------------------------------------------------
    def __init__(self):
        """Constructor"""

    def my_method(self, parameter_A_that_Must_Be_String, param2=None):
        if type(param2) == str:
            return self._my_method_extra_string_version(parameter_A_that_Must_Be_String, param2)
        elif type(param2) == int:
            return self._my_method_extra_int_version(parameter_A_that_Must_Be_String, param2)
        else:
            pass  # use the default behavior in this function
        print parameter_A_that_Must_Be_String

    def _my_method_extra_string_version(self, parameter_A_that_Must_Be_String, parameter_B_that_Must_Be_String):
        print parameter_A_that_Must_Be_String
        print parameter_B_that_Must_Be_String

    def _my_method_extra_int_version(self, parameter_A_that_Must_Be_String, parameter_A_that_Must_Be_Int):
        print parameter_A_that_Must_Be_String * parameter_A_that_Must_Be_Int
class MyClass:
    def __init__(this, foo_str, bar_int):
        this.__foo = foo_str
        this.__bar = bar_int

    def foo(this, new=None):
        if new is not None:
            try:
                this.__foo = str(new)
            except ValueError:
                print("Illegal value. foo unchanged.")
        return this.__foo

    def bar(this, new=None):
        if new is not None:
            try:
                this.__bar = int(new)
            except ValueError:
                print("Illegal value. bar unchanged.")
        return this.__bar

obj = MyClass("test", 42)
print(obj.foo(), obj.bar())
print(obj.foo("tset"), obj.bar(24))
print(obj.foo(42), obj.bar("test"))
Output:
test 42
tset 24
Illegal value. bar unchanged.
42 24
I think one very simple case is missing from all the answers: what to do when the only difference between the variants of the method is the number of arguments. The answer is still to use a method with a variable number of arguments.
Say, you start with a method that requires use of two arguments
def method(int_a, str_b):
    print("Got arguments: '{0}' and '{1}'".format(int_a, str_b))
then you need to add a variant with just the second argument (say, because the integer has become redundant), and the solution is very simple:
def _method_2_param(int_a, str_b):
    print("Got arguments: '{0}' and '{1}'".format(int_a, str_b))

def _method_1_param(str_b):
    print("Got argument: '{0}'".format(str_b))

def method(*args, **kwargs):
    if len(args) + len(kwargs) == 2:
        return _method_2_param(*args, **kwargs)
    elif len(args) + len(kwargs) == 1:
        return _method_1_param(*args, **kwargs)
    else:
        raise TypeError("Method requires one or two arguments")
The nice thing about this solution is that it keeps working regardless of whether the calling code previously used keyword arguments or positional arguments, as the calls below show.
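For example, all of these call styles still dispatch correctly against the sketch above:

method(1, "x")              # Got arguments: '1' and 'x'
method(int_a=1, str_b="x")  # Got arguments: '1' and 'x'
method("x")                 # Got argument: 'x'
method(str_b="x")           # Got argument: 'x'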
As of Python 3.10 a more elegant solution would be to use Structural Pattern Matching.
def my_method(parameters):
    match parameters:
        case str():
            pass  # Case 1
        case (str(), str()):
            pass  # Case 2
        case (str(), int()):
            pass  # Case 3
        case _:
            print('no match')
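Note that this version receives everything as one parameters object, so the multi-argument variants are passed as tuples:

my_method("a")         # Case 1
my_method(("a", "b"))  # Case 2
my_method(("a", 2))    # Case 3
my_method(42)          # no match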
I can't find a definitive answer for this. As far as I know, you can't have multiple __init__ functions in a Python class. So how do I solve this problem?
Suppose I have a class called Cheese with the number_of_holes property. How can I have two ways of creating cheese objects...
One that takes a number of holes like this: parmesan = Cheese(num_holes = 15).
And one that takes no arguments and just randomizes the number_of_holes property: gouda = Cheese().
I can think of only one way to do this, but this seems clunky:
class Cheese():
    def __init__(self, num_holes=0):
        if num_holes == 0:
            pass  # randomize number_of_holes
        else:
            self.number_of_holes = num_holes
What do you say? Is there another way?
Actually None is much better for "magic" values:
class Cheese():
    def __init__(self, num_holes=None):
        if num_holes is None:
            ...
Now if you want complete freedom of adding more parameters:
class Cheese():
    def __init__(self, *args, **kwargs):
        # args -- tuple of anonymous arguments
        # kwargs -- dictionary of named arguments
        self.num_holes = kwargs.get('num_holes', random_holes())
To better explain the concept of *args and **kwargs (you can actually change these names):
def f(*args, **kwargs):
    print 'args: ', args, ' kwargs: ', kwargs

>>> f('a')
args:  ('a',)  kwargs:  {}
>>> f(ar='a')
args:  ()  kwargs:  {'ar': 'a'}
>>> f(1, 2, param=3)
args:  (1, 2)  kwargs:  {'param': 3}
http://docs.python.org/reference/expressions.html#calls
Using num_holes=None as the default is fine if you are going to have just __init__.
If you want multiple, independent "constructors", you can provide these as class methods. These are usually called factory methods. In this case you could have the default for num_holes be 0.
from random import randint

class Cheese(object):
    def __init__(self, num_holes=0):
        "defaults to a solid cheese"
        self.number_of_holes = num_holes

    @classmethod
    def random(cls):
        return cls(randint(0, 100))

    @classmethod
    def slightly_holey(cls):
        return cls(randint(0, 33))

    @classmethod
    def very_holey(cls):
        return cls(randint(66, 100))
Now create objects like this:
gouda = Cheese()
emmentaler = Cheese.random()
leerdammer = Cheese.slightly_holey()
One should definitely prefer the solutions already posted, but since no one mentioned this solution yet, I think it is worth mentioning for completeness.
The @classmethod approach can be modified to provide an alternative constructor which does not invoke the default constructor (__init__). Instead, an instance is created using __new__.
This could be used if the type of initialization cannot be selected based on the type of the constructor argument, and the constructors do not share code.
Example:
class MyClass(set):
    def __init__(self, filename):
        self._value = load_from_file(filename)

    @classmethod
    def from_somewhere(cls, somename):
        obj = cls.__new__(cls)          # does not call __init__
        super(MyClass, obj).__init__()  # don't forget to call any polymorphic base class initializers
        obj._value = load_from_somewhere(somename)
        return obj
All of these answers are excellent if you want to use optional parameters, but another Pythonic possibility is to use a classmethod to generate a factory-style pseudo-constructor:
def __init__(self, num_holes):
    pass  # do stuff with the number

@classmethod
def fromRandom(cls):
    return cls(randint(0, 100))  # e.g. some random number (randint from the random module)
Why do you think your solution is "clunky"? Personally I would prefer one constructor with default values over multiple overloaded constructors in situations like yours (Python does not support method overloading anyway):
def __init__(self, num_holes=None):
    if num_holes is None:
        pass  # construct a gouda
    else:
        pass  # custom cheese
    # common initialization
For really complex cases with lots of different constructors, it might be cleaner to use different factory functions instead:
@classmethod
def create_gouda(cls):
    c = Cheese()
    # ...
    return c

@classmethod
def create_cheddar(cls):
    pass  # ...
In your cheese example you might want to use a Gouda subclass of Cheese though...
Those are good ideas for your implementation, but if you are presenting a cheese-making interface to a user, they don't care how many holes the cheese has or what internals go into making it. The user of your code just wants "gouda" or "parmesan", right?
So why not do this:
# cheese_user.py
from cheeses import make_gouda, make_parmesan

gouda = make_gouda()
parmesan = make_parmesan()
And then you can use any of the methods above to actually implement the functions:
# cheeses.py
class Cheese(object):
    def __init__(self, *args, **kwargs):
        # args -- tuple of anonymous arguments
        # kwargs -- dictionary of named arguments
        self.num_holes = kwargs.get('num_holes', random_holes())

def make_gouda():
    return Cheese()

def make_parmesan():
    return Cheese(num_holes=15)
This is a good encapsulation technique, and I think it is more Pythonic. To me this way of doing things fits more in line with duck typing: you are simply asking for a gouda object and you don't really care what class it is.
Overview
For the specific cheese example, I agree with many of the other answers about using default values to signal random initialization or to use a static factory method. However, there may also be related scenarios that you had in mind where there is value in having alternative, concise ways of calling the constructor without hurting the quality of parameter names or type information.
Since Python 3.8, functools.singledispatchmethod can help accomplish this in many cases (and the more flexible multimethod library applies in even more scenarios). (This related post describes how one could accomplish the same in Python 3.4 without a library.) I haven't seen examples in the documentation for either of these that specifically show overloading __init__ as you ask about, but it appears that the same principles for overloading any member method apply (as shown below).
"Single dispatch" (available in the standard library) requires that there be at least one positional parameter and that the type of the first argument be sufficient to distinguish among the possible overloaded options. For the specific Cheese example, this doesn't hold, since you wanted random holes when no parameters were given, but multidispatch does support the very same syntax and can be used as long as each method version can be distinguished based on the number and type of all arguments together.
Example
Here is an example of how to use either method (some of the details are there to please mypy, which was my goal when I first put this together):
from functools import singledispatchmethod as overload

# or the following more flexible method after `pip install multimethod`
# from multimethod import multidispatch as overload

class MyClass:
    @overload  # type: ignore[misc]
    def __init__(self, a: int = 0, b: str = 'default'):
        self.a = a
        self.b = b

    @__init__.register
    def _from_str(self, b: str, a: int = 0):
        self.__init__(a, b)  # type: ignore[misc]

    def __repr__(self) -> str:
        return f"({self.a}, {self.b})"

print([
    MyClass(1, "test"),
    MyClass("test", 1),
    MyClass("test"),
    MyClass(1, b="test"),
    MyClass("test", a=1),
    MyClass("test"),
    MyClass(1),
    # MyClass(),                # the `multidispatch` version handles these 3, too
    # MyClass(a=1, b="test"),
    # MyClass(b="test", a=1),
])
Output:
[(1, test), (1, test), (0, test), (1, test), (1, test), (0, test), (1, default)]
Notes:
I wouldn't usually make the alias called overload, but it helped make the diff between using the two methods just a matter of which import you use.
The # type: ignore[misc] comments are not necessary to run, but I put them in to please mypy, which doesn't like decorating __init__ nor calling __init__ directly.
If you are new to the decorator syntax, realize that putting @overload before the definition of __init__ is just sugar for __init__ = overload(<the original definition of __init__>). In this case, overload is a class, so the resulting __init__ is an object that has a __call__ method, making it look like a function, but one that also has a .register method which is called later to add another overloaded version of __init__. This is a bit messy, but it pleases mypy because no method name is defined twice. If you don't care about mypy and are planning to use the external library anyway, multimethod also has simpler alternative ways of specifying overloaded versions.
Defining __repr__ is simply there to make the printed output meaningful (you don't need it in general).
Notice that multidispatch is able to handle three additional input combinations that don't have any positional parameters.
Use num_holes=None as a default, instead. Then check for whether num_holes is None, and if so, randomize. That's what I generally see, anyway.
More radically different construction methods may warrant a classmethod that returns an instance of cls.
The best answer is the one above about default arguments, but I had fun writing this, and it certainly does fit the bill for "multiple constructors". Use at your own risk.
What about the __new__ method?
"Typical implementations create a new instance of the class by invoking the superclass's __new__() method using super(currentclass, cls).__new__(cls[, ...]) with appropriate arguments and then modifying the newly-created instance as necessary before returning it."
So you can have the __new__ method modify your class definition by attaching the appropriate constructor method.
class Cheese(object):
    def __new__(cls, *args, **kwargs):
        obj = super(Cheese, cls).__new__(cls)
        num_holes = kwargs.get('num_holes', random_holes())
        if num_holes == 0:
            cls.__init__ = cls.foomethod
        else:
            cls.__init__ = cls.barmethod
        return obj

    def foomethod(self, *args, **kwargs):
        print "foomethod called as __init__ for Cheese"

    def barmethod(self, *args, **kwargs):
        print "barmethod called as __init__ for Cheese"

if __name__ == "__main__":
    parm = Cheese(num_holes=5)
I'd use inheritance - especially if there are going to be more differences than the number of holes, and especially if Gouda needs to have a different set of members than Parmesan.
class Gouda(Cheese):
    def __init__(self):
        super(Gouda, self).__init__(num_holes=10)

class Parmesan(Cheese):
    def __init__(self):
        super(Parmesan, self).__init__(num_holes=15)
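Usage then reads naturally (assuming the Cheese base class with a num_holes parameter, as in the factory-method answer above):

gouda = Gouda()
parmesan = Parmesan()
print gouda.number_of_holes  # 10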
Since my initial answer was criticised on the grounds that my special-purpose constructors did not call the (unique) default constructor, I post here a modified version that honours the wish that all constructors call the default one:
class Cheese:
    def __init__(self, *args, _initialiser="_default_init", **kwargs):
        """A multi-initialiser.
        """
        getattr(self, _initialiser)(*args, **kwargs)

    def _default_init(self, ...):
        """A user-friendly smart or general-purpose initialiser.
        """
        ...

    def _init_parmesan(self, ...):
        """A special initialiser for Parmesan cheese.
        """
        ...

    def _init_gouda(self, ...):
        """A special initialiser for Gouda cheese.
        """
        ...

    @classmethod
    def make_parmesan(cls, *args, **kwargs):
        return cls(*args, **kwargs, _initialiser="_init_parmesan")

    @classmethod
    def make_gouda(cls, *args, **kwargs):
        return cls(*args, **kwargs, _initialiser="_init_gouda")
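Usage then goes through the class methods, which merely select which initialiser the single __init__ dispatches to (the argument lists below are placeholders, since the initialiser bodies are elided above):

plain = Cheese()                   # routed to _default_init
parmesan = Cheese.make_parmesan()  # routed to _init_parmesan
gouda = Cheese.make_gouda()        # routed to _init_gouda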
This is how I solved it for a YearQuarter class I had to create. I created an __init__ which is very tolerant of a wide variety of input.
You use it like this:
>>> from datetime import date
>>> temp1 = YearQuarter(year=2017, month=12)
>>> print temp1
2017-Q4
>>> temp2 = YearQuarter(temp1)
>>> print temp2
2017-Q4
>>> temp3 = YearQuarter((2017, 6))
>>> print temp3
2017-Q2
>>> temp4 = YearQuarter(date(2017, 1, 18))
>>> print temp4
2017-Q1
>>> temp5 = YearQuarter(year=2017, quarter = 3)
>>> print temp5
2017-Q3
And this is what the __init__ and the rest of the class look like:
import datetime

class YearQuarter:
    def __init__(self, *args, **kwargs):
        if len(args) == 1:
            [x] = args
            if isinstance(x, datetime.date):
                self._year = int(x.year)
                self._quarter = (int(x.month) + 2) / 3
            elif isinstance(x, tuple):
                year, month = x
                self._year = int(year)
                month = int(month)
                if 1 <= month <= 12:
                    self._quarter = (month + 2) / 3
                else:
                    raise ValueError
            elif isinstance(x, YearQuarter):
                self._year = x._year
                self._quarter = x._quarter
        elif len(args) == 2:
            year, month = args
            self._year = int(year)
            month = int(month)
            if 1 <= month <= 12:
                self._quarter = (month + 2) / 3
            else:
                raise ValueError
        elif kwargs:
            self._year = int(kwargs["year"])
            if "quarter" in kwargs:
                quarter = int(kwargs["quarter"])
                if 1 <= quarter <= 4:
                    self._quarter = quarter
                else:
                    raise ValueError
            elif "month" in kwargs:
                month = int(kwargs["month"])
                if 1 <= month <= 12:
                    self._quarter = (month + 2) / 3
                else:
                    raise ValueError

    def __str__(self):
        return '{0}-Q{1}'.format(self._year, self._quarter)
class Cheese:
    def __init__(self, *args, **kwargs):
        """A user-friendly initialiser for the general-purpose constructor.
        """
        ...

    def _init_parmesan(self, *args, **kwargs):
        """A special initialiser for Parmesan cheese.
        """
        ...

    def _init_gouda(self, *args, **kwargs):
        """A special initialiser for Gouda cheese.
        """
        ...

    @classmethod
    def make_parmesan(cls, *args, **kwargs):
        new = cls.__new__(cls)
        new._init_parmesan(*args, **kwargs)
        return new

    @classmethod
    def make_gouda(cls, *args, **kwargs):
        new = cls.__new__(cls)
        new._init_gouda(*args, **kwargs)
        return new
I do not see a straightforward answer with an example yet. The idea is simple:
use __init__ as the "basic" constructor, as Python only allows one __init__ method
use @classmethod to create any other constructors and call the basic constructor
Here is a new try.
from datetime import date

class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    @classmethod
    def fromBirthYear(cls, name, birthYear):
        return cls(name, date.today().year - birthYear)
Usage:
p = Person('tim', age=18)
p = Person.fromBirthYear('tim', birthYear=2004)
Here (drawing on this earlier answer, the pure Python version of classmethod in the docs, and as suggested by this comment) is a decorator that can be used to create multiple constructors.
from types import MethodType
from functools import wraps

class constructor:
    def __init__(self, func):
        @wraps(func)
        def wrapped(cls, *args, **kwargs):
            obj = cls.__new__(cls)        # create a new instance but don't init
            super(cls, obj).__init__()    # init any classes it inherits from
            func(obj, *args, **kwargs)    # run the constructor with obj as self
            return obj
        self.wrapped = wrapped

    def __get__(self, _, cls):
        return MethodType(self.wrapped, cls)  # bind this constructor to the class

class Test:
    def __init__(self, data_sequence):
        """ Default constructor, initiates with data sequence """
        self.data = [item ** 2 for item in data_sequence]

    @constructor
    def zeros(self, size):
        """ Initiates with zeros """
        self.data = [0 for _ in range(size)]

a = Test([1, 2, 3])
b = Test.zeros(100)
This seems the cleanest way in some cases (see e.g. multiple dataframe constructors in Pandas), where providing multiple optional arguments to a single constructor would be inconvenient: for example cases where it would require too many parameters, be unreadable, be slower or use more memory than needed. However, as earlier comments have argued, in most cases it is probably more Pythonic to route through a single constructor with optional parameters, adding class methods where needed.