What is a clean "pythonic" way to implement multiple constructors?

I can't find a definitive answer for this. As far as I know, you can't have multiple __init__ functions in a Python class. So how do I solve this problem?
Suppose I have a class called Cheese with the number_of_holes property. How can I have two ways of creating cheese objects...
One that takes a number of holes like this: parmesan = Cheese(num_holes = 15).
And one that takes no arguments and just randomizes the number_of_holes property: gouda = Cheese().
I can think of only one way to do this, but this seems clunky:
class Cheese():
    def __init__(self, num_holes=0):
        if num_holes == 0:
            # Randomize number_of_holes
            ...
        else:
            self.number_of_holes = num_holes
What do you say? Is there another way?

Actually None is much better for "magic" values:
class Cheese():
    def __init__(self, num_holes=None):
        if num_holes is None:
            ...
Now if you want complete freedom of adding more parameters:
class Cheese():
    def __init__(self, *args, **kwargs):
        # args -- tuple of anonymous arguments
        # kwargs -- dictionary of named arguments
        self.num_holes = kwargs.get('num_holes', random_holes())
To better explain the concept of *args and **kwargs (you can actually change these names):
def f(*args, **kwargs):
    print 'args: ', args, ' kwargs: ', kwargs
>>> f('a')
args: ('a',) kwargs: {}
>>> f(ar='a')
args: () kwargs: {'ar': 'a'}
>>> f(1,2,param=3)
args: (1, 2) kwargs: {'param': 3}
http://docs.python.org/reference/expressions.html#calls

Using num_holes=None as the default is fine if you are going to have just __init__.
If you want multiple, independent "constructors", you can provide these as class methods. These are usually called factory methods. In this case you could have the default for num_holes be 0.
from random import randint

class Cheese(object):
    def __init__(self, num_holes=0):
        "defaults to a solid cheese"
        self.number_of_holes = num_holes

    @classmethod
    def random(cls):
        return cls(randint(0, 100))

    @classmethod
    def slightly_holey(cls):
        return cls(randint(0, 33))

    @classmethod
    def very_holey(cls):
        return cls(randint(66, 100))
Now create objects like this:
gouda = Cheese()
emmentaler = Cheese.random()
leerdammer = Cheese.slightly_holey()

One should definitely prefer the solutions already posted, but since no one mentioned this solution yet, I think it is worth mentioning for completeness.
The @classmethod approach can be modified to provide an alternative constructor which does not invoke the default constructor (__init__). Instead, an instance is created using __new__.
This could be used if the type of initialization cannot be selected based on the type of the constructor argument, and the constructors do not share code.
Example:
class MyClass(set):

    def __init__(self, filename):
        self._value = load_from_file(filename)

    @classmethod
    def from_somewhere(cls, somename):
        obj = cls.__new__(cls)  # Does not call __init__
        super(MyClass, obj).__init__()  # Don't forget to call any polymorphic base class initializers
        obj._value = load_from_somewhere(somename)
        return obj

All of these answers are excellent if you want to use optional parameters, but another Pythonic possibility is to use a classmethod to generate a factory-style pseudo-constructor:
def __init__(self, num_holes):
    # do stuff with the number
    ...

@classmethod
def fromRandom(cls):
    return cls(randint(0, 100))  # some random number

Why do you think your solution is "clunky"? Personally I would prefer one constructor with default values over multiple overloaded constructors in situations like yours (Python does not support method overloading anyway):
def __init__(self, num_holes=None):
    if num_holes is None:
        # Construct a gouda
        ...
    else:
        # custom cheese
        ...
    # common initialization
For really complex cases with lots of different constructors, it might be cleaner to use different factory functions instead:
@classmethod
def create_gouda(cls):
    c = Cheese()
    # ...
    return c

@classmethod
def create_cheddar(cls):
    # ...
In your cheese example you might want to use a Gouda subclass of Cheese though...

Those are good ideas for your implementation, but what if you are presenting a cheese-making interface to a user? They don't care how many holes the cheese has or what internals go into making cheese. The user of your code just wants "gouda" or "parmesan", right?
So why not do this:
# cheese_user.py
from cheeses import make_gouda, make_parmesan

gouda = make_gouda()
parmesan = make_parmesan()
And then you can use any of the methods above to actually implement the functions:
# cheeses.py
class Cheese(object):
    def __init__(self, *args, **kwargs):
        # args -- tuple of anonymous arguments
        # kwargs -- dictionary of named arguments
        self.num_holes = kwargs.get('num_holes', random_holes())

def make_gouda():
    return Cheese()

def make_parmesan():
    return Cheese(num_holes=15)
This is a good encapsulation technique, and I think it is more Pythonic. To me this way of doing things fits more in line with duck typing. You are simply asking for a gouda object and you don't really care what class it is.

Overview
For the specific cheese example, I agree with many of the other answers about using default values to signal random initialization or to use a static factory method. However, there may also be related scenarios that you had in mind where there is value in having alternative, concise ways of calling the constructor without hurting the quality of parameter names or type information.
Since Python 3.8, functools.singledispatchmethod can help accomplish this in many cases (and the more flexible multimethod can apply in even more scenarios). (This related post describes how one could accomplish the same in Python 3.4 without a library.) I haven't seen examples in the documentation for either of these that specifically shows overloading __init__ as you ask about, but it appears that the same principles for overloading any member method apply (as shown below).
"Single dispatch" (available in the standard library) requires that there be at least one positional parameter and that the type of the first argument be sufficient to distinguish among the possible overloaded options. For the specific Cheese example, this doesn't hold since you wanted random holes when no parameters were given, but multidispatch does support the very same syntax and can be used as long as each method version can be distinguish based on the number and type of all arguments together.
Example
Here is an example of how to use either method (some of the details are in order to please mypy which was my goal when I first put this together):
from functools import singledispatchmethod as overload

# or the following more flexible method after `pip install multimethod`
# from multimethod import multidispatch as overload

class MyClass:

    @overload  # type: ignore[misc]
    def __init__(self, a: int = 0, b: str = 'default'):
        self.a = a
        self.b = b

    @__init__.register
    def _from_str(self, b: str, a: int = 0):
        self.__init__(a, b)  # type: ignore[misc]

    def __repr__(self) -> str:
        return f"({self.a}, {self.b})"

print([
    MyClass(1, "test"),
    MyClass("test", 1),
    MyClass("test"),
    MyClass(1, b="test"),
    MyClass("test", a=1),
    MyClass("test"),
    MyClass(1),
    # MyClass(),  # `multidispatch` version handles these 3, too.
    # MyClass(a=1, b="test"),
    # MyClass(b="test", a=1),
])
Output:
[(1, test), (1, test), (0, test), (1, test), (1, test), (0, test), (1, default)]
Notes:
I wouldn't usually make the alias called overload, but it helped make the diff between using the two methods just a matter of which import you use.
The # type: ignore[misc] comments are not necessary to run, but I put them in there to please mypy which doesn't like decorating __init__ nor calling __init__ directly.
If you are new to the decorator syntax, realize that putting @overload before the definition of __init__ is just sugar for __init__ = overload(the original definition of __init__). In this case, overload is a class, so the resulting __init__ is an object that has a __call__ method, making it look like a function, but that also has a .register method which is called later to add another overloaded version of __init__. This is a bit messy, but it pleases mypy because there are no method names being defined twice. If you don't care about mypy and are planning to use the external library anyway, multimethod also has simpler alternative ways of specifying overloaded versions.
Defining __repr__ is simply there to make the printed output meaningful (you don't need it in general).
Notice that multidispatch is able to handle three additional input combinations that don't have any positional parameters.

Use num_holes=None as a default, instead. Then check for whether num_holes is None, and if so, randomize. That's what I generally see, anyway.
More radically different construction methods may warrant a classmethod that returns an instance of cls.
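A minimal sketch of both suggestions, assuming randint supplies the randomisation (the factory name random is just illustrative):

from random import randint

class Cheese:
    def __init__(self, num_holes=None):
        # None signals "pick a random number of holes"
        self.number_of_holes = num_holes if num_holes is not None else randint(0, 100)

    @classmethod
    def random(cls):
        # a radically different construction path can live in a classmethod
        return cls(randint(0, 100))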

The best answer is the one above about default arguments, but I had fun writing this, and it certainly does fit the bill for "multiple constructors". Use at your own risk.
What about the __new__ method?
"Typical implementations create a new instance of the class by invoking the superclass's __new__() method using super(currentclass, cls).__new__(cls[, ...]) with appropriate arguments and then modifying the newly-created instance as necessary before returning it."
So you can have the __new__ method modify your class definition by attaching the appropriate constructor method.
class Cheese(object):
    def __new__(cls, *args, **kwargs):
        obj = super(Cheese, cls).__new__(cls)
        num_holes = kwargs.get('num_holes', random_holes())
        if num_holes == 0:
            cls.__init__ = cls.foomethod
        else:
            cls.__init__ = cls.barmethod
        return obj

    def foomethod(self, *args, **kwargs):
        print "foomethod called as __init__ for Cheese"

    def barmethod(self, *args, **kwargs):
        print "barmethod called as __init__ for Cheese"

if __name__ == "__main__":
    parm = Cheese(num_holes=5)

I'd use inheritance. Especially if there are going to be more differences than the number of holes. Especially if Gouda will need to have a different set of members than Parmesan.
class Gouda(Cheese):
    def __init__(self):
        super(Gouda, self).__init__(num_holes=10)

class Parmesan(Cheese):
    def __init__(self):
        super(Parmesan, self).__init__(num_holes=15)

Since my initial answer was criticised on the basis that my special-purpose constructors did not call the (unique) default constructor, I post here a modified version that honours the wishes that all constructors shall call the default one:
class Cheese:
    def __init__(self, *args, _initialiser="_default_init", **kwargs):
        """A multi-initialiser.
        """
        getattr(self, _initialiser)(*args, **kwargs)

    def _default_init(self, ...):
        """A user-friendly smart or general-purpose initialiser.
        """
        ...

    def _init_parmesan(self, ...):
        """A special initialiser for Parmesan cheese.
        """
        ...

    def _init_gouda(self, ...):
        """A special initialiser for Gouda cheese.
        """
        ...

    @classmethod
    def make_parmesan(cls, *args, **kwargs):
        return cls(*args, **kwargs, _initialiser="_init_parmesan")

    @classmethod
    def make_gouda(cls, *args, **kwargs):
        return cls(*args, **kwargs, _initialiser="_init_gouda")

This is how I solved it for a YearQuarter class I had to create. I created an __init__ which is very tolerant to a wide variety of input.
You use it like this:
>>> from datetime import date
>>> temp1 = YearQuarter(year=2017, month=12)
>>> print temp1
2017-Q4
>>> temp2 = YearQuarter(temp1)
>>> print temp2
2017-Q4
>>> temp3 = YearQuarter((2017, 6))
>>> print temp3
2017-Q2
>>> temp4 = YearQuarter(date(2017, 1, 18))
>>> print temp4
2017-Q1
>>> temp5 = YearQuarter(year=2017, quarter = 3)
>>> print temp5
2017-Q3
And this is what the __init__ and the rest of the class look like:
import datetime

class YearQuarter:
    def __init__(self, *args, **kwargs):
        if len(args) == 1:
            [x] = args
            if isinstance(x, datetime.date):
                self._year = int(x.year)
                self._quarter = (int(x.month) + 2) / 3
            elif isinstance(x, tuple):
                year, month = x
                self._year = int(year)
                month = int(month)
                if 1 <= month <= 12:
                    self._quarter = (month + 2) / 3
                else:
                    raise ValueError
            elif isinstance(x, YearQuarter):
                self._year = x._year
                self._quarter = x._quarter
        elif len(args) == 2:
            year, month = args
            self._year = int(year)
            month = int(month)
            if 1 <= month <= 12:
                self._quarter = (month + 2) / 3
            else:
                raise ValueError
        elif kwargs:
            self._year = int(kwargs["year"])
            if "quarter" in kwargs:
                quarter = int(kwargs["quarter"])
                if 1 <= quarter <= 4:
                    self._quarter = quarter
                else:
                    raise ValueError
            elif "month" in kwargs:
                month = int(kwargs["month"])
                if 1 <= month <= 12:
                    self._quarter = (month + 2) / 3
                else:
                    raise ValueError

    def __str__(self):
        return '{0}-Q{1}'.format(self._year, self._quarter)

class Cheese:
    def __init__(self, *args, **kwargs):
        """A user-friendly initialiser for the general-purpose constructor.
        """
        ...

    def _init_parmesan(self, *args, **kwargs):
        """A special initialiser for Parmesan cheese.
        """
        ...

    def _init_gouda(self, *args, **kwargs):
        """A special initialiser for Gouda cheese.
        """
        ...

    @classmethod
    def make_parmesan(cls, *args, **kwargs):
        new = cls.__new__(cls)
        new._init_parmesan(*args, **kwargs)
        return new

    @classmethod
    def make_gouda(cls, *args, **kwargs):
        new = cls.__new__(cls)
        new._init_gouda(*args, **kwargs)
        return new

I do not see a straightforward answer with an example yet. The idea is simple:
use __init__ as the "basic" constructor, as Python only allows one __init__ method
use @classmethod to create any other constructors and call the basic constructor
Here is a new try.
from datetime import date

class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    @classmethod
    def fromBirthYear(cls, name, birthYear):
        return cls(name, date.today().year - birthYear)
Usage:
p = Person('tim', age=18)
p = Person.fromBirthYear('tim', birthYear=2004)

Here (drawing on this earlier answer, the pure Python version of classmethod in the docs, and as suggested by this comment) is a decorator that can be used to create multiple constructors.
from types import MethodType
from functools import wraps

class constructor:
    def __init__(self, func):
        @wraps(func)
        def wrapped(cls, *args, **kwargs):
            obj = cls.__new__(cls)        # Create new instance but don't init
            super(cls, obj).__init__()    # Init any classes it inherits from
            func(obj, *args, **kwargs)    # Run the constructor with obj as self
            return obj
        self.wrapped = wrapped

    def __get__(self, _, cls):
        return MethodType(self.wrapped, cls)  # Bind this constructor to the class

class Test:
    def __init__(self, data_sequence):
        """ Default constructor, initiates with data sequence """
        self.data = [item ** 2 for item in data_sequence]

    @constructor
    def zeros(self, size):
        """ Initiates with zeros """
        self.data = [0 for _ in range(size)]

a = Test([1,2,3])
b = Test.zeros(100)
a = Test([1,2,3])
b = Test.zeros(100)
This seems the cleanest way in some cases (see e.g. multiple dataframe constructors in Pandas), where providing multiple optional arguments to a single constructor would be inconvenient: for example cases where it would require too many parameters, be unreadable, be slower or use more memory than needed. However, as earlier comments have argued, in most cases it is probably more Pythonic to route through a single constructor with optional parameters, adding class methods where needed.

Related

Return a custom value when a class method is accessed as an attribute, but still allow for it to perform a computation when called?

Specifically, I would want MyClass.my_method to be used for lookup of a value in the class dictionary, but MyClass.my_method() to be a method that accepts arguments and performs a computation to update an attribute in MyClass and then returns MyClass with all its attributes (including the updated one).
I am thinking that this might be doable with Python's descriptors (maybe overriding __get__ or __call__), but I can't figure out how this would look. I understand that the behavior might be confusing, but I am interested if it is possible (and if there are any other major caveats).
I have seen that you can do something similar for classes and functions by overriding __repr__, but I can't find a similar way for a method within a class. My returned value will also not always be a string, which seems to prohibit the __repr__-based approaches mentioned in these two questions:
Possible to change a function's repr in python?
How to create a custom string representation for a class object?
Thank you Joel for the minimal implementation. I found that the remaining problem is the lack of initialization of the parent; since I did not find a generic way of initializing it, I need to check for attributes in the case of list/dict and add the initialization values to the parent accordingly.
This addition to the code should make it work for lists/dicts:
def classFactory(parent, init_val, target):
    class modifierClass(parent):
        def __init__(self, init_val):
            super().__init__()
            dict_attr = getattr(parent, "update", None)
            list_attr = getattr(parent, "extend", None)
            if callable(dict_attr):  # parent is dict
                self.update(init_val)
            elif callable(list_attr):  # parent is list
                self.extend(init_val)
            self.target = target
        def __call__(self, *args):
            self.target.__init__(*args)
    return modifierClass(init_val)

class myClass:
    def __init__(self, init_val=''):
        self.method = classFactory(init_val.__class__, init_val, self)
Unfortunately, we need to add case by case, but this works as intended.
A slightly less verbose way to write the above is the following:
def classFactory(parent, init_val, target):
    class modifierClass(parent):
        def __init__(self, init_val):
            if isinstance(init_val, list):
                self.extend(init_val)
            elif isinstance(init_val, dict):
                self.update(init_val)
            self.target = target
        def __call__(self, *args):
            self.target.__init__(*args)
    return modifierClass(init_val)

class myClass:
    def __init__(self, init_val=''):
        self.method = classFactory(init_val.__class__, init_val, self)
As jasonharper commented,
MyClass.my_method() works by looking up MyClass.my_method, and then attempting to call that object. So the result of MyClass.my_method cannot be a plain string, int, or other common data type [...]
The trouble comes specifically from reusing the same name for these two properties, which is very confusing, just as you said. So, don't do it.
But for the sole interest of it you could try to proxy the value of the property with an object that would return the original MyClass instance when called, use an actual setter to perform any computation you wanted, and also forward arbitrary attributes to the proxied value.
class MyClass:
    _my_method = whatever

    @property
    def my_method(self):
        my_class = self

        class Proxy:
            def __init__(self, value):
                self.__proxied = value
            def __call__(self, value):
                my_class.my_method = value
                return my_class
            def __getattr__(self, name):
                return getattr(self.__proxied, name)
            def __str__(self):
                return str(self.__proxied)
            def __repr__(self):
                return repr(self.__proxied)

        return Proxy(self._my_method)

    @my_method.setter
    def my_method(self, value):
        # your computations
        self._my_method = value
a = MyClass()
b = a.my_method('do not do this at home')
a is b
# True
a.my_method.split(' ')
# ['do', 'not', 'do', 'this', 'at', 'home']
And today, duck typing will abuse you, forcing you to delegate all kinds of magic methods to the proxied value in the proxy class, until the poor codebase where you want to inject this is satisfied with how those values quack.
This is a minimal implementation of Guillherme's answer that updates the method instead of a separate modifiable parameter:
def classFactory(parent, init_val, target):
    class modifierClass(parent):
        def __init__(self, init_val):
            self.target = target
        def __call__(self, *args):
            self.target.__init__(*args)
    return modifierClass(init_val)

class myClass:
    def __init__(self, init_val=''):
        self.method = classFactory(init_val.__class__, init_val, self)
This and the original answer both work well for single values, but it seems like lists and dictionaries are returned as empty instead of with the expected values, and I am not sure why, so help is appreciated here.

How to create a class with multiple constructors that take different number of parameters in Python? [duplicate]

I know that Python does not support method overloading, but I've run into a problem that I can't seem to solve in a nice Pythonic way.
I am making a game where a character needs to shoot a variety of bullets, but how do I write different functions for creating these bullets? For example suppose I have a function that creates a bullet travelling from point A to B with a given speed. I would write a function like this:
def add_bullet(sprite, start, headto, speed):
    # Code ...
But I want to write other functions for creating bullets like:
def add_bullet(sprite, start, direction, speed):
def add_bullet(sprite, start, headto, speed, acceleration):
def add_bullet(sprite, script): # For bullets that are controlled by a script
def add_bullet(sprite, curve, speed): # for bullets with curved paths
# And so on ...
And so on with many variations. Is there a better way to do it without using so many keyword arguments, because it's getting ugly fast? Renaming each function is pretty bad too, because you get either add_bullet1, add_bullet2, or add_bullet_with_really_long_name.
To address some answers:
No, I can't create a Bullet class hierarchy because that's too slow. The actual code for managing bullets is in C, and my functions are wrappers around the C API.
I know about the keyword arguments, but checking for all sorts of combinations of parameters is getting annoying; default arguments do help a lot, though, like acceleration=0.
What you are asking for is called multiple dispatch. See the Julia language examples, which demonstrate different types of dispatch.
However, before looking at that, we'll first tackle why overloading is not really what you want in Python.
Why Not Overloading?
First, one needs to understand the concept of overloading and why it's not applicable to Python.
When working with languages that can discriminate data types at
compile-time, selecting among the alternatives can occur at
compile-time. The act of creating such alternative functions for
compile-time selection is usually referred to as overloading a
function. (Wikipedia)
Python is a dynamically typed language, so the concept of overloading simply does not apply to it. However, all is not lost, since we can create such alternative functions at run-time:
In programming languages that defer data type identification until
run-time the selection among alternative
functions must occur at run-time, based on the dynamically determined
types of function arguments. Functions whose alternative
implementations are selected in this manner are referred to most
generally as multimethods. (Wikipedia)
So we should be able to do multimethods in Python—or, as it is alternatively called: multiple dispatch.
Multiple dispatch
The multimethods are also called multiple dispatch:
Multiple dispatch or multimethods is the feature of some
object-oriented programming languages in which a function or method
can be dynamically dispatched based on the run time (dynamic) type of
more than one of its arguments. (Wikipedia)
Python does not support this out of the box [1], but, as it happens, there is an excellent Python package called multipledispatch that does exactly that.
Solution
Here is how we might use the multipledispatch [2] package to implement your methods:
>>> from multipledispatch import dispatch
>>> from collections import namedtuple
>>> from types import *  # we can test for lambda type, e.g.:
>>> type(lambda a: 1) == LambdaType
True
>>> Sprite = namedtuple('Sprite', ['name'])
>>> Point = namedtuple('Point', ['x', 'y'])
>>> Curve = namedtuple('Curve', ['x', 'y', 'z'])
>>> Vector = namedtuple('Vector', ['x','y','z'])
>>> @dispatch(Sprite, Point, Vector, int)
... def add_bullet(sprite, start, direction, speed):
...     print("Called Version 1")
...
>>> @dispatch(Sprite, Point, Point, int, float)
... def add_bullet(sprite, start, headto, speed, acceleration):
...     print("Called version 2")
...
>>> @dispatch(Sprite, LambdaType)
... def add_bullet(sprite, script):
...     print("Called version 3")
...
>>> @dispatch(Sprite, Curve, int)
... def add_bullet(sprite, curve, speed):
...     print("Called version 4")
...
>>> sprite = Sprite('Turtle')
>>> start = Point(1,2)
>>> direction = Vector(1,1,1)
>>> speed = 100  # km/h
>>> acceleration = 5.0  # m/s**2
>>> script = lambda sprite: sprite.x * 2
>>> curve = Curve(3, 1, 4)
>>> headto = Point(100, 100)  # somewhere far away
>>> add_bullet(sprite, start, direction, speed)
Called Version 1
>>> add_bullet(sprite, start, headto, speed, acceleration)
Called version 2
>>> add_bullet(sprite, script)
Called version 3
>>> add_bullet(sprite, curve, speed)
Called version 4
1. Python 3 currently supports single dispatch
2. Take care not to use multipledispatch in a multi-threaded environment or you will get weird behavior.
Python does support "method overloading" as you present it. In fact, what you just describe is trivial to implement in Python, in so many different ways, but I would go with:
class Character(object):
    # your character __init__ and other methods go here

    def add_bullet(self, sprite=default, start=default,
                   direction=default, speed=default, accel=default,
                   curve=default):
        # do stuff with your arguments
In the above code, default is a plausible default value for those arguments, or None. You can then call the method with only the arguments you are interested in, and Python will use the default values.
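For example, assuming an instance my_character of the class above, a call might look like this (the argument values are made up):

my_character.add_bullet(sprite=my_sprite, start=(0, 0), speed=10)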
You could also do something like this:
class Character(object):
    # your character __init__ and other methods go here

    def add_bullet(self, **kwargs):
        # here you can unpack kwargs as (key, values) and
        # do stuff with them, and use some global dictionary
        # to provide default values and ensure that ``key``
        # is a valid argument...
        # do stuff with your arguments
Another alternative is to directly hook the desired function to the class or instance:
def some_implementation(self, arg1, arg2, arg3):
    # implementation

my_class.add_bullet = some_implementation_of_add_bullet
Yet another way is to use an abstract factory pattern:
class Character(object):
    def __init__(self, bfactory, *args, **kwargs):
        self.bfactory = bfactory

    def add_bullet(self):
        sprite = self.bfactory.sprite()
        speed = self.bfactory.speed()
        # do stuff with your sprite and speed

class pretty_and_fast_factory(object):
    def sprite(self):
        return pretty_sprite

    def speed(self):
        return 10000000000.0

my_character = Character(pretty_and_fast_factory(), a1, a2, kw1=v1, kw2=v2)
my_character.add_bullet()  # uses pretty_and_fast_factory

# now, if you have another factory called "ugly_and_slow_factory"
# you can change it at runtime in python by issuing
my_character.bfactory = ugly_and_slow_factory()

# In the last example you can see abstract factory and "method
# overloading" (as you call it) in action
You can use "roll-your-own" solution for function overloading. This one is copied from Guido van Rossum's article about multimethods (because there is little difference between multimethods and overloading in Python):
registry = {}

class MultiMethod(object):
    def __init__(self, name):
        self.name = name
        self.typemap = {}

    def __call__(self, *args):
        types = tuple(arg.__class__ for arg in args)  # a generator expression!
        function = self.typemap.get(types)
        if function is None:
            raise TypeError("no match")
        return function(*args)

    def register(self, types, function):
        if types in self.typemap:
            raise TypeError("duplicate registration")
        self.typemap[types] = function

def multimethod(*types):
    def register(function):
        name = function.__name__
        mm = registry.get(name)
        if mm is None:
            mm = registry[name] = MultiMethod(name)
        mm.register(types, function)
        return mm
    return register
The usage would be
from multimethods import multimethod
import unittest

# 'overload' makes more sense in this case
overload = multimethod

class Sprite(object):
    pass

class Point(object):
    pass

class Curve(object):
    pass

class Direction(object):
    pass

@overload(Sprite, Point, Direction, int)
def add_bullet(sprite, start, direction, speed):
    # ...

@overload(Sprite, Point, Point, int, int)
def add_bullet(sprite, start, headto, speed, acceleration):
    # ...

@overload(Sprite, str)
def add_bullet(sprite, script):
    # ...

@overload(Sprite, Curve, int)
def add_bullet(sprite, curve, speed):
    # ...
Most restrictive limitations at the moment are:
methods are not supported, only functions that are not class members;
inheritance is not handled;
kwargs are not supported;
registering new functions should be done at import time; the whole thing is not thread-safe
A possible option is to use the multipledispatch module as detailed here:
http://matthewrocklin.com/blog/work/2014/02/25/Multiple-Dispatch
Instead of doing this:
def add(self, other):
    if isinstance(other, Foo):
        ...
    elif isinstance(other, Bar):
        ...
    else:
        raise NotImplementedError()
You can do this:
from multipledispatch import dispatch

@dispatch(int, int)
def add(x, y):
    return x + y

@dispatch(object, object)
def add(x, y):
    return "%s + %s" % (x, y)
With the resulting usage:
>>> add(1, 2)
3
>>> add(1, 'hello')
'1 + hello'
In Python 3.4, PEP 443 (single-dispatch generic functions) was added.
Here is a short API description from PEP.
To define a generic function, decorate it with the @singledispatch decorator. Note that the dispatch happens on the type of the first argument. Create your function accordingly:
from functools import singledispatch

@singledispatch
def fun(arg, verbose=False):
    if verbose:
        print("Let me just say,", end=" ")
    print(arg)
To add overloaded implementations to the function, use the register() attribute of the generic function. This is a decorator, taking a type parameter and decorating a function implementing the operation for that type:
@fun.register(int)
def _(arg, verbose=False):
    if verbose:
        print("Strength in numbers, eh?", end=" ")
    print(arg)

@fun.register(list)
def _(arg, verbose=False):
    if verbose:
        print("Enumerate this:")
    for i, elem in enumerate(arg):
        print(i, elem)
The @overload decorator was added with type hints (PEP 484).
While this doesn't change the behaviour of Python, it does make it easier to understand what is going on, and for mypy to detect errors.
See: Type hints and PEP 484
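For illustration, a minimal sketch of how the typing.overload signatures plus the single runtime implementation could look for the cheese case (using randint for the random default is an assumption):

from random import randint
from typing import Optional, overload

class Cheese:
    @overload
    def __init__(self) -> None: ...
    @overload
    def __init__(self, num_holes: int) -> None: ...

    # The overloads above exist only for type checkers; this single
    # implementation is what actually runs for every call shape.
    def __init__(self, num_holes: Optional[int] = None) -> None:
        self.number_of_holes = num_holes if num_holes is not None else randint(0, 100)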
This type of behaviour is typically solved (in OOP languages) using polymorphism. Each type of bullet would be responsible for knowing how it travels. For instance:
class Bullet(object):
    def __init__(self):
        self.curve = None
        self.speed = None
        self.acceleration = None
        self.sprite_image = None

class RegularBullet(Bullet):
    def __init__(self):
        super(RegularBullet, self).__init__()
        self.speed = 10

class Grenade(Bullet):
    def __init__(self):
        super(Grenade, self).__init__()
        self.speed = 4
        self.curve = 3.5

def add_bullet(bullet):
    c_function(bullet.speed, bullet.curve, bullet.acceleration, bullet.sprite, bullet.x, bullet.y)

add_bullet(Grenade())
void c_function(double speed, double curve, double accel, char[] sprite, ...) {
    if (speed != null && ...) regular_bullet(...)
    else if (...) curved_bullet(...)
    // ..etc..
}
Pass as many arguments as exist to the c_function, and then do the job of determining which C function to call based on the values in the initial C function. So, Python should only ever be calling the one C function. That one C function looks at the arguments, and then can delegate to other C functions appropriately.
You're essentially just using each subclass as a different data container, but by defining all the potential arguments on the base class, the subclasses are free to ignore the ones they do nothing with.
When a new type of bullet comes along, you can simply define one more property on the base, change the one python function so that it passes the extra property, and the one c_function that examines the arguments and delegates appropriately. It doesn't sound too bad I guess.
It is impossible by definition to overload a function in python (read on for details), but you can achieve something similar with a simple decorator
class overload:
    def __init__(self, f):
        self.cases = {}

    def args(self, *args):
        def store_function(f):
            self.cases[tuple(args)] = f
            return self
        return store_function

    def __call__(self, *args):
        function = self.cases[tuple(type(arg) for arg in args)]
        return function(*args)
You can use it like this
@overload
def f():
    pass

@f.args(int, int)
def f(x, y):
    print('two integers')

@f.args(float)
def f(x):
    print('one float')

f(5.5)
f(1, 2)
Modify it to adapt it to your use case.
A clarification of concepts
function dispatch: there are multiple functions with the same name. Which one should be called? two strategies
static/compile-time dispatch (aka "overloading"): decide which function to call based on the compile-time type of the arguments. In all dynamic languages, there is no compile-time type, so overloading is impossible by definition
dynamic/run-time dispatch: decide which function to call based on the runtime type of the arguments. This is what all OOP languages do: multiple classes have the same methods, and the language decides which one to call based on the type of the self/this argument. However, most languages only do it for the this argument. The above decorator extends the idea to multiple parameters.
To clear up, assume that we define, in a hypothetical static language, the functions
void f(Integer x):
    print('integer called')

void f(Float x):
    print('float called')

void f(Number x):
    print('number called')

Number x = new Integer('5')
f(x)
x = new Number('3.14')
f(x)
With static dispatch (overloading) you will see "number called" twice, because x has been declared as Number, and that's all overloading cares about. With dynamic dispatch you will see "integer called, float called", because those are the actual types of x at the time the function is called.
By passing keyword args.
def add_bullet(**kwargs):
    # check for the arguments listed above and do the proper things
Python 3.8 added functools.singledispatchmethod
Transform a method into a single-dispatch generic function.
To define a generic method, decorate it with the @singledispatchmethod
decorator. Note that the dispatch happens on the type of the first
non-self or non-cls argument, create your function accordingly:
from functools import singledispatchmethod

class Negator:
    @singledispatchmethod
    def neg(self, arg):
        raise NotImplementedError("Cannot negate a")

    @neg.register
    def _(self, arg: int):
        return -arg

    @neg.register
    def _(self, arg: bool):
        return not arg

negator = Negator()
for v in [42, True, "Overloading"]:
    neg = negator.neg(v)
    print(f"{v=}, {neg=}")
Output
v=42, neg=-42
v=True, neg=False
NotImplementedError: Cannot negate a
@singledispatchmethod supports nesting with other decorators such as
@classmethod. Note that to allow for dispatcher.register,
singledispatchmethod must be the outermost decorator. Here is the
Negator class with the neg methods being class bound:
from functools import singledispatchmethod

class Negator:
    @singledispatchmethod
    @staticmethod
    def neg(arg):
        raise NotImplementedError("Cannot negate a")

    @neg.register
    def _(arg: int) -> int:
        return -arg

    @neg.register
    def _(arg: bool) -> bool:
        return not arg

for v in [42, True, "Overloading"]:
    neg = Negator.neg(v)
    print(f"{v=}, {neg=}")
Output:
v=42, neg=-42
v=True, neg=False
NotImplementedError: Cannot negate a
The same pattern can be used for other similar decorators:
staticmethod, abstractmethod, and others.
I think your basic requirement is to have a C/C++-like syntax in Python with the least headache possible. Although I liked Alexander Poluektov's answer it doesn't work for classes.
The following should work for classes. It works by distinguishing by the number of non-keyword arguments (but it doesn't support distinguishing by type):
class TestOverloading(object):
    def overloaded_function(self, *args, **kwargs):
        # Call the function that has the same number of non-keyword arguments.
        getattr(self, "_overloaded_function_impl_" + str(len(args)))(*args, **kwargs)

    def _overloaded_function_impl_3(self, sprite, start, direction, **kwargs):
        print "This is overload 3"
        print "Sprite: %s" % str(sprite)
        print "Start: %s" % str(start)
        print "Direction: %s" % str(direction)

    def _overloaded_function_impl_2(self, sprite, script):
        print "This is overload 2"
        print "Sprite: %s" % str(sprite)
        print "Script: "
        print script
And it can be used simply like this:
test = TestOverloading()
test.overloaded_function("I'm a Sprite", 0, "Right")
print
test.overloaded_function("I'm another Sprite", "while x == True: print 'hi'")
Output:
This is overload 3
Sprite: I'm a Sprite
Start: 0
Direction: Right
This is overload 2
Sprite: I'm another Sprite
Script:
while x == True: print 'hi'
You can achieve this with the following Python code:
@overload
def test(message: str):
    return message

@overload
def test(number: int):
    return number + 1
Either use multiple keyword arguments in the definition, or create a Bullet hierarchy whose instances are passed to the function.
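A minimal sketch of both options (the names are illustrative, not the question's actual API):

# Option 1: one function, keyword arguments with defaults
def add_bullet(sprite, start=None, headto=None, direction=None,
               speed=None, script=None, curve=None):
    # inspect which arguments were supplied and act accordingly
    ...

# Option 2: a small Bullet hierarchy passed to a single entry point
class Bullet:
    def __init__(self, sprite, speed=None):
        self.sprite = sprite
        self.speed = speed

class ScriptedBullet(Bullet):
    def __init__(self, sprite, script):
        super().__init__(sprite)
        self.script = script

def add_bullet(bullet):
    # each Bullet subclass carries the data its kind of bullet needs
    ...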
I think a Bullet class hierarchy with the associated polymorphism is the way to go. You can effectively overload the base class constructor by using a metaclass so that calling the base class results in the creation of the appropriate subclass object. Below is some sample code to illustrate the essence of what I mean.
Updated
The code has been modified to run under both Python 2 and 3 to keep it relevant. This was done in a way that avoids the use of Python's explicit metaclass syntax, which varies between the two versions.
To accomplish that objective, a BulletMetaBase instance of the BulletMeta class is created by explicitly calling the metaclass when creating the Bullet baseclass (rather than using the __metaclass__= class attribute or via a metaclass keyword argument depending on the Python version).
class BulletMeta(type):
    def __new__(cls, classname, bases, classdict):
        """ Create Bullet class or a subclass of it. """
        classobj = type.__new__(cls, classname, bases, classdict)
        if classname != 'BulletMetaBase':
            if classname == 'Bullet':  # Base class definition?
                classobj.registry = {}  # Initialize subclass registry.
            else:
                try:
                    alias = classdict['alias']
                except KeyError:
                    raise TypeError("Bullet subclass %s has no 'alias'" %
                                    classname)
                if alias in Bullet.registry:  # unique?
                    raise TypeError("Bullet subclass %s's alias attribute "
                                    "%r already in use" % (classname, alias))
                # Register subclass under the specified alias.
                classobj.registry[alias] = classobj
        return classobj

    def __call__(cls, alias, *args, **kwargs):
        """ Bullet subclasses instance factory.

            Subclasses should only be instantiated by calls to the base
            class with their subclass' alias as the first arg.
        """
        if cls != Bullet:
            raise TypeError("Bullet subclass %r objects should not "
                            "be explicitly constructed." % cls.__name__)
        elif alias not in cls.registry:  # Bullet subclass?
            raise NotImplementedError("Unknown Bullet subclass %r" %
                                      str(alias))
        # Create designated subclass object (call its __init__ method).
        subclass = cls.registry[alias]
        return type.__call__(subclass, *args, **kwargs)
class Bullet(BulletMeta('BulletMetaBase', (object,), {})):
    # Presumably you'd define some abstract methods here
    # that would be supported by all subclasses.
    # These definitions could just raise NotImplementedError() or
    # implement the functionality in some sub-optimal generic way.
    # For example:
    def fire(self, *args, **kwargs):
        raise NotImplementedError(self.__class__.__name__ + ".fire() method")

    # Abstract base class's __init__ should never be called.
    # If subclasses need to call super class's __init__() for some
    # reason then it would need to be implemented.
    def __init__(self, *args, **kwargs):
        raise NotImplementedError("Bullet is an abstract base class")
# Subclass definitions.
class Bullet1(Bullet):
    alias = 'B1'
    def __init__(self, sprite, start, direction, speed):
        print('creating %s object' % self.__class__.__name__)
    def fire(self, trajectory):
        print('Bullet1 object fired with %s trajectory' % trajectory)

class Bullet2(Bullet):
    alias = 'B2'
    def __init__(self, sprite, start, headto, speed, acceleration):
        print('creating %s object' % self.__class__.__name__)

class Bullet3(Bullet):
    alias = 'B3'
    def __init__(self, sprite, script):  # script-controlled bullets
        print('creating %s object' % self.__class__.__name__)

class Bullet4(Bullet):
    alias = 'B4'
    def __init__(self, sprite, curve, speed):  # for bullets with curved paths
        print('creating %s object' % self.__class__.__name__)

class Sprite: pass
class Curve: pass

b1 = Bullet('B1', Sprite(), (10,20,30), 90, 600)
b2 = Bullet('B2', Sprite(), (-30,17,94), (1,-1,-1), 600, 10)
b3 = Bullet('B3', Sprite(), 'bullet42.script')
b4 = Bullet('B4', Sprite(), Curve(), 720)
b1.fire('uniform gravity')
b2.fire('uniform gravity')
Output:
creating Bullet1 object
creating Bullet2 object
creating Bullet3 object
creating Bullet4 object
Bullet1 object fired with uniform gravity trajectory
Traceback (most recent call last):
File "python-function-overloading.py", line 93, in <module>
b2.fire('uniform gravity') # NotImplementedError: Bullet2.fire() method
File "python-function-overloading.py", line 49, in fire
raise NotImplementedError(self.__class__.__name__ + ".fire() method")
NotImplementedError: Bullet2.fire() method
You can easily implement function overloading in Python. Here is an example using floats and integers:
class OverloadedFunction:
    def __init__(self):
        self.router = {int:   self.f_int,
                       float: self.f_float}

    def __call__(self, x):
        return self.router[type(x)](x)

    def f_int(self, x):
        print('Integer Function')
        return x**2

    def f_float(self, x):
        print('Float Function (Overloaded)')
        return x**3

# f is our overloaded function
f = OverloadedFunction()
print(f(3))
print(f(3.))

# Output:
# Integer Function
# 9
# Float Function (Overloaded)
# 27.0
The main idea behind the code is that a class holds the different (overloaded) functions that you would like to implement, and a Dictionary works as a router, directing your code towards the right function depending on the input type(x).
PS1. In case of custom classes, like Bullet1, you can initialize the internal dictionary following a similar pattern, such as self.D = {Bullet1: self.f_Bullet1, ...}. The rest of the code is the same.
PS2. The time/space complexity of the proposed solution is fairly good as well, with an average cost of O(1) per operation.
Use keyword arguments with defaults. E.g.
def add_bullet(sprite, start=default, direction=default, script=default, speed=default):
In the case of a straight bullet versus a curved bullet, I'd add two functions: add_bullet_straight and add_bullet_curved.
Overloading methods is tricky in Python. However, you can pass dicts, lists, or primitive variables instead.
I have tried something for my use cases, and this could help people here understand how to overload methods.
Let's take your example:
a single method, called from a different class with different kinds of arguments.
def add_bullet(sprite=None, start=None, headto=None, speed=None, acceleration=None):
Pass the arguments from the remote class:
add_bullet(sprite='test', start=True, headto={'lat': 10.6666, 'long': 10.6666}, acceleration=10.6)
Or
add_bullet(sprite='test', start=True, headto={'lat': 10.6666, 'long': 10.6666}, speed=['10', '20', '30'])
So lists, dictionaries, and primitive variables can all be handled by the same method.
Try it out for your code.
Plum supports it in a straightforward pythonic way. Copying an example from the README below.
from plum import dispatch

@dispatch
def f(x: str):
    return "This is a string!"

@dispatch
def f(x: int):
    return "This is an integer!"
>>> f("1")
'This is a string!'
>>> f(1)
'This is an integer!'
You can also try this code. We can try any number of arguments
# Finding the average of a given number of arguments
def avg(*args):  # args is the argument name we give
    sum = 0
    for i in args:
        sum += i
    average = sum / len(args)  # len(args) is the number of arguments given
    print("Avg: ", average)
# call function with different number of arguments
avg(1,2)
avg(5,6,4,7)
avg(11,23,54,111,76)

How to make instance specific methods in python

So I've come across this problem; it's kind of hard to explain, so I'll try with a pizza analogy.
We have the following classes:
class Storage:
    # this seems like i should use a dict, but let's assume there is more functionality to it
    def __init__(self, **kwargs):
        self.storage = kwargs
        # use like: Storage(tomato_cans=50, mozzarella_slices=200, ready_dough=20)

    def new_item(self, item_name: str, number: int):
        self.storage[item_name] = number

    def use(self, item_name: str, number: int):
        self.storage[item_name] = self.storage.get(item_name) - number

    def buy(self, item_name: str, number: int):
        self.storage[item_name] = self.storage.get(item_name) + number


class Oven:
    def __init__(self, number_parallel):
        # number of parallel pizzas possible
        self.timers = [0] * number_parallel

    def ready(self):
        return 0 in self.timers

    def use(self, for_mins):
        for i, timer in enumerate(self.timers):
            if timer == 0:
                self.timers[i] = for_mins
                break

    def pass_time(self, mins):
        for i in range(len(self.timers)):
            self.timers[i] = max(0, self.timers[i] - mins)


class Pizza:
    def __init__(self, minutes=6, dough=1, tomato_cans=1, mozzarella_slices=8, **kwargs):
        self.ingredients = kwargs
        self.ingredients['dough'] = dough
        self.ingredients['tomato_cans'] = tomato_cans
        self.ingredients['mozzarella_slices'] = mozzarella_slices
        self.minutes = minutes

    def possible(self, oven, storage):
        if not oven.ready():
            return False
        for key, number in self.ingredients.items():
            if number > storage.storage.get(key, 0):
                return False
        return True

    def put_in_oven(self, oven, storage):
        oven.use(self.minutes)
        for key, number in self.ingredients.items():
            storage.use(key, number)
We can make Pizzas now, e.g.:
storage = Storage()
oven = Oven(2)
margherita = Pizza()
prosciutto = Pizza(ham_slices=7)

if margherita.possible(oven, storage):
    margherita.put_in_oven(oven, storage)

storage.new_item('ham_slices', 20)
if prosciutto.possible(oven, storage):
    prosciutto.put_in_oven(oven, storage)
And now my question (sorry if this was too detailed):
Can I create a Pizza instance and change its put_in_oven method?
Like for example a Pizza where you'd have to cook some vegetables first or check if it's the right season in the possible method.
I imagine something like:
vegetariana = Pizza(paprika=1, arugula=5)  # something like that, i'm not a pizzaiolo

def vegetariana.put_in_oven(self, oven, storage):
    cook_vegetables()
    super().put_in_oven()  # call Pizza.put_in_oven
I hope this question is not too cumbersome!
Edit:
So let's suppose we would use inheritance:
class VeggiePizza(Pizza):
    def put_in_oven(self, oven, storage):
        self.cut_veggies()
        super().put_in_oven(oven, storage)

    def cut_veggies(self):
        # serves purpose of explaining
        # analogy has its limits
        pass


class SeasonalPizza(Pizza):
    def __init__(self, season_months, minutes=6, dough=1, tomato_cans=1, mozzarella_slices=8, **kwargs):
        self.season_months = season_months  # list of month integers (1 - 12)
        super().__init__(minutes, dough, tomato_cans, mozzarella_slices, **kwargs)

    def possible(self, oven, storage):
        return super().possible(oven, storage) and datetime.datetime.now().month in self.season_months
My problem with that is that I might want a seasonal veggie pizza, other subclasses, different combinations of them, or even subclasses that serve only one instance.
E.g. for a PizzaAmericano (which has french fries on top), I'd use a subclass like VeggiePizza and put fry_french_fries() in front of super().put_in_oven(), and I'd definitely not use that subclass for any other instance than the pizza_americano (unlike the VeggiePizza, where you can make different vegetarian pizze).
Is that OK? To me it seems to contradict the principle of classes.
EDIT:
Okay, thanks to your answers and this recommended question I now know how to add/change a method of an instance. But before I close this question as a duplicate: is that generally something that's totally fine, or rather advised against? I mean, despite how simple it is to do, it seems pretty unnatural to have an instance-specific method, just like instance-specific variables.
You can define per-instance "methods" indeed (nb: py3 example) - Python's "methods" are basically just functions - the only trick is to make sure the function has access to the current instance. Two possible solutions here: use a closure, or explicitly invoke the descriptor protocol on the function.
1/ : with a closure
class Foo:
    def __init__(self, x):
        self.x = x

    def foo(self, bar):
        return bar * self.x

def somefunc():
    f = Foo(42)
    def myfoo(bar):
        # myfoo will keep a reference to `f`
        return bar * (f.x % 2)
    f.foo = myfoo
    return f
2/ with the descriptor protocol
# same definition of class Foo
def somefunc():
    f = Foo(42)
    def myfoo(self, bar):
        return bar * (self.x % 2)
    # cf the link above
    f.foo = myfoo.__get__(f, type(f))
    return f
but the more general solution to your issue is the strategy pattern, and possibly the state pattern for the case of SeasonalPizza.possible().
Since your example is a toy example I won't give a full implementation with those patterns, but they are very straightforward to implement in Python; a minimal sketch of the strategy idea is below.
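A minimal strategy-pattern sketch, assuming the preparation step is injected as a callable (the prepare parameter and cook_vegetables function are illustrative, not from the original code):

class Pizza:
    def __init__(self, minutes=6, prepare=None, **ingredients):
        self.minutes = minutes
        self.ingredients = ingredients
        self._prepare = prepare  # the per-instance "strategy"

    def put_in_oven(self, oven, storage):
        if self._prepare is not None:
            self._prepare(self)  # e.g. cook the vegetables first
        oven.use(self.minutes)
        for key, number in self.ingredients.items():
            storage.use(key, number)

def cook_vegetables(pizza):
    print("cooking vegetables for", sorted(pizza.ingredients))

vegetariana = Pizza(prepare=cook_vegetables, paprika=1, arugula=5)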
Also note that since the goal is mainly to encapsulate those details so the client code doesn't have to bother about which kind of pizza it's dealing with, you'll need some creational pattern to deal with this. Note that Python classes are already factories, due to the two-stage instantiation process - the constructor __new__() creates an empty uninitialized instance, which is then passed to the initializer __init__(). This means that you can override __new__() to return whatever you want... And since Python's classes are objects themselves, you can extend this further by using a custom metaclass.
As a last note: just make sure you keep compatible signatures and return types for all your methods, else you'll break the Liskov substitution principle and lose the first and main benefit of OO, which is to replace conditionals by polymorphic dispatch (IOW: if you break LSP, your client code can no longer handle all pizza types uniformly and ends up full of typechecks and conditionals, which is exactly what OO tries to avoid).
2 possibilities:
either create a case like structure using dicts:
def put_in_oven1(self, *args):
    # implementation 1
    print('method 1')

def put_in_oven2(self, *args):
    # implementation 2
    print('method 2')

class pizza:
    def __init__(self, method, *args):
        self.method = method

    def put_in_oven(self, *args):
        handles = {
            1: put_in_oven1,
            2: put_in_oven2}
        handles[self.method](self, *args)

my_pizza1 = pizza(1)  # uses put_in_oven1
my_pizza1.put_in_oven()

my_pizza2 = pizza(2)  # uses put_in_oven2
my_pizza2.put_in_oven()
Or you can change methods dynamically with the setattr
so for example:
from functools import partial

def put_in_oven1(self, *args):
    # implementation 1
    print('method 1')

def put_in_oven2(self, *args):
    # implementation 2
    print('method 2')

class pizza:
    def __init__(self, *args, **kwargs):
        # init
        pass

    def put_in_oven(self, *args):
        # default method
        print('default')

pizza1 = pizza()
setattr(pizza1, 'put_in_oven', partial(put_in_oven1, pizza1))

pizza2 = pizza()
setattr(pizza2, 'put_in_oven', partial(put_in_oven2, pizza2))

pizza1.put_in_oven()
pizza2.put_in_oven()
or without using partial and defining the methods inside the pizza class
#!/usr/bin/env python
# -*- coding: utf-8 -*-

class pizza:
    def put_in_oven1(self, *args):
        # implementation 1
        print('method 1')

    def put_in_oven2(self, *args):
        # implementation 2
        print('method 2')

    def __init__(self, *args, **kwargs):
        pass

    def put_in_oven(self, *args):
        # default
        print('default')

pizza1 = pizza()
setattr(pizza1, 'put_in_oven', pizza1.put_in_oven1)
pizza1.put_in_oven()

pizza2 = pizza()
setattr(pizza2, 'put_in_oven', pizza2.put_in_oven2)
pizza2.put_in_oven()

Python function overloading

I know that Python does not support method overloading, but I've run into a problem that I can't seem to solve in a nice Pythonic way.
I am making a game where a character needs to shoot a variety of bullets, but how do I write different functions for creating these bullets? For example suppose I have a function that creates a bullet travelling from point A to B with a given speed. I would write a function like this:
def add_bullet(sprite, start, headto, speed):
# Code ...
But I want to write other functions for creating bullets like:
def add_bullet(sprite, start, direction, speed):
def add_bullet(sprite, start, headto, spead, acceleration):
def add_bullet(sprite, script): # For bullets that are controlled by a script
def add_bullet(sprite, curve, speed): # for bullets with curved paths
# And so on ...
And so on with many variations. Is there a better way to do it without using so many keyword arguments cause its getting kinda ugly fast. Renaming each function is pretty bad too because you get either add_bullet1, add_bullet2, or add_bullet_with_really_long_name.
To address some answers:
No I can't create a Bullet class hierarchy because thats too slow. The actual code for managing bullets is in C and my functions are wrappers around C API.
I know about the keyword arguments but checking for all sorts of combinations of parameters is getting annoying, but default arguments help allot like acceleration=0
What you are asking for is called multiple dispatch. See Julia language examples which demonstrates different types of dispatches.
However, before looking at that, we'll first tackle why overloading is not really what you want in Python.
Why Not Overloading?
First, one needs to understand the concept of overloading and why it's not applicable to Python.
When working with languages that can discriminate data types at
compile-time, selecting among the alternatives can occur at
compile-time. The act of creating such alternative functions for
compile-time selection is usually referred to as overloading a
function. (Wikipedia)
Python is a dynamically typed language, so the concept of overloading simply does not apply to it. However, all is not lost, since we can create such alternative functions at run-time:
In programming languages that defer data type identification until
run-time the selection among alternative
functions must occur at run-time, based on the dynamically determined
types of function arguments. Functions whose alternative
implementations are selected in this manner are referred to most
generally as multimethods. (Wikipedia)
So we should be able to do multimethods in Python—or, as it is alternatively called: multiple dispatch.
Multiple dispatch
The multimethods are also called multiple dispatch:
Multiple dispatch or multimethods is the feature of some
object-oriented programming languages in which a function or method
can be dynamically dispatched based on the run time (dynamic) type of
more than one of its arguments. (Wikipedia)
Python does not support this out of the box1, but, as it happens, there is an excellent Python package called multipledispatch that does exactly that.
Solution
Here is how we might use multipledispatch2 package to implement your methods:
>>> from multipledispatch import dispatch
>>> from collections import namedtuple
>>> from types import * # we can test for lambda type, e.g.:
>>> type(lambda a: 1) == LambdaType
True
>>> Sprite = namedtuple('Sprite', ['name'])
>>> Point = namedtuple('Point', ['x', 'y'])
>>> Curve = namedtuple('Curve', ['x', 'y', 'z'])
>>> Vector = namedtuple('Vector', ['x','y','z'])
>>> #dispatch(Sprite, Point, Vector, int)
... def add_bullet(sprite, start, direction, speed):
... print("Called Version 1")
...
>>> #dispatch(Sprite, Point, Point, int, float)
... def add_bullet(sprite, start, headto, speed, acceleration):
... print("Called version 2")
...
>>> #dispatch(Sprite, LambdaType)
... def add_bullet(sprite, script):
... print("Called version 3")
...
>>> #dispatch(Sprite, Curve, int)
... def add_bullet(sprite, curve, speed):
... print("Called version 4")
...
>>> sprite = Sprite('Turtle')
>>> start = Point(1,2)
>>> direction = Vector(1,1,1)
>>> speed = 100 #km/h
>>> acceleration = 5.0 #m/s**2
>>> script = lambda sprite: sprite.x * 2
>>> curve = Curve(3, 1, 4)
>>> headto = Point(100, 100) # somewhere far away
>>> add_bullet(sprite, start, direction, speed)
Called Version 1
>>> add_bullet(sprite, start, headto, speed, acceleration)
Called version 2
>>> add_bullet(sprite, script)
Called version 3
>>> add_bullet(sprite, curve, speed)
Called version 4
1. Python 3 currently supports single dispatch
2. Take care not to use multipledispatch in a multi-threaded environment or you will get weird behavior.
Python does support "method overloading" as you present it. In fact, what you just describe is trivial to implement in Python, in so many different ways, but I would go with:
class Character(object):
# your character __init__ and other methods go here
def add_bullet(self, sprite=default, start=default,
direction=default, speed=default, accel=default,
curve=default):
# do stuff with your arguments
In the above code, default is a plausible default value for those arguments, or None. You can then call the method with only the arguments you are interested in, and Python will use the default values.
You could also do something like this:
class Character(object):
# your character __init__ and other methods go here
def add_bullet(self, **kwargs):
# here you can unpack kwargs as (key, values) and
# do stuff with them, and use some global dictionary
# to provide default values and ensure that ``key``
# is a valid argument...
# do stuff with your arguments
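A minimal sketch of that idea, using a module-level dictionary of defaults (the names are illustrative, not from any library):
BULLET_DEFAULTS = {'sprite': None, 'start': None, 'direction': None,
                   'speed': 0, 'accel': 0, 'curve': None}

class Character(object):
    def add_bullet(self, **kwargs):
        # reject argument names we don't know about
        unknown = set(kwargs) - set(BULLET_DEFAULTS)
        if unknown:
            raise TypeError("unexpected arguments: %s" % ", ".join(sorted(unknown)))
        # caller-supplied values override the defaults
        params = dict(BULLET_DEFAULTS, **kwargs)
        print(params['speed'], params['curve'])  # ... do stuff with params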
Another alternative is to hook the desired function directly onto the class or instance:
def some_implementation(self, arg1, arg2, arg3):
# implementation
my_class.add_bullet = some_implementation
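If you attach the implementation to a single instance rather than to the class, bind it explicitly so that self gets passed (a sketch using types.MethodType; the names are placeholders):
import types

def curved_bullet(self, curve, speed):
    print("curved bullet", curve, speed)

obj = my_class()  # whatever instance you want to customize
obj.add_bullet = types.MethodType(curved_bullet, obj)
obj.add_bullet(curve=3.5, speed=100)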
Yet another way is to use an abstract factory pattern:
class Character(object):
def __init__(self, bfactory, *args, **kwargs):
self.bfactory = bfactory
def add_bullet(self):
sprite = self.bfactory.sprite()
speed = self.bfactory.speed()
# do stuff with your sprite and speed
class pretty_and_fast_factory(object):
def sprite(self):
return pretty_sprite
def speed(self):
return 10000000000.0
my_character = Character(pretty_and_fast_factory(), a1, a2, kw1=v1, kw2=v2)
my_character.add_bullet() # uses pretty_and_fast_factory
# now, if you have another factory called "ugly_and_slow_factory"
# you can change it at runtime in python by issuing
my_character.bfactory = ugly_and_slow_factory()
# In the last example you can see abstract factory and "method
# overloading" (as you call it) in action
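A slightly more self-contained sketch of the same factory idea, with made-up concrete values so it actually runs:
class PrettyAndFastFactory(object):
    def sprite(self):
        return "pretty_sprite.png"   # placeholder sprite
    def speed(self):
        return 10000000000.0

class Character(object):
    def __init__(self, bfactory, *args, **kwargs):
        self.bfactory = bfactory
    def add_bullet(self):
        print(self.bfactory.sprite(), self.bfactory.speed())

my_character = Character(PrettyAndFastFactory())
my_character.add_bullet()   # prints: pretty_sprite.png 10000000000.0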
You can use "roll-your-own" solution for function overloading. This one is copied from Guido van Rossum's article about multimethods (because there is little difference between multimethods and overloading in Python):
registry = {}
class MultiMethod(object):
def __init__(self, name):
self.name = name
self.typemap = {}
def __call__(self, *args):
types = tuple(arg.__class__ for arg in args) # a generator expression!
function = self.typemap.get(types)
if function is None:
raise TypeError("no match")
return function(*args)
def register(self, types, function):
if types in self.typemap:
raise TypeError("duplicate registration")
self.typemap[types] = function
def multimethod(*types):
def register(function):
name = function.__name__
mm = registry.get(name)
if mm is None:
mm = registry[name] = MultiMethod(name)
mm.register(types, function)
return mm
return register
The usage would be
from multimethods import multimethod
import unittest
# 'overload' makes more sense in this case
overload = multimethod
class Sprite(object):
pass
class Point(object):
pass
class Curve(object):
pass
class Direction(object):
pass
@overload(Sprite, Point, Direction, int)
def add_bullet(sprite, start, direction, speed):
# ...
@overload(Sprite, Point, Point, int, int)
def add_bullet(sprite, start, headto, speed, acceleration):
# ...
@overload(Sprite, str)
def add_bullet(sprite, script):
# ...
@overload(Sprite, Curve, int)
def add_bullet(sprite, curve, speed):
# ...
The most restrictive limitations at the moment are:
methods are not supported, only functions that are not class members;
inheritance is not handled;
kwargs are not supported;
registering new functions should be done at import time; the mechanism is not thread-safe
A possible option is to use the multipledispatch module as detailed here:
http://matthewrocklin.com/blog/work/2014/02/25/Multiple-Dispatch
Instead of doing this:
def add(self, other):
if isinstance(other, Foo):
...
elif isinstance(other, Bar):
...
else:
raise NotImplementedError()
You can do this:
from multipledispatch import dispatch
@dispatch(int, int)
def add(x, y):
return x + y
@dispatch(object, object)
def add(x, y):
return "%s + %s" % (x, y)
With the resulting usage:
>>> add(1, 2)
3
>>> add(1, 'hello')
'1 + hello'
In Python 3.4, PEP 443 (single-dispatch generic functions) was added.
Here is a short API description from PEP.
To define a generic function, decorate it with the @singledispatch decorator. Note that the dispatch happens on the type of the first argument. Create your function accordingly:
from functools import singledispatch
@singledispatch
def fun(arg, verbose=False):
if verbose:
print("Let me just say,", end=" ")
print(arg)
To add overloaded implementations to the function, use the register() attribute of the generic function. This is a decorator, taking a type parameter and decorating a function implementing the operation for that type:
@fun.register(int)
def _(arg, verbose=False):
if verbose:
print("Strength in numbers, eh?", end=" ")
print(arg)
@fun.register(list)
def _(arg, verbose=False):
if verbose:
print("Enumerate this:")
for i, elem in enumerate(arg):
print(i, elem)
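Calls are then routed on the type of the first argument, for example:
>>> fun("Hello, world.")
Hello, world.
>>> fun(42, verbose=True)
Strength in numbers, eh? 42
>>> fun(["spam", "eggs"], verbose=True)
Enumerate this:
0 spam
1 eggs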
The @overload decorator was added with type hints (PEP 484).
While this doesn't change the behaviour of Python, it does make it easier to understand what is going on, and for mypy to detect errors.
See: Type hints and PEP 484
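A small sketch of what that looks like: the @overload stubs describe the accepted signatures for type checkers such as mypy, while the single undecorated definition does the actual work at run time (the names here are illustrative):
from typing import overload

@overload
def add_bullet(sprite: str, speed: int) -> None: ...
@overload
def add_bullet(sprite: str, script: str) -> None: ...

def add_bullet(sprite, speed_or_script):
    # one real implementation; the overloads above only help static analysis
    if isinstance(speed_or_script, int):
        print("bullet with speed", speed_or_script)
    else:
        print("scripted bullet:", speed_or_script)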
This type of behaviour is typically solved (in OOP languages) using polymorphism. Each type of bullet would be responsible for knowing how it travels. For instance:
class Bullet(object):
def __init__(self):
self.curve = None
self.speed = None
self.acceleration = None
self.sprite_image = None
class RegularBullet(Bullet):
def __init__(self):
super(RegularBullet, self).__init__()
self.speed = 10
class Grenade(Bullet):
def __init__(self):
super(Grenade, self).__init__()
self.speed = 4
self.curve = 3.5
add_bullet(Grenade())
def add_bullet(bullet):
c_function(bullet.speed, bullet.curve, bullet.acceleration, bullet.sprite, bullet.x, bullet.y)
void c_function(double speed, double curve, double accel, char[] sprite, ...) {
if (speed != null && ...) regular_bullet(...)
else if (...) curved_bullet(...)
//..etc..
}
Pass as many arguments to the c_function as exist, and then do the job of determining which C function to call based on the values in that initial c_function. So Python should only ever be calling the one c_function. That one function looks at the arguments, and can then delegate to other C functions appropriately.
You're essentially just using each subclass as a different data container, but by defining all the potential arguments on the base class, the subclasses are free to ignore the ones they do nothing with.
When a new type of bullet comes along, you can simply define one more property on the base, change the one python function so that it passes the extra property, and the one c_function that examines the arguments and delegates appropriately. It doesn't sound too bad I guess.
It is impossible by definition to overload a function in Python (read on for details), but you can achieve something similar with a simple decorator:
class overload:
def __init__(self, f):
self.cases = {}
def args(self, *args):
def store_function(f):
self.cases[tuple(args)] = f
return self
return store_function
def __call__(self, *args):
function = self.cases[tuple(type(arg) for arg in args)]
return function(*args)
You can use it like this
@overload
def f():
pass
@f.args(int, int)
def f(x, y):
print('two integers')
@f.args(float)
def f(x):
print('one float')
f(5.5)
f(1, 2)
Modify it to adapt it to your use case.
A clarification of concepts
Function dispatch: there are multiple functions with the same name. Which one should be called? Two strategies:
Static/compile-time dispatch (aka "overloading"): decide which function to call based on the compile-time type of the arguments. In dynamic languages there is no compile-time type, so overloading is impossible by definition.
Dynamic/run-time dispatch: decide which function to call based on the run-time type of the arguments. This is what all OOP languages do: multiple classes have the same methods, and the language decides which one to call based on the type of the self/this argument. However, most languages do it for the this argument only. The above decorator extends the idea to multiple parameters.
To clear up, assume that we define, in a hypothetical static language, the functions
void f(Integer x):
print('integer called')
void f(Float x):
print('float called')
void f(Number x):
print('number called')
Number x = new Integer('5')
f(x)
x = new Number('3.14')
f(x)
With static dispatch (overloading) you will see "number called" twice, because x has been declared as Number, and that's all overloading cares about. With dynamic dispatch you will see "integer called, float called", because those are the actual types of x at the time the function is called.
By passing keyword args.
def add_bullet(**kwargs):
#check for the arguments listed above and do the proper things
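A rough sketch of what the body of such a function might do (the keys checked here are only examples):
def add_bullet(**kwargs):
    if 'script' in kwargs:
        print("script-controlled bullet")
    elif 'curve' in kwargs:
        print("curved bullet at speed", kwargs.get('speed'))
    elif 'acceleration' in kwargs:
        print("accelerating bullet towards", kwargs.get('headto'))
    else:
        print("plain bullet with speed", kwargs.get('speed'))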
Python 3.8 added functools.singledispatchmethod
Transform a method into a single-dispatch generic function.
To define a generic method, decorate it with the @singledispatchmethod
decorator. Note that the dispatch happens on the type of the first
non-self or non-cls argument; create your function accordingly:
from functools import singledispatchmethod
class Negator:
@singledispatchmethod
def neg(self, arg):
raise NotImplementedError("Cannot negate a")
@neg.register
def _(self, arg: int):
return -arg
@neg.register
def _(self, arg: bool):
return not arg
negator = Negator()
for v in [42, True, "Overloading"]:
neg = negator.neg(v)
print(f"{v=}, {neg=}")
Output
v=42, neg=-42
v=True, neg=False
NotImplementedError: Cannot negate a
@singledispatchmethod supports nesting with other decorators such as
@classmethod. Note that to allow for dispatcher.register,
singledispatchmethod must be the outermost decorator. Here is the
Negator class with the neg methods defined as static methods:
from functools import singledispatchmethod
class Negator:
@singledispatchmethod
@staticmethod
def neg(arg):
raise NotImplementedError("Cannot negate a")
@neg.register
def _(arg: int) -> int:
return -arg
@neg.register
def _(arg: bool) -> bool:
return not arg
for v in [42, True, "Overloading"]:
neg = Negator.neg(v)
print(f"{v=}, {neg=}")
Output:
v=42, neg=-42
v=True, neg=False
NotImplementedError: Cannot negate a
The same pattern can be used for other similar decorators:
staticmethod, abstractmethod, and others.
I think your basic requirement is to have a C/C++-like syntax in Python with the least headache possible. Although I liked Alexander Poluektov's answer, it doesn't work for classes.
The following should work for classes. It works by distinguishing by the number of non-keyword arguments (but it doesn't support distinguishing by type):
class TestOverloading(object):
def overloaded_function(self, *args, **kwargs):
# Call the function that has the same number of non-keyword arguments.
getattr(self, "_overloaded_function_impl_" + str(len(args)))(*args, **kwargs)
def _overloaded_function_impl_3(self, sprite, start, direction, **kwargs):
print "This is overload 3"
print "Sprite: %s" % str(sprite)
print "Start: %s" % str(start)
print "Direction: %s" % str(direction)
def _overloaded_function_impl_2(self, sprite, script):
print "This is overload 2"
print "Sprite: %s" % str(sprite)
print "Script: "
print script
And it can be used simply like this:
test = TestOverloading()
test.overloaded_function("I'm a Sprite", 0, "Right")
print
test.overloaded_function("I'm another Sprite", "while x == True: print 'hi'")
Output:
This is overload 3
Sprite: I'm a Sprite
Start: 0
Direction: Right
This is overload 2
Sprite: I'm another Sprite
Script:
while x == True: print 'hi'
You can also declare the alternatives with typing's @overload decorator, though this only informs static type checkers such as mypy; at run time the last definition is the one that actually gets called:
from typing import overload
@overload
def test(message: str):
return message
@overload
def test(number: int):
return number + 1
Either use multiple keyword arguments in the definition, or create a Bullet hierarchy whose instances are passed to the function.
I think a Bullet class hierarchy with the associated polymorphism is the way to go. You can effectively overload the base class constructor by using a metaclass so that calling the base class results in the creation of the appropriate subclass object. Below is some sample code to illustrate the essence of what I mean.
Updated
The code has been modified to run under both Python 2 and 3 to keep it relevant. This was done in a way that avoids the use of Python's explicit metaclass syntax, which varies between the two versions.
To accomplish that objective, a BulletMetaBase instance of the BulletMeta class is created by explicitly calling the metaclass when creating the Bullet baseclass (rather than using the __metaclass__= class attribute or via a metaclass keyword argument depending on the Python version).
class BulletMeta(type):
def __new__(cls, classname, bases, classdict):
""" Create Bullet class or a subclass of it. """
classobj = type.__new__(cls, classname, bases, classdict)
if classname != 'BulletMetaBase':
if classname == 'Bullet': # Base class definition?
classobj.registry = {} # Initialize subclass registry.
else:
try:
alias = classdict['alias']
except KeyError:
raise TypeError("Bullet subclass %s has no 'alias'" %
classname)
if alias in Bullet.registry: # unique?
raise TypeError("Bullet subclass %s's alias attribute "
"%r already in use" % (classname, alias))
# Register subclass under the specified alias.
classobj.registry[alias] = classobj
return classobj
def __call__(cls, alias, *args, **kwargs):
""" Bullet subclasses instance factory.
Subclasses should only be instantiated by calls to the base
class with their subclass' alias as the first arg.
"""
if cls != Bullet:
raise TypeError("Bullet subclass %r objects should not to "
"be explicitly constructed." % cls.__name__)
elif alias not in cls.registry: # Bullet subclass?
raise NotImplementedError("Unknown Bullet subclass %r" %
str(alias))
# Create designated subclass object (call its __init__ method).
subclass = cls.registry[alias]
return type.__call__(subclass, *args, **kwargs)
class Bullet(BulletMeta('BulletMetaBase', (object,), {})):
# Presumably you'd define some abstract methods here
# that would be supported by all subclasses.
# These definitions could just raise NotImplementedError() or
# implement the functionality in some sub-optimal generic way.
# For example:
def fire(self, *args, **kwargs):
raise NotImplementedError(self.__class__.__name__ + ".fire() method")
# Abstract base class's __init__ should never be called.
# If subclasses need to call super class's __init__() for some
# reason then it would need to be implemented.
def __init__(self, *args, **kwargs):
raise NotImplementedError("Bullet is an abstract base class")
# Subclass definitions.
class Bullet1(Bullet):
alias = 'B1'
def __init__(self, sprite, start, direction, speed):
print('creating %s object' % self.__class__.__name__)
def fire(self, trajectory):
print('Bullet1 object fired with %s trajectory' % trajectory)
class Bullet2(Bullet):
alias = 'B2'
def __init__(self, sprite, start, headto, speed, acceleration):
print('creating %s object' % self.__class__.__name__)
class Bullet3(Bullet):
alias = 'B3'
def __init__(self, sprite, script): # script controlled bullets
print('creating %s object' % self.__class__.__name__)
class Bullet4(Bullet):
alias = 'B4'
def __init__(self, sprite, curve, speed): # for bullets with curved paths
print('creating %s object' % self.__class__.__name__)
class Sprite: pass
class Curve: pass
b1 = Bullet('B1', Sprite(), (10,20,30), 90, 600)
b2 = Bullet('B2', Sprite(), (-30,17,94), (1,-1,-1), 600, 10)
b3 = Bullet('B3', Sprite(), 'bullet42.script')
b4 = Bullet('B4', Sprite(), Curve(), 720)
b1.fire('uniform gravity')
b2.fire('uniform gravity')
Output:
creating Bullet1 object
creating Bullet2 object
creating Bullet3 object
creating Bullet4 object
Bullet1 object fired with uniform gravity trajectory
Traceback (most recent call last):
File "python-function-overloading.py", line 93, in <module>
b2.fire('uniform gravity') # NotImplementedError: Bullet2.fire() method
File "python-function-overloading.py", line 49, in fire
raise NotImplementedError(self.__class__.__name__ + ".fire() method")
NotImplementedError: Bullet2.fire() method
You can easily implement function overloading in Python. Here is an example using floats and integers:
class OverloadedFunction:
def __init__(self):
self.router = {int : self.f_int ,
float: self.f_float}
def __call__(self, x):
return self.router[type(x)](x)
def f_int(self, x):
print('Integer Function')
return x**2
def f_float(self, x):
print('Float Function (Overloaded)')
return x**3
# f is our overloaded function
f = OverloadedFunction()
print(f(3 ))
print(f(3.))
# Output:
# Integer Function
# 9
# Float Function (Overloaded)
# 27.0
The main idea behind the code is that a class holds the different (overloaded) functions that you would like to implement, and a Dictionary works as a router, directing your code towards the right function depending on the input type(x).
PS1. In case of custom classes, like Bullet1, you can initialize the internal dictionary following a similar pattern, such as self.router = {Bullet1: self.f_Bullet1, ...}. The rest of the code is the same.
PS2. The time/space complexity of the proposed solution is fairly good as well, with an average cost of O(1) per operation.
Use keyword arguments with defaults. E.g.
def add_bullet(sprite, start=default, direction=default, script=default, speed=default):
In the case of a straight bullet versus a curved bullet, I'd add two functions: add_bullet_straight and add_bullet_curved.
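That might look like the following sketch (the parameter lists just mirror the question's examples):
def add_bullet_straight(sprite, start, direction, speed):
    ...  # straight-line bullet

def add_bullet_curved(sprite, curve, speed):
    ...  # bullet that follows a curved path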
Overloading methods is tricky in Python. However, you can get a long way by passing dicts, lists, or primitive values.
I have tried something for my use cases, and this could help people here understand how to "overload" methods.
Let's take your example:
A single method that is called from other classes with different kinds of arguments:
def add_bullet(sprite=None, start=None, headto=None, speed=None, acceleration=None):
Pass the arguments from the calling class:
add_bullet(sprite='test', start=True, headto={'lat': 10.6666, 'long': 10.6666}, acceleration=10.6)
Or
add_bullet(sprite='test', start=True, headto={'lat': 10.6666, 'long': 10.6666}, speed=['10', '20', '30'])
So lists, dictionaries, or primitive values can all be handled by a single method instead of overloading.
Try it out for your code.
Plum supports it in a straightforward pythonic way. Copying an example from the README below.
from plum import dispatch
@dispatch
def f(x: str):
return "This is a string!"
@dispatch
def f(x: int):
return "This is an integer!"
>>> f("1")
'This is a string!'
>>> f(1)
'This is an integer!'
You can also try this code. It works with any number of arguments:
# Finding the average of a given number of arguments
def avg(*args): # args collects all positional arguments into a tuple
total = 0
for i in args:
total += i
average = total/len(args) # len(args) is the number of arguments given
print("Avg: ", average)
# call function with different number of arguments
avg(1,2)
avg(5,6,4,7)
avg(11,23,54,111,76)

What's an example use case for a Python classmethod?

I've read What are Class methods in Python for? but the examples in that post are complex. I am looking for a clear, simple, bare-bones example of a particular use case for classmethods in Python.
Can you name a small, specific example use case where a Python classmethod would be the right tool for the job?
Helper methods for initialization:
class MyStream(object):
@classmethod
def from_file(cls, filepath, ignore_comments=False):
with open(filepath, 'r') as fileobj:
for obj in cls(fileobj, ignore_comments):
yield obj
@classmethod
def from_socket(cls, socket, ignore_comments=False):
raise NotImplementedError # Placeholder until implemented
def __init__(self, iterable, ignore_comments=False):
...
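Usage then reads naturally at the call site (the file path and the loop body are placeholders):
for obj in MyStream.from_file('objects.txt'):
    print(obj)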
Well __new__ is a pretty important construction hook (strictly speaking it is a static method rather than a classmethod). It's where instances usually come from,
so dict() calls dict.__new__ of course, but there is another handy way to make dicts sometimes, which is the classmethod dict.fromkeys()
eg.
>>> dict.fromkeys("12345")
{'1': None, '3': None, '2': None, '5': None, '4': None}
I don't know, something like named constructor methods?
class UniqueIdentifier(object):
value = 0
def __init__(self, name):
self.name = name
@classmethod
def produce(cls):
instance = cls(cls.value)
cls.value += 1
return instance
class FunkyUniqueIdentifier(UniqueIdentifier):
@classmethod
def produce(cls):
instance = super(FunkyUniqueIdentifier, cls).produce()
instance.name = "Funky %s" % instance.name
return instance
Usage:
>>> x = UniqueIdentifier.produce()
>>> y = FunkyUniqueIdentifier.produce()
>>> x.name
0
>>> y.name
Funky 1
The biggest reason for using a @classmethod is in an alternate constructor that is intended to be inherited. This can be very useful in polymorphism. An example:
class Shape(object):
# this is an abstract class that is primarily used for inheritance defaults
# here is where you would define classmethods that can be overridden by inherited classes
@classmethod
def from_square(cls, square):
# return a default instance of cls
return cls()
Notice that Shape is an abstract class that defines a classmethod from_square; since Shape is not really defined, it does not really know how to derive itself from a Square, so it simply returns a default instance of the class.
Inherited classes are then allowed to define their own versions of this method:
class Square(Shape):
def __init__(self, side=10):
self.side = side
@classmethod
def from_square(cls, square):
return cls(side=square.side)
class Rectangle(Shape):
def __init__(self, length=10, width=10):
self.length = length
self.width = width
@classmethod
def from_square(cls, square):
return cls(length=square.side, width=square.side)
class RightTriangle(Shape):
def __init__(self, a=10, b=10):
self.a = a
self.b = b
self.c = ((a*a) + (b*b))**(.5)
@classmethod
def from_square(cls, square):
return cls(a=square.side, b=square.side)
class Circle(Shape):
def __init__(self, radius=10):
self.radius = radius
@classmethod
def from_square(cls, square):
return cls(radius=square.side/2)
The usage allows you to treat all of these uninstantiated classes polymorphically
square = Square(3)
for polymorphic_class in (Square, Rectangle, RightTriangle, Circle):
this_shape = polymorphic_class.from_square(square)
This is all fine and dandy, you might say, but why couldn't I just use a @staticmethod to accomplish this same polymorphic behavior:
class Circle(Shape):
def __init__(self, radius=10):
self.radius = radius
@staticmethod
def from_square(square):
return Circle(radius=square.side/2)
The answer is that you could, but you do not get the benefits of inheritance because Circle has to be called out explicitly in the method. Meaning if I call it from an inherited class without overriding, I would still get Circle every time.
Notice what is gained when I define another shape class that does not really have any custom from_square logic:
class Hexagon(Shape):
def __init__(self, side=10):
self.side = side
# note the absence of a from_square classmethod here; Hexagon will use the one it inherits from Shape
Here you can leave from_square undefined and it will use the logic from Shape.from_square while retaining who cls is, returning the appropriate shape.
square = Square(3)
for polymorphic_class in (Square, Rectangle, RightTriangle, Circle, Hexagon):
this_shape = polymorphic_class.from_square(square)
I find that I most often use @classmethod to associate a piece of code with a class, to avoid creating a global function, for cases where I don't require an instance of the class to use the code.
For example, I might have a data structure which only considers a key valid if it conforms to some pattern. I may want to use this from inside and outside of the class. However, I don't want to create yet another global function:
def foo_key_is_valid(key):
# code for determining validity here
return valid
I'd much rather group this code with the class it's associated with:
class Foo(object):
@classmethod
def is_valid(cls, key):
# code for determining validity here
return valid
def add_key(self, key, val):
if not Foo.is_valid(key):
raise ValueError()
..
# lets me reuse that method without an instance, and signals that
# the code is closely-associated with the Foo class
Foo.is_valid('my key')
Another useful example of classmethod is in extending enumerated types. A classic Enum provides symbolic names which can be used later in the code for readability, grouping, type-safety, etc. This can be extended to add useful features using a classmethod. In the example below, Weekday is an enumerated type for the days of the week. It has been extended using a classmethod so that instead of keeping track of the weekday ourselves, the enumerated type can extract the date and return the related enum member.
from enum import Enum
from datetime import date
class Weekday(Enum):
MONDAY = 1
TUESDAY = 2
WEDNESDAY = 3
THURSDAY = 4
FRIDAY = 5
SATURDAY = 6
SUNDAY = 7
@classmethod
def from_date(cls, date):
return cls(date.isoweekday())
Weekday.from_date(date.today())
<Weekday.TUESDAY: 2>
Source: https://docs.python.org/3/howto/enum.html
class MyClass(object):
'''
classdocs
'''
obj=0
x=classmethod
def __init__(self):
'''
Constructor
'''
self.nom='lamaizi'
self.prenom='anas'
self.age=21
self.ville='Casablanca'
if __name__ == '__main__':
ob=MyClass()
print(ob.nom)
print(ob.prenom)
print(ob.age)
print(ob.ville)
