I really like the syntax of the "magic methods" or whatever they are called in Python, like
class foo:
    def __add__(self, other):  # it can be called like c = a + b
        pass
The call
c = a + b
is then translated to
a.__add__(b)
Is it possible to mimic such behaviour for "non-magic" functions? In numerical computations I need the Kronecker product, and I am eager to have a kron function such that
kron(a, b)
is in fact
a.kron(b)
The use case is: I have two similar classes, say, matrix and vector, both having a Kronecker product. I would like to call it like this:
a = matrix()
b = matrix()
c = kron(a,b)
a = vector()
b = vector()
c = kron(a,b)
The matrix and vector classes are defined in one .py file, and thus share a common namespace. So, what is the best (Pythonic?) way to implement functions like the above? Possible solutions:
1) Have one kron() function and do a type check
2) Have different namespaces
3) ?
The Python default operator methods (__add__ and such) are hard-wired; Python will look for them because the operator implementations look for them.
However, there is nothing stopping you from defining a kron function that does the same thing; look for __kron__ or __rkron__ on the objects passed to it:
def kron(a, b):
    if hasattr(a, '__kron__'):
        return a.__kron__(b)
    if hasattr(b, '__rkron__'):
        return b.__rkron__(a)
    # default kron implementation here
    return complex_operation_on_a_and_b(a, b)
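For illustration, a class can opt in by defining __kron__ (a minimal sketch; the Matrix class and its list-of-lists representation are made up):

class Matrix:
    def __init__(self, data):
        self.data = data  # list of rows

    def __kron__(self, other):
        # Kronecker product for list-of-lists matrices
        return Matrix([[x * y for x in row_a for y in row_b]
                       for row_a in self.data for row_b in other.data])

a = Matrix([[1, 2]])
b = Matrix([[0, 1]])
c = kron(a, b)   # dispatches to a.__kron__(b)
print(c.data)    # [[0, 1, 0, 2]]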
What you're describing is multiple dispatch or multimethods. Magic methods are one way to implement them, but it's actually more usual to have an object that you can register type-specific implementations on.
For example, http://pypi.python.org/pypi/multimethod/ will let you write
@multimethod(matrix, matrix)
def kron(lhs, rhs):
    pass

@multimethod(vector, vector)
def kron(lhs, rhs):
    pass
It's quite easy to write a multimethod decorator yourself; the BDFL describes a typical implementation in an article. The idea is that the multimethod decorator associates the type signature and method with the method name in a registry, and replaces the method with a generated method that performs type lookup to find the best match.
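For illustration, a minimal sketch of such a decorator, closely following the approach described in that article (exact match on argument types, no inheritance handling):

registry = {}

class MultiMethod:
    def __init__(self, name):
        self.name = name
        self.typemap = {}

    def register(self, types, function):
        self.typemap[types] = function

    def __call__(self, *args):
        types = tuple(type(arg) for arg in args)
        function = self.typemap.get(types)
        if function is None:
            raise TypeError("no match for %s%r" % (self.name, types))
        return function(*args)

def multimethod(*types):
    def wrapper(function):
        name = function.__name__
        mm = registry.setdefault(name, MultiMethod(name))
        mm.register(types, function)
        return mm  # the name now refers to the dispatcher
    return wrapper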
Technically speaking, implementing something similar to the "standard" operator (and operator-like - think len() etc) behaviour is not difficult:
def kron(a, b):
    if hasattr(a, '__kron__'):
        return a.__kron__(b)
    elif hasattr(b, '__kron__'):
        return b.__kron__(a)
    else:
        raise TypeError("your error message here")
Now you just have to add a __kron__(self, other) method on the relevant types (assuming you have control over these types, and that they don't use __slots__ or anything else that would prevent adding methods outside the class statement's body).
Note that I would not actually use a __magic__ naming scheme as in the snippet above, since names of that form are supposed to be reserved for the language itself.
Another solution would be to maintain a type-to-function mapping and have the "generic" kron function look up the mapping, i.e.:
# kron.py
from somewhere import Matrix, Vector

def matrix_kron(a, b):
    # code here
    ...

def vector_kron(a, b):
    # code here
    ...

KRON_IMPLEMENTATIONS = {
    Matrix: matrix_kron,
    Vector: vector_kron,
}

def kron(a, b):
    for typ in (type(a), type(b)):
        implementation = KRON_IMPLEMENTATIONS.get(typ)
        if implementation:
            return implementation(a, b)
    raise TypeError("your message here")
This solution doesn't work well with inheritance, but it is "less surprising" - it requires neither monkeypatching nor __magic__ names etc.
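If inheritance matters, one possible variant (a sketch, reusing the KRON_IMPLEMENTATIONS mapping from above) walks each argument type's MRO to find the closest registered implementation:

def kron(a, b):
    for typ in (type(a), type(b)):
        for klass in typ.__mro__:  # base classes, most specific first
            implementation = KRON_IMPLEMENTATIONS.get(klass)
            if implementation:
                return implementation(a, b)
    raise TypeError("no kron implementation for %s and %s" % (type(a), type(b)))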
I think having one single function that delegates the actual computation is a nice way to do it. If the Kronecker product only works on two similar classes, you can even do the type checking in the function:
def kron(a, b):
    if type(a) != type(b):
        raise TypeError('expected two instances of the same class, got %s and %s' % (type(a), type(b)))
    return a._kron_(b)
Then, you just need to define a _kron_ method on the class. This is only a basic example; you might want to improve it to handle more gracefully the case where a class doesn't have the _kron_ method, or to handle subclasses.
Binary operations in the standard library usually have a reverse dual (__add__ and __radd__), but since your operator only works for same-type objects, that isn't useful here.
Related
As I currently understand it, arithmetic operators like '+' and '-' are a special kind of method belonging to the integer class. They seem different to me because you don't have to write arithmetic operations as x.__add__(y), but that is what happens behind the scenes when you write x + y.
My first question is: am I right so far?
My second question is: what happens in the __add__ method? I can't find this in any documentation. I want to understand how this doesn't lead to infinite regression, as I can only picture this method as something like this:
def __add__(a, b):
    return a + b
but then of course, you didn't explain the '+' away, which leads to the infinite regression.
I hope my question is clear, as it's all a bit fuzzy in my head. Basically I'm trying to get a good understanding of what the fundamentals of Python are. (and maybe in other languages?)
Python does, indeed, translate the + and - operators to .__add__() calls, but it will also use the __radd__() method on the second operand for the reverse (reflected) operation. This allows custom types to hook into the operator when used with standard types.
What happens for x + y is:
1. If type(y) is a proper subclass of type(x) and it overrides __radd__, try y.__radd__(x) first; this lets you override behaviour with more specific classes.
2. Try x.__add__(y); if that succeeds, that is the outcome of the expression. If this call returns the special NotImplemented singleton, move on to the next step.
3. Try y.__radd__(x); if that succeeds, that is the outcome of the expression. If it returns NotImplemented too, raise a TypeError exception: the operator failed.
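A small sketch demonstrating these rules (the class names are made up):

class Base:
    def __add__(self, other):
        print("Base.__add__ called")
        return NotImplemented  # would signal: cannot handle this operand

class Sub(Base):
    def __radd__(self, other):
        print("Sub.__radd__ called")
        return "handled by Sub"

# Sub is a subclass of Base and overrides __radd__, so Sub.__radd__ is
# tried first and Base.__add__ is never called here:
print(Base() + Sub())  # prints "Sub.__radd__ called", then "handled by Sub"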
Because the Python built-in types are implemented in C code, the actual implementation of __add__ doesn't lead to infinite recursion. The C code for int.__add__ takes the C integer values and uses the C + operator, which just adds the numbers together.
In custom Python objects, you usually express adding in terms of adding up attributes or other values:
def __add__(self, other):
    if not isinstance(other, type(self)):
        return NotImplemented  # cannot handle other types
    return type(self)(self.foobar + other.foobar + self.summation_margin)
where the attributes have their own __add__ implementations, perhaps.
Regarding the __add__(a, b) for numbers:
I am no Python expert, but my guess is that this subsequently calls native code which performs the actual computation. It is implemented in the language in which the Python implementation you are using is written. For example, if you are using CPython, it would call a (compiled) function from Python's source code written in C.
The __add__ method for number types is almost certainly implemented in native code, so infinite recursion is not a likely scenario; your return a + b would actually be a native code call.
Well, the + sign is an operator, so it's a basic building block of any programming language. What Python and most other OOP languages allow you to do is to define + operators for custom classes. This is done by defining __add__ methods in your new class.
Hope this helps your understanding
You are correct that:
class Test(object):
    def __add__(self, other):
        return self + other
would cause problems:
>>> a = Test()
>>> b = Test()
>>> a + b
Traceback (most recent call last):
File "<pyshell#30>", line 1, in <module>
a + b
File "<pyshell#27>", line 4, in __add__
return self + other
...
File "<pyshell#27>", line 4, in __add__
return self + other
RuntimeError: maximum recursion depth exceeded
However, that is not how class addition is implemented. Usually, you would define addition of instances as being an addition over the attributes, e.g.:
class Money(object):
    def __init__(self, amount):
        self.amount = amount

    def __add__(self, other):
        return Money(self.amount + other.amount)
The addition of amount attributes within __add__ will depend on the implementation of __add__ for whatever type amount is, but as:
>>> 1 + 2
3
works you can assume it isn't turtles all the way down!
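For example, with the Money class above:
>>> m = Money(1) + Money(2)
>>> m.amount
3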
Same example from the same book: Python deep nesting factory functions
def maker(N):
    def action(X):
        return X ** N
    return action
I understand the concept behind it and I think it's really neat, but I can't seem to envision when I could use this approach.
I could easily have implemented the above by having maker() take both N and X as arguments instead.
Has anyone used this type of factory function, and can you explain why you went with this approach instead of just taking multiple arguments?
Is it just user preference?
squarer = maker(2)
print(squarer(2)) # outputs 4
print(squarer(4)) # outputs 16
print(squarer(8)) # outputs 64
Essentially, it means you only have to enter the N value once, and then you can't change it later.
I think it's mostly programming style, as there are multiple ways of doing the same thing. However, this way you can only enter the N value once, so you could add code to test that it's a valid value once, instead of checking each time you call the function.
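For example, a sketch where the argument is validated once at creation time (the integer check is just an illustrative assumption):

def maker(N):
    # validate N once, when the function is created
    if not isinstance(N, int):
        raise TypeError("N must be an integer")
    def action(X):
        return X ** N  # N is already validated; no per-call check needed
    return action

squarer = maker(2)
print(squarer(4))   # 16
# maker("two")      # would raise TypeError immediately, at creation time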
EDIT
just thought of a possible example (though it's usually handled by using a class):
writer = connectmaker("127.0.0.1")
writer("send this text")
writer("send this other text")
The "maker" method would then connect to the address once and then maintain that value for each call to writer(). But as I said, something like this is usually a class where the __init__ would store the values.
In a certain way, you can see some of the functions in the operator module as factories like this as well.
For example, operator.itemgetter() works this way:
import operator
get1 = operator.itemgetter(1) # creates a function which gets the item #1 of the given object
get1([5,4,3,2,1]) # gives 4
This is often used, e.g., as a key= function of sorting functions and such.
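For instance, sorting a list of pairs by their second element:
pairs = [('a', 3), ('b', 1), ('c', 2)]
sorted(pairs, key=get1)  # gives [('b', 1), ('c', 2), ('a', 3)]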
Similar, more dedicated use cases are easily imaginable if you have a concrete problem which you can solve with this.
In the same league you have these "decorator creators":
def indirect_deco(outer_param):
    def real_deco(func):
        def wrapper(*a, **k):
            return func(outer_param, *a, **k)
        return wrapper
    return real_deco
@indirect_deco(1)
def function(a, b, c):
    print((a, b, c))

function(234, 432)
Here as well, the outer function is a factory function which creates the "real deco" function. This, in turn, even creates another one which replaces the originally given one.
I figured out that by deriving from str and overriding __new__ you can customize strings. Do you know any magic that would create a lazily initialized string?
Given
def f(a, b):
    print("f called")
    return a + b

s = f("a", "b")
print("Starting")
print(s)
how can I add a decorator to the function f such that this function is executed only after "Starting" was printed (basically on first access)? Seems tricky... :)
I can do it when objects are returned, because there I can intercept attribute access. However, strings don't use attribute access?
There may be simpler ways of doing what you want -- however, I once wrote a generic "lazy decorator" for generic functions that does exactly what you are asking for. Mind that it is more complicated exactly because it works for almost any kind of object returned by the functions.
The basic idea: for a given existing object, Python does not actually "use" its value except by calling one of the "dunder" (magic double "__") methods of the object's class - be it for representing it (calls to __repr__, __str__, __unicode__), getting attributes from it, making calls, using it as an operand in an arithmetic operation, and so on.
So this decorator, when the function is called, basically stores the parameters and waits for any of these magic methods to be called, whereupon it makes the original call and caches the return value.
The source code is here:
https://github.com/jsbueno/metapython/blob/main/lazy_decorator.py
The methods you're looking for are __str__(), __repr__(), and __unicode__().
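A minimal sketch of the idea, intercepting just __str__ and __repr__ (LazyStr is a made-up name; since it is not a real str subclass, it won't work everywhere a string is expected):

class LazyStr(object):
    def __init__(self, func):
        self.func = func
        self.value = None

    def _materialize(self):
        if self.value is None:
            self.value = self.func()  # compute on first access, then cache
        return self.value

    def __str__(self):
        return self._materialize()

    def __repr__(self):
        return repr(self._materialize())

s = LazyStr(lambda: f("a", "b"))  # reusing f from the question
print("Starting")
print(s)  # "f called" is only printed here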
Try using the LazyString class from stringlike, like this:
from stringlike.lazy import LazyString
def f(a, b):
print("f called")
return a+b
s = LazyString(lambda: f("a", "b"))
print("Starting")
print(s)
class foo(object):
    def __init__(self, f):
        self.f = f

    def __call__(self, args_list):
        def wrapped_f(args_list):
            return [self.f(*args) for args in args_list]
        return wrapped_f(args_list)
if __name__ == '__main__':
    class abc(object):
        @foo
        def f(a, b, c):
            return a + b + c

    a = range(5)
    b = range(5)
    c = range(5)
    data = list(zip(a, b, c))
    print(abc.f(data))
I wrote this a few years back. When you decorate any function f(X) with @foo, it becomes f(list of Xs).
What is this process called? What is it? What is its functional-programming name?
It's not currying. I know a simple map(f, list of Xs) could have done it.
What are decorators / the operation of decorating called mathematically?
There are two transformations performed on your original function:
it is converted from a function of three arguments to a function that takes a 3-tuple
it is converted from a function of a 3-tuple to a function that takes a list of 3-tuples
First transformation
In Haskell, there is a function called uncurry. (That is the two-argument version; 3-, 4-, ... versions could easily be created, too.)
Second transformation
Also in Haskell, there are sets of functions with lift in their names. Here's a page on the Haskell wiki about lifting. I think that page explains it better than I could:
Lifting is a concept which allows you to transform a function into a
corresponding function within another (usually more general) setting.
So in your case, you're lifting a function from operating on tuples to operating on a list of tuples.
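Although the terminology is Haskell's, both transformations are easy to sketch in Python (uncurry and lift_to_list are made-up names mirroring that terminology):

def uncurry(f):
    # function of three arguments -> function of one 3-tuple
    return lambda triple: f(*triple)

def lift_to_list(f):
    # function of a 3-tuple -> function of a list of 3-tuples
    return lambda triples: [f(t) for t in triples]

def add3(a, b, c):
    return a + b + c

lifted = lift_to_list(uncurry(add3))
lifted([(1, 2, 3), (4, 5, 6)])  # gives [6, 15]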
Notes:
the OP asked for the mathematical name for decorators. I don't know what that would be, but I've heard that Haskell is supposed to be like executable mathematics, so I think Haskell's terminology is a good starting point. YMMV.
the OP asked for the FP name of these processes. Again, I don't know, but I assume that Haskell's terminology is acceptable.
Decorators just have special syntax, but there are no rules about what decorators can return, and no mathematical description. They can be any callable, after all.
Your function is just a partially applied starmap:
from functools import partial
from itertools import starmap

def foo(f):
    return partial(starmap, f)
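For example (note that starmap returns an iterator rather than a list, so the result is wrapped in list() here):

@foo
def f(a, b, c):
    return a + b + c

data = list(zip(range(5), range(5), range(5)))
print(list(f(data)))  # [0, 3, 6, 9, 12]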
In a functional language like Haskell, you would do this by partially applying the map function to a function which takes a tuple of arguments, resulting in a function which takes a list of argument tuples. As Jochen Ritzel pointed out in another answer, even in Python you can implement this pretty trivially using functools.partial.
Therefore I suppose this process is called "partial application of map", or some such thing. I'm not aware of any particular name for this special case.
They are simply called decorators. What this one does can be called function chaining or function annotation, but I looked around quite a bit and found no special functional/mathematical name for this process besides those two (chaining/annotation).
From PEP 318 -- Decorators for Functions and Methods:
On the name 'Decorator'
There's been a number of complaints about the choice of the name
'decorator' for this feature. The major one is that the name is not
consistent with its use in the GoF book [11]. The name 'decorator'
probably owes more to its use in the compiler area -- a syntax tree is
walked and annotated. It's quite possible that a better name may turn
up.
A generic function is dispatched based on the type of all its arguments. The programmer defines several implementations of a function. The correct one is chosen at call time based on the types of its arguments. This is useful for object adaptation among other things. Python has a few generic functions including len().
These packages tend to allow code that looks like this:
@when(int)
def dumbexample(a):
    return a * 2

@when(list)
def dumbexample(a):
    return [("%s" % i) for i in a]
dumbexample(1) # calls first implementation
dumbexample([1,2,3]) # calls second implementation
A less dumb example I've been thinking about lately would be a web component that requires a User. Instead of requiring a particular web framework, the integrator would just need to write something like:
class WebComponentUserAdapter(object):
    def __init__(self, guest):
        self.guest = guest

    def canDoSomething(self):
        return self.guest.member_of("something_group")

@when(my.webframework.User)
def componentNeedsAUser(user):
    return WebComponentUserAdapter(user)
Python has a few generic functions implementations. Why would I chose one over the others? How is that implementation being used in applications?
I'm familiar with Zope's zope.component.queryAdapter(object, ISomething). The programmer registers a callable adapter that takes a particular class of object as its argument and returns something compatible with the interface. It's a useful way to allow plugins. Unlike monkey patching, it works even if an object needs to adapt to multiple interfaces with the same method names.
I'd recommend the PEAK-Rules library by P. Eby. By the same author is the RuleDispatch package (the predecessor of PEAK-Rules), though it is deprecated and no longer maintained, IIRC.
PEAK-Rules has a lot of nice features, one being that it is (well, not easily, but) extensible. Besides "classic" dispatch on types only, it features dispatch on arbitrary expressions as "guardians".
The len() function is not a true generic function (at least not in the sense of the packages mentioned above, nor in the sense this term is used in languages like Common Lisp, Dylan, or Cecil), as it is simply convenient syntax for a call to a specially named (but otherwise regular) method:
len(s) == s.__len__()
Also note, that this is single-dispatch only, that is, the actual receiver (s in the code above) determines the method implementation called. And even a hypothetical
def call_special(receiver, *args, **keys):
    return receiver.__call_special__(*args, **keys)
is still a single-dispatch function, as only the receiver is used when the method to be called is resolved. The remaining arguments are simply passed on, but they don't affect the method selection.
This is different from multiple dispatch, where there is no dedicated receiver, and all arguments are used in order to find the actual method implementation to call. This is what actually makes the whole thing worthwhile. If it were only some odd kind of syntactic sugar, nobody would bother using it, IMHO.
from peak.rules import abstract, when

@abstract
def serialize_object(object, target):
    pass

@when(serialize_object, (MyStuff, BinaryStream))
def serialize_object(object, target):
    target.writeUInt32(object.identifier)
    target.writeString(object.payload)

@when(serialize_object, (MyStuff, XMLStream))
def serialize_object(object, target):
    target.openElement("my-stuff")
    target.writeAttribute("id", str(object.identifier))
    target.writeText(object.payload)
    target.closeElement()
In this example, a call like
serialize_object(MyStuff(10, "hello world"), XMLStream())
considers both arguments in order to decide which method must actually be called.
For a nice usage scenario of generic functions in Python, I'd recommend reading the refactored code of peak.security, which gives a very elegant solution to access permission checking using generic functions (using RuleDispatch).
You can use a construction like this:
def my_func(*args, **kwargs):
    pass
In this case args will be a tuple of any unnamed (positional) arguments, and kwargs will be a dictionary of the named ones. From here you can detect their types and act as appropriate.
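For example, a sketch of the question's dumbexample written this way (purely illustrative, without any generic-function library):

def dumbexample(*args, **kwargs):
    a = args[0]
    if isinstance(a, int):
        return a * 2
    elif isinstance(a, list):
        return [("%s" % i) for i in a]
    raise TypeError("unsupported type: %r" % type(a))

dumbexample(1)          # 2
dumbexample([1, 2, 3])  # ['1', '2', '3']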
I'm unable to see the point in these "generic" functions. It sounds like simple polymorphism.
Your "generic" features can be implemented like this without resorting to any run-time type identification.
class intWithDumbExample(int):
    def dumbexample(self):
        return self * 2

class listWithDumbExample(list):
    def dumbexample(self):
        return [("%s" % i) for i in self]

def dumbexample(a):
    return a.dumbexample()