Convert partial function to method in Python

Consider the following (broken) code:
import functools

class Foo(object):
    def __init__(self):
        def f(a, self, b):
            print a + b
        self.g = functools.partial(f, 1)

x = Foo()
x.g(2)
What I want to do is take the function f and partially apply it, resulting in a function g(self, b). I would like to use this function as a method; however, this does not currently work, and instead I get the error
Traceback (most recent call last):
  File "test.py", line 8, in <module>
    x.g(2)
TypeError: f() takes exactly 3 arguments (2 given)
Doing x.g(x, 2) works, however, so it seems the issue is that g is considered a "normal" function instead of a method of the class. Is there a way to get x.g to behave like a method (i.e. implicitly pass the self parameter) instead of a function?

There are two issues at hand here. First, for a function to be turned into a method it must be stored on the class, not the instance. A demonstration:
class Foo(object):
    def a(*args):
        print 'a', args

def b(*args):
    print 'b', args
Foo.b = b

x = Foo()

def c(*args):
    print 'c', args
x.c = c
So a is a function defined in the class definition, b is a function assigned to the class afterwards, and c is a function assigned to the instance. Take a look at what happens when we call them:
>>> x.a('a will have "self"')
a (<__main__.Foo object at 0x100425ed0>, 'a will have "self"')
>>> x.b('as will b')
b (<__main__.Foo object at 0x100425ed0>, 'as will b')
>>> x.c('c will only receive this string')
c ('c will only receive this string',)
As you can see there is little difference between a function defined along with the class, and one assigned to it later. I believe there is actually no difference as long as there is no metaclass involved, but that is for another time.
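One quick way to check this (a small sketch reusing the Foo above): both a and b end up as plain function objects in the class dict, which is why they behave identically.
>>> type(Foo.__dict__['a']), type(Foo.__dict__['b'])
(<type 'function'>, <type 'function'>)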
The second problem comes from how a function is actually turned into a method in the first place; the function type implements the descriptor protocol. (See the docs for details.) In a nutshell, the function type has a special __get__ method which is called when you perform an attribute lookup on the class itself. Instead of you getting the function object, the __get__ method of that function object is called, and that returns a bound method object (which is what supplies the self argument).
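To make the mechanism concrete, here is a minimal sketch (the class and function names are illustrative) that calls __get__ by hand and gets back the same bound method that attribute lookup produces:
def f(self):
    print 'called with', self

class Demo(object):
    pass

Demo.f = f
x = Demo()

bound = f.__get__(x, Demo)  # what the lookup x.f does behind the scenes
bound()                     # prints: called with <__main__.Demo object at ...>
print bound == x.f          # True: both are bound methods wrapping f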
Why is this a problem? Because the functools.partial object is not a descriptor!
>>> import functools
>>> def f(*args):
... print 'f', args
...
>>> g = functools.partial(f, 1, 2, 3)
>>> g
<functools.partial object at 0x10042f2b8>
>>> g.__get__
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'functools.partial' object has no attribute '__get__'
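To confirm the diagnosis, here is a hedged sketch (not a standard library feature): give a subclass of functools.partial a __get__ and store it on the class, and the binding machinery starts working:
import functools

class bindable_partial(functools.partial):
    # illustrative subclass: defining __get__ turns the partial
    # into a descriptor, just like a plain function
    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return functools.partial(self, obj)

def f(a, self, b):
    print a + b

class Foo(object):
    g = bindable_partial(f, 1)  # stored on the class, not the instance

x = Foo()
x.g(2)  # calls f(1, x, 2) and prints 3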
There are a number of options you have at this point. You can explicitly supply the self argument to the partial:
import functools

class Foo(object):
    def __init__(self):
        def f(self, a, b):
            print a + b
        self.g = functools.partial(f, self, 1)

x = Foo()
x.g(2)
...or you could embed self and the value of a in a closure:
class Foo(object):
    def __init__(self):
        a = 1
        def f(b):
            print a + b
        self.g = f

x = Foo()
x.g(2)
These solutions of course assume that there is some as-yet-unspecified reason for assigning the method to the instance in the constructor like this, as you can very easily just define the method directly on the class instead (see the sketch below).
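For reference, a minimal sketch of that direct definition (assuming the partially applied 1 is all the partial was for):
class Foo(object):
    def g(self, b):
        print 1 + b

x = Foo()
x.g(2)  # prints 3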
Edit: Here is an idea for a solution assuming the functions may be created for the class, instead of the instance:
class Foo(object):
    pass

def make_binding(name):
    def f(self, *args):
        print 'Do %s with %s given %r.' % (name, self, args)
    return f

for name in 'foo', 'bar', 'baz':
    setattr(Foo, name, make_binding(name))

f = Foo()
f.foo(1, 2, 3)
f.bar('some input')
f.baz()
Gives you:
Do foo with <__main__.Foo object at 0x10053e3d0> given (1, 2, 3).
Do bar with <__main__.Foo object at 0x10053e3d0> given ('some input',).
Do baz with <__main__.Foo object at 0x10053e3d0> given ().

This will work, but I'm not sure if it is what you are looking for:
import functools

class Foo(object):
    def __init__(self):
        def f(a, self, b):
            print a + b
        self.g = functools.partial(f, 1, self)  # <= passing `self` also

x = Foo()
x.g(2)

this is simply a concrete example of what i believe is the most correct (and therefore pythonic :) way to solve this -- as the best solution (definition on a class!) was never shown -- @MikeBoers explanations are otherwise solid.
i've used this pattern quite a bit (recently for a proxied API), and it's survived untold production hours without the slightest irregularity.
from functools import update_wrapper
from functools import partial
from types import MethodType

class Basic(object):
    def add(self, **kwds):
        print sum(kwds.values())

Basic.add_to_one = MethodType(
    update_wrapper(partial(Basic.add, a=1), Basic.add),
    None,
    Basic,
)

x = Basic()
x.add(a=1, b=9)
x.add_to_one(b=9)
...yields:
10
10
...the key take-home-point here is MethodType(func, inst, cls), which creates an unbound method from another callable (you can even use this to chain/bind instance methods to unrelated classes... when instantiated+called the original instance method will receive BOTH self objects!)
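to illustrate MethodType on its own, a minimal sketch (names are illustrative) binding a plain function to an existing instance:
from types import MethodType

class Bar(object):
    pass

def greet(self):
    print 'hello from', self

b = Bar()
b.greet = MethodType(greet, b, Bar)  # an already-bound method, stored on the instance
b.greet()                            # equivalent to greet(b)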
note the exclusive use of keyword arguments! while there might be a better way to handle this, positional args are generally a PITA because the placement of self becomes less predictable. also, IME anyway, using *args and **kwds in the bottom-most function has proven very useful later on.

functools.partialmethod() is available since Python 3.4 for this purpose. Note that it relies on the descriptor protocol, so it has to be stored on the class rather than the instance, and the wrapped function receives the instance as its first argument:
import functools

class Foo(object):
    def f(self, a, b):
        print(a + b)
    g = functools.partialmethod(f, 1)

x = Foo()
x.g(2)  # calls f(x, 1, 2) and prints 3

Related

What is the pythonic way of instantiating a class, calling one of its methods, and returning it from a lambda function?

I am dealing with widgets and signals and I want to bind a signal to a certain callback. Since I don't really need to create a named callback function in the case of interest, I am defining it as a lambda function. However, the way it integrates with other classes is best described by the following minimal working example:
class Foo():
    def parse(self, what):
        self.bar = what

foo = lambda x=Foo(): (x.parse("All set"), x)[-1]
print(foo().bar)
'All set'
The lambda function needs to instantiate a class, call one of its members to parse a string and change its internal state, and return the instantiated class. The only way to do this that I can think of at the moment is as shown in the above example: pass the instance as the default argument, create a list where the first element is the call to the method and the second is the instance itself, then select the last element.
Is there a more pythonic and elegant way of obtaining the same result?
EDIT: A few caveats: In the actual code the class Foo is defined in other modules, and I'm passing the lambda as an argument to another function, hence why I don't really need to name the callback. Indeed, what I actually have is something that looks like
widget.bind( 'some_signal', lambda t, x = Foo(): (x.parse(t), x)[-1] )
The most pythonic solution is to not use a lambda:
def foo():
    x = Foo()
    x.parse("All set")
    return x

print(foo().bar)
Lambdas in python are a syntactic convenience and are strictly less powerful than named functions.
A factory function achieves the goal of avoiding a separate named function in the code that wires up the callback. I would consider this pythonic. Using a lambda function the way you have to here is definitely not pythonic.
def create_callback(data):
    def callback():
        x = Foo()
        x.parse(data)
        return x
    return callback
What about:
def callback():
    x = Foo()
    x.parse("All set")
    return x

widget.bind('some_signal', callback)
Note that your lambda will instantiate Foo() only once, when the lambda is defined: default arguments are evaluated at definition time, not per call.
Indeed,
foo = lambda d=dict(): d
d = foo()
d['hello'] = 'world'
print(foo()) # This will print {'hello': 'world'} instead of {}
based on @Vaibhav Sagar's answer, a bit modified:
class Foo():
    def parse(self, what):
        self.bar = what

def foo_factory(what):
    instance = Foo()
    instance.parse(what)
    return instance

all_set = foo_factory('All set')
ok = foo_factory('Ok')
ready = foo_factory('ready')

print(all_set)
print(all_set.bar)
print(ok)
print(ok.bar)
print(ready)
print(ready.bar)
Output:
<__main__.Foo object at 0x7f56a176cc50>
All set
<__main__.Foo object at 0x7f56a176cc88>
Ok
<__main__.Foo object at 0x7f56a176ccf8>
ready

what gets returned when you return self?

What gets returned when you return self inside a Python class? Where exactly would we use return self? In the example below, what does returning self actually return?
class Fib:
    '''iterator that yields numbers in the Fibonacci sequence'''
    def __init__(self, max):
        self.max = max

    def __iter__(self):
        self.a = 0
        self.b = 1
        return self

    def __next__(self):
        fib = self.a
        if fib > self.max:
            raise StopIteration
        self.a, self.b = self.b, self.a + self.b
        print(self.a, self.b)
        return fib
Python treats a method call object.method() approximately like method(object). The docs say that "x.f() is exactly equivalent to MyClass.f(x)". This means that a method receives the object it is called on as its first argument. By convention, in method definitions this first argument is called self.
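A quick check of that equivalence (a minimal sketch; MyClass and f are illustrative):
class MyClass:
    def f(self):
        return 'hello'

x = MyClass()
print(x.f() == MyClass.f(x))  # True: both pass x as self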
So self is the conventional name of the object owning the method.
Now, why would we want to return self? In your particular example, it is because the object implements the iterator protocol, which basically means it has __iter__ and __next__ methods. The __iter__ method must (according to the docs) "Return the iterator object itself", which is exactly what is happening here.
As an aside, another common reason for returning self is to support method chaining, where you would want to do object.method1().method2().method3() where all those methods are defined in the same class. This pattern is quite common in libraries like pandas.
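A minimal sketch of that chaining pattern (Counter is just an illustrative class):
class Counter:
    def __init__(self):
        self.n = 0

    def incr(self):
        self.n += 1
        return self  # returning self is what lets the calls chain

print(Counter().incr().incr().incr().n)  # 3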
The name self refers to the instance you are calling the method on (it is a convention, not a keyword).
This is particularly useful for chaining. In your example, let's say we want to call __next__() on an initialized Fib instance. Since __iter__() returns self, the following are equivalent:
obj = Fib(5)
obj.__iter__() # Initialize obj
obj.__next__()
And
obj = Fib(5).__iter__() # Create AND initialize obj
obj.__next__()
In your particular example, returning self returns the instance of the Fib class on which you called __iter__() (called obj in my small snippet).
Hope it'll be helpful.
Partial Answer:
When you return self, you return the class instance. For example:
class Foo:
    def __init__(self, a):
        self.a = a

    def ret_self(self):
        return self
If I create an instance and run ret_self, you will see that they both refer to the same instance:
>>> x = Foo("a")
>>> x
<__main__.Foo instance at 0x0000000002823D48>
>>> x.ret_self()
<__main__.Foo instance at 0x0000000002823D48>
In other words, both x and x.ret_self() return the same reference to that class instance.
self is actually another way of saying "this instance of Foo". Hence, instance variables are accessed as self.a inside the class.
When will you need this? I don't have the experience to tell you and I do not want to give possibly misleading information that I am unsure of. I will leave it to someone else to expound on this answer.
Please do not accept this answer.

Method inside a method in Python

I have seen source code where more than one method is called on an object, e.g. x.y().z(). Can someone please explain this to me? Does this mean that z() is inside y(), or what?
This calls the method y() on object x, then the method z() is called on the result of y(), and the value of the entire expression is the result of z().
For example:
friendsFavePizzaTopping = person.getBestFriend().getFavoritePizzaTopping()
This would mean that friendsFavePizzaTopping is the person's best friend's favorite pizza topping.
Important to note: getBestFriend() must return an object that has the method getFavoritePizzaTopping(). If it does not, an AttributeError will be thrown.
Each method is evaluated in turn, left to right. Consider:
>>> s='HELLO'
>>> s.lower()
'hello'
>>> s='HELLO '
>>> s.lower()
'hello '
>>> s.lower().strip()
'hello'
>>> s.lower().strip().upper()
'HELLO'
>>> s.lower().strip().upper().replace('H', 'h')
'hELLO'
The requirement is that the object to the left in the chain has to have availability of the method on the right. Often that means that the objects are similar types -- or at least share compatible methods or an understood cast.
As an example, consider this class:
class Foo:
    def __init__(self, name):
        self.name = name

    def m1(self):
        return Foo(self.name + '=>m1')

    def m2(self):
        return Foo(self.name + '=>m2')

    def __repr__(self):
        return '{}: {}'.format(id(self), self.name)

    def m3(self):
        return .25  # return is no longer a Foo
Notice that since Foo acts like an immutable type, each method returns a new object (a new Foo from m1 and m2, or a new float from m3). Now try those methods:
>>> foo = Foo('init')
>>> foo
4463545376: init
>>> foo.m1()
4463545304: init=>m1
    ^^^^ different object id
>>> foo
4463545376: init
    ^^^^ foo still the same, because you need to assign to change it
Now assign:
>>> foo=foo.m1().m2()
>>> foo
4464102576: init=>m1=>m2
Now use m3() and it will be a float; not a Foo anymore:
>>> foo=foo.m1().m2().m3()
>>> foo
0.25
Now a float -- can't use foo methods anymore:
>>> foo.m1()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'float' object has no attribute 'm1'
But you can use float methods:
>>> foo.as_integer_ratio()
(1, 4)
In the case of:
x.y().z()
You're almost always looking at immutable objects. Mutable objects don't return anything that would HAVE a function like that (for the most part, but I'm simplifying). For instance...
class x:
    def __init__(self):
        self.y_done = False
        self.z_done = False

    def y(self):
        new_x = x()
        new_x.y_done = True
        return new_x

    def z(self):
        new_x = x()
        new_x.z_done = True
        return new_x
You can see that each of x.y and x.z returns an x object. That object is used to make the next call in the chain: in x.y().z(), z is not called on x, but on the object returned by x.y().
x.y().z() =>
tmp = x.y()
result = tmp.z()
In @dawg's excellent example, he's using strings (which are immutable in Python) whose methods return strings.
string = 'hello'
string.upper() # returns a NEW string with value "HELLO"
string.upper().replace("E","O") # returns a NEW string that's based off "HELLO"
string.upper().replace("E","O") + "W"
# "HOLLOW"
The . "operator" is Python syntax for attribute access. x.y is (nearly) identical to
getattr(x, 'y')
so x.y() is (nearly) identical to
getattr(x, 'y')()
(I say "nearly identical" because it's possible to customize attribute access for a user-defined class. From here on out, I'll assume no such customization is done, and you can assume that x.y is in fact identical to getattr(x, 'y').)
If the thing that x.y() returns has an attribute z such that
foo = getattr(x, 'y')
bar = getattr(foo(), 'z')
is legal, then you can chain the calls together without needing the name foo in the middle:
bar = getattr(getattr(x, 'y')(), 'z')
Converting back to dot notation gives you
bar = getattr(x.y(), 'z')
or simply
bar = x.y().z()
x.y().z() means that the object x has a method y(), and the object returned by x.y() has a method z(). If you first want to apply the method y() to x and then apply z() to the result, you write x.y().z(). This is like:
val = x.y()
result = val.z()
Example:
my_dict = {'key':'value'}
my_dict is a dict type object. my_dict.get('key') returns 'value', which is a str type object. Now I can apply any str method to it, like:
my_dict.get('key').upper()
This will return 'VALUE'.
That is (sometimes a sign of) bad code.
It violates the Law of Demeter. Here is a quote from Wikipedia explaining what is meant:
Each unit should have only limited knowledge about other units: only units "closely" related to the current unit.
Each unit should only talk to its friends; don't talk to strangers.
Only talk to your immediate friends.
Suppose you have a car, which itself has an engine:
class Car:
    def __init__(self):
        self._engine = None

    @property
    def engine(self):
        return self._engine

    @engine.setter
    def engine(self, value):
        self._engine = value

class Porsche_engine:
    def start(self):
        print("starting")
So if you make a new car and set the engine to Porsche you could do the following:
>>> from car import *
>>> c=Car()
>>> e=Porsche_engine()
>>> c.engine=e
>>> c.engine.start()
starting
If you are making this call from another object, that object has knowledge not only of the Car but also of its Engine, which is bad design.
Additionally, if you do not know whether a Car has an engine yet, calling start directly
>>> c=Car()
>>> c.engine.start()
may result in an error:
AttributeError: 'NoneType' object has no attribute 'start'
Edit:
To avoid (further) misunderstandings and misreadings of what I am saying:
There are two usages:
1) as I pointed out, an object calling methods on another object returned from a third object is a violation of the LoD. This is one way to read the question.
2) an exception to that is method chaining, which is not bad design.
A better design would be for the Car itself to have a start() method which delegates to the engine.
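A minimal sketch of that delegating design (hedged; the error handling here is illustrative):
class Car:
    def __init__(self, engine=None):
        self.engine = engine

    def start(self):
        # delegate to the engine, so callers never reach through car.engine
        if self.engine is None:
            raise RuntimeError('no engine fitted')
        self.engine.start()

Car(Porsche_engine()).start()  # starting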

Apply a method to an object of another class

Given two unrelated classes A and B, how do you call A.method with an object of B as self?
class A:
    def __init__(self, x):
        self.x = x

    def print_x(self):
        print self.x

class B:
    def __init__(self, x):
        self.x = x

a = A('spam')
b = B('eggs')

a.print_x()            # <-- spam
<magic>(A.print_x, b)  # <-- 'eggs'
In Python 3.x you can simply do what you want:
A.print_x(b) #<-- 'eggs'
If you only have an instance of 'A', then get the class first:
a.__class__.print_x(b) #<-- 'eggs'
In Python 2.x (which the OP uses) this doesn't work, as noted by the OP and explained by Amber in the comments:
This is a difference between Python 2.x and Python 3.x - methods in
3.x don't enforce being passed the same class.
More details (OP edit)
In python 2, A.print_x returns an "unbound method", which cannot be directly applied to other classes' objects:
When an unbound user-defined method object is called, the underlying function (im_func) is called, with the restriction that the first argument must be an instance of the proper class (im_class) or of a derived class thereof. >> http://docs.python.org/reference/datamodel.html
To work around this restriction, we first have to obtain a "raw" function from a method, via im_func or __func__ (2.6+), which then can be called passing an object. This works on both classes and instances:
# python 2.5-
A.print_x.im_func(b)
a.print_x.im_func(b)
# python 2.6+
A.print_x.__func__(b)
a.print_x.__func__(b)
In Python 3 there is no such thing as an unbound method anymore.
Unbound methods are gone for good. ClassObject.method returns an
ordinary function object, instance.method still returns a bound
method object. >> http://www.python.org/getit/releases/3.0/NEWS.txt
Hence, in Python 3, A.print_x is just a function and can be called right away, while a.print_x still has to be unwrapped:
# python 3.0+
A.print_x(b)
a.print_x.__func__(b)
You don't (well, it's not that you can't throw enough magic at it to make it work, it's that you just shouldn't). If the function is supposed to work with more than one type, make it... a function.
# behold, the magic and power of duck typing
def fun(obj):
    print obj.x

class A:
    x = 42

class B:
    x = 69

fun(A())
fun(B())
I don't know why you would really want to do this, but it is possible:
>>> class A(object):
... def foo(self):
... print self.a
...
>>> class B(object):
... def __init__(self):
... self.a = "b"
...
>>> x = A()
>>> y = B()
>>> x.foo.im_func(y)
b
>>> A.foo.im_func(y)
b
An instance method (a class instance's bound method) has a property called im_func which refers to the actual function called by the instance method, without the instance/class binding. The class object's version of the method also has this property.
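For reference, a sketch of the Python 3 equivalent (where im_func is spelled __func__, and the class attribute is already a plain function):
class A(object):
    def foo(self):
        print(self.a)

class B(object):
    def __init__(self):
        self.a = 'b'

y = B()
A.foo(y)             # works directly in Python 3
A().foo.__func__(y)  # a bound method still exposes the raw function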

Intercept operator lookup on metaclass

I have a class that needs to do some magic with every operator, like __add__, __sub__, and so on.
Instead of creating each function in the class, I have a metaclass which defines every operator in the operator module.
import operator

class MetaFuncBuilder(type):
    def __init__(self, *args, **kw):
        super().__init__(*args, **kw)
        attr = '__{0}{1}__'
        for op in (x for x in dir(operator) if not x.startswith('__')):
            oper = getattr(operator, op)
            # ... I have my magic replacement functions here:
            # `func` for `__operators__` and `__ioperators__`,
            # and `rfunc` for `__roperators__`
            setattr(self, attr.format('', op), func)
            setattr(self, attr.format('r', op), rfunc)
The approach works fine, but I think it would be better if I generated the replacement operator only when needed.
Lookup of operators should be on the metaclass because x + 1 is done as type(x).__add__(x,1) instead of x.__add__(x,1), but it doesn't get caught by __getattr__ nor __getattribute__ methods.
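A quick demonstration of that lookup rule (a minimal sketch; the class name is illustrative):
class C(object):
    pass

c = C()
c.__add__ = lambda other: 42  # stored on the instance, so `+` never sees it
try:
    c + 1
except TypeError as e:
    print(e)  # unsupported operand type(s) for +: 'C' and 'int'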
That doesn't work:
class Meta(type):
    def __getattr__(self, name):
        if name in ['__add__', '__sub__', '__mul__', ...]:
            func = lambda: ...  # generate magic function
            return func
Also, the resulting "function" must be a method bound to the instance used.
Any ideas on how I can intercept this lookup? I don't know if it's clear what I want to do.
For those questioning why I need this kind of thing, check the full code here.
That's a tool to generate functions (just for fun) that could work as a replacement for lambdas.
Example:
>>> f = FuncBuilder()
>>> g = f ** 2
>>> g(10)
100
>>> g
<var [('pow', 2)]>
Just for the record, I don't want to know another way to do the same thing (I won't declare every single operator on the class... that will be boring and the approach I have works pretty fine :). I want to know how to intercept attribute lookup from an operator.
Some black magic lets you achieve your goal:
operators = ["add", "mul"]

class OperatorHackiness(object):
    """
    Use this base class if you want your object
    to intercept __add__, __iadd__, __radd__, __mul__ etc.
    using __getattr__.

    __getattr__ will be called at most _once_ during the
    lifetime of the object, as the result is cached!
    """

    def __init__(self):
        # create an instance-local base class which we can
        # manipulate to our needs
        self.__class__ = self.meta = type('tmp', (self.__class__,), {})

# add operator methods dynamically, because we are damn lazy.
# This loop is only executed once in the whole program
# (when the module is loaded)
def create_operator(name):
    def dynamic_operator(self, *args):
        # call getattr to allow interception by the user
        func = self.__getattr__(name)
        # save the result in the temporary base class
        # to avoid calling getattr twice
        setattr(self.meta, name, func)
        # use the provided function to calculate the result
        return func(self, *args)
    return dynamic_operator

for op in operators:
    for name in ["__%s__" % op, "__r%s__" % op, "__i%s__" % op]:
        setattr(OperatorHackiness, name, create_operator(name))
# Example user class
class Test(OperatorHackiness):
    def __init__(self, x):
        super(Test, self).__init__()
        self.x = x

    def __getattr__(self, attr):
        print "__getattr__(%s)" % attr
        if attr == "__add__":
            return lambda a, b: a.x + b.x
        elif attr == "__iadd__":
            def iadd(self, other):
                self.x += other.x
                return self
            return iadd
        elif attr == "__mul__":
            return lambda a, b: a.x * b.x
        else:
            raise AttributeError
## Some test code:

a = Test(3)
b = Test(4)

# let's test addition
print(a + b)  # this first call to __add__ will trigger a __getattr__ call
print(a + b)  # this second call will not!

# same for multiplication
print(a * b)
print(a * b)

# inplace addition (getattr is also only called once)
a += b
a += b
print(a.x)  # yay!
Output
__getattr__(__add__)
7
7
__getattr__(__mul__)
12
12
__getattr__(__iadd__)
11
Now you can use your second code sample literally by inheriting from my OperatorHackiness base class. You even get an additional benefit: __getattr__ will only be called once per instance and operator, and there is no additional layer of recursion involved for the caching. We hereby circumvent the problem of method calls being slow compared to method lookup (as Paul Hankin correctly noticed).
NOTE: The loop to add the operator methods is only executed once in your whole program, so the preparation takes constant overhead in the range of milliseconds.
The issue at hand is that Python looks up __xxx__ methods on the object's class, not on the object itself -- and if one is not found, it does not fall back to __getattr__ or __getattribute__.
The only way to intercept such calls is to have a method already there. It can be a stub function, as in Niklas Baumstark's answer, or it can be the full-fledged replacement function; either way, however, there must be something already there or you will not be able to intercept such calls.
If you are reading closely, you will have noticed that your requirement of having the final method bound to the instance is not possible -- Python looks at the class of the instance, not the instance itself, for __xxx__ methods. Niklas Baumstark's solution of making a unique temporary class for each instance is as close as you can get to that requirement.
It looks like you are making things too complicated. You can define a mixin class and inherit from it. This is both simpler than using metaclasses and will run faster than using __getattr__.
class OperatorMixin(object):
    def __add__(self, other):
        return func(self, other)

    def __radd__(self, other):
        return rfunc(self, other)

    # ... other operators defined too

Then every class you want to have these operators can inherit from OperatorMixin.
class Expression(OperatorMixin):
    ...  # the regular methods for your class
Generating the operator methods when they're needed isn't a good idea: __getattr__ is slow compared to regular method lookup, and since the methods are stored once (on the mixin class), it saves almost nothing.
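A concrete sketch of the mixin shape, with the func/rfunc placeholders above swapped for simple arithmetic (names are illustrative):
class OperatorMixin(object):
    def __add__(self, other):
        return self.value + other.value

    def __radd__(self, other):
        return other + self.value

class Expression(OperatorMixin):
    def __init__(self, value):
        self.value = value

print(Expression(2) + Expression(3))  # 5
print(1 + Expression(3))              # 4, via __radd__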
If you want to achieve your goal without metaclasses, you can append the following to your code:
def get_magic_wrapper(name):
    def wrapper(self, *a, **kw):
        print('Wrapping')
        res = getattr(self._data, name)(*a, **kw)
        return res
    return wrapper

_magic_methods = ['__str__', '__len__', '__repr__']
for _mm in _magic_methods:
    setattr(ShowMeList, _mm, get_magic_wrapper(_mm))
It reroutes the methods in _magic_methods to the self._data object, by adding these attributes to the class iteratively. To check if it works:
>>> l = ShowMeList(range(8))
>>> len(l)
Wrapping
8
>>> l
Wrapping
[0, 1, 2, 3, 4, 5, 6, 7]
>>> print(l)
Wrapping
[0, 1, 2, 3, 4, 5, 6, 7]
