Apply a method to an object of another class - python

Given two unrelated classes A and B, how can I call A.method with an instance of B as self?
class A:
    def __init__(self, x):
        self.x = x
    def print_x(self):
        print self.x

class B:
    def __init__(self, x):
        self.x = x

a = A('spam')
b = B('eggs')

a.print_x()            #<-- spam
<magic>(A.print_x, b)  #<-- 'eggs'

In Python 3.x you can simply do what you want:
A.print_x(b) #<-- 'eggs'
If you only have an instance of 'A', then get the class first:
a.__class__.print_x(b) #<-- 'eggs'
In Python 2.x (which the OP uses) this doesn't work, as noted by the OP and explained by Amber in the comments:
This is a difference between Python 2.x and Python 3.x - methods in
3.x don't enforce being passed the same class.
More details (OP edit)
In Python 2, A.print_x returns an "unbound method", which cannot be directly applied to objects of other classes:
When an unbound user-defined method object is called, the underlying function (im_func) is called, with the restriction that the first argument must be an instance of the proper class (im_class) or of a derived class thereof. >> http://docs.python.org/reference/datamodel.html
To work around this restriction, we first have to obtain the "raw" function from the method, via im_func or __func__ (2.6+), which can then be called with any object. This works on both classes and instances:
# python 2.5-
A.print_x.im_func(b)
a.print_x.im_func(b)
# python 2.6+
A.print_x.__func__(b)
a.print_x.__func__(b)
In Python 3 there is no such thing as an unbound method anymore.
Unbound methods are gone for good. ClassObject.method returns an
ordinary function object, instance.method still returns a bound
method object. >> http://www.python.org/getit/releases/3.0/NEWS.txt
Hence, in Python 3, A.print_x is just a function and can be called right away, while a.print_x is still a bound method and has to be unwrapped via __func__:
# python 3.0+
A.print_x(b)
a.print_x.__func__(b)

You don't (well, it's not that you can't throw enough magic at it to make it work, it's that you just shouldn't). If the function is supposed to work with more than one type, make it... a function.
# behold, the magic and power of duck typing
def fun(obj):
    print obj.x

class A:
    x = 42

class B:
    x = 69

fun(A())
fun(B())

I don't know why you would really want to do this, but it is possible:
>>> class A(object):
...     def foo(self):
...         print self.a
...
>>> class B(object):
...     def __init__(self):
...         self.a = "b"
...
>>> x = A()
>>> y = B()
>>> x.foo.im_func(y)
b
>>> A.foo.im_func(y)
b
An instance method (a class instance's bound method) has a property called im_func which refers to the actual function called by the instance method, without the instance/class binding. The class object's version of the method also has this property.
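A quick way to see what "without the binding" means (my own illustrative Python 2 session, continuing the example above): the bound method x.foo and the unbound method A.foo are thin wrappers around one and the same function object.
>>> x.foo.im_func is A.foo.im_func   # the very same underlying function
True
>>> x.foo.im_self is x               # the bound method remembers its instance
True
>>> A.foo.im_self is None            # the unbound method carries no instance
True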

Related

What's the difference between the following ways of initializing attributes? [duplicate]

Is there any meaningful distinction between:
class A(object):
foo = 5 # some default value
vs.
class B(object):
def __init__(self, foo=5):
self.foo = foo
If you're creating a lot of instances, is there any difference in performance or space requirements for the two styles? When you read the code, do you consider the meaning of the two styles to be significantly different?
There is a significant semantic difference (beyond performance considerations):
when the attribute is defined on the instance (which is what we usually do), each instance refers to its own separate object, so each gets a totally independent version of that attribute.
when the attribute is defined on the class, there is only one underlying object, shared by every instance, so if operations on different instances of that class attempt to set/(append/extend/insert/etc.) the attribute, then:
if the attribute is a builtin type (like int, float, boolean, string), operations on one object will overwrite (clobber) the value
if the attribute is a mutable type (like a list or a dict), we will get unwanted leakage.
For example:
>>> class A: foo = []
>>> a, b = A(), A()
>>> a.foo.append(5)
>>> b.foo
[5]
>>> class A:
...     def __init__(self): self.foo = []
...
>>> a, b = A(), A()
>>> a.foo.append(5)
>>> b.foo
[]
The difference is that the attribute on the class is shared by all instances. The attribute on an instance is unique to that instance.
If coming from C++, attributes on the class are more like static member variables.
Here is a very good post on the topic, summarized below.
class Bar(object):
    ## No need for dot syntax
    class_var = 1

    def __init__(self, i_var):
        self.i_var = i_var

## Need dot syntax as we've left scope of class namespace
Bar.class_var
## 1

foo = Bar(2)

## Finds i_var in foo's instance namespace
foo.i_var
## 2

## Doesn't find class_var in instance namespace…
## so looks in class namespace (Bar.__dict__)
foo.class_var
## 1
Class attribute assignment
If a class attribute is set by accessing the class, it will override the value for all instances
foo = Bar(2)
foo.class_var
## 1
Bar.class_var = 2
foo.class_var
## 2
If a class variable is set by accessing an instance, it will override the value only for that instance. This essentially shadows the class variable with an instance variable that is, intuitively, available only to that instance.
foo = Bar(2)
foo.class_var
## 1
foo.class_var = 2
foo.class_var
## 2
Bar.class_var
## 1
When would you use a class attribute?
Storing constants. As class attributes can be accessed as attributes of the class itself, it's often nice to use them for storing class-wide, class-specific constants:
class Circle(object):
    pi = 3.14159

    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return Circle.pi * self.radius * self.radius

Circle.pi
## 3.14159
c = Circle(10)
c.pi
## 3.14159
c.area()
## 314.159
Defining default values. As a trivial example, we might create a bounded list (i.e., a list that can only hold a certain number of elements or fewer) and choose to have a default cap of 10 items
class MyClass(object):
    limit = 10

    def __init__(self):
        self.data = []

    def item(self, i):
        return self.data[i]

    def add(self, e):
        if len(self.data) >= self.limit:
            raise Exception("Too many elements")
        self.data.append(e)

MyClass.limit
## 10
Since people in the comments here and in two other questions marked as dups all appear to be confused about this in the same way, I think it's worth adding an additional answer on top of Alex Coventry's.
The fact that Alex is assigning a value of a mutable type, like a list, has nothing to do with whether things are shared or not. We can see this with the id function or the is operator:
>>> class A: foo = object()
>>> a, b = A(), A()
>>> a.foo is b.foo
True
>>> class A:
...     def __init__(self): self.foo = object()
...
>>> a, b = A(), A()
>>> a.foo is b.foo
False
(If you're wondering why I used object() instead of, say, 5, that's to avoid running into two whole other issues which I don't want to get into here; for two different reasons, entirely separately-created 5s can end up being the same instance of the number 5. But entirely separately-created object()s cannot.)
So, why is it that a.foo.append(5) in Alex's example affects b.foo, but a.foo = 5 in my example doesn't? Well, try a.foo = 5 in Alex's example, and notice that it doesn't affect b.foo there either.
a.foo = 5 is just making a.foo into a name for 5. That doesn't affect b.foo, or any other name for the old value that a.foo used to refer to.* It's a little tricky that we're creating an instance attribute that hides a class attribute,** but once you get that, nothing complicated is happening here.
Hopefully it's now obvious why Alex used a list: the fact that you can mutate a list means it's easier to show that two variables name the same list, and also means it's more important in real-life code to know whether you have two lists or two names for the same list.
* The confusion for people coming from a language like C++ is that in Python, values aren't stored in variables. Values live off in value-land, on their own, variables are just names for values, and assignment just creates a new name for a value. If it helps, think of each Python variable as a shared_ptr<T> instead of a T.
** Some people take advantage of this by using a class attribute as a "default value" for an instance attribute that instances may or may not set. This can be useful in some cases, but it can also be confusing, so be careful with it.
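As a small illustration of that footnote (a sketch of my own, not from the answer; the Widget class and its verbose flag are made up, Python 3 print syntax), here is the "class attribute as a default value" pattern together with the shadowing behavior described above:
class Widget(object):
    # class attribute: a shared default for all instances
    verbose = False

    def render(self):
        # attribute lookup finds the class attribute unless an
        # instance attribute of the same name shadows it
        if self.verbose:
            print("rendering with details")
        else:
            print("rendering")

a, b = Widget(), Widget()
a.verbose = True   # creates an instance attribute on a, hiding the class default
a.render()         # rendering with details
b.render()         # rendering  (b still sees the class default)
print(Widget.verbose, a.verbose, b.verbose)   # False True False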
There is one more situation: when the class and instance attributes are descriptors.
# -*- encoding: utf-8 -*-
class RevealAccess(object):
    def __init__(self, initval=None, name='var'):
        self.val = initval
        self.name = name

    def __get__(self, obj, objtype):
        return self.val

class Base(object):
    attr_1 = RevealAccess(10, 'var "x"')

    def __init__(self):
        self.attr_2 = RevealAccess(10, 'var "x"')

def main():
    b = Base()
    print("Access to class attribute, return: ", Base.attr_1)
    print("Access to instance attribute, return: ", b.attr_2)

if __name__ == '__main__':
    main()
The above will output:
('Access to class attribute, return: ', 10)
('Access to instance attribute, return: ', <__main__.RevealAccess object at 0x10184eb50>)
The same descriptor type behaves differently depending on whether it is reached through the class or through the instance: the class attribute goes through the descriptor's __get__ and returns 10, while the descriptor stored on the instance is returned as a plain object.
I found the explanation in CPython's PyObject_GenericGetAttr definition, and in a great post.
Explanation:
If the attribute is found in the dictionary of one of the classes making up the object's MRO, check whether the attribute being looked up points to a data descriptor (which is nothing more than a class implementing both the __get__ and the __set__ methods). If it does, resolve the attribute lookup by calling the __get__ method of the data descriptor (lines 28–33).
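A minimal sketch of that precedence rule (my own example, not from the post; Python 3 print syntax): a data descriptor defined on the class, i.e. one that also implements __set__, wins even over an entry of the same name in the instance dictionary.
class DataDesc(object):
    def __get__(self, obj, objtype=None):
        return "from the data descriptor"
    def __set__(self, obj, value):
        # keep the instance dict entry untouched; stash the value elsewhere
        obj.__dict__['_shadow'] = value

class C(object):
    attr = DataDesc()

c = C()
c.__dict__['attr'] = "from the instance dict"   # written directly, bypassing __set__
print(c.attr)   # from the data descriptor  (the class data descriptor wins)
del C.attr      # remove the descriptor from the class...
print(c.attr)   # from the instance dict    (...now the instance dict entry is found)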

Trying to implement object-oriented pong game [duplicate]


what gets returned when you return self?

What gets returned when you return self inside a Python class? Where exactly do we use return self? In the example below, what exactly does returning self return?
class Fib:
    '''iterator that yields numbers in the Fibonacci sequence'''
    def __init__(self, max):
        self.max = max

    def __iter__(self):
        self.a = 0
        self.b = 1
        return self

    def __next__(self):
        fib = self.a
        if fib > self.max:
            raise StopIteration
        self.a, self.b = self.b, self.a + self.b
        print(self.a, self.b)
        return fib
Python treats a method call object.method() approximately like method(object). The docs say that "the call x.f() is exactly equivalent to MyClass.f(x)". This means that a method receives the object it is called on as its first argument. By convention, in method definitions this first argument is called self.
So self is the conventional name of the object owning the method.
Now, why would we want to return self? In your particular example, it is because the object implements the iterator protocol, which basically means it has __iter__ and __next__ methods. The __iter__ method must (according to the docs) "Return the iterator object itself", which is exactly what is happening here.
As an aside, another common reason for returning self is to support method chaining, where you would want to do object.method1().method2().method3() where all those methods are defined in the same class. This pattern is quite common in libraries like pandas.
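A tiny sketch of that chaining idea (my own illustration; the QueryBuilder name and its methods are made up, not from pandas or the question):
class QueryBuilder(object):
    def __init__(self):
        self.parts = []
    def select(self, cols):
        self.parts.append("SELECT " + cols)
        return self            # returning self is what makes chaining possible
    def where(self, cond):
        self.parts.append("WHERE " + cond)
        return self
    def build(self):
        return " ".join(self.parts)

print(QueryBuilder().select("*").where("id = 1").build())
# SELECT * WHERE id = 1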
The name self (a convention, not actually a keyword) refers to the instance that you are calling the method on.
This is particularly useful for chaining. In your example, let's say we want to call __next__() on an initialized Fib instance. Since __iter__() returns self, the following are equivalent :
obj = Fib(5)
obj.__iter__() # Initialize obj
obj.__next__()
And
obj = Fib(5).__iter__() # Create AND initialize obj
obj.__next__()
In your particular example, returning self returns the instance of the Fib class on which you called __iter__() (called obj in my small snippet).
Hope it'll be helpful.
Partial Answer:
When you return self, you return the class instance. For example:
class Foo:
    def __init__(self, a):
        self.a = a

    def ret_self(self):
        return self
If I create an instance and run ret_self, you will see that they both refer to the same instance:
>>> x = Foo("a")
>>> x
<__main__.Foo instance at 0x0000000002823D48>
>>> x.ret_self()
<__main__.Foo instance at 0x0000000002823D48>
In other words, both x and x.ret_self() return the same reference to that class instance.
self is actually another way of saying "this instance of Foo". Hence, instance variables are self.a in the class.
When will you need this? I don't have the experience to tell you and I do not want to give possibly misleading information that I am unsure of. I will leave it to someone else to expound on this answer.
Please do not accept this answer.

Python doesn't allocate new space for objects instantiated outside the constructor of a class-- expected behavior? [duplicate]


Convert partial function to method in python

Consider the following (broken) code:
import functools

class Foo(object):
    def __init__(self):
        def f(a, self, b):
            print a+b
        self.g = functools.partial(f, 1)

x = Foo()
x.g(2)
What I want to do is take the function f and partially apply it, resulting in a function g(self, b). I would like to use this function as a method; however, this does not currently work and instead I get the error
Traceback (most recent call last):
File "test.py", line 8, in <module>
x.g(2)
TypeError: f() takes exactly 3 arguments (2 given)
Doing x.g(x, 2) works, however, so it seems the issue is that g is considered a "normal" function instead of a method of the class. Is there a way to get x.g to behave like a method (i.e. implicitly pass the self parameter) instead of a plain function?
There are two issues at hand here. First, for a function to be turned into a method it must be stored on the class, not the instance. A demonstration:
class Foo(object):
    def a(*args):
        print 'a', args

def b(*args):
    print 'b', args
Foo.b = b

x = Foo()

def c(*args):
    print 'c', args
x.c = c
So a is a function defined in the class definition, b is a function assigned to the class afterwards, and c is a function assigned to the instance. Take a look at what happens when we call them:
>>> x.a('a will have "self"')
a (<__main__.Foo object at 0x100425ed0>, 'a will have "self"')
>>> x.b('as will b')
b (<__main__.Foo object at 0x100425ed0>, 'as will b')
>>> x.c('c will only receive this string')
c ('c will only receive this string',)
As you can see there is little difference between a function defined along with the class, and one assigned to it later. I believe there is actually no difference as long as there is no metaclass involved, but that is for another time.
The second problem comes from how a function is actually turned into a method in the first place: the function type implements the descriptor protocol. (See the docs for details.) In a nutshell, the function type has a special __get__ method which is invoked during attribute lookup on a class or instance. Instead of you getting the function object itself, the __get__ method of that function object is called, and that returns a bound method object (which is what supplies the self argument).
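To make that concrete, here is a brief sketch of my own (Python 2 syntax, to match the rest of the answer) that calls __get__ by hand, doing what attribute lookup normally does for us:
class Foo(object):
    pass

def f(self, x):
    print 'f called with', self, x

inst = Foo()
bound = f.__get__(inst, Foo)   # functions are descriptors: this returns a bound method
bound(42)                      # self (i.e. inst) is supplied automatically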
Why is this a problem? Because the functools.partial object is not a descriptor!
>>> import functools
>>> def f(*args):
...     print 'f', args
...
>>> g = functools.partial(f, 1, 2, 3)
>>> g
<functools.partial object at 0x10042f2b8>
>>> g.__get__
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'functools.partial' object has no attribute '__get__'
There are a number of options you have at this point. You can explicitly supply the self argument to the partial:
import functools

class Foo(object):
    def __init__(self):
        def f(self, a, b):
            print a + b
        self.g = functools.partial(f, self, 1)

x = Foo()
x.g(2)
...or you could embed self and the value of a in a closure:
class Foo(object):
    def __init__(self):
        a = 1
        def f(b):
            print a + b
        self.g = f

x = Foo()
x.g(2)
These solutions of course assume that there is an as-yet-unspecified reason for assigning the method in the constructor like this, as you can very easily just define a method directly on the class to do what you are doing here.
Edit: Here is an idea for a solution assuming the functions may be created for the class, instead of the instance:
class Foo(object):
    pass

def make_binding(name):
    def f(self, *args):
        print 'Do %s with %s given %r.' % (name, self, args)
    return f

for name in 'foo', 'bar', 'baz':
    setattr(Foo, name, make_binding(name))

f = Foo()
f.foo(1, 2, 3)
f.bar('some input')
f.baz()
Gives you:
Do foo with <__main__.Foo object at 0x10053e3d0> given (1, 2, 3).
Do bar with <__main__.Foo object at 0x10053e3d0> given ('some input',).
Do baz with <__main__.Foo object at 0x10053e3d0> given ().
This will work, but I'm not sure if this is what you are looking for:
import functools

class Foo(object):
    def __init__(self):
        def f(a, self, b):
            print a+b
        self.g = functools.partial(f, 1, self)  # <= passing `self` also

x = Foo()
x.g(2)
This is simply a concrete example of what I believe is the most correct (and therefore Pythonic :) way to solve this -- as the best solution (definition on the class!) was never spelled out -- @MikeBoers' explanations are otherwise solid.
I've used this pattern quite a bit (recently for a proxied API), and it's survived untold production hours without the slightest irregularity.
from functools import update_wrapper
from functools import partial
from types import MethodType

class Basic(object):
    def add(self, **kwds):
        print sum(kwds.values())

Basic.add_to_one = MethodType(
    update_wrapper(partial(Basic.add, a=1), Basic.add),
    None,
    Basic,
)

x = Basic()
x.add(a=1, b=9)
x.add_to_one(b=9)
...yields:
10
10
...the key take-home point here is MethodType(func, inst, cls), which creates an unbound method from another callable (you can even use this to chain/bind instance methods to unrelated classes... when instantiated and called, the original instance method will receive BOTH self objects!).
Note the exclusive use of keyword arguments! While there might be a better way to handle this, positional args are generally a PITA because the placement of self becomes less predictable. Also, IME anyway, using *args and **kwds in the bottom-most function has proven very useful later on.
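For reference, a hedged Python 3 sketch of the same idea (my own addition, not part of the original answer): in Python 3, types.MethodType takes just (func, instance) and can bind any callable -- including a functools.partial -- to a single instance.
from functools import partial
from types import MethodType

class Basic:
    def add(self, **kwds):
        print(sum(kwds.values()))

x = Basic()
# bind partial(Basic.add, a=1) to this one instance; x.add_to_one(b=9)
# ends up running Basic.add(x, a=1, b=9)
x.add_to_one = MethodType(partial(Basic.add, a=1), x)

x.add(a=1, b=9)     # 10
x.add_to_one(b=9)   # 10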
functools.partialmethod() is available since Python 3.4 for this purpose. Note that it is a descriptor, so it has to live on the class rather than on the instance for the implicit self binding to work:
import functools

class Foo(object):
    def f(self, a, b):
        print(a + b)

    # partialmethod pre-fills a=1; self is still passed implicitly on x.g(2)
    g = functools.partialmethod(f, 1)

x = Foo()
x.g(2)   # prints 3
