I would like to create a class that can be used in `in` statements, with the membership condition passed to the object in __init__. An example:
class Set:
    def __init__(self, contains):
        self.__contains__ = contains  # or setattr; doesn't matter

top = Set(lambda _: True)
bottom = Set(lambda _: False)
The problem with this is that 3 in top raises TypeError: argument of type 'Set' is not iterable, even though top.__contains__(3) returns True as expected.
What's more, if I modify the code as such:
class Set:
    def __init__(self, contains):
        self.__contains__ = contains

    def __contains__(self, x):
        return False

top = Set(lambda _: True)
then 3 in top will return False, whereas top.__contains__(3) again returns True, as expected.
What is happening here? I am on Python 3.9.2.
(Note: the same happens with other methods that are part of the data model, such as __gt__, __eq__ , etc.)
That's because magic methods are looked up on the class, not the instance. The interpreter circumvents the usual attribute-getting mechanisms when performing "overloadable" operations.
It is this way partly because of how it was originally implemented in CPython, in particular because of how type slots work (not the __slots__ slots; that's a different thing): how +, *, or other operators work on a value is decided by its class, not on a per-instance basis.
There's also a performance benefit to this: looking up a dunder method on the instance could involve a dictionary lookup, or worse, dynamic computation via __getattr__/__getattribute__. However, I don't know if this is the main reason it is this way.
I wasn't able to find a detailed written description, but there's a talk by Armin Ronacher on YouTube going quite in depth on this.
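A minimal sketch of this lookup rule (class names here are illustrative): a dunder assigned on the instance is invisible to the operator, while the same predicate stored under a regular name and exposed through a class-level __contains__ works.

```python
class ByInstance:
    def __init__(self, contains):
        self.__contains__ = contains   # only in the instance dict; `in` never sees it

class ByClass:
    def __init__(self, contains):
        self._contains = contains      # regular attribute

    def __contains__(self, x):         # found via the type, so `in` uses it
        return self._contains(x)

a = ByInstance(lambda _: True)
b = ByClass(lambda _: True)

print(a.__contains__(3))   # True: explicit attribute access finds the instance attribute
try:
    3 in a                 # operator looks on type(a), finds nothing, tries iteration
except TypeError as e:
    print(e)
print(3 in b)              # True: the class-level dunder delegates to the predicate
```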
__contains__ is an instance method that takes a self arg.
https://docs.python.org/3/reference/datamodel.html#object.__contains__
For objects that don’t define __contains__(), the membership test first tries iteration via __iter__(), then the old sequence iteration protocol via __getitem__()
So I think what is happening in each case is:
The problem with this is that 3 in top returns TypeError: argument of type 'Set' is not iterable, even though top.__contains__(3) returns True as expected.
class Set:
    def __init__(self, contains):
        self.__contains__ = contains  # or setattr; doesn't matter

top = Set(lambda _: True)
Your Set class doesn't have a __contains__ method; only the instance has it. So Python doesn't recognise Set objects as implementing this protocol, and falls back to the iteration-based search via __iter__... but your Set class is not iterable.
3 in top will return False, whereas top.__contains__(3) returns True as expected, again.
class Set:
    def __init__(self, contains):
        self.__contains__ = contains

    def __contains__(self, x):
        return False

top = Set(lambda _: True)
This time your Set class does have a __contains__ method, so Python will use it. We can see from the behaviour that 3 in top differs from top.__contains__(3): for 3 in top, Python effectively does Set.__contains__(top, 3).
So depending how you call it you get either the method on the class, or the lambda you overrode on the instance.
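The resolution described above can be checked directly against the question's second snippet; the `in` operator agrees with the type-level lookup, while explicit attribute access finds the instance attribute first:

```python
class Set:
    def __init__(self, contains):
        self.__contains__ = contains   # shadows the class method only for attribute access

    def __contains__(self, x):
        return False

top = Set(lambda _: True)

print(3 in top)                         # False: operator uses the class-level method
print(type(top).__contains__(top, 3))   # False: same lookup, spelled out
print(top.__contains__(3))              # True: instance dict wins for attribute access
```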
Related
I'm trying to change the return type of a function from set to list. For a smooth transition, the idea was to do an in-place deprecation and temporarily return a type that is both a set and a list. But I'm not sure whether it is possible to derive from two internal Python types, because:
>>> class ListSet(list, set):
...     pass
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: multiple bases have instance lay-out conflict
In general the goal is to have a type that behaves like a sorted set (I think then it would be pretty much the same as a list and set) but also works with type checks (and ideally also works with MyPy):
thing = some_class.method_with_updated_return_type()
print(isinstance(thing, set)) # True
print(isinstance(thing, list)) # True
My idea would otherwise have been to override the __instancecheck__ method of ListSet's metaclass, to something like
class Meta(type):
    def __instancecheck__(self, instance):
        return isinstance(instance, (set, list))

class ListSet(metaclass=Meta):
    ...  # implement all set and list methods
Is something like this possible in Python?
So, no: nothing short of inheriting directly from set or list will make an instance of a custom class return True for such an isinstance call. There are several reasons why one class can't inherit from both at the same time, and even if you coded such a class in native code, or modified it with ctypes so that it returned True for the isinstance check, the resulting class would likely crash your Python interpreter when used.
On the other hand, the mechanisms provided by abstract base classes (the __instancecheck__, __subclasscheck__ and __subclasshook__ methods, plus register) allow a class to answer True when asked whether some other object is an instance of itself. That is: your custom class could answer, if asked, that an arbitrary list or set is an instance of itself, as in myobj = set((1, 2, 3)); isinstance(myobj, MySpecialClass) -> True. But not the other way around: isinstance(MySpecialClass(), set) will always return False.
What exists to enable similar mechanisms is the recommendation to code against protocols, not specific classes. That is, any well-written code should do isinstance(obj, collections.abc.Set) or isinstance(obj, collections.abc.Sequence) and never isinstance(..., set) (or list). Then anyone can register a custom class as a virtual subclass of those, and the test will be True:
from collections.abc import MutableSet, Sequence

class MyClass(MutableSet):
    # NB: don't try to inherit _also_ from Sequence here. See below.
    # Code the mandatory methods for MutableSet according to
    # https://docs.python.org/3/library/collections.abc.html,
    # plus customize all mandatory _and_ derived methods
    # for a mutable sequence, in order to get your
    # desired "ordered mutable set" behavior here.
    ...

# After the class body, do:
Sequence.register(MyClass)
The call to Sequence.register will register MyClass as a virtual subclass of Sequence, and any well-behaved code that tests for the protocol via a collections.abc.Sequence instance check (or, better yet, that just uses the object and lets incorrect objects fail at runtime) will just work. If you can get rid of all isinstance checks, simply coding an appropriate implementation of MutableSet can give you the "ordered set" you'd like, with no worries about arbitrary checks of the object's type.
That is not hard at all: implement the needed methods, initialize a list holding the actual data in your class's __init__, update that list in all content-modifying calls of the set, and iterate over it:
from collections.abc import MutableSet

class OrderedSet(MutableSet):
    def __init__(self, initial=()):
        self.data = list()
        self.update(initial)

    def update(self, items):
        for item in items:
            self.add(item)

    def __contains__(self, item):
        return item in self.data

    def __iter__(self):
        return iter(self.data)

    def __len__(self):
        return len(self.data)

    def add(self, item):
        if item not in self.data:
            self.data.append(item)

    def discard(self, item):
        self.data.remove(item)

    def __repr__(self):
        return f"OrderedSet({self.data!r})"
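A self-contained sketch of the register mechanism described above, using a stripped-down class so it stays short (illustrative; not the full OrderedSet):

```python
from collections.abc import MutableSet, Sequence

class TinyOrderedSet(MutableSet):
    def __init__(self, initial=()):
        self.data = []
        for item in initial:
            self.add(item)

    def __contains__(self, item):
        return item in self.data

    def __iter__(self):
        return iter(self.data)

    def __len__(self):
        return len(self.data)

    def add(self, item):
        if item not in self.data:
            self.data.append(item)

    def discard(self, item):
        self.data.remove(item)

# virtual subclass: no inheritance, so no layout conflict
Sequence.register(TinyOrderedSet)

t = TinyOrderedSet([3, 1, 3])
print(isinstance(t, Sequence), isinstance(t, MutableSet))  # True True
print(isinstance(t, list), isinstance(t, set))             # False False
```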
If you can't change hardcoded tests for instances of "set" or "list", however, there is nothing that you can do.
I've stumbled upon a really weird Python 3 issue, the cause of which I do not understand.
I'd like to compare my objects by checking if all their attributes are equal.
Some of the child classes will have fields that contain references to methods bound to self, and that causes a RecursionError.
Here's the PoC:
class A:
    def __init__(self, field):
        self.methods = [self.method]
        self.field = field

    def __eq__(self, other):
        if type(self) != type(other):
            return False
        return self.__dict__ == other.__dict__

    def method(self):
        pass

first = A(field='foo')
second = A(field='bar')
print(first == second)
Running the code above in Python 3 raises RecursionError and I'm not sure why. It seems that A.__eq__ is used to compare the functions kept in self.methods. So my first question is: why? Why is the object's __eq__ called to compare a bound method of that object?
The second question is: what kind of filter on __dict__ should I use to protect __eq__ from this issue? In the PoC above self.method is kept simply in a list, but sometimes it may be in another structure. The filtering would have to cover all the possible containers that can hold the self-reference.
One clarification: I do need to keep the self.method function in the self.methods field. The use case here is similar to unittest.TestCase._cleanups: a stack of methods that are to be called after the test is finished. The framework must be able to run the following code:
# obj is a child instance of the A class
obj.methods.append(obj.child_method)
for method in obj.methods:
    method()
Another clarification: the only code I can change is the __eq__ implementation.
"Why is the object's __eq__ called to compare a bound method of that object?":
Because bound methods compare by the following algorithm:
Is the self bound to each method equal?
If so, is the function implementing the method the same?
Step 1 causes your infinite recursion: in comparing the __dict__s, it eventually ends up comparing the bound methods, and to do so it has to compare the objects to each other again; now you're right back where you started, and it continues forever.
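Both steps can be observed directly. Note that the behavior of step 1 depends on the Python version: 3.7 and earlier compared the selfs by equality (which is what makes the recursion possible), while 3.8+ compares them by identity, as this sketch (run on 3.8+) shows:

```python
class Box:
    def __init__(self, v):
        self.v = v

    def __eq__(self, other):
        return isinstance(other, Box) and self.v == other.v

    def get(self):
        return self.v

a, b = Box(1), Box(1)

print(a == b)          # True: the objects compare equal by value
print(a.get == a.get)  # True: identical self, same underlying function
print(a.get == b.get)  # False on 3.8+: selfs are equal but not identical
```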
The only "solution"s I can come up with off-hand are:
Something like the reprlib.recursive_repr decorator (which would be extremely hacky, since you'd be heuristically determining if you're comparing for bound method related reasons based on whether __eq__ was re-entered), or
A wrapper for any bound methods you store that replaces equality testing of the respective selfs with identity testing.
The wrapper for bound methods isn't terrible at least. You'd basically just make a simple wrapper of the form:
import types

class IdentityComparableMethod:
    __slots__ = ('_method',)

    def __new__(cls, method):
        # Using __new__ prevents reinitialization, part of the immutability
        # contract that justifies defining __hash__
        self = super().__new__(cls)
        self._method = method
        return self

    def __getattr__(self, name):
        '''Attribute access should match the bound method's'''
        return getattr(self._method, name)

    def __eq__(self, other):
        '''Comparable to other instances, and to normal methods'''
        if not isinstance(other, (IdentityComparableMethod, types.MethodType)):
            return NotImplemented
        return (self.__self__ is other.__self__ and
                self.__func__ is other.__func__)

    def __hash__(self):
        '''Hash identically to the method'''
        return hash(self._method)

    def __call__(self, *args, **kwargs):
        '''Delegate to the method'''
        return self._method(*args, **kwargs)

    def __repr__(self):
        return '{0.__class__.__name__}({0._method!r})'.format(self)
then when storing bound methods, wrap them in that class, e.g.:
self.methods = [IdentityComparableMethod(self.method)]
You may want to make methods itself enforce this via additional magic (so it only stores functions or IdentityComparableMethods), but that's the basic idea.
Other answers address more targeted filtering, this is just a way to make that filtering unnecessary.
Performance note: I didn't heavily optimize for performance; __getattr__ is the simplest way of reflecting all the attributes of the underlying method. If you want comparisons to go faster, you can fetch __self__ during initialization and cache it on self directly to avoid __getattr__ calls, changing the __slots__ and __new__ declarations to:
__slots__ = ('_method', '__self__')

def __new__(cls, method):
    # Using __new__ prevents reinitialization, part of the immutability
    # contract that justifies defining __hash__
    self = super().__new__(cls)
    self._method = method
    self.__self__ = method.__self__
    return self
That makes a pretty significant difference in comparison speed; in local %timeit tests, the first == second comparison dropped from 2.77 μs to 1.05 μs. You could cache __func__ as well if you like, but since it's the fallback comparison, it's less likely to be checked at all (and you'd slow construction a titch for an optimization you're less likely to use).
Alternatively, instead of caching, you can just manually define @property accessors for __self__ and __func__, which are slower than raw attributes (the comparison ran in 1.41 μs), but incur no construction time cost at all (so if no comparison is ever run, you don't pay the lookup cost).
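For reference, the property-based variant mentioned above might look like this (a sketch with a hypothetical class name; only the pieces relevant to comparison are shown):

```python
import types

class MethodView:
    """Exposes __self__/__func__ as properties: nothing is cached at
    construction time, at the cost of one extra lookup per comparison."""
    __slots__ = ('_method',)

    def __init__(self, method):
        self._method = method

    @property
    def __self__(self):
        return self._method.__self__

    @property
    def __func__(self):
        return self._method.__func__

    def __eq__(self, other):
        if not isinstance(other, (MethodView, types.MethodType)):
            return NotImplemented
        return (self.__self__ is other.__self__ and
                self.__func__ is other.__func__)

    def __hash__(self):
        return hash(self._method)

class T:
    def m(self):
        return 1

t = T()
mv = MethodView(t.m)
print(mv == t.m)  # True: same self (by identity) and same function
```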
The reason why self.methods = [self.method] followed by an __eq__ comparison ends up raising a RecursionError is nicely explained in one of the comments on this question by @Aran-Fey:
self.getX == other.getX compares two bound methods. Bound methods are considered equal if the method is the same, and the instances they're bound to are equal. So comparing two bound methods also compares the instances, which calls the __eq__ method again, which compares bound methods again, etc
One way to resolve it is to perform a key-wise comparison on self.__dict__, ignoring the methods key:
class A:
    def __init__(self, field):
        self.methods = [self.method]
        self.field = field

    def __eq__(self, other):
        # Iterate through all keys
        for key in self.__dict__:
            # Compare values, except for the 'methods' key
            if key != 'methods':
                if self.__dict__[key] != other.__dict__[key]:
                    return False
        return True

    def method(self):
        pass

first = A(field='foo')
second = A(field='bar')
print(first == second)
The output will be False
Edit:
I think the == causes the error. You can install deepdiff and modify your code to:
import deepdiff

class A:
    def __init__(self, field):
        self.methods = [self.method]
        self.field = field

    def __eq__(self, other):
        if type(self) != type(other):
            return False
        return deepdiff.DeepDiff(self.__dict__, other.__dict__) == {}

    def method(self):
        pass
Then,
A(field='foo') == A(field='bar') returns False
and
A(field='foo') == A(field='foo') returns True
Original Answer:
Try replacing
self.methods = [self.method]
with
self.methods = [A.method]
And the result is False
The issue you're running into is being caused by a very old bug in CPython. The good news is that it has already been fixed for Python 3.8 (which will soon be getting its first beta release).
To understand the issue, you need to understand how the equality check for methods from Python 2.5 through 3.7 worked. A bound method has a self and a func attribute. In the versions of Python where this bug is an issue, a comparison of two bound methods would compare both the func and the self values for Python-level equality (using the C-API equivalent to the Python == operator). With your class, this leads to infinite recursion, since the objects want to compare the bound methods stored in their methods lists, and the bound methods need to compare their self attributes.
The fixed code uses an identity comparison, rather than an equality comparison, for the self attribute of bound method objects. This has additional benefits, as methods of "equal" but not identical objects will no longer be considered equal when they shouldn't be. The motivating example was a set of callbacks. You might want your code to avoid calling the same callback several times if it was registered multiple times, but you wouldn't want to incorrectly skip over a callback just because it was bound to an equal (but not identical) object. For instance, consider two empty containers that each register their append method; you wouldn't want the two methods to be considered equal:
class MyContainer(list):  # inherits == from list, so empty containers compare equal
    def append(self, value):
        super().append(value)

callbacks = []

def register_callback(cb):
    if cb not in callbacks:  # this does an == test against all previously registered callbacks
        callbacks.append(cb)

def do_callbacks(*args):
    for cb in callbacks:
        cb(*args)

container1 = MyContainer()
register_callback(container1.append)

container2 = MyContainer()
register_callback(container2.append)

do_callbacks('foo')
print(container1 == container2)  # this should be True, if both callbacks got run
The print call at the end of the code will output False on Python 3.7 and earlier, but on Python 3.8, thanks to the bug fix, it will print True, as it should.
I'll post the solution I came up with (inspired by @devesh-kumar-singh's answer), however it does seem bitter-sweet.
def __eq__(self, other):
    if type(self) != type(other):
        return False
    for key in self.__dict__:
        try:
            if self.__dict__[key] != other.__dict__[key]:
                # one of the attributes differs, so the objects do too
                return False
        except RecursionError:
            # We stumbled upon an attribute that is somehow bound to self
            pass
    return True
The benefit over @tianbo-ji's solution is that it's faster if we find a difference in __dict__ values before we stumble upon a bound method. But if we don't, it's an order of magnitude slower.
I would like to know if there is a way to create a list that will execute some actions each time I use the method append (or another similar method).
I know that I could create a class that inherits from list and override append, remove and all the other methods that change the list's content, but I would like to know if there is another way.
By comparison, if I want to print 'edited' each time I edit an attribute of an object, I will not add print("edited") to every method of that object's class. Instead, I will only override __setattr__.
I tried to create my own type that inherits from list and overrides __setattr__, but that doesn't work: when I use myList.append, __setattr__ isn't called. What really happens when I use myList.append? Is there some magic method being called that I could override?
I know that the question has already been asked here: What happens when you call `append` on a list?. The answer given is just that there is no answer... I hope that's a mistake.
I don't know if there is an answer to my request, so I will also explain why I ran into this problem; maybe I can search in another direction to do what I want. I have a class with several attributes. When an attribute is edited, I want to execute some actions. As explained before, I usually override __setattr__ to do this. That works fine for most attributes. The problem is lists. If the attribute is used like myClass.myListAttr.append(something), __setattr__ isn't called even though the value of the attribute has changed.
The problem would be the same with dictionaries. Methods like pop don't call __setattr__.
If I understand correctly, you want something like a Notify_list that calls some method (an argument to the constructor in my implementation) every time a mutating method is called, so you could do something like this:
class Test:
    def __init__(self):
        self.list = Notify_list(self.list_changed)

    def list_changed(self, method):
        print("self.list.{} was called!".format(method))

>>> x = Test()
>>> x.list.append(5)
self.list.append was called!
>>> x.list.extend([1,2,3,4])
self.list.extend was called!
>>> x.list[1] = 6
self.list.__setitem__ was called!
>>> x.list
[5, 6, 2, 3, 4]
The most simple implementation of this would be to create a subclass and override every mutating method:
class Notifying_list(list):
    __slots__ = ("notify",)

    def __init__(self, notifying_method, *args, **kw):
        self.notify = notifying_method
        list.__init__(self, *args, **kw)

    def append(self, *args, **kw):
        self.notify("append")
        return list.append(self, *args, **kw)

    # etc.
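For reference, a complete runnable version of this manual approach, covering just two mutators (names are illustrative):

```python
class NotifyList(list):
    """list subclass that reports the name of each (covered) mutating call."""

    def __init__(self, notify, *args):
        super().__init__(*args)
        self.notify = notify

    def append(self, item):
        self.notify("append")
        super().append(item)

    def __setitem__(self, index, value):
        self.notify("__setitem__")
        super().__setitem__(index, value)

events = []
nl = NotifyList(events.append, [1, 2])
nl.append(3)
nl[0] = 9
print(nl)      # [9, 2, 3]
print(events)  # ['append', '__setitem__']
```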
This is obviously not very practical, writing the entire definition would be very tedious and very repetitive, so we can create the new subclass dynamically for any given class with functions like the following:
import functools
import types

def notify_wrapper(name, method):
    """wraps a method to call self.notify(name) when called

    used by notifying_type"""
    @functools.wraps(method)
    def wrapper(*args, **kw):
        self = args[0]
        # use object.__getattribute__ instead of self.notify in
        # case __getattribute__ is one of the notifying methods,
        # in which case self.notify would raise a RecursionError
        notify = object.__getattribute__(self, "_Notify__notify")
        # knowing which method was called seems useful;
        # you may want to change the arguments to the notify method
        notify(name)
        return method(*args, **kw)
    return wrapper
def notifying_type(cls, notifying_methods="all"):
    """creates a subclass of cls that adds an extra function call when calling certain methods

    The constructor of the subclass will take a callable as the first argument
    and arguments for the original class constructor after that.
    The callable will be called every time any of the methods specified in
    notifying_methods is called on the object; it is passed the name of the
    method as the only argument.

    If notifying_methods is left at the special value 'all' then this uses the
    function get_all_possible_method_names to create wrappers for nearly all
    methods."""
    if notifying_methods == "all":
        notifying_methods = get_all_possible_method_names(cls)

    def init_for_new_cls(self, notify_method, *args, **kw):
        self._Notify__notify = notify_method
        cls.__init__(self, *args, **kw)  # forward the rest to the wrapped class

    namespace = {"__init__": init_for_new_cls,
                 "__slots__": ("_Notify__notify",)}
    for name in notifying_methods:
        # if this raises an error then you are trying to wrap a method that doesn't exist
        method = getattr(cls, name)
        namespace[name] = notify_wrapper(name, method)

    # I figured using the type() constructor was easier than using a metaclass.
    return type("Notify_" + cls.__name__, (cls,), namespace)
unbound_method_or_descriptor = (
    types.FunctionType,
    type(list.append),   # method_descriptor, not in types
    type(list.__add__),  # method_wrapper, also not in types
)
def get_all_possible_method_names(cls):
    """generates the names of nearly all methods the given class defines

    three methods are blacklisted: __init__, __new__, and __getattribute__, for these reasons:

    __init__ conflicts with the one defined in notifying_type
    __new__ will not be called with an initialized instance, so there will not be a notify method to use
    __getattribute__ is fine to override, just really annoying in most cases.

    Note that this function may not work correctly in all cases;
    it was only tested with very simple classes and the builtin list."""
    blacklist = ("__init__", "__new__", "__getattribute__")
    for name, attr in vars(cls).items():
        if (name not in blacklist and
                isinstance(attr, unbound_method_or_descriptor)):
            yield name
Once we have notifying_type, creating Notify_list or Notify_dict is as simple as:
import collections.abc

mutating_list_methods = set(dir(collections.abc.MutableSequence)) - set(dir(collections.abc.Sequence))
Notify_list = notifying_type(list, mutating_list_methods)

mutating_dict_methods = set(dir(collections.abc.MutableMapping)) - set(dir(collections.abc.Mapping))
Notify_dict = notifying_type(dict, mutating_dict_methods)
I have not tested this extensively and it quite possibly contains bugs / unhandled corner cases but I do know it worked correctly with list!
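The dir() difference used above can be sanity-checked on its own. Note that on Python 3.3+ these ABCs live in collections.abc; the bare collections.MutableSequence spelling was removed in Python 3.10:

```python
from collections.abc import MutableSequence, Sequence

# the names MutableSequence adds over Sequence are exactly the mutating
# methods (plus dunders such as __setitem__ and __delitem__)
mutators = set(dir(MutableSequence)) - set(dir(Sequence))
print(sorted(m for m in mutators if not m.startswith('_')))
```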
This is a two-part query, which broadly relates to class attributes referencing mutable and immutable objects, and how these should be dealt with in code design. I have abstracted away the details to provide an example class below.
In this example, the class is designed for two instances which, through an instance method, can access a class attribute that references a mutable object (a list in this case). Each instance can "take" elements of this object into its own instance attribute by mutating the referenced object. If one instance "takes" an element of the class attribute, that element is subsequently unavailable to the other instance, which is the effect I wish to achieve. I find this a convenient way of avoiding the use of class methods, but is it bad practice?
Also in this example, there is a class method that reassigns an immutable object (a Boolean value, in this case) to a class attribute based on the state of an instance attribute. I can achieve this by using a class method with cls as the first argument and self as the second argument, but I’m not sure if this is correct. On the other hand, perhaps this is how I should be dealing with the first part of this query?
class Foo(object):
    mutable_attr = ['1', '2']
    immutable_attr = False

    def __init__(self):
        self.instance_attr = []

    def change_mutable(self):
        self.instance_attr.append(self.mutable_attr[0])
        self.mutable_attr.remove(self.mutable_attr[0])

    @classmethod
    def change_immutable(cls, self):
        if len(self.instance_attr) == 1:
            cls.immutable_attr = True

eggs = Foo()
spam = Foo()
If you want a class-level attribute (which, as you say, is "visible" to all instances of this class) using a class method like you show is fine. This is, mostly, a question of style and there are no clear answers here. So what you show is fine.
I just want to point out that you don't have to use a class method to accomplish your goal. To accomplish your goal this is also perfectly fine (and in my opinion, more standard):
class Foo(object):
    # ... same as it ever was ...

    def change_immutable(self):
        """If instance has list length of 1, change immutable_attr for all insts."""
        if len(self.instance_attr) == 1:
            type(self).immutable_attr = True
Or even:
def change_immutable(self):
    """If instance has list length of 1, change immutable_attr for all insts."""
    if len(self.instance_attr) == 1:
        Foo.immutable_attr = True
if that's what you want to do. The major point being that you are not forced into using a class method to get/set class level attributes.
The type builtin function (https://docs.python.org/2/library/functions.html#type) simply returns the class of an instance. For new style classes (most classes nowadays, ones that ultimately descend from object) type(self) is the same as self.__class__, but using type is the more idiomatic way to access an object's type.
You use type when you want to write code that gets an object's ultimate type, even if it's subclassed. This may or may not be what you want to do. For example, say you have this:
class Baz(Foo):
    pass

bazzer = Baz()
bazzer.change_mutable()
bazzer.change_immutable()
Then the code:
type(self).immutable_attr = True
changes the immutable_attr on the Baz class, not the Foo class. That may or may not be what you want; just be aware that only objects that descend from Baz see this. If you want to make it visible to all descendants of Foo, then the more appropriate code is:
Foo.immutable_attr = True
Hope this helps -- this question is a good one but a bit open ended. Again, major point being you are not forced to use class methods to set/get class attrs -- but not that there's anything wrong with that either :)
Just finally note the way you first wrote it:
@classmethod
def change_immutable(cls, self):
    if len(self.instance_attr) == 1:
        cls.immutable_attr = True
is like doing it the:
type(self).immutable_attr = True
way, because the cls variable will not necessarily be Foo if it's subclassed. If you for sure want to set it for all instances of Foo, then just setting the Foo class directly:
Foo.immutable_attr = True
is the way to go.
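The difference can be seen in a few lines (a sketch with hypothetical method names):

```python
class Foo:
    immutable_attr = False

    def set_via_type(self):
        type(self).immutable_attr = True  # lands on the instance's own class

    def set_via_foo(self):
        Foo.immutable_attr = True         # always lands on Foo itself

class Baz(Foo):
    pass

b = Baz()
b.set_via_type()
print(Foo.immutable_attr, Baz.immutable_attr)  # False True: only Baz was changed

b.set_via_foo()
print(Foo.immutable_attr)                      # True: now visible to all descendants
```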
This is one possibility:
class Foo(object):
__mutable_attr = ['1', '2']
__immutable_attr = False
def __init__(self):
self.instance_attr = []
def change_mutable(self):
self.instance_attr.append(self.__class__.__mutable_attr.pop(0))
if len(self.instance_attr) == 1:
self.__class__.__immutable_attr = True
#property
def immutable_attr(self):
return self.__class__.__immutable_attr
So a little bit of explanation:
1. I'm making it harder to access class attributes from the outside to protect them from accidental change by prefixing them with double underscore.
2. I'm doing pop() and append() in one line.
3. I'm setting the value for __immutable_attr immediately after modifying __mutable_attr if the condition is met.
4. I'm exposing immutable_attr as a read-only property to provide an easy way to check its value.
5. I'm using self.__class__ to access class of the instance - it's more readable than type(self) and gives us direct access to attributes with double underscore.
I periodically find myself attempting to slightly abuse some of the dynamism allowed by Python (I'm using Python 3 here, but there shouldn't be many differences).
In this case, I wanted to split a single test_ method in my unittest.TestCase into several methods created at runtime.
(this was a Kata about roman numerals, but I actually didn't TDD: I wrote the test later)
This is the test:
class TestNumbers(TestCase):
    def test_number(self):
        for i in range(3000):
            self.assertEqual(i, roman_to_int(int_to_roman(i)))
this is how I tried to write it:
from functools import partial
from types import MethodType

class TestNumbers(TestCase):
    pass

def test_number(self, n):
    self.assertEqual(n, roman_to_int(int_to_roman(n)))

for i in range(3000):
    name = str(i)
    test_method = MethodType(partial(test_number, n=i), TestNumbers)
    setattr(TestNumbers, "test" + name, test_method)
(alternatively, I also tried to dynamically create lots of TestCase subclasses and setattr(globals(), ...) them)
I know this doesn't really have much purpose, it's also probably slower, etc., but this is just a POC and I'm trying to understand how I can get it to work.
By using MethodType, the test becomes a bound method, but inside it, assertEqual apparently becomes a plain function, and trying to call it fails with TypeError: assertEqual() takes at least 3 arguments (2 given)
I tried to change test_number to
def test_number(self, n):
    self.assertEqual(self, n, roman_to_int(int_to_roman(n)))
but this will only unearth the same problem deeper in hidden TestCase methods: TypeError: _getAssertEqualityFunc() takes exactly 3 arguments (2 given)
I looked here on Stack Overflow and found similar questions (like Python: Bind an Unbound Method?), but none of them deal with binding a method that, inside, calls other bound methods of the target class.
I also tried to look into metaclasses (http://docs.python.org/py3k/reference/datamodel.html#customizing-class-creation), but they don't seem to match what I'm trying to do.
On Python 2 there are functions, unbound methods and bound methods. Binding a method to a class as if the class were an instance doesn't make it an unbound method; it makes it the equivalent of a classmethod or a metaclass method.
On Python 3 there are no longer bound and unbound methods, just functions and methods, so if you're getting assertEqual as a function, it means your testx method is not being bound to the instance, and that's the real problem.
On Python 2 all you have to do is pass None as the instance in the MethodType call and it will work.
So, replace:
test_method = MethodType(partial(test_number, n=i), TestNumbers)
For:
test_method = MethodType(partial(test_number, n=i), None, TestNumbers)
On Python 3 just assigning the function to the class would work, like the other answer suggests, but the real issue in your case is that partial objects don't become methods.
An easy solution for your case is to use lambda instead of partial.
Instead of:
test_method = MethodType(partial(test_number, n=i), TestNumbers)
Use:
test_method = lambda self, i=i: test_number(self, i)  # i=i pins the current value; a bare closure would see the loop's final i
And it should work...
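One caveat with lambdas (or any closure) created in a loop: Python closures are late-binding, so each lambda should pin the current i, e.g. with a default argument, or every generated test will see the loop's final value:

```python
# late binding: every closure reads i after the loop has finished
late = [lambda: i for i in range(3)]
print([f() for f in late])    # [2, 2, 2]

# a default argument evaluates i at definition time, pinning each value
pinned = [lambda i=i: i for i in range(3)]
print([f() for f in pinned])  # [0, 1, 2]
```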
The really neat solution would be to rewrite partial so that it returns a real function with the parameters you want. You can create an instance of FunctionType with everything from the old function plus the extra default argument. Something like this:
from types import FunctionType

code = test_number.__code__
func_globals = test_number.__globals__
closure = test_number.__closure__
testx = FunctionType(code, func_globals, "test" + name, (i,), closure)
setattr(TestNumbers, "test" + name, testx)
If you're adding the method directly to the class then there's no need to bind it yourself.
class C(object):
    def __init__(self):
        self.foo = 42

def getfoo(self):
    return self.foo

C.getfoo = getfoo

c = C()
print(c.getfoo())
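Putting the pieces together, a self-contained sketch of generating test methods this way (using trivial stand-ins for the roman-numeral functions, which aren't defined here):

```python
import unittest

def int_to_roman_stub(n):     # stand-in for the real int_to_roman
    return str(n)

def roman_to_int_stub(s):     # stand-in for the real roman_to_int
    return int(s)

def make_test(n):
    # a factory function pins n per test and avoids the late-binding pitfall
    def test(self):
        self.assertEqual(n, roman_to_int_stub(int_to_roman_stub(n)))
    return test

class TestNumbers(unittest.TestCase):
    pass

for i in range(5):
    # plain functions become bound methods automatically when looked up
    # on an instance, so no MethodType call is needed
    setattr(TestNumbers, "test_%d" % i, make_test(i))
```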