Here is the code I am using:
import pickle

import funcy

@funcy.memoize
class mystery(object):
    def __init__(self, num):
        self.num = num

feat = mystery(1)
with open('num.pickle', 'wb') as f:
    pickle.dump(feat, f)
Which is giving me the following error:
PicklingError: Can't pickle <class '__main__.mystery'>: it's not the
same object as __main__.mystery
I am hoping to 1) understand why this is happening, and 2) find a solution that allows me to pickle the object (without removing the memoization). Ideally the solution would not change the call to pickle.
Running Python 3.6 with funcy==1.10.
The problem is that you've applied a decorator designed for functions to a class. The result is not a class, but a function that wraps up a call to the class. This causes a number of problems (e.g., as pointed out by Aran-Fey in the comments, you can't isinstance(feat, mystery), because mystery is a function rather than a class).
But the particular problem you care about is that you can't pickle instances of inaccessible classes.
In fact, that's basically what the error message is telling you:
PicklingError: Can't pickle <class '__main__.mystery'>: it's not the
same object as __main__.mystery
Your feat thinks its type is __main__.mystery, but that isn't a type at all, it's the function returned by the decorator that wraps that type.
The easy way to fix this would be to find a class decorator that does what you want. It might be called something like flyweight instead of memoize, but I'm sure plenty of examples exist.
But you can build a flyweight class by just memoizing the constructor, instead of memoizing the class:
class mystery:
    @funcy.memoize
    def __new__(cls, num):
        return super().__new__(cls)
    def __init__(self, num):
        self.num = num
… although you probably want to move the initialization into the constructor in that case. Otherwise, calling mystery(1) and then mystery(1) will return the same object as before, but also reinitialize it with self.num = 1, which is at best wasteful, and at worst incorrect. So:
class mystery:
    @funcy.memoize
    def __new__(cls, num):
        self = super().__new__(cls)
        self.num = num
        return self
And now:
>>> feat = mystery(1)
>>> feat
<__main__.mystery at 0x10eeb1278>
>>> mystery(2)
<__main__.mystery at 0x10eeb2c18>
>>> mystery(1)
<__main__.mystery at 0x10eeb1278>
And, because the type of feat is now a class that's accessible under the module-global name mystery, pickle will have no problem with it at all:
>>> pickle.dumps(feat)
b'\x80\x03c__main__\nmystery\nq\x00)\x81q\x01}q\x02X\x03\x00\x00\x00numq\x03K\x01sb.'
You do still want to think about how this class should play with pickling. In particular, do you want unpickling to go through the cache? By default, it doesn't:
>>> pickle.loads(pickle.dumps(feat)) is feat
False
What's happening is that it's using the default __reduce_ex__ for pickling, which defaults to doing the equivalent of (only slightly oversimplified):
result = object.__new__(__main__.mystery)
result.__dict__.update({'num': 1})
If you want it to go through the cache, the simplest solution is this:
class mystery:
    @funcy.memoize
    def __new__(cls, num):
        self = super().__new__(cls)
        self.num = num
        return self
    def __reduce__(self):
        return (type(self), (self.num,))
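As a self-contained sketch of that round trip, here is the same idea with functools.lru_cache standing in for funcy.memoize (both cache on the constructor arguments), so you can verify that unpickling now goes through the cache:

```python
import pickle
from functools import lru_cache


class mystery:
    @lru_cache(maxsize=None)  # stand-in for funcy.memoize
    def __new__(cls, num):
        self = super().__new__(cls)
        self.num = num
        return self

    def __reduce__(self):
        # rebuild by calling mystery(num), which goes through the cache
        return (type(self), (self.num,))


feat = mystery(1)
print(pickle.loads(pickle.dumps(feat)) is feat)  # True
```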
If you plan to do this a lot, you might think of writing your own class decorator:
def memoclass(cls):
    @funcy.memoize
    def __new__(cls, *args, **kwargs):
        return super(cls, cls).__new__(cls)
    cls.__new__ = __new__
    return cls
But this:
… is kind of ugly,
… only works with classes that don't need to pass constructor arguments to a base class,
… only works with classes that don't have an __init__ (or, at least, that have an idempotent and fast __init__ that's harmless to call repeatedly),
… doesn't provide an easy way to hook pickling, and
… doesn't document or test any of those restrictions.
So, I think you're better off being explicit and just memoizing the __new__ method, or writing (or finding) something a lot fancier that does the introspection needed to make memoizing a class this way fully general. (Or, alternatively, maybe write one that only works with some restricted set of classes—e.g., a @memodataclass that's just like @dataclass but with a memoized constructor would be a lot easier than a fully general @memoclass.)
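As a rough illustration of that last idea, here is a hypothetical @memodataclass sketch (my own construction, not a library API), built from @dataclass and functools.lru_cache. It only handles hashable arguments, and positional vs. keyword spellings of the same call cache separately:

```python
from dataclasses import dataclass
from functools import lru_cache


def memodataclass(cls):
    # Hypothetical sketch: freeze the dataclass, then memoize its __new__
    # so that equal constructor arguments yield the very same instance.
    cls = dataclass(frozen=True)(cls)

    @lru_cache(maxsize=None)
    def _cached_new(c, *args, **kwargs):
        # note: Point(1, 2) and Point(x=1, y=2) cache under different keys
        return object.__new__(c)

    cls.__new__ = staticmethod(_cached_new)
    return cls


@memodataclass
class Point:
    x: int
    y: int


print(Point(1, 2) is Point(1, 2))  # True: same cached instance
print(Point(1, 2) is Point(3, 4))  # False
```

The frozen dataclass's generated __init__ re-runs on every cache hit, but it re-assigns the same values, so the waste is harmless here.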
Another approach is:
class _mystery(object):
    def __init__(self, num):
        self.num = num

@funcy.memoize
def mystery(num):
    return _mystery(num)
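A quick sketch of why this satisfies pickle (again with functools.lru_cache standing in for funcy.memoize): the instances are plain _mystery objects, and _mystery is accessible by name at module level. Note that unpickling does not go back through the memoized factory unless you also hook __reduce__:

```python
import pickle
from functools import lru_cache


class _mystery(object):
    def __init__(self, num):
        self.num = num


@lru_cache(maxsize=None)  # stand-in for funcy.memoize
def mystery(num):
    return _mystery(num)


feat = mystery(1)
print(mystery(1) is feat)  # True: repeated calls hit the cache

# The instance's class is the module-global _mystery, so pickling works;
# unpickling builds a fresh object and does NOT consult the cache.
restored = pickle.loads(pickle.dumps(feat))
print(restored is feat, restored.num)  # False 1
```

The trade-off is the one noted earlier: mystery is now a function, so isinstance(feat, mystery) still won't work.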
In a Python class, I would like to automatically assign member variables to be the same as the __init__ function arguments, like this:
class Foo(object):
    def __init__(self, arg1, arg2=1):
        self.arg1 = arg1
        self.arg2 = arg2
I would like to explicitly have argument names in the init
function for the sake of code clarity.
I don't want to use decorators for the same reason.
Is it possible to achieve this using a custom metaclass?
First, a disclaimer. Python object creation and initialization can be complicated and highly dynamic. This means that it can be difficult to come up with a solution that works for the corner cases. Moreover, the solutions tend to use some darker magic, and so when they inevitably do go wrong they can be hard to debug.
Second, the fact that your class has so many initialization parameters might be a hint that it has too many parameters. Some of them are probably related and can fit together in a smaller class. For example, if I'm building a car, it's better to have:
class Car:
    def __init__(self, tires, engine):
        self.tires = tires
        self.engine = engine

class Tire:
    def __init__(self, radius, winter=False):
        self.radius = radius
        self.winter = winter

class Engine:
    def __init__(self, big=True, loud=True):
        self.big = big
        self.loud = loud
as opposed to
class Car:
    def __init__(self, tire_radius, winter_tires=False,
                 engine_big=True, engine_loud=True):
        self.tire_radius = tire_radius
        self.winter_tires = winter_tires
        self.engine_big = engine_big
        self.engine_loud = engine_loud
All of that said, here is a solution. I haven't used this in my own code, so it isn't "battle-tested". But it at least appears to work in the simple case. Use at your own risk.
First, metaclasses aren't necessary here, and we can use a simple decorator on the __init__ method. I think this is more readable anyway, since it is clear that we are only modifying the behavior of __init__, and not something deeper about class creation.
import inspect
import functools

def unpack(__init__):
    sig = inspect.signature(__init__)

    @functools.wraps(__init__)
    def __init_wrapped__(self, *args, **kwargs):
        bound = sig.bind(self, *args, **kwargs)
        bound.apply_defaults()
        # the first entry is the instance and should not be set;
        # discard it, taking only the rest
        attrs = list(bound.arguments.items())[1:]
        for attr, value in attrs:
            setattr(self, attr, value)
        return __init__(self, *args, **kwargs)

    return __init_wrapped__
This decorator uses the inspect module to retrieve the signature of the __init__ method. Then we simply loop through the attributes and use setattr to assign them.
In use, it looks like:
class Foo(object):
    @unpack
    def __init__(self, a, b=88):
        print('This still runs!')
so that
>>> foo = Foo(42)
This still runs!
>>> foo.a
42
>>> foo.b
88
I am not certain that every introspection tool will see the right signature of the decorated __init__. In particular, I'm not sure if Sphinx will do the "right thing". But at least the inspect module's signature function will return the signature of the wrapped function, as can be tested.
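That check can be run directly. Here is the decorator from above condensed into one self-contained snippet, plus the signature check (inspect.signature follows the __wrapped__ attribute that functools.wraps sets):

```python
import functools
import inspect


def unpack(init):
    sig = inspect.signature(init)

    @functools.wraps(init)
    def wrapped(self, *args, **kwargs):
        bound = sig.bind(self, *args, **kwargs)
        bound.apply_defaults()
        # skip the first entry (the instance itself)
        for attr, value in list(bound.arguments.items())[1:]:
            setattr(self, attr, value)
        return init(self, *args, **kwargs)

    return wrapped


class Foo:
    @unpack
    def __init__(self, a, b=88):
        pass


# inspect reports the wrapped function's signature, not (*args, **kwargs):
print(inspect.signature(Foo.__init__))  # (self, a, b=88)
```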
If you really want a metaclass solution, it's simple enough (but less readable and more magic, IMO). You need only write a class factory that applies the unpack decorator:
def unpackmeta(clsname, bases, dct):
    dct['__init__'] = unpack(dct['__init__'])
    return type(clsname, bases, dct)

class Foo(metaclass=unpackmeta):
    def __init__(self, a, b=88):
        print('This still runs!')
The output will be the same as the above example.
Introduction
I have a Python class, which contains a number of methods. I want one of those methods to have a static counterpart—that is, a static method with the same name—which can handle more arguments. After some searching, I have found that I can use the #staticmethod decorator to create a static method.
Problem
For convenience, I have created a reduced test case which reproduces the issue:
class myclass:
    @staticmethod
    def foo():
        return 'static method'

    def foo(self):
        return 'public method'

obj = myclass()
print(obj.foo())
print(myclass.foo())
I expect that the code above will print the following:
public method
static method
However, the code prints the following:
public method
Traceback (most recent call last):
File "sandbox.py", line 14, in <module>
print(myclass.foo())
TypeError: foo() missing 1 required positional argument: 'self'
From this, I can only assume that calling myclass.foo() tries to call its non-static counterpart with no arguments (which won't work because non-static methods always accept the argument self). This behavior baffles me, because I expect any call to the static method to actually call the static method.
I've tested the issue in both Python 2.7 and 3.3, only to receive the same error.
Questions
Why does this happen, and what can I do to fix my code so it prints:
public method
static method
as I would expect?
While it's not strictly possible to do, as rightly pointed out, you could always "fake" it by redefining the method on instantiation, like this:
class YourClass(object):
    def __init__(self):
        self.foo = self._instance_foo

    @staticmethod
    def foo():
        print("Static!")

    def _instance_foo(self):
        print("Instance!")
which would produce the desired result:
>>> YourClass.foo()
Static!
>>> your_instance = YourClass()
>>> your_instance.foo()
Instance!
A similar question is here: override methods with same name in python programming
Functions are looked up by name, so you are just redefining foo with an instance method. There is no such thing as an overloaded function in Python. You either write a new function with a separate name, or you provide the arguments in such a way that it can handle the logic for both.
In other words, you can't have a static version and an instance version of the same name. If you look at its vars you'll see one foo.
In [1]: class Test:
   ...:     @staticmethod
   ...:     def foo():
   ...:         print 'static'
   ...:     def foo(self):
   ...:         print 'instance'
   ...:
In [2]: t = Test()
In [3]: t.foo()
instance
In [6]: vars(Test)
Out[6]: {'__doc__': None, '__module__': '__main__', 'foo': <function __main__.foo>}
Because attribute lookup in Python is something within the programmer's control, this sort of thing is technically possible. If you put any value into writing code in a "pythonic" way (using the preferred conventions and idioms of the python community), it is very likely the wrong way to frame a problem / design. But if you know how descriptors can allow you to control attribute lookup, and how functions become bound functions (hint: functions are descriptors), you can accomplish code that is roughly what you want.
For a given name, there is only one object that will be looked up on a class, regardless of whether you are looking the name up on an instance of the class, or the class itself. Thus, the thing that you're looking up has to deal with the two cases, and dispatch appropriately.
(Note: this isn't exactly true; if an instance has a name in its attribute namespace that collides with one in the namespace of its class, the value on the instance will win in some circumstances. But even in those circumstances, it won't become a "bound method" in the way that you probably would wish it to.)
I don't recommend designing your program using a technique such as this, but the following will do roughly what you asked. Understanding how this works requires a relatively deep understanding of python as a language.
class StaticOrInstanceDescriptor(object):
    def __get__(self, cls, inst):
        if cls is None:
            return self.static
        else:
            return self.instance.__get__(cls)

    def __init__(self, static):
        self.static = static

    def instance(self, instance):
        self.instance = instance
        return self
class MyClass(object):
    @StaticOrInstanceDescriptor
    def foo():
        return 'static method'

    @foo.instance
    def foo(self):
        return 'public method'

obj = MyClass()
print(obj.foo())
print(MyClass.foo())
which does print out:
% python /tmp/sandbox.py
public method
static method
Ended up here from google so thought I would post my solution to this "problem"...
class Test():
    def test_method(self=None):
        if self is None:
            print("static bit")
        else:
            print("instance bit")
This way you can use the method like a static method or like an instance method.
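A sketch of that usage, as a variant of my own that returns strings instead of printing (for easy checking); it also shows the trick's main caveat:

```python
class Test:
    def test_method(self=None):
        # None means "called on the class"; anything else is assumed
        # to be an instance
        if self is None:
            return "static bit"
        return "instance bit"


print(Test.test_method())    # static bit
print(Test().test_method())  # instance bit

# Caveat: ANY first positional argument looks like an instance,
# so passing a stray argument silently takes the instance branch:
print(Test.test_method("oops"))  # instance bit
```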
When you try to call MyClass.foo(), Python will complain since you did not pass the one required self argument. @coderpatros's answer has the right idea, where we provide a default value for self, so it's no longer required. However, that won't work if there are additional arguments besides self. Here's a function that can handle almost all types of method signatures:
import inspect
from functools import wraps

def class_overload(cls, methods):
    """Add classmethod overloads to one or more instance methods."""
    def make_overloaded(func, pos_args, kwd_args):
        # build the wrapper in a helper so each closure binds its own
        # func/pos_args/kwd_args instead of sharing the loop variables
        @wraps(func)
        def overloaded(*args, **kwargs):
            # most common case: missing positional arg or 1st arg is not a cls instance
            isclass = len(args) < pos_args or not isinstance(args[0], cls)
            # handle ambiguous signatures, func(self, arg:cls, *args, **kwargs);
            # check if a required positional-or-keyword arg is missing
            if not isclass:
                for i in range(len(args) - pos_args, len(kwd_args)):
                    if kwd_args[i] not in kwargs:
                        isclass = True
                        break
            if isclass:
                # class method
                return func(cls, *args, **kwargs)
            # instance method
            return func(*args, **kwargs)
        return overloaded

    for name in methods:
        func = getattr(cls, name)
        # count required positional arguments
        pos_args = 1   # start at one, as we assume "self" is positional-only
        kwd_args = []  # names of required positional-or-keyword args
        sig = iter(inspect.signature(func).parameters.values())
        next(sig)  # skip "self"
        for s in sig:
            if s.default is s.empty:
                if s.kind == s.POSITIONAL_ONLY:
                    pos_args += 1
                    continue
                elif s.kind == s.POSITIONAL_OR_KEYWORD:
                    kwd_args.append(s.name)
                    continue
            break
        setattr(cls, name, make_overloaded(func, pos_args, kwd_args))
class Foo:
    def foo(self, *args, **kwargs):
        isclass = self is Foo
        print("foo {} method called".format(["instance", "class"][isclass]))

class_overload(Foo, ["foo"])
Foo.foo()    # "foo class method called"
Foo().foo()  # "foo instance method called"
You can use the isclass bool to implement the different logic for class vs instance method.
The class_overload function is a bit beefy and will need to inspect the signature when the class is declared. But the actual logic in the runtime decorator (overloaded) should be quite fast.
There's one signature that this solution won't work for: a method with an optional, first, positional argument of type Foo. It's impossible to tell if we are calling the static or instance method just by the signature in this case. For example:
def bad_foo(self, other: Foo = None):
    ...

bad_foo(f)  # f.bad_foo(None) or Foo.bad_foo(f) ???
Note, this solution may also report an incorrect isclass value if you pass in incorrect arguments to the method (a programmer error, so may not be important to you).
We can get a possibly more robust solution by doing the reverse of this: first start with a classmethod, and then create an instance method overload of it. This is essentially the same idea as @Dologan's answer, though I think mine is a little less boilerplatey if you need to do this on several methods:
from types import MethodType

def instance_overload(self, methods):
    """Add instance overloads for one or more classmethods."""
    for name in methods:
        setattr(self, name, MethodType(getattr(self, name).__func__, self))

class Foo:
    def __init__(self):
        instance_overload(self, ["foo"])

    @classmethod
    def foo(self, *args, **kwargs):
        isclass = self is Foo
        print("foo {} method called".format(["instance", "class"][isclass]))

Foo.foo()    # "foo class method called"
Foo().foo()  # "foo instance method called"
Not counting the code for class_overload or instance_overload, the code is equally succinct. Often signature introspection is touted as the "pythonic" way to do these kinds of things. But I think I'd recommend using the instance_overload solution instead; isclass will be correct for any method signature, including cases where you call with incorrect arguments (a programmer error).
I'd like to be able to do this:
class A(object):
    @staticandinstancemethod
    def B(self=None, x, y):
        print self is None and "static" or "instance"

A.B(1,2)
A().B(1,2)
This seems like a problem that should have a simple solution, but I can't think of or find one.
It is possible, but please don't. I couldn't help but implement it though:
class staticandinstancemethod(object):
    def __init__(self, f):
        self.f = f
    def __get__(self, obj, klass=None):
        def newfunc(*args, **kw):
            return self.f(obj, *args, **kw)
        return newfunc
...and its use:
>>> class A(object):
...     @staticandinstancemethod
...     def B(self, x, y):
...         print self is None and "static" or "instance"
>>> A.B(1,2)
static
>>> A().B(1,2)
instance
Evil!
Since you'd like the static method case to be used to create a new class anyway, you'd best just make it a normal method and call it at the end of the __init__ method.
Or, if you don't want that, create a separate factory function outside the class that will instantiate a new, empty object, and call the desired method on it.
There probably are ways of making exactly what you are asking for, but they will wander through the inner mechanisms of Python, be confusing, incompatible across python 2.x and 3.x - and I can't see a real need for it.
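A minimal sketch of that factory-function suggestion, with illustrative names of my own (Report and new_report are not from the question):

```python
class Report:
    def __init__(self, data=None):
        self.data = data

    def build(self):
        # instance method: decides based on the object's state
        return "editing form" if self.data else "creation form"


def new_report():
    # module-level factory playing the "static" role:
    # instantiate an empty object, then call the method on it
    return Report().build()


print(new_report())              # creation form
print(Report({"x": 1}).build())  # editing form
```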
From what you're saying, is this along the line of what you're looking for?
I'm not sure there is a way to do exactly what you're saying that is "built in"
class Foo(object):
    def __init__(self, a=None, b=None):
        self.a = a
        self.b = b

    def Foo(self):
        if self.a is None and self.b is None:
            form = CreationForm()
        else:
            form = EditingForm()
        return form
The answer to your question is no, you can't do that.
What I would do, since Python also supports regular functions, is define a function outside that class, then call that function from a normal method. The caller can decide which one is needed.
I have a class that dynamically overloads basic arithmetic operators like so...
import operator

class IshyNum:
    def __init__(self, n):
        self.num = n
        self.buildArith()

    def arithmetic(self, other, o):
        return o(self.num, other)

    def buildArith(self):
        map(lambda o: setattr(self, "__%s__" % o, lambda f: self.arithmetic(f, getattr(operator, o))), ["add", "sub", "mul", "div"])

if __name__ == "__main__":
    number = IshyNum(5)
    print number + 5
    print number / 2
    print number * 3
    print number - 3
But if I change the class to inherit from the dictionary (class IshyNum(dict):) it doesn't work. I need to explicitly def __add__(self, other) or whatever in order for this to work. Why?
The answer is found in the two types of class that Python has.
The first code-snippet you provided uses a legacy "old-style" class (you can tell because it doesn't subclass anything - there's nothing before the colon). Its semantics are peculiar. In particular, you can add a special method to an instance:
class Foo:
    def __init__(self, num):
        self.num = num

        def _fn(other):
            return self.num + other.num

        self.__add__ = _fn
and get a valid response:
>>> f = Foo(2)
>>> g = Foo(1)
>>> f + g
3
But, subclassing dict means you are generating a new-style class. And the semantics of operator overloading are different:
class Foo(object):
    def __init__(self, num):
        self.num = num

        def _fn(other):
            return self.num + other.num

        self.__add__ = _fn
>>> f = Foo(2)
>>> g = Foo(1)
>>> f + g
Traceback ...
TypeError: unsupported operand type(s) for +: 'Foo' and 'Foo'
To make this work with new-style classes (which includes subclasses of dict or just about any other type you will find), you have to make sure the special method is defined on the class. You can do this through a metaclass:
class _MetaFoo(type):
    def __init__(cls, name, bases, args):
        def _fn(self, other):
            return self.num + other.num
        cls.__add__ = _fn

class Foo(object):
    __metaclass__ = _MetaFoo

    def __init__(self, num):
        self.num = num
>>> f = Foo(2)
>>> g = Foo(1)
>>> f+g
3
Also, the semantic difference means that in the very first case I could define my local add method with one argument (the self it uses is captured from the surrounding scope in which it is defined), but with new-style classes, Python expects to pass in both values explicitly, so the inner function has two arguments.
As a previous commenter mentioned, it's best to avoid old-style classes if possible and stick with new-style classes (old-style classes are removed in Python 3+). It's unfortunate that old-style classes happened to work for you in this case, where new-style classes will require more code.
Edit:
You can also do this more in the way you originally tried by setting the method on the class rather than the instance:
class Foo(object):
    def __init__(self, num):
        self.num = num

setattr(Foo, '__add__', (lambda self, other: self.num + other.num))
>>> f = Foo(2)
>>> g = Foo(1)
>>> f+g
3
I'm afraid I sometimes think in Metaclasses, where simpler solutions would be better :)
In general, never set __ methods on the instance -- they're only supported on the class. (In this instance, the problem is that they happen to work on old-style classes. Don't use old-style classes).
You probably want to use a metaclass, not the weird thing you're doing here.
Here's a metaclass tutorial: http://www.voidspace.org.uk/python/articles/metaclasses.shtml
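The first point is easy to demonstrate; a minimal sketch in modern Python 3 (where every class is new-style):

```python
class C(object):
    pass


c = C()
c.__add__ = lambda other: 42  # set on the instance: the + operator never sees it
try:
    c + 1
except TypeError:
    # special methods are looked up on the type, not the instance
    print("instance __add__ was ignored")

C.__add__ = lambda self, other: 42  # set on the class: used by +
print(c + 1)  # 42
```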
I do not understand what you are trying to accomplish, but I am almost certain you are going about it in the wrong way. Some of my observations:
I don't see why you're trying to dynamically generate those arithmetic methods. You don't do anything instance-specific with them, so I don't see why you would not just define them on the class.
The only reason they work at all is because IshyNum is an old-style class; this isn't a good thing, since old-style classes are long-deprecated and not as nice as new-style classes. (I'll explain later why you should be especially interested in this.)
If you wanted to automate the process of doing the same thing for multiple methods (probably not worth it in this case), you could just do this right after the class definition block.
Don't use map to do that. map is for making a list; using it for side effects is silly. Just use a normal for loop.
If you want to automatically delegate lots of methods to the same attribute when using composition, use __getattr__ and redirect to that attribute's methods.
Don't inherit from dict. There is not much to gain from inheriting built-in types; it turns out to be more confusing than it's worth, and you don't get to re-use much.
If your code above is anything close to the stuff in your post, you really don't want to inherit dict. If it's not, try posting your real use case.
Here is what you really wanted to know:
When you inherit dict, you are making a new-style class. IshyNum is an old-style class because it doesn't inherit object (or one of its subclasses).
New-style classes have been Python's flagship kind of class for a decade and are what you want to use. In this case, they actually cause your technique no longer to work. This is fine, though, since there is no reason in the code you posted to set magic methods on a per-instance level and little reason ever to want to.
For new-style classes, Python does not check the instance for an __add__ method when performing an addition; it checks the class instead. The problem is that you are binding the __add__ method (and all the others) to the instance as a bound method and not to the class as an unbound method. (This is true of the other special methods as well: you can attach them only to the class, not to an instance.) So, you'll probably want to use a metaclass to achieve this functionality (although I think this is a very awkward thing to do, as it is much more readable to spell out these methods explicitly). Anyway, here is an example with metaclasses:
import operator

class OperatorMeta(type):
    def __new__(mcs, name, bases, attrs):
        for opname in ["add", "sub", "mul", "div"]:
            op = getattr(operator, opname)
            attrs["__%s__" % opname] = mcs._arithmetic_func_factory(op)
        return type.__new__(mcs, name, bases, attrs)

    @staticmethod
    def _arithmetic_func_factory(op):
        def func(self, other):
            return op(self.num, other)
        return func

class IshyNum(dict):
    __metaclass__ = OperatorMeta

    def __init__(self, n):
        dict.__init__(self)
        self.num = n

if __name__ == "__main__":
    number = IshyNum(5)
    print number + 5
    print number / 2
    print number * 3
    print number - 3
I want to write a decorator that enables methods of classes to become visible to other parties; the problem I am describing is, however, independent of that detail. The code will look roughly like this:
def CLASS_WHERE_METHOD_IS_DEFINED(method):
    ???

def foobar(method):
    print(CLASS_WHERE_METHOD_IS_DEFINED(method))

class X:
    @foobar
    def f(self, x):
        return x ** 2
My problem here is that at the very moment the decorator, foobar(), gets to see the method, it is not yet callable as a method; instead, it sees a plain, unbound function that carries no link to its class. Maybe this can be resolved by using another decorator on the class that will take care of whatever has to be done to the bound method. The next thing I will try is to simply earmark the decorated method with an attribute when it goes through the decorator, and then use a class decorator or a metaclass to do the postprocessing. If I get that to work, then I do not have to solve this riddle, which still puzzles me:
Can anyone, in the above code, fill out meaningful lines under CLASS_WHERE_METHOD_IS_DEFINED so that the decorator can actually print out the class where f is defined, the moment it gets defined? Or is that possibility precluded in Python 3?
When the decorator is called, it's called with a function as its argument, not a method -- therefore it will avail nothing to the decorator to examine and introspect its method as much as it wants to, because it's only a function and carries no information whatsoever about the enclosing class. I hope this solves your "riddle", although in the negative sense!
Other approaches might be tried, such as deep introspection on nested stack frames, but they're hacky, fragile, and sure not to carry over to other implementations of Python 3 such as pynie; I would therefore heartily recommend avoiding them, in favor of the class-decorator solution that you're already considering and is much cleaner and more solid.
As I mentioned in some other answers, since Python 3.6 the solution to this problem is very easy thanks to object.__set_name__ which gets called with the class object that is being defined.
We can use it to define a decorator that has access to the class in the following way:
class class_decorator:
    def __init__(self, fn):
        self.fn = fn

    def __set_name__(self, owner, name):
        # do something with "owner" (i.e. the class)
        print(f"decorating {self.fn} and using {owner}")
        # then replace ourself with the original method
        setattr(owner, name, self.fn)
Which can then be used as a normal decorator:
>>> class A:
...     @class_decorator
...     def hello(self, x=42):
...         return x
...
decorating <function A.hello at 0x7f9bedf66bf8> and using <class '__main__.A'>
>>> A.hello
<function __main__.A.hello(self, x=42)>
This is a very old post, but introspection isn't the way to solve this problem, because it can be more easily solved with a metaclass and a bit of clever class construction logic using descriptors.
import types

# a descriptor used as a decorator
class foobar(object):
    owned_by = None

    def __init__(self, func):
        self.func = func

    def __call__(self, *args, **kwargs):
        # a proxy for `func` that gets used when
        # `foobar` is called through the class
        return self.func(*args, **kwargs)

    def __get__(self, inst, cls=None):
        if inst is not None:
            # return a bound method when `foobar` is referenced
            # from an instance (the two-argument MethodType form
            # works on both Python 2 and 3)
            return types.MethodType(self.func, inst)
        else:
            return self

    def init_self(self, name, cls):
        print("I am named '%s' and owned by %r" % (name, cls))
        self.named_as = name
        self.owned_by = cls

    def init_cls(self, cls):
        print("I exist in the mro of %r instances" % cls)
        # don't set `self.owned_by` here because
        # this descriptor exists in the mro of
        # many classes, but is only owned by one.
        print('')
The key to making this work is the metaclass - it searches through the attributes defined on the classes it creates to find foobar descriptors. Once it does, it passes them information about the classes they are involved in through the descriptor's init_self and init_cls methods.
init_self is called only for the class on which the descriptor is defined. This is where modifications to foobar itself should be made, because the method is called only once. init_cls, by contrast, is called for every class that has access to the decorated method; this is where modifications to the classes that can reference foobar should be made.
import inspect

class MetaX(type):
    def __init__(cls, name, bases, classdict):
        # The classdict contains all the attributes
        # defined on **this** class - no attribute in
        # the classdict is inherited from a parent.
        for k, v in classdict.items():
            if isinstance(v, foobar):
                v.init_self(k, cls)

        # getmembers retrieves all attributes,
        # including those inherited from parents
        for k, v in inspect.getmembers(cls):
            if isinstance(v, foobar):
                v.init_cls(cls)
Example:
# for compatibility
import six

class X(six.with_metaclass(MetaX, object)):
    def __init__(self):
        self.value = 1

    @foobar
    def f(self, x):
        return self.value + x**2

class Y(X): pass

# PRINTS:
# I am named 'f' and owned by <class '__main__.X'>
# I exist in the mro of <class '__main__.X'> instances
# I exist in the mro of <class '__main__.Y'> instances

print('CLASS CONSTRUCTION OVER\n')
print(Y().f(3))
# PRINTS:
# 10