I've been trying to pickle an object which contains references to static class methods.
Pickling fails (for example on module.MyClass.foo), stating that the method cannot be pickled because module.foo does not exist.
I have come up with the following solution, using a wrapper object to locate the function upon invocation, saving the container class and function name:
class PicklableStaticMethod(object):
    """Picklable version of a static method.

    Typical usage:

    class MyClass:
        @staticmethod
        def doit():
            print "done"

    # This cannot be pickled:
    non_picklable = MyClass.doit
    # This can be pickled:
    picklable = PicklableStaticMethod(MyClass.doit, MyClass)
    """
    def __init__(self, func, parent_class):
        self.func_name = func.func_name
        self.parent_class = parent_class

    def __call__(self, *args, **kwargs):
        func = getattr(self.parent_class, self.func_name)
        return func(*args, **kwargs)
I am wondering though, is there a better - more standard - way to pickle such an object?
I do not want to make changes to the global pickle process (using copy_reg, for example), but the following pattern would be great:
class MyClass(object):
    @picklable_staticmethod
    def foo():
        print "done."
My attempts at this were unsuccessful, specifically because I could not extract the owner class from the foo function. I was even willing to settle for explicit specification (such as @picklable_staticmethod(MyClass)), but I don't know of any way to refer to the MyClass class right where it's being defined.
Any ideas would be great!
Yonatan
This seems to work.
class PickleableStaticMethod(object):
    def __init__(self, fn, cls=None):
        self.cls = cls
        self.fn = fn

    def __call__(self, *args, **kwargs):
        return self.fn(*args, **kwargs)

    def __get__(self, obj, cls):
        return PickleableStaticMethod(self.fn, cls)

    def __getstate__(self):
        return (self.cls, self.fn.__name__)

    def __setstate__(self, state):
        self.cls, name = state
        self.fn = getattr(self.cls, name).fn
The trick is to snag the class when the static method is gotten from it.
Alternatives: You could use a metaclass to give all your static methods a .__parentclass__ attribute. Then you could subclass Pickler and give each subclass instance its own .dispatch table, which you could modify without affecting the global dispatch table (Pickler.dispatch). Pickling, unpickling, and calling the method might then be a little faster.
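In Python 3, a per-pickler dispatch_table achieves the same kind of local customization without subclassing Pickler or touching the global copyreg registry. A sketch under that assumption (the _reduce_psm/_rebuild_psm helper names are made up):

```python
import io
import pickle

class MyClass:
    @staticmethod
    def doit():
        return "done"

class PicklableStaticMethod:
    def __init__(self, func, parent_class):
        self.func_name = func.__name__
        self.parent_class = parent_class

    def __call__(self, *args, **kwargs):
        return getattr(self.parent_class, self.func_name)(*args, **kwargs)

def _rebuild_psm(parent_class, func_name):
    # module-level so pickle can find it by reference on load
    return PicklableStaticMethod(getattr(parent_class, func_name), parent_class)

def _reduce_psm(psm):
    # reduce to (callable, args) -- the standard copyreg-style protocol
    return (_rebuild_psm, (psm.parent_class, psm.func_name))

buf = io.BytesIO()
pickler = pickle.Pickler(buf)
# per-pickler table: the global dispatch is untouched
pickler.dispatch_table = {PicklableStaticMethod: _reduce_psm}
pickler.dump(PicklableStaticMethod(MyClass.doit, MyClass))

restored = pickle.loads(buf.getvalue())
print(restored())  # done
```

Only picklers that carry this dispatch_table use the custom reduction, which is exactly the "don't change the global pickle process" constraint from the question.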
EDIT: modified after Jason comment.
I think Python is correct in not letting you pickle a staticmethod object - just as it is impossible to pickle instance and class methods. Such an object would make very little sense outside of its context:
Check this: Descriptor Tutorial
import pickle

def dosomething(a, b):
    print a, b

class MyClass(object):
    dosomething = staticmethod(dosomething)

o = MyClass()
pickled = pickle.dumps(dosomething)
This works, and that's what should be done - define a function, pickle it, and use that function as a staticmethod in a certain class.
If you've got a use case for your need, please write it down and I'll be glad to discuss it.
The Scenario:
class A:
    def __init__(self, key, secret):
        self.key = key
        self.secret = secret

    def same_name_method(self):
        do_some_stuff()

    def method_a(self):
        pass

class B:
    def __init__(self, key, secret):
        self.key = key
        self.secret = secret

    def same_name_method(self):
        do_another_stuff()

    def method_b(self):
        pass

class C(A, B):
    def __init__(self, *args, **kwargs):
        # I want to init both class A's and B's key and secret
        ## I want to rename class A's and B's same-named method
        any_ideas()
    ...
What I Want:
I want an instance of class C to initialize both class A and class B, because they use different API keys.
And I want to rename class A's and B's same_name_method, so I won't be confused about which same_name_method is which.
What I Have Done:
For problem one, I have done this:
class C(A, B):
    def __init__(self, *args, **kwargs):
        A.__init__(self, a_api_key, a_api_secret)
        B.__init__(self, b_api_key, b_api_secret)
Comment: I know about super(), but for this situation I do not know how to use it.
For problem two, I add a __new__ for class C
def __new__(cls, *args, **kwargs):
    cls.platforms = []
    cls.rename_method = []
    for platform in cls.__bases__:
        # fetch platform module name
        module_name = platform.__module__.split('.')[0]
        cls.platforms.append(module_name)
        # rename attr
        for k, v in platform.__dict__.items():
            if not k.startswith('__'):
                setattr(cls, module_name + '_' + k, v)
                cls.rename_method.append(k)
    for i in cls.rename_method:
        delattr(cls, i)  ## this line will raise AttributeError!!
    return super().__new__(cls)
Comment: because I add the renamed methods as new class attributes, I need to delete the old method attributes, but I do not know how to make delattr work here. For now I just leave the old methods alone and do not delete them.
Question:
Any Suggestions?
So, you want some pretty advanced things, some complicated things, and you don't understand well how classes behave in Python.
So, for your first thing: initializing both classes, and every other method that should run in all classes: the correct solution is to make use of cooperative calls to super() methods.
A call to super() in Python returns a special proxy object that reflects all methods available in the next class along the Method Resolution Order (MRO).
So, if A.__init__ and B.__init__ have to be called, both methods should include a super().__init__() call - each will trigger the other's __init__ in the appropriate order, regardless of how they are combined as bases in subclasses. As object also has an __init__, the last super().__init__() call simply reaches it and is a no-op. If you have more methods in your classes that should run in all base classes, you'd rather build a proper common base class, so that the top-most super() call doesn't try to propagate to a non-existent method.
Otherwise, it is just:
class A:
    def __init__(self, akey, asecret, **kwargs):
        self.key = akey
        self.secret = asecret
        super().__init__(**kwargs)

class B:
    def __init__(self, bkey, bsecret, **kwargs):
        self.key = bkey
        self.secret = bsecret
        super().__init__(**kwargs)

class C(A, B):
    pass  # does not even need an explicit `__init__`
I think you can get the idea. Of course, the parameter names have to differ - ideally, when writing C you don't have to worry about parameter order - but when calling C you have to worry about supplying all mandatory parameters for C and its bases. If you can't rename the parameters in A or B to be distinct, you could instead rely on parameter order for the call, with each __init__ consuming two positional parameters - but that will require some extra care in inheritance order.
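A sketch of that positional variant, where each __init__ consumes its own two leading positional parameters and forwards the rest along the MRO (the a_key/b_key attribute names are made up for illustration):

```python
class A:
    def __init__(self, key, secret, *args):
        self.a_key, self.a_secret = key, secret
        super().__init__(*args)  # forward the remaining positionals down the MRO

class B:
    def __init__(self, key, secret, *args):
        self.b_key, self.b_secret = key, secret
        super().__init__(*args)

class C(A, B):
    pass

# MRO is C -> A -> B -> object, so A's __init__ takes the first
# two arguments and B's takes the next two:
c = C("a-key", "a-secret", "b-key", "b-secret")
print(c.a_key, c.b_key)  # a-key b-key
```

Swapping the base order to class C(B, A) would silently reassign which keys go where, which is the "extra care in inheritance order" mentioned above.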
So - up to this point, it is basic Python multiple-inheritance "howto", and should be pretty straightforward. Now comes your strange stuff.
As for the auto-renaming of methods: first things first -
are you quite sure you need inheritance? Maybe having your granular classes for each external service, and a registry and dispatch class that call the methods on the others by composition would be more sane. (I may come back to this later)
Are you aware that __new__ is called for each instantiation of the class, and all class-attribute mangling you are performing there happens at each new instance of your classes?
So, if the needed method renaming + shadowing has to take place at class-creation time, you can do that using the special method __init_subclass__, which exists since Python 3.6. It is a special class method that is called once for each derived class of the class it is defined on. So, just create a base class from which A and B themselves will inherit, and move a properly modified version of the thing you are putting in __new__ there. If you are not using Python 3.6, this should be done in the __new__ or __init__ of a metaclass, not in the __new__ of the class itself.
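A minimal sketch of that __init_subclass__ version (prefixing with the lowercased class name rather than the module name, purely so the example fits in one file):

```python
class Base:
    def __init_subclass__(cls, **kwargs):
        # implicit classmethod, runs once per subclass at class-creation time
        super().__init_subclass__(**kwargs)
        for base in cls.__bases__:
            prefix = base.__name__.lower()  # the original keys by module name
            for name, value in list(base.__dict__.items()):
                if not name.startswith("_") and callable(value):
                    setattr(cls, prefix + "_" + name, value)

class A(Base):
    def same_name_method(self):
        return "from A"

class B(Base):
    def same_name_method(self):
        return "from B"

class C(A, B):
    pass

c = C()
print(c.a_same_name_method())  # from A
print(c.b_same_name_method())  # from B
```

Unlike the __new__ version, this mangling runs exactly once when C is defined, not on every instantiation.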
Another approach would be a custom __getattribute__ method - this could be crafted to provide namespaces for the base classes. It would work only on instances, not on the classes themselves (but could be made to, again, using a metaclass). __getattribute__ can even hide the same-name methods.
class Base:
    @classmethod
    def _get_base_modules(cls):
        result = {}
        for base in cls.__bases__:
            module_name = base.__module__.split(".")[0]
            result[module_name] = base
        return result

    @classmethod
    def _proxy(self, module_name):
        class base:
            def __dir__(base_self):
                return dir(self._base_modules[module_name])

            def __getattr__(base_self, attr):
                original_value = self._base_modules[module_name].__dict__[attr]
                if hasattr(original_value, "__get__"):
                    original_value = original_value.__get__(self, self.__class__)
                return original_value

        base.__name__ = module_name
        return base()

    def __init_subclass__(cls):
        cls._base_modules = cls._get_base_modules()
        cls._shadowed = {
            name
            for module_class in cls._base_modules.values()
            for name in module_class.__dict__
            if not name.startswith("_")
        }

    def __getattribute__(self, attr):
        if attr.startswith("_"):
            return super().__getattribute__(attr)
        cls = self.__class__
        if attr in cls._shadowed:
            raise AttributeError(attr)
        if attr in cls._base_modules:
            return cls._proxy(attr)
        return super().__getattribute__(attr)

    def __dir__(self):
        return super().__dir__() + list(self._base_modules)

class A(Base):
    ...

class B(Base):
    ...

class C(A, B):
    ...
As you can see - this is some fun, but it starts getting really complicated - and all the hoop-jumping needed to retrieve the actual attributes from the superclasses after adding an artificial namespace seems to indicate your problem is not calling for inheritance after all, as I suggested above.
Since you have your small, functional, atomic classes for each "service", you could use a plain, simple, non-meta-at-all class that works as a registry for the various services - and you can even enhance it to call the equivalent method in several of the services it is handling with a single call:
class Services:
    def __init__(self):
        self.registry = {}

    def register(self, cls, key, secret):
        name = cls.__module__.split(".")[0]
        service = cls(key, secret)
        self.registry[name] = service

    def __getattr__(self, attr):
        if attr in self.registry:
            return self.registry[attr]
        raise AttributeError(attr)
I'm running into a problem with Python 3.2. If a class decorates a method from the parent class and also has a destructor, then instances of that class are never garbage collected.
Here's some sample code that illustrates the problem:
def super_simple_decorator(func):
    def f(*args, **kwds):
        return func(*args, **kwds)
    return f

class Parent():
    def foo(self):
        pass

class Child(Parent):
    def __del__(self):
        print('In Child.__del__')

    def __init__(self):
        self.foo = super_simple_decorator(self.foo)

x = Child()
del x

import gc
_ = gc.collect()
print(gc.garbage)
If you are so inclined, you could also monkey-patch in a decorator at run-time and see the same thing:
class Garbage():
    def foo(self):
        pass

    def __del__(self):
        print('In Garbage.__del__')

g = Garbage()
g.foo = super_simple_decorator(g.foo)
del g
In each case, there is uncollected garbage, presumably because there is a bound reference to self in the decorated method.
Upgrading to Python 3.4 isn't really an option for me at this point, so I'm looking for a way to let objects like these get garbage collected.
It is not the decorator that causes this problem; it is the fact that you store a method on the instance it is bound to. The decorator is only the means here, not the actual cause.
Methods hold a reference to the instance in __self__, so by storing the method (wrapped in the decorator's closure) back onto self.foo you created a circular reference. Don't do that. Python 3.3 and earlier won't garbage collect circular references involving objects with __del__ methods.
Unwrap the method and store the original function:
self.foo = super_simple_decorator(self.foo.__func__)
foo will no longer be bound, however; methods are only bound when looked up on the class, not on the instance.
Or actually apply the decorator at the class level:
class Child(Parent):
    def __del__(self):
        print('In Child.__del__')

    foo = super_simple_decorator(Parent.foo)
If neither is an option, use a weak reference to track the instance, rather than reference the method, then rebind as needed:
import weakref

def super_simple_decorator(method):
    instance = weakref.ref(method.__self__)
    func = method.__func__
    def f(*args, **kwds):
        self = instance()  # can return None
        return func(self, *args, **kwds)
    return f
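Plugged into the original example, the weak reference breaks the cycle, so on CPython __del__ runs the moment the instance is dropped. A sketch (the deleted list is just a made-up flag standing in for the original print):

```python
import weakref

def super_simple_decorator(method):
    instance = weakref.ref(method.__self__)  # weak: no reference cycle
    func = method.__func__
    def f(*args, **kwds):
        self = instance()  # can return None once the instance is gone
        return func(self, *args, **kwds)
    return f

deleted = []

class Parent:
    def foo(self):
        return "foo"

class Child(Parent):
    def __init__(self):
        self.foo = super_simple_decorator(self.foo)

    def __del__(self):
        deleted.append("In Child.__del__")

x = Child()
print(x.foo())  # foo
del x           # refcount hits zero immediately: the closure holds only a weakref
print(deleted)  # ['In Child.__del__']
```

The immediate finalization relies on CPython's reference counting; on other implementations __del__ still runs, just not necessarily at the del statement.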
I am experimenting with metaclasses to generate a class with a custom special method - particularly, __call__. The generation of the class depends on the parameters the constructor was called with. I've run into a strange effect; a simplified example is below:

def trick(self, *args, **kwargs):
    print "Works!"
class Test1Factory(type):
    def __new__(mcls, name, bases, namespace):
        namespace['__call__'] = trick
        return type.__new__(mcls, name, bases, namespace)

class Test1(object):
    __metaclass__ = Test1Factory

    def __init__(self, value):
        self._value = value

t1 = Test1(1)
t1()  # "Works!"
It works, but it is not really useful, because there is no access to constructor arguments within __new__. type.__call__ should do the trick:
import types

class Test2Factory(type):
    def __call__(self, *args, **kwargs):
        obj = type.__call__(self, *args, **kwargs)
        setattr(obj, '__call__', types.MethodType(trick, obj, Test2))
        return obj

class Test2(object):
    __metaclass__ = Test2Factory

    def __init__(self, value):
        self._value = value

t2 = Test2(2)
t2.__call__()  # "Works!"
t2()  # TypeError: 'Test2' object is not callable
As far as I understand, instance() is similar to instance.__call__(), but that is not the case here. Using the __new__ static method of the class does the same. I have a workaround that does not use metaclasses at all, but I just want to understand the phenomenon. The Python version is 2.7.5.
The wrong assumption is in "instance() is similar to instance.__call__()": __call__ is not looked up on the instance, but on the instance's type. That is, the __call__ used is not that of instance, but that of instance.__class__, i.e. type(instance).
Any __call__ attribute defined on the instance alone may be accessed like any other attribute, but it will not be used when the instance is called as in instance(). That's part of Python's semantics.
Try to define a __call__ both on an instance and on its type, and see what you get.
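That experiment, written out:

```python
class T(object):
    def __call__(self):
        return "type"

t = T()
t.__call__ = lambda: "instance"  # shadowed only for plain attribute access

# calling t() uses type(t).__call__, skipping the instance attribute
print(t())           # type
# regular attribute lookup still finds the instance attribute first
print(t.__call__())  # instance
```

This is why setattr(obj, '__call__', ...) in the metaclass's __call__ has no effect on obj(): implicit special-method invocation bypasses the instance dictionary entirely.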
If I understand the question correctly, it has the same background as another question I asked, which got an answer (summarized, with demonstrations by experiments, in the question's post): "How does Python tell 'this is called as a function'?".
EDIT: I found this method decorator and was able to use it to individually wrap the methods (omitting __init__) of ClassA and ClassB. However, instead of manually wrapping individual methods, I'd like to just wrap the classes.
I've created my own logging class, MyLogger, which inherits from logging.Logger. In this class, (among other things) I have a FileHandler which prints the logger name in its output:
import logging

class MyLogger(logging.Logger):
    def __init__(self, name, path="output.log"):
        logging.Logger.__init__(self, name, logging.DEBUG)
        logpath = path
        fh = logging.FileHandler(logpath)
        fh.setLevel(logging.DEBUG)
        fh.setFormatter(logging.Formatter("%(name)s - %(message)s"))
        # stream handler omitted
        self.addHandler(fh)
I also have ClassA and ClassB, which both get the same instance of MyLogger:
class ClassA(object):
    def __init__(self, mylogger):
        self.log = mylogger

    def fn1(self):
        self.log.debug("message1 from ClassA fn1")
        self.fn2()
        b = ClassB(self.log)
        b.fn1()
        self.log.debug("message2 from ClassA fn1")

    def fn2(self):
        self.log.debug("message1 from ClassA fn2")

    # many more functions

class ClassB(object):
    def __init__(self, mylogger):
        self.log = mylogger

    def fn1(self):
        self.log.debug("message1 from ClassB fn1")

    # many more functions
Here's a simple "main" function:
print "inside main"
log = MyLogger("main")
a = ClassA(log)
a.fn1()
Because the MyLogger instance is being passed around, I'd like to ensure the log name (I'm just using the class name) is printed correctly by each function. So I'm attempting to decorate all methods of each class so that the previous log name is remembered, the log name is set to the name of the class, the method is run, and finally the log name is set back to what it previously was. I'm using the decorator/descriptor from here. For the sake of brevity, I will only post my changes to it. I renamed the decorator setlogger, added print statements inside each method of the descript class, and altered make_bound as follows:
def make_bound(self, instance):
    print "in __BOUND__"
    @functools.wraps(self.f)
    def wrapper(*args, **kwargs):
        '''This documentation will disappear :)'''
        prev = instance.log.name
        print "about to wrap %s.%s, prev = %s" % (instance.__class__.__name__, self.f.__name__, prev)
        ret = self.f(instance, *args, **kwargs)
        instance.log.name = prev
        print "done wrapping %s.%s, now = %s" % (instance.__class__.__name__, self.f.__name__, prev)
        return ret
    # This instance does not need the descriptor anymore,
    # let it find the wrapper directly next time:
    setattr(instance, self.f.__name__, wrapper)
    return wrapper
If I use the setlogger decorator/descriptor to wrap individual methods in ClassA and ClassB, it works fine. However, I'd like to just wrap the two classes. So here's my class decorator:
def setloggerforallmethods(cls):
    def decorate(*args, **kwargs):
        for name, m in inspect.getmembers(cls, inspect.ismethod):
            if name != "__init__":
                print "calling setattr on %s.%s" % (cls.__name__, name)
                setattr(cls, name, setlogger(m))
        return cls
    return decorate
If I wrap ClassA and ClassB with @setloggerforallmethods and run the main function, here's the output:
inside main
calling setattr on ClassA.fn1
in __INIT__: f = fn1
calling setattr on ClassA.fn2
in __INIT__: f = fn2
in __GET__
in __UNBOUND__
Traceback (most recent call last):
File "/ws/maleva-rcd/yacht/classa.py", line 23, in <module>
a.fn1()
File "/ws/maleva-rcd/yacht/yachtlogger.py", line 34, in wrapper
self.f.__name__)
ValueError: zero length field name in format
I don't understand why fn1 is unbound at this point. Isn't it bound to a, as in a.fn1()?
I think you're trying to solve the wrong problem in the wrong way. But I can explain why your code isn't doing what you're trying to make it do.
First, in your decorator, you do this:
for name, fn in inspect.getmembers(cls, inspect.ismethod):
    if name != "__init__":
        print "calling setlogger on %s" % cls.__name__ + "." + name
        fn = setlogger(fn)
That has no effect. For each bound method fn, you create a wrapper function, then rebind the local variable fn to that function. That has no more effect than doing this:
def foo(a):
    a = 3

i = 0
foo(i)
If you want to set an attribute on the class, you have to set an attribute on the class, like this:
setattr(cls, name, setlogger(fn))
Now your wrapper will get called.
Next, cls.log is a class attribute named log—that is, an attribute on the class itself, which is shared by all instances of that class. But all of the code within the classes uses instance attributes, where each instance has its own copy. That's what you get when you assign self.log in your __init__. So, there is no class attribute named log, meaning you'll just get this:
AttributeError: type object 'ClassA' has no attribute 'log'
You could of course create a class attribute… but that won't do any good. The instance attribute of the same name will just shadow it.
You need to access the instance attribute inside inner, which means you need a self to access it from. And you obviously don't have self inside setlogger. But think about what you're doing: you're wrapping a method with another method. Methods get self as their first argument. In fact, if you modify inner to print out its args, you'll see that the first one is always something like <__main__.ClassA object at 0x12345678>. So:
def inner(self, *args, **kwargs):
    prevname = self.log.name
    self.log.name = cls.__name__
    ret = func(self, *args, **kwargs)  # don't forget to forward self
    self.log.name = prevname
    return ret
But if any of these wrapped methods ever raises an exception, they'll leave the name in the wrong state. So really, you need to either create a context manager for stashing and restoring the value, or just a try/finally. Which also happens to make the wrapper a little easier to write:
def inner(self, *args, **kwargs):
    prevname = self.log.name
    self.log.name = cls.__name__
    try:
        return func(self, *args, **kwargs)
    finally:
        self.log.name = prevname
Finally, you need to remove the self.log.name = in each __init__ method. Otherwise, when you construct a B instance in the middle of A.fn1, you're changing the logger's name without going through the wrapper that restores the previous name.
Again, I don't think this is a good solution. But it will do what you're trying to do.
I still don't completely understand the problem you're trying to solve, but I think it's this:
Constructing a MyLogger takes two pieces of information: a name, and a path. You don't want every class to have to know that path. So, you figured you needed to share the MyLogger instance, because there's no other way around that. And then, because the MyLogger stores its name as an attribute, you had to hack up that attribute in wrappers around every method.
But there is a much simpler way around that: Make your classes take a "logger factory"—that is, a callable which constructs an appropriate logger for them—instead of a logger. The MyLogger class itself already is such a callable, since it takes a default value for path and you just use it. But let's pretend that weren't true, and you wanted to use some non-default path. Still easy; you just need to wrap it up:
class ClassA(object):
    def __init__(self, log_factory):
        self.log_factory = log_factory
        self.log = log_factory("ClassA")

    def fn1(self):
        # ...
        b = ClassB(self.log_factory)
        # ...

class ClassB(object):
    def __init__(self, log_factory):
        self.log_factory = log_factory
        self.log = log_factory("ClassB")

    # ...

# or just log_factory = functools.partial(MyLogger, path="output.log")
def log_factory(name):
    return MyLogger(name, "output.log")

a = ClassA(log_factory)
a.fn1()
You may notice that the __init__ method in both classes does the same thing. So, why not extract it into a mixin base class?
class LogUserMixin(object):
    def __init__(self, log_factory):
        self.log_factory = log_factory
        self.log = log_factory(self.__class__.__name__)
Now:
class ClassA(LogUserMixin):
    def fn1(self):
        # ...
When it's a ClassA being initialized, self.__class__.__name__ will be "ClassA", not "LogUserMixin", so this does exactly what you want. It works even if your real classes already have base classes, or a hierarchy of subclasses, or if they do additional stuff in __init__, or take additional arguments; you just need to do a tiny bit more work in some of those cases.
I am trying to write a decorator to do logging:
def logger(myFunc):
    def new(*args, **keyargs):
        print 'Entering %s.%s' % (myFunc.im_class.__name__, myFunc.__name__)
        return myFunc(*args, **keyargs)
    return new

class C(object):
    @logger
    def f():
        pass

C().f()
I would like this to print:
Entering C.f
but instead I get this error message:
AttributeError: 'function' object has no attribute 'im_class'
Presumably this is something to do with the scope of 'myFunc' inside 'logger', but I've no idea what.
Claudiu's answer is correct, but you can also cheat by getting the class name off of the self argument. This will give misleading log statements in cases of inheritance, but will tell you the class of the object whose method is being called. For example:
from functools import wraps  # use this to preserve function signatures and docstrings

def logger(func):
    @wraps(func)
    def with_logging(*args, **kwargs):
        print "Entering %s.%s" % (args[0].__class__.__name__, func.__name__)
        return func(*args, **kwargs)
    return with_logging

class C(object):
    @logger
    def f(self):
        pass

C().f()
As I said, this won't work properly in cases where you've inherited a function from a parent class; in this case you might say
class B(C):
pass
b = B()
b.f()
and get the message Entering B.f where you actually want to get the message Entering C.f since that's the correct class. On the other hand, this might be acceptable, in which case I'd recommend this approach over Claudiu's suggestion.
Functions only become methods at runtime. That is, when you look up C.f you get an unbound method (and C.f.im_class is C). At the time your function is defined, it is just a plain function; it is not bound to any class. This unbound and disassociated function is what is decorated by logger.
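The timing is easy to observe in Python 3 as well (there im_class and unbound methods are gone, but binding still only happens on attribute access). A small probe, not part of the original answer:

```python
def probe(func):
    # at decoration time this is a plain function, not a method
    print(type(func).__name__)  # function
    return func

class C(object):
    @probe
    def f(self):
        return "f"

# the class dict stores a plain function...
print(type(C.__dict__['f']).__name__)  # function
# ...which only becomes a bound method via the descriptor protocol on access
print(type(C().f).__name__)            # method
```

So any class information a decorator wants must be obtained later, at lookup/call time (descriptors), afterwards (C.f = logger(C.f)), or from self at call time.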
self.__class__.__name__ will give you the name of the class, but you can also use descriptors to accomplish this in a somewhat more general way. This pattern is described in a blog post on Decorators and Descriptors, and an implementation of your logger decorator in particular would look like:
class logger(object):
    def __init__(self, func):
        self.func = func

    def __get__(self, obj, type=None):
        return self.__class__(self.func.__get__(obj, type))

    def __call__(self, *args, **kw):
        print 'Entering %s' % self.func
        return self.func(*args, **kw)

class C(object):
    @logger
    def f(self, x, y):
        return x + y

C().f(1, 2)
# => Entering <bound method C.f of <__main__.C object at 0x...>>
Obviously the output can be improved (by using, for example, getattr(self.func, 'im_class', None)), but this general pattern will work for both methods and functions. However it will not work for old-style classes (but just don't use those ;)
Ideas proposed here are excellent, but have some disadvantages:
inspect.getouterframes and args[0].__class__.__name__ are not suitable for plain functions and static methods.
__get__ must be in a class, which is rejected by @wraps.
@wraps itself should hide decoration traces better.
So, I've combined some ideas from this page, links, docs and my own head,
and finally found a solution, that lacks all three disadvantages above.
As a result, method_decorator:
Knows the class the decorated method is bound to.
Hides decorator traces by answering to system attributes more correctly than functools.wraps() does.
Is covered with unit tests for bound and unbound instance methods, class methods, static methods, and plain functions.
Usage:
pip install method_decorator

from method_decorator import method_decorator

class my_decorator(method_decorator):
    # ...
See full unit-tests for usage details.
And here is just the code of the method_decorator class:
class method_decorator(object):
    def __init__(self, func, obj=None, cls=None, method_type='function'):
        # These defaults are OK for plain functions and will be changed by
        # __get__() for methods once a method is dot-referenced.
        self.func, self.obj, self.cls, self.method_type = func, obj, cls, method_type

    def __get__(self, obj=None, cls=None):
        # It is executed when the decorated func is referenced as a method:
        # cls.func or obj.func.
        if self.obj == obj and self.cls == cls:
            # Use the same instance that was already processed by a previous
            # call to this __get__().
            return self
        method_type = (
            'staticmethod' if isinstance(self.func, staticmethod) else
            'classmethod' if isinstance(self.func, classmethod) else
            'instancemethod'
            # No branch for a plain function - the correct method_type for it
            # is already set in the __init__() defaults.
        )
        # Use a specialized method_decorator (or descendant) instance; don't
        # change the current instance's attributes - that leads to conflicts.
        return object.__getattribute__(self, '__class__')(
            # Use the bound or unbound method with this underlying func.
            self.func.__get__(obj, cls), obj, cls, method_type)

    def __call__(self, *args, **kwargs):
        return self.func(*args, **kwargs)

    def __getattribute__(self, attr_name):  # Hiding traces of decoration.
        if attr_name in ('__init__', '__get__', '__call__', '__getattribute__',
                         'func', 'obj', 'cls', 'method_type'):
            # Our known names. '__class__' is not included because it is used
            # only with explicit object.__getattribute__().
            return object.__getattribute__(self, attr_name)  # Stopping recursion.
        # All other attr_names, including those auto-defined by the system, are
        # searched in the decorated self.func, e.g.: __module__, __class__,
        # __name__, __doc__, im_*, func_*, etc.
        # Raises the correct AttributeError if the name is not found in self.func.
        return getattr(self.func, attr_name)

    def __repr__(self):  # Special case: __repr__ ignores __getattribute__.
        return self.func.__repr__()
It seems that while the class is being created, Python creates regular function objects. They only get turned into unbound method objects afterwards. Knowing that, this is the only way I could find to do what you want:
def logger(myFunc):
    def new(*args, **keyargs):
        print 'Entering %s.%s' % (myFunc.im_class.__name__, myFunc.__name__)
        return myFunc(*args, **keyargs)
    return new

class C(object):
    def f(self):
        pass

C.f = logger(C.f)
C().f()
This outputs the desired result.
If you want to wrap all the methods in a class, then you probably want to create a wrapClass function, which you could then use like this:
C = wrapClass(C)
Class functions should always take self as their first argument, so you can use that instead of im_class.
def logger(myFunc):
    def new(self, *args, **keyargs):
        print 'Entering %s.%s' % (self.__class__.__name__, myFunc.__name__)
        return myFunc(self, *args, **keyargs)
    return new

class C(object):
    @logger
    def f(self):
        pass

C().f()
At first I wanted to use self.__name__, but that doesn't work because the instance has no name. You must use self.__class__.__name__ to get the name of the class.
I found another solution to a very similar problem using the inspect library. When the decorator is called, even though the function is not yet bound to the class, you can inspect the stack and discover which class is calling the decorator. You can at least get the string name of the class, if that is all you need (probably can't reference it yet since it is being created). Then you do not need to call anything after the class has been created.
import inspect

def logger(myFunc):
    classname = inspect.getouterframes(inspect.currentframe())[1][3]
    def new(*args, **keyargs):
        print 'Entering %s.%s' % (classname, myFunc.__name__)
        return myFunc(*args, **keyargs)
    return new

class C(object):
    @logger
    def f(self):
        pass

C().f()
While this is not necessarily better than the others, it is the only way I can figure out to discover the class name of the future method during the call to the decorator. Note the warning in the inspect library documentation about not keeping references to frame objects around.
As shown in Asa Ayers' answer, you don't need to access the class object. It may be worth knowing that since Python 3.3, you can also use __qualname__, which gives you the fully qualified name:
>>> def logger(myFunc):
...     def new(*args, **keyargs):
...         print('Entering %s' % myFunc.__qualname__)
...         return myFunc(*args, **keyargs)
...     return new
...
>>> class C(object):
...     @logger
...     def f(self):
...         pass
...
>>> C().f()
Entering C.f
This has the added advantage of working also in the case of nested classes, as shown in this example taken from PEP 3155:
>>> class C:
...     def f(): pass
...     class D:
...         def g(): pass
...
>>> C.__qualname__
'C'
>>> C.f.__qualname__
'C.f'
>>> C.D.__qualname__
'C.D'
>>> C.D.g.__qualname__
'C.D.g'
Notice also that in Python 3 the im_class attribute is gone, so if you really wish to access the class in a decorator, you need another approach. The approach I currently use involves object.__set_name__ and is detailed in my answer to "Can a Python decorator of an instance method access the class?"
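A minimal sketch of that __set_name__ approach (the class_knowing name is made up; __set_name__ is called once on each descriptor in the class body, right after the owner class is created):

```python
class class_knowing(object):
    """Hypothetical decorator that learns its owner class via __set_name__."""

    def __init__(self, func):
        self.func = func

    def __set_name__(self, owner, name):
        # owner is the class being defined; swap in a plain wrapper function
        # so that normal method binding applies from here on
        func = self.func

        def wrapper(*args, **kwargs):
            print('Entering %s.%s' % (owner.__qualname__, name))
            return func(*args, **kwargs)

        setattr(owner, name, wrapper)

class C(object):
    @class_knowing
    def f(self):
        return "f"

print(C().f())  # prints "Entering C.f", then "f"
```

Unlike the frame-inspection trick, this receives the real class object, not just its name, and it needs no metaclass or post-hoc patching.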
You can also use new.instancemethod() (Python 2 only) to create an instance method (either bound or unbound) from a function.
Instead of injecting decorating code at definition time, when the function doesn't know its class, delay running that code until the function is accessed/called. A descriptor object facilitates injecting code late, at access/call time:
class decorated(object):
    def __init__(self, func, type_=None):
        self.func = func
        self.type = type_

    def __get__(self, obj, type_=None):
        return self.__class__(self.func.__get__(obj, type_), type_)

    def __call__(self, *args, **kwargs):
        name = '%s.%s' % (self.type.__name__, self.func.__name__)
        print('called %s with args=%s kwargs=%s' % (name, args, kwargs))
        return self.func(*args, **kwargs)

class Foo(object):
    @decorated
    def foo(self, a, b):
        pass
Now we can inspect the class both at access time (__get__) and at call time (__call__). This mechanism works for plain methods as well as static/class methods:
>>> Foo().foo(1, b=2)
called Foo.foo with args=(1,) kwargs={'b': 2}
Full example at: https://github.com/aurzenligl/study/blob/master/python-robotwrap/Example4.py