My issue is that I am using a metaclass to wrap certain class methods in a timer for logging purposes.
For example:
import sys
import time

class MyMeta(type):

    @staticmethod
    def time_method(method):
        def __wrapper(self, *args, **kwargs):
            start = time.time()
            result = method(self, *args, **kwargs)
            finish = time.time()
            sys.stdout.write('instancemethod %s took %0.3f s.\n' % (
                method.__name__, (finish - start)))
            return result
        return __wrapper

    def __new__(cls, name, bases, attrs):
        for attr in ['__init__', 'run']:
            if attr not in attrs:
                continue
            attrs[attr] = cls.time_method(attrs[attr])
        return super(MyMeta, cls).__new__(cls, name, bases, attrs)
The problem I'm having is that my wrapper runs for every '__init__', even though I really only want it for the class I am instantiating. The same goes for any method I want to time: I don't want the timing to run on any inherited methods, unless they aren't overridden (an inherited method that is not overridden should still be timed).
class MyClass0(object):
    __metaclass__ = MyMeta

    def __init__(self):
        pass

    def run(self):
        sys.stdout.write('running')
        return True

class MyClass1(MyClass0):
    def __init__(self):          # I want this timed
        MyClass0.__init__(self)  # But not this.

    ''' I need the inherited 'run' to be timed. '''
I've tried a few things but so far I've had no success.
Guard the timing code with an attribute. That way, only the outermost decorated method on an object will actually get timed.
@staticmethod
def time_method(method):
    def __wrapper(self, *args, **kwargs):
        if hasattr(self, '_being_timed'):
            # We're being timed already; just run the method
            return method(self, *args, **kwargs)
        else:
            # Not timed yet; run the timing code
            self._being_timed = True  # remember we're being timed
            try:
                start = time.time()
                result = method(self, *args, **kwargs)
                finish = time.time()
                sys.stdout.write('instancemethod %s took %0.3f s.\n' % (
                    method.__name__, (finish - start)))
                return result
            finally:
                # Done timing, reset to original state
                del self._being_timed
    return __wrapper
Timing only the outermost method is slightly different than “not timing inherited methods unless they aren't being overridden”, but I believe it solves your problem.
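For instance, a quick sanity check (my own illustration, assuming the MyMeta metaclass above uses this guarded time_method):

obj = MyClass1()   # prints one line: instancemethod __init__ took ... s.
                   # the inner MyClass0.__init__ sees _being_timed and is skipped
obj.run()          # prints: running, then: instancemethod run took ... s.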
I'm not sure this has anything to do with multiple inheritance.
The trouble is that any subclass of MyClass0 has to be an instance of the same metaclass, which means MyClass1 gets created with MyMeta.__new__, so its methods get processed and wrapped in the timing code.
Effectively, what you need is for MyClass0.__init__ to somehow return something different in the two following circumstances:
- When called directly (instantiating MyClass0 directly, or when MyClass1 doesn't override it), it needs to return the timed method.
- When called within a subclass definition, it needs to return the original untimed method.
This is impossible, since MyClass0.__init__ doesn't know why it's being called.
I see three options:
1. Make the metaclass more complex. It can check the base classes to see whether they're already instances of the metaclass; if so, it can make a new copy of them that removes the timed wrapper from the methods that are present in the class being constructed. You don't want to mutate the base classes directly, as that would affect all uses of them (including direct instantiation, and subclassing by other classes that override different methods). A downside is that this really screws up the isinstance relationships: unless you construct the slight variations on the base classes by creating new subclasses of them (ugh!) and cache all the variations so you never construct duplicates (ugh!), you void the natural assumption that two classes share a base class (they may only share a template from which two completely independent base classes were generated).
2. Make the timing code more complex. Have a start_timing and a stop_timing method; if start_timing is called when the method is already being timed, just increment a counter, and have stop_timing decrement that counter and only stop timing when it hits zero. Be careful of timed methods that call other timed methods; you'll need separate counters per method name.
3. Give up on metaclasses and just use a decorator on the methods you want timed explicitly, with some way of getting at the undecorated method so that overriding definitions can call it; a sketch follows below. This will involve a couple of lines of boilerplate per use, which will quite possibly add up to fewer lines of code than either of the other two options.
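As a rough sketch of option 3 (my own illustration; the timed decorator and the __wrapped__ attribute are names I chose, not anything from the question):

import sys
import time
from functools import wraps

def timed(method):
    @wraps(method)
    def wrapper(self, *args, **kwargs):
        start = time.time()
        result = method(self, *args, **kwargs)
        sys.stdout.write('instancemethod %s took %0.3f s.\n' % (
            method.__name__, time.time() - start))
        return result
    wrapper.__wrapped__ = method  # expose the undecorated method
    return wrapper

class MyClass0(object):
    @timed
    def __init__(self):
        pass

class MyClass1(MyClass0):
    @timed
    def __init__(self):
        # call the untimed original so only the outer __init__ is timed
        MyClass0.__init__.__wrapped__(self)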
Related
The case is such that I have an abstract class and a few child classes implementing it.
from abc import ABCMeta, abstractmethod

class Parent(metaclass=ABCMeta):
    @abstractmethod
    def first_method(self, *args, **kwargs):
        raise NotImplementedError()

    @abstractmethod
    def second_method(self, *args, **kwargs):
        raise NotImplementedError()

class Child(Parent):
    def first_method(self, *args, **kwargs):
        print('First method of the child class called!')

    def second_method(self, *args, **kwargs):
        print('Second method of the child class called!')
My goal is to make some kind of decorator which will be used on methods of any child of the Parent class. I need this because every method does some kind of preparation before actually doing its work, and this preparation is absolutely the same in all methods of all children of the Parent class. Like:
class Child(Parent):
    def first_method(self, *args, **kwargs):
        print('Preparation!')
        print('First method of the child class called!')

    def second_method(self, *args, **kwargs):
        print('Preparation!')
        print('Second method of the child class called!')
The first thing that came to my mind is to use the Parent class's method implementation: just replace "raise NotImplementedError()" with some functionality, and then in child classes call, for example, super().first_method(*args, **kwargs) at the beginning of each method. That is fine, but I also want to return some data from the Parent method, and it would look weird if the parent method and the child method returned different things. Not to mention that I would probably also want to do some post-processing after the method, in which case I would need two different functions: one for the beginning and one for after the work is done.
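(For reference, that first idea would look something like this sketch, my own illustration:)

from abc import ABCMeta

class Parent(metaclass=ABCMeta):
    def first_method(self, *args, **kwargs):
        print('Preparation!')  # the shared pre-work lives in the parent

class Child(Parent):
    def first_method(self, *args, **kwargs):
        super().first_method(*args, **kwargs)
        print('First method of the child class called!')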
The next thing I came up with is making a metaclass.
Just implement all the decoration of methods in the new metaclass during class creation, and pass the newly generated data used in the child methods to them in kwargs.
This is the closest solution to my goal, but it still feels wrong, because it is not explicit that some kwargs will be passed to the child methods; anyone new to this code would need to do some research to understand how it works. I feel like I am overengineering.
So the question: is there any pattern or something along these lines to implement this functionality?
Perhaps you can advise something better for my case?
Thank you a lot in advance!
So, existing patterns apart (I don't know whether this has a specific name), what you need as a "pattern" is the use of "slots": you document specially named methods that will be called as part of the execution of another method. That other method performs its setup code, checks whether the slotted method (usually identified by name) exists, and calls it with a plain method call, which runs the most specialized version of it, even if the method that calls the slots is in the base class of a big class-inheritance hierarchy.
One plain example of this pattern is the way Python instantiates objects: what one actually invokes when calling a class with function-call syntax (MyClass()) is the class's class's (its metaclass's) __call__ method, usually type.__call__. In Python's code for type.__call__, the class's __new__ method is called, then the class's __init__ method, and finally the value returned by the first call, to __new__, is returned. A custom metaclass can override __call__ to run whatever code it wants before, between, or after these two calls.
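A minimal sketch of that hook (my own illustration, not code from the question):

class Meta(type):
    def __call__(cls, *args, **kwargs):
        print("before __new__")
        instance = super().__call__(*args, **kwargs)  # type.__call__ runs __new__, then __init__
        print("after __init__")
        return instance

class MyClass(metaclass=Meta):
    def __init__(self):
        print("in __init__")

MyClass()  # prints: before __new__ / in __init__ / after __init__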
So, if this were not Python, all you'd need is to spec this out and document that these methods should not be called directly, but rather through an "entry point" method, which could simply carry an "ep_" prefix. These would have to be fixed and hardcoded in a base class, and you'd need one for each method you want to add prefix/postfix code to.
from abc import ABC, abstractmethod

class Base(ABC):
    def ep_first_method(self, *args, **kw):
        # prefix code...
        ret_val = self.first_method(*args, **kw)
        # postfix code...
        return ret_val

    @abstractmethod
    def first_method(self, *args, **kw):
        pass

class Child(Base):
    def first_method(self, *args, **kw):
        ...
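Under this sketch, callers go through the entry point only:

child = Child()
child.ep_first_method()   # prefix code -> Child.first_method -> postfix code
# calling child.first_method() directly would bypass the prefix/postfix code,
# which is why the naming convention has to be documented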
This being Python, it is easier to add some more magic to avoid code repetition and keep things concise.
One possibility is to have a special class that, upon detecting a method in a child class that should be called as a slot of a wrapper method like the one above, automatically renames that method. This way the entry-point methods can have the same name as the child methods; better yet, a simple decorator can mark the methods that are meant to be "entry points", and inheritance even works for them.
Basically, when building a new class we check all methods: if any of them has a counterpart in the calling hierarchy that is marked as an entry point, the renaming takes place.
It is more practical if every entry-point method takes, as its second parameter (the first being self), a reference to the slotted method to be called.
After some fiddling, the good news is that a custom metaclass is not needed: the __init_subclass__ special method in a base class is enough to enable the decorator.
The bad news: due to re-entry into the entry point triggered by potential calls to super() in the final methods, a somewhat intricate heuristic is needed to call the original method in the intermediate classes. I also put in some multi-threading protections, although they are not 100% bullet-proof.
import sys
import threading
from functools import wraps

def entrypoint(func):
    name = func.__name__
    slotted_name = f"_slotted_{name}"
    recursion_control = threading.local()
    lock = threading.Lock()

    @wraps(func)
    def wrapper(self, *args, **kw):
        slotted_method = getattr(self, slotted_name, None)
        if slotted_method is None:
            # this check in place of abstractmethod errors. It is only raised when the method is called, though
            raise TypeError(f"Child class {type(self).__name__} did not implement mandatory method {func.__name__}")
        # recursion control logic: also handle when the slotted method calls "super",
        # not just straightforward recursion
        with lock:
            # initialize the depth lazily so each thread gets its own counter
            recursion_control.depth = getattr(recursion_control, "depth", 0) + 1
            normal_course = recursion_control.depth == 1
        try:
            if normal_course:
                # runs through entrypoint
                result = func(self, slotted_method, *args, **kw)
            else:
                # we are within a "super()" call - the only way to get the renamed method
                # in the correct subclass is to recreate the caller's super, by fetching its
                # implicit "__class__" variable.
                try:
                    callee_super = super(sys._getframe(1).f_locals["__class__"], self)
                except KeyError:
                    # caller did not make a "super" call; it likely is a recursive function "for real"
                    callee_super = type(self)
                slotted_method = getattr(callee_super, slotted_name)
                result = slotted_method(*args, **kw)
        finally:
            recursion_control.depth -= 1
        return result

    wrapper.__entrypoint__ = True
    return wrapper
class SlottedBase:
    def __init_subclass__(cls, *args, **kw):
        super().__init_subclass__(*args, **kw)
        for name, child_method in tuple(cls.__dict__.items()):
            if not callable(child_method) or getattr(child_method, "__entrypoint__", None):
                continue
            for ancestor_cls in cls.__mro__[1:]:
                parent_method = getattr(ancestor_cls, name, None)
                if parent_method is None:
                    break
                if not getattr(parent_method, "__entrypoint__", False):
                    continue
                # if the code reaches here, this is a method that at some point
                # up the hierarchy has been marked as having an entrypoint
                # method: we rename it.
                delattr(cls, name)
                setattr(cls, f"_slotted_{name}", child_method)
                break
        # the changes above are in place; no need to return anything
class Parent(SlottedBase):
    @entrypoint
    def meth1(self, slotted, a, b):
        print(f"at meth 1 entry, with {a=} and {b=}")
        result = slotted(a, b)
        print("exiting meth1\n")
        return result

class Child(Parent):
    def meth1(self, a, b):
        print(f"at meth 1 on Child, with {a=} and {b=}")

class GrandChild(Child):
    def meth1(self, a, b):
        print(f"at meth 1 on grandchild, with {a=} and {b=}")
        super().meth1(a, b)

class GrandGrandChild(GrandChild):
    def meth1(self, a, b):
        print(f"at meth 1 on grandgrandchild, with {a=} and {b=}")
        super().meth1(a, b)

c = Child()
c.meth1(2, 3)

d = GrandChild()
d.meth1(2, 3)

e = GrandGrandChild()
e.meth1(2, 3)
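If I traced the recursion control correctly, the three calls should print something like:

at meth 1 entry, with a=2 and b=3
at meth 1 on Child, with a=2 and b=3
exiting meth1

at meth 1 entry, with a=2 and b=3
at meth 1 on grandchild, with a=2 and b=3
at meth 1 on Child, with a=2 and b=3
exiting meth1

at meth 1 entry, with a=2 and b=3
at meth 1 on grandgrandchild, with a=2 and b=3
at meth 1 on grandchild, with a=2 and b=3
at meth 1 on Child, with a=2 and b=3
exiting meth1

That is, the prefix/postfix code in Parent.meth1 runs exactly once per outer call, however deep the super() chain goes.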
I have a class with a field spent_times. spent_times is a list, and all methods of this class write some information to it which is valuable for logging.
I also have a decorator which calculates the execution time of every function and writes it to spent_times.
This is the implementation of my decorator:
import logging
from timeit import default_timer as timer

def timing(message):
    def wrap(function):
        def called(*args, **kwargs):
            time_start = timer()
            result = function(*args, **kwargs)
            spent_time = round(timer() - time_start, 5)
            if not args:
                # plain function: hand the timing back to the caller
                return result, spent_time
            obj = args[0]
            if hasattr(obj, "spent_times"):
                # method: record the timing on the instance directly
                obj.spent_times.append("{}={:.5f}".format(message, spent_time))
                return result
            else:
                logging.warning('Decorator allows to set spent_time attribute!')
        return called
    return wrap
As you can see, the decorator checks whether the called function has a self argument.
If it does, then I can write the needed info into the spent_times list on the spot; if it does not, then the decorator returns the time spent on execution along with the function's result.
I am using this decorator in one single module, and the second case (no self found) applies to some other functions in that module which do not belong to the class where the spent_times list is defined, but which I execute inside my class, so I can do, for example, the following.
This is the declaration of the "outer" function:
def calc_users(requests, priority):
    # .....
And inside my class I execute it and update my spent_times list this way:
response, spent_time = calc_users(requests, priority)
self.class_obj.spent_times.append("user_calculation={:.5f}".format(spent_time))
which is not very nice, but it works at least.
Now I have moved a few functions of my class into a new module, and I would like to use the same timing decorator there.
Can someone help me implement this timing in the new module? I do not know what I can do to keep updating my spent_times list.
These two modules will run at the same time, and I cannot create an object of the class and pass it as an argument to the new module, because (as far as I understand it) there would then be two objects and spent_times would not be updated correctly.
Maybe there is a way to pass a reference to spent_times somehow, but I do not want to change the arguments of the functions in the new module, since I think that would break the separation of responsibilities (the decorator is responsible for logging, the function for its action).
So how can I improve the decorator, or how can I pass the spent_times list to the new module?
Any help will be greatly appreciated!
P.S.
Maybe make spent_times a global variable? (in the very worst case)
A global list seems fine, but you can also use a class and create a singleton by deleting the class after instantiation. This prevents anyone from creating another instance:
# mymodule.py
from timeit import default_timer as timer

class Timing(object):
    def __init__(self):
        self.spent_times = []

    def __call__(self, message):
        def wrap(function):
            def called(*args, **kwargs):
                time_start = timer()
                result = function(*args, **kwargs)
                spent_time = round(timer() - time_start, 5)
                self.spent_times.append("{}={:.5f}".format(message, spent_time))
                return result
            return called
        return wrap

timing = Timing()
del Timing  # prevent another instance
Now import in another module:
from mymodule import timing

@timing('Hello')
def add(a, b):
    return a + b
The special method __call__ makes an instance of a class behave like a function, i.e. it is callable with ().
The advantage is that you can use self.attr instead of a global variable.
The deletion of the class after instantiation prevents from creating another instance. This is called a singleton. Now all your timings end up in the same list no matter how often you use timing as a decorator.
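For example (a hypothetical check; the add function and the message are mine):

from mymodule import timing

@timing('add')
def add(a, b):
    return a + b

add(1, 2)
add(3, 4)
print(timing.spent_times)  # e.g. ['add=0.00000', 'add=0.00000'] - one shared list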
There are some interesting ways to run a method before every method in a class in questions such as Python: Do something for any method of a class?
However, that solution doesn't let us pass arguments.
There's a decorator solution in Catch "before/after function call" events for all functions in class, but I don't want to have to go back and decorate all my classes.
Is there a way to run a pre/post operation that's dependent on the arguments passed for every invocation of an object's method?
Example:
class Stuff(object):
    def do_stuff(self, stuff):
        print(stuff)

a = Stuff()
a.do_stuff('foobar')
"Pre operation for foobar"
"foobar"
"Post operation for foobar"
So I figured it out after a lot of experimentation.
Basically, in the metaclass's __new__ you can iterate through the class's namespace and swap out every method of the class being created for a new version that runs the before logic, the function itself, and the after logic. Here's a sample:
class TestMeta(type):
    def __new__(mcl, name, bases, nmspc):
        def replaced_fnc(fn):
            def new_test(*args, **kwargs):
                # do whatever for before function run
                result = fn(*args, **kwargs)
                # do whatever for after function run
                return result
            return new_test

        for i in nmspc:
            if callable(nmspc[i]):
                nmspc[i] = replaced_fnc(nmspc[i])
        return super(TestMeta, mcl).__new__(mcl, name, bases, nmspc)
Note that if you use this code as is, it will run the pre/post operations for __init__ and other built-in/dunder methods as well.
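If that's unwanted, a small guard in the loop skips dunder names (this filter is my own addition, not part of the original answer):

for i in nmspc:
    # only wrap ordinary methods, leaving __init__ and friends untouched
    if callable(nmspc[i]) and not i.startswith('__'):
        nmspc[i] = replaced_fnc(nmspc[i])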
I have the following code. Most of it may look awkward, confusing and/or circumstantial, but it is there to demonstrate the parts of the much larger code where I have a problem. Please read carefully:
# The following part is just to demonstrate the behavior AND CANNOT BE CHANGED UNDER ANY CIRCUMSTANCES
# Just define something so you can access something like derived.obj.foo(x)
class Basic(object):
    def foo(self, x=10):
        return x*x

class Derived(object):
    def info(self, x):
        return "Info of Derived: " + str(x)

    def set(self, obj):
        self.obj = obj
# The following piece of code might be changed, but I would rather not
class DeviceProxy(object):
    def __init__(self):
        # just to set up something that somewhat behaves as the real code in question
        self.proxy = Derived()
        self.proxy.set(Basic())

    # crucial part: I want any attributes forwarded to the proxy object here,
    # without knowing beforehand what the names will be
    def __getattr__(self, attr):
        return getattr(self.proxy, attr)
# ======================================
# This code is the only part I want to change to get things to work

# Original __getattr__ function
original = DeviceProxy.__getattr__

# wrapper for the __getattr__ function to log/print out any attribute/parameter/argument/...
def mygetattr(device, key):
    attr = original(device, key)
    if callable(attr):
        def wrapper(*args, **kw):
            print('%r called with %r and %r' % (attr, args, kw))
            return attr(*args, **kw)
        return wrapper
    else:
        print("not callable:", attr)
        return attr

DeviceProxy.__getattr__ = mygetattr

# make an instance of the DeviceProxy class and call the double-dotted function
dev = DeviceProxy()
print(dev.info(1))
print(dev.obj.foo(3))
What I want is to catch all method calls on DeviceProxy so I can print all arguments/parameters and so on. In the given example, this works great when calling info(1): all of the information is printed.
But when I call the double-dotted function dev.obj.foo(3), I only get the message that the attribute is not callable.
How can I modify the above code so I also get my information in the second case? Only the code below the === can be modified.
You have only a __getattr__ on dev, and you want, from within this __getattr__, to have access to foo when you do dev.obj.foo. This isn't possible. The attribute accesses are not a "dotted function" that is accessed as a whole; the sequence of attribute accesses (the dots) is evaluated one at a time, left to right. At the time you access dev.obj, there is no way to know that you will later access foo. The method dev.__getattr__ only knows what attributes you are accessing on dev, not what attributes of that result you may later access.
The only way to achieve what you want would be to also include some wrapping behavior in obj. You say you can't modify the Basic/Derived classes, so you can't do it that way. You could, in theory, have DeviceProxy.__getattr__ not return the actual value of the accessed attribute, but instead wrap that object in another proxy and return the proxy. However, that can get tricky and make your code more difficult to understand and debug, since you can wind up with tons of objects wrapped in thin proxies.
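For completeness, a rough sketch of that recursive-proxy idea (LoggingProxy is a name I made up; this is an illustration, not a drop-in solution):

class LoggingProxy(object):
    def __init__(self, target):
        self._target = target

    def __getattr__(self, key):
        attr = getattr(self._target, key)
        if callable(attr):
            def wrapper(*args, **kw):
                print('%r called with %r and %r' % (attr, args, kw))
                return attr(*args, **kw)
            return wrapper
        # wrap non-callables in another proxy so chained lookups
        # like dev.obj.foo(3) are intercepted at every step
        return LoggingProxy(attr)

proxy = Derived()
proxy.set(Basic())
dev = LoggingProxy(proxy)
print(dev.info(1))     # logged call
print(dev.obj.foo(3))  # dev.obj returns another proxy, so foo is logged too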
I'm writing a test suite for Firefox 5.1 and Selenium WebDriver v2 on OS X 10.6 with Python 2.7.
Everything is working fine except the creation of a singleton class, which should guarantee only one instance of Firefox:
from selenium import webdriver

def singleton(cls):
    instances = {}
    def getinstance():
        if cls not in instances:
            instances[cls] = cls()
        return instances[cls]
    return getinstance

@singleton
class Fire(object):
    def __init__(self):
        self.driver = webdriver.Firefox()

    def getdriver(self):
        return self.driver

    def close_(self):
        self.driver.close()

    def get(self, url):
        self.driver.get(url)
        return self.driver.page_source

f = Fire()
f.close_()
At this point, if I call f = Fire() again, nothing happens: no new instance is created.
My question is why I see that behavior, and how I do it right.
My second question, if I type:
isinstance(f, Fire)
I get this error:
TypeError: isinstance() arg 2 must be a class, type, or tuple of classes and types
This is strange to me; from my understanding it should return True.
A final question:
when I have a singleton class I should be able to do:
f = Fire()
f2 = Fire()
f2.get('http://www.google.com')
up to here it works, but if I then say f.close_(), I get:
URLError: urlopen error [Errno 61] Connection refused
I can't understand this.
Your decorator seems to work OK for me as far as creating a single instance of a class goes, so I don't see your issue #1. It isn't doing quite what you think it is: each time you use the decorator there's a fresh instances dictionary, and there's only ever one item in it, so there's no actual reason to use a dictionary there. You need a mutable container so you can modify it, but I'd use a list or, in Python 3, perhaps a nonlocal variable. Still, it does perform its intended function of making sure there's only one instance of the decorated class.
If you're asking why you can't create a new instance of the object after closing it, well, you didn't write any code to allow another instance to be created in that situation, and Python is incapable of guessing that you want that to happen. A singleton means there is only ever a single instance of the class. You have created that instance; you can't create another.
As for #2, your @singleton decorator returns a function, which instantiates (or returns a previously created instance of) the class. Therefore Fire is a function, not a class, once decorated, which is why your isinstance() doesn't work.
The most straightforward approach to singletons, in my opinion, is to put the smarts in a class rather than in a decorator, then inherit from that class. This even makes sense from an inheritance point of view, since a singleton is a kind of object.
class Singleton(object):
    _instance = None

    def __new__(cls, *args, **kwargs):
        if not cls._instance:
            # object.__new__ accepts no extra arguments in modern Python
            cls._instance = object.__new__(cls)
        return cls._instance

class Fire(Singleton):
    pass

f1 = Fire()
f2 = Fire()
f1 is f2              # True
isinstance(f1, Fire)  # True
If you still want to do it with a decorator, the simplest approach there would be to create an intermediate class in the decorator and return that:
def singleton(D):
    class C(D):
        _instance = None

        def __new__(cls, *args, **kwargs):
            if not cls._instance:
                # assumes D doesn't need the constructor arguments in __new__
                cls._instance = D.__new__(cls)
            return cls._instance

    C.__name__ = D.__name__
    return C

@singleton
class Fire(object):
    pass
You could inject the desired behavior into the existing class object, but this is, in my opinion, needlessly complex: it requires (in Python 2.x) creating a method wrapper, and you also have to deal yourself with the situation in which the class being decorated already has a __new__() method.
You might think that you could write a __del__() method to allow a new singleton to be created when there are no references to the existing instance. This won't work because there is always a class-internal reference to the instance (e.g., Fire._instance) so __del__() is never called. Once you have a singleton, it's there to stay. If you want a new singleton after you close the old one, you probably don't actually want a singleton but rather something else. A context manager might be a possibility.
A "singleton" that can be re-instantiated under certain circumstances would be, to me, really weird and unexpected behavior, and I would advise against it. Still, if that's what you really want, you could do self.__class__._instance = None in your close_() method. Or you could write a separate method to do this. It looks ugly, which is fitting because it is ugly. :-)
I think your third question also arises from the fact that you expect the singleton to somehow go away after you call close_() on it, when you have not programmed that behavior.
The issue is your use of that singleton function as a decorator: it doesn't behave like a proper class decorator, so using it like one leads to these surprises.
A decorator needs to actually return the decorated object, usually a function, but in your case it should be the class. Yours just returns a function. So obviously, when you try to use Fire in isinstance, it no longer refers to a class.