Python: warn if a method has not been called

Suppose you want to call a method foo on object bar, but while typing the invocation you intuitively treated foo as a property and typed bar.foo instead of bar.foo() (with parentheses). Both are syntactically correct, so no error is raised, but they are semantically very different. It has happened to me several times already (my experience with Ruby makes it even worse) and cost me dearly in long and confusing debugging sessions.
Is there a way to make the Python interpreter print a warning in such cases - whenever you access an attribute that is callable but you don't actually call it?
For the record - I thought about overriding __getattribute__, but it's messy and ultimately won't achieve the goal, since the invocation via () happens after __getattribute__ has already returned.

This can't be done in all cases, because sometimes you don't want to call the method - for example, you might want to store it as a callable to be used later, as in callback = object.method.
But you can use static analysis tools such as pylint or PyCharm (my recommendation), which warn you if you write a statement that looks pointless, e.g. a bare object.method without any assignment.
Furthermore, if you write x = obj.get_x but meant get_x(), then later, when you try to use x, a static analysis tool may be able to warn you (if you're lucky) that x is a method where an instance of X is expected.
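For example, here is the kind of mistake pylint catches out of the box; the bare attribute access below is reported as pointless-statement (W0104). The Engine class is purely a hypothetical illustration:
class Engine:
    def start(self):
        print("engine started")

engine = Engine()
engine.start    # pylint: pointless-statement (W0104) - accessed, never called
engine.start()  # the intended invocation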

It was quite challenging, but I think I got it done! My code isn't very complicated, but you need to be well aware of metaclasses.
Metaclass and wrapper (WarnIfNotCalled.py):
class Notifier:
    def __init__(self, name, obj, callback):
        self.callback = callback
        self.name = name
        self.obj = obj
        self.called = False
    def __call__(self, *args, **kwargs):
        self.callback(self.obj, *args, **kwargs)
        self.called = True
    def __del__(self):
        # If the Notifier is garbage-collected without ever being called,
        # the attribute was accessed but the method was never invoked.
        if not self.called:
            print("Warning! {} function hasn't been called!".format(self.name))

class WarnIfNotCalled(type):
    def __new__(cls, name, bases, dct):
        dct_func = {}
        # Move the plain methods out of the class dict, so that attribute
        # lookup falls through to the __getattr__ defined below.
        for attr, val in dct.copy().items():
            if attr.startswith('__') or not callable(val):
                continue
            dct_func[attr] = val
            del dct[attr]
        def __getattr__(self, attr):
            if attr in dct_func:
                return Notifier(attr, self, dct_func[attr])
            raise AttributeError(attr)
        dct['__getattr__'] = __getattr__
        return super(WarnIfNotCalled, cls).__new__(cls, name, bases, dct)
It's very easy to use - just specify the metaclass:
from WarnIfNotCalled import WarnIfNotCalled

class A(metaclass=WarnIfNotCalled):
    def foo(self):
        print("foo has been called")
    def bar(self, x):
        print("bar has been called and x =", x)
If you didn't forget to call these functions, everything works as usual:
a = A()
a.foo()
a.bar(5)
Output:
foo has been called
bar has been called and x = 5
But if you DID forget:
a = A()
a.foo
a.bar
You see the following:
Warning! foo function hasn't been called!
Warning! bar function hasn't been called!
This works because each attribute access returns a fresh Notifier; when it is garbage-collected without ever having been called (which happens immediately on CPython, thanks to reference counting), its __del__ prints the warning. Happy debugging!
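A side note: the same Notifier trick also works without a metaclass, by wrapping lookups in __getattribute__ directly - which addresses the concern in the question, since what __getattribute__ returns can itself be a wrapper. A sketch, reusing the Notifier class from above (B is a hypothetical example class; like the metaclass version, it relies on CPython's reference counting for a timely __del__):
class B:
    def foo(self):
        print("foo has been called")
    def __getattribute__(self, name):
        attr = super().__getattribute__(name)
        if callable(attr) and not name.startswith('__'):
            # Hand back a Notifier instead of the bound method; it warns
            # on garbage collection unless it was actually called.
            return Notifier(name, self, getattr(type(self), name))
        return attr

b = B()
b.foo()  # works as usual
b.foo    # prints: Warning! foo function hasn't been called!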

Related

Dynamically add function to class through decorator

I'm trying to find a way to dynamically add methods to a class through a decorator.
The decorator I have looks like:
from functools import wraps

def deco(target):
    def decorator(function):
        @wraps(function)
        def wrapper(self, *args, **kwargs):
            return function(*args, id=self.id, **kwargs)
        setattr(target, function.__name__, wrapper)
        return function
    return decorator
class A:
    pass

# in another module
@deco(A)
def compute(id: str):
    return do_compute(id)

# in another module
@deco(A)
def compute2(id: str):
    return do_compute2(id)

# in another module
a = A()
a.compute()   # this should work
a.compute2()  # this should work
My hope is that the decorator adds the compute() function to class A, so that any object of A has the compute() method.
However, in my test this only works if I explicitly import compute into the module where the object of A is created. I think I'm missing something obvious, but I don't know how to fix it. I appreciate any help!
I think this becomes quite a bit simpler using a decorator implemented as a class:
class deco:
    def __init__(self, cls):
        self.cls = cls
    def __call__(self, f):
        setattr(self.cls, f.__name__, f)
        return self.cls

class A:
    def __init__(self, val):
        self.val = val

@deco(A)
def compute(a_instance):
    print(a_instance.val)

A(1).compute()
A(2).compute()
outputs
1
2
But just because you can do it does not mean you should. This can become a debugging nightmare, and it will probably give a hard time to any static code analyser or linter (PyCharm, for example, "complains" with Unresolved attribute reference 'compute' for class 'A').
Why doesn't it work out of the box when we split it into different modules (more specifically, when compute is defined in another module)?
Assume the following:
a.py
print('importing deco and A')

class deco:
    def __init__(self, cls):
        self.cls = cls
    def __call__(self, f):
        setattr(self.cls, f.__name__, f)
        return self.cls

class A:
    def __init__(self, val):
        self.val = val
b.py
print('defining compute')
from a import A, deco

@deco(A)
def compute(a_instance):
    print(a_instance.val)
main.py
from a import A
print('running main')
A(1).compute()
A(2).compute()
If we execute main.py we get the following:
importing deco and A
running main
Traceback (most recent call last):
A(1).compute()
AttributeError: 'A' object has no attribute 'compute'
Something is missing: defining compute is never printed. Even worse, compute is never defined, let alone bound to A.
Why? Because nothing triggered the execution of b.py. Just because it sits there does not mean it gets executed.
We can force its execution by importing it. It feels kind of abusive to me, but it works, because importing a module has a side effect: it executes every piece of code that is not guarded by if __name__ == '__main__', much like importing a package executes its __init__.py file.
main.py
from a import A
import b
print('running main')
A(1).compute()
A(2).compute()
outputs
importing deco and A
defining compute
running main
1
2

Python mocking: check if a method of an object was accessed (not called)

class A():
    def tmp(self):
        print("hi")

def b(a):
    # Note that a.tmp() is not being called here. In the project I am working
    # on, a.tmp is passed as a lambda to a Spark executor, and since a.tmp is
    # invoked in an executor (a different process), I can't assert the call of tmp.
    a.tmp
I want to test whether a.tmp was ever accessed. How do I do that? Note that I still don't want to mock away the tmp() method - I'd prefer something along the lines of "python check if a method is called without mocking it away".
Not tested, and there's probably a much better way with Mock, but anyway:
def mygetattr(self, name):
    if name == "tmp":
        self._tmp_was_accessed = True
    return super(A, self).__getattribute__(name)

real_getattr = A.__getattribute__
A.__getattribute__ = mygetattr
try:
    a = A()
    a._tmp_was_accessed = False
    b(a)
finally:
    A.__getattribute__ = real_getattr

print(a._tmp_was_accessed)
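If you need this in more than one test, the same idea can be packaged as a context manager, so the real __getattribute__ is always restored even if the test body raises. Just a sketch of that refactoring - record_access and the flag-attribute name are hypothetical:
from contextlib import contextmanager

@contextmanager
def record_access(cls, attr_name, flag_name):
    # Temporarily replace cls.__getattribute__ with a spy that sets a flag
    # on the instance whenever attr_name is looked up.
    real_getattr = cls.__getattribute__
    def spy(self, name):
        if name == attr_name:
            setattr(self, flag_name, True)
        return real_getattr(self, name)
    cls.__getattribute__ = spy
    try:
        yield
    finally:
        cls.__getattribute__ = real_getattr

a = A()
a._tmp_was_accessed = False
with record_access(A, "tmp", "_tmp_was_accessed"):
    b(a)
print(a._tmp_was_accessed)  # True if b() touched a.tmp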

Override a function's sub-function from a decorator?

Let's consider this piece of code, where I would like to create bar dynamically with a decorator:
def foo():
    def bar():
        print "I am bar from foo"
    bar()

def baz():
    def bar():
        print "I am bar from baz"
    bar()
I thought I could create bar from the outside with a decorator:
def bar2():
    print "I am super bar from foo"

setattr(foo, 'bar', bar2)
But the result is not what I was expecting (I would like to get I am super bar from foo):
>>> foo()
I am bar from foo
Is it possible to override a sub-function on an existing function with a decorator?
The actual use case
I am writing a wrapper for a library, and to avoid boilerplate code I would like to simplify my work.
Each library function has a lib_ prefix and returns an error code. I would like to add the prefix to the current function name and handle the error code. This could be as simple as this:
def call(*args):
    fname = __libprefix__ + inspect.stack()[1][3]
    return_code = getattr(__lib__, fname)(*args)
    if return_code < 0:
        raise LibError(fname, return_code)

def foo():
    call()
The problem is that call might act differently in certain cases. Some library functions do not return an error code, so it would be easier to write it like this:
def foo():
    call(check_status=True)
Or, much better in my opinion (this is the point where I started thinking about decorators):
@LibFunc(check_status=True)
def foo():
    call()
In this last example I should declare call inside foo as a sub-function created dynamically by the decorator itself.
The idea was to use something like this:
class LibFunc(object):
    def __init__(self, **kwargs):
        self.kwargs = kwargs
    def __call__(self, original_func):
        decorator_self = self
        def wrappee(*args, **kwargs):
            def call(*args):
                fname = __libprefix__ + original_func.__name__
                return_code = getattr(__lib__, fname)(*args)
                if return_code < 0:
                    raise LibError(fname, return_code)
            print original_func
            print call
            # <<<< The part that does not work
            setattr(original_func, 'call', call)
            # <<<<
            original_func(*args, **kwargs)
        return wrappee
Initially I was tempted to invoke call inside the decorator itself, to minimize the writing:
@LibFunc()
def foo(): pass
Unfortunately, this is not an option, since other things should sometimes be done before and after the call:
@LibFunc()
def foo(a, b):
    value = c_float()
    call(a, pointer(value), b)
    return value.value
Another option that I thought about was to use SWIG, but again this is not an option, because I would need to rebuild the existing library with the SWIG wrapper functions.
And last but not least, I may take inspiration from SWIG typemaps and declare my wrapper like this:
@LibFunc(check_exit=True, map=('<a', '>c_float', '<c_int(b)'))
def foo(a, b): pass
This looks like the best solution to me, but this is another topic and another question...
Are you married to the idea of a decorator? Because if your goal is a bunch of module-level functions, each of which wraps somelib.lib_somefunctionname, I don't see why you need one.
Those module-level names don't have to be functions; they just have to be callable. They could be a bunch of class instances, as long as they have a __call__ method.
I used two different subclasses to determine how to treat the return value:
#!/usr/bin/env python3

import libtowrap  # Replace with the real library name.

class Wrapper(object):
    '''
    Parent class for all wrapped functions in libtowrap.
    '''
    def __init__(self, name):
        self.__name__ = str(name)
        self.wrapped_name = 'lib_' + self.__name__
        self.wrapped_func = getattr(libtowrap, self.wrapped_name)
        self.__doc__ = self.wrapped_func.__doc__
        return

class CheckedWrapper(Wrapper):
    '''
    Wraps functions in libtowrap that return an error code that must
    be checked. Negative return values indicate an error, and will
    raise a LibError. Successful calls return None.
    '''
    def __call__(self, *args, **kwargs):
        error_code = self.wrapped_func(*args, **kwargs)
        if error_code < 0:
            raise LibError(self.__name__, error_code)
        return

class UncheckedWrapper(Wrapper):
    '''
    Wraps functions in libtowrap that return a useful value, as
    opposed to an error code.
    '''
    def __call__(self, *args, **kwargs):
        return self.wrapped_func(*args, **kwargs)

strict = CheckedWrapper('strict')
negative_means_failure = CheckedWrapper('negative_means_failure')
whatever = UncheckedWrapper('whatever')
negative_is_ok = UncheckedWrapper('negative_is_ok')
Note that the wrapper "functions" are assigned while the module is being imported. They are in the top-level module namespace, not hidden by any if __name__ == '__main__' test.
They will behave like functions for most purposes, but there will be minor differences. For example, I gave each instance a __name__ that matches the name it is assigned to, not the lib_-prefixed name used in libtowrap... but I copied the original __doc__, which might refer to a prefixed name like lib_some_other_function. Also, testing them with isinstance will probably surprise people.
For more about decorators, and for many more annoying little discrepancies like the ones mentioned above, see Graham Dumpleton's half-hour lecture "Advanced Methods for Creating Decorators" (PyCon US 2014; slides). He is the author of the wrapt module (Python Package Index; GitHub; Read the Docs), which corrects all(?) of the usual decorator inconsistencies. It might solve your problem entirely (except for the old lib_-style names showing up in __doc__).
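For reference, the basic shape of a wrapt decorator looks like this (a minimal sketch of the library's documented pattern, not tied to the lib_ wrapper above):
import wrapt

@wrapt.decorator
def pass_through(wrapped, instance, args, kwargs):
    # wrapt hands the decorator the wrapped callable, the bound instance
    # (None for plain functions), and the call arguments.
    return wrapped(*args, **kwargs)

@pass_through
def add(a, b):
    return a + b

print(add(1, 2))  # 3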

Is it possible to get access to class-level attributes from a nose plugin?

Say I have the following test class:
# file tests.py
class MyTests(object):
    nose_use_this = True

    def test_something(self):
        assert 1
I can easily write a plugin that is run before that test:
class HelloWorld(Plugin):
    # snip
    def startTest(self, test):
        import ipdb; ipdb.set_trace()
The test is what I want it to be, but the type of test is nose.case.Test:
ipdb> str(test)
'tests.MyTests.test_something'
ipdb> type(test)
<class 'nose.case.Test'>
And I can't see anything that will allow me to get at the nose_use_this attribute that I defined in my TestCase-ish class.
EDIT:
I think probably the best way to do this is to get access to the context with a startContext/stopContext method pair, and to set attributes on the instance there:
class MyPlugin(Plugin):
    def __init__(self, *args, **kwargs):
        super(MyPlugin, self).__init__(*args, **kwargs)
        self.using_this = None

    def startContext(self, context):
        if hasattr(context, 'nose_use_this'):
            self.using_this = context.nose_use_this

    def stopContext(self, context):
        self.using_this = None

    def startTest(self, test):
        if self.using_this is not None:
            do_something()
To be really robust you'll probably need to maintain a stack of values, since startContext is called for (at least) modules and classes, but if the only place the attribute can be set is on the class, then I think this simple version should work. It seems to be working for me (nose 1.3.0 and plugin API version 0.10). A sketch of the stack-based variant is below.
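Here is what that stack-based variant might look like - just a sketch, with do_something() standing in for whatever the plugin actually does with the value:
from nose.plugins import Plugin

class MyPlugin(Plugin):
    def __init__(self, *args, **kwargs):
        super(MyPlugin, self).__init__(*args, **kwargs)
        self._stack = []

    def startContext(self, context):
        # Push whatever this context defines (None if nothing), so nested
        # module/class contexts unwind correctly in stopContext.
        self._stack.append(getattr(context, 'nose_use_this', None))

    def stopContext(self, context):
        self._stack.pop()

    def startTest(self, test):
        # The innermost context that set the attribute wins.
        using_this = next((v for v in reversed(self._stack) if v is not None), None)
        if using_this is not None:
            do_something()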
Original answer:
Oh, it's the _context() function on the test instance:
ipdb> test._context()
<class 'tests.MyTests'>
And it does indeed have the expected class-level attributes. I'm still open to a better way of doing this; the leading underscore makes me think that nose doesn't want me treating _context() as part of the API.

How to Write a valid Class Decorator in Python?

I just wrote a class decorator like the one below, trying to add debug support for every method in the target class:
import unittest
import inspect

def Debug(targetCls):
    for name, func in inspect.getmembers(targetCls, inspect.ismethod):
        def wrapper(*args, **kwargs):
            print ("Start debug support for %s.%s()" % (targetCls.__name__, name))
            result = func(*args, **kwargs)
            return result
        setattr(targetCls, name, wrapper)
    return targetCls

@Debug
class MyTestClass:
    def TestMethod1(self):
        print 'TestMethod1'
    def TestMethod2(self):
        print 'TestMethod2'

class Test(unittest.TestCase):
    def testName(self):
        for name, func in inspect.getmembers(MyTestClass, inspect.ismethod):
            print name, func
        print '~~~~~~~~~~~~~~~~~~~~~~~~~~'
        testCls = MyTestClass()
        testCls.TestMethod1()
        testCls.TestMethod2()

if __name__ == "__main__":
    #import sys;sys.argv = ['', 'Test.testName']
    unittest.main()
Run above code, the result is:
Finding files... done.
Importing test modules ... done.
TestMethod1 <unbound method MyTestClass.wrapper>
TestMethod2 <unbound method MyTestClass.wrapper>
~~~~~~~~~~~~~~~~~~~~~~~~~~
Start debug support for MyTestClass.TestMethod2()
TestMethod2
Start debug support for MyTestClass.TestMethod2()
TestMethod2
----------------------------------------------------------------------
Ran 1 test in 0.004s
OK
You can see that 'TestMethod2' is printed twice.
Is there a problem? Is my understanding of decorators in Python right?
Is there any workaround?
BTW, I don't want to add a decorator to every method in the class.
Consider this loop:
for name, func in inspect.getmembers(targetCls, inspect.ismethod):
    def wrapper(*args, **kwargs):
        print ("Start debug support for %s.%s()" % (targetCls.__name__, name))
When wrapper is eventually called, it looks up the value of name. Not finding it in locals(), it looks for it (and finds it) in the enclosing scope of the for-loop. But by then the for-loop has ended, and name refers to the last value it took in the loop, i.e. TestMethod2.
So whichever wrapper is called, name evaluates to TestMethod2.
The solution is to create an extended scope in which name is bound to the right value. That can be done with a closure that uses default argument values: default argument values are evaluated, and fixed, at definition time, and bound to the parameters of the same name.
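The late-binding behaviour is easy to reproduce in isolation, which may make the fix easier to follow (a standalone sketch, unrelated to the decorator itself):
fns = []
for i in range(3):
    fns.append(lambda: i)
print([f() for f in fns])   # [2, 2, 2] - every closure sees the final i

fns = [lambda i=i: i for i in range(3)]
print([f() for f in fns])   # [0, 1, 2] - the default argument binds i per iteration

Applied to the decorator, the fix looks like this: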
def Debug(targetCls):
    for name, func in inspect.getmembers(targetCls, inspect.ismethod):
        def closure(name=name, func=func):
            def wrapper(*args, **kwargs):
                print ("Start debug support for %s.%s()" % (targetCls.__name__, name))
                result = func(*args, **kwargs)
                return result
            return wrapper
        setattr(targetCls, name, closure())
    return targetCls
In the comments eryksun suggests an even better solution:
def Debug(targetCls):
    def closure(name, func):
        def wrapper(*args, **kwargs):
            print ("Start debug support for %s.%s()" % (targetCls.__name__, name))
            result = func(*args, **kwargs)
            return result
        return wrapper
    for name, func in inspect.getmembers(targetCls, inspect.ismethod):
        setattr(targetCls, name, closure(name, func))
    return targetCls
Now closure only has to be defined once. Each call to closure(name, func) creates its own function scope, with the distinct values of name and func bound correctly.
The problem isn't writing a valid class decorator as such; the class is obviously being decorated, nothing raises an exception, and execution reaches the code you intended to add to the class. So you need to be looking for a bug in your decorator, not asking whether you're managing to write a valid decorator at all.
In this case, the problem is with closures. In your Debug decorator, you loop over name and func, and for each loop iteration you define a function wrapper, which is a closure with access to the loop variables. The problem is that as soon as the next loop iteration starts, the things referred to by the loop variables have changed. But you only ever call these wrapper functions after the entire loop has finished, so every decorated method ends up calling out to the last values from the loop: in this case, TestMethod2.
What I would do here is write a method-level decorator, and then, since you don't want to explicitly decorate every method, a class decorator that goes through all the methods and passes each one to the method decorator. This works because you're no longer giving the wrapper access to the loop variable through a closure; instead you pass a reference to the thing the loop variable currently refers to into a function (the decorator function, which constructs and returns a wrapper). Once that's done, rebinding the loop variable on the next iteration doesn't affect the wrapper.
This is a very common problem. You think wrapper is a closure that captures the current func, but that is not the case. If you don't pass the current value of func into the wrapper's scope, it is only looked up after the loop, so you get the last value.
You can do this:
def Debug(targetCls):
    def wrap(name, func):  # bind the current name and func
        def wrapper(*args, **kwargs):
            print ("Start debug support for %s.%s()" % (targetCls.__name__, name))
            result = func(*args, **kwargs)
            return result
        return wrapper
    for name, func in inspect.getmembers(targetCls, inspect.ismethod):
        setattr(targetCls, name, wrap(name, func))
    return targetCls
