Python PyQt Decorated Method Errors when Triggered

I have used decorators in the past without any issue, but I'm stuck on this one. It seems that when my decorated function is called by a PyQt action, it fails. My theory is that PyQt is somehow wrapping the connected function internally and messing with the signature, or some such thing, but I can't find any evidence of this.
By connected I mean like this:
from PyQt4 import QtGui
# In my widget init...
self.action_my_method = QtGui.QAction("action_my_method", self)
self.action_my_method.triggered.connect(self.my_method)
Here is my simple decorator:
from functools import wraps

def simple_decorator(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        from inspect import getargspec
        print getargspec(fn)
        return fn(*args, **kwargs)
    return wrapper
When I use this to decorate a method that takes no arguments (def my_method(self)) and call it directly from my QWidget class, it works fine. When I use a menu to trigger the action, I get:
# Traceback (most recent call last):
# File "[...].py", line 88, in wrapper
# return fn(*args, **kwargs)
# TypeError: my_method() takes exactly 1 argument (2 given)
I'm seeing an arg spec:
ArgSpec(args=['self'], varargs=None, keywords=None, defaults=None)
If I add *args to my method, so the signature is my_method(self, *args), the decorator works fine and the arg spec is:
ArgSpec(args=['self'], varargs='args', keywords=None, defaults=None)
I don't see any clues here, since the decorator works fine when called directly, but fails when triggered by the action.
Can anyone shed some light on this, or offer something I can use to debug it further?

This isn't an answer so much as a workaround. I hope there aren't any downsides to this. I thought of it while I was typing out a simple re-direct function. This uses a lambda to do the same thing at run-time:
self.action_my_method.triggered.connect(lambda: self.my_method())
Because the lambda itself takes no arguments, it discards whatever the signal passes along, which shields my decorated method from being mangled and prevents the error.
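For what it's worth, the extra argument comes from the signal itself: QAction.triggered emits the action's checked state as a boolean, so the connected slot is invoked with one more argument than the undecorated method accepts. The failure can be reproduced without Qt at all (a minimal sketch of the same call pattern):

```python
from functools import wraps

def simple_decorator(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

class Widget(object):
    @simple_decorator
    def my_method(self):
        return "called"

w = Widget()
print(w.my_method())   # fine: fn receives only self

# QAction.triggered passes a 'checked' boolean, so Qt effectively does:
try:
    w.my_method(False)
except TypeError as e:
    print(e)   # my_method() takes 1 positional argument but 2 were given
```

The lambda works because it swallows that boolean before it ever reaches the wrapper.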

Alternatively you can redefine the signature of every method that may be called via a .connect as:
def my_method(self, qtcallparam=None, **kwargs):
So the method will ignore the first unnamed argument, at the price of using strictly keyword arguments in that method (not such a bad thing).
But the solution of @Rafe is probably cleaner.

Related

Is it possible to hook C module function in python?

I have this code, where "clr" and "Assembly" are parts of a C library on Linux:
import clr
from System.Reflection import Assembly
import functools

def prefix_function(function, prefunction):
    @functools.wraps(function)
    def run(*args, **kwargs):
        prefunction(*args, **kwargs)
        return function(*args, **kwargs)
    return run

def this_is_a_function():
    print("Hook function called instead of the original!")

# This gives me error "attribute is read-only"
Assembly.GetEntryAssembly = prefix_function(
    Assembly.GetEntryAssembly, this_is_a_function)

# Should print:
# "Hook function called instead of the original!"
Assembly.GetEntryAssembly()
Is it possible to somehow hook calls to the static function GetEntryAssembly(), as well as calls to any functions in C libraries? I need to modify the logic and return the result of the original GetEntryAssembly(). So, in the example above, I want "Hook function called" to be printed whenever GetEntryAssembly() is called.
I can't subclass, but I could accept overwriting a kind of "pointer", like in C code.
I also tried this answer, but the read-only attribute error occurs.
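The read-only error is specific to the reflected .NET type: attributes on it cannot be rebound from Python. On an ordinary Python object the prefix pattern from the question works as written; here is a sketch using a hypothetical stand-in class (Holder is not part of any library):

```python
import functools

def prefix_function(function, prefunction):
    @functools.wraps(function)
    def run(*args, **kwargs):
        prefunction(*args, **kwargs)
        return function(*args, **kwargs)
    return run

class Holder(object):          # plain-Python stand-in for the .NET class
    @staticmethod
    def get_entry():
        return "original"

calls = []
Holder.get_entry = staticmethod(
    prefix_function(Holder.get_entry, lambda *a, **k: calls.append("hooked")))

print(Holder.get_entry())   # prints "original", after the prefix ran
print(calls)                # the prefix function was called first
```

For the .NET case, one workaround along these lines is to keep your own reference to the original and call a wrapper function in your code instead of patching the reflected class itself.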

Decorator function to wrap a function?

I have to write a dummy function to get my code running on different systems, some of which don't have the needed packages. The function is wrapped and then called like a class method. I am struggling with this problem; any ideas how to do that?
Here is a short snippet: I import a Python script ray.py, which should contain this remote() function. The remote function has to take two arguments, without any usage.
Edit: The @ray.remote() decorator wraps the run() function so it can be executed in parallel. It doesn't change the return value of run(). On some systems ray is not supported, and I want the same script to execute sequentially without changing anything. Therefore I import a ray dummy instead of the real one. Now I want to write ray.remote() so that it wraps the run() function in a way that makes it callable with run.remote().
That may be a very inconvenient way to just execute a function sequentially, but it is necessary for easy integration across different systems.
# here the wrapped function
@ray.remote(arg1, arg2)
def run(x):
    return x**2

# call it
squared = run.remote(2)
I got a working script, located in the ray.py file:
def remote(*args, **kwargs):
    def new_func(func):
        class Wrapper:
            def __init__(self, f):
                self.func = f

            def remote(self, *arg):
                out = self.func(*arg)
                return out

        ret = Wrapper(func)
        return ret
    return new_func
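Wired together, the dummy makes run.remote(...) execute the function synchronously and return its result directly (a sketch; the decorator arguments are placeholders that get ignored, as the question requires):

```python
def remote(*args, **kwargs):          # ray-dummy version of ray.remote
    def new_func(func):
        class Wrapper:
            def __init__(self, f):
                self.func = f
            def remote(self, *arg):
                return self.func(*arg)   # run sequentially, same return value
        return Wrapper(func)
    return new_func

@remote("arg1", "arg2")               # placeholder arguments, ignored
def run(x):
    return x ** 2

print(run.remote(2))                  # 4
```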

Getting Pytest, Relative Import, and Patch.Object to Cooperate

I'm having a hell of a time getting pytest, relative import, and patch to cooperate.
I have the following:
from .. import cleanup

class CleanupTest(unittest.TestCase):
    @patch.object(cleanup, 'get_s3_url_components', MagicMock())
    @patch.object(cleanup, 'get_db_session', MagicMock(return_value={'bucket', 'key'}))
    @patch('time.time')
    def test_cleanup(self, mock_time, mock_session, mock_components):
        ...
I've tried a few variations:
The one currently shown: pytest returns 'TypeError: test_cleanup() takes exactly 4 arguments (2 given)'. It's not recognizing the patch.object decorators, even though I'm pretty sure I'm using them correctly according to https://docs.python.org/3/library/unittest.mock-examples.html
Changing the patch.object calls over to simple patches causes them to throw 'no module named cleanup'. I can't figure out how to use patch with a relative import.
Edit: I'm using Python 2.7 in case that's relevant.
This actually doesn't have anything to do with pytest; it is merely the behaviour of the mock decorators.
From the docs:
If patch() is used as a decorator and new is omitted, the created mock is passed in as an extra argument to the decorated function
That is, because you're passing a replacement object as new, the extra mock argument is not injected into your test function's signature.
For example:
from unittest import mock

class x:
    def y(self):
        return 'q'

@mock.patch.object(x, 'y', mock.Mock(return_value='z'))
def t(x): ...

t()  # TypeError: t() missing 1 required positional argument: 'x'
Fortunately, if you're producing a mock object, you rarely need to actually pass in the new argument and can use the keyword options
For your particular example this should work fine:
@patch.object(cleanup, 'get_s3_url_components')
@patch.object(cleanup, 'get_db_session', return_value={'bucket', 'key'})
@patch('time.time')
def test_cleanup(self, mock_time, mock_session, mock_components): ...
If you absolutely need them to be MagicMocks, you can use the new_callable=MagicMock keyword argument
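The difference is easy to see in isolation: with new omitted, patch both creates the mock and passes it into the decorated function, so the argument counts line up (a small sketch; X is a throwaway class):

```python
from unittest import mock

class X:
    def y(self):
        return 'q'

@mock.patch.object(X, 'y', return_value='z')
def t(mock_y):
    # patch created the MagicMock itself and passed it in as mock_y;
    # keyword arguments like return_value configure that mock
    return X().y()

print(t())   # 'z' -- and no TypeError about a missing argument
```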

Can I use aspects in python without changing a method / function's signature?

I've been using python-aspectlib to weave an aspect into certain methods - unfortunately this changes the methods' signature to ArgSpec(args=[], varargs='args', keywords='kwargs', defaults=None), which creates problems when working with libraries that depend on inspect returning the proper signature(s).
Is there a way to use python-aspectlib without changing a method's signature? If not, are there other python aspect libraries that can do this?
I've looked at the decorator module, which explicitly mentions the problem of changing a method signature: http://micheles.googlecode.com/hg/decorator/documentation.html#statement-of-the-problem , but I was hoping to find a solution where I don't need to modify the methods I want to weave (since they are part of a third party library).
I'm using python 2.7.6
I've managed to 'fix' this for my specific use case with the following piece of code:
from decorator import decorator
from Module1 import Class1
from Module2 import Class2

def _my_decorator(func, *args, **kwargs):
    # add decorator code here
    return func(*args, **kwargs)

def my_decorator(f):
    return decorator(_my_decorator, f)

methods_to_decorate = [
    'Class1.method1',
    'Class2.method2',
]

for method in methods_to_decorate:
    exec_str = method + ' = my_decorator(' + method + '.im_func)'
    exec(exec_str)
This probably doesn't handle all of the issues mentioned in the "How you implement your Python decorator is wrong" blog post, but it fulfils the criterion most important to me: correct method signatures.
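As an aside, on Python 3 the standard library alone can solve the reported-signature problem: functools.wraps sets __wrapped__ on the wrapper, and inspect.signature follows it back to the original function. A sketch (this is not what the decorator module does internally, and it does not help Python 2's getargspec):

```python
import functools
import inspect

def my_decorator(func):
    @functools.wraps(func)        # copies metadata and sets __wrapped__
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

class Class1:
    def method1(self, a, b=1):
        return a + b

Class1.method1 = my_decorator(Class1.method1)

# inspect.signature follows __wrapped__, so the original parameters show:
print(inspect.signature(Class1.method1))   # (self, a, b=1)
print(Class1().method1(2))                 # 3
```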

Print function calls with a variable delay / Python

The question is: how can I configure the Python debugger to show me in the console which functions are being called?
In order not to see everything flash by, a delay between the function calls would be needed.
If you want to monitor when a few particular functions are being called,
you could use this decorator:
import functools

def trace(f):
    @functools.wraps(f)
    def wrapper(*arg, **kw):
        '''This decorator shows how the function was called'''
        arg_str = ','.join(['%r' % a for a in arg] +
                           ['%s=%s' % (key, kw[key]) for key in kw])
        print "%s(%s)" % (f.__name__, arg_str)
        return f(*arg, **kw)
    return wrapper
You would use it like this:
@trace   # <--- decorate your functions with the @trace decorator
def foo(x, y):
    # do stuff
When you run your program, every time foo(x,y) is called, you'll see the
function call with the value of its arguments in the console:
foo(y=(0, 1, 2),x=(0, 0, 0))
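To get the delay the question asks for, the same decorator can sleep after printing each call (a Python 3 sketch; the 0.01-second delay is an arbitrary choice):

```python
import functools
import time

def trace(delay=0.0):
    def deco(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            arg_str = ','.join([repr(a) for a in args] +
                               ['%s=%r' % (k, v) for k, v in kwargs.items()])
            print('%s(%s)' % (f.__name__, arg_str))
            time.sleep(delay)   # pause so the output doesn't flash by
            return f(*args, **kwargs)
        return wrapper
    return deco

@trace(delay=0.01)
def foo(x, y):
    return x + y

foo((0, 0, 0), y=(0, 1, 2))   # prints the call, pauses, then runs foo
```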
You can use the alternative pydb debugger. You can invoke it with pydb --fntrace --batch <scriptname> to get a function trace.
As for the "flashing-by", use the usual tools like Ctrl-S/Ctrl-Q on an ANSI terminal, or redirect to a file.
