Background
I'm working on a PyQt application and have added the following debug decorator:
import functools
import logging

def debug_decorator(func):
    @functools.wraps(func)
    def wrapper_func(*args, **kwargs):
        logging.debug(f"Calling {func.__name__}")
        ret_val = func(*args, **kwargs)
        logging.debug(f"{func.__name__} returned {ret_val!r}")
        return ret_val
    return wrapper_func
Among other things, I use this for click handler functions, for example:
self.share_qpb = QtWidgets.QPushButton("Share")
self.share_qpb.clicked.connect(self.on_share_clicked)
[...]
@debug_decorator
def on_share_clicked(self):
If I run this code, I get the following error when the share button is clicked:
TypeError: on_share_clicked() takes 1 positional argument but 2 were given
The reason for this is that the clicked signal in Qt/PyQt sends the checked state of the button (see the documentation).
This is only an issue when I decorate functions with my debug decorator, not otherwise (I guess the extra argument is discarded somewhere).
Question
How can I make my debug decorator work without having to add a parameter for every clicked signal handler?
(I'd like to find a solution which works both for cases when the arguments match and when there is one extra argument)
I'm running Python 3.8 and PyQt 5.15.
I managed to solve this by checking whether the number of arguments provided (len(args)) is greater than the number of arguments the decorated function accepts (func.__code__.co_argcount).
I also check if the name of the decorated function contains the string "clicked", since I have made a habit of including that in the names of my click handler functions. This has the advantage that if another decorated function is called with too many arguments, we still get the usual error.
def debug_decorator(func):
    @functools.wraps(func)
    def wrapper_func(*args, **kwargs):
        logging.debug(f"=== {func.__name__} was called")
        func_arg_count_int = func.__code__.co_argcount
        func_name_str = func.__name__
        if len(args) > func_arg_count_int and "clicked" in func_name_str:  # <----
            args = args[:-1]
        ret_val = func(*args, **kwargs)
        logging.debug(f"=== {func.__name__} returned {ret_val!r}")
        return ret_val
    return wrapper_func
Thank you to @ekhumoro for helping me out with this!
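As an aside, a more general sketch (my own untested variation, not part of the approach above) could trim surplus positional arguments based on the wrapped function's signature instead of relying on the "clicked" naming convention. The trade-off is that genuine argument-count errors get silently swallowed for every decorated function:

import functools
import inspect
import logging

def debug_decorator(func):
    # Count the positional parameters the wrapped function can accept
    # (for a method defined in a class body this includes self).
    # Functions that declare their own *args are not handled here.
    params = inspect.signature(func).parameters.values()
    max_positional = sum(
        p.kind in (p.POSITIONAL_ONLY, p.POSITIONAL_OR_KEYWORD) for p in params
    )

    @functools.wraps(func)
    def wrapper_func(*args, **kwargs):
        logging.debug(f"Calling {func.__name__}")
        # Drop surplus positional arguments such as Qt's checked flag.
        ret_val = func(*args[:max_positional], **kwargs)
        logging.debug(f"{func.__name__} returned {ret_val!r}")
        return ret_val
    return wrapper_func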
I got this code from Corey Schafer's tutorial on Python decorators. I am giving two pieces of code below.
CODE - 1
def decorator_function(original_function):
    def wrapper_function():
        print("Wrapper executed before {}".format(original_function.__name__))
        return original_function()
    return wrapper_function

@decorator_function
def display():
    print("display func ran")

def function_info(name, age):
    print("Function info has {} and {} as arguments.".format(name, age))

display()
function_info('John', 23)
This code executes properly. But if we look at the function_info(name, age) method, it is being passed arguments, and yet it does not give any error.
Meanwhile, the code stated below takes *args and **kwargs as arguments.
CODE - 2
def decorator_function(original_function):
    def wrapper_function(*args, **kwargs):
        print("Wrapper executed before {}".format(original_function.__name__))
        return original_function(*args, **kwargs)
    return wrapper_function

@decorator_function
def display():
    print("display func ran")

def function_info(name, age):
    print("Function info has {} and {} as arguments.".format(name, age))

display()
function_info('Jim', 23)
Can anyone explain the difference to me?
Why, in CODE - 1, when both methods are under the same decorator, is there no error for function_info(name, age), but in CODE - 2, when both methods are decorated separately, function_info(name, age) requires *args and **kwargs?
The output is basically the same for both versions, no matter whether you use the arguments *args, **kwargs (tested with https://www.onlinegdb.com/online_python_interpreter).
CODE - 1:
Wrapper executed before display
display func ran
Function info has John and 23 as arguments.
CODE - 2:
Wrapper executed before display
display func ran
Function info has Jim and 23 as arguments.
The thing is that the decorator is only used on the function display, which has no arguments.
If you put
@decorator_function
def function_info(name, age):
    print("Function info has {} and {} as arguments.".format(name, age))
then, for both codes, you need *args, **kwargs, because wrapper_function() is executed before function_info() is executed. And since function_info() takes arguments, you also need arguments for the wrapper_function() in the decorator; otherwise the interpreter complains that there are two arguments for a function which takes none.
Output:
Wrapper executed before display
display func ran
Wrapper executed before function_info
Function info has Jim and 23 as arguments.
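To see the failure mode directly, here is a minimal sketch (not from the question, but in the same style) that applies the CODE - 1 wrapper, which accepts no arguments, to function_info:

def decorator_function(original_function):
    def wrapper_function():  # accepts no arguments
        print("Wrapper executed before {}".format(original_function.__name__))
        return original_function()
    return wrapper_function

@decorator_function
def function_info(name, age):
    print("Function info has {} and {} as arguments.".format(name, age))

function_info('Jim', 23)
# TypeError: wrapper_function() takes 0 positional arguments but 2 were given (Python 3 wording)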
I saw the function below provided for mocking a multiprocessing call in Python:
class MockPoolApplyResult:
    def __init__(self, func, args):
        self._func = func
        self._args = args

    def get(self, timeout=0):
        return self._func(*self._args)

monkeypatch.setattr("multiprocessing.pool.Pool.starmap",
                    lambda self, func, args=(), kwds={}, callback=None, error_callback=None:
                        MockPoolApplyResult(func, args))
What does the lambda function below do, and how can I check the number of times it is called?
monkeypatch.setattr("multiprocessing.pool.Pool.starmap",
                    lambda self, func, args=(), kwds={}, callback=None, error_callback=None:
                        MockPoolApplyResult(func, args))
The lambda is equivalent to this:
def mock_starmap(self, func, args=(), kwds={}, callback=None, error_callback=None):
    return MockPoolApplyResult(func, args)

monkeypatch.setattr("multiprocessing.pool.Pool.starmap", mock_starmap)
It replaces the starmap method on the Pool class object here with a function that takes the exact same arguments.
The intended behavior is probably that when you have my_pool = Pool(3) and run my_pool.starmap(myfunc, [1, 2, 3]).get(), it would execute myfunc(1), myfunc(2) and myfunc(3) within the current process, as opposed to these calls being performed by processes in the pool. But it looks like it will actually call myfunc(1, 2, 3) because of how MockPoolApplyResult.get() is written.
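If the per-item behavior is what's actually wanted, one possible adjustment (a sketch on my part, assuming the iterable holds argument tuples the way the real starmap expects) is to have get() unpack each item:

class MockPoolApplyResult:
    def __init__(self, func, args):
        self._func = func
        self._args = args

    def get(self, timeout=0):
        # mirrors Pool.starmap(func, [(1,), (2,), (3,)]) -> [func(1), func(2), func(3)]
        return [self._func(*a) for a in self._args]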
Edit: I forgot to answer part 2 of your question. You can't trace the number of calls to a monkeypatched function; for this you'll need to mock the starmap function instead, and then the mocked object will track the number of calls. You can do this either using the built-in unittest.mock module directly or with the pytest-mock wrapper package.
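For example, using the built-in unittest.mock mentioned above (a sketch; run_jobs is a hypothetical stand-in for whatever code under test calls Pool.starmap):

from multiprocessing.pool import Pool
from unittest import mock

def run_jobs():
    # hypothetical code under test that farms work out via Pool.starmap
    with Pool(2) as pool:
        return pool.starmap(pow, [(2, 3), (3, 2)])

def test_starmap_call_count():
    with mock.patch("multiprocessing.pool.Pool.starmap") as fake_starmap:
        fake_starmap.return_value = [8, 9]   # canned result handed back to the caller
        assert run_jobs() == [8, 9]
        assert fake_starmap.call_count == 1  # the Mock records every call it receives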
I have a decorator that records the functions present in my script:
registry = []

def register(func):
    print('running register(%s)' % func)
    registry.append(func)
    return func
I then have a series of decorated functions:
@register
def func1():
    print('running f1')

@register
def func2():
    print('running f2')
This works; after running the script, print(registry) returns:
[<function func1 at 0x0000000008433950>, <function func2 at 0x0000000008F06AE8>]
However, calling the functions individually, for example:
func1()
returns only 'running f1': just the function, without the decoration.
I was expecting it to return something like 'running register(func1) \n running func1'.
So my question is: when you have a decorated function and call it, when will it call the function in isolation, and when will it call the decorated function?
Thanks very much.
Your register (decorator) function is only run once when the code is interpreted.
If you want to alter the function behaviour, try something like:
def register(func):
    registry.append(func)
    print('adding register(%s)' % func)

    def wrap(*args, **kwargs):
        print('running register(%s)' % func)
        return func(*args, **kwargs)
    return wrap
The first print is done once, the second one before each call.
Forwarding the arguments (*args, **kwargs) makes your decorator transparent to the wrapped function.
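With that version of register in place, the first message appears once at definition time and the second on every call, for example (function addresses will differ):

registry = []

@register              # prints: adding register(<function func1 at 0x...>)
def func1():
    print('running f1')

func1()
# running register(<function func1 at 0x...>)
# running f1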
What we call a "decorator" is just a higher-order function, and the @decorator syntax is nothing more than syntactic sugar, so this:
@decorate
def func():
    pass
is strictly equivalent to
def func():
    pass

func = decorate(func)
As mentioned by Guillaume Deslandes, if this code is at your module or script's top level, the decorator is only invoked when the module or script is first loaded by the runtime.
In your case, the decorator function register returns its argument (the function it's applied to) unchanged, so calling the "decorated" function will work exactly as if it had never been decorated.
If you want to modify the decorated function in any way (by executing code before and/or after the original function, or whatever), you have to return a new function that will "replace" the original, while usually keeping a reference to the original so this new "wrapper" function can still call it. This is typically done using the fact that Python functions are closures:
def decorate(fun):
    def wrapper(*args, **kw):
        res = fun(*args, **kw)
        print("function {} called with *{}, **{} returned {}".format(fun, args, kw, res))
        return res
    return wrapper

@decorate
def fun(a):
    return a * 2
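Calling the decorated function then prints the trace line and still returns the wrapped result, e.g.:

result = fun(5)   # prints: function <function fun at 0x...> called with *(5,), **{} returned 10
print(result)     # 10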
I cannot figure out how to actually pass arguments to a fabric custom task.
I have a bunch of tasks that all need to do the same setup, so I was hoping to subclass the task and have the base class do the setup and then run the specific subtasks. Both the setup code and the subtasks need access to some arguments that are passed in from the command-line to the task. I also need to be able to set default values for the arguments.
Original Attempt
My original attempt shows what I am trying to do without any subclasses.
This code works correctly.
The code below is in file tmp1.py:
from fabric.api import task

def do_setup(myarg):
    ''' common setup for all tasks '''
    print "in do_setup(myarg=%s)" % myarg
    # do setup using myarg for something important

@task
def actual_task1(myarg='default_value', alias='at'):
    print "In actual_task1(myarg=%s)" % myarg
    do_setup(myarg)
    # do rest of work ...

@task
def actual_task2(myarg='default_value', alias='at'):
    print "In actual_task2(myarg=%s)" % myarg
    do_setup(myarg)
    # do rest of work ...
I run it from the command-line without any args and correctly see the default for myarg of 'default_value':
fab -f ./tmp1.py actual_task1
Prints:
In actual_task1(myarg=default_value)
in do_setup(myarg=default_value)
Done.
Then I call it with myarg='hello' and see that 'hello' gets passed through correctly:
fab -f ./tmp1.py actual_task1:myarg='hello'
It outputs:
In actual_task1(myarg=hello)
in do_setup(myarg=hello)
Done.
Attempt with a custom task
My next attempt is to make a common task to encapsulate the setup part.
This is copied from http://docs.fabfile.org/en/1.5/usage/tasks.html
The code below is in the file tmp2.py:
from fabric.api import task
from fabric.tasks import Task

def do_setup(myarg):
    ''' common setup for all tasks '''
    print "in do_setup(myarg=%s)" % myarg
    # do setup using myarg for something important

'''
Attempt to make a common task to encapsulate the setup part
copied from http://docs.fabfile.org/en/1.5/usage/tasks.html
'''
class CustomTask(Task):
    def __init__(self, func, myarg, *args, **kwargs):
        super(CustomTask, self).__init__(*args, **kwargs)
        print("=> __init__(myarg=%s, args=%s, kwargs=%s" % (myarg, args, kwargs))
        self.func = func
        self.myarg = myarg
        print "in __init__: self.func=", self.func, "self.myarg=", self.myarg

    def run(self, *args, **kwargs):
        return self.func(self.myarg, *args, **kwargs)

@task(task_class=CustomTask, myarg='default_value', alias='at')
def actual_task1():
    print "In actual_task1(myarg=%s)" % myarg
    # do rest of work ...
When run, there are 2 problems:
__init__ gets "default_value" instead of "Hello"
It complains that actual_task1() expects 0 arguments
I run it this way:
fab -f ./tmp2.py actual_task1:myarg="Hello"
Prints:
=> __init__(myarg=default_value, args=(), kwargs={'alias': 'at'}
in __init__: self.func= self.myarg= default_value
Traceback (most recent call last):
  File "/home/xxx/Documents/pyenvs/xxx/local/lib/python2.7/site-packages/fabric/main.py", line 743, in main
    *args, **kwargs
  File "/home/xxx/Documents/pyenvs/xxx/local/lib/python2.7/site-packages/fabric/tasks.py", line 405, in execute
    results[''] = task.run(*args, **new_kwargs)
  File "/home/xxx/test_fab/tmp2.py", line 21, in run
    return self.func(self.myarg, *args, **kwargs)
TypeError: actual_task1() takes no arguments (1 given)
I spent quite a bit of time trying to make this work, but I cannot seem to solve the default_value issue. I must be missing something.
I would appreciate some help figuring out how to make this sample program run. The second version with the custom task needs to behave just like the original version I showed.
Thank you for any help with this issue.
Fixed example with setup:
from fabric.api import task
from fabric.tasks import Task

def do_setup(foo, verbose):
    ''' common setup for all tasks '''
    print "IN do_setup(foo=%s, verbose=%s)" % (foo, verbose)
    # do setup using foo and verbose...

class CustomTask(Task):
    def __init__(self, func, *args, **kwargs):
        '''
        The special args like hosts and roles do not show up in
        args and kwargs; they are stripped already.

        args and kwargs may contain task specific special arguments
        (e.g. aliases, alias, default, and name) to customize the
        task. They are set in the @task decorator and cannot be passed
        on the command-line. Note also that these special task
        arguments are not passed to the run method.

        Non-special arguments (there are none in this example) are
        set in the task decorator. These other arguments are not
        passed to the run method and cannot be overridden from the
        command-line.

        Note that if you pass any "task specific special arguments" or
        "non-special arguments declared in the task decorator" from the
        command-line, they are treated as different arguments and the
        command-line values are passed to the run method but not to
        this method.
        '''
        super(CustomTask, self).__init__(*args, **kwargs)
        print "IN __init__(args=%s, kwargs=%s)" % (args, kwargs)
        self.func = func

    def run(self, foo='foo_default_val', verbose='verbose_default_val',
            *args, **kwargs):
        '''
        The arguments to this method will be:

        1) arguments from the actual task (e.g. foo and verbose). This
           method is where you set a default value for the arguments of
           the actual_task, not on the actual_task itself.

        2) task specific arguments from the command-line
           (e.g. actual_task:bar='xxx'). This example is not expecting
           any, so it strips them and does not pass them to the
           actual_function (e.g. it calls self.func with only foo
           and verbose and does not pass args and kwargs).
        '''
        print "IN run(foo=%s, verbose=%s, args=%s, kwargs=%s)" % \
            (foo, verbose, args, kwargs)
        do_setup(foo, verbose)
        return self.func(foo, verbose)

@task(task_class=CustomTask, alias="RUNME")
def actual_task(foo, verbose):
    print 'IN task actual_task(foo=%s, verbose=%s)' % (foo, verbose)
Run with only host specified on the command-line:
fab -f ./example_with_setup.py actual_task:host='hhh'
IN __init__(args=(), kwargs={'alias': 'RUNME'})
[hhh] Executing task 'actual_task'
IN run(foo=foo_default_val, verbose=verbose_default_val, args=(), kwargs={})
IN do_setup(foo=foo_default_val, verbose=verbose_default_val)
IN task actual_task(foo=foo_default_val, verbose=verbose_default_val)
Run specifying foo on the commandline:
fab -f ./example_with_setup.py actual_task:host='hhh',foo='bar'
IN __init__(args=(), kwargs={'alias': 'RUNME'})
[hhh] Executing task 'actual_task'
IN run(foo=bar, verbose=verbose_default_val, args=(), kwargs={})
IN do_setup(foo=bar, verbose=verbose_default_val)
IN task actual_task(foo=bar, verbose=verbose_default_val)
Run specifying both foo and verbose on the command-line:
fab -f ./example_with_setup.py actual_task:host='hhh',foo='bar',verbose=True
IN __init__(args=(), kwargs={'alias': 'RUNME'})
[hhh] Executing task 'actual_task'
IN run(foo=bar, verbose=True, args=(), kwargs={})
IN do_setup(foo=bar, verbose=True)
IN task actual_task(foo=bar, verbose=True)
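As an aside, the mechanism itself can be illustrated without Fabric. Here is a minimal plain-Python sketch (all names made up; this is not Fabric's API) of how keyword arguments given at decoration time reach __init__ while call-time arguments reach run:

class MiniTask(object):
    def __init__(self, func, **decorator_kwargs):
        self.func = func
        self.decorator_kwargs = decorator_kwargs   # e.g. alias='RUNME'

    def run(self, *args, **kwargs):
        return self.func(*args, **kwargs)

def mini_task(task_class=MiniTask, **kwargs):
    def wrapper(func):
        return task_class(func, **kwargs)          # instantiated when the module is loaded
    return wrapper

@mini_task(alias='RUNME')
def actual_task(foo='foo_default'):
    print("IN actual_task(foo=%s)" % foo)

actual_task.run()           # IN actual_task(foo=foo_default)
actual_task.run(foo='bar')  # IN actual_task(foo=bar)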
In the custom class section, the function actual_task1 doesn't actually take arguments, so the only valid way to invoke your fabric file is:
fab -f ./tmp2.py actual_task1
Furthermore, I don't think you're actually calling do_setup in either CustomTask or actual_task1.
This is the fixed example.
# fixed the example from http://docs.fabfile.org/en/1.8/usage/tasks.html
from fabric.api import task
from fabric.tasks import Task

class CustomTask(Task):
    def __init__(self, func, myarg1, *args, **kwargs):
        '''
        The special args like hosts and roles do not show up in
        args and kwargs; they are stripped already.

        args and kwargs may contain task specific special arguments
        (e.g. aliases, alias, default, and name) to customize the
        task. They are set in the @task decorator and cannot be passed
        on the command-line. Note also that these special task
        arguments are not passed to the run method.

        Non-special arguments (in this example myarg1) are set in the task
        decorator. These other arguments are not passed to the run
        method and cannot be overridden from the command-line.

        Note that if you pass any "task specific special arguments" or
        "non-special arguments declared in the task decorator" from the
        command-line, they are treated as different arguments and the
        command-line values are passed to the run method but not to
        this method.
        '''
        super(CustomTask, self).__init__(*args, **kwargs)
        print "IN __init__(myarg1=%s, args=%s, kwargs=%s)" % \
            (myarg1, args, kwargs)
        self.func = func
        self.myarg1 = myarg1

    def run(self, myarg2='default_value2', *args, **kwargs):
        '''
        The arguments to this method will be:

        1) arguments from the actual task (e.g. myarg2). This method
           is where you set a default value for the arguments of the
           actual_task, not on the actual_task itself.

        2) task specific arguments from the command-line
           (e.g. actual_task:foo='foo'). This example is not expecting
           any, so it strips them and does not pass them to the
           actual_function (e.g. it calls self.func with only myarg2 and
           does not pass args and kwargs).
        '''
        print "IN run(myarg2=%s, args=%s, kwargs=%s)" % \
            (myarg2, args, kwargs)
        return self.func(myarg2)

@task(task_class=CustomTask, myarg1='special_value', alias='RUNME')
def actual_task(myarg2):
    print "IN actual_task(myarg2=%s)" % myarg2
Run with only hosts specified on the command-line:
fab -f ./fixed_example actual_task:hosts="hhh"
IN __init__(myarg1=special_value, args=(), kwargs={'alias': 'RUNME'})
[hhh] Executing task 'actual_task'
IN run(myarg2=default_value2, args=(), kwargs={})
IN actual_task(myarg2=default_value2)
Run specifying myarg2 on the command-line:
fab -f ./fixed_example actual_task:hosts="hhh",myarg2="good_value"
IN __init__(myarg1=special_value, args=(), kwargs={'alias': 'RUNME'})
[hhh] Executing task 'actual_task'
IN run(myarg2=good_value, args=(), kwargs={})
IN actual_task(myarg2=good_value)
Bad run, specifying myarg1 and alias on the command-line. Notice that __init__ gets the values specified in the task decorator and not the values from the command-line. Notice also that run gets myarg1 and alias as arguments now.
fab -f ./fixed_example actual_task:hosts="hhh",myarg1="myarg1_from_commandline",alias="alias_from_commandline"
IN __init__(myarg1=special_value, args=(), kwargs={'alias': 'RUNME'})
[hhh] Executing task 'actual_task'
IN run(myarg2=default_value2, args=(), kwargs={'alias': 'alias_from_commandline', 'myarg1': 'myarg1_from_commandline'})
IN actual_task(myarg2=default_value2)