The question is: how can I configure the Python debugger to show me in the console which functions are being called?
To keep everything from just flashing by, a delay between the function calls would be needed.
If you want to monitor when a few particular functions are being called,
you could use this decorator:
import functools

def trace(f):
    @functools.wraps(f)
    def wrapper(*args, **kw):
        '''This decorator shows how the function was called'''
        arg_str = ','.join(['%r' % a for a in args] +
                           ['%s=%s' % (key, kw[key]) for key in kw])
        print("%s(%s)" % (f.__name__, arg_str))
        return f(*args, **kw)
    return wrapper
You would use it like this:
@trace  # <--- decorate your functions with the @trace decorator
def foo(x, y):
    # do stuff
    ...
When you run your program, every time foo(x,y) is called, you'll see the
function call with the value of its arguments in the console:
foo(y=(0, 1, 2),x=(0, 0, 0))
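The question also asked for a delay so the calls don't flash by. A minimal sketch of one way to do that (the 0.5-second pause is just an assumed default, not anything from the original answer) is to sleep inside the wrapper:
import functools
import time

def trace(f, delay=0.5):
    @functools.wraps(f)
    def wrapper(*args, **kw):
        arg_str = ','.join(['%r' % a for a in args] +
                           ['%s=%s' % (key, kw[key]) for key in kw])
        print("%s(%s)" % (f.__name__, arg_str))
        time.sleep(delay)  # pause so the trace output is readable
        return f(*args, **kw)
    return wrapper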
You can use the alternative pydb debugger. You can invoke it with pydb --fntrace --batch <scriptname> to get a function trace.
As for the "flashing-by", use the usual tools like Ctrl-S/Ctrl-Q on an ANSI terminal, or redirect to a file.
Related
I have this code, where clr and Assembly come from a C library on Linux:
import clr
from System.Reflection import Assembly
import functools

def prefix_function(function, prefunction):
    @functools.wraps(function)
    def run(*args, **kwargs):
        prefunction(*args, **kwargs)
        return function(*args, **kwargs)
    return run
def this_is_a_function():
    print("Hook function called instead of the original!")

# This gives me error "attribute is read-only"
Assembly.GetEntryAssembly = prefix_function(
    Assembly.GetEntryAssembly, this_is_a_function)

# Should print:
# "Hook function called instead of the original!"
Assembly.GetEntryAssembly()
Is it possible to somehow hook calls to the static function GetEntryAssembly(), as well as calls to any functions in C libraries? I need to modify the logic and still return the result of the original GetEntryAssembly(). So, in the example above, I want "Hook function called" to be printed whenever GetEntryAssembly() is called.
I can't subclass, but I could accept overwriting something like function pointers, as in C code.
I also tried this answer, but the read-only attribute error occurs.
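For what it's worth, the same prefix_function pattern does work when the target is an ordinary, writable Python attribute; here is a minimal sketch using a made-up Dummy class (not the .NET type) to contrast with the read-only case:
import functools

def prefix_function(function, prefunction):
    @functools.wraps(function)
    def run(*args, **kwargs):
        prefunction(*args, **kwargs)
        return function(*args, **kwargs)
    return run

class Dummy:
    @staticmethod
    def get_entry():
        return "original result"

def hook():
    print("Hook function called instead of the original!")

# A plain Python attribute is writable, so this rebinding succeeds,
# unlike the read-only Assembly.GetEntryAssembly attribute.
Dummy.get_entry = prefix_function(Dummy.get_entry, hook)
print(Dummy.get_entry())  # hook message, then "original result"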
I have to write a dummy function to get my code running on different systems, some of which don't have the needed packages. The function is wrapped and then called like a class method. I am struggling with this problem; any ideas how to do that?
Here is a short snippet: I import a Python script ray.py which should contain this remote() function. The remote function has to take two arguments, which aren't actually used.
Edit: The @ray.remote() decorator wraps the run() function so it can be executed in parallel. It doesn't change the return value of run(). On some systems ray is not supported, and I want the same script to execute sequentially without changing anything, so I import a ray dummy instead of the real one. Now I want to write the dummy's ray.remote() so that it wraps the run() function in a way that makes it callable as run.remote().
That may be a very inconvenient way to just execute a function sequentially, but it is necessary for an easy integration across different systems.
# here the wrapped function
@ray.remote(arg1, arg2)
def run(x):
    return x**2

# call it
squared = run.remote(2)
I got a working script, located in the ray.py file:
def remote(*args, **kwargs):
    def new_func(func):
        class Wrapper:
            def __init__(self, f):
                self.func = f

            def remote(self, *arg):
                out = self.func(*arg)
                return out

        ret = Wrapper(func)
        return ret
    return new_func
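A quick usage sketch of that dummy (assuming it lives in a local ray.py that shadows the real ray package on systems where ray isn't available):
import ray  # the local dummy ray.py, not the real ray package

@ray.remote(None, None)  # the two arguments are accepted but ignored
def run(x):
    return x**2

squared = run.remote(2)  # executed sequentially via Wrapper.remote
print(squared)  # 4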
Q.1: When display is called the second time, why is print('inside outer') not executed, i.e. why doesn't 'inside outer' print again?
Q.2: We are just returning wrapper (the function name), not actually calling it, so how does the wrapper function get called? Is it the decorator machinery that internally calls the function when it is returned?
Code:
def outer(func):
    def wrapper():
        print('inside wrapper func')
        func()
    print('inside outer')
    return wrapper

@outer
def display():
    print('inside display')

display()
display()
Output:
inside outer
inside wrapper func
inside display
inside wrapper func
inside display
In functional programming, what you just did is called a side effect. A function is meant to return a value, but you introduced a side effect by printing to the console!
Answering Q1: the outer function is called only once, and that happens immediately when you apply @outer. Even if you never call display, just decorating it will print 'inside outer'. Try it! So when you call display, you're calling the wrapper function.
Answering Q2: the pie syntax (@) is just a way of saying display = outer(display).
What that call does is print 'inside outer' and then return the wrapper function. So when you call display, you aren't calling the outer function, you're calling the wrapper. display is now effectively an alias for wrapper.
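Here is a minimal sketch of what the @ syntax expands to (same outer and display as above), showing that outer runs exactly once, at decoration time:
def outer(func):
    def wrapper():
        print('inside wrapper func')
        func()
    print('inside outer')
    return wrapper

def display():
    print('inside display')

display = outer(display)  # prints 'inside outer' once, right here
display()  # prints 'inside wrapper func' and 'inside display'
display()  # same two lines again; outer is never re-run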
On Q2:
just providing wrapper (the function name), not actually calling it, then how does the wrapper function get called
That is simply how Python decorators work: outer returns the wrapper function object, and it is the later call display() that actually invokes it.
On Q1:
In the decorator definition, only one statement is needed for the decorator itself to work:
return wrapper
The other statement,
print('inside outer')
runs only while outer itself is executing. outer executes just once, at decoration time, so on the second call to display Python simply reuses the wrapper that was already created and 'inside outer' is not printed again.
say I have a module like so:
def print_hello(name):
    " prints a greeting "
    print("Hello, {}!".format(name.title()))
I want to set a sys.settrace on my module, so that whenever a function from my module is called, it prints a block to stdout, like so:
CALLED FUNCTION: print_hello()
Hello, Alex!
Predictably, the trace method will pick up ALL methods which are called, which results in this:
$ python3 trace_example.py
Called function: print_hello
Hello, Alex!
Called function: _remove
How can I tell if a method is in the current module?
(here is my trace function if you were wondering:)
import sys

def tracefunc(frame, event, args):
    if event == 'call':
        print("Called function: {}()".format(frame.f_code.co_name))

sys.settrace(tracefunc)
You can check the module name of a frame object with:
frame.f_globals['__name__']
Or, if you want to check the previous call in the frame stack (I'm not sure which one is more interesting right now):
frame.f_back.f_globals['__name__']
Of course, note that f_back may be None and that the global dictionary may not have a __name__ member.
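Putting that together with the trace function from the question, a minimal sketch that only reports calls coming from the script's own module (assuming the script runs as __main__) could look like this:
import sys

def tracefunc(frame, event, args):
    # Only report 'call' events whose code object lives in this module.
    if event == 'call' and frame.f_globals.get('__name__') == '__main__':
        print("Called function: {}()".format(frame.f_code.co_name))

sys.settrace(tracefunc)

def print_hello(name):
    " prints a greeting "
    print("Hello, {}!".format(name.title()))

print_hello("alex")  # reported; calls inside other modules are skipped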
UPDATE:
If you have a module object and want to list all the top-level callable objects:
callables = [n for n,f in module.__dict__.items() if hasattr(f, '__call__')]
That will also pick up classes, because technically they look a bit like functions, but you can refine the condition further if you want. (See this other question for extra details.)
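One way to refine it is with the inspect module; a sketch that keeps only plain functions actually defined in the module (an assumption about what "callable" should mean here):
import inspect

def module_functions(module):
    # Keep only functions defined in this module itself,
    # excluding classes and functions imported from elsewhere.
    return [name for name, obj in vars(module).items()
            if inspect.isfunction(obj) and obj.__module__ == module.__name__]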
I have used decorators in the past without any issue, but I'm stuck on this one. It seems that when my decorated function is called by a PyQt action, it fails. My theory is that PyQt is somehow wrapping the connected function internally and messing with the signature, or some such thing, but I can't find any evidence of this.
By connected I mean like this:
from PyQt4 import QtGui
# In my widget init...
self.action_my_method = QtGui.QAction("action_my_method", self)
self.action_my_method.triggered.connect(self.my_method)
Here is my simple decorator:
from functools import wraps

def simple_decorator(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        from inspect import getargspec
        print getargspec(fn)
        return fn(*args, **kwargs)
    return wrapper
When I use this to decorate a method without any arguments (def my_method(self)) and execute it directly from my QWidget class, it works fine. When I use a menu to trigger the action, I get:
# Traceback (most recent call last):
# File "[...].py", line 88, in wrapper
# return fn(*args, **kwargs)
# TypeError: my_method() takes exactly 1 argument (2 given)
I'm seeing an arg spec:
ArgSpec(args=['self'], varargs=None, keywords=None, defaults=None)
If I add *args to my method, so the signature is my_method(self, *args), the decorator works fine and the arg spec is:
ArgSpec(args=['self'], varargs='args', keywords=None, defaults=None)
I don't see any clues here, since the decorator works fine when called directly, but fails when triggered by the action.
Can anyone shed some light on this, or offer something I can use to debug it further?
This isn't an answer so much as a workaround. I hope there aren't any downsides to this. I thought of it while I was typing out a simple redirect function. This uses a lambda to do the same thing at run-time:
self.action_my_method.triggered.connect(lambda: self.my_method())
This insulates my decorated function from whatever extra argument the action passes and prevents the error.
Alternatively you can redefine the signature of every method that may be called via a .connect as:
def my_method(self, qtcallparam=None, **kwargs):
So the method will ignore the first unnamed argument, at the price of using strictly keyword arguments in that method (not such a bad thing).
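A sketch of how that signature looks in context (reusing simple_decorator from the question; qtcallparam is just a throwaway name for whatever extra value the signal passes):
class MyWidget(QtGui.QWidget):
    @simple_decorator
    def my_method(self, qtcallparam=None, **kwargs):
        # qtcallparam absorbs the extra positional argument from the signal
        print "my_method called"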
But the solution of @Rafe is probably cleaner.