How to find calls to a method using Python introspection?

I just found some test methods in a project that were missing the "test_" prefix required for them to actually be run. It should be possible to catch this with a bit of linting:
Find all TestCase assertion calls in the code base.
Look for a method with a name starting with "test_" in the call hierarchy.
If there is no such method, print an error message.
I'm wondering how to do the first two, which basically boil down to one problem: how do I find all calls to a specific method in my code base?
Grepping or other text searches won't do, because I need to introspect the results and find parent methods etc. until I either get to the test method or there are no more callers. I need to get a reference to the method to avoid matching methods which happen to have the same name as the ones I'm looking for.

There are two possible approaches here.
Static approach:
You could parse the code base with the ast module to identify all function calls, recording the origin and the target of each call. You would have to track all class and function definitions to know the context in which each call occurs. The limit here is that for instance methods there is no simple way to identify which class the method actually belongs to; the same goes for variables that refer to modules.
Here is a NodeVisitor subclass that reads Python source files and builds a dict mapping each caller to the set of names it calls:
import ast
import collections
import os

class CallMapper(ast.NodeVisitor):
    def __init__(self):
        self.ctx = []
        self.funcs = []
        self.calls = collections.defaultdict(set)

    def process(self, filename):
        self.ctx = [('M', os.path.basename(filename)[:-3])]
        tree = ast.parse(open(filename).read(), filename)
        self.visit(tree)
        self.ctx.pop()

    def visit_ClassDef(self, node):
        print('ClassDef', node.name, node.lineno, self.ctx)
        self.ctx.append(('C', node.name))
        self.generic_visit(node)
        self.ctx.pop()

    def visit_FunctionDef(self, node):
        print('FunctionDef', node.name, node.lineno, self.ctx)
        self.ctx.append(('F', node.name))
        self.funcs.append('.'.join([elt[1] for elt in self.ctx]))
        self.generic_visit(node)
        self.ctx.pop()

    def visit_Call(self, node):
        print('Call', vars(node.func), node.lineno, self.ctx)
        try:
            id = node.func.id           # plain call: func(...)
        except AttributeError:
            id = '*.' + node.func.attr  # attribute call: obj.method(...)
        self.calls['.'.join([elt[1] for elt in self.ctx])].add(id)
        self.generic_visit(node)
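From there you could invert that {caller: callees} mapping and walk upward from every assertion call until you reach a method whose name starts with "test_", or run out of callers. A minimal sketch, assuming a CallMapper has already processed your files (it shares the limitation above: callees are matched by unqualified name only):

def find_orphan_asserts(mapper, assert_names=('*.assertEqual', '*.assertTrue')):
    # invert {caller: callees} into {callee: callers}
    callers = collections.defaultdict(set)
    for caller, callees in mapper.calls.items():
        for callee in callees:
            callers[callee].add(caller)

    def reaches_test(qualname, seen=()):
        name = qualname.rsplit('.', 1)[-1]
        if name.startswith('test_'):
            return True
        return any(reaches_test(parent, seen + (qualname,))
                   for parent in callers.get('*.' + name, ())
                   if parent not in seen)

    for assertion in assert_names:
        for caller in callers.get(assertion, ()):
            if not reaches_test(caller):
                print(caller, 'calls', assertion,
                      'but is never reached from a test_ method')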
Dynamic approach:
If you really want to identify which method is called when more than one could share the same name, you will have to use a dynamic approach. You would decorate individual functions, or all methods of a class, in order to count how many times they were called, and optionally where they were called from. Then you would run the tests and examine what actually happened.
Here is a function that decorates all methods of a class so that the number of calls to each is recorded in a dictionary:
import inspect

def tracemethods(cls, track):
    def tracker(func, track):
        def inner(*args, **kwargs):
            if func.__qualname__ in track:
                track[func.__qualname__] += 1
            else:
                track[func.__qualname__] = 1
            return func(*args, **kwargs)
        inner.__doc__ = func.__doc__
        inner.__signature__ = inspect.signature(func)
        return inner

    for name, func in inspect.getmembers(cls, inspect.isfunction):
        setattr(cls, name, tracker(func, track))
You could tweak that code to browse the interpreter stack and identify the caller of each call, but it is not very easy: you only get the unqualified name of the caller function, and you have to fall back on the file name and line number to identify the caller uniquely.
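For illustration, a minimal sketch of that idea: inspect.stack()[1] describes the direct caller (frame inspection is slow, so you would only switch this on while linting):

def tracecallers(cls, track):
    def tracker(func):
        def inner(*args, **kwargs):
            caller = inspect.stack()[1]  # FrameInfo of the direct caller
            key = (func.__qualname__,
                   caller.function, caller.filename, caller.lineno)
            track[key] = track.get(key, 0) + 1
            return func(*args, **kwargs)
        return inner
    for name, func in inspect.getmembers(cls, inspect.isfunction):
        setattr(cls, name, tracker(func))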

Well, here's a start. You will use a couple of standard libraries:
import dis
import inspect
Suppose you're interested in this source code: myfolder/myfile.py
Then do this:
import myfolder.myfile

def some_func():
    ''

loads = {'LOAD_GLOBAL', 'LOAD_ATTR'}

name_to_member = dict(inspect.getmembers(myfolder.myfile))
for name, member in name_to_member.items():
    if type(member) == type(some_func):
        print(name)
        for ins in dis.get_instructions(member):
            if ins.opname in loads:
                print(name, ins.opname, ins.argval)
Other fun things to do: run dis.dis(member), or print out dis.code_info(member).
This will let you visit each function defined in the file and inspect each bytecode instruction to see whether it might be a method call you care about.
Then it's up to you to Do The Right Thing with potential test methods.
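Tying this back to the original question, a rough sketch of a lint pass built on the same idea (ASSERT_NAMES is an assumption - list the TestCase assertions you actually use; this only sees module-level functions, so you would recurse into classes to cover methods):

ASSERT_NAMES = {'assertEqual', 'assertTrue', 'assertRaises'}  # extend as needed

def flag_misnamed_tests(module):
    for name, member in inspect.getmembers(module, inspect.isfunction):
        for ins in dis.get_instructions(member):
            # method calls compile to LOAD_METHOD on Python 3.7+, LOAD_ATTR before
            if ins.opname in ('LOAD_ATTR', 'LOAD_METHOD') and ins.argval in ASSERT_NAMES:
                if not name.startswith('test_'):
                    print(name, 'calls', ins.argval, 'but is not named test_*')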

Related

Python naming convention for setter method name without args

I have been writing python code with classes that will have a method called something like:
def set_log_paths(self):
The thing is, this method doesn't take an argument, it determines what some values should be based on other values of self. Is it inappropriate to use the word "set" in this case? I ask because it isn't a direct getter or a setter as one would use in a language with private members.
Is there a common conventional word to use in my method name?
If you don't pass any values, and instead compute the value at the moment the method is called, based on current values, a reasonable verb for the action is "update" - therefore update_log_paths().
Just double-check that you really need this design, and think about the chances of you or other users of your class forgetting to call these "update" methods.
Python's introspection easily allows adopting some elements from "reactive programming", which can be used to trigger these updater methods automatically when the values they depend upon change.
A natural choice for such an architecture is a descriptor for your properties that, upon having __set__ called, checks a class-level registry to see whether events should be triggered, plus a decorator that lets you list the attributes that trigger each method. A base class with a proper __init_subclass__ method can set everything up.
Let's suppose you have the "base properties" of your class as annotated attributes in the class body - the descriptor, decorator, and base-class code for this to work could be something along these lines:
from collections import ChainMap

class EventDescriptor:
    def __init__(self, name, default):
        self.name = name
        self.default = default

    def __get__(self, instance, owner):
        if instance is None:
            return self
        return instance.__dict__[self.name] if self.name in instance.__dict__ else self.default

    def __set__(self, instance, value):
        instance.__dict__[self.name] = value
        triggers = instance._post_change_registry.get(self.name, [])
        for trigger in triggers:
            getattr(instance, trigger)()

def triggered_by(*args):
    def decorator(func):
        func._triggered_by = args
        return func
    return decorator

class EventPropertyMixin:
    def __init_subclass__(cls, **kw):
        super().__init_subclass__(**kw)
        for property_name, type_ in cls.__annotations__.items():
            if not hasattr(cls, property_name):
                raise TypeError("Properties without default values not supported in this example code")
            # It would also be trivial to implement runtime type-checking at this
            # point (and in the descriptor code).
            setattr(cls, property_name, EventDescriptor(property_name, getattr(cls, property_name)))
        # Collect all registries in ancestor classes, preserving order:
        post_change_registry = ChainMap()
        for ancestor in cls.__mro__[:0:-1]:
            if hasattr(ancestor, "_post_change_registry"):
                post_change_registry = post_change_registry.new_child(ancestor._post_change_registry)
        post_change_registry = post_change_registry.new_child({})
        for method_name, method in cls.__dict__.items():
            if callable(method) and hasattr(method, "_triggered_by"):
                for property_name in method._triggered_by:
                    triggers = post_change_registry.setdefault(property_name, [])
                    if method_name not in triggers:
                        triggers.append(method_name)
        cls._post_change_registry = post_change_registry

class Test(EventPropertyMixin):
    path1: str = ""
    path2: str = ""

    @triggered_by("path1", "path2")
    def update_log_paths(self):
        self.log_paths = self.path1 + self.path2
And here it is working:
In [2]: t = Test()
In [3]: t.path1 = "/tmp"
In [4]: t.path2 = "/inner"
In [5]: t.log_paths
Out[5]: '/tmp/inner'
So, this is complicated code, but it's the kind of code that usually lives inside a framework or a base utility library - with these ~50 lines, you can have Python do the work for you and call the updating methods itself, so their names won't matter at all! :-)
(OK, this code is way overkill for the question asked - but I was in the mood to produce something like this before sleeping tonight - disclaimer: I have not tested the inheritance-related corner cases covered here.)

Module organization, inheritance, and @classmethods

I'm trying to write a class that works kind of like the builtins and some of the other "grown-up" Python stuff I've seen. My Pythonic education is a little spotty, classes-wise, and I'm worried I've got it all mixed up.
I'd like to create a class that serves as a kind of repository, containing a dictionary of unprocessed files (and their names), and a dictionary of processed files (and their names). I'd like to implement some other (sub?)classes that handle things like opening and processing the files. The file handling classes should be able to update the dictionaries in the main class. I'd also like to be able to directly call the various submodules without having to separately instantiate everything, e.g.:
import Pythia
p = Pythia()
p.FileManager.addFile("/path/to/some/file")
or even
Pythia.FileManager.addFile("/path/to/some/file")
I've been looking around at stuff about @classmethod and super and such, but I can't say I entirely understand it. I'm also beginning to suspect that I might have the whole chain of inheritance backwards--that what I think of as my main class should actually be the child class of the handling and processing classes. I'm also wondering whether this would all work better as a package, but that's a separate, very intimidating issue.
Here's my code so far:
#!/usr/bin/python

import re
import os

class Pythia(object):
    def __init__(self):
        self.raw_files = {}
        self.parsed_files = {}
        self.FileManger = FileManager()

    def listf(self,fname,f):
        if fname in self.raw_files.keys():
            _isRaw = "raw"
        elif fname in self.parsed_files.keys():
            _isRaw = "parsed"
        else:
            return "Error: invalid file"
        print "{} ({}):{}...".format(fname,_isRaw,f[:100])

    def listRaw(self,n=None):
        max = n or len(self.raw_files.items())
        for item in self.raw_files.items()[:max]:
            listf(item[0],item[1])

    def listParsed(self,n=None):
        max = n or len(self.parsed_files.items())
        for item in self.parsed_files.items()[:max]:
            listf(item[0],item[1])

class FileManager(Pythia):
    def __init__(self):
        pass

    def addFile(self,f,name=None,recurse=True,*args):
        if name:
            fname = name
        else:
            fname = ".".join(os.path.basename(f).split(".")[:-1])
        if os.path.exists(f):
            if not os.path.isdir(f):
                with open(f) as fil:
                    Pythia.raw_files[fname] = fil.read()
            else:
                print "{} seems to be a directory.".format(f)
                if recurse == False:
                    return "Stopping..."
                elif recurse == True:
                    print "Recursively navingating directory {}".format(f)
                    addFiles(dir,*args)
                else:
                    recurse = raw_input("Recursively navigate through directory {}? (Y/n)".format(f))
                    if recurse[0].lower() == "n":
                        return "Stopping..."
                    else:
                        addFiles(dir,*args)
        else:
            print "Error: file or directory not found at {}".format(f)

    def addFiles(self,directory=None,*args):
        if directory:
            self._recursivelyOpen(directory)
        def argHandler(arg):
            if isinstance(arg,str):
                self._recursivelyOpen(arg)
            elif isinstance(arg,tuple):
                self.addFile(arg[0],arg[1])
            else:
                print "Warning: {} is not a valid argument...skipping..."
                pass
        for arg in args:
            if not isinstance(arg,(str,dict)):
                if len(arg) > 2:
                    for subArg in arg:
                        argHandler(subArg)
                else:
                    argHandler(arg)
            elif isinstance(arg,dict):
                for item in arg.items():
                    argHandler(item)
            else:
                argHandler(arg)

    def _recursivelyOpen(self,f):
        if os.path.isdir(f):
            l = [os.path.join(f,x) for x in os.listdir(f) if x[0] != "."]
            for x in l:
                _recursivelyOpen(x)
        else:
            addFile(f)
First off: follow PEP8's guidelines. Module names, variable names, and function names should be lowercase_with_underscores; only class names should be CamelCase. Following your code is a little difficult otherwise. :)
You're muddying up OO concepts here: you have a parent class that contains an instance of a subclass.
Does a FileManager do mostly what a Pythia does, with some modifications or extensions? Given that the two only work together, I'd guess not.
I'm not quite sure what you ultimately want this to look like, but I don't think you need inheritance at all. FileManager can be its own class, self.file_manager on a Pythia instance can be an instance of FileManager, and then Pythia can delegate to it if necessary. That's not far from how you're using this code already.
Build small, independent pieces, then worry about how to plug them into each other.
Also, some bugs and style nits:
You call _recursivelyOpen(x) but forgot the self. prefix.
Single space after commas.
Watch out for max as a variable name: it's also the name of a builtin function.
Avoid type-checking (isinstance) if you can help it. It's extra-hard to follow your code when it does a dozen different things depending on argument types. Have very clear argument types, and create helper functions that accept different arguments if necessary (see the sketch after this list).
You have Pythia.raw_files[fname] inside FileManager, but Pythia is a class, and it doesn't have a raw_files attribute anyway.
You check if recurse is True, then False, then... something else. When is it something else? Also, you should use is instead of == for testing against the builtin singletons like this.
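On the isinstance point, a hypothetical sketch of what explicit entry points could look like instead of one addFiles that dispatches on argument type:

class FileManager(object):
    def add_file(self, path, name=None):
        ...  # open a single file, as addFile does above

    def add_files(self, paths):
        for path in paths:
            self.add_file(path)

    def add_named_files(self, mapping):
        for name, path in mapping.items():  # mapping is {name: path}
            self.add_file(path, name=name)

Each method accepts exactly one argument shape, so no type-checking is needed.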
There is a lot here, and you would probably do best to educate yourself some more.
For your intended usage:
import Pythia
p = Pythia()
p.file_manager.addFile("/path/to/some/file")
A class structure like this would work:
class FileManager(object):
    def __init__(self, parent):
        self.parent = parent

    def addFile(self, file):
        # Your code
        self.parent.raw_files[file] = file

    def addFiles(self, files):
        # Your code
        for file in files:
            self.parent.raw_files[file] = file

class Pythia(object):
    def __init__(self):
        self.file_manager = FileManager(self)
However, there are a lot of options. You should write some client code first to work out what you want, then implement your classes/objects to match it. I don't tend to ever use inheritance in Python; it is not really required, due to Python's duck typing.
Also, if you want a method to be callable without instantiating the class, use staticmethod, not classmethod. For example:
class FileManager(object):
    @staticmethod
    def addFiles(files):
        pass
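A staticmethod can then be called on the class directly, or on an instance; for example:

FileManager.addFiles(["/path/to/some/file"])    # no instance needed
FileManager().addFiles(["/path/to/some/file"])  # also works on an instance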

Python - how do I force the use of a factory method to instantiate an object?

I have a set of related classes that all inherit from one base class. I would like to use a factory method to instantiate objects for these classes. I want to do this because then I can store the objects in a dictionary keyed by the class name before returning the object to the caller. Then if there is a request for an object of a particular class, I can check to see whether one already exists in my dictionary. If not, I'll instantiate it and add it to the dictionary. If so, then I'll return the existing object from the dictionary. This will essentially turn all the classes in my module into singletons.
I want to do this because the base class that they all inherit from does some automatic wrapping of the functions in the subclasses, and I don't want the functions to get wrapped more than once, which is what happens currently if two objects of the same class are created.
The only way I can think of doing this is to check the stacktrace in the __init__() method of the base class, which will always be called, and to throw an exception if the stacktrace does not show that the request to make the object is coming from the factory function.
Is this a good idea?
Edit: Here is the source code for my base class. I've been told that I need to figure out metaclasses to accomplish this more elegantly, but this is what I have for now. All Page objects use the same Selenium Webdriver instance, which is in the driver module imported at the top. This driver is very expensive to initialize -- it is initialized the first time a LoginPage is created. After it is initialized the initialize() method will return the existing driver instead of creating a new one. The idea is that the user must begin by creating a LoginPage. There will eventually be dozens of Page classes defined and they will be used by unit testing code to verify that the behavior of a website is correct.
from driver import get_driver, urlpath, initialize
from settings import urlpaths

class DriverPageMismatchException(Exception):
    pass

class URLVerifyingPage(object):
    # we add logic in __init__() to check the expected urlpath for the page
    # against the urlpath that the driver is showing - we only want the page's
    # methods to be invokable if the driver is actually at the appropriate page.
    # If the driver shows a different urlpath than the page is supposed to
    # have, the method should throw a DriverPageMismatchException
    def __init__(self):
        self.driver = get_driver()
        self._adjust_methods(self.__class__)

    def _adjust_methods(self, cls):
        for attr, val in cls.__dict__.iteritems():
            if callable(val) and not attr.startswith("_"):
                print "adjusting:"+str(attr)+" - "+str(val)
                setattr(
                    cls,
                    attr,
                    self._add_wrapper_to_confirm_page_matches_driver(val)
                )
        for base in cls.__bases__:
            if base.__name__ == 'URLVerifyingPage': break
            self._adjust_methods(base)

    def _add_wrapper_to_confirm_page_matches_driver(self, page_method):
        def _wrapper(self, *args, **kwargs):
            if urlpath() != urlpaths[self.__class__.__name__]:
                raise DriverPageMismatchException(
                    "path is '"+urlpath()+
                    "' but '"+urlpaths[self.__class__.__name__]+"' expected "+
                    "for "+self.__class__.__name__
                )
            return page_method(self, *args, **kwargs)
        return _wrapper

class LoginPage(URLVerifyingPage):
    def __init__(self, username=username, password=password, baseurl="http://example.com/"):
        self.username = username
        self.password = password
        self.driver = initialize(baseurl)
        super(LoginPage, self).__init__()

    def login(self):
        driver.find_element_by_id("username").clear()
        driver.find_element_by_id("username").send_keys(self.username)
        driver.find_element_by_id("password").clear()
        driver.find_element_by_id("password").send_keys(self.password)
        driver.find_element_by_id("login_button").click()
        return HomePage()

class HomePage(URLVerifyingPage):
    def some_method(self):
        ...
        return SomePage()

    def many_more_methods(self):
        ...
        return ManyMorePages()
It's no big deal if a page gets instantiated a handful of times -- the methods will just get wrapped a handful of times and a handful of unnecessary checks will take place, but everything will still work. But it would be bad if a page was instantiated dozens or hundreds or tens of thousands of times. I could just put a flag in the class definition for each page and check to see if the methods have already been wrapped, but I like the idea of keeping the class definitions pure and clean and shoving all the hocus-pocus into a deep corner of my system where no one can see it and it just works.
In Python, it's almost never worth trying to "force" anything. Whatever you come up with, someone can get around it by monkeypatching your class, copying and editing the source, fooling around with bytecode, etc.
So, just write your factory, and document that as the right way to get an instance of your class, and expect anyone who writes code using your classes to understand TOOWTDI, and not violate it unless she really knows what she's doing and is willing to figure out and deal with the consequences.
If you're just trying to prevent accidents, rather than intentional "misuse", that's a different story. In fact, it's just standard design-by-contract: check the invariant. Of course at this point, SillyBaseClass is already screwed up, and it's too late to repair it, and all you can do is assert, raise, log, or whatever else is appropriate. But that's what you want: it's a logic error in the application, and the only thing to do is get the programmer to fix it, so assert is probably exactly what you want.
So:
class SillyBaseClass:
    singletons = {}

class Foo(SillyBaseClass):
    def __init__(self):
        assert self.__class__ not in SillyBaseClass.singletons

def get_foo():
    if Foo not in SillyBaseClass.singletons:
        SillyBaseClass.singletons[Foo] = Foo()
    return SillyBaseClass.singletons[Foo]
If you really do want to stop things from getting this far, you can check the invariant earlier, in the __new__ method, but unless "SillyBaseClass got screwed up" is equivalent to "launch the nukes", why bother?
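If you do want that earlier check, here is a minimal sketch with the assert moved into __new__, so a second instance is never even allocated (Python 3 syntax):

class SillyBaseClass:
    singletons = {}

    def __new__(cls, *args, **kwargs):
        # fail fast if someone bypasses the factory
        assert cls not in SillyBaseClass.singletons, (
            "%s instances must come from the factory" % cls.__name__)
        return super().__new__(cls)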
It sounds like you want to provide a __new__ implementation, something like:
class MySingletonBase(object):
    instance_cache = {}

    def __new__(cls, arg1, arg2):
        if cls in MySingletonBase.instance_cache:
            return MySingletonBase.instance_cache[cls]
        self = super(MySingletonBase, cls).__new__(cls)
        MySingletonBase.instance_cache[cls] = self
        return self
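A quick demonstration of the caching, with a hypothetical subclass PageA. One caveat: Python still runs __init__ on every call, even when __new__ returns the cached instance, so you would also guard __init__ (or use a metaclass) if re-initialization is a problem:

class PageA(MySingletonBase):
    def __init__(self, arg1, arg2):
        self.args = (arg1, arg2)

a = PageA('x', 'y')
b = PageA('x', 'y')
assert a is b  # the second call returned the cached instance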
Rather than adding complex code to catch mistakes at runtime, I'd first try to use convention to guide users of your module to do the right thing on their own.
Give your classes "private" names (prefixed by an underscore), give them names that suggest they shouldn't be instantiated (eg _Internal...) and make your factory function "public".
That is, something like this:
class _InternalSubClassOne(_BaseClass):
    ...

class _InternalSubClassTwo(_BaseClass):
    ...

# An example factory function.
def new_object(arg):
    return _InternalSubClassOne() if arg == 'one' else _InternalSubClassTwo()
I'd also add docstrings or comments to each class, like "Don't instantiate this class by hand, use the factory method new_object."
You can also just nest classes in factory method, as described here:
https://python-3-patterns-idioms-test.readthedocs.io/en/latest/Factory.html#preventing-direct-creation
Working example from mentioned source:
# Factory/shapefact1/NestedShapeFactory.py
import random

class Shape(object):
    types = []

def factory(type):
    class Circle(Shape):
        def draw(self): print("Circle.draw")
        def erase(self): print("Circle.erase")

    class Square(Shape):
        def draw(self): print("Square.draw")
        def erase(self): print("Square.erase")

    if type == "Circle": return Circle()
    if type == "Square": return Square()
    assert 0, "Bad shape creation: " + type

def shapeNameGen(n):
    for i in range(n):
        yield factory(random.choice(["Circle", "Square"]))

# Circle() # Not defined
for shape in shapeNameGen(7):
    shape.draw()
    shape.erase()
I'm not a fan of this solution; I just wanted to add it as one more option.

How to access call parameters and arguments with __getattr__

I have the following code, where most of it may look awkward, confusing and/or circumstantial, but it is there to demonstrate the parts of the much larger code base where I have a problem. Please read carefully.
# The following part is just to demonstrate the behavior AND CANNOT BE CHANGED UNDER ANY CIRCUMSTANCES
# Just define something so you can access something like derived.obj.foo(x)
class Basic(object):
    def foo(self, x=10):
        return x*x

class Derived(object):
    def info(self, x):
        return "Info of Derived: "+str(x)

    def set(self, obj):
        self.obj = obj

# The following piece of code might be changed, but I would rather not
class DeviceProxy(object):
    def __init__(self):
        # just to set up something that somewhat behaves as the real code in question
        self.proxy = Derived()
        self.proxy.set(Basic())

    # crucial part: I want any attributes forwarded to the proxy object here,
    # without knowing beforehand what the names will be
    def __getattr__(self, attr):
        return getattr(self.proxy, attr)

# ======================================
# This code is the only part I want to change to get things to work

# Original __getattr__ function
original = DeviceProxy.__getattr__

# wrapper for the __getattr__ function to log/print out any attribute/parameter/argument/...
def mygetattr(device, key):
    attr = original(device, key)
    if callable(attr):
        def wrapper(*args, **kw):
            print('%r called with %r and %r' % (attr, args, kw))
            return attr(*args, **kw)
        return wrapper
    else:
        print "not callable: ", attr
        return attr

DeviceProxy.__getattr__ = mygetattr

# make an instance of the DeviceProxy class and call the double-dotted function
dev = DeviceProxy()
print dev.info(1)
print dev.obj.foo(3)
What I want is to catch all method calls to DeviceProxy to be able to print all arguments/parameters and so on. In the given example, this works great when calling info(1), all of the information is printed.
But when I call the double-dotted function dev.obj.foo(3), I only get the message that this is not a callable.
How can I modify the above code so I also get my information in the second case? Only the code below the === can be modified.
You have just a __getattr__ on dev and you want, from within this __getattr__, to have access to foo when you do dev.obj.foo. This isn't possible. The attribute accesses are not a "dotted function" that is accessed as a whole. The sequence of attribute accesses (the dots) is evaluated one at a time, left to right. At the time that you access dev.obj, there is no way to know that you will later access foo. The method dev.__getattr__ only knows what attributes you are accessing on dev, not what attributes of that result you may later access.
The only way to achieve what you want would be to also include some wrapping behavior in obj. You say you can't modify the Basic/Derived classes, so you can't do it that way. You could, in theory, have DeviceProxy.__getattr__ not return the actual value of the accessed attribute, but instead wrap that object in another proxy and return the proxy. However, that can get tricky and make your code harder to understand and debug, since you can wind up with tons of objects wrapped in thin proxies.
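For illustration, a minimal sketch of that wrapping-proxy idea (AttributeLogger is a hypothetical name; only the code below the === line changes):

class AttributeLogger(object):
    # Wraps an object so attribute accesses are intercepted: callables are
    # logged, everything else is wrapped in another AttributeLogger so that
    # chained accesses like dev.obj.foo(3) are caught too.
    def __init__(self, target, path):
        self._target = target
        self._path = path

    def __getattr__(self, name):
        attr = getattr(self._target, name)
        full = self._path + '.' + name
        if callable(attr):
            def wrapper(*args, **kw):
                print('%s called with %r and %r' % (full, args, kw))
                return attr(*args, **kw)
            return wrapper
        return AttributeLogger(attr, full)

dev = AttributeLogger(DeviceProxy(), 'dev')
print(dev.info(1))     # logged as before
print(dev.obj.foo(3))  # dev.obj is wrapped, so foo is logged too

This inherits the drawback described above: every non-callable attribute access allocates another thin proxy, and objects returned from method calls are not wrapped.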

Dynamic function docstring

I'd like to write a python function that has a dynamically created docstring. In essence, for a function func() I want func.__doc__ to be a descriptor that calls a custom __get__ function to create the docstring on request. Then help(func) should return the dynamically generated docstring.
The context here is to write a python package wrapping a large number of command line tools in an existing analysis package. Each tool becomes a similarly named module function (created via function factory and inserted into the module namespace), with the function documentation and interface arguments dynamically generated via the analysis package.
You can't do what you're looking to do, in the way you want to do it.
From your description it seems like you could do something like this:
def make_tool(tool):
    # A closure factory: each generated function keeps its own binding of
    # tool (a bare inner def in the loop below would close over the loop
    # variable and end up seeing only the last tool).
    def __tool(*args):
        validate_args(tool, args)
        return execute_tool(tool, args)
    __tool.__name__ = tool.name
    __tool.__doc__ = compile_docstring(tool)
    return __tool

for tool in find_tools():
    setattr(module, tool.name, make_tool(tool))
i.e. create the documentation string dynamically up-front when you create the function.
Is there a reason why the docstring has to be dynamic from one call to __doc__ to the next?
Assuming there is, you'll have to wrap your function up in a class, using __call__ to trigger the action.
But even then you've got a problem. When help() is called to find the docstring, it is called on the class, not the instance, so this kind of thing:
class ToolWrapper(object):
    def __init__(self, tool):
        self.tool = tool
        self.__name__ = tool.name

    def _get_doc(self):
        return compile_docstring(self.tool)
    __doc__ = property(_get_doc)

    def __call__(self, *args):
        validate_args(self.tool, args)
        return execute_tool(self.tool, args)
won't work, because when help() looks up the docstring it reads __doc__ from the class, not from the instance, so the property is never triggered. You can make the __doc__ property work by putting it on a metaclass, rather than on the class itself:
for tool in find_tools():
    # Build a custom metaclass to provide __doc__.
    # (_tool pins the current loop value; closing over `tool` directly would
    # leave every class seeing the last tool once the loop has finished.)
    class _ToolMetaclass(type):
        _tool = tool
        def _get_doc(self):
            return create_docstring(self._tool)
        __doc__ = property(_get_doc)

    # Build a callable class to wrap the tool.
    class _ToolWrapper(object):
        __metaclass__ = _ToolMetaclass
        _tool = tool

        def _get_doc(self):
            return create_docstring(self._tool)
        __doc__ = property(_get_doc)

        def __call__(self, *args):
            validate_args(self._tool, args)
            execute_tool(self._tool, args)

    # Add the tool to the module.
    setattr(module, tool.name, _ToolWrapper())
Now you can do
help(my_tool_name)
and get the custom docstring, or
my_tool_name.__doc__
for the same thing. The __doc__ property in the _ToolWrapper class is needed to trap the latter case.
(Python 3 solution)
You could make use of Python's duck typing to implement a dynamic string:
import time

def fn():
    pass

class mydoc(str):
    def expandtabs(self, *args, **kwargs):
        return "this is a dynamic string created on {}".format(time.asctime()).expandtabs(*args, **kwargs)

fn.__doc__ = mydoc()
help(fn)
Caveats:
This assumes that the help function is calling .expandtabs to get the text from the __doc__ object, which works in Python 3.7. A more robust solution would implement the other str methods in order to have our duck continue acting like a duck even if the help method changes. Also note that our mydoc class derives from str, this is because help, somewhat atypically, enforces strong typing by asserting isinstance(thing.__doc__, str). Like all solutions this is a bit hacky, but whether this is a problem largely depends on the full project requirements.
Instead of messing with the function, why not write your own help function?
my_global = 42

def help(func):
    print('%s: my_global=%s' % (func.func_name, my_global))

def foo():
    pass

help(foo)
