I have a number of functions that need to get called from various imported files.
The functions are formatted along the lines of this:
a.foo
b.foo2
a.bar.foo4
a.c.d.foo5
and they are passed in to my script as a raw string.
I'm looking for a clean way to run these, with arguments, and get the return values.
Right now I have a messy system of splitting the strings and then feeding them to the right getattr call, but this feels clumsy and doesn't scale. Is there a way I can just pass the object portion of getattr as a string? Or some other way of doing this?
import a, b, a.bar, a.c.d

if "." in raw_script:
    split_script = raw_script.split(".")
    if 'a' in raw_script:
        if 'a.bar' in raw_script:
            out = getattr(a.bar, split_script[-1])(args)
        if 'a.c.d' in raw_script:
            out = getattr(a.c.d, split_script[-1])(args)
        else:
            out = getattr(a, split_script[-1])(args)
    elif 'b' in raw_script:
        out = getattr(b, split_script[-1])(args)
It's hard to tell from your question, but it sounds like you have a command line tool you run as my-tool <function> [options]. You could use importlib like this, avoiding most of the getattr calls:
import importlib

def run_function(name, args):
    module, function = name.rsplit('.', 1)
    module = importlib.import_module(module)
    function = getattr(module, function)
    return function(*args)

if __name__ == '__main__':
    # Elided: retrieve function name and args from command line
    run_function(name, args)
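For instance, run_function can be exercised with standard-library dotted paths standing in for your own modules ('os.path.join' here plays the role of something like 'a.c.d.foo5'; the definition is repeated so the snippet is self-contained):

```python
import importlib

def run_function(name, args):
    # Split "pkg.module.func" into a module path and a function name.
    module_name, func_name = name.rsplit('.', 1)
    module = importlib.import_module(module_name)
    func = getattr(module, func_name)
    return func(*args)

# 'os.path.join' is a stand-in for a nested path like 'a.c.d.foo5'.
result = run_function('os.path.join', ['x', 'y'])
print(result)
```

import_module handles the nested-package case ('os.path') directly, so no chain of getattr calls over submodules is needed.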
Try this:
def lookup(path):
    obj = globals()
    for element in path.split('.'):
        try:
            obj = obj[element]
        except (KeyError, TypeError):
            obj = getattr(obj, element)
    return obj
Note that this will handle a path starting with ANY global name, not just your a and b imported modules. If there are any possible concerns with untrusted input being provided to the function, you should start with a dict containing the allowed starting points, not the entire globals dict.
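A minimal sketch of that restricted variant, assuming an explicit allow-list dict (os and json stand in here for your a and b modules):

```python
import json
import os  # os and json are stand-ins for your own imported modules

ALLOWED_ROOTS = {'os': os, 'json': json}  # explicit allow-list of starting points

def safe_lookup(path):
    parts = path.split('.')
    try:
        obj = ALLOWED_ROOTS[parts[0]]
    except KeyError:
        raise ValueError('root %r is not allowed' % parts[0])
    for element in parts[1:]:
        obj = getattr(obj, element)
    return obj

print(safe_lookup('os.path.join'))
```

Anything whose first component is not a key of ALLOWED_ROOTS is rejected before any attribute access happens.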
I have an object A which contains parserA - an argparse.ArgumentParser object
There is also object B which contains parserB - another argparse.ArgumentParser
Object A contains an instance of object B, however object B's arguments now need to be parsed by the parser in object A (since A is the one being called from the command line with the arguments, not B)
Is there a way to write in Python object A: parserA += B.parserB?
argparse was developed around objects. Other than a few constants and utility functions it is all class definitions. The documentation focuses on use rather than that class structure. But it may help to understand a bit of that.
parser = argparse.ArgumentParser(...)
creates a parser object.
arg1 = parser.add_argument(...)
creates an argparse.Action (a subclass, actually) object and adds it to several parser attributes (lists). Normally we ignore the fact that the method returns this Action object, but occasionally I find it helpful. And when I build a parser in an interactive shell, I see this Action echoed back.
args = parser.parse_args()
runs another method, and returns a namespace object (class argparse.Namespace).
The group methods and subparsers methods also create and return objects (groups, actions and/or parsers).
The ArgumentParser method takes a parents parameter, where the value is a list of parser objects.
With
parsera = argparse.ArgumentParser(parents=[parserb])
during the creation of parsera, the actions and groups in parserb are copied to parsera. That way, parsera will recognize all the arguments that parserb does. I encourage you to test it.
But there are a few qualifications. The copy is by reference. That is, parsera gets a pointer to each Action defined in parserb. Occasionally that creates problems (I won't get into that now). And one or the other has to have add_help=False. Normally a help action is added to a parser at creation. But if parserb also has a help action there will be a conflict (a duplication) that has to be resolved.
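A minimal demonstration of the parents mechanism, with add_help=False on the parent for the reason just described:

```python
import argparse

# The "parent" parser; add_help=False avoids a duplicate -h/--help action.
parserb = argparse.ArgumentParser(add_help=False)
parserb.add_argument('--boo')

# parsera copies parserb's actions at creation time.
parsera = argparse.ArgumentParser(parents=[parserb])
parsera.add_argument('--foo')

args = parsera.parse_args(['--boo', '1', '--foo', '2'])
print(args)  # Namespace(boo='1', foo='2')
```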
But parents can't be used if parsera has been created independently of parserb. There's no existing mechanism for adding Actions from parserb. It might be possible to make a new parser with both as parents:
parserc = argparse.ArgumentParser(parents=[parsera, parserb])
I could probably write a function that would add arguments from parserb to parsera, borrowing ideas from the method that implements parents. But I'd have to know how conflicts are to be resolved.
Look at argparse._ActionsContainer._add_container_actions to see how arguments (Actions) are copied from a parent to a parser. Something that may be confusing is that each Action is part of a group (user defined or one of the 2 default groups (seen in the help)) in addition to being in a parser.
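As a sketch of that idea: the same private helper that the parents mechanism uses can be called directly to copy parserb's actions into an already-created parsera. This leans on argparse internals (_add_container_actions and _defaults are CPython implementation details, not a supported API), so treat it as an assumption that may break across versions:

```python
import argparse

def absorb_parser(dest, src):
    # The same two steps ArgumentParser.__init__ performs for each parent:
    dest._add_container_actions(src)     # copy actions/groups (by reference)
    dest._defaults.update(src._defaults)

parsera = argparse.ArgumentParser()
parsera.add_argument('--foo')

parserb = argparse.ArgumentParser(add_help=False)  # avoid a -h/--help conflict
parserb.add_argument('--boo')

absorb_parser(parsera, parserb)
print(parsera.parse_args(['--foo', '1', '--boo', '2']))
```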
Another possibility is to use
[argsA, extrasA] = parserA.parse_known_args()
[argsB, extrasB] = parserB.parse_known_args() # uses the same sys.argv
# or
args = parserB.parse_args(extrasA, namespace=argsA)
With this each parser handles the arguments it knows about, and returns the rest in the extras list.
Unless the parsers are designed for this kind of integration, there will be rough edges. It may be easier to deal with those conflicts using Arnial's approach, which is to put the shared argument definitions in your own methods. Others like to put the argument parameters in some sort of database (list, dictionary, etc.) and build the parser from that. You can wrap parser creation in as many layers of boilerplate as you find convenient.
You can't use one ArgumentParser inside another, but there is a way around it: extract the code that adds arguments to a parser into its own method.
Then you will be able to use those methods to merge arguments into one parser.
It also makes it easier to group arguments (relative to their original parsers). But you must be sure that the sets of argument names do not intersect.
Example:
foo.py:
import argparse

def add_foo_params(group):
    group.add_argument('--foo', help='foo help')

if __name__ == "__main__":
    parser = argparse.ArgumentParser(prog='Foo')
    add_foo_params(parser)
boo.py
import argparse

def add_boo_params(group):
    group.add_argument('--boo', help='boo help')

if __name__ == "__main__":
    parser = argparse.ArgumentParser(prog='Boo')
    add_boo_params(parser)
fooboo.py
import argparse
from foo import add_foo_params
from boo import add_boo_params

if __name__ == "__main__":
    parser = argparse.ArgumentParser(prog='FooBoo')
    foo_group = parser.add_argument_group(title="foo params")
    boo_group = parser.add_argument_group(title="boo params")
    add_foo_params(foo_group)
    add_boo_params(boo_group)
For your use case, if you can, you could try simply sharing the same argparse object between classes via dedicated methods.
The code below is based on what your situation seems to be.
import argparse

class B(object):
    def __init__(self, parserB=argparse.ArgumentParser()):
        super(B, self).__init__()
        self.parserB = parserB

    def addArguments(self):
        self.parserB.add_argument("-tb", "--test-b", help="Test B", type=str, metavar="")
        # Add more arguments specific to B

    def parseArgs(self):
        return self.parserB.parse_args()

class A(object):
    def __init__(self, parserA=argparse.ArgumentParser(), b=B()):
        super(A, self).__init__()
        self.parserA = parserA
        self.b = b

    def addArguments(self):
        self.parserA.add_argument("-ta", "--test-a", help="Test A", type=str, metavar="")
        # Add more arguments specific to A

    def parseArgs(self):
        return self.parserA.parse_args()

    def mergeArgs(self):
        self.b.parserB = self.parserA
        self.b.addArguments()
        self.addArguments()
Code Explanation:
As stated in the question, object A and object B each contain their own parser object, and object A also contains an instance of object B.
The code simply separates the anticipated flow into separate methods so that it is possible to keep adding arguments to a single parser before attempting to parse it.
Test Individual
a = A()
a.addArguments()
print(vars(a.parseArgs()))
# CLI Command
python test.py -ta "Testing A"
# CLI Result
{'test_a': 'Testing A'}
Combined Test
aCombined = A()
aCombined.mergeArgs()
print(vars(aCombined.parseArgs()))
# CLI Command
testing -ta "Testing A" -tb "Testing B"
# CLI Result
{'test_b': 'Testing B', 'test_a': 'Testing A'}
Additional
You can also make a general method that takes variable args, and would iterate over and keep adding the args of various classes. I created class C and D for sample below with a general "parser" attribute name.
Multi Test
# Add method to Class A
def mergeMultiArgs(self, *objects):
    parser = self.parserA
    for object in objects:
        object.parser = parser
        object.addArguments()
    self.addArguments()
aCombined = A()
aCombined.mergeMultiArgs(C(), D())
print(vars(aCombined.parseArgs()))
# CLI Command
testing -ta "Testing A" -tc "Testing C" -td "Testing D"
# CLI Result
{'test_d': 'Testing D', 'test_c': 'Testing C', 'test_a': 'Testing A'}
Yes, they can be combined. Here is a function that merges two Namespace objects but throws an error if any keys collide:
from argparse import Namespace

def merge_args_safe(args1: Namespace, args2: Namespace) -> Namespace:
    """
    Merges two namespaces but throws an error if there are keys that collide.

    ref: https://stackoverflow.com/questions/56136549/how-can-i-merge-two-argparse-namespaces-in-python-2-x
    :param args1:
    :param args2:
    :return:
    """
    # vars() returns the __dict__ attribute of the given object, e.g. {field: value}.
    args = Namespace(**vars(args1), **vars(args2))
    return args
test:

def merge_args_test():
    args1 = Namespace(foo="foo", collided_key='from_args1')
    args2 = Namespace(bar="bar", collided_key='from_args2')
    args = merge_args_safe(args1, args2)
    print('-- merged args')
    print(f'{args=}')
output:
Traceback (most recent call last):
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd.py", line 1483, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/Users/brando/ultimate-utils/ultimate-utils-proj-src/uutils/__init__.py", line 1202, in <module>
merge_args_test()
File "/Users/brando/ultimate-utils/ultimate-utils-proj-src/uutils/__init__.py", line 1192, in merge_args_test
args = merge_args(args1, args2)
File "/Users/brando/ultimate-utils/ultimate-utils-proj-src/uutils/__init__.py", line 1116, in merge_args
args = Namespace(**vars(args1), **vars(args2))
TypeError: argparse.Namespace() got multiple values for keyword argument 'collided_key'
python-BaseException
you can find it in this library: https://github.com/brando90/ultimate-utils
If you want to have collisions resolved do this:
def merge_two_dicts(starting_dict: dict, updater_dict: dict) -> dict:
    """
    Starts from the base starting dict, then adds the remaining key/values from the updater,
    replacing values from the starting/base dict with those from the updater dict.

    For later: how does d = {**d1, **d2} resolve collisions?
    :param starting_dict:
    :param updater_dict:
    :return:
    """
    new_dict: dict = starting_dict.copy()  # start with keys and values of starting_dict
    new_dict.update(updater_dict)  # overwrite new_dict with keys and values of updater_dict
    return new_dict

def merge_args(args1: Namespace, args2: Namespace) -> Namespace:
    """
    ref: https://stackoverflow.com/questions/56136549/how-can-i-merge-two-argparse-namespaces-in-python-2-x
    :param args1:
    :param args2:
    :return:
    """
    # vars() returns the __dict__ attribute of the given object, e.g. {field: value}.
    merged_key_values_for_namespace: dict = merge_two_dicts(vars(args1), vars(args2))
    args = Namespace(**merged_key_values_for_namespace)
    return args
test:

def merge_args_test():
    args1 = Namespace(foo="foo", collided_key='from_args1')
    args2 = Namespace(bar="bar", collided_key='from_args2')
    args = merge_args(args1, args2)
    print('-- merged args')
    print(f'{args=}')
    assert args.collided_key == 'from_args2', 'Error in merge dict, expected the second argument to be the one used' \
                                              ' to resolve the collision'
Is it possible to check if some class exists? I have class names in a json config file.
I know that I can simply try to create an object by class name string but that is actually a bad idea because the class constructor can do some unexpected stuff while at this point in time I just want to check if my config is valid and all mentioned classes are available.
Is there any way to do it?
EDIT: Also, I do understand that you can get all the methods from some module; in my case I am not sure and don't actually care which module a class comes from. It can come from any import statement and I probably don't know where exactly from.
Using eval() leaves the door open for arbitrary code execution; for security's sake it should be avoided. Especially since you are asking for a solution to such a problem here, we can assume you are not yet fully aware of these risks.
import sys
from functools import reduce  # on Python 3, reduce is no longer a builtin

def str_to_class(name):
    return reduce(getattr, name.split("."), sys.modules[__name__])

try:
    cls = str_to_class(<json-fragment-here>)
except AttributeError:
    cls = None

if cls:
    obj = cls(...)
else:
    # fight against this
This avoids using eval and is approved by several SO users.
Solution is similar to Convert string to Python class object?.
You can parse the source to get all the class names:
from ast import ClassDef, parse
import importlib
import inspect
mod = "test"
mod = importlib.import_module(mod)
p = parse(inspect.getsource(mod))
names = [kls.name for kls in p.body if isinstance(kls, ClassDef)]
Input:
class Foo(object):
    pass

class Bar(object):
    pass
Output:
['Foo', 'Bar']
Just compare the class names from the config to the names returned.
{set of names in config}.difference(names)
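For example, with hypothetical config names:

```python
names = ['Foo', 'Bar']        # class names parsed out of the module
config_names = {'Foo', 'Baz'}  # names listed in the (hypothetical) config

# Anything left over is mentioned in the config but not defined in the module.
missing = config_names.difference(names)
print(missing)  # {'Baz'}
```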
If you want to include imported names you can parse the module it was imported from but depending on how it was imported you can still find cases that won't work:
from ast import ClassDef, parse, ImportFrom
import importlib
import inspect

mod = "test"
mod = importlib.import_module(mod)
p = parse(inspect.getsource(mod))

names = []
for node in p.body:
    if isinstance(node, ClassDef):
        names.append(node.name)
    elif isinstance(node, ImportFrom):
        names.extend(imp.name for imp in node.names)

print(names)
Input:
from test2 import Foobar, Barbar, foo
class Foo(object):
    pass

class Bar(object):
    pass
test2:
foo = 123
class Foobar(object):
    pass

class Barbar(object):
    pass
Output:
['Foobar', 'Barbar', 'Foo', 'Bar']
I tried the built-in type function, which worked for me, but there may be a more Pythonic way to test for the existence of a class:
import types

def class_exist(className):
    result = False
    try:
        result = (eval("type(" + className + ")") == types.ClassType)
    except NameError:
        pass
    return result

# this is a test class, its only purpose is pure existence:
class X:
    pass

print class_exist('X')
print class_exist('Y')
The output is
True
False
Of course, this is a basic solution which should be used only with well-known input: the eval function can be a great back-door opener. There is a more reliable (but also compact) solution by wenzul.
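For comparison, on Python 3 (where types.ClassType no longer exists) the same check can be written without eval by looking the name up in an explicit namespace dict. class_exists here is a sketch, not a standard function:

```python
def class_exists(namespace, name):
    """Check whether `name` is bound to a class in `namespace` (a dict),
    without eval and without instantiating anything."""
    return isinstance(namespace.get(name), type)

# a test class whose only purpose is pure existence:
class X:
    pass

print(class_exists(globals(), 'X'))  # True
print(class_exists(globals(), 'Y'))  # False
```

Because nothing is ever evaluated or constructed, untrusted names from a config file cannot trigger code execution here.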
I am working on a quick python script using the cmd module that will allow the user to enter text commands followed by parameters in basic url query string format. The prompts will be answered with something like
commandname foo=bar&baz=brack
Using cmd, I can't seem to find which method to override to affect the way the argument line is handed off to all the do_* methods. I want to run urlparse.parse_qs on these values, and calling it on the line argument in every do_* method seems clumsy.
The precmd method gets the whole line, before the commandname is split off and interpreted, so this will not work for my purposes. I'm also not terribly familiar with how to place a decorator inside a class like this and haven't been able to pull it off without breaking the scope.
Basically, the python docs for cmd say the following
Repeatedly issue a prompt, accept input, parse an initial prefix off
the received input, and dispatch to action methods, passing them the
remainder of the line as argument.
I want to make a method that will do additional processing to that "remainder of the line" and hand that generated dictionary off to the member functions as the line argument, rather than interpreting them in every function.
Thanks!
You could potentially override the onecmd() method, as the following quick example shows. The onecmd() method there is basically a copy of the one from the original cmd.py, but adds a call to urlparse.parse_qs() before passing the arguments to a function.
import cmd
import urlparse

class myCmd(cmd.Cmd):
    def onecmd(self, line):
        """Mostly ripped from Python's cmd.py"""
        cmd, arg, line = self.parseline(line)
        arg = urlparse.parse_qs(arg)  # <- added line
        if not line:
            return self.emptyline()
        if cmd is None:
            return self.default(line)
        self.lastcmd = line
        if cmd == '':
            return self.default(line)
        else:
            try:
                func = getattr(self, 'do_' + cmd)
            except AttributeError:
                return self.default(line)
            return func(arg)

    def do_foo(self, arg):
        print arg

my_cmd = myCmd()
my_cmd.cmdloop()
Sample output:
(Cmd) foo
{}
(Cmd) foo a b c
{}
(Cmd) foo a=b&c=d
{'a': ['b'], 'c': ['d']}
Is this what you are trying to achieve?
Here's another potential solution that uses a class decorator to modify a
cmd.Cmd subclass and basically apply a decorator function to all do_*
methods of that class:
import cmd
import urlparse
import types

# function decorator to add parse_qs to individual functions
def parse_qs_f(f):
    def f2(self, arg):
        return f(self, urlparse.parse_qs(arg))
    return f2

# class decorator to iterate over all attributes of a class and apply
# the parse_qs_f decorator to all do_* methods
def parse_qs(cls):
    for attr_name in dir(cls):
        attr = getattr(cls, attr_name)
        if attr_name.startswith('do_') and type(attr) == types.MethodType:
            setattr(cls, attr_name, parse_qs_f(attr))
    return cls

@parse_qs
class myCmd(cmd.Cmd):
    def do_foo(self, args):
        print args

my_cmd = myCmd()
my_cmd.cmdloop()
I quickly cobbled this together and it appears to work as intended, however, I'm
open to suggestions on any pitfalls or how this solution could be improved.
I've seen plenty of examples of people extracting all of the classes from a module, usually something like:
# foo.py
class Foo:
    pass

# test.py
import inspect
import foo

for name, obj in inspect.getmembers(foo):
    if inspect.isclass(obj):
        print obj
Awesome.
But I can't find out how to get all of the classes from the current module.
# foo.py
import inspect

class Foo:
    pass

def print_classes():
    for name, obj in inspect.getmembers(???):  # what do I do here?
        if inspect.isclass(obj):
            print obj

# test.py
import foo
foo.print_classes()
This is probably something really obvious, but I haven't been able to find anything. Can anyone help me out?
Try this:
import sys
current_module = sys.modules[__name__]
In your context:
import sys, inspect

def print_classes():
    for name, obj in inspect.getmembers(sys.modules[__name__]):
        if inspect.isclass(obj):
            print(obj)
And even better:
clsmembers = inspect.getmembers(sys.modules[__name__], inspect.isclass)
Because inspect.getmembers() takes a predicate.
I don't know if there's a 'proper' way to do it, but your snippet is on the right track: just add import foo to foo.py, do inspect.getmembers(foo), and it should work fine.
What about
g = globals().copy()
for name, obj in g.iteritems():
?
I was able to get all I needed from the dir built in plus getattr.
# Works on pretty much everything, but be mindful that
# you get lists of strings back
print dir(myproject)
print dir(myproject.mymodule)
print dir(myproject.mymodule.myfile)
print dir(myproject.mymodule.myfile.myclass)
# But, the string names can be resolved with getattr, (as seen below)
Though, it does come out looking like a hairball:
def list_supported_platforms():
    """
    List supported platforms (to match sys.platform)

    :Returns:
        list str: platform names
    """
    return list(itertools.chain(
        *list(
            # Get the class's constant
            getattr(
                # Get the module's first class, which we wrote
                getattr(
                    # Get the module
                    getattr(platforms, item),
                    dir(
                        getattr(platforms, item)
                    )[0]
                ),
                'SYS_PLATFORMS'
            )
            # For each include in platforms/__init__.py
            for item in dir(platforms)
            # Ignore magic, ourselves (index.py) and a base class.
            if not item.startswith('__') and item not in ['index', 'base']
        )
    ))
import pyclbr
print(pyclbr.readmodule(__name__).keys())
Note that the stdlib's Python class browser module uses static source analysis, so it only works for modules that are backed by a real .py file.
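For example, run against a stdlib module that is backed by a .py source file:

```python
import pyclbr

# ast ships as a real ast.py file, so pyclbr's static analysis can read it.
classes = pyclbr.readmodule('ast')
print('NodeVisitor' in classes)  # True
```

No import of ast itself happens here; pyclbr only parses the source, which is exactly why it is safe for validating untrusted module contents.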
If you want to have all the classes, that belong to the current module, you could use this :
import sys, inspect

def print_classes():
    is_class_member = lambda member: inspect.isclass(member) and member.__module__ == __name__
    clsmembers = inspect.getmembers(sys.modules[__name__], is_class_member)
If you use Nadia's answer and you were importing other classes in your module, those classes will be included too.
That's why member.__module__ == __name__ is added to the predicate used in is_class_member: this check ensures that the class really belongs to the module.
A predicate is a function (callable) that returns a boolean value.
This is the line that I use to get all of the classes that have been defined in the current module (ie not imported). It's a little long according to PEP-8 but you can change it as you see fit.
import sys
import inspect
classes = [name for name, obj in inspect.getmembers(sys.modules[__name__], inspect.isclass)
if obj.__module__ is __name__]
This gives you a list of the class names. If you want the class objects themselves just keep obj instead.
classes = [obj for name, obj in inspect.getmembers(sys.modules[__name__], inspect.isclass)
if obj.__module__ is __name__]
This has been more useful in my experience.
Another solution which works in Python 2 and 3:
# foo.py
import sys

class Foo(object):
    pass

def print_classes():
    current_module = sys.modules[__name__]
    for key in dir(current_module):
        if isinstance(getattr(current_module, key), type):
            print(key)
# test.py
import foo
foo.print_classes()
I think that you can do something like this.
class custom(object):
    __custom__ = True

class Alpha(custom):
    something = 3

def GetClasses():
    return [x for x in globals() if hasattr(globals()[str(x)], '__custom__')]

print(GetClasses())

if you only need your own classes
I frequently find myself writing command line utilities wherein the first argument is meant to refer to one of many different classes. For example ./something.py feature command --arguments, where Feature is a class and command is a method on that class. Here's a base class that makes this easy.
The assumption is that this base class resides in a directory alongside all of its subclasses. You can then call ArgBaseClass(foo = bar).load_subclasses() which will return a dictionary. For example, if the directory looks like this:
arg_base_class.py
feature.py
Assuming feature.py implements class Feature(ArgBaseClass), then the above invocation of load_subclasses will return { 'feature' : <Feature object> }. The same kwargs (foo = bar) will be passed into the Feature class.
#!/usr/bin/env python3
import os, pkgutil, importlib, inspect

class ArgBaseClass():
    # Assign all keyword arguments as properties on self, and keep the kwargs for later.
    def __init__(self, **kwargs):
        self._kwargs = kwargs
        for (k, v) in kwargs.items():
            setattr(self, k, v)
        ms = inspect.getmembers(self, predicate=inspect.ismethod)
        self.methods = dict([(n, m) for (n, m) in ms if not n.startswith('_')])

    # Add the names of the methods to a parser object.
    def _parse_arguments(self, parser):
        parser.add_argument('method', choices=list(self.methods))
        return parser

    # Instantiate one of each of the subclasses of this class.
    def load_subclasses(self):
        module_dir = os.path.dirname(__file__)
        module_name = os.path.basename(os.path.normpath(module_dir))
        parent_class = self.__class__
        modules = {}
        # Load all the modules in the package:
        for (module_loader, name, ispkg) in pkgutil.iter_modules([module_dir]):
            modules[name] = importlib.import_module('.' + name, module_name)
        # Instantiate one of each class, passing the keyword arguments.
        ret = {}
        for cls in parent_class.__subclasses__():
            path = cls.__module__.split('.')
            ret[path[-1]] = cls(**self._kwargs)
        return ret
import Foo
dir(Foo)
import collections
dir(collections)
The following can be placed at the top of the file:
def get_classes():
    import inspect, sys
    return dict(inspect.getmembers(
        sys.modules[__name__],
        lambda member: inspect.isclass(member) and member.__module__ == __name__
    ))
Note, this can be placed at the top of the module because we've wrapped the logic in a function definition. If you want the dictionary to exist as a top-level object you will need to place the definition at the bottom of the file to ensure all classes are included.
Go to the Python interpreter, type help('module_name'), then press Enter.
e.g. help('os').
Here, I've pasted one part of the output below:
class statvfs_result(__builtin__.object)
| statvfs_result: Result from statvfs or fstatvfs.
|
| This object may be accessed either as a tuple of
| (bsize, frsize, blocks, bfree, bavail, files, ffree, favail, flag, namemax),
| or via the attributes f_bsize, f_frsize, f_blocks, f_bfree, and so on.
|
| See os.statvfs for more information.
|
| Methods defined here:
|
| __add__(...)
| x.__add__(y) <==> x+y
|
| __contains__(...)
| x.__contains__(y) <==> y in x
I want to wrap the default open method with a wrapper that should also catch exceptions. Here's a test example that works:
import sys

truemethod = open

def fn(*args, **kwargs):
    try:
        return truemethod(*args, **kwargs)
    except (IOError, OSError):
        sys.exit('Can\'t open \'{0}\'. Error #{1[0]}: {1[1]}'.format(args[0], sys.exc_info()[1].args))

open = fn
I want to make a generic method of it:
def wrap(method, exceptions=(OSError, IOError)):
    truemethod = method
    def fn(*args, **kwargs):
        try:
            return truemethod(*args, **kwargs)
        except exceptions:
            sys.exit('Can\'t open \'{0}\'. Error #{1[0]}: {1[1]}'.format(args[0], sys.exc_info()[1].args))
    method = fn
But it doesn't work:
>>> wrap(open)
>>> open
<built-in function open>
Apparently, method is a copy of the parameter, not a reference as I expected. Any pythonic workaround?
The problem with your code is that inside wrap, your method = fn statement is simply changing the local value of method, it isn't changing the larger value of open. You'll have to assign to those names yourself:
def wrap(method, exceptions=(OSError, IOError)):
    def fn(*args, **kwargs):
        try:
            return method(*args, **kwargs)
        except exceptions:
            sys.exit('Can\'t open \'{0}\'. Error #{1[0]}: {1[1]}'.format(args[0], sys.exc_info()[1].args))
    return fn

open = wrap(open)
foo = wrap(foo)
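A quick check of the returned wrapper against a path that should not exist (the filename and the simplified error message are hypothetical):

```python
import sys

def wrap(method, exceptions=(OSError, IOError)):
    def fn(*args, **kwargs):
        try:
            return method(*args, **kwargs)
        except exceptions:
            # sys.exit raises SystemExit carrying this message
            sys.exit("Can't open {0!r}".format(args[0]))
    return fn

safe_open = wrap(open)
try:
    safe_open('/no/such/file')
except SystemExit as e:
    print(e)
```

Note that sys.exit only raises SystemExit; in a script it terminates the process, but it can be caught like any other exception, as above.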
Try adding global open. In the general case, you might want to look at this section of the manual:
This module provides direct access to all ‘built-in’ identifiers of Python; for example, __builtin__.open is the full name for the built-in function open(). See chapter Built-in Objects.
This module is not normally accessed explicitly by most applications, but can be useful in modules that provide objects with the same name as a built-in value, but in which the built-in of that name is also needed. For example, in a module that wants to implement an open() function that wraps the built-in open(), this module can be used directly:
import __builtin__

def open(path):
    f = __builtin__.open(path, 'r')
    return UpperCaser(f)

class UpperCaser:
    '''Wrapper around a file that converts output to upper-case.'''

    def __init__(self, f):
        self._f = f

    def read(self, count=-1):
        return self._f.read(count).upper()

# ...
CPython implementation detail: Most modules have the name __builtins__ (note the 's') made available as part of their globals. The value of __builtins__ is normally either this module or the value of this modules’s __dict__ attribute. Since this is an implementation detail, it may not be used by alternate implementations of Python.
you can just add return fn at the end of your wrap function and then do:
>>> open = wrap(open)
>>> open('bhla')
Traceback (most recent call last):
File "<pyshell#24>", line 1, in <module>
open('bhla')
File "<pyshell#18>", line 7, in fn
sys.exit('Can\'t open \'{0}\'. Error #{1[0]}: {1[1]}'.format(args[0], sys.exc_info()[1].args))
SystemExit: Can't open 'bhla'. Error #2: No such file or directory