I'm switching from Ruby to Python for a project. I appreciate the fact that Python has first-class functions and closures, so this question ought to be easy. I just haven't figured out what is idiomatically correct for Python:
In Ruby, I could write:
def with_quietude(level, &block)
  begin
    saved_gval = gval
    gval = level
    yield
  ensure
    gval = saved_gval
  end
end
and call it like this:
with_quietude(3) {
  razz_the_jazz
  begin_the_beguine
}
(Note: I'm not asking about Python try/finally handling nor about saving and restoring variables -- I just wanted a non-trivial example of wrapping a block inside some other code.)
update
Or, since some of the answers are getting hung up on the global assignments in the previous example when I'm really asking about closures, what if the call was as follows? (Note that this doesn't change the definition of with_quietude):
def frumble(x)
  with_quietude {
    razz_the_jazz(x)
    begin_the_beguine(2 * x)
  }
end
How would you implement something similar in Python (and not get laughed at by the Python experts)?
Looking more into Ruby's yield, it looks like you want something like contextlib.contextmanager:
from contextlib import contextmanager

def razz_the_jazz():
    print(gval)

@contextmanager
def quietude(level):
    global gval
    saved_gval = gval
    gval = level
    try:
        yield
    finally:
        gval = saved_gval

gval = 1

with quietude(3):
    razz_the_jazz()

razz_the_jazz()
This script outputs:
3
1
indicating that our context manager did reset gval in the global namespace. Of course, I wouldn't use this context manager, since it only works in the global namespace; it won't work with locals in a function, for example.
This is basically a limitation of how assignment creates a new reference to an object: you can never mutate an object by assigning to it directly. (The only way to mutate an object is to assign to one of its attributes or via __setitem__, e.g. a[x] = whatever.)
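That same observation suggests a workaround: keep the state on an object and have the context manager swap an attribute rather than rebind a name. A minimal sketch (the Config class and attribute names here are hypothetical, just for illustration):

```python
from contextlib import contextmanager

class Config:
    """Hypothetical settings holder; stands in for any mutable object."""
    def __init__(self):
        self.level = 1

cfg = Config()

@contextmanager
def quietude(obj, level):
    # mutate an attribute instead of rebinding a (global or local) name
    saved = obj.level
    obj.level = level
    try:
        yield
    finally:
        obj.level = saved

with quietude(cfg, 3):
    inside = cfg.level   # 3 while the block runs
after = cfg.level        # restored to 1 afterwards
```

Because the manager mutates an object it was handed, it works no matter what scope the object lives in.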
A word of warning if you are coming from Ruby: all Python 'def's are basically the same as Ruby 'proc's; Python doesn't have an equivalent of Ruby's 'def'.
You can get behaviour very similar to what you are asking for by defining your own functions in the scope of the calling function:
def quietude(level, my_func):
    global gval
    saved_gval = gval
    gval = level
    try:
        my_func()
    finally:
        gval = saved_gval

def my_func():
    razz_the_jazz()
    begin_the_beguine()

quietude(3, my_func)
---- EDIT: Request for further information: -----
Python's lambdas are limited to a single expression, so they are not as flexible as Ruby's blocks.
To pass functions with arguments around, I would recommend partial functions; see the code below:
import functools

def run(a, b):
    print(a)
    print(b)

def runner(value, func):
    func(value)

def start():
    s = functools.partial(run, 'first')
    runner('second', s)
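For clarity, here is the same pattern reworked with return values instead of prints (illustrative, not the original code), side by side with the plain-closure equivalent, which works just as well for capturing arguments:

```python
import functools

def run(a, b):
    return (a, b)

def runner(value, func):
    return func(value)

# functools.partial pre-binds 'first' as argument a:
s = functools.partial(run, 'first')
result_partial = runner('second', s)   # ('first', 'second')

# an ordinary nested def captures x, much like a Ruby block captures locals:
def make_runner(x):
    def call(value):
        return run(x, value)
    return call

result_closure = runner('second', make_runner('first'))  # ('first', 'second')
```

Both produce the same call; partial is handy for pre-binding existing functions, a nested def for anything longer than one expression.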
---- Edit 2 More information ----
Python functions are only called when '()' is appended to them. This is different from Ruby, where the '()' is optional. The code below runs 'b_method' in start() and 'a_method' in run():
def a_method():
    print('a_method is running')
    return 'a'

def b_method():
    print('b_method is running')
    return 'b'

def run(a, b):
    print(a())
    print(b)

def start():
    run(a_method, b_method())
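To make the evaluation order visible, the same example can be instrumented with a list that records each call (a sketch, not the original code):

```python
calls = []

def a_method():
    calls.append('a_method')
    return 'a'

def b_method():
    calls.append('b_method')
    return 'b'

def run(a, b):
    return a() + b    # a is called here; b is already a plain value

def start():
    # b_method() runs first, during argument evaluation at the call site
    return run(a_method, b_method())

result = start()
# calls == ['b_method', 'a_method']; result == 'ab'
```

The bare name a_method is passed as an object; b_method() is evaluated before run ever starts.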
I like the answer that mgilson gives, so it gets the check. This is just a small expansion on the capabilities of @contextmanager for someone coming from the Ruby world.
gval = 0

from contextlib import contextmanager

@contextmanager
def quietude(level):
    global gval
    saved_gval = gval
    gval = level
    try:
        yield
    finally:
        gval = saved_gval

def bebop(x):
    with quietude(3):
        print("first", x * 2, "(gval =", gval, ")")
        print("second", x * 4, "(gval =", gval, ")")

bebop(100)
bebop("xxxx")
This prints out:
first 200 (gval = 3 )
second 400 (gval = 3 )
first xxxxxxxx (gval = 3 )
second xxxxxxxxxxxxxxxx (gval = 3 )
This shows that everything within the with block has access to the lexically captured variables and behaves more or less the way someone coming from the Ruby world would expect.
Good stuff.
Related
I have a python function that runs other functions.
def main():
    func1(a, b)
    func2(*args, **kwargs)
    func3()
Now I want to apply exception handling to the main function. If there is an exception in any of the functions inside main, the function should not stop but should continue executing the next line. In other words, I want the behaviour below:
def main():
    try:
        func1()
    except:
        pass
    try:
        func2()
    except:
        pass
    try:
        func3()
    except:
        pass
So is there any way to loop through each statement inside the main function and apply exception handling to each line?
for line in main_function:
    try:
        line
    except:
        pass
I just don't want to write exceptions inside the main function.
Note: the question "How to prevent try catching every possible line in python?" comes close to solving this problem, but I can't figure out how to loop through the lines in a function.
If you have any other way to do this other than looping, that would help too.
What you want is an option that exists in some languages, where an exception handler can choose to resume at the next statement. This used to lead to poor code and, AFAIK, has never been implemented in Python. The rationale is that you must say explicitly how you want to process an exception and where you want to continue.
In your case, assuming that you have a function called main that only calls other functions and is generated automatically, my advice would be to post-process it between its generation and its execution. The inspect module even allows doing it at run time:
import inspect
import re

def filter_exc(func):
    src = inspect.getsource(func)
    lines = src.split('\n')
    out = lines[0] + "\n"
    for line in lines[1:]:
        m = re.match(r'(\s*)(.*)', line)
        lead, text = m.groups()
        # ignore comments and empty lines
        if not (text.startswith('#') or text.strip() == ""):
            out += lead + "try:\n"
            out += lead + "    " + text + "\n"
            out += lead + "except:\n" + lead + "    pass\n"
    return out
You can then use the evil exec (the input is only the source of your function):
exec(filter_exc(main)) # replaces main with the filtered version
main() # will ignore exceptions
After your comment, you want a more robust solution that can cope with multi-line statements and comments. In that case, you need to actually parse the source and modify the parsed tree. The ast module to the rescue:
import ast

class ExceptFilter(ast.NodeTransformer):
    def visit_Expr(self, node):
        self.generic_visit(node)
        if isinstance(node.value, ast.Call):  # filter all function calls
            # print(node.value.func.id)
            # use a dummy try block
            n = ast.parse("""try:
    f()
except:
    pass""").body[0]
            n.body[0] = node  # make the try call the real function
            return n          # and use it
        return node  # keep other nodes unchanged
With that example code:
def func1():
    print('foo')

def func2():
    raise Exception("Test")

def func3(x):
    print("f3", x)

def main():
    func1()
    # this is a comment
    a = 1
    if a == 1:  # this is a multi line statement
        func2()
    func3("bar")
we get:
>>> node = ast.parse(inspect.getsource(main))
>>> exec(compile(ast.fix_missing_locations(ExceptFilter().visit(node)), "", mode="exec"))
>>> main()
foo
f3 bar
In that case, the unparsed node(*) reads as:
def main():
    try:
        func1()
    except:
        pass
    a = 1
    if (a == 1):
        try:
            func2()
        except:
            pass
    try:
        func3('bar')
    except:
        pass
Alternatively it is also possible to wrap every top level expression:
>>> node = ast.parse(inspect.getsource(main))
>>> for i in range(len(node.body[0].body)):  # process top level expressions
...     n = ast.parse("""try:
...     f()
... except:
...     pass""").body[0]
...     n.body[0] = node.body[0].body[i]
...     node.body[0].body[i] = n
...
>>> exec(compile(ast.fix_missing_locations(node), "", mode="exec"))
>>> main()
foo
f3 bar
Here the unparsed tree writes:
def main():
    try:
        func1()
    except:
        pass
    try:
        a = 1
    except:
        pass
    try:
        if (a == 1):
            func2()
    except:
        pass
    try:
        func3('bar')
    except:
        pass
BEWARE: there is an interesting corner case if you use exec(compile(...)) in a function. By default, exec(code) is exec(code, globals(), locals()). At top level, the local and global dictionaries are the same dictionary, so the top-level function is correctly replaced. But if you do the same in a function, you only create a local function with the same name, which can only be called from within that function (it goes out of scope when the function returns) as locals()['main'](). So you must either alter the global function by passing the global dictionary explicitly:
exec(compile(ExceptFilter().visit(node), "", mode="exec"), globals(), globals())
or return the modified function without altering the original one:
def myfun():
    # print(main)
    node = ast.parse(inspect.getsource(main))
    exec(compile(ast.fix_missing_locations(ExceptFilter().visit(node)), "", mode="exec"))
    # print(main, locals()['main'], globals()['main'])
    return locals()['main']
>>> m2 = myfun()
>>> m2()
foo
f3 bar
(*) Python 3.6 contains an unparser in Tools/parser, but a simpler-to-use version exists on PyPI.
You could use a callback, like this:
def main(list_of_funcs):
    for func in list_of_funcs:
        try:
            func()
        except Exception as e:
            print(e)

if __name__ == "__main__":
    main([func1, func2, func3])
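On Python 3 the try/except/pass pattern in the loop can also be spelled with contextlib.suppress. A small self-contained sketch (the three dummy functions are illustrative):

```python
from contextlib import suppress

def func1():
    return 'ok'

def func2():
    raise ValueError('boom')

def func3():
    return 'done'

results = []
for func in (func1, func2, func3):
    # suppress(Exception) swallows any exception, like a bare try/except: pass
    with suppress(Exception):
        results.append(func())

# results == ['ok', 'done']; func2's failure is silently skipped
```

As with the bare except, suppressing Exception wholesale hides real bugs, so it is usually better to suppress only specific exception types.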
Python: How to get the caller's method name in the called method?
Assume I have 2 methods:
def method1(self):
    ...
    a = A.method2()

def method2(self):
    ...
If I don't want to do any change for method1, how to get the name of the caller (in this example, the name is method1) in method2?
inspect.getframeinfo and other related functions in inspect can help:
>>> import inspect
>>> def f1(): f2()
...
>>> def f2():
...     curframe = inspect.currentframe()
...     calframe = inspect.getouterframes(curframe, 2)
...     print('caller name:', calframe[1][3])
...
>>> f1()
caller name: f1
This kind of introspection is intended to help debugging and development; it's not advisable to rely on it for production functionality.
Shorter version:
import inspect

def f1(): f2()

def f2():
    print('caller name:', inspect.stack()[1][3])

f1()
(with thanks to @Alex and Stefaan Lippen)
This seems to work just fine:
import sys
print(sys._getframe().f_back.f_code.co_name)
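Note that this one-liner only makes sense inside a function: at module top level, f_back is None. A self-contained sketch of the same idea, using the question's method names:

```python
import sys

def method2():
    # the frame one level up (f_back) belongs to whoever called method2
    return sys._getframe().f_back.f_code.co_name

def method1():
    return method2()

caller = method1()   # 'method1'
```

sys._getframe is a CPython implementation detail (hence the underscore), but it is much faster than building the full inspect.stack().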
I would use inspect.currentframe().f_back.f_code.co_name. Its use hasn't been covered by the prior answers, which mainly fall into three types:
Some prior answers use inspect.stack but it's known to be too slow.
Some prior answers use sys._getframe which is an internal private function given its leading underscore, and so its use is implicitly discouraged.
One prior answer uses inspect.getouterframes(inspect.currentframe(), 2)[1][3] but it's entirely unclear what [1][3] is accessing.
import inspect
from types import FrameType
from typing import cast

def demo_the_caller_name() -> str:
    """Return the calling function's name."""
    # Ref: https://stackoverflow.com/a/57712700/
    return cast(FrameType, cast(FrameType, inspect.currentframe()).f_back).f_code.co_name

if __name__ == '__main__':
    def _test_caller_name() -> None:
        assert demo_the_caller_name() == '_test_caller_name'
    _test_caller_name()
Note that cast(FrameType, frame) is used to satisfy mypy.
Acknowledgement: a comment by 1313e on another answer.
I've come up with a slightly longer version that tries to build a full method name including module and class.
https://gist.github.com/2151727 (rev 9cccbf)
# Public Domain, i.e. feel free to copy/paste
# Considered a hack in Python 2
import inspect

def caller_name(skip=2):
    """Get the name of a caller in the format module.class.method.

    `skip` specifies how many levels of stack to skip while getting the
    caller's name. skip=1 means "who calls me", skip=2 "who calls my caller"
    etc.

    An empty string is returned if the skipped levels exceed the stack height.
    """
    stack = inspect.stack()
    start = 0 + skip
    if len(stack) < start + 1:
        return ''
    parentframe = stack[start][0]

    name = []
    module = inspect.getmodule(parentframe)
    # `module` can be None when the frame is executed directly in a console
    # TODO(techtonik): consider using __main__
    if module:
        name.append(module.__name__)

    # detect classname
    if 'self' in parentframe.f_locals:
        # I don't know any way to detect call from the object method
        # XXX: there seems to be no way to detect static method call - it will
        # be just a function call
        name.append(parentframe.f_locals['self'].__class__.__name__)

    codename = parentframe.f_code.co_name
    if codename != '<module>':  # top level usually
        name.append(codename)   # function or a method

    ## Avoid circular refs and frame leaks
    # https://docs.python.org/2.7/library/inspect.html#the-interpreter-stack
    del parentframe, stack

    return ".".join(name)
Bit of an amalgamation of the stuff above. But here's my crack at it.
def print_caller_name(stack_size=3):
    def wrapper(fn):
        def inner(*args, **kwargs):
            import inspect
            stack = inspect.stack()
            modules = [(index, inspect.getmodule(stack[index][0]))
                       for index in reversed(range(1, stack_size))]
            module_name_lengths = [len(module.__name__)
                                   for _, module in modules]
            s = '{index:>5} : {module:^%i} : {name}' % (max(module_name_lengths) + 4)
            callers = ['',
                       s.format(index='level', module='module', name='name'),
                       '-' * 50]
            for index, module in modules:
                callers.append(s.format(index=index,
                                        module=module.__name__,
                                        name=stack[index][3]))
            callers.append(s.format(index=0,
                                    module=fn.__module__,
                                    name=fn.__name__))
            callers.append('')
            print('\n'.join(callers))
            fn(*args, **kwargs)
        return inner
    return wrapper
Use:
@print_caller_name(4)
def foo():
    return 'foobar'

def bar():
    return foo()

def baz():
    return bar()

def fizz():
    return baz()

fizz()
output is
level : module : name
--------------------------------------------------
3 : None : fizz
2 : None : baz
1 : None : bar
0 : __main__ : foo
You can use decorators and do not have to use the stack trace.
If you want to decorate a method inside a class:
import functools

# outside your class
def printOuterFunctionName(func):
    @functools.wraps(func)
    def wrapper(self):
        print(f'Function Name is: {func.__name__}')
        func(self)
    return wrapper

class A:
    @printOuterFunctionName
    def foo(self):
        pass
You may remove functools and self if it is a plain function rather than a method.
An alternative to sys._getframe() is used by Python's logging library to find caller information. Here's the idea:
1. Raise an exception.
2. Immediately catch it in an except clause.
3. Use sys.exc_info to get the traceback frame (tb_frame).
4. From tb_frame, get the caller's frame using f_back.
5. From the caller's frame, get the code object that was being executed there. In our sample code it would be method1 (not method2).
6. From the code object, get the object's name -- this is the caller method's name in our sample.
Here's the sample code to solve example in the question:
import sys

def method1():
    method2()

def method2():
    try:
        raise Exception
    except Exception:
        frame = sys.exc_info()[2].tb_frame.f_back
        print("method2 invoked by:", frame.f_code.co_name)

# Invoking method1
method1()
Output:
method2 invoked by: method1
The frame has all sorts of details, including the line number, file name, argument counts, argument types and so on. The solution works across classes and modules too.
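For illustration, a small sketch (with hypothetical helper names) that pulls a few of those details out of the caller's frame:

```python
import sys

def report_call_site():
    # one frame up is whoever called this function
    frame = sys._getframe(1)
    return frame.f_code.co_name, frame.f_code.co_filename, frame.f_lineno

def some_caller():
    return report_call_site()

name, filename, lineno = some_caller()
# name is 'some_caller'; filename and lineno point at the call site
```

The same attributes are available on the frame obtained via the raise/catch trick above.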
Code:
#!/usr/bin/env python
import inspect

called = lambda: inspect.stack()[1][3]

def caller1():
    print("inside:", called())

def caller2():
    print("inside:", called())

if __name__ == '__main__':
    caller1()
    caller2()
Output:
shahid@shahid-VirtualBox:~/Documents$ python test_func.py
inside: caller1
inside: caller2
shahid@shahid-VirtualBox:~/Documents$
I found a way if you're going across classes and want the class the method belongs to AND the method. It takes a bit of extraction work but it makes its point. This works in Python 2.7.13.
import inspect, os

class ClassOne:
    def method1(self):
        classtwoObj.method2()

class ClassTwo:
    def method2(self):
        curframe = inspect.currentframe()
        calframe = inspect.getouterframes(curframe, 4)
        print '\nI was called from', calframe[1][3], \
            'in', calframe[1][4][0][6: -2]

# create objects to access class methods
classoneObj = ClassOne()
classtwoObj = ClassTwo()

# start the program
os.system('cls')
classoneObj.method1()
Hey mate, I once made three methods without plugins for my app, and maybe that can help you. It worked for me, so it might work for you too.
def method_1(a=""):
    if a == "method_2":
        print("method_2")
    if a == "method_3":
        print("method_3")

def method_2():
    method_1("method_2")

def method_3():
    method_1("method_3")

method_2()
Given Python code,
def foo():
    def bar():
        pass
    bar()

foo()
bar()
I'd like to get a list of functions which, if I execute the Python code, will result in a NameError.
In this example, the list should be ['bar'], because it is not defined in the global scope and will cause an error when executed.
Executing the code in a loop, each time defining new functions, is not performant enough.
Currently I walk the AST tree, record all function definitions and all function calls, and subtract one from the other. This gives the wrong result in this case.
It seems you are trying to write a static analyzer for Python. Maybe you are working in C, but I think it will be faster for me to show the idea in Python:
list_token = ...  # you have tokenized the program now

class Env:
    def __init__(self):
        self.env = set()
        self.super_env = None  # this will point to the enclosing Env instance

    def __contains__(self, key):
        if key in self.env:
            return True
        if self.super_env is not None:
            return key in self.super_env
        return False

    def add(self, key):
        self.env.add(key)

topenv = Env()
currentenv = topenv
ret = []  # return list

for tok in list_token:
    if is_colon(tok):  # is ':', i.e. enter a new scope
        newenv = Env()
        newenv.super_env = currentenv
        currentenv = newenv
    elif is_exiting(tok):  # exit a scope
        currentenv = currentenv.super_env
    elif refing_symbol(tok):
        if tok not in currentenv:
            ret.append(tok)
    elif new_symbol(tok):
        currentenv.add(tok)
    else:
        pass
If you think this code is not enough, please point out the reason. And if you want to capture everything by static analysis, I think that's not quite possible.
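For the specific example in the question, the idea can be made concrete with the standard ast module. This sketch only handles calls made at module level (where inner defs like bar are not visible) and is nowhere near a full scope analysis:

```python
import ast
import builtins

code = """
def foo():
    def bar():
        pass
    bar()

foo()
bar()
"""

tree = ast.parse(code)

# names bound at module scope (here: just the function and class definitions)
module_names = set()
for node in tree.body:
    if isinstance(node, (ast.FunctionDef, ast.ClassDef)):
        module_names.add(node.name)

undefined = []
for node in tree.body:
    # only inspect calls made at module level
    if isinstance(node, ast.Expr) and isinstance(node.value, ast.Call):
        func = node.value.func
        if isinstance(func, ast.Name):
            name = func.id
            if name not in module_names and not hasattr(builtins, name):
                undefined.append(name)

# undefined == ['bar']
```

Extending this to nested scopes is exactly the Env-chain bookkeeping sketched above.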
Please be kind with me, I'm a Python beginner :-)
Now, I see that the 'best practice' for writing Python programs would be to wrap the main code inside a 'main' function, and do the if "__main__" == __name__: test to invoke the 'main' function.
This of course results in the necessity of using a series of global statements in the 'main' function to access the global variables.
I wonder if it is more proper (or 'Pythonic', if you will) to gather the global variables into a custom class, say _v, and refer to the variables using _v. prefix instead?
Also, as a corollary question, would that have any negative impact to, let's say, performance or exception handling?
EDIT : The following is the general structure of the program:
paramset = {
    0: { ...dict of params... },
    1: { ...dict of params... },
    2: { ...dict of params... },
}
selector = 0
reset_requested = False
selector_change = False

def sighup_handler(signal, frame):
    global reset_requested, selector
    logger.info('Caught SIGHUP, resetting to set #0')
    reset_requested = True
    selector = 0

def sigusr1_handler(signal, frame):
    global selector, selector_change
    new_selector = (selector + 1) % len(paramset)
    logger.info('Caught SIGUSR1, changing parameters to set #{0}'.format(new_selector))
    selector = new_selector
    selector_change = True

signal.signal(signal.SIGHUP, sighup_handler)
signal.signal(signal.SIGUSR1, sigusr1_handler)

def main():
    global reset_requested
    global selector
    global selector_change
    keep_running = True
    while keep_running:
        logger.info('Processing selector {0}'.format(selector))
        for stage in [process_stage1, process_stage2, process_stage3]:
            err, result = stage(paramset[selector])
            if err is not None:
                logger.critical('Stage failure! Err {0} details: {1}'.format(err, result))
                raise SystemError('Err {0} details: {1}'.format(err, result))
            else:
                logger.info('Stage success: {0}'.format(result))
            if reset_requested:
                stage_cleanup()
                reset_requested = False
            else:
                inter_stage_pause()
            if selector_change:
                selector_change = False
                break
        selector = (selector + 1) % len(paramset)
There are enough pieces missing from the example code that reaching any firm conclusions is difficult.
Event-driven approach
The usual approach for this type of problem would be to make it entirely event-driven. As it stands, the code is largely polling. For example, sighup_handler sets reset_requested = True and the while loop in main processes that request. An event-driven approach would handle the reset, meaning the call to stage_cleanup, directly:
def sighup_handler(signal, frame):
    logger.info('Caught SIGHUP, resetting to set #0')
    stage_cleanup()
Class with shared variables
In the sample code, the purpose of all those process_stages and cycling through the stages is not clear. Can it all be put in an event-driven context? I don't know. If it can't and it does require shared variables, then your suggestion of a class would be a natural choice. The beginnings of such a class might look like:
class Main(object):
    def __init__(self):
        self.selector = 0
        self.selector_change = False
        signal.signal(signal.SIGHUP, self.sighup_handler)
        signal.signal(signal.SIGUSR1, self.sigusr1_handler)

    def sighup_handler(self, signal, frame):
        logger.info('Caught SIGHUP, resetting to set #0')
        stage_cleanup()
        self.selector = 0

    def sigusr1_handler(self, signal, frame):
        new_selector = (self.selector + 1) % len(paramset)
        logger.info('Caught SIGUSR1, changing parameters to set #{0}'.format(new_selector))
        self.selector = new_selector
        self.selector_change = True

    def mainloop(self):
        # Do here whatever polling is actually required.
        pass

if __name__ == '__main__':
    main = Main()
    main.mainloop()
Again, because the true purpose of the polling loop is not clear to me, I didn't try to reproduce its functionality in the class above.
Generally, it is best practice to avoid global variables, and instead just pass variables to classes/methods that need them through method calls. Example: if you are making a calculator, make an addition method that takes 2 ints and returns an int. This is in contrast to making 2 input ints and 1 output int as global variables, and having the add method work on those.
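The calculator contrast can be sketched in a few lines (hypothetical functions, purely for illustration):

```python
# Parameter passing (preferred): inputs and outputs are explicit,
# and add() is trivially testable in isolation.
def add(a, b):
    return a + b

result = add(2, 3)   # 5

# The global-variable version couples the function to module state,
# which makes it harder to test and to reason about.
x = 2
y = 3
total = 0

def add_globals():
    global total
    total = x + y

add_globals()        # total is now 5, but only as a side effect
```

Both compute the same sum; the first version can be called with any inputs, while the second can only ever operate on the module's globals.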
Why can two functions with the same id value have differing attributes like __doc__ or __name__?
Here's a toy example:
some_dict = {}
for i in range(2):
    def fun(self, *args):
        print(i)
    fun.__doc__ = "I am function {}".format(i)
    fun.__name__ = "function_{}".format(i)
    some_dict["function_{}".format(i)] = fun

my_type = type("my_type", (object,), some_dict)
m = my_type()

print(id(m.function_0))
print(id(m.function_1))
print(m.function_0.__doc__)
print(m.function_1.__doc__)
print(m.function_0.__name__)
print(m.function_1.__name__)
print(m.function_0())
print(m.function_1())
Which prints:
57386560
57386560
I am function 0
I am function 1
function_0
function_1
1 # <--- Why is it bound to the most recent value of that variable?
1
I've tried mixing in a call to copy.deepcopy (not sure whether the recursive copy is needed for functions or whether it is overkill), but this doesn't change anything.
You are comparing methods, and method objects are created anew each time you access one on an instance or class (via the descriptor protocol).
Once you have tested their id(), you discard the methods again (there are no references to them), so Python is free to reuse the id when you create another method. You want to test the actual functions here, using m.function_0.__func__ and m.function_1.__func__:
>>> id(m.function_0.__func__)
4321897240
>>> id(m.function_1.__func__)
4321906032
Method objects inherit the __doc__ and __name__ attributes from the function that they wrap. The actual underlying functions are really still different objects.
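A quick way to see both halves of that, with an illustrative class rather than the question's code:

```python
class C:
    def m(self):
        return 'm'

obj = C()
a = obj.m          # each attribute access builds a fresh bound-method object
b = obj.m
same_function = a.__func__ is b.__func__   # True: both wrap the same function
distinct_methods = a is not b              # True: the method wrappers differ
```

With both wrappers kept alive at once, their ids differ; in the question, each method was discarded before the next id() call, so the id got recycled.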
As for the two functions printing 1: both functions close over i, and the value of i is looked up when you call the method, not when the function was created. See Local variables in Python nested functions.
The easiest work-around is to add another scope with a factory function:
some_dict = {}
for i in range(2):
    def create_fun(i):
        def fun(self, *args):
            print(i)
        fun.__doc__ = "I am function {}".format(i)
        fun.__name__ = "function_{}".format(i)
        return fun
    some_dict["function_{}".format(i)] = create_fun(i)
Per your comment on ndpu's answer, here is one way you can create the functions without needing to have an optional argument:
for i in range(2):
    def funGenerator(i):
        def fun1(self, *args):
            print(i)
        return fun1
    fun = funGenerator(i)
    fun.__doc__ = "I am function {}".format(i)
    fun.__name__ = "function_{}".format(i)
    some_dict["function_{}".format(i)] = fun
@Martijn Pieters is perfectly correct. To illustrate, try this modification:
some_dict = {}
for i in range(2):
    def fun(self, *args):
        print(i)
    fun.__doc__ = "I am function {}".format(i)
    fun.__name__ = "function_{}".format(i)
    some_dict["function_{}".format(i)] = fun
    print("id", id(fun))

my_type = type("my_type", (object,), some_dict)
m = my_type()

print(id(m.function_0))
print(id(m.function_1))
print(m.function_0.__doc__)
print(m.function_1.__doc__)
print(m.function_0.__name__)
print(m.function_1.__name__)
print(m.function_0())
print(m.function_1())

c = my_type()
print(c)
print(id(c.function_0))
You can see that fun gets a different id each time, and is different from the final one. It's the method-creation logic that makes the ids come out the same, since the bound methods are built fresh from the class's stored functions on each access. Also, if you use my_type as a class, instances created with it report the same id for that method.
This code gives:
id 4299601152
id 4299601272
4299376112
4299376112
I am function 0
I am function 1
function_0
function_1
1
None
1
None
<__main__.my_type object at 0x10047c350>
4299376112
You should save the current i to make this:
1 # <--- Why is it bound to the most recent value of that variable?
1
work, for example by setting a default value for a function argument:
for i in range(2):
    def fun(self, i=i, *args):
        print(i)
    # ...
or create a closure:
for i in range(2):
    def f(i):
        def fun(self, *args):
            print(i)
        return fun
    fun = f(i)
    # ...
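Stripped of the class machinery, the difference between late binding and the default-argument trick is easy to verify (a standalone sketch, not the question's code):

```python
funcs_late = []
funcs_fixed = []
for i in range(2):
    def late():
        return i            # looks up i at call time: late binding
    funcs_late.append(late)

    def fixed(i=i):
        return i            # default evaluated now, freezing this iteration's i
    funcs_fixed.append(fixed)

late_results = [f() for f in funcs_late]    # [1, 1] - all see the final i
fixed_results = [f() for f in funcs_fixed]  # [0, 1] - each kept its own i
```

The factory-function variant shown above behaves like the default-argument version, because the inner function closes over the factory's own parameter rather than the loop variable.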