I have a method that I have broken into smaller nested functions to break up the code base:
def foo(x, y):
    def do_this(x, y):
        pass
    def do_that(x, y):
        pass
    do_this(x, y)
    do_that(x, y)
    return
Is there a way to run one of the nested functions by itself? For example:
foo.do_this(x, y)
EDIT:
I am trying to set up caching on a web server I have built using pyramid_breaker:
def getThis(request):
    def invalidate_data(search_term):
        region_invalidate(getData, 'long_term', search_term)

    @cached_region('long_term')
    def getData(search_term):
        return response

    search_term = request.matchdict['searchterm']
    return getData(search_term)
This is my understanding, which may not be accurate:
Now the reason I have this structure is that the namespace used by the decorator to create the cache key is generated from the function and the arguments. You therefore can't just put the decorator on getThis, because the request variable is unique-ish and the cache would be useless. So I created the inner function, which has repeatable arguments (search_term).
However, to invalidate the cache (i.e. refresh it), the invalidation function needs to be in a scope that knows about the getData function, so it also needs to be nested. Therefore I need to call the nested function. You wonderful people have made it clear that's not possible, so could someone explain how I might do it with a different structure?
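For reference, here is a minimal sketch of one possible restructuring: the cached function and its invalidator live at module level, so both can be reached from any view. The cached_region and region_invalidate names are assumed to behave like the ones in the snippet above, and do_expensive_lookup is a hypothetical stand-in for the real work:
# Sketch only: cached_region/region_invalidate are assumed to come from the
# caching library used above; do_expensive_lookup is a hypothetical helper.
@cached_region('long_term')
def get_data(search_term):
    # expensive lookup keyed only on search_term
    return do_expensive_lookup(search_term)

def invalidate_data(search_term):
    # drop the cache entry that get_data(search_term) created
    region_invalidate(get_data, 'long_term', search_term)

def getThis(request):
    search_term = request.matchdict['searchterm']
    return get_data(search_term)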
I assume do_this and do_that are actually dependent on some argument of foo, since otherwise you could just move them out of foo and call them directly.
I suggest reworking the whole thing as a class. Something like this:
class Foo(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def do_this(self):
        pass

    def do_that(self):
        pass

    def __call__(self):
        self.do_this()
        self.do_that()

foo = Foo(x, y)
foo()
foo.do_this()
These previous answers, telling you that you cannot do this, are of course wrong.
This is Python; you can do almost anything you want with some code-object magic.
We can pull the code object for do_this out of foo's co_consts and use it to create a new function.
See https://docs.python.org/2/library/new.html for more info on new and https://docs.python.org/2/library/inspect.html for more info on how to get at internal code.
Warning: it's not because you CAN do this that you SHOULD do this.
Rethinking the way your functions are structured is the way to go, but if you want a quick and dirty hack that will probably break in the future, here you go:
import new
myfoo = new.function(foo.func_code.co_consts[1], {})
myfoo(x, y)  # hooray, we have a new function that does what I want
UPDATE: in Python 3 you can use the types module with foo.__code__:
import types
myfoo = types.FunctionType(foo.__code__.co_consts[1], {})
myfoo()  # behaves like do_this()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: do_this() missing 2 required positional arguments: 'x' and 'y'
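As the traceback shows, the extracted function keeps do_this's own signature, so under these assumptions it simply needs both positional arguments:
myfoo(1, 2)  # works: do_this's body is just `pass`, so any two arguments do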
There is: you have to attach them as attributes of the function object. But this will only work after the first call of foo.
def foo(x, y):
    def do_this(x, y):
        pass
    def do_that(x, y):
        pass
    do_this(x, y)
    do_that(x, y)
    foo.do_this = do_this
    foo.do_that = do_that
    return
>>> foo.do_this(1, 2)
AttributeError: 'function' object has no attribute 'do_this'
>>> foo(1, 2)
>>> foo.do_this(1, 2)
>>>
No (apart from poking around in closure objects, which is complete overkill here). If you need that, use a class.
class foo(object):
    def do_this(self, x, y):
        ...
    def do_that(self, x, y):
        ...
    def do_other_stuff(self, x, y):
        # or __call__, possibly
        ...
Or just put those functions in the outer scope, since you're passing everything as arguments anyway:
def foo(x, y):
    do_this(x, y)
    do_that(x, y)

def do_this(x, y):
    ...

def do_that(x, y):
    ...
No, there is not. Since you may access variables in an outer scope from within a nested function:
def foo(x, y):
    def do_this(z):
        print(x, y, z)
    # ...
there is no way to call do_this while providing a binding for x and y.
If you must call do_this from elsewhere, simply make it a top level function at the same level as foo.
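A minimal sketch of that suggestion, with x and y passed explicitly instead of being captured from foo's scope:
def do_this(x, y, z):
    print(x, y, z)

def foo(x, y):
    do_this(x, y, 1)  # foo passes its own arguments along

do_this(3, 4, 5)      # callers can now use do_this directly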
You can try this way:
def a(x, y):
    name = 'Michael'
    a.name = name
    a.z = z = x * y
    # a.z = z

def b():
    def give_me_price(f, g):
        price = f * g
        return price

    def two(j, k):
        surname = 'Jordan'  # without a return this gives None
        # two = two('arg1', 'arg2')
        # b.blabla = two

    one = give_me_price(5, 10)
    b.halabala = one
    b.give_me_price = give_me_price  # expose the inner function as an attribute
    print(a.name)  # ;)

x = 20
y = 30

a(x, y)  # IMPORTANT! you must run the function first
print(a.z)
print(a.name * 5)
print('-' * 12)

b()  # IMPORTANT! you must run the function first
print('price is: ' + str(b.give_me_price(5, 25)))
# print(b.blabla)
This is how I did it.
CODE
def getMessage(a="", b="", c=""):
    def getErrorMessage(aa, bb):
        return "Error Message with/without params: {}{}".format(aa, bb)

    def getSuccessMessage(bb, cc):
        return "Success Message with/without params: {}{}".format(bb, cc)

    def getWarningMessage(aa, cc):
        return "Warning Message with/without params: {}{}".format(aa, cc)

    return {
        "getErrorMessage": getErrorMessage(a, b),
        "getSuccessMessage": getSuccessMessage(b, c),
        "getWarningMessage": getWarningMessage(a, c),
    }

a = "hello"
b = " World"
c = "!"

print(getMessage(a, b)["getErrorMessage"])
print(getMessage(b=b, c=c)["getSuccessMessage"])
print(getMessage(a=a, c=c)["getWarningMessage"])
print(getMessage(c=c)["getWarningMessage"])
OUTPUT
Error Message with/without params: hello World
Success Message with/without params: World!
Warning Message with/without params: hello!
Warning Message with/without params: !
Related
Python: How to get the caller's method name in the called method?
Assume I have 2 methods:
def method1(self):
    ...
    a = A.method2()

def method2(self):
    ...
If I don't want to do any change for method1, how to get the name of the caller (in this example, the name is method1) in method2?
inspect.getframeinfo and other related functions in inspect can help:
>>> import inspect
>>> def f1(): f2()
...
>>> def f2():
...     curframe = inspect.currentframe()
...     calframe = inspect.getouterframes(curframe, 2)
...     print('caller name:', calframe[1][3])
...
>>> f1()
caller name: f1
This introspection is intended to help debugging and development; it's not advisable to rely on it for production functionality.
Shorter version:
import inspect

def f1(): f2()

def f2():
    print 'caller name:', inspect.stack()[1][3]

f1()
(with thanks to @Alex, and Stefaan Lippen)
This seems to work just fine:
import sys
print sys._getframe().f_back.f_code.co_name
I would use inspect.currentframe().f_back.f_code.co_name. Its use hasn't been covered in any of the prior answers, which are mainly of one of three types:
- Some prior answers use inspect.stack, but it's known to be too slow.
- Some prior answers use sys._getframe, which is an internal private function given its leading underscore, so its use is implicitly discouraged.
- One prior answer uses inspect.getouterframes(inspect.currentframe(), 2)[1][3], but it's entirely unclear what [1][3] is accessing.
import inspect
from types import FrameType
from typing import cast

def demo_the_caller_name() -> str:
    """Return the calling function's name."""
    # Ref: https://stackoverflow.com/a/57712700/
    return cast(FrameType, cast(FrameType, inspect.currentframe()).f_back).f_code.co_name

if __name__ == '__main__':
    def _test_caller_name() -> None:
        assert demo_the_caller_name() == '_test_caller_name'
    _test_caller_name()
Note that cast(FrameType, frame) is used to satisfy mypy.
Acknowledgement: comment by 1313e for an answer.
I've come up with a slightly longer version that tries to build a full method name including module and class.
https://gist.github.com/2151727 (rev 9cccbf)
# Public Domain, i.e. feel free to copy/paste
# Considered a hack in Python 2

import inspect

def caller_name(skip=2):
    """Get a name of a caller in the format module.class.method

    `skip` specifies how many levels of stack to skip while getting caller
    name. skip=1 means "who calls me", skip=2 "who calls my caller" etc.

    An empty string is returned if skipped levels exceed stack height
    """
    stack = inspect.stack()
    start = 0 + skip
    if len(stack) < start + 1:
        return ''
    parentframe = stack[start][0]

    name = []
    module = inspect.getmodule(parentframe)
    # `modname` can be None when frame is executed directly in console
    # TODO(techtonik): consider using __main__
    if module:
        name.append(module.__name__)
    # detect classname
    if 'self' in parentframe.f_locals:
        # I don't know any way to detect call from the object method
        # XXX: there seems to be no way to detect static method call - it will
        #      be just a function call
        name.append(parentframe.f_locals['self'].__class__.__name__)
    codename = parentframe.f_code.co_name
    if codename != '<module>':  # top level usually
        name.append(codename)   # function or a method

    ## Avoid circular refs and frame leaks
    # https://docs.python.org/2.7/library/inspect.html#the-interpreter-stack
    del parentframe, stack

    return ".".join(name)
Bit of an amalgamation of the stuff above. But here's my crack at it.
def print_caller_name(stack_size=3):
    def wrapper(fn):
        def inner(*args, **kwargs):
            import inspect
            stack = inspect.stack()

            modules = [(index, inspect.getmodule(stack[index][0]))
                       for index in reversed(range(1, stack_size))]
            module_name_lengths = [len(module.__name__)
                                   for _, module in modules]

            s = '{index:>5} : {module:^%i} : {name}' % (max(module_name_lengths) + 4)
            callers = ['',
                       s.format(index='level', module='module', name='name'),
                       '-' * 50]
            for index, module in modules:
                callers.append(s.format(index=index,
                                        module=module.__name__,
                                        name=stack[index][3]))

            callers.append(s.format(index=0,
                                    module=fn.__module__,
                                    name=fn.__name__))
            callers.append('')
            print('\n'.join(callers))

            fn(*args, **kwargs)
        return inner
    return wrapper
Use:
@print_caller_name(4)
def foo():
    return 'foobar'

def bar():
    return foo()

def baz():
    return bar()

def fizz():
    return baz()

fizz()
output is
level : module : name
--------------------------------------------------
3 : None : fizz
2 : None : baz
1 : None : bar
0 : __main__ : foo
You can use decorators, and do not have to use the stack trace.
If you want to decorate a method inside a class:
import functools

# outside your class
def printOuterFunctionName(func):
    @functools.wraps(func)
    def wrapper(self):
        print(f'Function Name is: {func.__name__}')
        func(self)
    return wrapper

class A:
    @printOuterFunctionName
    def foo(self):
        pass
You may remove functools and self if it is procedural (a plain function rather than a method), as sketched below.
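For example, a minimal sketch of the same decorator applied to a plain function, with functools and self removed:
def printOuterFunctionName(func):
    def wrapper():
        print(f'Function Name is: {func.__name__}')
        func()
    return wrapper

@printOuterFunctionName
def foo():
    pass

foo()  # prints: Function Name is: foo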
An alternative to sys._getframe() is used by Python's logging library to find caller information. Here's the idea:
1. Raise an Exception.
2. Immediately catch it in an except clause.
3. Use sys.exc_info to get the traceback frame (tb_frame).
4. From tb_frame, get the last caller's frame using f_back.
5. From the last caller's frame, get the code object that was being executed in that frame. In our sample it would be method1 (not method2) being executed.
6. From the code object obtained, get the object's name -- this is the caller method's name in our sample.
Here's the sample code to solve the example in the question:
import sys

def method1():
    method2()

def method2():
    try:
        raise Exception
    except Exception:
        frame = sys.exc_info()[2].tb_frame.f_back
        print("method2 invoked by: ", frame.f_code.co_name)

# Invoking method1
method1()
Output:
method2 invoked by: method1
Frame has all sorts of details, including line number, file name, argument counts, argument type and so on. The solution works across classes and modules too.
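As a quick self-contained illustration of those details (standard frame and code-object attributes, read the same way as in method2 above):
import sys

def show_caller_details():
    try:
        raise Exception
    except Exception:
        frame = sys.exc_info()[2].tb_frame.f_back
        print(frame.f_code.co_name)      # caller's name
        print(frame.f_lineno)            # line in the caller currently executing
        print(frame.f_code.co_filename)  # file the caller's code lives in
        print(frame.f_code.co_argcount)  # number of positional args the caller takes

def some_caller(a, b):
    show_caller_details()

some_caller(1, 2)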
Code:
#!/usr/bin/env python
import inspect

called = lambda: inspect.stack()[1][3]

def caller1():
    print "inside: ", called()

def caller2():
    print "inside: ", called()

if __name__ == '__main__':
    caller1()
    caller2()
Output:
shahid@shahid-VirtualBox:~/Documents$ python test_func.py
inside: caller1
inside: caller2
shahid@shahid-VirtualBox:~/Documents$
I found a way if you're going across classes and want the class the method belongs to AND the method. It takes a bit of extraction work but it makes its point. This works in Python 2.7.13.
import inspect, os

class ClassOne:
    def method1(self):
        classtwoObj.method2()

class ClassTwo:
    def method2(self):
        curframe = inspect.currentframe()
        calframe = inspect.getouterframes(curframe, 4)
        print '\nI was called from', calframe[1][3], \
              'in', calframe[1][4][0][6:-2]

# create objects to access class methods
classoneObj = ClassOne()
classtwoObj = ClassTwo()

# start the program
os.system('cls')
classoneObj.method1()
Hey mate, I once made three methods without plugins for my app; maybe this can help you. It worked for me, so it may work for you too:
def method_1(a=""):
    if a == "method_2":
        print("method_2")
    if a == "method_3":
        print("method_3")

def method_2():
    method_1("method_2")

def method_3():
    method_1("method_3")

method_2()
I have a very long function func which takes a browser handle and performs a bunch of requests and reads a bunch of responses in a specific order:
def func(browser):
    # make sure we are logged in, otherwise log in
    # make request to /search and check that the page has loaded
    # fill form in /search and submit it
    # read table of response and return the result as list of objects
Each operation requires a large amount of code due to the complexity of the DOM, and they tend to grow really fast.
What would be the best way to refactor this function into smaller components so that the following properties still hold?
- the execution flow of the operations and/or their preconditions is guaranteed, just like in the current version
- the preconditions are not checked with asserts against the state, as this is a very costly operation
- func can be called multiple times on the browser
Just wrap the three helper methods in a class, and track which methods are allowed to run in an instance.
class Helper(object):
    def __init__(self):
        self.a = True
        self.b = False
        self.c = False

    def funcA(self):
        if not self.a:
            raise RuntimeError("Cannot run funcA now")
        # do stuff here
        self.a = False
        self.b = True
        return whatever

    def funcB(self):
        if not self.b:
            raise RuntimeError("Cannot run funcB now")
        # do stuff here
        self.b = False
        self.c = True
        return whatever

    def funcC(self):
        if not self.c:
            raise RuntimeError("Cannot run funcC now")
        # do stuff here
        self.c = False
        self.a = True
        return whatever

def func(...):
    h = Helper()
    h.funcA()
    h.funcB()
    h.funcC()
    # etc
The only way to call a method is if its flag is true, and each method clears its own flag and sets the next method's flag before exiting. As long as you don't touch h.a et al. directly, this ensures that each method can only be called in the proper order.
Alternately, you can use a single flag that is a reference to the function currently allowed to run.
class Helper(object):
    def __init__(self):
        self.allowed = self.funcA

    def funcA(self):
        if self.allowed != self.funcA:
            raise RuntimeError("Cannot run funcA now")
        # do stuff
        self.allowed = self.funcB
        return whatever

    # etc
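For completeness, a sketch of how the remaining methods would look under this single-flag scheme; funcC hands the flag back to funcA so func() can run again:
    def funcB(self):
        if self.allowed != self.funcB:
            raise RuntimeError("Cannot run funcB now")
        # do stuff
        self.allowed = self.funcC
        return whatever

    def funcC(self):
        if self.allowed != self.funcC:
            raise RuntimeError("Cannot run funcC now")
        # do stuff
        self.allowed = self.funcA  # ready for the next func() call
        return whatever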
Here's the solution I came up with. I used a decorator (closely related to the one in this blog post) which only allows for a function to be called once.
def call_only_once(func):
    def new_func(*args, **kwargs):
        if not new_func._called:
            try:
                return func(*args, **kwargs)
            finally:
                new_func._called = True
        else:
            raise Exception("Already called this once.")
    new_func._called = False
    return new_func

@call_only_once
def stateA():
    print 'Calling stateA only this time'

@call_only_once
def stateB():
    print 'Calling stateB only this time'

@call_only_once
def stateC():
    print 'Calling stateC only this time'

def state():
    stateA()
    stateB()
    stateC()

if __name__ == "__main__":
    state()
You'll see that if you re-call any of the functions, the function will throw an Exception stating that the functions have already been called.
The problem with this is that if you ever need to call state() again, you're hosed. Unless you implement these functions as private functions, I don't think you can do exactly what you want due to the nature of Python's scoping rules.
Edit
You can also remove the else in the decorator and your function will always return None.
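That is, a sketch of the relaxed variant, where repeated calls silently do nothing instead of raising:
def call_only_once(func):
    def new_func(*args, **kwargs):
        if not new_func._called:
            try:
                return func(*args, **kwargs)
            finally:
                new_func._called = True
        # no else: a second call just falls through and returns None
    new_func._called = False
    return new_func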
Here is a snippet I used once for my state machine:
class StateMachine(object):
    def __init__(self):
        self.handlers = {}
        self.start_state = None
        self.end_states = []

    def add_state(self, name, handler, end_state=0):
        name = name.upper()
        self.handlers[name] = handler
        if end_state:
            self.end_states.append(name)

    def set_start(self, name):
        # startup state
        self.start_state = name

    def run(self, **kw):
        """
        Run

        :param kw:
        :return:
        """
        # the first .run call calls the first handler with kw keywords
        # each registered handler should return the following handler and the needed kw
        try:
            handler = self.handlers[self.start_state]
        except:
            raise InitializationError("must call .set_start() before .run()")

        while True:
            (new_state, kw) = handler(**kw)
            if isinstance(new_state, str):
                if new_state in self.end_states:
                    print("reached ", new_state)
                    break
                else:
                    handler = self.handlers[new_state]
            elif hasattr(new_state, "__call__"):
                handler = new_state
            else:
                return
Usage:
class MyParser(StateMachine):
    def __init__(self):
        super().__init__()
        # define handlers
        # we can define as many handlers as we want
        self.handlers["begin_parse"] = self.begin_parse
        # define the startup handler
        self.set_start("begin_parse")

    def end(self, **kw):
        logging.info("End of parsing ")
        # no callable handler => end
        return None, None

    def second(self, **kw):
        logging.info("second ")
        # do something
        # once the condition is reached, call the `self.end` handler
        if ...:
            return self.end, {}

    def begin_parse(self, **kw):
        logging.info("start of parsing ")
        # long process; once the condition is reached, call the `self.second` handler with new kw keywords
        while True:
            kw = {}
            if ...:
                return self.second, kw
            # elif other cond:
            #     return self.other_handler, kw
            # elif other cond 2:
            #     return self.other_handler_2, kw
            else:
                return self.end, kw

# start the state machine
MyParser().run()
will print
INFO:root:start of parsing
INFO:root:second
INFO:root:End of parsing
You could use local functions in your func function. Ok, they are still declared inside one single global function, but Python is nice enough to still give you access to them for tests.
Here is an example of a function declaring and executing 3 (supposedly heavy) subfunctions. It takes one optional parameter test that, when set to 'TEST', prevents actual execution and instead gives external access to the individual sub-functions and to a local variable:
def func(test=None):
    glob = []

    def partA():
        glob.append('A')

    def partB():
        glob.append('B')

    def partC():
        glob.append('C')

    if test == 'TEST':
        global testA, testB, testC, testCR
        testA, testB, testC, testCR = partA, partB, partC, glob
        return None

    partA()
    partB()
    partC()
    return glob
When you call func, the 3 parts are executed in sequence. But if you first call func('TEST'), you can then access the local glob variable as testCR, and the 3 subfunctions as testA, testB and testC. This way you can still test individually the 3 parts with well defined input and control their output.
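A short usage sketch of that pattern (testA, testB and testCR are the globals published by func('TEST')):
func('TEST')    # publish the internals instead of running them
testA()         # exercise part A alone
testB()         # exercise part B alone
print(testCR)   # inspect the shared local state: ['A', 'B']

print(func())   # a normal call still runs all three parts: ['A', 'B', 'C']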
I would insist on the suggestion given by @user3159253 in his comment on the original question:
> If the sole purpose is readability I would split the func into three "private" or "protected" ones (i.e. _func1 or __func1) and a private or protected property which keeps the state shared between the functions.
This makes a lot of sense to me and seems more in line with object-oriented programming than the other options. Consider this example as an alternative:
Your class (teste.py):
class Test:
    def __init__(self):
        self.__environment = {}  # Protected information to be shared
        self.public_stuff = 'public info'  # Accessible to outside callers

    def func(self):
        print "Main function"
        self.__func_a()
        self.__func_b()
        self.__func_c()
        print self.__environment

    def __func_a(self):
        self.__environment['function a says'] = 'hi'

    def __func_b(self):
        self.__environment['function b says'] = 'hello'

    def __func_c(self):
        self.__environment['function c says'] = 'hey'
Other file:
from teste import Test
t = Test()
t.func()
This will output:
Main function
{'function a says': 'hi', 'function b says': 'hello', 'function c says': 'hey'}
If you try to call one of the protected functions, an error occurs:
Traceback (most recent call last):
File "C:/Users/Lucas/PycharmProjects/testes/other.py", line 6, in <module>
t.__func_a()
AttributeError: Test instance has no attribute '__func_a'
Same thing if you try to access the protected environment variable:
Traceback (most recent call last):
File "C:/Users/Lucas/PycharmProjects/testes/other.py", line 5, in <module>
print t.__environment
AttributeError: Test instance has no attribute '__environment'
In my view this is the most elegant, simple and readable way to solve your problem, let me know if it fits your needs :)
Basically I want to do something like this:
How can I hook a function in a python module?
but I want to call the old function back after my own code, like this:
import whatever

oldfunc = whatever.this_is_a_function

def this_is_a_function(parameter):
    # my own code here
    # and call original function back
    oldfunc(parameter)

whatever.this_is_a_function = this_is_a_function
Is this possible?
I tried copy.copy and copy.deepcopy on the original function, but it didn't work.
Something like this? It avoids using globals, which is generally a good thing.
import whatever
import functools

def prefix_function(function, prefunction):
    @functools.wraps(function)
    def run(*args, **kwargs):
        prefunction(*args, **kwargs)
        return function(*args, **kwargs)
    return run

def this_is_a_function(parameter):
    pass  # Your own code here that will be run before

whatever.this_is_a_function = prefix_function(
    whatever.this_is_a_function, this_is_a_function)
prefix_function is a function that takes two functions: function and prefunction. It returns a function that takes any parameters, and calls prefunction followed by function with the same parameters. The prefix_function function works for any callable, so you only need to program the prefixing code once for any other hooking you might need to do.
@functools.wraps makes it so that the docstring and name of the returned wrapper function are the same as the original's.
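A quick usage sketch of prefix_function above, using a local function in place of the placeholder whatever module:
def greet(name):
    """Say hello."""
    return "Hello, " + name

def log_call(name):
    print("about to greet", name)

greet = prefix_function(greet, log_call)
print(greet("world"))  # prints "about to greet world", then "Hello, world"
print(greet.__name__)  # "greet" -- preserved by functools.wraps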
If you need this_is_a_function to call the old whatever.this_is_a_function with arguments different than what was passed to it, you could do something like this:
import whatever
import functools

def wrap_function(oldfunction, newfunction):
    @functools.wraps(oldfunction)
    def run(*args, **kwargs):
        return newfunction(oldfunction, *args, **kwargs)
    return run

def this_is_a_function(oldfunc, parameter):
    # Do some processing or something to customize the parameters to pass
    newparams = parameter * 2  # Example of a change to newparams
    return oldfunc(newparams)

whatever.this_is_a_function = wrap_function(
    whatever.this_is_a_function, this_is_a_function)
There is a problem that if whatever is a pure C module, it's typically impossible (or very difficult) to change its internals in the first place.
So, here's an example of monkey-patching the time function from the time module.
import time

old_time = time.time

def new_time():
    print('It is today... but more specifically the time is:')
    return old_time()

time.time = new_time
print(time.time())
# Output:
# It is today... but more specifically the time is:
# 1456954003.2
However, if you are trying to do this to C code, you will most likely get an error like cannot overwrite attribute. In that case, you probably want to subclass the C module.
You may want to take a look at this question.
This is the perfect time to tout my super-simplistic Hooker
def hook(hookfunc, oldfunc):
    def foo(*args, **kwargs):
        hookfunc(*args, **kwargs)
        return oldfunc(*args, **kwargs)
    return foo
Incredibly simple. It will return a function that first runs the desired hook function (with the same parameters, mind you) and then runs the original function that you are hooking, returning that original value. This also works to overwrite a class method. Say we have a static method in a class.
class Foo:
    @staticmethod
    def bar(data):
        for datum in data:
            print(datum, end=" ")  # assuming python3 for this
        print()
But we want to print the length of the data before we print out its elements
def myNewFunction(data):
    print("The length is {}.".format(len(data)))
And now we simply hook the function:
Foo.bar(["a", "b", "c"])
# => a b c

Foo.bar = hook(myNewFunction, Foo.bar)

Foo.bar(["x", "y", "z"])
# => The length is 3.
# => x y z
Actually, you can replace the target function's func_code. For example:
# a normal function
def old_func():
    print "i am old"

# a class method
class A(object):
    def old_method(self):
        print "i am old_method"

# a closure function
def make_closure(freevar1, freevar2):
    def wrapper():
        print "i am old_clofunc, freevars:", freevar1, freevar2
    return wrapper

old_clofunc = make_closure('fv1', 'fv2')

# ===============================================

# the new function
def new_func(*args):
    print "i am new, args:", args

# the new closure function
def make_closure2(freevar1, freevar2):
    def wrapper():
        print "i am new_clofunc, freevars:", freevar1, freevar2
    return wrapper

new_clofunc = make_closure2('fv1', 'fv2')

# ===============================================

# hook normal function
old_func.func_code = new_func.func_code

# hook class method
A.old_method.im_func.func_code = new_func.func_code

# hook closure function
# Note: the closure function's `co_freevars` count should be equal
old_clofunc.func_code = new_clofunc.func_code

# ===============================================

# call the old
old_func()
A().old_method()
old_clofunc()
output:
i am new, args: ()
i am new, args: (<__main__.A object at 0x0000000004A5AC50>,)
i am new_clofunc, freevars: fv1 fv2
Why can two functions with the same id value have differing attributes like __doc__ or __name__?
Here's a toy example:
some_dict = {}
for i in range(2):
    def fun(self, *args):
        print i
    fun.__doc__ = "I am function {}".format(i)
    fun.__name__ = "function_{}".format(i)
    some_dict["function_{}".format(i)] = fun

my_type = type("my_type", (object,), some_dict)
m = my_type()

print id(m.function_0)
print id(m.function_1)
print m.function_0.__doc__
print m.function_1.__doc__
print m.function_0.__name__
print m.function_1.__name__
print m.function_0()
print m.function_1()
Which prints:
57386560
57386560
I am function 0
I am function 1
function_0
function_1
1 # <--- Why is it bound to the most recent value of that variable?
1
I've tried mixing in a call to copy.deepcopy (not sure if the recursive copy is needed for functions or it is overkill) but this doesn't change anything.
You are comparing methods, and method objects are created anew each time you access one on an instance or class (via the descriptor protocol).
Once you tested their id() you discard the method again (there are no references to it), so Python is free to reuse the id when you create another method. You want to test the actual functions here, by using m.function_0.__func__ and m.function_1.__func__:
>>> id(m.function_0.__func__)
4321897240
>>> id(m.function_1.__func__)
4321906032
Method objects inherit the __doc__ and __name__ attributes from the function that they wrap. The actual underlying functions are really still different objects.
As for the two functions returning 1; both functions use i as a closure; the value for i is looked up when you call the method, not when you created the function. See Local variables in Python nested functions.
The easiest work-around is to add another scope with a factory function:
some_dict = {}
for i in range(2):
    def create_fun(i):
        def fun(self, *args):
            print i
        fun.__doc__ = "I am function {}".format(i)
        fun.__name__ = "function_{}".format(i)
        return fun
    some_dict["function_{}".format(i)] = create_fun(i)
Per your comment on ndpu's answer, here is one way you can create the functions without needing to have an optional argument:
for i in range(2):
    def funGenerator(i):
        def fun1(self, *args):
            print i
        return fun1
    fun = funGenerator(i)
    fun.__doc__ = "I am function {}".format(i)
    fun.__name__ = "function_{}".format(i)
    some_dict["function_{}".format(i)] = fun
@Martijn Pieters is perfectly correct. To illustrate, try this modification:
some_dict = {}
for i in range(2):
    def fun(self, *args):
        print i
    fun.__doc__ = "I am function {}".format(i)
    fun.__name__ = "function_{}".format(i)
    some_dict["function_{}".format(i)] = fun
    print "id", id(fun)

my_type = type("my_type", (object,), some_dict)
m = my_type()

print id(m.function_0)
print id(m.function_1)
print m.function_0.__doc__
print m.function_1.__doc__
print m.function_0.__name__
print m.function_1.__name__
print m.function_0()
print m.function_1()

c = my_type()
print c
print id(c.function_0)
You see that fun gets a different id each time, and is different from the final one. It's the method-creation logic that sends them pointing to the same location, as that's where the class's code is stored. Also, if you use my_type as a sort of class, instances created with it have the same memory address for that function.
This code gives:
id 4299601152
id 4299601272
4299376112
4299376112
I am function 0
I am function 1
function_0
function_1
1
None
1
None
<__main__.my_type object at 0x10047c350>
4299376112
You should save the current i to make this:
1 # <--- Why is it bound to the most recent value of that variable?
1
work, for example by setting a default value for a function argument:
for i in range(2):
    def fun(self, i=i, *args):
        print i
    # ...
or create a closure:
for i in range(2):
    def f(i):
        def fun(self, *args):
            print i
        return fun
    fun = f(i)
    # ...
I have created a class that can take a function with a set of arguments. I would like to run the passed function every time the event handler signals.
I am attaching my code below; it runs when I pass fun2, which has no arguments, but not with fun1. Any suggestions to make the code below work with both fun1 and fun2? If I omit the return statement from fun1, I get an error that 'str' object is not callable.
>>> TimerTest.main()
function 1. this function does task1
my function from init from function1
my function in start of runTimer
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Program Files (x86)\IronPython 2.7\TimerTest.py", line 57, in main
  File "C:\Program Files (x86)\IronPython 2.7\TimerTest.py", line 25, in runTimer
TypeError: str is not callable
import System
from System.Timers import (Timer, ElapsedEventArgs)

class timerTest:
    def __init__(self, interval, autoreset, fun):
        self.Timer = Timer()
        self.Timer.Interval = interval
        self.Timer.AutoReset = autoreset
        self.Timer.Enabled = True
        self.myfunction = fun

    def runTimer(self):
        print 'my function in start of runTimer', self.myfunction()
        self.Timer.Start()

        def OnTimedEvent(s, e):
            print "The Elapsed event was raised at ", e.SignalTime
            print 'printing myfunction...', self.myfunction()
            self.myfunction()

        self.Timer.Elapsed += OnTimedEvent

    def stopTimer(self):
        self.Timer.Stop()
        self.Timer.Dispose = True

def fun1(a, b):
    print 'function 1. this function does task1'
    return 'from function1'

def fun2():
    print 'Function 2. This function does something'
    print 'Test 1...2...3...'
    return 'From function 2'

def main():
    a = timerTest(1000, True, fun1(10, 20))
    a.runTimer()
    b = timerTest(3000, True, fun2)
    b.runTimer()

if __name__ == '__main__':
    main()
I am learning Python and I apologize if my questions are basic.
To change the interval, I stop the timer using a stopTimer method I added to the timerTest class:
def stopTimer(self):
    self.Timer.Stop()
I take the new user input and call the runTimer method, which I have revised per Paolo Moretti's suggestions:
def runTimer(self, interval, autoreset, fun, *args):
    self.Timer.Interval = interval
    self.Timer.AutoReset = autoreset
    myfunction = fun
    my_args = args
    self.Timer.Start()

    def OnTimedEvent(s, e):
        print "The Elapsed event was raised at ", e.SignalTime
        myfunction(*my_args)

    self.Timer.Elapsed += OnTimedEvent
Whenever a command button is pressed, the following method is called:
requestTimer.runTimer((self.intervalnumericUpDown.Value* 1000),True, function, *args)
I do not understand why stopping the timer and sending the request causes the runTimer method to be executed multiple times; it seems dependent on how many times I change the interval. I have tried a couple of methods, Close and Dispose, with no success.
A second question, on a slightly different subject: I have been looking at other .NET classes with timers. How would I translate the following VB sample code into Python? Is "callback As TimerCallback" equivalent to myfunction(*my_args)?
Public Sub New ( _
    callback As TimerCallback, _
    state As Object, _
    dueTime As Integer, _
    period As Integer _
)
per .NET documentation:
callback
Type: System.Threading.TimerCallback
A TimerCallback delegate representing a method to be executed.
I can partially get the timer event to fire if I define a function with no arguments such as:
def fun2(stateinfo):
    # function code
which works with:
self.Timer = Timer(fun2, self.autoEvent, self.dueTime,self.period)
The function call fails if I replace fun2 with a more generic function call, myfunction(*my_args).
You can also use the * syntax for calling a function with an arbitrary argument list:
class TimerTest:
    def __init__(self, interval, autoreset, fun, *args):
        # ...
        self.my_function = fun
        self.my_args = args
        # ...

    def run_timer(self):
        # ...
        def on_timed_event(s, e):
            # ...
            self.my_function(*self.my_args)
        # ...
Usage:
>>> t1 = TimerTest(1000, True, fun1, 10, 20)
>>> t2 = TimerTest(1000, True, fun2)
And check out the PEP 8 style guide as well. Python's preferred coding conventions are different from those of many other common languages.
Question 1
Every time you use the addition assignment operator (+=) you are attaching a new event handler to the event. For example this code:
timer = Timer()

def on_timed_event(s, e):
    print "Hello from my event handler"

timer.Elapsed += on_timed_event
timer.Elapsed += on_timed_event
timer.Start()
will print the "Hello from my event handler" phrase twice.
For more information you can check out the MSDN documentation, in particular Subscribe to and Unsubscribe from Events.
So, you should probably move the event subscription to the __init__ method, and only start the timer in your run_timer method:
def run_timer(self):
    self.Timer.Start()
You could also add a new method (or use a property) for changing the interval:
def set_interval(self, interval):
    self.Timer.Interval = interval
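Putting those two suggestions together, a rough sketch of how the timer class from the question might look (assuming the same System.Timers.Timer API; the event subscription now happens exactly once, in __init__):
class timerTest:
    def __init__(self, interval, autoreset, fun, *args):
        self.Timer = Timer()
        self.Timer.Interval = interval
        self.Timer.AutoReset = autoreset
        self.my_function = fun
        self.my_args = args
        self.Timer.Elapsed += self.on_timed_event  # subscribe once

    def on_timed_event(self, s, e):
        self.my_function(*self.my_args)

    def run_timer(self):
        self.Timer.Start()

    def set_interval(self, interval):
        self.Timer.Interval = interval

    def stop_timer(self):
        self.Timer.Stop()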
Question 2
You are right about TimerCallback: it's a delegate representing a method to be executed.
For example, this Timer constructor:
public Timer(
    TimerCallback callback
)
is expecting a void function with a single parameter of type Object.
public delegate void TimerCallback(
    Object state
)
When you are invoking a function using the * syntax you are doing something completely different. It's probably easier if I show you an example:
def foo(a, b, *args):
    print a
    print b
    print args
>>> foo(1, 2, 3, 4, 5)
1
2
(3, 4, 5)
>>> args = (1, 2, 3)
>>> foo(1, 2, *args)
1
2
(1, 2, 3)
Basically in the second case you are invoking a function with additional arguments unpacked from a tuple.
So if you want to pass a function with a different signature to a constructor which accepts a TimerCallback delegate, you have to create a new function, as @Lasse is suggesting.
def my_func(state, a, b):
    pass
You can do this either using the lambda keyword:
t1 = Timer(lambda state: my_func(state, 1, 2))
or by declaring a new function:
def time_proc(state):
    my_func(state, 1, 2)

t2 = Timer(time_proc)
If the function takes no parameters, simply pass it without calling it:
b = timerTest(3000, True, fun2)
If it takes parameters, you need to convert it to a function that doesn't take parameters. What you're doing is calling it, and then you pass the result, which in this case is a string. Instead do this:
a = timerTest(1000, True, lambda: fun1(10, 20))