Hide post-function calls in with statement - python

I have a function f that returns two values. After calling it, I use the first value. I need the second one to glue things together with another function g. So my code looks like:
a, b = f()
# do stuff with a
g(b)
This is very repetitive, so I figured I could use something like a RAII approach. But, since I don't know when objects will die and my main goal is to get rid of the repetitive code, I used the with statement:
with F() as a:
    # do stuff with a
That's it. I basically created a class F around this function, supplying __enter__ and __exit__ methods (and obviously an __init__ method).
However, I still wonder whether this is the right "Pythonic" way to handle this, since the with statement is meant for handling resources and exceptions. In particular, the __exit__ method takes three arguments that I don't use at the moment.
Edit (further explanations):
Whenever I call f() I NEED to do something with b; it does not matter what happens in between. I expressed it as g(b), and exactly this is what I hide away in the with statement, so the programmer doesn't have to type g(b) again and again after every call of f(), which might get very messy and confusing, since # do stuff with a might get lengthy.
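For reference, here is a minimal sketch of what such a wrapper class could look like (my guess at the setup described above, reusing f and g from the question):
class F(object):
    def __enter__(self):
        self.a, self.b = f()
        return self.a

    def __exit__(self, exc_type, exc_value, traceback):
        g(self.b)      # the glue call that would otherwise be repeated
        return False   # do not suppress exceptions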

A slightly more Pythonic way of doing it would be to use the contextlib module. You can do away with rolling your own class with __enter__ and __exit__ methods.
from contextlib import contextmanager

def g(x):
    print "printing g(..):", x

def f():
    return "CORE", "extra"

@contextmanager
def wrap():
    a, b = f()
    try:
        yield a
    except Exception as e:
        print "Exception occurred", e
    finally:
        g(b)

if __name__ == '__main__':
    with wrap() as x:
        print "printing f(..):", x
        raise Exception()
Output:
$ python temp.py
printing f(..): CORE
Exception occurred
printing g(..): extra

For someone else reading your code (or you in 6 months time), a context manager probably will imply the creation/initialisation and closure/deletion of a resource, and I would be very wary of re-purposing the idiom.
You could use composition, rather than a context manager:
def func(a, b):
    # do stuff with a & optionally b
    pass

def new_f(do_stuff):
    a, b = f()
    do_stuff(a, b)
    g(b)

new_f(func)

Related

Why was Python decorator chaining designed to work backwards? What is the logic behind this order?

To start with, my question here is about the semantics and the logic behind why the Python language was designed like this in the case of chained decorators. Please notice the nuance of how this differs from the question
How decorators chaining work?
Link: How decorators chaining work? It seems quite a number of other users had the same doubts about the call order of chained Python decorators. It is not like I can't add a __call__ and see the order for myself; I get that. My point is: why was it designed to start from the bottom when it comes to chained Python decorators?
E.g.
def first_func(func):
    def inner():
        x = func()
        return x * x
    return inner

def second_func(func):
    def inner():
        x = func()
        return 2 * x
    return inner

@first_func
@second_func
def num():
    return 10

print(num())
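With both decorators applied, num is rebound to first_func(second_func(num)), so calling num() runs first_func's inner, which calls second_func's inner, which calls the original num: 10 is doubled to 20, then squared to 400, and the print outputs 400.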
Quoting the documentation on decorators:
The decorator syntax is merely syntactic sugar, the following two function definitions are semantically equivalent:
def f(arg):
    ...
f = staticmethod(f)
@staticmethod
def f(arg):
    ...
From this it follows that the decoration in
@a
@b
@c
def fun():
    ...
is equivalent to
fun = a(b(c(fun)))
IOW, it was designed like that because it's just syntactic sugar.
For proof, let's just decorate an existing function and not return a new one:
def dec1(f):
    print(f"dec1: got {vars(f)}")
    f.dec1 = True
    return f

def dec2(f):
    print(f"dec2: got {vars(f)}")
    f.dec2 = True
    return f

@dec1
@dec2
def foo():
    pass

print(f"Fully decked out: {vars(foo)}")
prints out
dec2: got {}
dec1: got {'dec2': True}
Fully decked out: {'dec2': True, 'dec1': True}
TL;DR
g(f(x)) means applying f to x first, then applying g to the output.
Omit the parentheses, add @ before and a line break after each function name:
@g
@f
x
(Syntax only valid if x is the definition of a function/class.)
Abstract explanation
The reasoning behind this design decision becomes fairly obvious IMHO, if you remember what the decorator syntax - in its most abstract and general form - actually means. So I am going to try the abstract approach to explain this.
It is all about syntax
To be clear here, the distinguishing factor in the concept of the "decorator" is not the object underneath it (so to speak) nor the operation it performs. It is the special syntax and the restrictions for it. Thus, a decorator at its core is nothing more than a feature of Python grammar.
The decorator syntax requires a target to be decorated. Initially (see PEP 318) the target could only be function definitions; later class definitions were also allowed to be decorated (see PEP 3129).
Minimal valid syntax
Syntactically, this is valid Python:
def f(): pass

@f
class Target: pass  # or `def target(): pass`
However, this will (perhaps unsurprisingly) cause a TypeError upon execution. As has been reiterated multiple times here and in other posts on this platform, the above is equivalent to this:
def f(): pass
class Target: pass
Target = f(Target)
Minimal working decorator
The TypeError stems from the fact that f lacks a positional argument. This is the obvious logical restriction imposed by what a decorator is supposed to do. Thus, to achieve not only syntactically valid code, but also have it run without errors, this is sufficient:
def f(x): pass
@f
class Target: pass
This is still not very useful, but it is enough for the most general form of a working decorator.
Decoration is just application of a function to the target and assigning the output to the target's name.
Chaining functions ⇒ Chaining decorators
We can ignore the target and what it is or does and focus only on the decorator. Since it merely stands for applying a function, the order of operations comes into play, as soon as we have more than one. What is the order of operation, when we chain functions?
def f(x): pass
def g(x): pass
class Target: pass
Target = g(f(Target))
Well, just like in the composition of purely mathematical functions, this implies that we apply f to Target first and then apply g to the result of f. Despite g appearing first (i.e. further left), it is not what is applied first.
Since stacking decorators is equivalent to nesting functions, it seems obvious to define the order of operation the same way. This time, we just skip the parentheses, add an @ symbol in front of the function name and a line break after it.
def f(x): pass
def g(x): pass
@g
@f
class Target: pass
But, why though?
If after the explanation above (and reading the PEPs for historic background), the reasoning behind the order of operation is still not clear or still unintuitive, there is not really any good answer left, other than "because the devs thought it made sense, so get used to it".
PS
I thought I'd add a few things for additional context based on all the comments around your question.
Decoration vs. calling a decorated function
A source of confusion seems to be the distinction between what happens when applying the decorator versus calling the decorated function.
Notice that in my examples above I never actually called target itself (the class or function being decorated). Decoration is itself a function call. Adding @f above the target is calling f and passing the target to it as the first positional argument.
A "decorated function" might not even be a function
The distinction is very important because nowhere does it say that a decorator actually needs to return a callable (function or class). f being just a function means it can return whatever it wants. This is again valid and working Python code:
def f(x): return 3.14

@f
def target(): return "foo"

try:
    target()
except Exception as e:
    print(repr(e))

print(target)
Output:
TypeError("'float' object is not callable")
3.14
Notice that the name target does not even refer to a function anymore. It just holds the 3.14 returned by the decorator. Thus, we cannot even call target. The entire function behind it is essentially lost immediately before it is even available to the global namespace. That is because f just completely ignores its first positional argument x.
Replacing a function
Expanding this further, if we want, we can have f return a function. Not doing that seems very strange, considering it is used to decorate a function. But it doesn't have to be related to the target at all. Again, this is fine:
def bar(): return "bar"
def f(x): return bar
@f
def target(): return "foo"
print(target())
print(target is bar)
Output:
bar
True
It comes down to convention
The way decorators are actually overwhelmingly used out in the wild, is in a way that still keeps a reference to the target being decorated around somewhere. In practice it can be as simple as this:
def f(x):
    print(f"applied `f({x.__name__})`")
    return x

@f
def target(): return "foo"
Just running this piece of code outputs applied f(target). Again, notice that we don't call target here, we only called f. But now, the decorated function is still target, so we could add the call print(target()) at the bottom and that would output foo after the other output produced by f.
The fact that most decorators don't just throw away their target comes down to convention. You (as a developer) would not expect your function/class to simply be thrown away completely, when you use a decorator.
Decoration with wrapping
This is why real-life decorators typically either return the reference to the target at the end outright (like in the last example) or they return a different callable, but that callable itself calls the target, meaning a reference to the target is kept in that new callable's local namespace. These functions are what are usually referred to as wrappers:
def f(x):
    print(f"applied `f({x.__name__})`")
    def wrapper():
        print(f"wrapper executing with {locals()=}")
        return x()
    return wrapper

@f
def target(): return "foo"

print(f"{target()=}")
print(f"{target.__name__=}")
Output:
applied `f(target)`
wrapper executing with locals()={'x': <function target at 0x7f1b2f78f250>}
target()='foo'
target.__name__='wrapper'
As you can see, what the decorator left us is wrapper, not what we originally defined as target. And the wrapper is what we call, when we write target().
Wrapping wrappers
This is the kind of behavior we typically expect when we use decorators. And therefore it is not surprising that multiple decorators stacked together behave the way they do. They are called from the inside out (as explained above) and each adds its own wrapper around what it receives from the one applied before:
def f(x):
    print(f"applied `f({x.__name__})`")
    def wrapper_from_f():
        print(f"wrapper_from_f executing with {locals()=}")
        return x()
    return wrapper_from_f

def g(x):
    print(f"applied `g({x.__name__})`")
    def wrapper_from_g():
        print(f"wrapper_from_g executing with {locals()=}")
        return x()
    return wrapper_from_g

@g
@f
def target(): return "foo"

print(f"{target()=}")
print(f"{target.__name__=}")
Output:
applied `f(target)`
applied `g(wrapper_from_f)`
wrapper_from_g executing with locals()={'x': <function f.<locals>.wrapper_from_f at 0x7fbfc8d64f70>}
wrapper_from_f executing with locals()={'x': <function target at 0x7fbfc8d65630>}
target()='foo'
target.__name__='wrapper_from_g'
This shows very clearly the difference between the order in which the decorators are called and the order in which the wrapped/wrapping functions are called.
After the decoration is done, we are left with wrapper_from_g, which is referenced by our target name in global namespace. When we call it, wrapper_from_g executes and calls wrapper_from_f, which in turn calls the original target.
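As a side note, the name change observed above (target.__name__ == 'wrapper_from_g') is exactly what the standard library's functools.wraps is for; a minimal sketch:
from functools import wraps

def f(x):
    @wraps(x)          # copies __name__, __doc__, etc. from x onto the wrapper
    def wrapper():
        return x()
    return wrapper

@f
def target(): return "foo"

print(target())           # foo
print(target.__name__)    # target, not 'wrapper'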

How could I pass a block to a function in Python, like the way you pass a block in Ruby

In Ruby, I can pass a block of code to a method.
For example, I can pass different code blocks to the get_schedules_with_retries method,
and invoke the block by calling block.call.
I'd like to know how I could implement that logic in Python,
because I have lots of code blocks that need the retry pattern,
and I don't want to copy-paste the retry logic into each of them.
Example:
def get_schedules_with_retries(&block)
  max_retry_count = 3
  retry_count = 0
  while (retry_count < max_retry_count)
    begin
      schedules = get_more_raw_schedules
      block.call(schedules)
    rescue Exception => e
      print_error(e)
    end
    if schedules.count > 0
      break
    else
      retry_count += 1
    end
  end
  return schedules
end

get_schedules_with_retries do |schedules|
  # do something here
end

get_schedules_with_retries do |schedules|
  # do another thing here
end
In Python, a block is a syntactic feature (an indentation under block opening statements like if or def) and not an object. The feature you expect may be a closure (which can access variables outside of the block), which you can achieve using inner functions, but any callable could be used. Because of how lambda works in Python, the inline function definition you've shown with do |arg| is limited to a single expression.
Here's a rough rewrite of your sample code in Python.
import traceback

def get_schedules_with_retries(callable, max_retry_count=3):
    retry_count = 0
    while retry_count < max_retry_count:
        schedules = get_more_raw_schedules()
        try:
            callable(schedules)
        except:  # Note: could filter types, bind name etc.
            traceback.print_exc()
        if len(schedules) > 0:
            break
        else:
            retry_count += 1
    return schedules

get_schedules_with_retries(lambda schedules: single_expression)

def more_complex_function(schedules):
    pass  # do another thing here

get_schedules_with_retries(more_complex_function)
One variant uses a for loop to make it clear the loop is finite:
def call_with_retries(callable, args=(), tries=3):
    for attempt in range(tries):
        try:
            result = callable(*args)
            break
        except:
            traceback.print_exc()
            continue
    else:  # break never reached, so the function always failed
        raise  # reraises the exception we printed above
    return result
Frequently when passing callables like this, you'll already have the function you want available somewhere and won't need to redefine it. For instance, methods on objects (bound methods) are perfectly valid callables.
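For instance, a bound method could be handed straight to the function defined above (a sketch; ScheduleWriter is a made-up name):
class ScheduleWriter(object):
    def __init__(self, path):
        self.path = path

    def write(self, schedules):
        # pretend to persist the schedules somewhere
        print("writing %d schedules to %s" % (len(schedules), self.path))

writer = ScheduleWriter("/tmp/schedules.txt")
get_schedules_with_retries(writer.write)  # a bound method is a callable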
You could do it like this:
def codeBlock(parameter1, parameter2):
    print("I'm a code block")

def passMeABlock(block, *args):
    block(*args)

# pass the block like this
passMeABlock(codeBlock, 1, 2)
You do so by defining a function, either by using the def statement or a lambda expression.
There are other techniques however, that may apply here. If you need to apply common logic to the input or output of a function, write a decorator. If you need to handle exceptions in a block of code, perhaps creating a context manager is applicable.
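For example, the retry logic from the question could be captured once in a decorator; a rough sketch (with_retries is a hypothetical name, and get_more_raw_schedules is assumed from the question):
import traceback
from functools import wraps

def with_retries(tries=3):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(tries):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    traceback.print_exc()
            return None  # all attempts failed
        return wrapper
    return decorator

@with_retries(tries=3)
def process_schedules():
    schedules = get_more_raw_schedules()
    # do something here
    return schedules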

How to prevent try catching every possible line in python?

I have many lines in a row that may throw an exception, and no matter what happens, execution should continue with the next line. How do I do this without individually wrapping every single statement that may throw in its own try/except?
try:
    this_may_cause_an_exception()
    but_I_still_wanna_run_this()
    and_this()
    and_also_this()
except Exception, e:
    logging.exception('An error may have occurred in one of the first functions, causing the others not to be executed. Locals: {locals}'.format(locals=locals()))
Looking at the code above: all of the functions may throw exceptions, but the following functions should still execute regardless of whether an exception was thrown. Is there a nice way of doing that?
I don't want to do this:
try:
    this_may_cause_an_exception()
except:
    pass
try:
    but_I_still_wanna_run_this()
except:
    pass
try:
    and_this()
except:
    pass
try:
    and_also_this()
except:
    pass
I think code should stop running after an exception only if the exception is critical (the computer will burn or the whole system will get messed up); but exceptions are also thrown for many small things, such as a failed connection, and then execution should continue.
I normally don't have any problems with exception handling, but in this case I'm using a 3rd party library which easily throws exceptions for small things.
After looking at m4spy's answer, I thought: wouldn't it be possible to have a decorator which lets every line in the function execute even if one of them raises an exception?
Something like this would be cool:
def silent_log_exceptions(func):
    @wraps(func)
    def _wrapper(*args, **kwargs):
        try:
            func(*args, **kwargs)
        except Exception:
            logging.exception('...')
            some_special_python_keyword  # which causes it to continue executing the next line
    return _wrapper
Or something like this:
def silent_log_exceptions(func):
    @wraps(func)
    def _wrapper(*args, **kwargs):
        for line in func(*args, **kwargs):
            try:
                exec line
            except Exception:
                logging.exception('...')
    return _wrapper

@silent_log_exceptions
def save_tweets():
    a = requests.get('http://twitter.com')
    x = parse(a)
    bla = x * x
for func in [this_may_cause_an_exception,
             but_I_still_wanna_run_this,
             and_this,
             and_also_this]:
    try:
        func()
    except:
        pass
There are two things to notice here:
All actions you want to perform have to be represented by callables with the same signature (in the example, callables that take no arguments). If they aren't already, wrap them in small functions, lambda expressions, callable classes, etc.
Bare except clauses are a bad idea, but you probably already knew that.
An alternative approach, that is more flexible, is to use a higher-order function like
def logging_exceptions(f, *args, **kwargs):
    try:
        f(*args, **kwargs)
    except Exception as e:
        print("Houston, we have a problem: {0}".format(e))
I ran into something similar, and asked a question on SO here. The accepted answer handles logging, and watching for only a specific exception. I ended up with a modified version:
import logging

class Suppressor:
    def __init__(self, exception_type, l=None):
        self._exception_type = exception_type
        self.logger = logging.getLogger('Suppressor')
        if l:
            self.l = l
        else:
            self.l = {}

    def __call__(self, expression):
        try:
            exec expression in self.l
        except self._exception_type as e:
            self.logger.debug('Suppressor: suppressed exception %s with content \'%s\'' % (type(self._exception_type), e))
Usable like so:
s = Suppressor(yourError, locals())
s(cmdString)
So you could set up a list of commands and use map with the suppressor to run across all of them.
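For example (a sketch, keeping the answer's Python 2 exec syntax):
commands = ["print 'a'", "1/0", "print 'b'"]
s = Suppressor(ZeroDivisionError, locals())
map(s, commands)  # runs every command; the 1/0 is suppressed and logged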
You can handle such a task with a decorator:
import logging
from functools import wraps

def log_ex(func):
    @wraps(func)
    def _wrapper(*args, **kwargs):
        try:
            func(*args, **kwargs)
        except Exception:
            logging.exception('...')
    return _wrapper
@log_ex
def this_may_cause_an_exception():
    print 'this_may_cause_an_exception'
    raise RuntimeError()

@log_ex
def but_i_wanna_run_this():
    print 'but_i_wanna_run_this'

def test():
    this_may_cause_an_exception()
    but_i_wanna_run_this()
Calling the test function looks like this (and shows that both functions were executed):
>>> test()
this_may_cause_an_exception
ERROR:root:...
Traceback (most recent call last):
File "<stdin>", line 5, in _wrapper
File "<stdin>", line 4, in my_func
RuntimeError
but_i_wanna_run_this
Sometimes, when a language misses support for an elegant way of expressing an idea because language development literally failed over the last decades, you can only rely on the fact that Python is still a dynamic language which supports the exec statement, which makes the following possible:
code="""
for i in range(Square_Size):
Square[i,i] #= 1
Square[i+1,i] #= 2
#dowhatever()
"""
This new "operator" makes the code more pythonic and elegant, since you don't need additional if-statements that guarantee that the index stays in bounds or that the function succeeds, which is totally irrelevant to what we want to express (it just shouldn't stop) here. (Note: while safe indexing would be possible by creating a class based on the list class, this operator works wherever a try/except should go.) In Lisp it would be easy to define this in a Lispy way, but it seems impossible to define it elegantly in Python. Still, here is the little preparser which makes it possible:
exec "\n".join([o+"try: "+z.replace("#","")+"\n"+o+"except: pass" if "#" in z else z for z in code.split("\n") for o in ["".join([h for h in z if h==" "])]]) #new <- hackish operator which wraps try catch into line
The result, assuming that Square was 4x4 and contained only zeros:
[1 0 0 0]
[2 1 0 0]
[0 2 1 0]
[0 0 2 1]
Relevant: The Sage / Sagemath CAS uses a preparse-function
which transforms code before it reaches the Python interpreter.
A monkey-patch for that function would be:
def new_preparse(code, *args, **kwargs):
    code = "\n".join([o + "try: " + z.replace("@", "") + "\n" + o + "except: pass" if "@" in z else z for z in code.split("\n") for o in ["".join([h for h in z if h == " "])]])
    return preparse(code)

sage.misc.preparser.preparse = new_preparse
Apart from the answers provided, I think it's worth noting that one-line try-except statements have been proposed - see the related PEP 463, with the unfortunate rejection notice:
""" I want to reject this PEP. I think the proposed syntax is acceptable given the
desired semantics, although it's still a bit jarring. It's probably no worse than the
colon used with lambda (which echoes the colon used in a def just like the colon here
echoes the one in a try/except) and definitely better than the alternatives listed.
But the thing I can't get behind are the motivation and rationale. I don't think that
e.g. dict.get() would be unnecessary once we have except expressions, and I disagree
with the position that EAFP is better than LBYL, or "generally recommended" by Python.
(Where do you get that? From the same sources that are so obsessed with DRY they'd rather
introduce a higher-order-function than repeat one line of code? :-)
This is probably the most you can get out of me as far as a pronouncement. Given that
the language summit is coming up I'd be happy to dive deeper in my reasons for rejecting
it there (if there's demand).
I do think that (apart from never explaining those dreadful acronyms :-) this was a
well-written and well-researched PEP, and I think you've done a great job moderating the
discussion, collecting objections, reviewing alternatives, and everything else that is
required to turn a heated debate into a PEP. Well done Chris (and everyone who
helped), and good luck with your next PEP! """
try:
    this_may_cause_an_exception()
except:
    logging.exception('An error occurred')
finally:
    but_I_still_wanna_run_this()
    and_this()
    and_also_this()
You can use the finally block of exception handling. It is actually meant for cleanup code though.
EDIT:
I see you said all of the functions can throw exceptions, in which case larsmans' answer is about the cleanest I can think of to catch exceptions for each function call.

Wrapping generator functions in Python

I am writing some code that traverses a structure that may have cyclic references.
Rather than explicitly doing checks at the beginning of the recursive functions I thought that I would create a decorator that didn't allow a function to be called more than once with the same arguments.
Below is what I came up with. As it is written, this will try to iterate over NoneType and raise an exception. I know that I could fix it by returning, say, an empty list, but I wanted to be more elegant. Is there a way to tell from within the decorator whether the function being decorated is a generator function or not? This way I could conditionally raise StopIteration if it is a generator or just return None otherwise.
previous = set()

def NO_DUPLICATE_CALLS(func):
    def wrapped(*args, **kwargs):
        if args in previous:
            print 'skipping previous call to %s with args %s %s' % (func.func_name, repr(args), repr(kwargs))
            return
        else:
            ret = func(*args, **kwargs)
            previous.add(args)
            return ret
    return wrapped

@NO_DUPLICATE_CALLS
def foo(x):
    for y in x:
        yield y

for f in foo('Hello'):
    print f

for f in foo('Hello'):
    print f
Okay, check this out:
>>> from inspect import isgeneratorfunction
>>> def foo(x):
...     for y in x:
...         yield y
...
>>> isgeneratorfunction(foo)
True
This requires Python 2.6 or higher, though.
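Applied to the decorator from the question, the check could be used like this (a sketch along the lines of the original code, relying on the same module-level previous set):
from inspect import isgeneratorfunction

def NO_DUPLICATE_CALLS(func):
    is_gen = isgeneratorfunction(func)
    def wrapped(*args, **kwargs):
        if args in previous:
            # hand back something safely iterable for generators, None otherwise
            return iter(()) if is_gen else None
        ret = func(*args, **kwargs)
        previous.add(args)
        return ret
    return wrapped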
Unfortunately there is not really a good way to know whether a function might return some type of iterable without calling it; see this answer to another question for a pretty good explanation of some potential issues.
However, you can get around this by using a modified memoize decorator. Normally memoizing decorators would create a cache with the return values for previous parameters, but instead of storing the full value you could just store the type of the return value. When you come across parameters you have already seen just return a new initialization of that type, which would result in an empty string, list, etc.
Here is a link to a memoize decorator to get you started:
http://wiki.python.org/moin/PythonDecoratorLibrary#Memoize

Efficient way of having a function only execute once in a loop

At the moment, I'm doing stuff like the following, which is getting tedious:
run_once = 0
while 1:
    if run_once == 0:
        myFunction()
        run_once = 1
I'm guessing there is some more accepted way of handling this stuff?
What I'm looking for is having a function execute once, on demand. For example, at the press of a certain button. It is an interactive app which has a lot of user controlled switches. Having a junk variable for every switch, just for keeping track of whether it has been run or not, seemed kind of inefficient.
I would use a decorator on the function to handle keeping track of how many times it runs.
def run_once(f):
    def wrapper(*args, **kwargs):
        if not wrapper.has_run:
            wrapper.has_run = True
            return f(*args, **kwargs)
    wrapper.has_run = False
    return wrapper

@run_once
def my_function(foo, bar):
    return foo + bar
Now my_function will only run once. Other calls to it will return None. Just add an else clause to the if if you want it to return something else. From your example, it doesn't need to return anything ever.
If you don't control the creation of the function, or the function needs to be used normally in other contexts, you can just apply the decorator manually as well.
action = run_once(my_function)
while 1:
    if predicate:
        action()
This will leave my_function available for other uses.
Finally, if you need to run it once, and then once more later on, you can just reset it:
action = run_once(my_function)
action() # run once the first time
action.has_run = False
action() # run once the second time
Another option is to set the func_code code object for your function to be a code object for a function that does nothing. This should be done at the end of your function body.
For example:
def run_once():
    # Code for something you only want to execute once
    run_once.func_code = (lambda: None).func_code
Here run_once.func_code = (lambda:None).func_code replaces your function's executable code with the code for lambda:None, so all subsequent calls to run_once() will do nothing.
This technique is less flexible than the decorator approach suggested in the accepted answer, but may be more concise if you only have one function you want to run once.
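Note that in Python 3 this attribute is spelled __code__, so the equivalent line is run_once.__code__ = (lambda: None).__code__.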
Run the function before the loop. Example:
myFunction()
while True:
    # all the other code being executed in your loop
This is the obvious solution. If there's more than meets the eye, the solution may be a bit more complicated.
I'm assuming this is an action that you want to be performed at most one time, if some conditions are met. Since you won't always perform the action, you can't do it unconditionally outside the loop. Something like lazily retrieving some data (and caching it) if you get a request, but not retrieving it otherwise.
def do_something():
    [x() for x in expensive_operations]
    global action
    action = lambda: None

action = do_something

while True:
    # some sort of complex logic...
    if foo:
        action()
There are many ways to do what you want; however, do note that it is quite possible that, as described in the question, you don't have to call the function inside the loop.
If you insist in having the function call inside the loop, you can also do:
needs_to_run = expensive_function
while 1:
    …
    if needs_to_run: needs_to_run(); needs_to_run = None
    …
I've thought of another (slightly unusual, but very effective) way to do this that doesn't require decorator functions or classes. Instead it just uses a mutable keyword argument, which ought to work in most versions of Python. Most of the time these are something to be avoided, since normally you wouldn't want a default argument value to change from call to call, but that ability can be leveraged in this case and used as a cheap storage mechanism. Here's how that would work:
def my_function1(_has_run=[]):
    if _has_run: return
    print("my_function1 doing stuff")
    _has_run.append(1)

def my_function2(_has_run=[]):
    if _has_run: return
    print("my_function2 doing some other stuff")
    _has_run.append(1)

for i in range(10):
    my_function1()
    my_function2()

print('----')
my_function1(_has_run=[])  # Force it to run.
Output:
my_function1 doing stuff
my_function2 doing some other stuff
----
my_function1 doing stuff
This could be simplified a little further by doing what @gnibbler suggested in his answer and using an iterator (introduced in Python 2.2):
from itertools import count

def my_function3(_count=count()):
    if next(_count): return
    print("my_function3 doing something")

for i in range(10):
    my_function3()

print('----')
my_function3(_count=count())  # Force it to run.
Output:
my_function3 doing something
----
my_function3 doing something
Here's an answer that doesn't involve reassignment of functions, yet still prevents the need for that ugly "is first" check.
__missing__ is supported by Python 2.5 and above.
def do_once_varname1():
    print 'performing varname1'
    return 'only done once for varname1'

def do_once_varname2():
    print 'performing varname2'
    return 'only done once for varname2'

class cdict(dict):
    def __missing__(self, key):
        val = self['do_once_' + key]()
        self[key] = val
        return val

cache_dict = cdict(do_once_varname1=do_once_varname1, do_once_varname2=do_once_varname2)

if __name__ == '__main__':
    print cache_dict['varname1']  # causes 2 prints
    print cache_dict['varname2']  # causes 2 prints
    print cache_dict['varname1']  # just 1 print
    print cache_dict['varname2']  # just 1 print
Output:
performing varname1
only done once for varname1
performing varname2
only done once for varname2
only done once for varname1
only done once for varname2
One object-oriented approach is to make your function a class, aka a "functor", whose instances automatically keep track of whether they've been run or not when each instance is created.
Since your updated question indicates you may need many of them, I've updated my answer to deal with that by using a class factory pattern. This is a bit unusual, and it may have been down-voted for that reason (although we'll never know for sure because they never left a comment). It could also be done with a metaclass, but it's not much simpler.
def RunOnceFactory():
    class RunOnceBase(object):  # abstract base class
        _shared_state = {}  # shared state of all instances (Borg pattern)
        has_run = False
        def __init__(self, *args, **kwargs):
            self.__dict__ = self._shared_state
            if not self.has_run:
                self.stuff_done_once(*args, **kwargs)
                self.has_run = True
    return RunOnceBase

if __name__ == '__main__':
    class MyFunction1(RunOnceFactory()):
        def stuff_done_once(self, *args, **kwargs):
            print("MyFunction1.stuff_done_once() called")

    class MyFunction2(RunOnceFactory()):
        def stuff_done_once(self, *args, **kwargs):
            print("MyFunction2.stuff_done_once() called")

    for _ in range(10):
        MyFunction1()  # will only call its stuff_done_once() method once
        MyFunction2()  # ditto
Output:
MyFunction1.stuff_done_once() called
MyFunction2.stuff_done_once() called
Note: You could make a function/class able to do stuff again by adding a reset() method to its subclass that resets the shared has_run attribute. It's also possible to pass regular and keyword arguments to the stuff_done_once() method when the functor is created and the method is called, if desired.
And, yes, it would be applicable given the information you added to your question.
Assuming there is some reason why myFunction() can't be called before the loop
from itertools import count

for i in count():
    if i == 0:
        myFunction()
Here's an explicit way to code this up, where the state of which functions have been called is kept locally (so global state is avoided). I don't much like the non-explicit forms suggested in other answers: it's too surprising to see f() and for this not to mean that f() gets called.
This works by using dict.pop which looks up a key in a dict, removes the key from the dict, and takes a default value to use in case the key isn't found.
def do_nothing(*args, **kwargs):
    pass

# A list of all the functions you want to run just once.
actions = [
    my_function,
    other_function
]
actions = dict((action, action) for action in actions)

while True:
    if some_condition:
        actions.pop(my_function, do_nothing)()
    if some_other_condition:
        actions.pop(other_function, do_nothing)()
I use the cached_property decorator from functools to run a function just once and save the value. Example from the official documentation https://docs.python.org/3/library/functools.html
import statistics
from functools import cached_property

class DataSet:
    def __init__(self, sequence_of_numbers):
        self._data = tuple(sequence_of_numbers)

    @cached_property
    def stdev(self):
        return statistics.stdev(self._data)
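Usage then looks like this (stdev is computed on the first access and cached for later ones):
ds = DataSet([1, 2, 3, 4])
print(ds.stdev)  # computed and cached on first access
print(ds.stdev)  # served from the cache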
You can also use one of the standard library functools.lru_cache or functools.cache decorators in front of the function:
from functools import lru_cache

@lru_cache
def expensive_function():
    return None
https://docs.python.org/3/library/functools.html
If I understand the updated question correctly, something like this should work
def function1():
    print "function1 called"

def function2():
    print "function2 called"

def function3():
    print "function3 called"

called_functions = set()

while True:
    n = raw_input("choose a function: 1, 2 or 3 ")
    func = {"1": function1,
            "2": function2,
            "3": function3}.get(n)
    if func in called_functions:
        print "That function has already been called"
    else:
        called_functions.add(func)
        func()
You have all those "junk variables" outside of your mainline while True loop. To make the code easier to read, those variables can be brought inside the loop, right next to where they are used. You can also set up a variable naming convention for these program control switches. So, for example:
# _already_done checkpoint logic
try:
    ran_this_user_request_already_done
except NameError:
    this_user_request()
    ran_this_user_request_already_done = 1
Note that on the first execution of this code the variable ran_this_user_request_already_done is not defined until after this_user_request() is called.
A simple function you can reuse in many places in your code (based on the other answers here):
def firstrun(keyword, _keys=[]):
    """Returns True only the first time it's called with each keyword."""
    if keyword in _keys:
        return False
    else:
        _keys.append(keyword)
        return True
or equivalently (if you like to rely on other libraries):
from collections import defaultdict
from itertools import count

def firstrun(keyword, _keys=defaultdict(count)):
    """Returns True only the first time it's called with each keyword."""
    return not _keys[keyword].next()
Sample usage:
for i in range(20):
    if firstrun('house'):
        build_house()  # runs only once
    if firstrun(42):  # True
        print 'This will print.'
    if firstrun(42):  # False
        print 'This will never print.'
I've taken a more flexible approach inspired by the functools.partial function:
DO_ONCE_MEMORY = []

def do_once(id, func, *args, **kwargs):
    if id not in DO_ONCE_MEMORY:
        DO_ONCE_MEMORY.append(id)
        return func(*args, **kwargs)
    else:
        return None
With this approach you are able to have more complex and explicit interactions:
do_once('foobar', print, "first try")
do_once('foobar', print, "first try")   # skipped: 'foobar' already ran
do_once('bar', print, "second try")
# first try
# second try
The exciting part about this approach is that it can be used anywhere and does not require factories - it's just a small memory tracker.
Depending on the situation, an alternative to the decorator could be the following:
from itertools import chain, repeat

func_iter = chain((myFunction,), repeat(lambda *args, **kwds: None))
while True:
    next(func_iter)()
The idea is based on iterators, which yield the function once (or, using repeat(myFunction, n), n times), and then endlessly yield the lambda doing nothing.
The main advantage is that you don't need a decorator, which sometimes complicates things; here everything happens in a single (to my mind) readable line. The disadvantage is that you have an ugly next in your code.
Performance-wise there seems to be not much of a difference; on my machine both approaches have an overhead of around 130 ns.
If the condition check needs to happen only once while you are in the loop, a flag signaling that you have already run the function helps. In this case a counter is used, but a boolean variable would work just as well.
signal = False
count = 0

def callme():
    print "I am being called"

while count < 2:
    if signal == False:
        callme()
        signal = True
    count += 1
I'm not sure that I understood your problem, but I think you can divide the loop into two parts: the part with the function call and the part without it, keeping the two loops separate.
