Minimize repetition between the same function definition sync and async [duplicate] - python

I have two code paths, one sync and one async, and I want them to behave identically except for their synchronicity flavor.
To do that, I'm trying to keep as much code as possible in common between them, as DRY as I can.
Here is my problem:
def my_func(conn, several, keyword, arguments, that, are, tedious, to, refactor, but, must, be, explicit):
    query = template(keyword, arguments, that, are, tedious, to, refactor, but, must, be, explicit)
    res = find_all_sync(conn, query)
    return res

async def my_func(conn, several, keyword, arguments, that, are, tedious, to, refactor, but, must, be, explicit):
    query = template(keyword, arguments, that, are, tedious, to, refactor, but, must, be, explicit)
    res = await find_all_async(conn, query)
    return res
As you see, each version embeds the logic to call find_all_sync vs find_all_async and to await the latter, so I can't simply write the sync version and wrap it in an async wrapper; internally I call a slightly different function.
Most of the rest of my logic is contained in the templating step, but I can't find any way to further abstract the repetition of the tedious arguments while still having them listed explicitly. What I'd imagine is something like:
# how can I make this /definition/ choose correctly between sync and async?
def _my_func(conn, several, keyword, arguments, that, are, tedious, to, refactor, but, must, be, explicit, is_async=False):
    query = template(keyword, arguments, that, are, tedious, to, refactor, but, must, be, explicit)
    if is_async:
        # this doesn't work: the enclosing function is sync, so `await` is a syntax error here
        res = await find_all_async(conn, query)
    else:
        res = find_all_sync(conn, query)
    return res
Is there an easier way to have one code path for sync + async? Am I overlooking something simple? (I'm also open to slightly more complicated options like inspecting the arguments inside the function.)

There's a decorator in the curio package that allows a function to have an async and sync implementation, which can be found here:
https://github.com/dabeaz/curio/blob/master/curio/meta.py#L118
Here's the decorator implementation:
def awaitable(syncfunc):
    '''
    Decorator that allows an asynchronous function to be paired with a
    synchronous function in a single function call. The selection of
    which function executes depends on the calling context. For example:

        def spam(sock, maxbytes):                        (A)
            return sock.recv(maxbytes)

        @awaitable(spam)                                 (B)
        async def spam(sock, maxbytes):
            return await sock.recv(maxbytes)

    In later code, you could use the spam() function in either a synchronous
    or asynchronous context. For example:

        def foo():
            ...
            r = spam(s, 1024)          # Calls synchronous function (A) above
            ...

        async def bar():
            ...
            r = await spam(s, 1024)    # Calls async function (B) above
            ...
    '''
    def decorate(asyncfunc):
        if inspect.signature(syncfunc) != inspect.signature(asyncfunc):
            raise TypeError(f'{syncfunc.__name__} and async {asyncfunc.__name__} have different signatures')

        @wraps(asyncfunc)
        def wrapper(*args, **kwargs):
            if from_coroutine():
                return asyncfunc(*args, **kwargs)
            else:
                return syncfunc(*args, **kwargs)
        wrapper._syncfunc = syncfunc
        wrapper._asyncfunc = asyncfunc
        wrapper._awaitable = True
        wrapper.__doc__ = syncfunc.__doc__ or asyncfunc.__doc__
        return wrapper
    return decorate
(Note the use of inspect.signature to quickly check that the two functions are compatible.)
The method it uses to determine the calling context is tied-up in this function:
def from_coroutine(level=2, _cache={}):
    f_code = _getframe(level).f_code
    if f_code in _cache:
        return _cache[f_code]
    if f_code.co_flags & _CO_FROM_COROUTINE:
        _cache[f_code] = True
        return True
    else:
        # Comment: It's possible that we could end up here if one calls a function
        # from the context of a list comprehension or a generator expression. For
        # example:
        #
        #     async def coro():
        #          ...
        #          a = [ func() for x in s ]
        #          ...
        #
        # Where func() is some function that we've wrapped with one of the decorators
        # below. If so, the code object is nested and has a name such as <listcomp> or <genexpr>
        if (f_code.co_flags & _CO_NESTED and f_code.co_name[0] == '<'):
            return from_coroutine(level + 2)
        else:
            _cache[f_code] = False
            return False
This involves some pretty gnarly frame hacks, but it should get you where you want to go. If you're only interested in determining whether a function is a coroutine or not, there's inspect.iscoroutinefunction:
>>> import inspect
>>> async def f():
... ...
...
>>> inspect.iscoroutinefunction(f)
True
>>> def g():
... ...
...
>>> inspect.iscoroutinefunction(g)
False

I think you should write either an async or a regular function and then use a library like unsync, so it will be executed in the event loop or in a thread/process executor according to its type.
Some examples from unsync doc:
import asyncio
import time
from unsync import unsync

@unsync
async def unsync_async():
    await asyncio.sleep(1)
    return 'I like decorators'

@unsync
def non_async_function(seconds):
    time.sleep(seconds)
    return 'Run concurrently!'
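In both cases the decorated function returns an Unfuture immediately; per the unsync README you block for the value with .result():

print(unsync_async().result())           # 'I like decorators' after ~1 second
print(non_async_function(2).result())    # 'Run concurrently!' after ~2 seconds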

Related

Why was Python decorator chaining designed to work backwards? What is the logic behind this order?

To start with, my question here is about the semantics and the logic behind why the Python language was designed this way in the case of chained decorators. Please notice the nuance of how this differs from the question
How decorators chaining work?
Link: How decorators chaining work? It seems quite a number of other users had the same doubts about the call order of chained Python decorators. It is not that I can't add a __call__ and see the order for myself. I get this; my point is, why was it designed to start from the bottom when it comes to chained Python decorators?
E.g.
def first_func(func):
    def inner():
        x = func()
        return x * x
    return inner

def second_func(func):
    def inner():
        x = func()
        return 2 * x
    return inner

@first_func
@second_func
def num():
    return 10

print(num())
Quoting the documentation on decorators:
The decorator syntax is merely syntactic sugar, the following two function definitions are semantically equivalent:
def f(arg):
    ...
f = staticmethod(f)

@staticmethod
def f(arg):
    ...
From this it follows that the decoration in
@a
@b
@c
def fun():
    ...
is equivalent to
fun = a(b(c(fun)))
IOW, it was designed like that because it's just syntactic sugar.
For proof, let's just decorate an existing function and not return a new one:
def dec1(f):
    print(f"dec1: got {vars(f)}")
    f.dec1 = True
    return f

def dec2(f):
    print(f"dec2: got {vars(f)}")
    f.dec2 = True
    return f

@dec1
@dec2
def foo():
    pass

print(f"Fully decked out: {vars(foo)}")
prints out
dec2: got {}
dec1: got {'dec2': True}
Fully decked out: {'dec2': True, 'dec1': True}
TL;DR
g(f(x)) means applying f to x first, then applying g to the output.
Omit the parentheses, add @ before and a line break after each function name:

@g
@f
x

(Syntax only valid if x is the definition of a function/class.)
Abstract explanation
The reasoning behind this design decision becomes fairly obvious IMHO, if you remember what the decorator syntax - in its most abstract and general form - actually means. So I am going to try the abstract approach to explain this.
It is all about syntax
To be clear here, the distinguishing factor in the concept of the "decorator" is not the object underneath it (so to speak) nor the operation it performs. It is the special syntax and the restrictions for it. Thus, a decorator at its core is nothing more than a feature of Python grammar.
The decorator syntax requires a target to be decorated. Initially (see PEP 318) the target could only be function definitions; later class definitions were also allowed to be decorated (see PEP 3129).
Minimal valid syntax
Syntactically, this is valid Python:
def f(): pass

@f
class Target: pass  # or `def target(): pass`
However, this will (perhaps unsurprisingly) cause a TypeError upon execution. As has been reiterated multiple times here and in other posts on this platform, the above is equivalent to this:
def f(): pass
class Target: pass
Target = f(Target)
Minimal working decorator
The TypeError stems from the fact that f lacks a positional argument. This is the obvious logical restriction imposed by what a decorator is supposed to do. Thus, to achieve not only syntactically valid code, but also have it run without errors, this is sufficient:
def f(x): pass

@f
class Target: pass
This is still not very useful, but it is enough for the most general form of a working decorator.
Decoration is just the application of a function to the target and the assignment of the output to the target's name.
Chaining functions ⇒ Chaining decorators
We can ignore the target and what it is or does and focus only on the decorator. Since it merely stands for applying a function, the order of operations comes into play, as soon as we have more than one. What is the order of operation, when we chain functions?
def f(x): pass
def g(x): pass
class Target: pass
Target = g(f(Target))
Well, just like in the composition of purely mathematical functions, this implies that we apply f to Target first and then apply g to the result of f. Despite g appearing first (i.e. further left), it is not what is applied first.
Since stacking decorators is equivalent to nesting functions, it seems obvious to define the order of operation the same way. This time, we just skip the parentheses, add an @ symbol in front of the function name and a line break after it.
def f(x): pass
def g(x): pass
@g
@f
class Target: pass
But, why though?
If after the explanation above (and reading the PEPs for historic background), the reasoning behind the order of operation is still not clear or still unintuitive, there is not really any good answer left, other than "because the devs thought it made sense, so get used to it".
PS
I thought I'd add a few things for additional context based on all the comments around your question.
Decoration vs. calling a decorated function
A source of confusion seems to be the distinction between what happens when applying the decorator versus calling the decorated function.
Notice that in my examples above I never actually called the target itself (the class or function being decorated). Decoration is itself a function call: adding @f above the target calls f and passes the target to it as the first positional argument.
A "decorated function" might not even be a function
The distinction is very important because nowhere does it say that a decorator actually needs to return a callable (function or class). f being just a function means it can return whatever it wants. This is again valid and working Python code:
def f(x): return 3.14

@f
def target(): return "foo"

try:
    target()
except Exception as e:
    print(repr(e))

print(target)
Output:
TypeError("'float' object is not callable")
3.14
Notice that the name target does not even refer to a function anymore. It just holds the 3.14 returned by the decorator. Thus, we cannot even call target. The entire function behind it is essentially lost immediately before it is even available to the global namespace. That is because f just completely ignores its first positional argument x.
Replacing a function
Expanding this further, if we want, we can have f return a function. Not doing that seems very strange, considering it is used to decorate a function. But it doesn't have to be related to the target at all. Again, this is fine:
def bar(): return "bar"
def f(x): return bar

@f
def target(): return "foo"

print(target())
print(target is bar)
Output:
bar
True
It comes down to convention
The way decorators are actually overwhelmingly used out in the wild, is in a way that still keeps a reference to the target being decorated around somewhere. In practice it can be as simple as this:
def f(x):
    print(f"applied `f({x.__name__})`")
    return x

@f
def target(): return "foo"
Just running this piece of code outputs applied f(target). Again, notice that we don't call target here, we only called f. But now, the decorated function is still target, so we could add the call print(target()) at the bottom and that would output foo after the other output produced by f.
The fact that most decorators don't just throw away their target comes down to convention. You (as a developer) would not expect your function/class to simply be thrown away completely, when you use a decorator.
Decoration with wrapping
This is why real-life decorators typically either return the reference to the target at the end outright (like in the last example) or they return a different callable, but that callable itself calls the target, meaning a reference to the target is kept in that new callable's local namespace. These functions are what is usually referred to as wrappers:
def f(x):
    print(f"applied `f({x.__name__})`")
    def wrapper():
        print(f"wrapper executing with {locals()=}")
        return x()
    return wrapper

@f
def target(): return "foo"

print(f"{target()=}")
print(f"{target.__name__=}")
Output:
applied `f(target)`
wrapper executing with locals()={'x': <function target at 0x7f1b2f78f250>}
target()='foo'
target.__name__='wrapper'
As you can see, what the decorator left us is wrapper, not what we originally defined as target. And the wrapper is what we call, when we write target().
Wrapping wrappers
This is the kind of behavior we typically expect, when we use decorators. And therefore it is not surprising that multiple decorators stacked together behave the way they do. They are called from the inside out (as explained above), and each adds its own wrapper around what it receives from the one applied before:
def f(x):
    print(f"applied `f({x.__name__})`")
    def wrapper_from_f():
        print(f"wrapper_from_f executing with {locals()=}")
        return x()
    return wrapper_from_f

def g(x):
    print(f"applied `g({x.__name__})`")
    def wrapper_from_g():
        print(f"wrapper_from_g executing with {locals()=}")
        return x()
    return wrapper_from_g

@g
@f
def target(): return "foo"

print(f"{target()=}")
print(f"{target.__name__=}")
Output:
applied `f(target)`
applied `g(wrapper_from_f)`
wrapper_from_g executing with locals()={'x': <function f.<locals>.wrapper_from_f at 0x7fbfc8d64f70>}
wrapper_from_f executing with locals()={'x': <function target at 0x7fbfc8d65630>}
target()='foo'
target.__name__='wrapper_from_g'
This shows very clearly the difference between the order in which the decorators are called and the order in which the wrapped/wrapping functions are called.
After the decoration is done, we are left with wrapper_from_g, which is referenced by our target name in global namespace. When we call it, wrapper_from_g executes and calls wrapper_from_f, which in turn calls the original target.
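As an aside, the target.__name__='wrapper_from_g' artifact above is exactly what functools.wraps addresses: it copies the target's metadata onto the wrapper. A minimal sketch (my addition, not part of the original answer):

import functools

def f(x):
    @functools.wraps(x)   # copy __name__, __doc__, etc. from x onto wrapper
    def wrapper():
        return x()
    return wrapper

@f
def target(): return "foo"

print(target.__name__)    # 'target', not 'wrapper'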

How to add object to a list after each getter (how to repeat similar logic in getters)

# Message Object Creation File
msgList = []

def getMessageObjectA():
    msg = MessageCreator(msgAttribute1, msgAttribute2)
    msgList.append(msg)
    return msg

def getMessageObjectB():
    msg = MessageCreator(msgAttribute3, msgAttribute4)
    msgList.append(msg)
    return msg

def getMessageObjectC():
    msg = MessageCreator(msgAttribute5, msgAttribute6)
    msgList.append(msg)
    return msg

def clearMessages():
    for msg in msgList:
        pass  # logic to clear messages
# Test Script #1
import MessageObjects as MsgObj
a = MsgObj.getMessageObjectA()
c = MsgObj.getMessageObjectC()
# Do stuff
MsgObj.clearMessages()
# Do more stuff
# Test Script #223423423
import MessageObjects as MsgObj
e = MsgObj.getMessageObjectE()
u = MsgObj.getMessageObjectU()
y = MsgObj.getMessageObjectY()
# Do stuff
MsgObj.clearMessages()
# Do more stuff
In the actual code, I will have over a hundred getMessageObject() functions. And in certain places, I will only call some of those getMessageObject() functions depending on what is needed, which is why I have those getters.
Adding this line msgList.append(msg) inside every function introduces human programming error and possibly unnecessarily adds to the length of the source code file.
How do I have every getter call msgList.append(msg)? Is there some sort of fancy way to wrap all of this logic in a wrapper function that I'm not thinking of? I'm pretty sure decorators won't work because they don't see the variables inside the function, and I would have to repeat those decorators too for every function I make.
NOTE: Answer has to be in Python2. Can't use Python3 at work.
NOTE: The intent is for these getters() to be inside a Constants-like file, where our many different test scripts call these getters.
The simplest solution is just to generalize the function. The only difference between the various getItem functions is the arguments passed to GenerateItem, so just pass that data in to getItem:
def getItem(arg1, arg2):
    item = GenerateItem(arg1, arg2)
    itemList.append(item)
    return item

a = getItem(val1, val2)
b = getItem(val3, val4)
If you need functions with specific names, just create aliases. This can be done easily using functools.partial:
from functools import partial

getItemA = partial(getItem, val1, val2)
getItemB = partial(getItem, val3, val4)

a = getItemA()
b = getItemB()
The arguments are partially applied to getItem, and a 0-arity function is returned and placed in the alias.
Of course though, manually hardcoding all these also leads to sources of error. You may want to reconsider how things are set up if this is necessary.
Why should the itemList be populated inside the getters in the first place?
What you can do instead is, when you call such a getter, add a line appending the returned item to the list:
a = getItemA()
itemList.append(a)
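For completeness: despite the question's doubt, a decorator does work here, because it only needs the getter's return value, not the getter's internals. A Python 2 compatible sketch (MessageCreator and the attribute names are the question's placeholders):

msgList = []

def collect(func):
    def wrapper(*args, **kwargs):
        msg = func(*args, **kwargs)
        msgList.append(msg)  # record every message a decorated getter returns
        return msg
    return wrapper

@collect
def getMessageObjectA():
    return MessageCreator(msgAttribute1, msgAttribute2)

The decorator line still has to appear on each getter, but the append logic lives in exactly one place.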

Make callers use functools.partial or call a factory function?

I've read several arguments comparing partial vs lambda, but most of them talked about how partial is more flexible (not limited to expressions) and gives info about the wrapped function. But I want to consider this from the caller's perspective. Here's my situation.
I have a function that takes a 1-argument modifier function. A request is passed into that modifier function to be modified:
def my_func(request, modifier):
    modifier(request)
I'm also building some utilities that make it easier to create parameterized modifier functions, e.g. adding/modifying URL params to the request. I thought of two ways of doing it, but I'm not sure which one is better.
Option 1
def add_params(request, params):
    for param in params:
        ...  # Manipulate the request with param.
This way, callers can use functools.partial to bind the params, like this:
modifier = functools.partial(add_params, params={'abc':'123'})
Option 2
def add_params(params):
    def func(request):
        for param in params:
            ...  # Modify request with param.
    return func
Then callers use it like this:
modifier = add_params({'abc':'123'})
Question
If I don't care about function introspection, are there any downsides to using option 2? Would option 2 run into late binding issues? (Although my use case doesn't run into that). I really like how option 2 is easier for callers to use.
The two functions are completely isomorphic to each other from a mathematical perspective (though their efficiency may vary):
# Option 1
(Request, Params) -> None
# Option 2
Params -> (Request -> None)
For your purpose, I would say option 2 offers the most convenience since the function is already curried, so you can not only avoid partial but can also compose them easily:
import functools

def compose(*fs):
    return functools.reduce(lambda f, g: lambda x: f(g(x)), fs)

modifier = compose(add_params({'abc': '123'}),
                   add_params({'def': '456'}))
If you ever want to call the function directly you can always do:
add_params({'abc':'123'})(request)
which is not really all that involved compared to option 1:
add_params(request, {'abc':'123'})
Late binding shouldn't pose an issue unless you use variables from outside the function, and if you do there's always a way to work around it.
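To illustrate the late-binding point with a quick sketch (my own example, not from the answer): the pitfall only appears when a closure captures a variable that later changes, and a default argument pins the value down:

fns = [lambda: i for i in range(3)]
print([f() for f in fns])        # [2, 2, 2] -- every lambda sees the final i

fns = [lambda i=i: i for i in range(3)]
print([f() for f in fns])        # [0, 1, 2] -- defaults bind i at definition time

Option 2 is safe in this respect because add_params({'abc': '123'}) evaluates its argument eagerly, so the returned modifier closes over a fixed params object.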
Unfortunately option 2 has the disadvantage of being annoying to define, but this can be simplified using decorators:
def curry_request(f):
    def wrapper(*args, **kwargs):
        def inner(request):
            f(request, *args, **kwargs)
        return inner
    return wrapper

@curry_request
def add_params(request, params):
    ...  # do something
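With the decorator applied, callers get the option 2 interface from an option 1 style definition:

modifier = add_params({'abc': '123'})   # returns the inner(request) closure
modifier(request)                       # applies the params to the request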
def partial(func, *args, **keywords):
    def newfunc(*fargs, **fkeywords):
        newkeywords = keywords.copy()
        newkeywords.update(fkeywords)
        return func(*(args + fargs), **newkeywords)
    newfunc.func = func
    newfunc.args = args
    newfunc.keywords = keywords
    return newfunc
Compared with the implementation of partial above, your option 2 is also fine; I don't think it has any downsides in your situation. But functools.partial is the common way to produce a function with a simplified signature, and if you want to return a new partial function for yet another function, you can just invoke partial again. With the option 2 model, you would need to implement a new factory function each time.

cleaning up nested function calls

I have written several functions that run sequentially, each one taking as its input the output of the previous one, so to run them I have to write this line of code:
make_list(cleanup(get_text(get_page(URL))))
I just find that ugly and inefficient. Is there a better way to make sequential function calls?
Really, this is the same as any case where you want to refactor commonly-used complex expressions or statements: just turn the expression or statement into a function. The fact that your expression happens to be a composition of function calls doesn't make any difference (but see below).
So, the obvious thing to do is to write a wrapper function that composes the functions together in one place, so everywhere else you can make a simple call to the wrapper:
def get_page_list(url):
    return make_list(cleanup(get_text(get_page(url))))
things = get_page_list(url)
stuff = get_page_list(another_url)
spam = get_page_list(eggs)
If you don't always call the exact same chain of functions, you can always factor out into the pieces that you frequently call. For example:
def get_clean_text(page):
    return cleanup(get_text(page))

def get_clean_page(url):
    return get_clean_text(get_page(url))
This refactoring also opens the door to making the code a bit more verbose but a lot easier to debug, since it only appears once instead of multiple times:
def get_page_list(url):
    page = get_page(url)
    text = get_text(page)
    cleantext = cleanup(text)
    return make_list(cleantext)
If you find yourself needing to do exactly this kind of refactoring of composed functions very often, you can always write a helper that generates the refactored functions. For example:
from functools import wraps

def compose1(*funcs):
    @wraps(funcs[0])
    def composed(arg):
        for func in reversed(funcs):
            arg = func(arg)
        return arg
    return composed

get_page_list = compose1(make_list, cleanup, get_text, get_page)
If you want a more complicated compose function (that, e.g., allows passing multiple args/return values around), it can get a bit complicated to design, so you might want to look around on PyPI and ActiveState for the various existing implementations.
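As a sketch of that more general direction (my own illustration, not from any particular library): letting the innermost function accept arbitrary arguments is a small, common generalization:

def compose(*funcs):
    def composed(*args, **kwargs):
        result = funcs[-1](*args, **kwargs)   # innermost call gets all the arguments
        for func in reversed(funcs[:-1]):
            result = func(result)             # the rest each receive a single value
        return result
    return composed

get_page_list = compose(make_list, cleanup, get_text, get_page)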
You could try something like this. I always like breaking up train wrecks (the book "Clean Code" calls nested calls like these train wrecks). This is easier to read and debug. Remember, you probably spend twice as long reading your code as writing it, so make it easy to read. You will thank yourself later.
page = get_page(URL)
page_text = get_text(page)
make_list(cleanup(page_text))

# you can also encapsulate that into its own function
def build_page_list_from_url(url):
    page = get_page(url)
    page_text = get_text(page)
    return make_list(cleanup(page_text))
Options:
Refactor: implement this series of function calls as one, aptly-named method.
Look into decorators. They're syntactic sugar for 'chaining' functions in this way, e.g. implement cleanup and make_list as decorators, then decorate get_text with them.
Compose the functions. See code in this answer.
You could shorten constructs like that with something like the following:
class ChainCalls(object):
    def __init__(self, *funcs):
        self.funcs = funcs

    def __call__(self, *args, **kwargs):
        result = self.funcs[-1](*args, **kwargs)
        for func in self.funcs[-2::-1]:
            result = func(result)
        return result

def make_list(arg): return 'make_list(%s)' % arg
def cleanup(arg): return 'cleanup(%s)' % arg
def get_text(arg): return 'get_text(%s)' % arg
def get_page(arg): return 'get_page(%r)' % arg

mychain = ChainCalls(make_list, cleanup, get_text, get_page)

print(mychain('http://is.gd'))
Output:
make_list(cleanup(get_text(get_page('http://is.gd'))))

Efficient way of having a function only execute once in a loop

At the moment, I'm doing stuff like the following, which is getting tedious:
run_once = 0
while 1:
    if run_once == 0:
        myFunction()
        run_once = 1
I'm guessing there is some more accepted way of handling this stuff?
What I'm looking for is having a function execute once, on demand. For example, at the press of a certain button. It is an interactive app which has a lot of user controlled switches. Having a junk variable for every switch, just for keeping track of whether it has been run or not, seemed kind of inefficient.
I would use a decorator on the function to handle keeping track of how many times it runs.
def run_once(f):
    def wrapper(*args, **kwargs):
        if not wrapper.has_run:
            wrapper.has_run = True
            return f(*args, **kwargs)
    wrapper.has_run = False
    return wrapper

@run_once
def my_function(foo, bar):
    return foo + bar
Now my_function will only run once. Other calls to it will return None. Just add an else clause to the if if you want it to return something else. From your example, it doesn't need to return anything ever.
If you don't control the creation of the function, or the function needs to be used normally in other contexts, you can just apply the decorator manually as well.
action = run_once(my_function)

while 1:
    if predicate:
        action()
This will leave my_function available for other uses.
Finally, if you need the "run once" behavior twice, you can reset the flag in between:
action = run_once(my_function)
action() # run once the first time
action.has_run = False
action() # run once the second time
Another option is to set the func_code code object for your function to be a code object for a function that does nothing. This should be done at the end of your function body.
For example:
def run_once():
    # Code for something you only want to execute once
    run_once.func_code = (lambda: None).func_code
Here run_once.func_code = (lambda:None).func_code replaces your function's executable code with the code for lambda:None, so all subsequent calls to run_once() will do nothing.
This technique is less flexible than the decorator approach suggested in the accepted answer, but may be more concise if you only have one function you want to run once.
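Note also that func_code is the Python 2 spelling of the attribute; on Python 3 the same trick uses __code__. A sketch of the equivalent:

def run_once():
    print("one-time setup")                       # the body you want to run exactly once
    run_once.__code__ = (lambda: None).__code__   # all later calls do nothing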
Run the function before the loop. Example:
myFunction()
while True:
# all the other code being executed in your loop
This is the obvious solution. If there's more than meets the eye, the solution may be a bit more complicated.
I'm assuming this is an action that you want to be performed at most one time, if some conditions are met. Since you won't always perform the action, you can't do it unconditionally outside the loop. Something like lazily retrieving some data (and caching it) if you get a request, but not retrieving it otherwise.
def do_something():
    [x() for x in expensive_operations]
    global action
    action = lambda: None

action = do_something

while True:
    # some sort of complex logic...
    if foo:
        action()
There are many ways to do what you want; however, do note that it is quite possible that —as described in the question— you don't have to call the function inside the loop.
If you insist on having the function call inside the loop, you can also do:
needs_to_run = expensive_function
while 1:
    …
    if needs_to_run: needs_to_run(); needs_to_run = None
    …
I've thought of another—slightly unusual, but very effective—way to do this that doesn't require decorator functions or classes. Instead it just uses a mutable keyword argument, which ought to work in most versions of Python. Most of the time these are something to be avoided since normally you wouldn't want a default argument value to change from call-to-call—but that ability can be leveraged in this case and used as a cheap storage mechanism. Here's how that would work:
def my_function1(_has_run=[]):
    if _has_run: return
    print("my_function1 doing stuff")
    _has_run.append(1)

def my_function2(_has_run=[]):
    if _has_run: return
    print("my_function2 doing some other stuff")
    _has_run.append(1)

for i in range(10):
    my_function1()
    my_function2()

print('----')
my_function1(_has_run=[])  # Force it to run.
Output:
my_function1 doing stuff
my_function2 doing some other stuff
----
my_function1 doing stuff
This could be simplified a little further by doing what @gnibbler suggested in his answer and using an iterator (iterators were introduced in Python 2.2):
from itertools import count

def my_function3(_count=count()):
    if next(_count): return
    print("my_function3 doing something")

for i in range(10):
    my_function3()

print('----')
my_function3(_count=count())  # Force it to run.
Output:
my_function3 doing something
----
my_function3 doing something
Here's an answer that doesn't involve reassignment of functions, yet still prevents the need for that ugly "is first" check.
__missing__ is supported by Python 2.5 and above.
def do_once_varname1():
    print 'performing varname1'
    return 'only done once for varname1'

def do_once_varname2():
    print 'performing varname2'
    return 'only done once for varname2'

class cdict(dict):
    def __missing__(self, key):
        val = self['do_once_' + key]()
        self[key] = val
        return val

cache_dict = cdict(do_once_varname1=do_once_varname1, do_once_varname2=do_once_varname2)

if __name__ == '__main__':
    print cache_dict['varname1']  # causes 2 prints
    print cache_dict['varname2']  # causes 2 prints
    print cache_dict['varname1']  # just 1 print
    print cache_dict['varname2']  # just 1 print
Output:
performing varname1
only done once for varname1
performing varname2
only done once for varname2
only done once for varname1
only done once for varname2
One object-oriented approach is to make your function a class, a.k.a. a "functor", whose instances automatically keep track of whether they've been run when each instance is created.
Since your updated question indicates you may need many of them, I've updated my answer to deal with that by using a class factory pattern. This is a bit unusual, and it may have been down-voted for that reason (although we'll never know for sure because they never left a comment). It could also be done with a metaclass, but it's not much simpler.
def RunOnceFactory():
    class RunOnceBase(object):  # abstract base class
        _shared_state = {}      # shared state of all instances (borg pattern)
        has_run = False
        def __init__(self, *args, **kwargs):
            self.__dict__ = self._shared_state
            if not self.has_run:
                self.stuff_done_once(*args, **kwargs)
                self.has_run = True
    return RunOnceBase

if __name__ == '__main__':
    class MyFunction1(RunOnceFactory()):
        def stuff_done_once(self, *args, **kwargs):
            print("MyFunction1.stuff_done_once() called")

    class MyFunction2(RunOnceFactory()):
        def stuff_done_once(self, *args, **kwargs):
            print("MyFunction2.stuff_done_once() called")

    for _ in range(10):
        MyFunction1()  # will only call its stuff_done_once() method once
        MyFunction2()  # ditto
Output:
MyFunction1.stuff_done_once() called
MyFunction2.stuff_done_once() called
Note: You could make a function/class able to do its stuff again by adding a reset() method to its subclass that resets the shared has_run attribute. It's also possible to pass regular and keyword arguments to the stuff_done_once() method when the functor is created and the method is called, if desired.
And, yes, it would be applicable given the information you added to your question.
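A sketch of such a reset() hook under the factory above (my illustration, not part of the original answer):

class MyFunction1(RunOnceFactory()):
    def stuff_done_once(self, *args, **kwargs):
        print("MyFunction1.stuff_done_once() called")

    @classmethod
    def reset(cls):
        cls._shared_state['has_run'] = False  # the next instantiation runs again

MyFunction1()        # prints
MyFunction1()        # silent
MyFunction1.reset()
MyFunction1()        # prints again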
Assuming there is some reason why myFunction() can't be called before the loop
from itertools import count

for i in count():
    if i == 0:
        myFunction()
Here's an explicit way to code this up, where the state of which functions have been called is kept locally (so global state is avoided). I don't much like the non-explicit forms suggested in other answers: it's too surprising to see f() and for this not to mean that f() gets called.
This works by using dict.pop which looks up a key in a dict, removes the key from the dict, and takes a default value to use in case the key isn't found.
def do_nothing(*args, **kwargs):
    pass

# A list of all the functions you want to run just once.
actions = [
    my_function,
    other_function,
]
actions = dict((action, action) for action in actions)

while True:
    if some_condition:
        actions.pop(my_function, do_nothing)()
    if some_other_condition:
        actions.pop(other_function, do_nothing)()
I use the cached_property decorator from functools to run a computation just once and save the value. Example from the official documentation: https://docs.python.org/3/library/functools.html
import statistics
from functools import cached_property

class DataSet:
    def __init__(self, sequence_of_numbers):
        self._data = tuple(sequence_of_numbers)

    @cached_property
    def stdev(self):
        return statistics.stdev(self._data)
You can also use one of the standard library functools.lru_cache or functools.cache decorators in front of the function:
from functools import lru_cache

@lru_cache
def expensive_function():
    return None
https://docs.python.org/3/library/functools.html
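Note that lru_cache memoizes per distinct argument tuple, so a decorated function that takes arguments runs once per new combination rather than once overall. A quick sketch to illustrate:

from functools import lru_cache

@lru_cache
def setup(name):
    print("running setup for", name)  # executes once per distinct name
    return name.upper()

setup("db")   # prints, then caches
setup("db")   # served from the cache, no print
setup("web")  # new argument, prints again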
If I understand the updated question correctly, something like this should work
def function1():
    print "function1 called"

def function2():
    print "function2 called"

def function3():
    print "function3 called"

called_functions = set()

while True:
    n = raw_input("choose a function: 1,2 or 3 ")
    func = {"1": function1,
            "2": function2,
            "3": function3}.get(n)
    if func in called_functions:
        print "That function has already been called"
    else:
        called_functions.add(func)
        func()
You have all those 'junk variables' outside of your mainline while True loop. To make the code easier to read, those variables can be brought inside the loop, right next to where they are used. You can also set up a variable naming convention for these program control switches. So for example:
# # _already_done checkpoint logic
try:
    ran_this_user_request_already_done
except NameError:
    this_user_request()
    ran_this_user_request_already_done = 1
Note that on the first execution of this code the variable ran_this_user_request_already_done is not defined until after this_user_request() is called.
A simple function you can reuse in many places in your code (based on the other answers here):
def firstrun(keyword, _keys=[]):
    """Returns True only the first time it's called with each keyword."""
    if keyword in _keys:
        return False
    else:
        _keys.append(keyword)
        return True
or equivalently (if you like to rely on other libraries):
from collections import defaultdict
from itertools import count

def firstrun(keyword, _keys=defaultdict(count)):
    """Returns True only the first time it's called with each keyword."""
    return not _keys[keyword].next()
Sample usage:
for i in range(20):
    if firstrun('house'):
        build_house()  # runs only once

if firstrun(42):  # True
    print 'This will print.'
if firstrun(42):  # False
    print 'This will never print.'
I've taken a more flexible approach inspired by functools.partial function:
DO_ONCE_MEMORY = []

def do_once(id, func, *args, **kwargs):
    if id not in DO_ONCE_MEMORY:
        DO_ONCE_MEMORY.append(id)
        return func(*args, **kwargs)
    else:
        return None
With this approach you are able to have more complex and explicit interactions:
do_once('foobar', print, "first try")
do_once('foo', print, "first try")
do_once('bar', print, "second try")
# first try
# second try
The exciting part about this approach is that it can be used anywhere and does not require factories; it's just a small memory tracker.
Depending on the situation, an alternative to the decorator could be the following:
from itertools import chain, repeat

func_iter = chain((myFunction,), repeat(lambda *args, **kwds: None))
while True:
    next(func_iter)()
The idea is based on iterators, which yield the function once (or, using repeat(myFunction, n), n times) and then endlessly yield the lambda that does nothing.
The main advantage is that you don't need a decorator, which sometimes complicates things; here everything happens in a single (to my mind) readable line. The disadvantage is that you have an ugly next in your code.
Performance wise there seems to be not much of a difference, on my machine both approaches have an overhead of around 130 ns.
If the condition check needs to happen only once you are in the loop, a flag signaling that you have already run the function helps. In this case you used a counter, but a boolean variable would work just as well.
signal = False
count = 0

def callme():
    print "I am being called"

while count < 2:
    if signal == False:
        callme()
        signal = True
    count += 1
I'm not sure I understood your problem, but I think you can divide the loop into two: the part that needs the function and the part that doesn't, and keep two separate loops.
