I have an automation test which uses a function that saves screenshots to a folder. This function is called by multiple screenshot instances. On every test run a new folder is created, so I don't care about counter reset. In order to reflect the order in which these screenshots are taken, I had to come up with names that could be sorted by order. This is my solution:
def make_screenshot_file(file_name):
    order = Counter().count
    test_suites_path = _make_job_directory()
    return make_writable_file(os.path.join(test_suites_path, 'screenshot', file_name % order))

class Counter():
    __counter_instance = None

    def __init__(self):
        if Counter.__counter_instance is None:
            self.count = 1
            Counter.__counter_instance = self
        else:
            Counter.__counter_instance.count += 1
            self.count = Counter.__counter_instance.count
It works fine for me. But I keep thinking that there should be an easier way to solve this problem. Is there? And if a singleton is the only way, could my code be optimized in any way?
What you're trying to do here is simulate a global variable.
There is no good reason to do that. If you really want a global variable, make it explicitly a global variable.
You could create a simple Counter class that increments count by 1 each time you access it, and then create a global instance of it. But the standard library already gives you something like that for free, in itertools.count, as DSM explains in a comment.
So:
import itertools

_counter = itertools.count()

def make_screenshot_file(file_name):
    order = next(_counter)
    test_suites_path = _make_job_directory()
    return make_writable_file(os.path.join(test_suites_path, 'screenshot', file_name % order))
I'm not sure why you're so worried about how much storage or time this takes up, because I can't conceive of any program where it could possibly matter whether you were using 8 bytes or 800 for a single object you could never have more than one of, or whether it took 3 ns or 3 µs to access it when you only do so a handful of times.
But if you are worried, as you can see from the source, count is implemented in C, it's pretty memory-efficient, and if you don't do anything fancy with it, it comes down to basically a single PyNumber_Add to generate each number, which is a lot less than interpreting a few lines of code.
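If you do want numbers, here's a quick sketch with timeit (results will vary by machine and Python version) showing that the per-call cost is tiny:

import itertools
import timeit

_counter = itertools.count()

# Rough per-call cost of next() on a count object: expect on the order
# of tens of nanoseconds per call on a modern CPython.
print(timeit.timeit(lambda: next(_counter), number=1000000))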
Since you asked, here's how you could radically simplify your existing code by using a _count class attribute instead of a __counter_instance class attribute:
class Counter():
    _count = 0

    def count(self):
        Counter._count += 1
        return Counter._count
Of course now you have to call Counter().count() instead of just accessing Counter().count—but you can fix that trivially with @property if it matters.
It's worth pointing out that it's a really bad idea to use a classic class instead of a new-style class (which is what you get by passing nothing inside the parens; if you really do want a classic class, leave the parens off entirely). Also, most Python programmers will associate the name Counter with the class collections.Counter, and there's no reason count couldn't be a @classmethod or @staticmethod… at which point this is exactly Andrew T.'s answer. Which, as he points out, is much simpler than what you're doing, and no more or less Pythonic.
But really, all of this is no better than just making _count a module-level global and adding a module-level count() function that increments and returns it.
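For completeness, a minimal sketch of that module-level version (same idea as the class above, just without the class):

_count = 0

def count():
    # Increment and return the module-level counter.
    global _count
    _count += 1
    return _count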
Why not just do
order = time.time()
or do something like
import glob  # glob is used for Unix-like path expansion

order = len(glob.glob(os.path.join(test_suites_path, "screenshot", "%s*" % filename)))
Using static methods and variables. Not very pythonic, but simpler.
def make_screenshot_file(file_name):
    order = Counter.count()  # Note the move of the parens
    test_suites_path = _make_job_directory()
    return make_writable_file(os.path.join(test_suites_path, 'screenshot', file_name % order))

class Counter():
    count_n = 0

    @staticmethod
    def count():
        Counter.count_n += 1
        return Counter.count_n
print Counter.count()
print Counter.count()
print Counter.count()
print Counter.count()
print Counter.count()
atarzwell@freeman:~/src$ python so.py
1
2
3
4
5
Well, you can use this solution, just make sure you never pass a value for the order kwarg!
Mutable keyword arguments in functions work like global variables: the value isn't reset to the default between calls, as you might think at first!
def make_screenshot_file(file_name, order=[0]):
    order[0] = order[0] + 1
    test_suites_path = _make_job_directory()
    return make_writable_file(os.path.join(test_suites_path, 'screenshot', file_name % order[0]))
Related
Apologies if this is a dumb question, but I've not found an elegant workaround for this issue yet. Basically, when using the concurrent.futures module, non-static methods of classes look like they should work fine: I didn't see anything in the docs for the module implying they wouldn't, the module produces no errors when running, and it even produces the expected results in many cases!
However, I've noticed that the module seems to not respect updates to iterable fields made in the parent thread, even when those updates occur before starting any child processes. Here's an example of what I mean:
import concurrent.futures

class Thing:
    data_list = [0, 0, 0]
    data_number = 0

    def foo(self, num):
        return sum(self.data_list) * num

    def bar(self, num):
        return num * self.data_number

if __name__ == '__main__':
    thing = Thing()
    thing.data_list[0] = 1
    thing.data_number = 1
    with concurrent.futures.ProcessPoolExecutor() as executor:
        results = executor.map(thing.foo, range(3))
        print('result of changing list:')
        for result in results:
            print(result)
        results = executor.map(thing.bar, range(3))
        print('result of changing number:')
        for result in results:
            print(result)
I would expect the result here to be
result of changing list:
0
1
2
result of changing number:
0
1
2
but instead I get
result of changing list:
0
0
0
result of changing number:
0
1
2
So for some reason, things work as expected for the field that's just an integer, but not at all as expected for the field that's a list. The implication is that the updates made to the list are not respected when the child processes are called, even though the updates to the simpler fields are. I've tried this with dicts as well with the same issue, and I suspect that this is a problem for all iterables.
Is there any way to make this work as expected, allowing for updates to iterable fields to be respected by child processes? It seems bizarre that multiprocessing for non-static methods would be half-implemented like this, but I'm hoping that I'm just missing something!
The problem has nothing to do with "respecting iterable fields", but it is a rather subtle issue. In your main process you have:
thing.data_list[0] = 1 # first assignment
thing.data_number = 1 # second assignment
Rather than:
Thing.data_list[0] = 1 # first assignment
Thing.data_number = 1 # second assignment
As far as the first assignment is concerned, there isn't any material difference because with either version you are not modifying a class attribute but rather an element within a list that happens to be referenced by a class attribute. In other words, Thing.data_list is still pointing to the same list; this reference has not been changed. This is an important distinction.
But in the second assignment, with your version of the code, you have assigned to a class attribute's name via the instance's self reference. When you do that, you are actually creating a new instance attribute with the same name, data_number.
Your member functions foo and bar are attempting to access class attributes via self. The Thing instance thing will be pickled across to the new address space, but when it is un-pickled there, by default the class attributes are simply re-created from the class definition with their default values unless you add special pickle rules. Instance attributes, however, are transmitted successfully, such as your newly created data_number. That's why 'result of changing number:' prints out as you expected: bar is actually accessing the instance attribute data_number.
Change bar to the following and you will see that everything will print out as 0:
def bar(self, num):
    return num * Thing.data_number
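Alternatively, here's a sketch of a fix on the question's side (one option, assuming you want parent-side updates visible in the workers): make the data instance attributes in __init__, so the updated values are pickled along with the instance:

import concurrent.futures

class Thing:
    def __init__(self):
        # Instance attributes are pickled with the instance, so updates
        # made in the parent process are visible in the worker processes.
        self.data_list = [0, 0, 0]
        self.data_number = 0

    def foo(self, num):
        return sum(self.data_list) * num

if __name__ == '__main__':
    thing = Thing()
    thing.data_list[0] = 1
    with concurrent.futures.ProcessPoolExecutor() as executor:
        print(list(executor.map(thing.foo, range(3))))  # [0, 1, 2]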
So, I have a code snippet to count the number of function calls, and while browsing for potential optimal solutions, I came across a similar question posted here a few years ago:
is there a way to track the number of times a function is called?
One of the solutions listed in the thread above matched mine, but there was a subtle difference. When I posted my solution and asked about the potential pitfalls in my code, my comment was deleted even though mine was a solution to the question. So, I am hoping this one isn't closed as a duplicate, because frankly I don't know where else to turn.
Here was my solution:
def counter(func):
    counter.count = 0
    def wrap(*args, **kwargs):
        counter.count += 1
        print "Number of times {} has been called so far
        {}".format(func.__name__, counter.count)
        func(*args, **kwargs)
    return wrap

@counter
def temp(name):
    print "Calling {}".format(name)
My counter is defined as an attribute to the decorator 'counter', instead of the wrapper function 'wrap'. The code works as currently defined. But, is there a scenario where it may fail? Am I overlooking something here?
If you use this decorator on two separate functions, they'll share the same call count. That's not what you want. Also, every time this decorator is applied to a new function, it will reset the shared call count.
Aside from that, you've also forgotten to pass through the return value of func in wrap, and you can't stick an unescaped line break in the middle of a non-triple-quoted string literal like that.
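One common fix (a sketch, one option among several) is to keep the count on the wrapper function itself, so each decorated function gets its own independent counter:

import functools

def counter(func):
    @functools.wraps(func)
    def wrap(*args, **kwargs):
        wrap.count += 1
        print("Number of times {} has been called so far {}".format(
            func.__name__, wrap.count))
        return func(*args, **kwargs)
    wrap.count = 0  # one counter per decorated function
    return wrap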
With slight modification to your code you can make your counter decorator independent:
def counter(func, counter_dict={}):
    counter_dict[func] = 0
    def wrap(*args, **kwargs):
        counter_dict[func] += 1
        print("Number of times {} has been called so far {}".format(func.__name__, counter_dict[func]))
        return func(*args, **kwargs)
    return wrap

@counter
def temp1(name):
    print("Calling temp1 {}".format(name))

@counter
def temp2(name):
    print("Calling temp2 {}".format(name))

temp1('1')
temp1('2')
temp2('3')
Prints:
Number of times temp1 has been called so far 1
Calling temp1 1
Number of times temp1 has been called so far 2
Calling temp1 2
Number of times temp2 has been called so far 1
Calling temp2 3
This appears simple, but I can't find a good solution.
It's the old 'pass by reference'/ 'pass by value' / 'pass by object reference' problem. I understand what is happening, but I can't find a good work around.
I am aware of solutions for small problems, but my state is very large and extremely expensive to save/recalculate. Given these constraints, I can't find a solution.
Here is some simple pseudocode to illustrate what I would like to do (if Python would let me pass by reference):
class A:
    def __init__(self, x):
        self.g = x
        self.changes = []

    def change_something(self, what, new):  # I want to pass 'what' by reference
        old = what                          # and then de-reference it here to read the value
        self.changes.append([what, old])    # store a reference
        what = new                          # dereference and change the value

    def undo_changes(self):
        for c in self.changes:
            c[0] = c[1]                     # dereference and restore old value
Edit: Adding some more pseudocode to show how I would like the use the above
test = A(1)  # initialise test.g as 1
print(test.g)
out: 1
test.change_something(test.g, 2)
# if my imaginary code above functioned as described in its comments,
# this would change test.g to 2 and store the old value in the list of changes
print(test.g)
out: 2
test.undo_changes()
print(test.g)
out: 1
Obviously the above code doesn't work in Python due to being 'pass by object reference'. Also I'd like to be able to undo a single change, not just all of them as in the code above.
The thing is... I can't seem to find a good work around. There are solutions out there like these:
Do/Undo using command pattern in Python
making undo in python
Which involve storing a stack of commands. 'Undo' then involves removing the last command and then re-building the final state by taking the initial state and re-applying everything but the last command. My state is too large for this to be feasible, the issues are:
The state is very large. Saving it entirely is prohibitively expensive.
'Do' operations are costly (making recalculating from a saved state infeasible).
Do operations are also non-deterministic, relying on random input.
Undo operations are very frequent
I have one idea, which is to ensure that EVERYTHING is stored in lists, and writing my code so that everything is stored, read from and written to these lists. Then in the code above I can pass the list name and list index every time I want to read/write a variable.
Essentially this amounts to building my own memory architecture and C-style pointer system within Python!
This works, but seems a little... ridiculous? Surely there is a better way?
Please check if this helps....
class A:
    def __init__(self, x):
        self.g = x
        self.changes = {}
        self.changes[str(x)] = {'init': x, 'old': x, 'new': x}  # or make a key of your choice (immutable)

    def change_something(self, what, new):
        self.changes[what]['old'] = self.changes[what]['new']  # remember the current value
        self.changes[what]['new'] = new                        # record the changed value in your dict

    def undo_changes(self, what):
        self.changes[what]['new'] = self.changes[what]['old']  # revert the latest new value to the old one
For each change you can update the change dictionary. The only thing you have to figure out is how to create an entry for what as a key in the self.changes dictionary; I just made it str(x) here, so check the type of what and decide how to make it a key in your case.
Okay so I have come up with an answer... but it's ugly! I doubt it's the best solution. It uses exec() which I am told is bad practice and to be avoided if at all possible. EDIT: see below!
Old code using exec():
class A:
    def __init__(self, x):
        self.old = 0
        self.g = x
        self.h = x * 10
        self.changes = []

    def change_something(self, what, new):
        whatstr = 'self.' + what
        exec('self.old=' + whatstr)
        self.changes.append([what, self.old])
        exec(whatstr + '=new')

    def undo_changes(self):
        for c in self.changes:
            exec('self.' + c[0] + '=c[1]')

    def undo_last_change(self):
        c = self.changes[-1]
        exec('self.' + c[0] + '=c[1]')
        self.changes.pop()
Thanks to barny, here's a much nicer version using getattr and setattr:
class A:
    def __init__(self, x):
        self.g = x
        self.h = x * 10
        self.changes = []

    def change_something(self, what, new):
        self.changes.append([what, getattr(self, what)])
        setattr(self, what, new)

    def undo_changes(self):
        for c in self.changes:
            setattr(self, c[0], c[1])

    def undo_last_change(self):
        c = self.changes[-1]
        setattr(self, c[0], c[1])
        self.changes.pop()
To demonstrate, the input:
print("demonstrate changing one value")
b=A(1)
print('g=',b.g)
b.change_something('g',2)
print('g=',b.g)
b.undo_changes()
print('g=',b.g)
print("\ndemonstrate changing two values and undoing both")
b.change_something('h',3)
b.change_something('g',4)
print('g=', b.g, 'h=',b.h)
b.undo_changes()
print('g=', b.g, 'h=',b.h)
print("\ndemonstrate changing two values and undoing one")
b.change_something('h',30)
b.change_something('g',40)
print('g=', b.g, 'h=',b.h)
b.undo_last_change()
print('g=', b.g, 'h=',b.h)
returns:
demonstrate changing one value
g= 1
g= 2
g= 1
demonstrate changing two values and undoing both
g= 4 h= 3
g= 1 h= 10
demonstrate changing two values and undoing one
g= 40 h= 30
g= 1 h= 30
EDIT 2: Actually... after further testing, my initial version with exec() has some advantages over the second. If the class contains a second class, or a list, or whatever, the exec() version has no trouble updating a list within a class within a class; the second version, however, will fail.
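If nested attributes are the sticking point, a middle ground (a sketch with hypothetical helper names) is to walk a dotted path with getattr/setattr instead of exec, so something like change_something('inner.g', ...) can work too:

import functools

def get_by_path(obj, path):
    # Resolve a dotted attribute path like 'inner.g' against obj.
    return functools.reduce(getattr, path.split('.'), obj)

def set_by_path(obj, path, value):
    # Set a dotted attribute path like 'inner.g' on obj.
    parts = path.split('.')
    parent = functools.reduce(getattr, parts[:-1], obj)
    setattr(parent, parts[-1], value)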
I have 2 solutions to a recursion problem that I need for a function (actually a method). I want it to be recursive, but I want to set the recursion limit to 10 and reset it after the function is called (or not mess with the recursion limit at all). Can anyone think of a better way to do this, or recommend using one over the other? I'm leaning towards the context manager because it keeps my code cleaner and avoids setting tracebacklimit, but there might be caveats?
import sys

def func(i=1):
    print i
    if i > 10:
        import sys
        sys.tracebacklimit = 1
        raise ValueError("Recursion Limit")
    i += 1
    func(i)

class recursion_limit(object):
    def __init__(self, val):
        self.val = val
        self.old_val = sys.getrecursionlimit()

    def __enter__(self):
        sys.setrecursionlimit(self.val)

    def __exit__(self, *args):
        sys.setrecursionlimit(self.old_val)
        raise ValueError("Recursion Limit")

def func2(i=1):
    """
    Call as
        with recursion_limit(12):
            func2()
    """
    print i
    i += 1
    func2(i)

if __name__ == "__main__":
    # print 'Running func1'
    # func()
    with recursion_limit(12):
        func2()
I do see some odd behavior with the context manager, though. If I put this in main:
with recursion_limit(12):
    func2()
It prints 1 to 10. If I do the same from the interpreter it prints 1 to 11. I assume there is something going on under the hood when I import things?
EDIT: For posterity this is what I have come up with for a function that knows its call depth. I doubt I'd use it in any production code, but it gets the job done.
import sys
import inspect

class KeepTrack(object):
    def __init__(self):
        self.calldepth = sys.maxint

    def func(self):
        zero = len(inspect.stack())
        if zero < self.calldepth:
            self.calldepth = zero
        i = len(inspect.stack())
        print i - self.calldepth
        if i - self.calldepth < 9:
            self.func()

keeping_track = KeepTrack()
keeping_track.func()
You shouldn't change the system recursion limit at all. You should code your function to know how deep it is, and end the recursion when it gets too deep.
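A minimal sketch of that, reusing the question's func and letting i double as the call depth (an assumption about intent):

def func(i=1, max_depth=10):
    # i counts the calls, so no fiddling with the global limit is needed.
    if i > max_depth:
        raise ValueError("Recursion Limit")
    print(i)
    func(i + 1, max_depth)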
The reason the recursion limit seems differently applied in your program and the interpreter is because they have different tops of stack: the functions invoked in the interpreter to get to the point of running your code.
While somewhat tangential (I'd have put it in a comment, but I don't think there's room), it should be noted that setrecursionlimit is somewhat misleadingly named - it actually sets the maximum stack depth:
http://docs.python.org/library/sys.html#sys.setrecursionlimit
That's why the function behaves differently depending on where you call it from. Also, if func2 were to make a stdlib call (or whatever) that ended up calling a number of functions such that it added more than N to the stack, the exception would trigger early.
Also also, I wouldn't change the sys.tracebacklimit either; that will have an effect on the rest of your program. Go with Ned's answer.
Ignoring the more general issues, it looks like you can get the current frame depth by looking at the length of inspect.getouterframes(). That would give you a "zero point" from which you can set the depth limit (disclaimer: I haven't tried this).
Edit: or len(inspect.stack()) - it's not clear to me what the difference is. I would be interested in knowing if this works, and whether they are different.
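For what it's worth, a tiny experiment (a sketch to run yourself, since the answer above leaves this untested) suggests the two report the same depth:

import inspect

def depth():
    # Both enumerate this frame plus all outer frames,
    # so the two lengths should match.
    return (len(inspect.stack()),
            len(inspect.getouterframes(inspect.currentframe())))

print(depth())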
I'd definitely choose the first approach; it is simpler and self-explanatory. After all, the recursion limit is your explicit choice, so why obfuscate it?
At the moment, I'm doing stuff like the following, which is getting tedious:
run_once = 0
while 1:
    if run_once == 0:
        myFunction()
        run_once = 1
I'm guessing there is some more accepted way of handling this stuff?
What I'm looking for is having a function execute once, on demand. For example, at the press of a certain button. It is an interactive app which has a lot of user controlled switches. Having a junk variable for every switch, just for keeping track of whether it has been run or not, seemed kind of inefficient.
I would use a decorator on the function to handle keeping track of how many times it runs.
def run_once(f):
    def wrapper(*args, **kwargs):
        if not wrapper.has_run:
            wrapper.has_run = True
            return f(*args, **kwargs)
    wrapper.has_run = False
    return wrapper

@run_once
def my_function(foo, bar):
    return foo + bar
Now my_function will only run once. Other calls to it will return None. Just add an else clause to the if if you want it to return something else. From your example, it doesn't need to return anything ever.
If you don't control the creation of the function, or the function needs to be used normally in other contexts, you can just apply the decorator manually as well.
action = run_once(my_function)
while 1:
    if predicate:
        action()
This will leave my_function available for other uses.
Finally, if you need to run it once, and then once more later, you can just do
action = run_once(my_function)
action() # run once the first time
action.has_run = False
action() # run once the second time
Another option is to set the func_code code object for your function to be a code object for a function that does nothing. This should be done at the end of your function body.
For example:
def run_once():
    # Code for something you only want to execute once
    run_once.func_code = (lambda: None).func_code
Here run_once.func_code = (lambda:None).func_code replaces your function's executable code with the code for lambda:None, so all subsequent calls to run_once() will do nothing.
This technique is less flexible than the decorator approach suggested in the accepted answer, but may be more concise if you only have one function you want to run once.
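Note that in Python 3 the attribute is __code__ rather than func_code; a sketch of the same trick there:

def run_once():
    print("doing the one-time work")  # placeholder for the real work
    # Swap in the code object of a no-op, so later calls do nothing.
    run_once.__code__ = (lambda: None).__code__

run_once()  # does the work
run_once()  # no-op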
Run the function before the loop. Example:
myFunction()
while True:
    # all the other code being executed in your loop
This is the obvious solution. If there's more than meets the eye, the solution may be a bit more complicated.
I'm assuming this is an action that you want to be performed at most one time, if some conditions are met. Since you won't always perform the action, you can't do it unconditionally outside the loop. Something like lazily retrieving some data (and caching it) if you get a request, but not retrieving it otherwise.
def do_something():
    [x() for x in expensive_operations]
    global action
    action = lambda: None

action = do_something
while True:
    # some sort of complex logic...
    if foo:
        action()
There are many ways to do what you want; however, do note that, as described in the question, it is quite possible that you don't have to call the function inside the loop.
If you insist in having the function call inside the loop, you can also do:
needs_to_run = expensive_function
while 1:
    …
    if needs_to_run: needs_to_run(); needs_to_run = None
    …
I've thought of another slightly unusual but very effective way to do this that doesn't require decorator functions or classes. Instead it just uses a mutable keyword argument, which ought to work in most versions of Python. Most of the time these are something to be avoided, since normally you wouldn't want a default argument value to change from call to call, but that ability can be leveraged in this case and used as a cheap storage mechanism. Here's how that would work:
def my_function1(_has_run=[]):
    if _has_run: return
    print("my_function1 doing stuff")
    _has_run.append(1)

def my_function2(_has_run=[]):
    if _has_run: return
    print("my_function2 doing some other stuff")
    _has_run.append(1)

for i in range(10):
    my_function1()
    my_function2()

print('----')
my_function1(_has_run=[])  # Force it to run.
Output:
my_function1 doing stuff
my_function2 doing some other stuff
----
my_function1 doing stuff
This could be simplified a little further by doing what @gnibbler suggested in his answer and using an iterator (iterators were introduced in Python 2.2):
from itertools import count

def my_function3(_count=count()):
    if next(_count): return
    print("my_function3 doing something")

for i in range(10):
    my_function3()

print('----')
my_function3(_count=count())  # Force it to run.
Output:
my_function3 doing something
----
my_function3 doing something
Here's an answer that doesn't involve reassignment of functions, yet still prevents the need for that ugly "is first" check.
__missing__ is supported by Python 2.5 and above.
def do_once_varname1():
    print 'performing varname1'
    return 'only done once for varname1'

def do_once_varname2():
    print 'performing varname2'
    return 'only done once for varname2'

class cdict(dict):
    def __missing__(self, key):
        val = self['do_once_' + key]()
        self[key] = val
        return val

cache_dict = cdict(do_once_varname1=do_once_varname1, do_once_varname2=do_once_varname2)

if __name__ == '__main__':
    print cache_dict['varname1']  # causes 2 prints
    print cache_dict['varname2']  # causes 2 prints
    print cache_dict['varname1']  # just 1 print
    print cache_dict['varname2']  # just 1 print
Output:
performing varname1
only done once for varname1
performing varname2
only done once for varname2
only done once for varname1
only done once for varname2
One object-oriented approach is to make your function a class, a.k.a. a "functor", whose instances automatically keep track of whether they've been run when each instance is created.
Since your updated question indicates you may need many of them, I've updated my answer to deal with that by using a class factory pattern. This is a bit unusual, and it may have been down-voted for that reason (although we'll never know for sure because they never left a comment). It could also be done with a metaclass, but it's not much simpler.
def RunOnceFactory():
    class RunOnceBase(object):  # abstract base class
        _shared_state = {}      # shared state of all instances (Borg pattern)
        has_run = False

        def __init__(self, *args, **kwargs):
            self.__dict__ = self._shared_state
            if not self.has_run:
                self.stuff_done_once(*args, **kwargs)
                self.has_run = True

    return RunOnceBase

if __name__ == '__main__':
    class MyFunction1(RunOnceFactory()):
        def stuff_done_once(self, *args, **kwargs):
            print("MyFunction1.stuff_done_once() called")

    class MyFunction2(RunOnceFactory()):
        def stuff_done_once(self, *args, **kwargs):
            print("MyFunction2.stuff_done_once() called")

    for _ in range(10):
        MyFunction1()  # will only call its stuff_done_once() method once
        MyFunction2()  # ditto
Output:
MyFunction1.stuff_done_once() called
MyFunction2.stuff_done_once() called
Note: You could make a function/class able to do stuff again by adding a reset() method to its subclass that resets the shared has_run attribute. It's also possible to pass regular and keyword arguments to the stuff_done_once() method when the functor is created and the method is called, if desired.
And, yes, it would be applicable given the information you added to your question.
Assuming there is some reason why myFunction() can't be called before the loop
from itertools import count

for i in count():
    if i == 0:
        myFunction()
Here's an explicit way to code this up, where the state of which functions have been called is kept locally (so global state is avoided). I don't much like the non-explicit forms suggested in other answers: it's too surprising to see f() and for this not to mean that f() gets called.
This works by using dict.pop which looks up a key in a dict, removes the key from the dict, and takes a default value to use in case the key isn't found.
def do_nothing(*args, **kwargs):
    pass

# A list of all the functions you want to run just once.
actions = [
    my_function,
    other_function,
]
actions = dict((action, action) for action in actions)

while True:
    if some_condition:
        actions.pop(my_function, do_nothing)()
    if some_other_condition:
        actions.pop(other_function, do_nothing)()
I use the cached_property decorator from functools to run something just once and save the value. Example from the official documentation: https://docs.python.org/3/library/functools.html
import statistics
from functools import cached_property

class DataSet:
    def __init__(self, sequence_of_numbers):
        self._data = tuple(sequence_of_numbers)

    @cached_property
    def stdev(self):
        return statistics.stdev(self._data)
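Usage is then plain attribute access; the first lookup computes and caches the value, and later lookups return the cached one (note that functools.cached_property requires Python 3.8+):

ds = DataSet([1, 2, 3, 4])
print(ds.stdev)  # computed on first access
print(ds.stdev)  # cached; statistics.stdev is not called again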
You can also use one of the standard library functools.lru_cache or functools.cache decorators in front of the function:
from functools import lru_cache

@lru_cache
def expensive_function():
    return None
https://docs.python.org/3/library/functools.html
If I understand the updated question correctly, something like this should work
def function1():
    print "function1 called"

def function2():
    print "function2 called"

def function3():
    print "function3 called"

called_functions = set()

while True:
    n = raw_input("choose a function: 1, 2 or 3 ")
    func = {"1": function1,
            "2": function2,
            "3": function3}.get(n)
    if func in called_functions:
        print "That function has already been called"
    else:
        called_functions.add(func)
        func()
You have all those 'junk variables' outside of your mainline while True loop. To make the code easier to read those variables can be brought inside the loop, right next to where they are used. You can also set up a variable naming convention for these program control switches. So for example:
# _already_done checkpoint logic
try:
    ran_this_user_request_already_done
except NameError:
    this_user_request()
    ran_this_user_request_already_done = 1
Note that on the first execution of this code the variable ran_this_user_request_already_done is not defined until after this_user_request() is called.
A simple function you can reuse in many places in your code (based on the other answers here):
def firstrun(keyword, _keys=[]):
    """Returns True only the first time it's called with each keyword."""
    if keyword in _keys:
        return False
    else:
        _keys.append(keyword)
        return True
or equivalently (if you like to rely on other libraries):
from collections import defaultdict
from itertools import count

def firstrun(keyword, _keys=defaultdict(count)):
    """Returns True only the first time it's called with each keyword."""
    return not _keys[keyword].next()
Sample usage:
for i in range(20):
    if firstrun('house'):
        build_house()  # runs only once
    if firstrun(42):   # True
        print 'This will print.'
    if firstrun(42):   # False
        print 'This will never print.'
I've taken a more flexible approach inspired by the functools.partial function:
DO_ONCE_MEMORY = []

def do_once(id, func, *args, **kwargs):
    if id not in DO_ONCE_MEMORY:
        DO_ONCE_MEMORY.append(id)
        return func(*args, **kwargs)
    else:
        return None
With this approach you are able to have more complex and explicit interactions:
do_once('foobar', print, "first try")
do_once('foo', print, "first try")
do_once('bar', print, "second try")
# first try
# second try
The exciting part about this approach is that it can be used anywhere and does not require factories - it's just a small memory tracker.
Depending on the situation, an alternative to the decorator could be the following:
from itertools import chain, repeat

func_iter = chain((myFunction,), repeat(lambda *args, **kwds: None))

while True:
    next(func_iter)()
The idea is based on iterators: the chain yields the function once (or n times, using repeat(myFunction, n)), and then endlessly yields the lambda that does nothing.
The main advantage is that you don't need a decorator, which sometimes complicates things; here everything happens in a single (to my mind) readable line. The disadvantage is that you have an ugly next in your code.
Performance-wise there seems to be not much of a difference: on my machine both approaches have an overhead of around 130 ns.
If the condition check needs to happen only once you are in the loop, having a flag signaling that you have already run the function helps. In this case you used a counter; a boolean variable would work just as well.
signal = False
count = 0

def callme():
    print "I am being called"

while count < 2:
    if signal == False:
        callme()
        signal = True
    count += 1
I'm not sure that I understood your problem, but I think you can split the loop in two: one part that calls the function and one part that doesn't, and chain the two loops.
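A sketch of that split, with hypothetical names for the trigger and the per-iteration work:

# First loop: wait for the trigger, run the function once, then move on.
while True:
    do_other_work()        # hypothetical per-iteration work
    if button_pressed():   # hypothetical trigger condition
        myFunction()
        break

# Second loop: no flag checks needed, the one-time call is behind us.
while True:
    do_other_work()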