Dealing with many similar functions in Python

So, I'm programming a game in Python that has a lottery system. For this I have a function called lottery which runs whenever the lottery is used. This lottery system gets a random number from 1 to 50 and then calls 1 of 50 lottery event functions, which are all ever so slightly different.
The problem is that this function is basically unreadable, as it defines 51 separate functions, and the lottery function uses a chain of 50 if/elif statements to check which outcome it should use.
I've tried rewriting this code I don't know how many times now. First, I tried to refactor it using the built-in typing module's overload decorator, but overload requires the parameters to be different types for every overload. This just led to me adding 50 different type classes, which was even less efficient.
I then tried calling the function from the built-in globals() dictionary, but my IDE would flag an error, even though the code worked when run. Here's some pseudocode, since I no longer have the original code:
import random

def lottery():
    lotnumb = random.randint(1, 50)
    if lotnumb == 1:
        lotevent1()
    elif lotnumb == 2:
        lotevent2()
    elif lotnumb == 3:
        lotevent3()
    elif lotnumb == 4:
        lotevent4()
    elif lotnumb == 5:
        lotevent5()
    elif lotnumb == 6:
        lotevent6()
    elif lotnumb == 7:
        lotevent7()
    # this continues all the way to branch number 50
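The globals() version I mentioned looked roughly like this (reconstructed from memory, so the exact names are approximate):
def lottery():
    lotnumb = random.randint(1, 50)
    # look the function up by name in the module's globals() dictionary;
    # this worked, but my IDE couldn't tell that the name exists and flagged it
    globals()[f'lotevent{lotnumb}']()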

You could easily put all the functions into a list and index it, but if you want something more complex, or just don't want to keep them in a list, here is a solution.
This seems to me like a situation where you could use the __getattribute__ machinery of Python classes. The way you could do this is by creating a LotteryEvents class and defining each of your functions within it.
It would look like this.
class LotteryEvents:
    @staticmethod
    def lotevent1():
        pass

    @staticmethod
    def lotevent2():
        pass

    @staticmethod
    def lotevent3():
        pass

    # and so on...
Then what you can do is use attribute lookup, which is what Python's dunder method __getattribute__ implements and what the getattr built-in exposes, to access each function. Calling getattr with the class and the name of the function you want will return that function. You can add this as a method in the class like so.
class LotteryEvents:
    @classmethod
    def getLotEvent(cls, num):
        # gets the function and stores it in the variable loteventfunc;
        # getattr() is the supported way to invoke the __getattribute__ machinery
        loteventfunc = getattr(cls, f'lotevent{num}')
        return loteventfunc
        # you could also just call it from here

    @staticmethod
    def lotevent1():
        pass

    @staticmethod
    def lotevent2():
        pass

    @staticmethod
    def lotevent3():
        pass

    # and so on...
# then to call it you do this
import random

random_num = random.randint(1, 50)
func = LotteryEvents.getLotEvent(random_num)
func()
There are many other ways to solve this problem, but this is how I would most likely tackle something like this. It does not remove having to define each function, but it does remove that awful mess of elifs and looks cleaner. If it still takes up too much space in your file, I would recommend putting this class into a separate file and importing it into your project. Even without this solution, putting it all in a different file could help organize your code.
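For example, a minimal sketch of that split, assuming you save the class in a file named lottery_events.py (the file name is just a suggestion):
# main game file
import random

from lottery_events import LotteryEvents  # the class shown above, moved into its own file

def lottery():
    func = LotteryEvents.getLotEvent(random.randint(1, 50))
    func()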

Without knowing more details about what is actually inside your loteventX functions, what I can suggest is to add them to a list, then randomly choose one and call it from inside your lottery function. Like this:
import random

lottery_events = [
    lotevent1,
    lotevent2,
    lotevent3,
    lotevent4,
    ...
]

def lottery():
    lotevent = random.choice(lottery_events)
    lotevent()
However, I think there might be a simpler solution where you can parameterize your loteventX functions and not need to implement each of them independently. Please share more details if you want to explore that option.
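For example, if the events turn out to differ only in data, say a message and a prize amount (this is purely a guess about what your events contain), the whole thing could collapse into one table and one generic function:
import random

# each outcome is just data; one generic function interprets it
LOTTERY_EVENTS = [
    {"message": "You win a small prize!", "coins": 10},
    {"message": "Jackpot!", "coins": 1000},
    {"message": "Better luck next time.", "coins": 0},
    # ... one entry per outcome
]

def lottery(player):
    event = random.choice(LOTTERY_EVENTS)
    print(event["message"])
    player.coins += event["coins"]  # assumes your player object tracks coins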

import random

def f_1():
    print("1")

def f_2():
    print("2")

def f_3():
    print("3")

def f_4():
    print("4")

def applyFunction(func):
    func()

if __name__ == "__main__":
    f_list = [f_1, f_2, f_3, f_4]
    applyFunction(random.choice(f_list))


I can't put "continue" command in a definition?

Let's say,
def sample():
    if a == 1:
        print(a)
    else:
        continue

for i in language:
    a = i
    sample()
I want to use this function in a loop, but the continue command gives me an error because there is no loop. What can I do?
Return a boolean from the function and, based on the return value, continue or not, because continue must be within a loop.
The continue keyword in Python is only available inside for and while loops. Also, rather than relying on a as a global variable, it is better to pass it into the function explicitly.
I don't know what you want to achieve, but judging from your code, you want to extract a condition into a function, something like this:
def condition(a):
    return a == 1

def sample(a):
    print(a)

for i in language:
    a = i
    if condition(a):
        sample(a)
    else:
        continue
There are several best-practice patterns of exactly how to do this, depending on your needs.
0. Factor your code better
Before doing any of the below, stop and ask yourself if you can just do this instead:
def sample(a):
    print(a)

for i in language:
    if i != 1:
        continue
    sample(i)
This is so much better:
it's clearer to the reader (everything you need to understand the loop's control flow is entirely local to the loop - it's right there in the loop, we don't have to look anywhere else farther away like a function definition to know when or why or how the loop will do the next thing),
it's cleaner (less boilerplate code than any of the solutions below),
it's more efficient, technically (not that this should matter until you measure a performance problem, but this might appeal to you; going into a function and coming back out of it, plus somehow telling the loop outside the function to continue - that's more work to achieve the same thing), and
it's simpler (objectively: there is less code complected together - the loop behavior is no longer tied to the body of the sample function, for example).
But, if you must:
1. Add boolean return
The simplest change that works with your example is to return a boolean:
def sample(a):
    if a == 1:
        print(a)
    else:
        return True
    return False

for i in language:
    if sample(i):
        continue
However, don't just mindlessly always use True for continue - for each function, use the one that fits with the function. In fact, in well-factored code, the boolean return value will make sense without even knowing that you are using it in some loop to continue or not.
For example, if you have a function called check_if_valid, then the boolean return value just makes sense without any loops - it tells you if the input is valid - and at the same time, either of these loops is sensible depending on context:
for thing in thing_list:
    if check_if_valid(thing):
        continue
    ...  # do something to fix the invalid things

for thing in thing_list:
    if not check_if_valid(thing):
        continue
    ...  # do something only with valid things
2. Reuse existing return
If your function already returns something, or you can rethink your code so that returns make sense, then you can ask yourself: is there a good way to decide to continue based on that return value?
For example, let's say inside your sample function you were actually trying to do something like this:
def sample(a):
    record = select_from_database(a)
    if record.status == 1:
        print(record)
    else:
        continue
Well then you can rewrite it like this:
def sample(a):
    record = select_from_database(a)
    if record.status == 1:
        print(record)
    return record

for i in language:
    record = sample(i)
    if record.status != 1:
        continue
Of course in this simple example, it's cleaner to just not have the sample function, but I am trusting that your sample function is justifiably more complex.
3. Special "continue" return
If no existing return value makes sense, or you don't want to couple the loop to the return value of your function, the next simplest pattern is to create and return a special unique "sentinel" object instance:
_continue = object()

def sample(a):
    if a == 1:
        print(a)
    else:
        return _continue

for i in language:
    result = sample(i)
    if result is _continue:
        continue
(If this is part of a module's API, which is something that you are saying if you name it like sample instead of like _sample, then I would name the sentinel value continue_ rather than _continue... But I also would not make something like this part of an API unless I absolutely had to.)
(If you're using a type checker and it complains about returning an object instance conflicting with your normal return value, you can make a Continue class and return an instance of that instead of an instance of object(). Then the type hint for the function's return value can be a union between your normal return type and the Continue type. This also scales if you have multiple control flow constructs in your code that you want to smuggle across function call lines like this.)
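That might look roughly like this (a sketch; the int return type and the language list are borrowed from the earlier examples and are only illustrative):
from typing import Union

class Continue:
    """Sentinel type: returned when the caller's loop should just continue."""

def sample(a: int) -> Union[int, Continue]:
    if a == 1:
        print(a)
        return a
    return Continue()

for i in language:
    result = sample(i)
    if isinstance(result, Continue):
        continue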
4. Wrap return value (and "monads")
Sometimes, if the type union thing isn't good enough for some reason, you may want to create a wrapper object, and have it store either your original return value, or indicate control flow. I only mention this option for completeness, without examples, because I think the previous options are better most of the time in Python. But if you take the time to learn about "Option types" and "maybe monads", it's kinda like that.
(Also, notice that in all of my examples, I fixed your backdoor argument passing through a global variable to be an explicit clearly passed argument. This makes the code easier to understand, predict, and verify for correctness - you might not see that yet but keep an eye out for implicit state passing making code harder to follow and keep correct as you grow as a developer, read more code by others, and deal with bugs.)
It is because the function's scope doesn't know it is being called from inside a loop. You have to put the continue keyword inside the loop itself.
The continue keyword cannot be used inside a function; it must be written directly inside the loop. There is a similar question here. Maybe you can do something like the following.
language = [1, 1, 1, 2, 3]
a = 1

def sample():
    if a == 1:
        print(a)
        return False
    else:
        return True

for i in language:
    if sample():
        continue
    else:
        a = i
OR something like this:
language = [1, 1, 1, 2, 3]
a = 1

def gen(base):
    for item in base:
        if a == 1:
            yield a
        else:
            continue

for i in gen(language):
    a = i
    print(a)

Structuring Python Code for Data Analysis

I wrote code for a data analysis project, but it's becoming unwieldy and I'd like to find a better way of structuring it so I can share it with others.
For the sake of brevity, I have something like the following:
def process_raw_text(txt_file):
    # do stuff
    return token_text

def tag_text(token_text):
    # do stuff
    return tagged

def bio_tag(tagged):
    # do stuff
    return bio_tagged

def restructure(bio_tagged):
    # do stuff
    return restructured

print(restructured)
Basically I'd like the program to run through all of the functions sequentially and print the output.
In looking into ways to structure this, I read up on classes like the following:
class Calculator():
    def add(x, y):
        return x + y

    def subtract(x, y):
        return x - y
This seems useful when structuring a project to allow individual functions to be called separately, such as the add function with Calculator.add(x,y), but I'm not sure it's what I want.
Is there something I should be looking into for a sequential run of functions (that are meant to structure the data flow and provide readability)? Ideally, I'd like all functions to be within "something" I could call once, that would in turn run everything within it.
Chain together the output from each function as the input to the next:
def main():
    print restructure(bio_tag(tag_text(process_raw_text(txt_file))))

if __name__ == '__main__':
    main()
@SvenMarnach makes a nice suggestion. A more general solution is to realise that this idea of repeatedly feeding the output of one function in as the input of the next is exactly what the reduce function does. We want to start with some input txt_file:
def main():
    pipeline = [process_raw_text, tag_text, bio_tag, restructure]
    print reduce(lambda data, step: step(data), pipeline, txt_file)
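One caveat if you're on Python 3: reduce is no longer a builtin, so it has to be imported from functools, and print becomes a function (a small sketch of the same line):
from functools import reduce

def main():
    pipeline = [process_raw_text, tag_text, bio_tag, restructure]
    print(reduce(lambda data, step: step(data), pipeline, txt_file))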
There's nothing preventing you from creating a class (or set of classes) that represents the pipeline you want to manage, with implementations that call the functions you need in sequence.
class DataAnalyzer():
    # ...
    def your_method(self, **kwargs):
        # call sequentially, or use the 'magic' proposed by others,
        # but internally to your class and not visible to clients
        pass
The functions themselves could remain private within the module, since they seem to be implementation details.
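For instance, a minimal sketch of that idea, reusing the function names from the question (the run method name and its signature are my own choice):
class DataAnalyzer():
    def run(self, txt_file):
        # call each stage in order, feeding each result into the next
        token_text = process_raw_text(txt_file)
        tagged = tag_text(token_text)
        bio_tagged = bio_tag(tagged)
        return restructure(bio_tagged)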
You can implement a simple dynamic pipeline using just modules and functions.
my_module.py
# note: function names can't start with a digit, hence the step_ prefix
def step_01_process_raw_text(txt_file):
    # do stuff
    return token_text

def step_02_tag_text(token_text):
    # do stuff
    return tagged

my_runner.py
import re

import my_module

if __name__ == '__main__':
    # collect the numbered step functions, sorted by their numeric prefix
    funcs = sorted(x for x in vars(my_module) if re.match(r'step_\d+', x))
    data = initial_data
    for f in funcs:
        data = getattr(my_module, f)(data)

Global Function block?

I am currently making a game in Python. Whenever you want help in the game, you just type help and you can read the help section.
The only problem is, I need to add a function block for each level.
def level_01():
    choice = raw_input('>>>: ')
    if choice == 'help':
        level_01_help()

def level_02():
    choice = raw_input('>>>: ')
    if choice == 'help':
        level_02_help()
So I was wondering if it is possible to make a global function block for all the levels?
When you enter help, you get to help(), and then it automatically goes back to the function block you just came from.
I really hope you understand what I mean, and I would really appreciate all the help I could get.
You can actually pass the help function as a parameter, meaning your code can become:
def get_choice(help_func):
    choice = raw_input('>>>: ')
    if choice == 'help':
        help_func()
    else:
        return choice

def level_01():
    choice = get_choice(level_01_help)

def level_02():
    choice = get_choice(level_02_help)
Ideally you should have a separate module for all interface-related tasks, so that the game and the interface are two separate entities. This should make those 2911 lines a bit more legible, and if you decide to change interfaces (from the command line to Tkinter or Pygame, for example) you will have a much, much easier time of it. Just my 2¢.
A really nice way to handle this kind of problem is with Python's built-in help mechanism. If you add docstrings to your functions, they are stored in a special attribute of the function object called __doc__. You can get to them in code like this:
def example():
    '''This is an example'''

print example.__doc__
>> This is an example
You can use them in your levels the same way:
def levelOne():
    '''It is a dark and stormy night. You can look for shelter or call for help'''
    choice = raw_input('>>>: ')
    if choice == 'help':
        return levelOne.__doc__
Doing it this way is a nice way of keeping the relationship between your code and content cleaner (although purists might object that it means you can't use Python's built-in help function for programmer-to-programmer documentation).
I think in the long run you will probably find that levels want to be classes, rather than functions - that way you can store state (did somebody find the key in level 1? is the monster in level 2 alive) and do maximum code reuse. A rough outline would be like this:
class Level(object):
    HELP = 'I am a generic level'

    def __init__(self, name, **exits):
        self.Name = name
        self.Exits = exits  # this is a dictionary (the two stars)
                            # so you can have named objects pointing to other levels

    def prompt(self):
        choice = raw_input(self.Name + ": ")
        if choice == 'help':
            self.help()
        # do other stuff here, returning to self.prompt() as long as you're in this level
        return None  # maybe return the name or class of the next level when Level is over

    def help(self):
        print self.HELP

# you can create levels that have custom content by overriding the HELP and prompt() methods:
class LevelOne(Level):
    HELP = '''You are in a dark room, with one door to the north.
    You can go north or search'''

    def prompt(self):
        choice = raw_input(self.Name + ": ")
        if choice == 'help':
            self.help()  # this is free - it's defined in Level
        if choice == 'go north':
            return self.Exits['north']
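Wiring the levels together might then look roughly like this (a sketch reusing the Level and LevelOne classes above; the level names and exits are made up):
level_two = Level('Corridor')
level_one = LevelOne('Dark Room', north=level_two)

current = level_one
while current is not None:
    # prompt() returns the next level when this one is over, or None to stop
    current = current.prompt()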
A better solution would be to store the level you are on as a variable and have the help function handle all of the help stuff.
Example:
def help(level):
    # do whatever helpful stuff goes here
    print "here is the help for level", level

def level(currentLevel):
    choice = raw_input('>>>: ')
    if choice == 'help':
        help(currentLevel)
    if ...:  # level was beaten
        level(currentLevel + 1)  # move on to the next one
Sure, it's always possible to generalize. But with the little information you provide (and assuming "help" is the only common functionality), the original code is extremely straightforward. I wouldn't sacrifice this property only to save 1 line of code per level.

Using singleton as a counter

I have an automation test which uses a function that saves screenshots to a folder. This function is called by multiple screenshot instances. On every test run a new folder is created, so I don't care about counter resets. In order to reflect the order in which these screenshots are taken, I had to come up with names that could be sorted into that order. This is my solution:
def make_screenshot_file(file_name):
    order = Counter().count
    test_suites_path = _make_job_directory()
    return make_writable_file(os.path.join(test_suites_path, 'screenshot', file_name % order))

class Counter():
    __counter_instance = None

    def __init__(self):
        if Counter.__counter_instance is None:
            self.count = 1
            Counter.__counter_instance = self
        else:
            Counter.__counter_instance.count += 1
            self.count = Counter.__counter_instance.count
It works fine for me. But I keep thinking that there should be an easier way to solve this problem. Is there? And if singleton is the only way, could my code be optimized in any way?
What you're trying to do here is simulate a global variable.
There is no good reason to do that. If you really want a global variable, make it explicitly a global variable.
You could create a simple Counter class that increments count by 1 each time you access it, and then create a global instance of it. But the standard library already gives you something like that for free, in itertools.count, as DSM explains in a comment.
So:
import itertools

_counter = itertools.count()

def make_screenshot_file(file_name):
    order = next(_counter)
    test_suites_path = _make_job_directory()
    return make_writable_file(os.path.join(test_suites_path, 'screenshot', file_name % order))
I'm not sure why you're so worried about how much storage or time this takes up, because I can't conceive of any program where it could possibly matter whether you were using 8 bytes or 800 for a single object you could never have more than one of, or whether it took 3 ns or 3 µs to access it when you only do so a handful of times.
But if you are worried, as you can see from the source, count is implemented in C, it's pretty memory-efficient, and if you don't do anything fancy with it, it comes down to basically a single PyNumber_Add to generate each number, which is a lot less than interpreting a few lines of code.
Since you asked, here's how you could radically simplify your existing code by using a _count class attribute instead of a __counter_instance class attribute:
class Counter():
    _count = 0

    def count(self):
        Counter._count += 1
        return Counter._count
Of course now you have to call Counter().count() instead of just Counter().count, but you can fix that trivially with @property if it matters.
It's worth pointing out that it's a really bad idea to use a classic class instead of a new-style class (which is what passing nothing inside the parens gives you in Python 2), and if you do want a classic class you should leave the parens off. Also, most Python programmers will associate the name Counter with the class collections.Counter, and there's no reason count couldn't be a @classmethod or @staticmethod… at which point this is exactly Andrew T.'s answer. Which, as he points out, is much simpler than what you're doing, and no more or less Pythonic.
But really, all of this is no better than just making _count a module-level global and adding a module-level count() function that increments and returns it.
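That module-level version would be about as small as it gets (a sketch):
_count = 0

def count():
    """Increment and return the module-level counter."""
    global _count
    _count += 1
    return _count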
why not just do
order = time.time()
or do something like
import glob  # glob is used for unix-like path expansion
order = len(glob.glob(os.path.join(test_suites_path, "screenshot", "%s*" % file_name)))
Using static methods and variables. Not very pythonic, but simpler.
def make_screenshot_file(file_name):
    order = Counter.count()  # Note the move of the parens
    test_suites_path = _make_job_directory()
    return make_writable_file(os.path.join(test_suites_path, 'screenshot', file_name % order))

class Counter():
    count_n = 0

    @staticmethod
    def count():
        Counter.count_n += 1
        return Counter.count_n

print Counter.count()
print Counter.count()
print Counter.count()
print Counter.count()
print Counter.count()
atarzwell@freeman:~/src$ python so.py
1
2
3
4
5
Well, you can use this solution; just make sure you never pass the order kwarg yourself!
Mutable kwargs in functions work like class-level variables: the value isn't reset to the default between calls, as you might think at first!
def make_screenshot_file(file_name, order=[0]):
    order[0] = order[0] + 1
    test_suites_path = _make_job_directory()
    return make_writable_file(os.path.join(test_suites_path, 'screenshot', file_name % order[0]))

Efficient way of having a function only execute once in a loop

At the moment, I'm doing stuff like the following, which is getting tedious:
run_once = 0
while 1:
    if run_once == 0:
        myFunction()
        run_once = 1
I'm guessing there is some more accepted way of handling this stuff?
What I'm looking for is having a function execute once, on demand. For example, at the press of a certain button. It is an interactive app which has a lot of user controlled switches. Having a junk variable for every switch, just for keeping track of whether it has been run or not, seemed kind of inefficient.
I would use a decorator on the function to handle keeping track of how many times it runs.
def run_once(f):
    def wrapper(*args, **kwargs):
        if not wrapper.has_run:
            wrapper.has_run = True
            return f(*args, **kwargs)
    wrapper.has_run = False
    return wrapper

@run_once
def my_function(foo, bar):
    return foo + bar
Now my_function will only run once. Other calls to it will return None. Just add an else clause to the if if you want it to return something else. From your example, it doesn't need to return anything ever.
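For example, with an else clause the decorator might look like this (a sketch; the 'already ran' placeholder is just for illustration):
def run_once(f):
    def wrapper(*args, **kwargs):
        if not wrapper.has_run:
            wrapper.has_run = True
            return f(*args, **kwargs)
        else:
            return "already ran"  # whatever value makes sense for repeat calls
    wrapper.has_run = False
    return wrapper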
If you don't control the creation of the function, or the function needs to be used normally in other contexts, you can just apply the decorator manually as well.
action = run_once(my_function)
while 1:
    if predicate:
        action()
This will leave my_function available for other uses.
Finally, if you need to run it "only once" twice, then you can just do
action = run_once(my_function)
action() # run once the first time
action.has_run = False
action() # run once the second time
Another option is to set the func_code code object for your function to be a code object for a function that does nothing. This should be done at the end of your function body.
For example:
def run_once():
    # Code for something you only want to execute once
    run_once.func_code = (lambda: None).func_code
Here run_once.func_code = (lambda:None).func_code replaces your function's executable code with the code for lambda:None, so all subsequent calls to run_once() will do nothing.
This technique is less flexible than the decorator approach suggested in the accepted answer, but may be more concise if you only have one function you want to run once.
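For what it's worth, the same trick works on Python 3 as well, where the attribute is spelled __code__ (a minimal sketch):
def run_once():
    print("doing the expensive thing")  # runs only on the first call
    # swap in the code object of a do-nothing lambda; later calls fall through to it
    run_once.__code__ = (lambda: None).__code__

run_once()  # prints and disables itself
run_once()  # does nothing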
Run the function before the loop. Example:
myFunction()
while True:
    # all the other code being executed in your loop
This is the obvious solution. If there's more than meets the eye, the solution may be a bit more complicated.
I'm assuming this is an action that you want to be performed at most one time, if some conditions are met. Since you won't always perform the action, you can't do it unconditionally outside the loop. Something like lazily retrieving some data (and caching it) if you get a request, but not retrieving it otherwise.
def do_something():
    [x() for x in expensive_operations]
    global action
    action = lambda: None

action = do_something

while True:
    # some sort of complex logic...
    if foo:
        action()
There are many ways to do what you want; however, do note that it is quite possible that, as described in the question, you don't have to call the function inside the loop.
If you insist in having the function call inside the loop, you can also do:
needs_to_run = expensive_function
while 1:
    …
    if needs_to_run: needs_to_run(); needs_to_run = None
    …
I've thought of another—slightly unusual, but very effective—way to do this that doesn't require decorator functions or classes. Instead it just uses a mutable keyword argument, which ought to work in most versions of Python. Most of the time these are something to be avoided since normally you wouldn't want a default argument value to change from call-to-call—but that ability can be leveraged in this case and used as a cheap storage mechanism. Here's how that would work:
def my_function1(_has_run=[]):
    if _has_run: return
    print("my_function1 doing stuff")
    _has_run.append(1)

def my_function2(_has_run=[]):
    if _has_run: return
    print("my_function2 doing some other stuff")
    _has_run.append(1)

for i in range(10):
    my_function1()
    my_function2()

print('----')
my_function1(_has_run=[])  # Force it to run.
Output:
my_function1 doing stuff
my_function2 doing some other stuff
----
my_function1 doing stuff
This could be simplified a little further by doing what @gnibbler suggested in his answer and using an iterator (introduced in Python 2.2):
from itertools import count

def my_function3(_count=count()):
    if next(_count): return
    print("my_function3 doing something")

for i in range(10):
    my_function3()

print('----')
my_function3(_count=count())  # Force it to run.
Output:
my_function3 doing something
----
my_function3 doing something
Here's an answer that doesn't involve reassignment of functions, yet still prevents the need for that ugly "is first" check.
__missing__ is supported by Python 2.5 and above.
def do_once_varname1():
    print 'performing varname1'
    return 'only done once for varname1'

def do_once_varname2():
    print 'performing varname2'
    return 'only done once for varname2'

class cdict(dict):
    def __missing__(self, key):
        val = self['do_once_' + key]()
        self[key] = val
        return val

cache_dict = cdict(do_once_varname1=do_once_varname1, do_once_varname2=do_once_varname2)

if __name__ == '__main__':
    print cache_dict['varname1']  # causes 2 prints
    print cache_dict['varname2']  # causes 2 prints
    print cache_dict['varname1']  # just 1 print
    print cache_dict['varname2']  # just 1 print
Output:
performing varname1
only done once for varname1
performing varname2
only done once for varname2
only done once for varname1
only done once for varname2
One object-oriented approach is to make your function a class, aka a "functor", whose instances automatically keep track of whether they've been run or not when each instance is created.
Since your updated question indicates you may need many of them, I've updated my answer to deal with that by using a class factory pattern. This is a bit unusual, and it may have been down-voted for that reason (although we'll never know for sure because they never left a comment). It could also be done with a metaclass, but it's not much simpler.
def RunOnceFactory():
    class RunOnceBase(object):  # abstract base class
        _shared_state = {}  # shared state of all instances (borg pattern)
        has_run = False

        def __init__(self, *args, **kwargs):
            self.__dict__ = self._shared_state
            if not self.has_run:
                self.stuff_done_once(*args, **kwargs)
                self.has_run = True

    return RunOnceBase

if __name__ == '__main__':
    class MyFunction1(RunOnceFactory()):
        def stuff_done_once(self, *args, **kwargs):
            print("MyFunction1.stuff_done_once() called")

    class MyFunction2(RunOnceFactory()):
        def stuff_done_once(self, *args, **kwargs):
            print("MyFunction2.stuff_done_once() called")

    for _ in range(10):
        MyFunction1()  # will only call its stuff_done_once() method once
        MyFunction2()  # ditto
Output:
MyFunction1.stuff_done_once() called
MyFunction2.stuff_done_once() called
Note: You could make a function/class able to do stuff again by adding a reset() method to its subclass that resets the shared has_run attribute. It's also possible to pass regular and keyword arguments to the stuff_done_once() method when the functor is created and the method is called, if desired.
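Such a reset() might look roughly like this (a sketch against the factory above; it assumes clearing the shared flag is all that's needed):
class MyFunction1(RunOnceFactory()):
    def stuff_done_once(self, *args, **kwargs):
        print("MyFunction1.stuff_done_once() called")

    @classmethod
    def reset(cls):
        # clear the shared flag so the next instantiation calls stuff_done_once() again
        cls._shared_state['has_run'] = False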
And, yes, it would be applicable given the information you added to your question.
Assuming there is some reason why myFunction() can't be called before the loop
from itertools import count

for i in count():
    if i == 0:
        myFunction()
Here's an explicit way to code this up, where the state of which functions have been called is kept locally (so global state is avoided). I don't much like the non-explicit forms suggested in other answers: it's too surprising to see f() and for this not to mean that f() gets called.
This works by using dict.pop which looks up a key in a dict, removes the key from the dict, and takes a default value to use in case the key isn't found.
def do_nothing(*args, **kwargs):
    pass

# A list of all the functions you want to run just once.
actions = [
    my_function,
    other_function
]
actions = dict((action, action) for action in actions)

while True:
    if some_condition:
        actions.pop(my_function, do_nothing)()
    if some_other_condition:
        actions.pop(other_function, do_nothing)()
I use the cached_property decorator from functools to run just once and save the value. Example from the official documentation: https://docs.python.org/3/library/functools.html
import statistics
from functools import cached_property

class DataSet:
    def __init__(self, sequence_of_numbers):
        self._data = tuple(sequence_of_numbers)

    @cached_property
    def stdev(self):
        return statistics.stdev(self._data)
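Hypothetical usage of the class above (the numbers are made up): the first attribute access computes and caches the value, later accesses reuse it.
ds = DataSet([1.5, 2.5, 3.5])
print(ds.stdev)  # computed once via statistics.stdev
print(ds.stdev)  # served from the cache; stdev() is not recomputed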
You can also use one of the standard library functools.lru_cache or functools.cache decorators in front of the function:
from functools import lru_cache

@lru_cache
def expensive_function():
    return None
https://docs.python.org/3/library/functools.html
If I understand the updated question correctly, something like this should work
def function1():
    print "function1 called"

def function2():
    print "function2 called"

def function3():
    print "function3 called"

called_functions = set()

while True:
    n = raw_input("choose a function: 1,2 or 3 ")
    func = {"1": function1,
            "2": function2,
            "3": function3}.get(n)
    if func in called_functions:
        print "That function has already been called"
    else:
        called_functions.add(func)
        func()
You have all those 'junk variables' outside of your mainline while True loop. To make the code easier to read, those variables can be brought inside the loop, right next to where they are used. You can also set up a variable naming convention for these program control switches. So, for example:
# _already_done checkpoint logic
try:
    ran_this_user_request_already_done
except NameError:
    this_user_request()
    ran_this_user_request_already_done = 1
Note that on the first execution of this code the variable ran_this_user_request_already_done is not defined until after this_user_request() is called.
A simple function you can reuse in many places in your code (based on the other answers here):
def firstrun(keyword, _keys=[]):
    """Returns True only the first time it's called with each keyword."""
    if keyword in _keys:
        return False
    else:
        _keys.append(keyword)
        return True
or equivalently (if you like to rely on other libraries):
from collections import defaultdict
from itertools import count

def firstrun(keyword, _keys=defaultdict(count)):
    """Returns True only the first time it's called with each keyword."""
    return not next(_keys[keyword])
Sample usage:
for i in range(20):
    if firstrun('house'):
        build_house()  # runs only once

if firstrun(42):  # True
    print 'This will print.'
if firstrun(42):  # False
    print 'This will never print.'
I've taken a more flexible approach, inspired by the functools.partial function:
DO_ONCE_MEMORY = []

def do_once(id, func, *args, **kwargs):
    if id not in DO_ONCE_MEMORY:
        DO_ONCE_MEMORY.append(id)
        return func(*args, **kwargs)
    else:
        return None
With this approach you are able to have more complex and explicit interactions:
do_once('foobar', print, "first try")
do_once('foobar', print, "first try")  # same id, so this call is skipped
do_once('bar', print, "second try")
# first try
# second try
The exciting part about this approach is that it can be used anywhere and does not require factories - it's just a small memory tracker.
Depending on the situation, an alternative to the decorator could be the following:
from itertools import chain, repeat

func_iter = chain((myFunction,), repeat(lambda *args, **kwds: None))

while True:
    next(func_iter)()
The idea is based on iterators, which yield the function once (or, using repeat(myFunction, n), n times) and then endlessly yield the lambda that does nothing.
The main advantage is that you don't need a decorator, which sometimes complicates things; here everything happens in a single (to my mind) readable line. The disadvantage is that you have an ugly next in your code.
Performance wise there seems to be not much of a difference, on my machine both approaches have an overhead of around 130 ns.
If the condition check needs to happen only once while you are in the loop, having a flag signaling that you have already run the function helps. In your case you used a counter; a boolean variable would work just as well.
signal = False
count = 0

def callme():
    print "I am being called"

while count < 2:
    if signal == False:
        callme()
        signal = True
    count += 1
I'm not sure that I understood your problem, but I think you can divide the loop into two parts: the part that calls the function and the part without it, and keep the two loops separate.
