How can I validate that a function includes a return keyword? I frequently forget the return line, so I am worried that the users of my package will too when they provide a function-based input.
def appler():
a = "apple"
# `return` is missing
def bananer():
b = "banana"
return b
I could parse the actual code string of the function for a final line that includes "return" but that isn't very robust (it could be triggered by comments).
def validate_funk(funk):
if condition_to_check_that_it_contains_rtrn:
pass
else:
raise ValueError(f"Yikes - The function you provided not contain a `return` statement:\n\n{funk}")
>>> validate_funk(appler)
# triggers ValueError
>>> validate_funk(bananer)
# passes
EDIT: ideally without running the function.
What you actually care about is probably not the return statement itself, but that the function returns something of a certain type. This you can most easily accomplish by using type hints (PEP 484):
def appler() -> str:
a = "apple"
# `return` is missing
def bananer() -> str:
b = "banana"
return b
Now, running a static analysis tool like mypy or Pyre (or many others) will emit a warning for the function appler: its declared return type is str, but without a return statement it implicitly returns None (NoneType).
See sabik's answer for a more general approach. Writing (unit) tests is another good practice that catches many more issues and, if done well, is an investment in code maintainability.
A function without return statement returns None by default.
>>> def abc():
pass
>>> print(abc())
None
>>>
You can add a check using this:
def validate_func(function):
    if function() is None:
raise ValueError("Yikes - Does not contain a `return` statement")
There are a few cons, though:
You have to execute the function
It won't work if the function intentionally returns None
Not very practical, but it is one way. You can also get a list of local functions, or the methods of a class, and loop through them instead of checking each one by hand (see the sketch below).
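For example, a rough sketch of that idea using the standard inspect module (the Fruit class and its methods are made up for illustration, and this still executes each method, with the same caveats as above):
import inspect

class Fruit:
    def appler(self):
        a = "apple"
        # `return` is missing

    def bananer(self):
        b = "banana"
        return b

# Check every method of the class in one pass instead of one by one.
for name, method in inspect.getmembers(Fruit, predicate=inspect.isfunction):
    if method(Fruit()) is None:
        print(f"{name} did not return a value")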
For the question as asked, the ast module will let you check that.
However, it doesn't seem very useful just by itself - as others have pointed out, a function without a return is valid (it returns None), and just because a function does have a return doesn't mean that it returns the correct value, or even any value (the return could be in an if statement).
There are a couple of standard ways of dealing with this:
Unit tests - separate code that calls your function with various combinations of inputs (possibly just one, possibly hundreds) and checks that the answers match the ones you calculated manually, or otherwise satisfy requirements (a small sketch follows at the end of this answer).
A more general implementation of the idea of checking for a return statement is "lint", in the case of Python pylint; it looks through your code and checks for various patterns that look like they could be mistakes. A side benefit is that it already exists and checks dozens of common patterns.
A different, more general implementation is the mypy type checker; it checks not only that there's a return statement, but also that it returns the correct type, as annotated in the function's signature.
Typically, these would be used together with a "gated trunk" development process; manual changes to the main version are forbidden, and only changes which pass the tests, lint and/or mypy are accepted into the main version.
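As a sketch of the unit-test option, using the standard unittest module and the example functions from the question (the expected values are just the ones shown there):
import unittest

class TestFruitFunctions(unittest.TestCase):
    def test_bananer_returns_banana(self):
        self.assertEqual(bananer(), "banana")

    def test_appler_returns_a_string(self):
        # Fails while the `return` is missing, because appler() gives None.
        self.assertIsInstance(appler(), str)

if __name__ == "__main__":
    unittest.main()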
As others have mentioned, simply calling the function is not enough: a return statement might only be present in a conditional, and thus specific input would need to be passed in order to execute it. That, too, is not a good indicator of the presence of a return, since the function could return None, causing greater ambiguity. Instead, the inspect and ast modules can be used:
Test functions:
def appler():
a = "apple"
# `return` is missing
def bananer():
b = "banana"
return b
def deeper_test(val, val1):
if val and val1:
if val+val1 == 10:
return
def gen_func(v):
for i in v:
if isinstance(i, list):
yield from gen_func(i)
else:
yield i
inspect.getsource returns the entire source of the function as a string, which can then be passed to ast.parse. From there, the syntax tree can be recursively traversed, searching for the presence of a return statement:
import inspect, ast
fs = [appler, bananer, deeper_test, gen_func]
def has_return(f_obj):
return isinstance(f_obj, ast.Return) or \
any(has_return(i) for i in getattr(f_obj, 'body', []))
result = {i.__name__:has_return(ast.parse(inspect.getsource(i))) for i in fs}
Output:
{'appler': False, 'bananer': True, 'deeper_test': True, 'gen_func': False}
With a defined validate_funk:
def validate_funk(f):
if not has_return(ast.parse(inspect.getsource(f))):
raise ValueError(f"function '{f.__name__}' does not contain a `return` statement")
return True
Notes:
This solution does not require the test functions to be called.
The solution must be run from a file: if the functions are defined in the interactive shell, inspect.getsource raises an OSError. For the full file, see this GitHub Gist.
You can simplify return checking with a decorator:
def ensure_return(func):
def wrapper(*args, **kwargs):
res = func(*args, **kwargs)
if res is None:
raise ValueError(f'{func} did not return a value')
return res
return wrapper
@ensure_return
def appler():
a = "apple"
# `return` is missing
@ensure_return
def bananer():
b = "banana"
return b
then:
>>> appler()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 5, in wrapper
ValueError: <function appler at 0x7f99d1a01160> did not return a value
>>> bananer()
'banana'
I'm trying to create a function that chains results from multiple arguments.
def hi(string):
    print(string)
return hi
Calling hi("Hello")("World") works and becomes Hello \n World as expected.
The problem is when I want to combine the result into a single string:
return string + hi produces an error since hi is a function.
I've tried using __str__ and __repr__ to change how hi behaves when it has no input, but this only creates a different problem elsewhere.
hi("Hello")("World") = "Hello"("World") -> Naturally produces an error.
I understand why the program cannot solve it, but I cannot find a solution to it.
You're running into difficulty here because the result of each call to the function must itself be callable (so you can chain another function call), while at the same time also being a legitimate string (in case you don't chain another function call and just use the return value as-is).
Fortunately Python has you covered: any type can be made to be callable like a function by defining a __call__ method on it. Built-in types like str don't have such a method, but you can define a subclass of str that does.
class hi(str):
def __call__(self, string):
return hi(self + '\n' + string)
This isn't very pretty and is sorta fragile (i.e. you will end up with regular str objects when you do almost any operation with your special string, unless you override all methods of str to return hi instances instead) and so isn't considered very Pythonic.
In this particular case it wouldn't much matter if you end up with regular str instances when you start using the result, because at that point you're done chaining function calls, or should be in any sane world. However, this is often an issue in the general case where you're adding functionality to a built-in type via subclassing.
To a first approximation, the question in your title can be answered similarly:
class add(int): # could also subclass float
def __call__(self, value):
return add(self + value)
To really do add() right, though, you want to be able to return a callable subclass of the result type, whatever type it may be; it could be something besides int or float. Rather than trying to catalog these types and manually write the necessary subclasses, we can dynamically create them based on the result type. Here's a quick-and-dirty version:
class AddMixIn(object):
def __call__(self, value):
return add(self + value)
def add(value, _classes={}):
t = type(value)
if t not in _classes:
_classes[t] = type("add_" + t.__name__, (t, AddMixIn), {})
return _classes[t](value)
Happily, this implementation works fine for strings, since they can be concatenated using +.
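For instance, a quick check of how the dynamically created classes behave (the 6 and "Hello World" results follow directly from the definitions above; class names like add_int are generated by the factory):
total = add(1)(2)(3)
print(total, type(total).__name__)    # 6 add_int

greeting = add("Hello")(" World")
print(greeting)                       # Hello World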
Once you've started down this path, you'll probably want to do this for other operations too. It's a drag copying and pasting basically the same code for every operation, so let's write a function that writes the functions for you! Just specify a function that actually does the work, i.e., takes two values and does something to them, and it gives you back a function that does all the class munging for you. You can specify the operation with a lambda (anonymous function) or a predefined function, such as one from the operator module. Since it's a function that takes a function and returns a function (well, a callable object), it can also be used as a decorator!
def chainable(operation):
class CallMixIn(object):
def __call__(self, value):
return do(operation(self, value))
def do(value, _classes={}):
t = type(value)
if t not in _classes:
_classes[t] = type(t.__name__, (t, CallMixIn), {})
return _classes[t](value)
return do
add = chainable(lambda a, b: a + b)
# or...
import operator
add = chainable(operator.add)
# or as a decorator...
@chainable
def add(a, b): return a + b
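Used like this (a short sketch; nothing here is specific to add, any two-argument operation passed to chainable behaves the same way):
print(add(1)(2)(3) + 4)               # 10
print(add("Hello")(" ")("World"))     # Hello World

join_lines = chainable(lambda a, b: a + "\n" + b)
print(join_lines("Hello")("World"))   # prints Hello and World on separate lines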
In the end it's still not very pretty and is still sorta fragile and still wouldn't be considered very Pythonic.
If you're willing to use an additional (empty) call to signal the end of the chain, things get a lot simpler, because you just need to return functions until you're called with no argument:
def add(x):
return lambda y=None: x if y is None else add(x+y)
You call it like this:
add(3)(4)(5)() # 12
You are getting into some deep, Haskell-style, type-theoretical issues by having hi return a reference to itself. Instead, just accept multiple arguments and concatenate them in the function.
def hi(*args):
return "\n".join(args)
Some example usages:
print(hi("Hello", "World"))
print("Hello\n" + hi("World"))
I have been working at learning Python over the last week and it has been going really well; however, I have now been introduced to custom functions and I've sort of hit a wall. I understand the basics, such as:
def helloworld():
print("Hello World!")
helloworld()
I know this will print "Hello World!".
However, when it comes to getting information from one function to another, I find that confusing, i.e. when function1 and function2 have to work together to perform a task. I'm also unsure about when to use the return command.
Lastly, there's the case when I have a list or a dictionary inside of a function. I'll make something up just as an example:
def my_function():
my_dict = {"Key1":Value1,
"Key2":Value2,
"Key3":Value3,
"Key4":Value4,}
How would I access the key/value and be able to change them from outside of the function? ie: If I had a program that let you input/output player stats or a character attributes in a video game.
I understand bits and pieces of this, it just confuses me when they have different functions calling on each other.
Also, since this is my first encounter with custom functions: is this really ambitious to pursue, and could that be the reason for all of my confusion? This is the most complex program I have seen yet.
Functions in Python can act both as regular procedures and as functions with a return value. In fact, every Python function returns a value, which might be None.
If a return statement is not present, your function executes completely, leaves normally following the code flow, and yields None as its return value.
def foo():
pass
>>> foo() == None
True
If you have a return statement inside your function, the return value is the value of the expression that follows it. For example, you may write return None and explicitly return None; you may write a bare return and implicitly return None; or you may write return 3 and return the value 3. This can grow in complexity.
def foo():
print('hello')
return
print('world')
>>> foo()
hello
def add(a,b):
return a + b
>>> add(3,4)
7
If you want a dictionary (or any object) you created inside a function, just return it:
def my_function():
my_dict = {"Key1":Value1,
"Key2":Value2,
"Key3":Value3,
"Key4":Value4,}
return my_dict
>>> d = my_function()
>>> d['Key1']
Value1
Those are the basics of function calling. There's even more: there are functions that return functions (used as decorators, for example), you can even return multiple values (not really, you'll just be returning a tuple), and a lot of other fun stuff :)
def two_values():
return 3,4
a,b = two_values()
>>> print(a)
3
>>> print(b)
4
Hope this helps!
The primary way to pass information between functions is with arguments and return values. Functions can't see each other's variables. You might think that after
def my_function():
my_dict = {"Key1":Value1,
"Key2":Value2,
"Key3":Value3,
"Key4":Value4,}
my_function()
my_dict would have a value that other functions would be able to see, but it turns out that's a really brittle way to design a language. Every time you call my_function, my_dict would lose its old value, even if you were still using it. Also, you'd have to know all the names used by every function in the system when picking the names to use when writing a new function, and the whole thing would rapidly become unmanageable. Python doesn't work that way; I can't think of any languages that do.
Instead, if a function needs to make information available to its caller, return the thing its caller needs to see:
def my_function():
return {"Key1":"Value1",
"Key2":"Value2",
"Key3":"Value3",
"Key4":"Value4",}
print(my_function()['Key1']) # Prints Value1
Note that a function ends when its execution hits a return statement (even if it's in the middle of a loop); you can't execute one return now, one return later, keep going, and return two things when you hit the end of the function. If you want to do that, keep a list of things you want to return and return the list when you're done.
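For example, a small sketch of that last idea, collecting results as you go and handing them all back with one return:
def collect_squares(numbers):
    results = []                # things we want to "return" as we find them
    for n in numbers:
        results.append(n * n)
    return results              # a single return gives back everything at once

print(collect_squares([1, 2, 3]))   # [1, 4, 9]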
You send information into and out of functions with arguments and return values, respectively. This function, for example:
def square(number):
"""Return the square of a number."""
return number * number
... receives information through the number argument, and sends information back with the return statement. You can use it like this:
>>> x = square(7)
>>> print(x)
49
As you can see, we passed the value 7 to the function, and it returned the value 49 (which we stored in the variable x).
Now, let's say we have another function:
def halve(number):
"""Return half of a number."""
return number / 2.0
We can send information between two functions in a couple of different ways.
Use a temporary variable:
>>> tmp = square(6)
>>> halve(tmp)
18.0
use the first function directly as an argument to the second:
>>> halve(square(8))
32.0
Which of those you use will depend partly on personal taste, and partly on how complicated the thing you're trying to do is.
Even though they have the same name, the number variables inside square() and halve() are completely separate from each other, and they're invisible outside those functions:
>>> number
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'number' is not defined
So, it's actually impossible to "see" the variable my_dict in your example function. What you would normally do is something like this:
def my_function(my_dict):
# do something with my_dict
return my_dict
... and define my_dict outside the function.
(It's actually a little bit more complicated than that - dict objects are mutable (which just means they can change), so often you don't actually need to return them. However, for the time being it's probably best to get used to returning everything, just to be safe).
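To illustrate that parenthetical, here is a small sketch of mutating a dict in place (the stats dictionary and the level_up name are invented for the example):
def level_up(stats):
    # dicts are mutable, so the change made here is visible to the caller
    stats["level"] = stats.get("level", 0) + 1

player = {"name": "Hero", "level": 1}
level_up(player)          # no return value needed
print(player["level"])    # 2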
I'm trying to make a function that does different things when called on different argument types. Specifically, one of the functions should have the signature
def myFunc(string, string):
and the other should have the signature
def myFunc(list):
How can I do this, given that I'm not allowed to specify whether the arguments are strings or lists?
Python does not support overloading, even by the argument count. You need to do:
def foo(string_or_list, string = None):
if isinstance(string_or_list, list):
...
else:
...
which is pretty silly, or just rethink your design to not have to overload.
There is a recipe at http://code.activestate.com/recipes/577065-type-checking-function-overloading-decorator/ which does what you want;
basically, you wrap each version of your function with @takes and @returns type declarations; when you call the function, it tries each version until it finds one that does not throw a type error.
Edit: here is a cut-down version; it's probably not a good thing to do, but if you gotta, here's how:
from collections import defaultdict
def overloaded_function(overloads):
"""
Accepts a sequence of ((arg_types,), fn) pairs
Creates a dispatcher function
"""
dispatch_table = defaultdict(list)
for arg_types,fn in overloads:
dispatch_table[len(arg_types)].append([list(arg_types),fn])
def dispatch(*args):
for arg_types,fn in dispatch_table[len(args)]:
if all(isinstance(arg, arg_type) for arg,arg_type in zip(args,arg_types)):
return fn(*args)
raise TypeError("could not find an overloaded function to match this argument list")
return dispatch
and here's how it works:
def myfn_string_string(s1, s2):
print("Got the strings {} and {}".format(s1, s2))
def myfn_list(lst):
print("Got the list {}".format(lst))
myfn = overloaded_function([
    ((basestring, basestring), myfn_string_string),  # basestring is Python 2; use str in Python 3
((list,), myfn_list)
])
myfn("abcd", "efg") # prints "Got the strings abcd and efg"
myfn(["abc", "def"]) # prints "Got the list ['abc', 'def']"
myfn(123) # raises TypeError
*args is probably the better way, but you could do something like:
def myFunc(arg1, arg2=None):
if arg2 is not None:
#do this
else:
#do that
But that's probably a terrible way of doing it.
Not a perfect solution, but if the second string argument will never legitimately be None, you could try:
def myFunc( firstArg, secondArg = None ):
if secondArg is None:
# only one arg provided, try treating firstArg as a list
else:
# two args provided, try treating them both as strings
Define it as taking variable arguments:
def myFunc(*args):
Then you can check the amount and type of the arguments via len and isinstance, and route the call to the appropriate case-specific function.
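A rough sketch of that routing (the helper names _from_strings and _from_list are made up for illustration):
def _from_strings(s1, s2):
    print("two strings:", s1, s2)

def _from_list(lst):
    print("one list:", lst)

def myFunc(*args):
    # Route on the number and the types of the arguments.
    if len(args) == 2 and all(isinstance(a, str) for a in args):
        return _from_strings(*args)
    if len(args) == 1 and isinstance(args[0], list):
        return _from_list(args[0])
    raise TypeError("could not match this argument list")

myFunc("abc", "def")        # two strings: abc def
myFunc(["abc", "def"])      # one list: ['abc', 'def']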
It may make for clearer code, however, if you used optional named arguments. It would be better still if you didn't use overloading at all, it's kinda not python's way.
You can't: for instance, a class instance method can be inserted at run time.
If you had multiple __init__ variants for a class, for instance, you'd be better off with multiple @classmethod constructors such as from_strings or from_sequence (a sketch follows below).
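A sketch of what that could look like (the Pair class and its data are invented for the example):
class Pair:
    def __init__(self, first, second):
        self.first = first
        self.second = second

    @classmethod
    def from_strings(cls, s1, s2):
        return cls(s1, s2)

    @classmethod
    def from_sequence(cls, seq):
        return cls(seq[0], seq[1])

p1 = Pair.from_strings("abc", "def")
p2 = Pair.from_sequence(["abc", "def"])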
At the moment, I'm doing stuff like the following, which is getting tedious:
run_once = 0
while 1:
if run_once == 0:
myFunction()
        run_once = 1
I'm guessing there is some more accepted way of handling this stuff?
What I'm looking for is having a function execute once, on demand. For example, at the press of a certain button. It is an interactive app which has a lot of user controlled switches. Having a junk variable for every switch, just for keeping track of whether it has been run or not, seemed kind of inefficient.
I would use a decorator on the function to handle keeping track of how many times it runs.
def run_once(f):
def wrapper(*args, **kwargs):
if not wrapper.has_run:
wrapper.has_run = True
return f(*args, **kwargs)
wrapper.has_run = False
return wrapper
@run_once
def my_function(foo, bar):
return foo+bar
Now my_function will only run once. Other calls to it will return None. Just add an else clause to the if if you want it to return something else. From your example, it doesn't need to return anything ever.
If you don't control the creation of the function, or the function needs to be used normally in other contexts, you can just apply the decorator manually as well.
action = run_once(my_function)
while 1:
if predicate:
action()
This will leave my_function available for other uses.
Finally, if you need the run-once behaviour but want to allow a second run later, you can reset the flag:
action = run_once(my_function)
action() # run once the first time
action.has_run = False
action() # run once the second time
Another option is to set the func_code code object for your function (in Python 3, the attribute is __code__) to be a code object for a function that does nothing. This should be done at the end of your function body.
For example:
def run_once():
# Code for something you only want to execute once
run_once.func_code = (lambda:None).func_code
Here run_once.func_code = (lambda:None).func_code replaces your function's executable code with the code for lambda:None, so all subsequent calls to run_once() will do nothing.
This technique is less flexible than the decorator approach suggested in the accepted answer, but may be more concise if you only have one function you want to run once.
Run the function before the loop. Example:
myFunction()
while True:
# all the other code being executed in your loop
This is the obvious solution. If there's more than meets the eye, the solution may be a bit more complicated.
I'm assuming this is an action that you want to be performed at most one time, if some conditions are met. Since you won't always perform the action, you can't do it unconditionally outside the loop. Something like lazily retrieving some data (and caching it) if you get a request, but not retrieving it otherwise.
def do_something():
[x() for x in expensive_operations]
global action
action = lambda : None
action = do_something
while True:
# some sort of complex logic...
if foo:
action()
There are many ways to do what you want; however, do note that it is quite possible that, as described in the question, you don't have to call the function inside the loop.
If you insist in having the function call inside the loop, you can also do:
needs_to_run= expensive_function
while 1:
…
if needs_to_run: needs_to_run(); needs_to_run= None
…
I've thought of another—slightly unusual, but very effective—way to do this that doesn't require decorator functions or classes. Instead it just uses a mutable keyword argument, which ought to work in most versions of Python. Most of the time these are something to be avoided since normally you wouldn't want a default argument value to change from call-to-call—but that ability can be leveraged in this case and used as a cheap storage mechanism. Here's how that would work:
def my_function1(_has_run=[]):
if _has_run: return
print("my_function1 doing stuff")
_has_run.append(1)
def my_function2(_has_run=[]):
if _has_run: return
print("my_function2 doing some other stuff")
_has_run.append(1)
for i in range(10):
my_function1()
my_function2()
print('----')
my_function1(_has_run=[]) # Force it to run.
Output:
my_function1 doing stuff
my_function2 doing some other stuff
----
my_function1 doing stuff
This could be simplified a little further by doing what @gnibbler suggested in his answer and using an iterator (introduced in Python 2.2):
from itertools import count
def my_function3(_count=count()):
if next(_count): return
print("my_function3 doing something")
for i in range(10):
my_function3()
print('----')
my_function3(_count=count()) # Force it to run.
Output:
my_function3 doing something
----
my_function3 doing something
Here's an answer that doesn't involve reassignment of functions, yet still prevents the need for that ugly "is first" check.
__missing__ is supported by Python 2.5 and above.
def do_once_varname1():
print 'performing varname1'
return 'only done once for varname1'
def do_once_varname2():
print 'performing varname2'
return 'only done once for varname2'
class cdict(dict):
def __missing__(self,key):
val=self['do_once_'+key]()
self[key]=val
return val
cache_dict=cdict(do_once_varname1=do_once_varname1,do_once_varname2=do_once_varname2)
if __name__=='__main__':
print cache_dict['varname1'] # causes 2 prints
print cache_dict['varname2'] # causes 2 prints
print cache_dict['varname1'] # just 1 print
print cache_dict['varname2'] # just 1 print
Output:
performing varname1
only done once for varname1
performing varname2
only done once for varname2
only done once for varname1
only done once for varname2
One object-oriented approach is to make your function a class, a.k.a. a "functor", whose instances automatically keep track of whether they've been run when each instance is created.
Since your updated question indicates you may need many of them, I've updated my answer to deal with that by using a class factory pattern. This is a bit unusual, and it may have been down-voted for that reason (although we'll never know for sure because they never left a comment). It could also be done with a metaclass, but it's not much simpler.
def RunOnceFactory():
class RunOnceBase(object): # abstract base class
_shared_state = {} # shared state of all instances (borg pattern)
has_run = False
def __init__(self, *args, **kwargs):
self.__dict__ = self._shared_state
if not self.has_run:
self.stuff_done_once(*args, **kwargs)
self.has_run = True
return RunOnceBase
if __name__ == '__main__':
class MyFunction1(RunOnceFactory()):
def stuff_done_once(self, *args, **kwargs):
print("MyFunction1.stuff_done_once() called")
class MyFunction2(RunOnceFactory()):
def stuff_done_once(self, *args, **kwargs):
print("MyFunction2.stuff_done_once() called")
for _ in range(10):
MyFunction1() # will only call its stuff_done_once() method once
MyFunction2() # ditto
Output:
MyFunction1.stuff_done_once() called
MyFunction2.stuff_done_once() called
Note: You could make a function/class able to do stuff again by adding a reset() method to its subclass that resets the shared has_run attribute. It's also possible to pass regular and keyword arguments to the stuff_done_once() method when the functor is created and the method is called, if desired.
And, yes, it would be applicable given the information you added to your question.
Assuming there is some reason why myFunction() can't be called before the loop
from itertools import count
for i in count():
if i==0:
myFunction()
Here's an explicit way to code this up, where the state of which functions have been called is kept locally (so global state is avoided). I don't much like the non-explicit forms suggested in other answers: it's too surprising to see f() and for this not to mean that f() gets called.
This works by using dict.pop which looks up a key in a dict, removes the key from the dict, and takes a default value to use in case the key isn't found.
def do_nothing(*args, **kwargs):
pass
# A list of all the functions you want to run just once.
actions = [
my_function,
other_function
]
actions = dict((action, action) for action in actions)
while True:
if some_condition:
actions.pop(my_function, do_nothing)()
if some_other_condition:
actions.pop(other_function, do_nothing)()
I use the cached_property decorator from functools to run a method just once and cache the value. Example from the official documentation https://docs.python.org/3/library/functools.html
import statistics
from functools import cached_property

class DataSet:
def __init__(self, sequence_of_numbers):
self._data = tuple(sequence_of_numbers)
    @cached_property
def stdev(self):
return statistics.stdev(self._data)
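Used roughly like this (the numbers are arbitrary; the point is that statistics.stdev only runs on the first access):
ds = DataSet([2, 4, 4, 4, 5, 5, 7, 9])
ds.stdev   # computed and cached on first access
ds.stdev   # later accesses return the cached value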
You can also use one of the standard library functools.lru_cache or functools.cache decorators in front of the function:
from functools import lru_cache
@lru_cache
def expensive_function():
return None
https://docs.python.org/3/library/functools.html
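A small check that the body only runs once (a sketch; the calls list is just there to make the behaviour visible, and the bare @lru_cache form needs Python 3.8+, otherwise write @lru_cache()):
from functools import lru_cache

calls = []

@lru_cache
def expensive_function():
    calls.append(1)          # records each time the body actually runs
    return "expensive result"

expensive_function()
expensive_function()
print(len(calls))            # 1, the second call was served from the cache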
If I understand the updated question correctly, something like this should work
def function1():
print "function1 called"
def function2():
print "function2 called"
def function3():
print "function3 called"
called_functions = set()
while True:
n = raw_input("choose a function: 1,2 or 3 ")
func = {"1": function1,
"2": function2,
"3": function3}.get(n)
if func in called_functions:
print "That function has already been called"
else:
called_functions.add(func)
func()
You have all those 'junk variables' outside of your mainline while True loop. To make the code easier to read those variables can be brought inside the loop, right next to where they are used. You can also set up a variable naming convention for these program control switches. So for example:
# _already_done checkpoint logic
try:
ran_this_user_request_already_done
except NameError:
this_user_request()
ran_this_user_request_already_done = 1
Note that on the first execution of this code the variable ran_this_user_request_already_done is not defined until after this_user_request() is called.
A simple function you can reuse in many places in your code (based on the other answers here):
def firstrun(keyword, _keys=[]):
"""Returns True only the first time it's called with each keyword."""
if keyword in _keys:
return False
else:
_keys.append(keyword)
return True
or equivalently (if you like to rely on other libraries):
from collections import defaultdict
from itertools import count
def firstrun(keyword, _keys=defaultdict(count)):
"""Returns True only the first time it's called with each keyword."""
    return not next(_keys[keyword])
Sample usage:
for i in range(20):
if firstrun('house'):
build_house() # runs only once
if firstrun(42): # True
print 'This will print.'
if firstrun(42): # False
print 'This will never print.'
I've taken a more flexible approach, inspired by the functools.partial function:
DO_ONCE_MEMORY = []
def do_once(id, func, *args, **kwargs):
if id not in DO_ONCE_MEMORY:
DO_ONCE_MEMORY.append(id)
return func(*args, **kwargs)
else:
return None
With this approach you are able to have more complex and explicit interactions:
do_once('foobar', print, "first try")
do_once('foo', print, "first try")
do_once('bar', print, "second try")
# first try
# first try
# second try
The nice part about this approach is that it can be used anywhere and does not require factories; it's just a small memory tracker.
Depending on the situation, an alternative to the decorator could be the following:
from itertools import chain, repeat
func_iter = chain((myFunction,), repeat(lambda *args, **kwds: None))
while True:
next(func_iter)()
The idea is based on iterators, which yield the function once (or n times, using repeat(myFunction, n)) and then endlessly yield the lambda that does nothing.
The main advantage is that you don't need a decorator which sometimes complicates things, here everything happens in a single (to my mind) readable line. The disadvantage is that you have an ugly next in your code.
Performance wise there seems to be not much of a difference, on my machine both approaches have an overhead of around 130 ns.
If the condition check needs to happen only once while you are in the loop, a flag signaling that you have already run the function helps. In this case a counter was used, but a boolean variable would work just as well.
signal = False
count = 0
def callme():
print "I am being called"
while count < 2:
if signal == False :
callme()
signal = True
count +=1
I'm not sure that I understood your problem, but I think you can divide the loop into two parts: the part that calls the function and the part without it, and keep the two loops separate.
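A rough sketch of that idea (do_the_rest_of_the_work is a placeholder for whatever else the loop body does):
# the part of the work that includes the function, done once
myFunction()
do_the_rest_of_the_work()

# the loop for the remaining iterations, without the function
while True:
    do_the_rest_of_the_work()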