Is there a way to stop class instances from calling each other? - python

I have multiple class instances that call each other's functions. I also have a system that detects if these functions call each other for too long (to avoid stack overflow). However, when it detects that, there's nothing it can do to actually stop them, so they just keep running until they hit the recursion limit. Here's a simpler example:
from time import sleep

class test:
    def activateOther(self, other):
        sleep(2)
        print('Activated Other Function', id(self))
        other.activateOther(self)

t = test()
t_ = test()
t.activateOther(t_)
del t, t_  # Even after deleting the variables/references, they continue running
Is there a way to actually stop these functions from running endlessly and hitting the recursion limit? If not, I suppose I'll try to add a variable to each class indicating whether they should continue running or not.

Indeed, this is a typical recursion issue. There must be a condition in the code for when the recursion stops. The easiest fix is to introduce a depth parameter:
from time import sleep

class test:
    def activateOther(self, other, depth=0):
        if depth > 88:
            return
        sleep(2)
        print('Activated Other Function', id(self))
        other.activateOther(self, depth + 1)

t = test()
t_ = test()
t.activateOther(t_)
The actual condition, and whether a depth counter suffices, depends of course on your application.
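Alternatively, the flag-based approach the question already suggests works well: give each instance a flag that the watchdog can clear. A minimal sketch (the running attribute is an addition, not part of the original code):

from time import sleep

class test:
    def __init__(self):
        self.running = True  # a watchdog clears this to stop the chain

    def activateOther(self, other):
        if not (self.running and other.running):
            return  # unwind as soon as either flag is cleared
        sleep(2)
        print('Activated Other Function', id(self))
        other.activateOther(self)

t = test()
t_ = test()
t.activateOther(t_)
# From another thread, a watchdog can set t.running = False and the
# mutual calls stop at the next check.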

Related

How to test a function which must infinitely loop in some cases?

I am facing a problem: I have developed a function that must, in some cases, end up in an infinite loop.
Here is an example:
def my_function(a=False):
    if not a:
        return True
    else:
        while True:
            pass
In order to test my code, I need to deal with this function.
I know about the halting problem, so I want my test to wait 5 seconds for a return value.
If there is none, it must shut down the function and return a specific value, 'is_looping'.
For this, I want to use Python's mock.
To give you an idea, here is the code:
import pytest
import mock

def my_function(a=False):
    if not a:
        return True
    else:
        while True:
            pass

def test_my_function():
    my_function = mock.Mock(return_value='is_looping')

if __name__ == '__main__':
    pytest.main(['-vv'])
My problem is that I don't know how to tell my mock object to adopt this behavior.
Can you help me? Do you have another solution that fits my requirements?
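One way to get the requested behavior, sketched here rather than taken from the original post, is to skip mocking entirely and run the function in a separate process that can be killed after 5 seconds. The helper name call_with_timeout is hypothetical:

import multiprocessing

def _worker(queue, func, args):
    queue.put(func(*args))

def call_with_timeout(func, args=(), timeout=5):
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=_worker, args=(queue, func, args))
    proc.start()
    proc.join(timeout)       # wait at most `timeout` seconds
    if proc.is_alive():      # never returned: treat it as an infinite loop
        proc.terminate()
        proc.join()
        return 'is_looping'
    return queue.get()

def test_my_function():
    assert call_with_timeout(my_function, (False,)) is True
    assert call_with_timeout(my_function, (True,)) == 'is_looping'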

Which Python idiom has been used to call functions in this example code?

I'm trying to figure out which Python idiom is being used in the following lines.
state = state0
while state:
    state = state()
I'm confused about why state0 is used here instead of state0(), and what the line state = state() is doing. Why isn't it state0()?
from random import random
from time import sleep

def state0():
    print("state0")
    # delay and decision path to simulate some application logic
    sleep(.5)
    if random() > .5:
        return state1
    else:
        return state2

def state1():
    print("state1")
    # delay and decision path to simulate some application logic
    sleep(.5)
    if random() > .5:
        return state0
    else:
        return state2

def state2():
    print("state2")
    # delay and decision path to simulate some application logic
    sleep(.5)
    if random() > .5:
        return state0
    else:
        return None

state = state0  # initial state
while state:
    state = state()  # launch state machine
print("Done with states")
Python allows variables to hold function references. What is happening in your example is essentially a longer version of this. First, the function state0 is assigned to state. Since there are no parentheses, Python does not call the function, but rather assigns the function itself to the variable.
The while loop simply checks whether state is truthy (i.e. not None, not an empty list, etc.). In every iteration, state is assigned a new function by calling the function it previously referred to, until state2 returns None, which ends the loop.
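A minimal illustration of the difference between referencing and calling a function:

def greet():
    return "hello"

f = greet     # no parentheses: f is now another name for the function
print(f())    # parentheses actually call it, so this prints "hello"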
As others have mentioned, you can assign a function to a variable and execute it later as needed.
Here is a simple example of how it might be used (though a bit artificial).
Suppose that you really want to write a recursive function that computes factorial. Easy enough, you can write something like this.
def fact(n, res=1):
    return res if n == 0 else fact(n - 1, res=res * n)
The problem is that if you try to execute this for a large n, say 10000, it will overflow the call stack and you will get:
RecursionError: maximum recursion depth exceeded in comparison
One way to work around this problem is to return a function instead of the result and then execute it on your own.
def fact(n, res=1):
    return res if n == 0 else lambda: fact(n - 1, res=res * n)
Now you can call it like this.
fact(5)()()()()()
Which will give you the correct result. The advantage is that you are no longer limited by the size of the call stack. Obviously, you wouldn't want to write all those parentheses, so you can write a loop that does it for you. The loop checks whether the result is callable and, based on that, either executes it or returns the final result.
def loop(func, n):
    res = func(n)
    while hasattr(res, "__call__"):
        res = res()
    return res
Now you can use your fact function with n being 10000, without overflowing the call stack, by calling
loop(fact, 10000)
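As a small aside, callable(res) is the idiomatic spelling of the hasattr(res, "__call__") check, so the trampoline could equivalently be written as:

def loop(func, n):
    res = func(n)
    while callable(res):  # keep unwrapping until we get a real value
        res = res()
    return res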

Python Twisted's DeferredLock

Can someone provide an example and explain when and how to use Twisted's DeferredLock?
I have a DeferredQueue and I think I have a race condition I want to prevent, but I'm unsure how to combine the two.
Use a DeferredLock when you have a critical section that is asynchronous and needs to be protected from overlapping (one might say "concurrent") execution.
Here is an example of such an asynchronous critical section:
class NetworkCounter(object):
    def __init__(self):
        self._count = 0

    def next(self):
        self._count += 1
        recording = self._record(self._count)
        def recorded(ignored):
            return self._count
        recording.addCallback(recorded)
        return recording

    def _record(self, value):
        return http.GET(
            b"http://example.com/record-count?value=%d" % (value,))
See how two concurrent uses of the next method will produce "corrupt" results:
from __future__ import print_function

counter = NetworkCounter()
d1 = counter.next()
d2 = counter.next()
d1.addCallback(print, "d1")
d2.addCallback(print, "d2")
Gives the result:
2 d1
2 d2
This is because the second call to NetworkCounter.next begins before the first call to that method has finished using the _count attribute to produce its result. The two operations share the single attribute and produce incorrect output as a consequence.
Using a DeferredLock instance will solve this problem by preventing the second operation from beginning until the first operation has completed. You can use it like this:
class NetworkCounter(object):
    def __init__(self):
        self._count = 0
        self._lock = DeferredLock()

    def next(self):
        return self._lock.run(self._next)

    def _next(self):
        self._count += 1
        recording = self._record(self._count)
        def recorded(ignored):
            return self._count
        recording.addCallback(recorded)
        return recording

    def _record(self, value):
        return http.GET(
            b"http://example.com/record-count?value=%d" % (value,))
First, notice that the NetworkCounter instance creates its own DeferredLock instance. Each instance of DeferredLock is distinct and operates independently from any other instance. Any code that participates in the use of a critical section needs to use the same DeferredLock instance in order for that critical section to be protected. If two NetworkCounter instances somehow shared state then they would also need to share a DeferredLock instance - not create their own private instance.
Next, see how DeferredLock.run is used to call the new _next method (into which all of the application logic has been moved). Neither NetworkCounter nor the application code using it calls the method that contains the critical section; DeferredLock is given responsibility for doing this. This is how DeferredLock can prevent the critical section from being run by multiple operations at the "same" time. Internally, DeferredLock keeps track of whether an operation has started and not yet finished. It can only keep track of an operation's completion if that completion is represented as a Deferred, though. If you are familiar with Deferreds, you probably already guessed that the (hypothetical) HTTP client API in this example, http.GET, returns a Deferred that fires when the HTTP request has completed. If you are not familiar with them yet, you should go read about them now.
Once the Deferred that represents the result of the operation fires - in other words, once the operation is done, DeferredLock will consider the critical section "out of use" and allow another operation to begin executing it. It will do this by checking to see if any code has tried to enter the critical section while the critical section was in use and if so it will run the function for that operation.
Third, notice that in order to serialize access to the critical section, DeferredLock.run must return a Deferred. If the critical section is in use and DeferredLock.run is called it cannot start another operation. Therefore, instead, it creates and returns a new Deferred. When the critical section goes out of use, the next operation can start and when that operation completes, the Deferred returned by the DeferredLock.run call will get its result. This all ends up looking rather transparent to any users who are already expecting a Deferred - it just means the operation appears to take a little longer to complete (though the truth is that it likely takes the same amount of time to complete but has it wait a while before it starts - the effect on the wall clock is the same though).
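To make the serialization concrete, here is a small self-contained sketch (not from the original answer) using the real twisted.internet.defer.DeferredLock API; the simulated work and the names are made up for illustration:

from twisted.internet import defer, task

lock = defer.DeferredLock()
order = []

def critical_section(reactor, name):
    # Simulated asynchronous work; because of the lock, "first"
    # always finishes before "second" starts.
    order.append(name + " started")
    return task.deferLater(reactor, 0.1, order.append, name + " finished")

def main(reactor):
    d1 = lock.run(critical_section, reactor, "first")
    d2 = lock.run(critical_section, reactor, "second")
    done = defer.gatherResults([d1, d2])
    return done.addCallback(lambda _: print(order))

task.react(main)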
Of course, you can achieve a concurrent-use safe NetworkCounter more easily than all this by simply not sharing state in the first place:
class NetworkCounter(object):
    def __init__(self):
        self._count = 0

    def next(self):
        self._count += 1
        result = self._count
        recording = self._record(self._count)
        def recorded(ignored):
            return result
        recording.addCallback(recorded)
        return recording

    def _record(self, value):
        return http.GET(
            b"http://example.com/record-count?value=%d" % (value,))
This version moves the state used by NetworkCounter.next to produce a meaningful result for the caller out of the instance dictionary (ie, it is no longer an attribute of the NetworkCounter instance) and into the call stack (ie, it is now a closed over variable associated with the actual frame that implements the method call). Since each call creates a new frame and a new closure, concurrent calls are now independent and no locking of any sort is required.
Finally, notice that even though this modified version of NetworkCounter.next still uses self._count which is shared amongst all calls to next on a single NetworkCounter instance this can't cause any problems for the implementation when it is used concurrently. In a cooperative multitasking system such as the one primarily used with Twisted, there are never context switches in the middle of functions or operations. There cannot be a context switch from one operation to another in between the self._count += 1 and result = self._count lines. They will always execute atomically and you don't need locks around them to avoid re-entrancy or concurrency induced corruption.
These last two points - avoiding concurrency bugs by avoiding shared state and the atomicity of code inside a function - combined means that DeferredLock isn't often particularly useful. As a single data point, in the roughly 75 KLOC in my current work project (heavily Twisted based), there are no uses of DeferredLock.

set python recursion limit for a function

I have 2 solutions to a recursion problem that I need for a function (actually a method). I want it to be recursive, but I want to set the recursion limit to 10 and reset it after the function is called (or not mess with the recursion limit at all). Can anyone think of a better way to do this or recommend using one over the other? I'm leaning towards the context manager because it keeps my code cleaner and avoids setting the tracebacklimit, but are there caveats?
import sys

def func(i=1):
    print i
    if i > 10:
        import sys
        sys.tracebacklimit = 1
        raise ValueError("Recursion Limit")
    i += 1
    func(i)
class recursion_limit(object):
    def __init__(self, val):
        self.val = val
        self.old_val = sys.getrecursionlimit()

    def __enter__(self):
        sys.setrecursionlimit(self.val)

    def __exit__(self, *args):
        sys.setrecursionlimit(self.old_val)
        raise ValueError("Recursion Limit")

def func2(i=1):
    """
    Call as

    with recursion_limit(12):
        func2()
    """
    print i
    i += 1
    func2(i)

if __name__ == "__main__":
    # print 'Running func1'
    # func()
    with recursion_limit(12):
        func2()
I do see some odd behavior with the context manager, though. If I put this in main:

with recursion_limit(12):
    func2()

it prints 1 to 10. If I do the same from the interpreter, it prints 1 to 11. I assume there is something going on under the hood when I import things?
EDIT: For posterity this is what I have come up with for a function that knows its call depth. I doubt I'd use it in any production code, but it gets the job done.
import sys
import inspect

class KeepTrack(object):
    def __init__(self):
        self.calldepth = sys.maxint

    def func(self):
        zero = len(inspect.stack())
        if zero < self.calldepth:
            self.calldepth = zero
        i = len(inspect.stack())
        print i - self.calldepth
        if i - self.calldepth < 9:
            self.func()

keeping_track = KeepTrack()
keeping_track.func()
You shouldn't change the system recursion limit at all. You should code your function to know how deep it is, and end the recursion when it gets too deep.
The reason the recursion limit seems differently applied in your program and the interpreter is because they have different tops of stack: the functions invoked in the interpreter to get to the point of running your code.
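A minimal sketch of what "knowing how deep it is" could look like, applied to the func from the question (the depth parameter is an addition for illustration):

def func(i=1, depth=1):
    print i
    if depth >= 10:  # the function enforces its own limit
        raise ValueError("Recursion Limit")
    func(i + 1, depth + 1)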
While somewhat tangential (I'd have put it in a comment, but I don't think there's room), it should be noted that setrecursionlimit is somewhat misleadingly named - it actually sets the maximum stack depth:
http://docs.python.org/library/sys.html#sys.setrecursionlimit
That's why the function behaves differently depending on where you call it from. Also, if func2 were to make a stdlib call (or whatever) that ended up calling a number of functions such that it added more than N to the stack, the exception would trigger early.
Also, I wouldn't change sys.tracebacklimit either; that will have an effect on the rest of your program. Go with Ned's answer.
Ignoring the more general issues, it looks like you can get the current frame depth by looking at the length of inspect.getouterframes(). That would give you a "zero point" from which you can set the depth limit (disclaimer: I haven't tried this).
Edit: or len(inspect.stack()) - it's not clear to me what the difference is. I would be interested to know whether this works, and whether the two are different.
I'd definitely choose the first approach: it is simpler and self-explanatory. After all, the recursion limit is your explicit choice, so why obfuscate it?

Efficient way of having a function only execute once in a loop

At the moment, I'm doing stuff like the following, which is getting tedious:
run_once = 0
while 1:
    if run_once == 0:
        myFunction()
        run_once = 1
I'm guessing there is some more accepted way of handling this stuff?
What I'm looking for is having a function execute once, on demand. For example, at the press of a certain button. It is an interactive app which has a lot of user controlled switches. Having a junk variable for every switch, just for keeping track of whether it has been run or not, seemed kind of inefficient.
I would use a decorator on the function to handle keeping track of how many times it runs.
def run_once(f):
    def wrapper(*args, **kwargs):
        if not wrapper.has_run:
            wrapper.has_run = True
            return f(*args, **kwargs)
    wrapper.has_run = False
    return wrapper

@run_once
def my_function(foo, bar):
    return foo + bar
Now my_function will only run once. Other calls to it will return None. Just add an else clause to the if if you want it to return something else. From your example, it doesn't need to return anything ever.
If you don't control the creation of the function, or the function needs to be used normally in other contexts, you can just apply the decorator manually as well.
action = run_once(my_function)
while 1:
    if predicate:
        action()
This will leave my_function available for other uses.
Finally, if you need to run it once, and then once more later, you can just do
action = run_once(my_function)
action()  # run once the first time
action.has_run = False
action()  # run once the second time
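One caveat, added here rather than in the original answer: the has_run check is not thread-safe. If the function may be called from several threads at once, a lock around the check avoids running it twice; a sketch:

import threading

def run_once(f):
    lock = threading.Lock()
    def wrapper(*args, **kwargs):
        with lock:  # only one thread can pass the check at a time
            if not wrapper.has_run:
                wrapper.has_run = True
                return f(*args, **kwargs)
    wrapper.has_run = False
    return wrapper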
Another option is to set the func_code code object for your function to be a code object for a function that does nothing. This should be done at the end of your function body.
For example:
def run_once():
    # Code for something you only want to execute once
    run_once.func_code = (lambda: None).func_code
Here run_once.func_code = (lambda:None).func_code replaces your function's executable code with the code for lambda:None, so all subsequent calls to run_once() will do nothing.
This technique is less flexible than the decorator approach suggested in the accepted answer, but may be more concise if you only have one function you want to run once.
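Note that func_code is the Python 2 attribute name; in Python 3 the same trick would use __code__ instead (a sketch):

def run_once():
    print("code for something you only want to execute once")
    run_once.__code__ = (lambda: None).__code__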
Run the function before the loop. Example:
myFunction()
while True:
    ...  # all the other code being executed in your loop
This is the obvious solution. If there's more than meets the eye, the solution may be a bit more complicated.
I'm assuming this is an action that you want to be performed at most one time, if some conditions are met. Since you won't always perform the action, you can't do it unconditionally outside the loop. Something like lazily retrieving some data (and caching it) if you get a request, but not retrieving it otherwise.
def do_something():
    [x() for x in expensive_operations]
    global action
    action = lambda: None

action = do_something
while True:
    # some sort of complex logic...
    if foo:
        action()
There are many ways to do what you want; however, do note that it is quite possible that, as described in the question, you don't have to call the function inside the loop.
If you insist on having the function call inside the loop, you can also do:
needs_to_run = expensive_function
while 1:
    …
    if needs_to_run: needs_to_run(); needs_to_run = None
    …
I've thought of another, slightly unusual but very effective, way to do this that doesn't require decorator functions or classes. Instead it just uses a mutable keyword argument, which ought to work in most versions of Python. Most of the time these are something to be avoided, since normally you wouldn't want a default argument value to change from call to call, but that ability can be leveraged in this case and used as a cheap storage mechanism. Here's how that would work:
def my_function1(_has_run=[]):
    if _has_run:
        return
    print("my_function1 doing stuff")
    _has_run.append(1)

def my_function2(_has_run=[]):
    if _has_run:
        return
    print("my_function2 doing some other stuff")
    _has_run.append(1)

for i in range(10):
    my_function1()
    my_function2()

print('----')
my_function1(_has_run=[])  # Force it to run.
Output:
my_function1 doing stuff
my_function2 doing some other stuff
----
my_function1 doing stuff
This could be simplified a little further by doing what @gnibbler suggested in his answer and using an iterator (iterators were introduced in Python 2.2):
from itertools import count

def my_function3(_count=count()):
    if next(_count):
        return
    print("my_function3 doing something")

for i in range(10):
    my_function3()

print('----')
my_function3(_count=count())  # Force it to run.
Output:
my_function3 doing something
----
my_function3 doing something
Here's an answer that doesn't involve reassignment of functions, yet still avoids the need for that ugly "is first" check.
__missing__ is supported by Python 2.5 and above.
def do_once_varname1():
    print 'performing varname1'
    return 'only done once for varname1'

def do_once_varname2():
    print 'performing varname2'
    return 'only done once for varname2'

class cdict(dict):
    def __missing__(self, key):
        val = self['do_once_' + key]()
        self[key] = val
        return val

cache_dict = cdict(do_once_varname1=do_once_varname1, do_once_varname2=do_once_varname2)

if __name__ == '__main__':
    print cache_dict['varname1']  # causes 2 prints
    print cache_dict['varname2']  # causes 2 prints
    print cache_dict['varname1']  # just 1 print
    print cache_dict['varname2']  # just 1 print
Output:
performing varname1
only done once for varname1
performing varname2
only done once for varname2
only done once for varname1
only done once for varname2
One object-oriented approach is to make your function a class, a.k.a. a "functor", whose instances automatically keep track of whether they've been run when each instance is created.
Since your updated question indicates you may need many of them, I've updated my answer to deal with that by using a class factory pattern. This is a bit unusual, and it may have been down-voted for that reason (although we'll never know for sure because they never left a comment). It could also be done with a metaclass, but it's not much simpler.
def RunOnceFactory():
    class RunOnceBase(object):  # abstract base class
        _shared_state = {}  # shared state of all instances (borg pattern)
        has_run = False
        def __init__(self, *args, **kwargs):
            self.__dict__ = self._shared_state
            if not self.has_run:
                self.stuff_done_once(*args, **kwargs)
                self.has_run = True
    return RunOnceBase

if __name__ == '__main__':
    class MyFunction1(RunOnceFactory()):
        def stuff_done_once(self, *args, **kwargs):
            print("MyFunction1.stuff_done_once() called")

    class MyFunction2(RunOnceFactory()):
        def stuff_done_once(self, *args, **kwargs):
            print("MyFunction2.stuff_done_once() called")

    for _ in range(10):
        MyFunction1()  # will only call its stuff_done_once() method once
        MyFunction2()  # ditto
Output:
MyFunction1.stuff_done_once() called
MyFunction2.stuff_done_once() called
Note: You could make a function/class able to do stuff again by adding a reset() method to its subclass that resets the shared has_run attribute. It's also possible to pass regular and keyword arguments to the stuff_done_once() method when the functor is created and the method is called, if desired.
And, yes, it would be applicable given the information you added to your question.
Assuming there is some reason why myFunction() can't be called before the loop
from itertools import count

for i in count():
    if i == 0:
        myFunction()
Here's an explicit way to code this up, where the state of which functions have been called is kept locally (so global state is avoided). I don't much like the non-explicit forms suggested in other answers: it's too surprising to see f() and for this not to mean that f() gets called.
This works by using dict.pop which looks up a key in a dict, removes the key from the dict, and takes a default value to use in case the key isn't found.
def do_nothing(*args, **kwargs):
    pass

# A list of all the functions you want to run just once.
actions = [
    my_function,
    other_function,
]
actions = dict((action, action) for action in actions)

while True:
    if some_condition:
        actions.pop(my_function, do_nothing)()
    if some_other_condition:
        actions.pop(other_function, do_nothing)()
I use the cached_property decorator from functools to run a method just once and cache the value. Example from the official documentation: https://docs.python.org/3/library/functools.html
import statistics
from functools import cached_property

class DataSet:
    def __init__(self, sequence_of_numbers):
        self._data = tuple(sequence_of_numbers)

    @cached_property
    def stdev(self):
        return statistics.stdev(self._data)
You can also use one of the standard library functools.lru_cache or functools.cache decorators in front of the function:
from functools import lru_cache

@lru_cache
def expensive_function():
    return None
https://docs.python.org/3/library/functools.html
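For a zero-argument function like the one above, the effect is exactly the run-once behavior asked about:

expensive_function()  # the body runs and the result is cached
expensive_function()  # returns the cached result; the body does not run again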
If I understand the updated question correctly, something like this should work
def function1():
    print "function1 called"

def function2():
    print "function2 called"

def function3():
    print "function3 called"

called_functions = set()

while True:
    n = raw_input("choose a function: 1, 2 or 3 ")
    func = {"1": function1,
            "2": function2,
            "3": function3}.get(n)
    if func in called_functions:
        print "That function has already been called"
    else:
        called_functions.add(func)
        func()
You have all those 'junk variables' outside of your mainline while True loop. To make the code easier to read those variables can be brought inside the loop, right next to where they are used. You can also set up a variable naming convention for these program control switches. So for example:
# _already_done checkpoint logic
try:
    ran_this_user_request_already_done
except NameError:
    this_user_request()
    ran_this_user_request_already_done = 1
Note that on the first execution of this code the variable ran_this_user_request_already_done is not defined until after this_user_request() is called.
A simple function you can reuse in many places in your code (based on the other answers here):
def firstrun(keyword, _keys=[]):
    """Returns True only the first time it's called with each keyword."""
    if keyword in _keys:
        return False
    else:
        _keys.append(keyword)
        return True
or equivalently (if you like to rely on other libraries):
from collections import defaultdict
from itertools import count

def firstrun(keyword, _keys=defaultdict(count)):
    """Returns True only the first time it's called with each keyword."""
    return not _keys[keyword].next()
Sample usage:
for i in range(20):
    if firstrun('house'):
        build_house()  # runs only once
    if firstrun(42):  # True
        print 'This will print.'
    if firstrun(42):  # False
        print 'This will never print.'
I've taken a more flexible approach, inspired by the functools.partial function:
DO_ONCE_MEMORY = []

def do_once(id, func, *args, **kwargs):
    if id not in DO_ONCE_MEMORY:
        DO_ONCE_MEMORY.append(id)
        return func(*args, **kwargs)
    else:
        return None
With this approach you are able to have more complex and explicit interactions:
do_once('foobar', print, "first try")
do_once('foobar', print, "first try")  # same id: skipped
do_once('bar', print, "second try")
# first try
# second try
The nice part about this approach is that it can be used anywhere and does not require factories - it's just a small memory tracker.
Depending on the situation, an alternative to the decorator could be the following:
from itertools import chain, repeat

func_iter = chain((myFunction,), repeat(lambda *args, **kwds: None))
while True:
    next(func_iter)()
The idea is based on iterators, which yield the function once (or, using repeat(myFunction, n), n times) and then endlessly yield the lambda that does nothing.
The main advantage is that you don't need a decorator, which sometimes complicates things; here everything happens in a single (to my mind) readable line. The disadvantage is that you have an ugly next in your code.
Performance-wise, there doesn't seem to be much of a difference; on my machine both approaches have an overhead of around 130 ns.
If the condition check needs to happen only once while you are in the loop, a flag signaling that you have already run the function helps. In this case a counter was used, but a boolean variable works just as well.
signal = False
count = 0

def callme():
    print "I am being called"

while count < 2:
    if signal == False:
        callme()
        signal = True
    count += 1
I'm not sure I understood your problem, but I think you can split the loop into two parts: one that includes the function call and one without it, and run the two loops one after the other.
