How to jump out of a dead loop automatically in Python?

I have a "do..., until..." structure in Python as follows:
while True:
    if foo() == bar():
        break
It works fine (eventually breaks out) in most cases. However, in some cases where the condition is never met, it gets stuck.
Figuring out what those cases are is difficult, because there is essentially a random process behind it. So I want to set a timeout for the while loop.
Say, if the loop has been running for 1 s but still has not stopped, I want it to terminate itself.
How may I do this?
Update: Here is the actual code:
while True:
    possibleJunctions = junctionReachability[junctions.index(currentJunction)]
    nextJunction = random.choice(filter(lambda (jx, jy): (jx - currentJunction[0]) * (endJunction[0] - currentJunction[0]) > 0 or (jy - currentJunction[1]) * (endJunction[1] - currentJunction[1]) > 0, possibleJunctions) or possibleJunctions)
    if previousJunction != nextJunction:  # never go back
        junctionSequence.append(nextJunction)
        previousJunction = currentJunction
        currentJunction = nextJunction
    if currentJunction == endJunction:
        break

import time

loop_start = time.time()
while time.time() - loop_start <= 1:
    if foo() == bar():
        break

EDIT
Dan Doe's solution is the simplest and best if your code is synchronous (runs in a single thread) and you know that the foo and bar functions always terminate within some period of time.
If you have asynchronous code (such as a GUI), or if the foo and bar functions you use to test the termination condition can themselves take too long to complete, then read on.
Run the loop inside a separate thread/process. Run a timer in another process. Once the timer expires, set a flag that causes the loop to terminate.
Something like this (warning: untested code):
import multiprocessing
import time

SECONDS = 10

def worker(event):
    """Does stuff until work is complete, or until signaled to terminate by the timer."""
    while not event.is_set():
        if foo() == bar():
            break

def timer(event):
    """Signals the worker to terminate after SECONDS have elapsed."""
    time.sleep(SECONDS)
    event.set()

def main():
    """Kicks off subprocesses and waits for both of them to terminate."""
    # Pass the Event explicitly so this also works with the "spawn" start
    # method (e.g. on Windows), where module-level globals are re-created
    # in each child process.
    event = multiprocessing.Event()
    worker_process = multiprocessing.Process(target=worker, args=(event,))
    timer_process = multiprocessing.Process(target=timer, args=(event,))
    timer_process.start()
    worker_process.start()
    timer_process.join()
    worker_process.join()

if __name__ == "__main__":
    main()
If you were worried about the foo and bar functions taking too long to complete, you could explicitly terminate the worker process from within the timer process.
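For instance, a minimal untested sketch, reusing the hypothetical foo and bar from above. This is a simpler arrangement than the two-subprocess version: the parent process plays the timer role itself via join(timeout) and hard-kills the worker if it overruns:
import multiprocessing

def worker():
    while True:
        if foo() == bar():  # hypothetical termination test, as above
            break

if __name__ == "__main__":
    worker_process = multiprocessing.Process(target=worker)
    worker_process.start()
    worker_process.join(timeout=10)   # wait at most 10 seconds
    if worker_process.is_alive():
        worker_process.terminate()    # hard kill; any cleanup in worker is skipped
        worker_process.join()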

I recommend using a counter. This is a common trick to detect non-convergence.
import sys

maxiter = 10000
while True:
    if stopCondition():
        break
    maxiter = maxiter - 1
    if maxiter <= 0:
        print >>sys.stderr, "Did not converge."
        break
This has the least overhead and usually adapts best to different CPUs: even on a faster CPU you want the same termination behavior, which a time-based timeout would not give you.
However, it would be even better to detect being stuck, e.g. with some criterion function that no longer improves, as sketched below.
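For example (a hedged sketch: score() is a hypothetical criterion function where higher is better), you could stop once the criterion has failed to improve for a fixed number of consecutive iterations:
import sys

patience = 100            # iterations we tolerate without improvement
best = float("-inf")
stalled = 0
while True:
    if stopCondition():
        break
    current = score()     # hypothetical criterion function
    if current > best:    # progress: reset the stall counter
        best = current
        stalled = 0
    else:
        stalled += 1
        if stalled >= patience:
            print >>sys.stderr, "No improvement in %d iterations; giving up." % patience
            break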

Related

Python - Why doesn't multithreading increase the speed of my code?

I tried improving my code by running this with and without using two threads:
from threading import Lock
from threading import Thread
import time

start_time = time.clock()
arr_lock = Lock()
arr = range(5000)

def do_print():
    # Disable arr access to other threads; they will have to wait if they need to read
    a = 0
    while True:
        arr_lock.acquire()
        if len(arr) > 0:
            item = arr.pop(0)
            print item
            arr_lock.release()
            b = 0
            for a in range(30000):
                b = b + 1
        else:
            arr_lock.release()
            break

thread1 = Thread(target=do_print)
thread1.start()
thread1.join()
print time.clock() - start_time, "seconds"
When running 2 threads my code's run time increased. Does anyone know why this happened, or perhaps know a different way to increase the performance of my code?
The primary reason you aren't seeing any performance improvements with multiple threads is because your program only enables one thread to do anything useful at a time. The other thread is always blocked.
Two things:
Remove the print statement that's invoked inside the lock. print statements drastically impact performance and timing. Also, the I/O channel to stdout is essentially single-threaded, so you've built another implicit lock into your code. So let's just remove the print statement.
Use a proper sleep technique instead of "spin locking" and counting up from 0 to 30000. That's just going to burn a core needlessly.
Try this as your main loop
while True:
    arr_lock.acquire()
    if len(arr) > 0:
        item = arr.pop(0)
        arr_lock.release()
        time.sleep(0)
    else:
        arr_lock.release()
        break
This should run slightly better... I would even advocate removing the sleep statement altogether, so each thread just gets a full quantum.
However, because each thread is either doing "nothing" (sleeping or blocked on acquire) or just doing a single pop call on the array while holding the lock, the majority of the time is spent in the acquire/release calls rather than in actually operating on the array. Hence, multiple threads aren't going to make your program run faster.
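If you do want to trim the lock-management overhead itself, Queue.Queue does its locking internally, so the hand-rolled acquire/release pair disappears from the worker loop. A rough sketch (note that the GIL still serializes the bytecode, so this reduces overhead rather than buying real parallelism):
from Queue import Queue, Empty
from threading import Thread

q = Queue()
for i in range(5000):
    q.put(i)

def drain():
    while True:
        try:
            item = q.get_nowait()   # Queue does its own locking
        except Empty:
            break                   # queue exhausted; stop this thread

threads = [Thread(target=drain) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()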

Time of execution of a command while is running

I have a command on one line (Fit.perform() from import xspec, but never mind, because the question is general and applies to other Python commands as well) that takes a while to finish.
I simply want to know the execution time while the command is still running, i.e. before it has finished executing.
This is necessary if I want to stop the command during its execution, for example because it is taking too much time to finish.
So, I need something like this:
if **you_are_taking_so_much_time**:
    do_something_else
It is not possible to use tools like time or timeit, because they report the time only after a command has finished executing, not while it is running.
Is it possible?
I'm using python 2.7 on MacOS.
You will have to use a monitor thread:
import threading
import time

done = False

def longfun():
    global done
    print("This will take some time.")
    time.sleep(60)
    done = True

def monitor():
    timeout = 10
    print("Wait until timeout.")
    while not done and timeout > 0:
        time.sleep(1)
        timeout -= 1

lt = threading.Thread(target=longfun)
lt.start()
mt = threading.Thread(target=monitor)
mt.start()
mt.join()
if not done:
    print("Long thread not done yet. Do something else.")
lt.join()
Note that this waits until the 'long' thread is finished. You do not mention wanting to stop the long-running operation; if you do, you will have to implement that inside the thread itself, including start/stop/progress functionality (usually via a while loop that checks a running bit to decide whether to continue, as in the sketch below).
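A minimal sketch of that running-bit pattern (assumptions: the one-second sleep stands in for one unit of the long operation's work, and 60 units means the work is done):
import threading
import time

class StoppableWorker(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.running = True        # the "running bit"
        self.progress = 0          # crude progress indicator

    def run(self):
        while self.running and self.progress < 60:
            time.sleep(1)          # one unit of the long operation's work
            self.progress += 1

    def stop(self):
        self.running = False       # ask the loop to exit after this iteration

w = StoppableWorker()
w.start()
w.join(10)                         # give it 10 seconds
if w.is_alive():
    w.stop()
    w.join()
    print "Stopped early at progress", w.progress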
Like this:
import time, thread

def test_me(hargs):
    func, args, timeout = hargs
    start_time = time.time()
    thread.start_new_thread(func, args)
    while True:
        if My_expected_value:  # where to store this?
            print "well done!"
            break
        elif time.time() > (timeout + start_time):
            print "oh! too late, sorry!"
            break
        time.sleep(timeout / 100.0)  # float division so the sleep isn't truncated to 0

thread.start_new_thread(test_me, ((func, args, timeout),))
Important warning: you need to use threads so the application does not freeze. That means three threads in total: 1) the main app, 2) test_me, and 3) your function (func).
Don't forget to add an external variable that your function checks, so you can kill your function's thread.

Timeout function if it takes too long

Forgive me, I am a newbie. I've surveyed some solutions, but they are hard for me to understand and modify. (Or maybe there is no solution in line with my imagination?) I hope it can work on both Ubuntu and Win7.
There is an example like this.
import random, time

def example():
    while random.randint(0, 10) != 1:
        time.sleep(1)
    print "down"

example()
And my imagination is...
If example() runs for over 10 s, then rerun example() again. (And maybe there is a place where I can add other code; for example, I want to record the timeout event to a TXT file, and I could put that code there.)
Else, do nothing.
Is it possible to do that?
You can run a watchdog in a separate thread that interrupts the main thread (which runs example) when it exceeds the time limit. Here is a possible implementation, with the timeout lowered to 3 s for ease of debugging:
import time, threading, thread

def watchdog_timer(state):
    time.sleep(3)
    if not state['completed']:
        thread.interrupt_main()

def run_example():
    while True:
        state = {'completed': False}
        watchdog = threading.Thread(target=watchdog_timer, args=(state,))
        watchdog.daemon = True
        watchdog.start()
        try:
            example()
            state['completed'] = True
        except KeyboardInterrupt:
            # this would be the place to log the timeout event
            pass
        else:
            break
I'm not sure I fully understood what you want to achieve, but since you're constantly looping and only have one short, predictable blocking command, you could simply store the time when the loop started and compare it to the current time once per iteration. If the difference exceeds your limit, do whatever you want:
import random, time

time_limit = 10

def example():
    time_start = time.time()  # store current time (seconds since 1970)
    while random.randint(0, 10) != 1:
        time.sleep(1)
        if time.time() >= time_start + time_limit:  # compare with current time
            print "canceled!"
            break  # break out of the while loop
    print "down"

example()

Schedule Tasks at Fixed Rate with Python Multiprocessing

I would like to run a function asynchronously in Python, calling the function repeatedly at a fixed time interval. This Java class has functionality similar to what I want. I was hoping for something in Python like:
pool = multiprocessing.Pool()
pool.schedule(func, args, period)
# other code to do while that runs in the background
pool.close()
pool.join()
Are there any packages which provide similar functionality? I would prefer something simple and lightweight.
How could I implement this functionality in python?
This post is similar, but asks for an in process solution. I want a multiprocess async solution.
Here is one possible solution. One caveat: func needs to return faster than the period, otherwise it won't be called as frequently as the period, and if it later speeds back up it will be scheduled faster than the period while it catches up. This approach seems like a lot of work, but then again parallel programming is often tough. I would appreciate a second look at the code to make sure I don't have a deadlock lurking somewhere.
import multiprocessing, time, math

def func():
    print('hello its now {}'.format(time.time()))

def wrapper(f, period, event):
    last = time.time() - period
    while True:
        now = time.time()
        # returns True if event is set, otherwise False after timeout
        if event.wait(timeout=(last + period - now)):
            break
        else:
            f()
            last += period

def main():
    period = 2
    # event is the poison pill, setting it breaks the infinite loop in wrapper
    event = multiprocessing.Event()
    process = multiprocessing.Process(target=wrapper, args=(func, period, event))
    process.start()
    # burn some cpu cycles, takes about 20 seconds on my machine
    x = 7
    for i in range(50000000):
        x = math.sqrt(x**2)
    event.set()
    process.join()
    print('x is {} by the way'.format(x))

if __name__ == '__main__':
    main()
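As for the catch-up caveat mentioned above: if you would rather skip missed ticks than fire a burst of calls, here is a hedged variant of wrapper (wrapper_skipping is my name, not part of the original) that advances last past any periods that have already elapsed:
import math, time

def wrapper_skipping(f, period, event):
    """Like wrapper, but drops missed ticks instead of bursting to catch up."""
    last = time.time()
    while True:
        now = time.time()
        missed = max(0, int(math.floor((now - last) / period)))
        last += missed * period                  # skip ticks that already elapsed
        if event.wait(timeout=(last + period - now)):
            break                                # poison pill: event was set
        f()
        last += period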

Why doesn't eventlet GreenPool call func after spawn_n unless waitall()?

This code prints nothing:
import eventlet

def foo(i):
    print i

def main():
    pool = eventlet.GreenPool(size=100)
    for i in xrange(100):
        pool.spawn_n(foo, i)
    while True:
        pass

main()
But this code prints numbers:
import eventlet

def foo(i):
    print i

def main():
    pool = eventlet.GreenPool(size=100)
    for i in xrange(100):
        pool.spawn_n(foo, i)
    pool.waitall()
    while True:
        pass

main()
The only difference is pool.waitall(). To my mind, waitall() means waiting until all green threads in the pool have finished working, but the infinite loop is already waiting for every green thread, so pool.waitall() should not be necessary.
So why does this happen?
Reference: http://eventlet.net/doc/modules/greenpool.html#eventlet.greenpool.GreenPool.waitall
The threads created in an eventlet GreenPool are green threads. This means that they all exist within one thread at the operating-system level, and the Python interpreter handles switching between them. This switching can only happen when one thread either yields (deliberately provides an opportunity for other threads to run) or is waiting for I/O.
When your code runs:
while True:
    pass
… that thread of execution is blocked – stuck on that code – and no other green threads can get scheduled.
When you instead run:
pool.waitall()
… eventlet makes sure that it yields while waiting.
You could emulate this same behaviour by modifying your while loop slightly to call the eventlet.sleep function, which yields:
while True:
    eventlet.sleep()
This could be useful if you wanted to do something else in the while True: loop while waiting for the threads in your pool to complete. Otherwise, just use pool.waitall() – that’s what it’s for.
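For example, a hedged sketch of that mixed approach (do_other_work is a placeholder; running() and waiting() are GreenPool methods reporting how many green threads are active or queued):
while pool.running() or pool.waiting():
    do_other_work()      # placeholder for whatever else you need to do
    eventlet.sleep(0)    # yield so the pool's green threads can make progress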
