I'm trying to execute a Python function with a timeout. I've found some good ideas on Stack Overflow, but they don't seem to work for me because I'm executing the Python function from JavaScript (using Brython), where multithreading and sleeping don't work well (there is no sleep function in JavaScript). Is there any relatively easy-to-implement idea that would allow me to terminate a function if its execution takes more than 10 s? See the logic below (simplified pseudocode):
def function_to_execute():
    print("function executing")

time_out = 10
time_function_started = time()
exec(function_to_execute)
if time() > (time_function_started + time_out) and function_to_execute not complete:  # simplified for clarity
    function_to_execute.terminate()
Thanks,
The solution I know uses two thread workers and kills one of them: one worker runs the function, the other tracks the running time and aborts the first when the limit is reached.
I think you can use python-worker (link)
import time
from worker import worker

@worker
def run_with_limit(worker_object, timeout):
    time.sleep(timeout)
    worker_object.abort()

@worker
def my_controlled_function(a, b, c):
    ...

## then you can run it
run_with_limit(my_controlled_function(1, 2, 3), timeout=10)
If you don't want to use time.sleep, an alternative is:
@worker
def run_with_limit(worker_object, timeout):
    while True:
        if worker_object.work_time >= timeout:
            worker_object.abort()
            break
This seems to be a particularly confusing topic, judging by the similar answers I found on SO. I have code similar to the following:
def parentFunction():
    # Other code
    while True:
        var1, var2 = anotherFunction1()  # Getting client details after listening on open port
        threading.Thread(target=anotherFunction2, args=(var1, var2)).start()
        childFunction(var1, var2)
        print("PRINT #1: Running in Parent Function")  # This only prints once for some reason

def childFunction(var1, var2):
    threading.Timer(10, childFunction, args=(var1, var2)).start()
    print("PRINT #2: Running in child function")  # Prints every 10 seconds
    # Other code
    if someConditionIsMet:
        print("PRINT #3: Exiting")
        end_process_and_exit_here()
So basically, when I ran parentFunction(), I would go into a never-ending loop where, every 10 seconds, my console would print "PRINT #2: Running in child function". When someConditionIsMet was true, my console would print "PRINT #3: Exiting", but then it wouldn't exit; the loop would carry on forever. I am not sure if it's relevant, but parts of the code hold a threading.Lock as well.
Where I have written end_process_and_exit_here() above, I tried several methods of killing a thread, such as:
Raising exceptions and setting flags - these assume that I have started my thread outside of my loop, so they're not comparable.
Even this question about looping threads assumes the thread isn't being looped.
Killing using join or stop - stop() was not an option I could access. join() was available, but it didn't work, i.e. after it was called, the next thread (PRINT #2) continued printing.
Other answers suggesting the use of signals (1) (2) also didn't work.
Using sys.exit() or break in different parts of my code also did not result in the threads stopping.
Is there any method for me to easily exit from such a looping thread?
Note: I need to use threading and not multiprocessing.
You could use python-worker: simply add @worker above your function.
pip install python-worker
from worker import worker

@worker
def anotherFunction2(var1, var2):
    # your code here
    pass

@worker
def parentFunction():
    # Other code
    while True:
        var1, var2 = anotherFunction1()  # Getting client details after listening on open port
        function2Worker = anotherFunction2(var1, var2)  # this automatically runs as a thread since you put @worker above the function
        childFunction(var1, var2)
        print("PRINT #1: Running in Parent Function")

def childFunction(var1, var2):
    parentWorker = parentFunction()  # parentFunction takes no arguments
    # Other code
    if someConditionIsMet:
        parentWorker.abort()
So as an update, I have managed to resolve this issue. The problem with my other answer (shown below) is that .cancel() by itself only seemed to work for one timer thread. But as can be seen in the problem, childFunction() calls itself and can also be called by parentFunction(), meaning that there may be multiple timer threads.
What worked for my specific case was naming my threads as below:
t1 = threading.Timer(10, childFunction, args=(var1,var2,number))
t1.name = t1.name + "_timer" + str(number)
t1.start()
Thereafter, I could cancel all timer threads that were created from this process by:
for timerthread in threading.enumerate():
    if timerthread.name.endswith('timer' + str(number)):
        timerthread.cancel()
Below is the ORIGINAL METHOD I USED WHICH CAUSED MANY ISSUES:
I'm not certain if this is a bad practice (in fact I feel it may be based on the answers linked in the question saying that we should never 'kill a thread'). I'm sure there are reasons why this is not good and I'd appreciate anyone telling me why. However, the solution that ultimately worked for me was to use .cancel().
So first change would be to assign your thread Timer to a variable instead of calling it directly. So instead of threading.Timer(10, childFunction, args=(var1,var2)).start(), it should be
t = threading.Timer(10, childFunction, args=(var1,var2))
t.start()
Following that, instead of end_process_and_exit_here(), you should use t.cancel(). This seems to work and stops all threads mid-process. However, the bad thing is that it doesn't seem to carry on with other parts of the program.
I have a function that is used by multiple threads. Because of its nature, this function should only ever be called once at a time; multiple threads calling it at the same time could be bad.
If the function is in use by a thread, other threads should have to wait for it to be free.
My background isn't coding, so I'm not sure, but I believe this is called "locking" in the jargon? I tried googling it but did not find a simple example for Python 3.
A simplified case:
import threading

def critical_function():
    # How do I "lock" this function?
    print('critical operation that should only be run once at a time')

def threaded_function():
    while True:
        # doing stuff and then
        critical_function()

for i in range(0, 10):
    threading.Thread(target=threaded_function).start()
from threading import Lock

critical_function_lock = Lock()

def critical_function():
    with critical_function_lock:
        # Only one thread can hold the lock at a time, so the body below
        # runs in at most one thread at any given moment.
        print('critical operation that should only be run once at a time')
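For illustration, here's a minimal runnable sketch combining the lock with the question's threads (the sleep duration and thread count are arbitrary demo choices):

import threading
import time

critical_function_lock = threading.Lock()

def critical_function():
    with critical_function_lock:
        # If two threads arrive here at once, the second blocks until
        # the first releases the lock, so the work never interleaves.
        print('critical operation by', threading.current_thread().name)
        time.sleep(0.1)  # simulate work inside the critical section

def threaded_function():
    for _ in range(3):
        critical_function()

threads = [threading.Thread(target=threaded_function) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()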
I looked online and found some SO discussions and ActiveState recipes for running code with a timeout. It looks like there are some common approaches:
Use a thread that runs the code, and join it with a timeout. If the timeout elapses, kill the thread. This is not directly supported in Python (the recipes use the private _Thread__stop function), so it is bad practice.
Use signal.SIGALRM - but this approach does not work on Windows!
Use a subprocess with a timeout - but this is too heavy - what if I want to start an interruptible task often? I don't want to fire a process for each one!
So, what is the right way? I'm not asking about workarounds (e.g. use Twisted and async IO), but an actual way to solve the actual problem - I have some function and I want to run it only with some timeout. If the timeout elapses, I want control back. And I want it to work on Linux and Windows.
A completely general solution to this really, honestly does not exist. You have to use the right solution for a given domain.
If you want timeouts for code you fully control, you have to write it to cooperate. Such code has to be able to break up into little chunks in some way, as in an event-driven system. You can also do this by threading if you can ensure nothing will hold a lock too long, but handling locks right is actually pretty hard.
If you want timeouts because you're afraid code is out of control (for example, if you're afraid the user will ask your calculator to compute 9**(9**9)), you need to run it in another process. This is the only easy way to sufficiently isolate it. Running it in your event system or even a different thread will not be enough. It is also possible to break things up into little chunks similar to the other solution, but requires very careful handling and usually isn't worth it; in any event, that doesn't allow you to do the same exact thing as just running the Python code.
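To make that concrete, here's a minimal sketch of the process-isolation approach (the function name and 10-second limit are illustrative, not from the question):

import multiprocessing
import time

def runaway_work():
    time.sleep(60)  # stands in for code that may never return

if __name__ == '__main__':
    p = multiprocessing.Process(target=runaway_work)
    p.start()
    p.join(10)  # give it at most 10 seconds
    if p.is_alive():
        p.terminate()  # hard-kill the runaway process
        p.join()
        print('timed out and terminated')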
What you might be looking for is the multiprocessing module. If subprocess is too heavy, then this may not suit your needs either.
import time
import multiprocessing

def do_this_other_thing_that_may_take_too_long(duration):
    time.sleep(duration)
    return 'done after sleeping {0} seconds.'.format(duration)

pool = multiprocessing.Pool(1)
print('starting....')
res = pool.apply_async(do_this_other_thing_that_may_take_too_long, [8])
for timeout in range(1, 10):
    try:
        print('{0}: {1}'.format(timeout, res.get(timeout)))
        break  # result arrived, stop polling
    except multiprocessing.TimeoutError:
        print('{0}: timed out'.format(timeout))
print('end')
If it's network related you could try:
import socket
socket.setdefaulttimeout(number)
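For example, a small sketch (the unroutable address is just a trick to force connect() to hang; on some systems it may fail with a different OSError instead):

import socket

socket.setdefaulttimeout(5)  # every new socket now times out after 5 seconds

try:
    # 10.255.255.1 is typically unroutable, so the connect attempt hangs
    socket.create_connection(('10.255.255.1', 80))
except socket.timeout:
    print('network operation timed out')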
I found this in the eventlet library:
http://eventlet.net/doc/modules/timeout.html
from eventlet.timeout import Timeout

timeout = Timeout(seconds, exception)
try:
    ...  # execution here is limited by timeout
finally:
    timeout.cancel()
For "normal" Python code, that doesn't linger prolongued times in C extensions or I/O waits, you can achieve your goal by setting a trace function with sys.settrace() that aborts the running code when the timeout is reached.
Whether that is sufficient or not depends on how co-operating or malicious the code you run is. If it's well-behaved, a tracing function is sufficient.
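A minimal sketch of that idea (the helper name and the use of threading.Timer to flag the timeout are my own choices, not from the answer):

import sys
import threading

def run_with_trace_timeout(func, timeout):
    timed_out = False

    def flag_timeout():
        nonlocal timed_out
        timed_out = True

    def tracer(frame, event, arg):
        # Invoked on call/line events in the traced code; raising here
        # aborts the running function.
        if timed_out:
            raise TimeoutError('timed out')
        return tracer

    timer = threading.Timer(timeout, flag_timeout)
    timer.start()
    sys.settrace(tracer)
    try:
        return func()
    finally:
        sys.settrace(None)
        timer.cancel()

def busy():
    while True:
        pass

try:
    run_with_trace_timeout(busy, 2)
except TimeoutError:
    print('aborted by trace function')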
Another way is to use faulthandler:
import time
import faulthandler

faulthandler.enable()
try:
    faulthandler.dump_traceback_later(3)
    time.sleep(10)
finally:
    faulthandler.cancel_dump_traceback_later()
N.B.: The faulthandler module has been part of the stdlib since Python 3.3.
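Note that dump_traceback_later() by default only dumps the tracebacks; the stdlib version also accepts exit=True, which hard-kills the process when the timeout fires. A small sketch:

import faulthandler
import time

# After 3 seconds, dump every thread's traceback and exit the whole
# process without any cleanup - use this only as a last-resort watchdog.
faulthandler.dump_traceback_later(3, exit=True)
time.sleep(10)  # never completes: the watchdog fires first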
If you're running code that you expect to die after a set time, then you should write it properly so that there aren't any negative effects on shutdown, no matter whether it's a thread or a subprocess. A command pattern with undo would be useful here.
So it really depends on what the thread is doing when you kill it. If it's just crunching numbers, who cares if you kill it? If it's interacting with the filesystem and you kill it, then maybe you should really rethink your strategy.
What is supported in Python when it comes to threads? Daemon threads and joins. Why does Python let the main thread exit if a daemon thread you've joined is still active? Because it's understood that someone using daemon threads will (hopefully) write the code in a way that it won't matter when that thread dies. Giving a timeout to a join and then letting main die, thus taking any daemon threads with it, is perfectly acceptable in this context.
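A minimal sketch of that daemon-plus-join pattern (the function and timings are illustrative):

import threading
import time

def crunch_numbers():
    while True:
        time.sleep(0.1)  # stand-in for endless number crunching

t = threading.Thread(target=crunch_numbers, daemon=True)
t.start()
t.join(5)  # wait at most 5 seconds for the work to finish
print('giving up after 5 seconds')
# Main falls off the end here; since the worker is a daemon thread,
# the interpreter exits and takes it along.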
I've solved it this way:
For me it worked great (on Windows, and not heavy at all). I hope it is useful for someone.
import threading
import time

class LongFunctionInside(object):
    lock_state = threading.Lock()
    working = False

    def long_function(self, timeout):
        self.working = True
        timeout_work = threading.Thread(name="thread_name", target=self.work_time, args=(timeout,))
        timeout_work.daemon = True  # won't block interpreter exit
        timeout_work.start()
        while True:  # endless/long work
            time.sleep(0.1)  # at this rate the CPU is almost not used
            if not self.working:  # while working == True, keep working
                break
        self.set_state(True)

    def work_time(self, sleep_time):
        # Thread function that just sleeps for the specified time; on
        # wake-up, if the long function is still working, it sets the
        # lock-protected flag to False.
        time.sleep(sleep_time)
        if self.working:
            self.set_state(False)

    def set_state(self, state):  # lock-protected state change
        with self.lock_state:
            self.working = state

lw = LongFunctionInside()
lw.long_function(10)
The main idea is to create a thread that sleeps in parallel to the "long work" and, on wake-up (after the timeout), changes the lock-protected state variable; the long function checks that variable during its work.
I'm pretty new to Python programming, so if this solution has fundamental errors, like resource, timing, or deadlock problems, please respond.
Solving it with the 'with' construct, merging the solution from Timeout function if it takes too long to finish with this thread, which works better:
import threading, time

class Exception_TIMEOUT(Exception):
    pass

class linwintimeout:

    def __init__(self, f, seconds=1.0, error_message='Timeout'):
        self.seconds = seconds
        self.thread = threading.Thread(target=f)
        self.thread.daemon = True
        self.error_message = error_message

    def handle_timeout(self):
        raise Exception_TIMEOUT(self.error_message)

    def __enter__(self):
        self.thread.start()
        self.thread.join(self.seconds)

    def __exit__(self, type, value, traceback):
        if self.thread.is_alive():
            self.handle_timeout()

def function():
    while True:
        print("keep printing ...")
        time.sleep(1)

try:
    with linwintimeout(function, seconds=5.0, error_message='exceeded timeout of %s seconds' % 5.0):
        pass
except Exception_TIMEOUT as e:
    print(" attention !! exceeded timeout, giving up ... %s" % e)
I have two functions, draw_ascii_spinner and findCluster(companyid).
I would like to:
Run findCluster(companyid) in the background and, while it's processing...
Run draw_ascii_spinner until findCluster(companyid) finishes.
How do I begin to solve this (Python 2.7)?
Use threads:
import threading, time

def wrapper(func, args, res):
    res.append(func(*args))

res = []
t = threading.Thread(target=wrapper, args=(findCluster, (companyid,), res))
t.start()
while t.is_alive():
    # print next iteration of ASCII spinner
    t.join(0.2)
print res[0]
You can use multiprocessing. Or, if findCluster(companyid) has sensible stopping points, you can turn it into a generator, along with draw_ascii_spinner, to do something like this:
for tick in findCluster(companyid):
    ascii_spinner.next()
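A sketch of what that could look like, runnable on both Python 2 and 3 (the chunked findCluster and the itertools spinner are illustrative assumptions):

import itertools

def findCluster(companyid):
    for step in range(100):  # stand-in for chunks of the real work
        # ... do one chunk of the computation here ...
        yield step  # a "sensible stopping point"

ascii_spinner = itertools.cycle(['|', '/', '-', '\\'])

for tick in findCluster(42):  # 42 is a dummy companyid
    print(next(ascii_spinner))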
Generally, you will use threads. Here is a simplistic approach which assumes that there are only two threads: 1) the main thread executing a task, 2) the spinner thread:
#!/usr/bin/env python
import time
import thread

def spinner():
    while True:
        print '.'
        time.sleep(1)

def task():
    time.sleep(5)

if __name__ == '__main__':
    thread.start_new_thread(spinner, ())
    # as soon as task finishes (and so the program),
    # spinner will be gone as well
    task()
This can be done with threads. findCluster runs in a separate thread and, when done, it can simply signal another thread that is polling for a reply.
You'll want to do some research on threading; the general form is going to be this (see the sketch after the link below):
Create a new thread for findCluster and create some way for the program to know the method is running - simplest in Python is just a global boolean.
Run draw_ascii_spinner in a while loop conditioned on whether it is still running; you'll probably want to have this thread sleep for a short period of time between iterations.
Here's a short tutorial in Python - http://linuxgazette.net/107/pai.html
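A minimal sketch of that general form (the flag handling, dummy findCluster, and sleep interval are my own choices):

import threading
import time

def findCluster(companyid):
    time.sleep(5)  # stand-in for the real long-running work

finished = False  # the "global boolean" the spinner checks

def run_findCluster(companyid):
    global finished
    findCluster(companyid)
    finished = True

worker = threading.Thread(target=run_findCluster, args=(42,))  # 42 is a dummy companyid
worker.start()
while not finished:
    print('spinning...')  # one frame of the ASCII spinner
    time.sleep(0.2)
worker.join()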
Run findCluster() in a thread (the threading module makes this very easy), and then draw_ascii_spinner until some condition is met.
Instead of using sleep() to set the pace of the spinner, you can wait on the thread's join() with a timeout.
Is it possible to have a working example? I am new to Python. I have 6 tasks to run in one Python program. These 6 tasks should work in coordination, meaning that one should start when another finishes. I saw the answers, but I couldn't adapt the code you shared to my program.
I used time.sleep, but I know that it is not good because I cannot know how much time each task takes.
# Sending commands
for i in range(0, len(cmdList)):  # port: sending commands
    cmd = cmdList[i]
    cmdFull = convert(cmd)
    port.write(cmd.encode('ascii'))
    # s = port.read(10)
    print(cmd)

# Terminate the command + close serial port
port.write(cmdFull.encode('ascii'))
print('Termination')
port.close()
# time.sleep(1*60)
Is it possible to schedule an event in python without multithreading?
I am trying to obtain something like scheduling a function to execute every x seconds.
Maybe sched?
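A minimal sketch of what that looks like (re-entering the handler from inside itself is one common way to repeat; the 5-second interval is arbitrary):

import sched
import time

scheduler = sched.scheduler(time.time, time.sleep)

def tick():
    print('I am done this time')
    scheduler.enter(5, 1, tick)  # re-schedule ourselves in 5 seconds

scheduler.enter(5, 1, tick)
scheduler.run()  # blocks in this thread, running events as they come due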
You could use a combination of signal.alarm and a signal handler for SIGALRM, like so, to repeat the function every 5 seconds. (Note that signal.alarm is only available on Unix.)
import signal

def handler(sig, frame):
    print("I am done this time")
    signal.alarm(5)  # Schedule this to happen again.

signal.signal(signal.SIGALRM, handler)
signal.alarm(5)
while True:
    signal.pause()  # keep the process alive, waiting for signals
The other option is to use the sched module that comes along with Python but I don't know whether it uses threads or not.
Sched is probably the way to go for this, as @eumiro points out. However, if you don't want to do that, then you could do this:
import time

while 1:
    # call your event
    time.sleep(x)  # wait for x seconds before calling the script again
Without threading, it seldom makes sense to periodically call a function, because while your main thread is blocked waiting it simply does nothing.
However if you really want to do so:
import time

for x in range(3):
    print('Loop start')
    time.sleep(2)
    print('Calling some function...')
Is this what you really want?
You could use celery:
Celery is an open source asynchronous task queue/job queue based on distributed message passing. It is focused on real-time operation, but supports scheduling as well.
The execution units, called tasks, are executed concurrently on one or more worker nodes. Tasks can execute asynchronously (in the background) or synchronously (wait until ready).
and a code example:
You probably want to see some code by now, so here's an example task adding two numbers:
from celery.decorators import task

@task
def add(x, y):
    return x + y
You can execute the task in the background, or wait for it to finish:
>>> result = add.delay(4, 4)
>>> result.wait()  # wait for and return the result
8
This is of more general use than the problem you describe requires, though.