I have a function that should not take too long to run, and I would like to set a timeout limit on it. I can find one proposed solution on the internet; see the following SO post: Timeout on a function call.
That solution uses signals, which are not available on Windows. There is a similar use of signals for making a ticker, which has a Windows port, as explained in this SO post: python: windows equivalent of SIGALRM. That is not an answer to the timeout question directly, but it could be adapted to work for timeouts; it is written for Python 2.7, though.
Since those answers are roughly 10 years old, my question is: is there a more modern Python (e.g. Python 3.7) way to create a context manager/decorator/similar wrapper to make a "normal function" into a timeout-limited function on a Windows system?
Here's a way to convert @Praveenkumar's answer into an easy-to-use decorator, which it seems is what you'd now like:
import time
import concurrent.futures as futures

def timeout(timelimit):
    def decorator(func):
        def decorated(*args, **kwargs):
            with futures.ThreadPoolExecutor(max_workers=1) as executor:
                future = executor.submit(func, *args, **kwargs)
                try:
                    result = future.result(timelimit)
                except futures.TimeoutError:
                    print('Timed out!')
                    # Hack: purge the executor's private thread bookkeeping so
                    # that leaving the `with` block doesn't join (and block on)
                    # the still-running worker thread.
                    executor._threads.clear()
                    futures.thread._threads_queues.clear()
                    raise TimeoutError from None
                else:
                    print(result)
                    return result
        return decorated
    return decorator

@timeout(3)
def test(n):
    print(f'Sleeping for {n} seconds')
    time.sleep(n)
    # Real code here.
    return 'Done'

test(2)  # OK
test(5)  # -> Causes a timeout.
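A caveat on those two clear() calls: they reach into undocumented executor internals and may break between Python versions. On Python 3.9+ you could sketch the same decorator using only the public shutdown(wait=False, cancel_futures=True) API; note that neither variant can actually stop the worker thread once your function is running, they only stop waiting for it:
import concurrent.futures as futures

def timeout(timelimit):
    def decorator(func):
        def decorated(*args, **kwargs):
            # One executor per call, so shutting it down can't affect other work.
            executor = futures.ThreadPoolExecutor(max_workers=1)
            try:
                return executor.submit(func, *args, **kwargs).result(timelimit)
            except futures.TimeoutError:
                raise TimeoutError from None
            finally:
                # Python 3.9+: return immediately instead of joining the worker;
                # cancel_futures only cancels work that hasn't started yet.
                executor.shutdown(wait=False, cancel_futures=True)
        return decorated
    return decorator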
I'm trying to execute a Python function with a timeout. I've found some good ideas on Stack Overflow, but they don't seem to work for me because I'm executing the Python function from JavaScript (using Brython), and multithreading/sleep don't work well (there is no sleep function in JavaScript). Any idea that is relatively easy to implement and would allow me to terminate a function if its execution takes more than 10 s (see the logic below)?
def function_to_execute():
    print("function executing")

time_out = 10
exec(function_to_execute)
time_function_started = time()
if time() > (time_function_started + time_out) and function_to_execute not complete:  # (simplified for clarity)
    function_to_execute.terminate()
Thanks,
The solution I know uses two workers: one to run the function, and another to track the running time and kill the first.
I think you can use python-worker (link)
import time
from worker import worker

@worker
def run_with_limit(worker_object, timeout):
    time.sleep(timeout)
    worker_object.abort()

@worker
def my_controlled_function(a, b, c):
    ...

## then you can run it
run_with_limit(my_controlled_function(1, 2, 3), timeout=10)
If you don't want to use time.sleep, there is an alternative like this:
@worker
def run_with_limit(worker_object, timeout):
    while 1:
        if worker_object.work_time >= timeout:  # compare against the argument, not a hard-coded 10
            worker_object.abort()
            break
Specification of the problem:
I'm searching through a really large number of lines in a log file and assigning those lines to groups according to regular expressions (regexes) I have stored, using the re.match() function. Unfortunately, some of my regexes are too complicated and Python sometimes ends up in backtracking hell. For this reason, I need to protect it with some kind of timeout.
Problems:
The re.match function I'm using is part of Python's own library and, as I found out somewhere here on StackOverflow (I'm really sorry, I cannot find the link now), it is very difficult to interrupt a thread that is busy inside a library call. For this reason, threads are out of the game.
Because a single re.match call takes relatively little time, and I want to analyse a huge number of lines with it, I need a timeout function that doesn't take long to execute itself (this makes threads even less suitable, as it takes a really long time to initialise a new thread) and can be set to less than one second.
For those reasons, the answers here (Timeout on a function call) and here (Timeout function if it takes too long to finish, a decorator based on alarm, which handles 1 second and more) are off the table.
I spent this morning searching for a solution to this question, but I did not find any satisfactory answer.
Solution:
I've just modified a script posted here: Timeout function if it takes too long to finish.
And here is the code:
from functools import wraps
import errno
import os
import signal

class TimeoutError(Exception):
    pass

def timeout(seconds=10, error_message=os.strerror(errno.ETIME)):
    def decorator(func):
        def _handle_timeout(signum, frame):
            raise TimeoutError(error_message)

        def wrapper(*args, **kwargs):
            signal.signal(signal.SIGALRM, _handle_timeout)
            # Used an interval timer instead of alarm(): it accepts fractions of a second.
            signal.setitimer(signal.ITIMER_REAL, seconds)
            try:
                result = func(*args, **kwargs)
            finally:
                signal.setitimer(signal.ITIMER_REAL, 0)  # cancel the timer
            return result

        return wraps(func)(wrapper)
    return decorator
And then you can use it like this:
from timeout import timeout
from time import time

@timeout(0.01)
def loop():
    while True:
        pass

try:
    begin = time()
    loop()
except TimeoutError as e:
    print("Time elapsed: {:.3f}s".format(time() - begin))
Which prints
Time elapsed: 0.010s
I am confused about how to play around with the asyncio module in Python 3.4. I have a searching API for a search engine, and I want each search request to run in parallel, or asynchronously, so that I don't have to wait for one search to finish before starting another.
Here is my high-level searching API to build some objects with the raw search results. The search engine itself is using some kind of asyncio mechanism, so I won't bother with that.
# No asyncio module used here now
class search(object):
    ...
    self.s = some_search_engine()
    ...
    def searching(self, *args, **kwargs):
        ret = {}
        # do some raw searching according to args and kwargs and build the wrapped results
        ...
        return ret
To try to make the requests asynchronous, I wrote the following test case to see how my stuff can interact with the asyncio module.
# Here is my testing script
@asyncio.coroutine
def handle(f, *args, **kwargs):
    r = yield from f(*args, **kwargs)
    return r

s = search()
loop = asyncio.get_event_loop()
loop.run_until_complete(handle(s.searching, arg1, arg2, ...))
loop.close()
Running it with pytest raises RuntimeError: Task got bad yield: {results from searching...} when it hits the line r = yield from ....
I also tried another way.
# same handle as above
def handle(..):
    ....

s = search()
loop = asyncio.get_event_loop()
tasks = [
    asyncio.async(handle(s.searching, arg11, arg12, ...)),
    asyncio.async(handle(s.searching, arg21, arg22, ...)),
    ...
]
loop.run_until_complete(asyncio.wait(tasks))
loop.close()
Running this test case with pytest passes, but some weird exception from the search engine is raised, and it says Future/Task exception was never retrieved.
Things I wish to ask:
1. For my 1st try, is that the right way to use yield from, by returning the actual result from a function call?
2. I think I need to add some sleep to my 2nd test case to wait for the tasks to finish, but how should I do that? And how can I get the return values of my function calls in my 2nd test case?
3. Is this a good way to implement asyncio with an existing module, by creating an async handler to handle requests?
4. If the answer to question 2 is NO, does every client call to the class search need to include loop = get_event_loop() and similar boilerplate to make the requests asynchronous?
The problem is that you can't just call existing synchronous code as if it was an asyncio.coroutine and get asynchronous behavior. When you call yield from searching(...), you're only going to get asynchronous behavior if searching itself is actually an asyncio.coroutine, or at least returns an asyncio.Future. Right now, searching is just a regular synchronous function, so calling yield from searching(...) is just going to throw an error, because it doesn't return a Future or coroutine.
To get the behavior you want, you'll need to have an asynchronous version of searching in addition to a synchronous version (or just drop the synchronous version altogether if you don't need it). You have a few options to support both:
Rewrite searching as an asyncio.coroutine so that it uses asyncio-compatible calls to do its I/O, rather than blocking I/O. This will make it work in an asyncio context, but it means you won't be able to call it directly in a synchronous context anymore. Instead, you'd also need to provide an alternative synchronous searching method that starts an asyncio event loop and calls return loop.run_until_complete(self.searching(...)). See this question for more details on that.
Keep your synchronous implementation of searching, and provide an alternative asynchronous API that uses BaseEventLoop.run_in_executor to run the searching method in a background thread:
class search(object):
    ...
    self.s = some_search_engine()
    ...
    def searching(self, *args, **kwargs):
        ret = {}
        ...
        return ret

    @asyncio.coroutine
    def searching_async(self, *args, **kwargs):
        loop = kwargs.get('loop', asyncio.get_event_loop())
        try:
            del kwargs['loop']  # assuming searching doesn't take loop as an arg
        except KeyError:
            pass
        # Passing None tells asyncio to use the default ThreadPoolExecutor.
        r = yield from loop.run_in_executor(None, self.searching, *args)
        return r
Testing script:
s = search()
loop = asyncio.get_event_loop()
loop.run_until_complete(s.searching_async(arg1, arg2, ...))
loop.close()
This way, you can keep your synchronous code as is, and at least provide methods that can be used in asyncio code without blocking the event loop. It's not as clean a solution as actually using asynchronous I/O in your code, but it's better than nothing.
Provide two completely separate versions of searching, one that uses blocking I/O, and one that's asyncio-compatible. This gives ideal implementations for both contexts, but requires twice the work.
I looked online and found some SO discussions and ActiveState recipes for running code with a timeout. It looks like there are some common approaches:
Use a thread that runs the code, and join it with a timeout. If the timeout elapses, kill the thread. This is not directly supported in Python (the recipes use the private _Thread__stop function), so it is bad practice.
Use signal.SIGALRM, but this approach does not work on Windows!
Use a subprocess with a timeout, but this is too heavy: what if I want to start interruptible tasks often? I don't want to fire up a process for each one!
So, what is the right way? I'm not asking about workarounds (e.g. use Twisted and async IO), but about an actual way to solve the actual problem: I have some function and I want to run it only with some timeout. If the timeout elapses, I want control back. And I want it to work on Linux and Windows.
A completely general solution to this really, honestly does not exist. You have to use the right solution for a given domain.
If you want timeouts for code you fully control, you have to write it to cooperate. Such code has to be able to break up into little chunks in some way, as in an event-driven system. You can also do this by threading if you can ensure nothing will hold a lock too long, but handling locks right is actually pretty hard.
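For illustration, a minimal sketch of that cooperative style (the helper name and the shape of the work units are made up here), assuming the work can be split into small units that check a deadline between steps:
import time

def run_cooperatively(steps, timelimit):
    """Run an iterable of small work units, checking the clock between them."""
    deadline = time.monotonic() + timelimit
    for step in steps:
        if time.monotonic() > deadline:
            raise TimeoutError('gave up after {:.1f}s'.format(timelimit))
        step()  # one small, bounded unit of work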
If you want timeouts because you're afraid code is out of control (for example, if you're afraid the user will ask your calculator to compute 9**(9**9)), you need to run it in another process. This is the only easy way to sufficiently isolate it. Running it in your event system or even a different thread will not be enough. It is also possible to break things up into little chunks similar to the other solution, but requires very careful handling and usually isn't worth it; in any event, that doesn't allow you to do the same exact thing as just running the Python code.
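A minimal sketch of that isolation, using multiprocessing.Process with join(timeout) and terminate() (the worker function and its argument here are just placeholders):
import multiprocessing
import time

def possibly_runaway(n):
    time.sleep(n)  # stand-in for code you cannot trust to finish

if __name__ == '__main__':  # guard required for multiprocessing on Windows
    p = multiprocessing.Process(target=possibly_runaway, args=(60,))
    p.start()
    p.join(timeout=5)      # give it at most 5 seconds
    if p.is_alive():
        p.terminate()      # hard-kill the whole worker process
        p.join()
        print('timed out')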
What you might be looking for is the multiprocessing module. If subprocess is too heavy, then this may not suit your needs either.
import time
import multiprocessing

def do_this_other_thing_that_may_take_too_long(duration):
    time.sleep(duration)
    return 'done after sleeping {0} seconds.'.format(duration)

if __name__ == '__main__':  # guard needed for multiprocessing on Windows
    pool = multiprocessing.Pool(1)
    print('starting....')
    res = pool.apply_async(do_this_other_thing_that_may_take_too_long, [8])
    for timeout in range(1, 10):
        try:
            print('{0}: {1}'.format(timeout, res.get(timeout)))
            break  # the result arrived, stop polling
        except multiprocessing.TimeoutError:
            print('{0}: timed out'.format(timeout))
    print('end')
If it's network related you could try:
import socket
socket.setdefaulttimeout(number)
I found this with eventlet library:
http://eventlet.net/doc/modules/timeout.html
from eventlet.timeout import Timeout

timeout = Timeout(seconds, exception)
try:
    ...  # execution here is limited by timeout
finally:
    timeout.cancel()
For "normal" Python code, that doesn't linger prolongued times in C extensions or I/O waits, you can achieve your goal by setting a trace function with sys.settrace() that aborts the running code when the timeout is reached.
Whether that is sufficient or not depends on how co-operating or malicious the code you run is. If it's well-behaved, a tracing function is sufficient.
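A minimal sketch of that idea (the helper name is made up; note the trace machinery fires on every call and line event, which slows pure-Python code down considerably, and it cannot interrupt a single long-running C call):
import sys
import time

def run_with_trace_timeout(func, timelimit):
    deadline = time.monotonic() + timelimit

    def tracer(frame, event, arg):
        # Invoked on call events; returning itself enables per-line tracing.
        if time.monotonic() > deadline:
            raise TimeoutError('trace-based timeout')
        return tracer

    sys.settrace(tracer)
    try:
        return func()
    finally:
        sys.settrace(None)  # always remove the trace function again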
Another way is to use faulthandler:
import time
import faulthandler

faulthandler.enable()
try:
    # Stdlib spelling (Python 3.3+): dump_traceback_later / cancel_dump_traceback_later.
    faulthandler.dump_traceback_later(3)
    time.sleep(10)
finally:
    faulthandler.cancel_dump_traceback_later()
N.B.: The faulthandler module has been part of the stdlib since Python 3.3.
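Note that dumping a traceback does not by itself interrupt anything; if you pass exit=True to dump_traceback_later, the whole process exits (without any cleanup) after the dump, which turns this into a crude hard timeout for the entire program.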
If you're running code that you expect to die after a set time, then you should write it properly so that there aren't any negative effects on shutdown, no matter if it's a thread or a subprocess. A command pattern with undo would be useful here.
So it really depends on what the thread is doing when you kill it. If it's just crunching numbers, who cares if you kill it? If it's interacting with the filesystem and you kill it, then maybe you should really rethink your strategy.
What is supported in Python when it comes to threads? Daemon threads and joins. Why does Python let the main thread exit if you've joined a daemon while it's still active? Because it's understood that someone using daemon threads will (hopefully) write the code in a way that it won't matter when that thread dies. Giving a timeout to a join and then letting main die, and thus taking any daemon threads with it, is perfectly acceptable in this context.
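In that spirit, a minimal sketch of the join-with-timeout pattern, assuming the background work is safe to abandon:
import threading
import time

def background():
    while True:
        time.sleep(0.1)  # work that is safe to abandon at any point

t = threading.Thread(target=background, daemon=True)
t.start()
t.join(timeout=5)  # wait at most 5 seconds for it to finish
# If the main thread now exits, the daemon thread is torn down with it.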
I've solved it this way (it worked great for me, on Windows, and is not heavy at all); I hope it's useful for someone:
import threading
import time

class LongFunctionInside(object):
    lock_state = threading.Lock()
    working = False

    def long_function(self, timeout):
        self.working = True
        timeout_work = threading.Thread(name="thread_name", target=self.work_time, args=(timeout,))
        timeout_work.daemon = True
        timeout_work.start()
        while True:  # endless/long work
            time.sleep(0.1)  # at this rate the CPU is almost not used
            if not self.working:  # the watchdog thread flipped the flag: stop working
                break
        self.set_state(True)

    def work_time(self, sleep_time):
        # Watchdog thread: just sleeps for the specified time; on wake-up it
        # checks whether the function is still working and, if so, flips the
        # lock-protected `working` flag to False.
        time.sleep(sleep_time)
        if self.working:
            self.set_state(False)

    def set_state(self, state):  # lock-protected state change
        while True:
            self.lock_state.acquire()
            try:
                self.working = state
                break
            finally:
                self.lock_state.release()

lw = LongFunctionInside()
lw.long_function(10)
The main idea is to create a thread that just sleeps in parallel with the "long work" and, on wake-up (after the timeout), changes the lock-protected state variable; the long function checks that variable during its work.
I'm pretty new to Python programming, so if this solution has fundamental errors (resource, timing, or deadlock problems), please respond.
Solving it with the 'with' construct, merging the solution from Timeout function if it takes too long to finish with this thread, which works better:
import threading, time

class Exception_TIMEOUT(Exception):
    pass

class linwintimeout:

    def __init__(self, f, seconds=1.0, error_message='Timeout'):
        self.seconds = seconds
        self.thread = threading.Thread(target=f)
        self.thread.daemon = True
        self.error_message = error_message

    def handle_timeout(self):
        raise Exception_TIMEOUT(self.error_message)

    def __enter__(self):
        self.thread.start()
        self.thread.join(self.seconds)

    def __exit__(self, type, value, traceback):
        if self.thread.is_alive():
            return self.handle_timeout()

def function():
    while True:
        print("keep printing ...")
        time.sleep(1)

try:
    with linwintimeout(function, seconds=5.0, error_message='exceeded timeout of %s seconds' % 5.0):
        pass
except Exception_TIMEOUT as e:
    print(" attention !! exceeded timeout, giving up ... %s" % e)