I'm attempting to code a try-except loop that refreshes a webpage if it fails to load. Here's what I've done so far:
driver.get("url")
while True:
    try:
        <operation>
    except:
        driver.refresh()
I want to set this loop up so that if 5 seconds elapse and the operation is not executed (presumably because the page did not load), it attempts to refresh the page. Is there an exception we can incorporate in except that catches the time delay?
I would recommend reading this post: Timeout function if it takes too long to finish.
The gist of it is that you can use signals to interrupt the code and raise an error, which you then catch.
In your example:
import signal

def _handle_timeout(signum, frame):
    raise TimeoutError("Execution timed out")

driver.get("url")
signal.signal(signal.SIGALRM, _handle_timeout)
while True:
    try:
        signal.alarm(<timeout value>)
        <operation>
        signal.alarm(0)
    except TimeoutError:
        driver.refresh()
You can test this with the following snippet:
import time
import signal

def _handle_timeout(signum, frame):
    raise TimeoutError("Execution timed out")

def test(timeout, execution_time):
    signal.signal(signal.SIGALRM, _handle_timeout)
    try:
        signal.alarm(timeout)
        time.sleep(execution_time)
        signal.alarm(0)
    except:
        raise
    else:
        print "Executed successfully"
This will raise an error when execution_time > timeout.
As noted in Python signal don't work even on Cygwin?, the above code will not work on Windows machines.
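For Windows, where SIGALRM is unavailable, one portable alternative (a sketch of mine, not from the linked post) is to run the operation in a worker thread and bound the wait with concurrent.futures; note that on timeout the worker keeps running in the background rather than being killed:

```python
import concurrent.futures
import time

_executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)

def run_with_timeout(func, timeout, *args, **kwargs):
    # Run func(*args, **kwargs) in a worker thread and wait at most
    # `timeout` seconds; raises concurrent.futures.TimeoutError otherwise.
    # The worker thread itself is NOT killed on timeout.
    future = _executor.submit(func, *args, **kwargs)
    return future.result(timeout=timeout)

print(run_with_timeout(lambda: 42, 1.0))  # 42
try:
    run_with_timeout(time.sleep, 0.1, 0.5)
except concurrent.futures.TimeoutError:
    print("timed out")
```

Unlike the signal approach, this does not interrupt the blocked call itself; it only stops your code from waiting on it.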
Related
After a server upgrade, I'm having an issue with a python script eating up all the server connections because it appears that after a timeout it doesn't actually end the loop. The code looks like:
if os.name == 'posix':
    signal.signal(signal.SIGALRM, self.handle_timeout)
    signal.alarm(__TIMEOUT__)
try:
    self.inputline = self.rfile.readline()
except IOError:
    continue
if os.name == 'posix':
    signal.alarm(0)
The signal occurs and all that does is set a terminated variable to 1 and print a log about the time out. The while looks like while not self.terminated:. My guess is that because it has except IOError: the except doesn't occur and it's still sitting on the readline(). So the question is, what is the proper way to ensure the SIGALRM will cause the continue which will end the loop and exit the script?
TIA!!
In handle_timeout() you can raise an exception to get your try to wake up. Then catch that exception in your try/except:
def handle_timeout(signum, frame):
    raise KeyboardInterrupt('received signal to exit')

while not terminated:
    try:
        self.inputline = self.rfile.readline()
    except (IOError, KeyboardInterrupt):
        # Note the tuple: in Python 2, "except IOError, KeyboardInterrupt:"
        # would only catch IOError and bind it to the name KeyboardInterrupt.
        terminated = True
        continue
I am trying to learn python's signal module. Please consider the example below:
def timeoutFn(func, args=(), kwargs={}, timeout_duration=1, default=None):
    import signal

    class TimeoutError(Exception):
        pass

    def handler(signum, frame):
        print "Trying to raise exception"
        raise TimeoutError

    # set the timeout handler
    signal.signal(signal.SIGALRM, handler)
    signal.alarm(timeout_duration)
    try:
        result = func(*args, **kwargs)
    except TimeoutError as exc:
        result = default
    finally:
        signal.alarm(0)
    return result
and,
import time

def foo():
    for i in range(10):
        time.sleep(0.5)
        print "Sleeping"
On calling timeoutFn(foo), the output below gets printed, but the exception never surfaces.
Shouldn't it raise the TimeoutError? All it prints is
Sleeping
Trying to raise exception
and program stops.
The exception has been raised; you just catch it. Pay attention to these lines:
except TimeoutError as exc:
    result = default
This block means that if TimeoutError is raised, result is assigned default (which is None in your example) and the script continues without showing the exception.
Update:
signal.alarm doesn't stop the flow. The exception is raised when the timeout expires, and if the script is inside the try block by then, the exception is caught. You can see how this works more clearly if you increase timeout_duration to 3: there is then time to print several "Sleeping" messages, which shows that the interpreter had already entered the try block by the time the exception was raised.
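A minimal Python 3 rework of the same pattern (mine, for illustration) makes the flow visible: the handler's exception interrupts the sleep inside the try, is caught by the except, and the default is returned:

```python
import signal
import time

def timeout_fn(func, timeout_duration=1, default=None):
    class Timeout(Exception):
        pass

    def handler(signum, frame):
        raise Timeout  # raised inside whatever func is currently doing

    old_handler = signal.signal(signal.SIGALRM, handler)
    signal.alarm(timeout_duration)
    try:
        return func()
    except Timeout:
        return default  # the exception is swallowed here
    finally:
        signal.alarm(0)
        signal.signal(signal.SIGALRM, old_handler)

print(timeout_fn(lambda: time.sleep(2) or "done", 1))  # None (timed out)
print(timeout_fn(lambda: "done", 1))                   # done
```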
I cannot understand why I sometimes fail to catch the TimedOutError inside my flush_serial_buffer method.
When running my program I sometimes get a TimedOutError that is not caught. Below are the signal handler and the method from which the TimedOutError escapes. How can this be happening?
This is the code for my signal handler definition and callback function.
Basically if the time ends, the signal handler is called and raises a timeout error.
def signal_handler(signum, frame):
    print "PUM"
    raise TimedOutError("Time out Error")

signal.signal(signal.SIGALRM, signal_handler)
The flush serial buffer blocks if there is no answer to
answer = xbee.wait_read_frame()
The idea is to clean everything in the buffer until there aren’t any more messages. When there are no more messages, it just waits for the SIGALRM to explode and raises the timeout error.
def flush_serial_buffer(xbee):
    # Flush coordinator's serial buffer if a problem happened before
    logging.info(" Flashing serial buffer")
    try:
        signal.alarm(1)  # Seconds
        while True:
            answer = xbee.wait_read_frame()
            signal.alarm(1)
            logging.error(" Mixed messages in buffer")
    except TimedOutError:
        signal.alarm(0)  # Seconds
        logging.error(" No more messages in buffer")
    signal.alarm(0)  # Supposedly it never leaves without hitting except, but...
Is there a case where the TimeOutError might be raised, but not caught by the try: statement?
Here is my error class definition:
class TimedOutError(Exception):
    pass
I was able to reproduce the error. I really cannot understand why the try does not catch it.
INFO:root: Flashing serial buffer
PUM
Traceback (most recent call last):
File "/home/ls/bin/pycharm-community-4.0.6/helpers/pydev/pydevd.py", line 1458, in trace_dispatch
if self._finishDebuggingSession and not self._terminationEventSent:
File "/home/ls/PiProjects/Deployeth/HW-RPI-API/devices.py", line 42, in signal_handler
raise TimedOutError("Time out Error")
TimedOutError: Time out Error
I would recommend in this case replacing the try and except code with this:
try:
    signal.alarm(1)  # Seconds
    while True:
        answer = xbee.wait_read_frame()
        signal.alarm(1)
        logging.error(" Mixed messages in buffer")
except:
    signal.alarm(0)  # Seconds
    logging.error(" No more messages in buffer")
PS: You don't need to name a specific exception class in the except clause here; a bare except will catch the timeout as well.
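One plausible explanation for the escaping exception (my sketch, not from the original post): the alarm re-armed on the last iteration can fire after control has already left the try block but before signal.alarm(0) runs, so nothing catches it. Disarming the alarm in a finally block narrows that window; the hypothetical read_frame argument below stands in for xbee.wait_read_frame():

```python
import signal
import time

class TimedOutError(Exception):
    pass

def signal_handler(signum, frame):
    raise TimedOutError("Time out Error")

signal.signal(signal.SIGALRM, signal_handler)

def flush_serial_buffer(read_frame, seconds=1):
    # `read_frame` is a hypothetical stand-in for xbee.wait_read_frame():
    # it blocks until a frame arrives or the alarm interrupts it.
    try:
        signal.alarm(seconds)
        while True:
            read_frame()
            signal.alarm(seconds)  # re-arm after each frame read
    except TimedOutError:
        return "buffer empty"
    finally:
        signal.alarm(0)  # always disarm, whatever path we leave by

print(flush_serial_buffer(lambda: time.sleep(5)))  # buffer empty
```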
I'm basing this off of the sample from https://docs.python.org/3/library/concurrent.futures.html#id1.
I've updated the following:
data = future.result()
to this:
data = future.result(timeout=0.1)
The doc for concurrent.futures.Future.result states:
If the call hasn’t completed in timeout seconds, then a TimeoutError will be raised. timeout can be an int or float
(I know there is a timeout on the request, for 60, but in my real code I'm performing a different action that doesn't use a urllib request)
import concurrent.futures
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

# Retrieve a single page and report the url and contents
def load_url(url, timeout):
    conn = urllib.request.urlopen(url, timeout=timeout)
    return conn.readall()

# We can use a with statement to ensure threads are cleaned up promptly
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Start the load operations and mark each future with its URL
    future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            # The below timeout isn't raising the TimeoutError.
            data = future.result(timeout=0.01)
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r page is %d bytes' % (url, len(data)))
TimeoutError is raised if I set it on the call to as_completed, but I need to set the timeout on a per Future basis, not all of them as a whole.
Update
Thanks @jme, that works with a single Future, but not with multiple futures as below. Do I need to yield at the beginning of the functions to allow the futures dict to build up? From the docs it sounds like the calls to submit shouldn't block.
import concurrent.futures
import time
import sys

def wait():
    time.sleep(5)
    return 42

with concurrent.futures.ThreadPoolExecutor(4) as executor:
    waits = [wait, wait]
    futures = {executor.submit(w): w for w in waits}
    for future in concurrent.futures.as_completed(futures):
        try:
            future.result(timeout=1)
        except concurrent.futures.TimeoutError:
            print("Too long!")
            sys.stdout.flush()
        print(future.result())
The issue seems to be with the call to concurrent.futures.as_completed().
If I replace that with just a for loop, everything seems to work:
for wait, future in [(w, executor.submit(w)) for w in waits]:
    ...
I misinterpreted the doc for as_completed which states:
...yields futures as they complete (finished or were cancelled)...
as_completed will handle timeouts but as a whole, not on a per future basis.
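To illustrate the distinction (a sketch of mine with made-up timings): the timeout accepted by as_completed bounds the whole iteration, so a slow future simply never gets yielded:

```python
import concurrent.futures
import time

def wait_for(seconds):
    time.sleep(seconds)
    return seconds

with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
    futures = [executor.submit(wait_for, s) for s in (0.1, 1.0)]
    done = []
    try:
        # This timeout bounds the whole iteration, not each future.
        for future in concurrent.futures.as_completed(futures, timeout=0.4):
            done.append(future.result())
    except concurrent.futures.TimeoutError:
        pass  # the 1.0 s future never completed within the overall budget

print(done)  # [0.1]
```

Exiting the with block still waits for the slow worker to finish; the timeout only stopped the iteration.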
The exception is being raised in the main thread, you just aren't seeing it because stdout hasn't been flushed. Try for example:
import concurrent.futures
import time
import sys

def wait():
    time.sleep(5)
    return 42

with concurrent.futures.ThreadPoolExecutor(4) as executor:
    future = executor.submit(wait)
    try:
        future.result(timeout=1)
    except concurrent.futures.TimeoutError:
        print("Too long!")
        sys.stdout.flush()
    print(future.result())
Run this and you'll see "Too long!" appear after one second, but the interpreter will wait an additional four seconds for the threads to finish executing. Then you'll see 42 -- the result of wait() -- appear.
What does this mean? Setting a timeout doesn't kill the thread, and that's actually a good thing. What if the thread is holding a lock? If we kill it abruptly, that lock is never freed. No, it's much better to let the thread handle its own demise. Likewise, the purpose of future.cancel is to prevent a thread from starting, not to kill it.
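The point about future.cancel can be seen with a single-worker pool (my sketch): a future that is already running cannot be cancelled, while one still queued can:

```python
import concurrent.futures
import time

def work(n):
    time.sleep(0.3)
    return n

with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
    running = executor.submit(work, 1)   # starts immediately
    pending = executor.submit(work, 2)   # queued behind it
    time.sleep(0.05)                     # give the first task time to start
    cancelled_running = running.cancel() # False: already executing
    cancelled_pending = pending.cancel() # True: never started, so it is dropped

print(cancelled_running, cancelled_pending, running.result())  # False True 1
```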
I am following some example code to use asyncore here, only having set a timeout value for asyncore.loop as in the following full example:
import smtpd
import asyncore

class CustomSMTPServer(smtpd.SMTPServer):
    def process_message(self, peer, mailfrom, rcpttos, data):
        print 'Receiving message from:', peer
        print 'Message addressed from:', mailfrom
        print 'Message addressed to  :', rcpttos
        print 'Message length        :', len(data)
        return

server = CustomSMTPServer(('127.0.0.1', 1025), None)
asyncore.loop(timeout=1)
I expected a timeout to occur after 1 second, but the code runs for much longer than one second. What am I missing here?
The timeout argument to asyncore.loop() is the amount of time the select.select call will wait for data. If no data arrives before the timeout runs out, the loop simply calls select.select again.
The same goes for channels: these are not raw open sockets but active asyncore.dispatcher or asynchat.async_chat instances. If you want to stop the loop, you have to call the close() method on ALL registered instances.
In your case, server.close() will close the instance/channel and remove it from the asyncore loop. When no channels remain active, the loop terminates itself.
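The select behaviour described above can be seen without asyncore at all (a minimal sketch of mine): with no pending data, select.select simply returns empty lists once the timeout expires, and a surrounding loop would immediately call it again:

```python
import select
import socket
import time

# A listening socket that never receives a connection.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)

start = time.monotonic()
readable, writable, errored = select.select([server], [], [], 0.2)
elapsed = time.monotonic() - start
server.close()

print(readable)  # []: nothing to accept, select returned after ~0.2 s
```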
I do not really know whether the timeout argument to asyncore.loop() is meant to time out the asyncore.loop() call itself after the specified time, but here is a recipe to make that call time out after a specified interval (replacing the asyncore.loop() line in the example code):
import signal

class TimeoutError(Exception):
    pass

# define the timeout handler
def handler(signum, frame):
    raise TimeoutError()

# set the timeout handler and the signal duration
signal.signal(signal.SIGALRM, handler)
signal.alarm(1)
try:
    asyncore.loop()
except TimeoutError as exc:
    print "timeout"
finally:
    signal.alarm(0)
The timeout of asyncore.loop() is the timeout for select().
It is not useful on its own, because when select() times out, the loop simply starts over; see the pseudo code:
while True:
    do_something()
    select(...)
    do_something_else()
When I simulated this with firewalled sockets, on my Python 2.7.3 asyncore.loop() timed out one minute after no data was received from a socket.
I found it very useful to have the following method in my asyncore.dispatcher subclass:
def handle_error(self):
    raise
That way I got a "proper" exception traceback.
Because I did not want the exception, I later changed it to something like:
def handle_error(self):
    print "Error downloading %s" % self.host
    pass
Now my code works correctly, without exceptions.
I did not find a way to control the timeout.