Exception bypassing in a Twisted LoopingCall (Python)

import sys

from twisted.internet import reactor, defer, task
from twisted.python import log

def periodic_task():
    log.msg("periodic task running")
    x = 10 / 0

def periodic_task_crashed(reason):
    log.err(reason, "periodic_task broken")

log.startLogging(sys.stdout)
my_task = task.LoopingCall(periodic_task)
d = my_task.start(1)
d.addErrback(periodic_task_crashed)
reactor.run()
I get the output, but the script stops. Is there any way to keep the script running even if an exception is raised? To be frank, instead of x = 10 / 0 I am making some API calls, and when one of them raises an error the script stops. What I want is for the script to keep running and check again and again, even if there is an error.

Just handle the exception: use a try ... except block around the code you know might fail.
def periodic_task():
    log.msg("periodic task running")
    try:
        x = 10 / 0
    except Exception as error:
        # At the very least log the error here; exceptions should not pass silently.
        pass

To ensure that the script keeps running even if there is an error, use a try / except block.
Within the except block, to make the code check again and again after an error (as described in your question), you could use function recursion to run the function again from within itself:
def periodic_task():
    log.msg("periodic task running")
    try:
        x = 10 / 0  # include 'API calls' here
    except:  # include the specific 'exception type' here
        periodic_task()
There are many pitfalls with function recursion, though, so be wary!
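Putting both suggestions together, here is a minimal sketch of the original script with the exception handled inside the task, so the LoopingCall keeps firing every second (the division by zero still stands in for the real API calls):
import sys

from twisted.internet import reactor, task
from twisted.python import log

def periodic_task():
    log.msg("periodic task running")
    try:
        x = 10 / 0  # stand-in for the real API calls
    except Exception as e:
        # Log and swallow the error so the LoopingCall keeps running
        # and tries again on the next iteration.
        log.err(e, "periodic_task failed, will retry on next iteration")

def periodic_task_crashed(reason):
    # Only reached if an exception escapes periodic_task entirely.
    log.err(reason, "periodic_task broken")

log.startLogging(sys.stdout)
my_task = task.LoopingCall(periodic_task)
d = my_task.start(1)
d.addErrback(periodic_task_crashed)
reactor.run()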

Related

Python run functions in sequence; if one fails, stop

What is the best practice to run functions in sequence?
I have 4 functions; if the previous one fails, the following ones should not run. In each function I set a global error flag and set error = 1 when an exception occurs. Then in main I just use if statements to check the value of error. I think there should be a better way to do this.
def main():
    engine = conn_engine()
    if error == 0:
        process_sql()
    if error == 0:
        append_new_rows_to_prod()
    if error == 0:
        send_email_log()
The canonical way is to raise an exception within the function. For example:
def process_sql():
    # Do stuff
    if stuff_failed:
        raise ProcessSQLException("Error while processing SQL")

def append_new_rows_to_prod():
    # Do other stuff
    if other_stuff_failed:
        raise AppendRowsException("Error while appending rows")

def main():
    engine = conn_engine()
    try:
        process_sql()
        append_new_rows_to_prod()
        send_email_log()
    except (ProcessSQLException, AppendRowsException) as e:
        # Handle the exception, or exit gracefully
        pass
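The custom exception types used above are not built-ins; assuming they only need to carry a message, a minimal way to define them is:
class ProcessSQLException(Exception):
    """Raised when the SQL-processing step fails."""

class AppendRowsException(Exception):
    """Raised when appending new rows to prod fails."""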

Implementing retry decorator one method higher than exception

I am trying to implement the retry decorator on a serial query. A general idea of my code is shown below. I am struggling to get it to retry when the decorator is one method up in the hierarchy. How can I have the method be retried when it's one method up from the method that throws the exception?
One frustrating complication is that my increment time per retry depends on the actual command: some commands require more time than others. That's why I pass in extra_time_per_retry and couldn't apply the retry decorator in the traditional @retry style.
FYI, the _serial is created in the class's __init__ via pySerial.
I got it to work with the retry decorator directly above the method that throws the exception. I would like it to be two above to keep my code clean.
I have tried feeding the retry decorator the exact exception type, but couldn't get it to work.
def _query_with_retries(self, cmd, extra_time_per_retry):
    _retriable_query = retry(stop_max_attempt_number=5,
                             wait_incrementing_start=self._serial.timeout + extra_time_per_retry,
                             wait_incrementing_increment=10)(self._query)
    return _retriable_query(cmd)

def _query(self, cmd):
    cmd_msg = cmd + '\r'
    self._serial.reset_input_buffer()
    self._serial.reset_output_buffer()
    self._serial.write(cmd_msg)
    return self._readlines()

def _readlines(self):
    response_str = self._serial.read_until('\r', 256)  # Max 256 bytes
    # Parse response here; if it is bad, set bad_response = True
    if bad_response:
        raise ResponseError("Response had custom error xyz")
I guess you could wrap your call to _readlines() in an exception-handling block, re-raising the error:
Python 3.x
@retry
def _query(self, cmd):
    cmd_msg = cmd + '\r'
    self._serial.reset_input_buffer()
    self._serial.reset_output_buffer()
    self._serial.write(cmd_msg)
    try:
        answer = self._readlines()
    except Exception as e:
        raise e
    return answer
Python 2.x
@retry
def _query(self, cmd):
    cmd_msg = cmd + '\r'
    self._serial.reset_input_buffer()
    self._serial.reset_output_buffer()
    self._serial.write(cmd_msg)
    try:
        answer = self._readlines()
    except Exception:
        t, v, tb = sys.exc_info()
        raise t, v, tb
    return answer
This way, you catch the exception directly when it occurs and re-raise it inside the method that will be retried. I am not sure whether this declutters things enough for you, but it should work.
Some might complain about using a blanket except Exception, but since I always re-raise immediately, I do not see any harm.
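As a variation, assuming the decorator comes from the retrying library (the stop_max_attempt_number / wait_incrementing_* keywords suggest it does), you can keep the retry one level up and skip the extra try/except entirely: the ResponseError raised in _readlines() propagates through _query(), so the wrapper built in _query_with_retries already sees it, and retry_on_exception lets you retry only on that error:
from retrying import retry

def _query_with_retries(self, cmd, extra_time_per_retry):
    # Retry only when _readlines() raises ResponseError; anything else propagates.
    _retriable_query = retry(
        stop_max_attempt_number=5,
        wait_incrementing_start=self._serial.timeout + extra_time_per_retry,
        wait_incrementing_increment=10,
        retry_on_exception=lambda exc: isinstance(exc, ResponseError),
    )(self._query)
    return _retriable_query(cmd)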

How can I raise Exception using eventlet in Python?

I have a simple piece of code:
import eventlet

def execute():
    print("Start")
    timeout = Timeout(3)
    try:
        print("First")
        sleep(4)
        print("Second")
    except:
        raise TimeoutException("Error")
    finally:
        timeout.cancel()
    print("Third")
This code should throw a TimeoutException, because the code in the 'try' block runs for more than 3 seconds.
But the exception gets swallowed somewhere; I can't see it in the output.
This is the output:
Start
First
Process finished with exit code 0
How can I raise this exception to the output?
Change sleep(4) to
eventlet.sleep(4)
This code will not even output Start..., because nobody calls execute(); also sleep is not defined. Show the real code and I will edit the answer.
For now, several speculations:
maybe you have from time import sleep; then it's a duplicate of Eventlet timeout not exiting, and the problem is that you don't give Eventlet a chance to run and notice there was a timeout. Solutions: eventlet.sleep() everywhere, or eventlet.monkey_patch() once.
maybe you don't import sleep at all; then it's a NameError: sleep, and all exceptions from execute are hidden by the caller.
maybe you run this code with stderr redirected to file or /dev/null.
Let's also fix other issues.
try:
    # ...
    sleeep()  # with 3 'e', invalid name
    open('file', 'rb')
    raise Http404
except:
    # here you catch *all* exceptions
    # in Python 2.x even SystemExit, KeyboardInterrupt, GeneratorExit
    # - things you normally don't want to catch
    # but in any Python version, also NameError, IOError, OSError,
    # your application errors, etc, all irrelevant to the timeout
    raise TimeoutException("Error")
In Python 2.x you should never write a bare except:, only except Exception:.
So let's catch only proper exceptions.
try:
    # ...
    execute_other()  # also has a Timeout, but a shorter one, so it will fire first
except eventlet.Timeout:
    # Now there may be another timeout inside the `try` block, and
    # you will accidentally handle someone else's timeout.
    raise TimeoutException("Error")
So let's verify that it was yours.
timeout = eventlet.Timeout(3)
try:
    # ...
except eventlet.Timeout as t:
    if t is timeout:
        raise TimeoutException("Error")
    # else, re-raise and give the owner of the other timeout a chance to handle it
    raise
Here's the same code with shorter syntax:
with eventlet.Timeout(3, TimeoutException("Error")):
    print("First")
    eventlet.sleep(4)
    print("Second")
print("Third")
I hope you really need to substitute one timeout exception for another.
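For completeness, a self-contained sketch of the fixed script (assuming TimeoutException is your own exception class) might look like this:
import eventlet

class TimeoutException(Exception):
    pass

def execute():
    print("Start")
    # If the block takes longer than 3 seconds, eventlet raises the given
    # TimeoutException instance instead of its own eventlet.Timeout.
    with eventlet.Timeout(3, TimeoutException("Error")):
        print("First")
        eventlet.sleep(4)  # cooperative sleep, so the timeout can fire
        print("Second")
    print("Third")

execute()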

Raise exception if script fails

I have a python script, tutorial.py. I want to run this script from a file test_tutorial.py, which is within my python test suite. If tutorial.py executes without any exceptions, I want the test to pass; if any exceptions are raised during execution of tutorial.py, I want the test to fail.
Here is how I am writing test_tutorial.py, which does not produce the desired behavior:
from os import system

test_passes = False
try:
    system("python tutorial.py")
    test_passes = True
except:
    pass
assert test_passes
I find that the above control flow is incorrect: if tutorial.py raises an exception, then the assert line never executes.
What is the correct way to test if an external script raises an exception?
If there is no error, s will be 0:
from os import system

s = system("python tutorial.py")
assert s == 0
Or use subprocess:
from subprocess import PIPE, Popen

s = Popen(["python", "tutorial.py"], stderr=PIPE)
_, err = s.communicate()  # err will be an empty string if the program runs ok
assert not err
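As another option, assuming you only care about the child process exit status (an uncaught exception makes Python exit with a non-zero code), subprocess.check_call raises CalledProcessError for you, so no assert is needed:
from subprocess import check_call

# Raises CalledProcessError if tutorial.py exits with a non-zero status,
# e.g. because an uncaught exception terminated it.
check_call(["python", "tutorial.py"])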
Your try/except catches nothing from the tutorial file; you can move everything out of it and it will behave the same:
from os import system

test_passes = False
s = system("python tutorial.py")
test_passes = True
assert test_passes
from os import system

test_passes = False
try:
    system("python tutorial.py")
    test_passes = True
except:
    pass
finally:
    assert test_passes
This is going to solve your problem. The finally block runs whether or not an error is raised; check this for more information. It is commonly used for file handling (when not using with open()) to make sure the file is safely closed.

Python: Try three times a function until all failed [duplicate]

I am writing in Python 2.7 and have run into the following situation. I would like to try calling a function three times. If all three calls raise errors, I will raise the last error I got. If any one of the calls succeeds, I will stop trying and continue immediately.
Here is what I have right now:
output = None
error = None
for _e in range(3):
    error = None
    try:
        print 'trial %d!' % (_e + 1)
        output = trial_function()
    except Exception as e:
        error = e
    if error is None:
        break
if error is not None:
    raise error
Is there a better snippet that achieves the same thing?
Use a decorator:
from functools import wraps

def retry(times):
    def wrapper_fn(f):
        @wraps(f)
        def new_wrapper(*args, **kwargs):
            for i in range(times):
                try:
                    print 'try %s' % (i + 1)
                    return f(*args, **kwargs)
                except Exception as e:
                    error = e
            raise error
        return new_wrapper
    return wrapper_fn

@retry(3)
def foo():
    return 1 / 0

print foo()
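To see the 'stop on the first success' behaviour, here is a small hypothetical example (flaky_function and its counter are made up for illustration): it fails twice, succeeds on the third call, and the decorator returns that result without raising:
attempts = {'count': 0}

@retry(3)
def flaky_function():
    attempts['count'] += 1
    if attempts['count'] < 3:
        raise ValueError('not ready yet')
    return 'succeeded on attempt %d' % attempts['count']

print flaky_function()  # after 'try 1' .. 'try 3', prints: succeeded on attempt 3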
Here is one possible approach:
def attempt(func, times=3):
    for _ in range(times):
        try:
            return func()
        except Exception as err:
            pass
    raise err
A demo, with a print statement added to show the attempt number:
>>> attempt(lambda: 1/0)
Attempt 1
Attempt 2
Attempt 3
Traceback (most recent call last):
File "<pyshell#18>", line 1, in <module>
attempt(lambda: 1/0)
File "<pyshell#17>", line 8, in attempt
raise err
ZeroDivisionError: integer division or modulo by zero
If you're using Python 3.x and get an UnboundLocalError, you can adapt as follows:
def attempt(func, times=3):
    to_raise = None
    for _ in range(times):
        try:
            return func()
        except Exception as err:
            to_raise = err
    raise to_raise
This is because err is cleared at the end of the except clause; per the docs:
When an exception has been assigned using as target, it is cleared
at the end of the except clause.
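A tiny illustration of that behaviour on Python 3:
try:
    1 / 0
except ZeroDivisionError as err:
    pass

print(err)  # raises NameError: name 'err' is not defined -- the binding was deleted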
Ignoring the debug output and the ancient Python dialect, this looks good. The only thing I would change is to put it into a function; you could then simply return the result of trial_function(). The error = None then also becomes unnecessary, including the associated checks: if the loop runs to completion, error must have been set, so you can just re-raise it. If you don't want a function, consider using else in combination with the for loop and breaking after the first successful result.
for i in range(3):
    try:
        result = foo()
        break
    except Exception as error:
        pass
else:
    raise error
use_somehow(result)
Of course, the suggestion to use a decorator for the function still holds. You can also apply it locally; the decorator syntax is only syntactic sugar after all:
# retry from powerfj's answer below
rfoo = retry(3)(foo)
result = rfoo()
I came across a clean way of doing the retries: there is a module called retry.
First install the module:
pip install retry
Then import it in your code:
from retry import retry
Use the @retry decorator above the method. We can pass parameters to the decorator, such as the exception type, tries, and delay.
Example
from retry import retry

@retry(AssertionError, tries=3, delay=2)
def retryfunc():
    try:
        ret = False
        assert ret, "Failed"
    except Exception as ex:
        print(ex)
        raise ex
The above code fails its assertion every time, but the retry decorator retries it 3 times with a delay of 2 seconds between retries. It also only retries on assertion failures, since we specified AssertionError as the error type; on any other error the function won't be retried.
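For comparison, a sketch of the same decorator on a function that recovers after transient failures (the counter and the ValueError are made up for illustration); it returns normally on the third call instead of exhausting its retries:
from retry import retry

calls = {'count': 0}

@retry(ValueError, tries=3, delay=1)
def flaky():
    calls['count'] += 1
    if calls['count'] < 3:
        raise ValueError('transient failure %d' % calls['count'])
    return 'ok after %d calls' % calls['count']

print(flaky())  # retries twice, then prints: ok after 3 calls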
