I'm trying to handle failure in Fabric, but the example I saw in the docs was too localized for my taste. I need to execute rollback actions if any of a number of actions fail. I then tried to use a context manager to handle it, like this:
@contextmanager
def failwrapper():
    with settings(warn_only=True):
        result = yield
        if result.failed:
            rollback()
            abort("********* Failed to execute deploy! *********")
And then
@task
def deploy():
    with failwrapper():
        updateCode()
        migrateDb()
        restartServer()
Unfortunately, when one of these tasks fails, I do not get anything in result.
Is there any way of accomplishing this? Or is there another way of handling such situations?
According to my tests, you can accomplish this with the following:
from contextlib import contextmanager

@contextmanager
def failwrapper():
    try:
        yield
    except SystemExit:
        rollback()
        abort("********* Failed to execute deploy! *********")
As you can see, I got rid of the warn_only setting, since I suppose you don't need it if the rollback can be executed and you're aborting execution anyway with abort().
Fabric raises a SystemExit exception when it encounters an error and the warn_only setting is not used. We can just catch that exception and do the rollback.
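You can verify the SystemExit behaviour with plain Python, since Fabric's abort() is essentially sys.exit() with a message (rollback() below is a stand-in for the real rollback action):

```python
import sys

rolled_back = []

def rollback():
    rolled_back.append(True)  # stand-in for the real rollback action

try:
    sys.exit("********* command failed *********")  # what abort()/a failed command effectively does
except SystemExit:
    rollback()

print(rolled_back)
```

The same catch works no matter how deep in the with block the failing command sits, because SystemExit propagates like any other exception.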
Following on from Henri's answer, this also handles keyboard interrupts (Ctrl-C) and other exceptions:
@contextmanager
def failwrapper():
    try:
        yield
    except:  # bare except on purpose: also catches KeyboardInterrupt and SystemExit
        rollback()
        raise
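The mechanics can be tried outside Fabric. A minimal sketch (rollback() here is a stand-in) showing that an exception raised inside the with block is caught around the yield, the rollback runs, and the exception still propagates:

```python
from contextlib import contextmanager

rolled_back = []

def rollback():
    rolled_back.append(True)  # stand-in for the real rollback action

@contextmanager
def failwrapper():
    try:
        yield
    except Exception:
        rollback()
        raise  # re-raise so the caller still sees the failure

try:
    with failwrapper():
        raise RuntimeError("deploy step failed")
except RuntimeError:
    pass

print(rolled_back)
```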
Related
I'm looking for the most Pythonic way of trying a command, catching an error if one occurs, and retrying by running a preparatory command and then the original command. Specifically, in my case I'm looking to write a table to a database, catch a "schema does not exist" error if it is thrown, then try to create the schema and retry the write. If the write errors again, I don't want to catch it.
So far I have (schematically):
try:
    write_to_table(table, schema)
except sqlalchemy.exc.ProgrammingError:
    create_schema(schema)
    write_to_table(table, schema)
This does what I want, but it seems a bit off somehow, maybe because I'm duplicating write_to_table().
So what's the most pythonic way of doing the above?
P.S. When I say I'd like to retry, I do NOT want something like this: How to retry after exception?
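For reference, here is the question's pattern as a self-contained sketch; write_to_table, create_schema, and the SchemaMissing exception are hypothetical stand-ins for the real SQLAlchemy calls and sqlalchemy.exc.ProgrammingError:

```python
class SchemaMissing(Exception):  # stand-in for sqlalchemy.exc.ProgrammingError
    pass

schemas = set()

def create_schema(schema):
    schemas.add(schema)

def write_to_table(table, schema):
    if schema not in schemas:
        raise SchemaMissing(f"schema {schema!r} does not exist")
    return f"wrote {table}"

try:
    result = write_to_table("users", "app")
except SchemaMissing:
    create_schema("app")
    result = write_to_table("users", "app")  # a second failure would propagate uncaught

print(result)
```

Note that only the first attempt is guarded: the retry inside the except block is deliberately left bare, so a repeated failure surfaces to the caller, which is exactly the behaviour asked for.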
Just create a reusable decorator!
import random
import time

import sqlalchemy.exc

def retry(func):
    '''
    Decorator.
    If the decorated function fails with an exception,
    calls it again after a random interval.
    '''
    def _retry(*args, **kwargs):
        max_retries = 1
        for i in range(max_retries):
            try:
                return func(*args, **kwargs)
            except sqlalchemy.exc.ProgrammingError as e:
                print('function [{}({}, {})] failed with error: {}. Retrying...'.format(
                    func.__name__, args, kwargs, e))
                time.sleep(random.uniform(1, 3))
        # Final attempt: any exception raised here propagates to the caller.
        return func(*args, **kwargs)
    return _retry
By using the above decorator, in which you can also configure the number of retries and the retry interval, you can do the following (the decorator is applied to the function definition, not to a call):

@retry
def write_to_table(table, schema):
    ...
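As a self-contained illustration of the control flow (using a plain ValueError instead of the SQLAlchemy error, and no sleep, purely to keep it runnable):

```python
def retry(func):
    """Retry once on ValueError, then let any exception propagate."""
    def _retry(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except ValueError:
            return func(*args, **kwargs)  # second attempt; errors propagate
    return _retry

calls = []

@retry
def flaky():
    calls.append(1)
    if len(calls) == 1:
        raise ValueError("schema does not exist")
    return "ok"

print(flaky())  # succeeds on the second attempt
```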
I'm wondering if anybody has an idea how to catch all exceptions in a running thread. My program is started as follows, by a service:
def main():
    global RUNNING
    signal.signal(signal.SIGINT, stopHandler)
    signal.signal(signal.SIGTERM, stopHandler)
    projectAlice = ProjectAlice()
    try:
        while RUNNING:
            time.sleep(0.1)
    except KeyboardInterrupt:
        pass
    finally:
        projectAlice.onStop()
        _logger.info('Project Alice stopped, see you soon!')
So a CTRL-C or a signal can stop it. ProjectAlice runs forever and answers MQTT topics sent by Snips. It uses paho-mqtt with loop_forever. As it's pretty large, errors can occur, even though they shouldn't. I cover as many as I can, but today, as an example, google-translate started throwing errors because it can't use the service anymore (free...). Unhandled errors... So the thread crashes and ProjectAlice is left as is. I would like, as is possible for example in Java, to super-catch all exceptions and carry on from there.
Here's a simple solution to override the python exception hook, thus enabling you to handle uncaught exceptions:
import sys

def my_custom_exception_hook(exctype, value, tb):
    print('Yo, do stuff here, handle specific exceptions and raise others or whatever')
and before your actual code starts do:
sys.excepthook = my_custom_exception_hook
A simple except Exception: will catch all exceptions except KeyboardInterrupt and SystemExit within the same thread.
You'll have to have the try: except ...: block within the code that is run in the thread to catch exceptions occurring in the thread.
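Both points above can be sketched together: a catch-all try/except inside the thread target, plus threading.excepthook (available since Python 3.8) as a fallback for anything that still escapes the thread. sys.excepthook alone only fires for the main thread:

```python
import threading

caught = []

def worker():
    try:
        raise RuntimeError("boom inside the thread")
    except Exception as e:  # catches everything except SystemExit/KeyboardInterrupt
        caught.append(str(e))

def hook(args):
    # fallback for exceptions NOT handled inside the thread target
    caught.append(f"uncaught: {args.exc_value}")

threading.excepthook = hook

t = threading.Thread(target=worker)
t.start()
t.join()
print(caught)
```

Here the worker's own except block handles the error, so the hook never fires; remove the try/except and the same exception would reach threading.excepthook instead.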
I know I could return, but I'm wondering if there's something else, especially for helper methods, where returning None would force the caller to add boilerplate checking at each invocation.
I found InvalidTaskError, but no real documentation - is this an internal thing? Is it appropriate to raise this?
I was looking for something like a self.abort() similar to the self.retry(), but didn't see anything.
Here's an example where I'd use it.
def helper(task, arg):
    if unrecoverable_problems(arg):
        # abort the task
        raise InvalidTaskError()

@task(bind=True)
def task_a(self, arg):
    helper(task=self, arg=arg)
    do_a(arg)

@task(bind=True)
def task_b(self, arg):
    helper(task=self, arg=arg)
    do_b(arg)
After doing more digging, I found an example using Reject (copied from the doc page):
The task may raise Reject to reject the task message using AMQP's basic_reject method. This will not have any effect unless Task.acks_late is enabled.
Rejecting a message has the same effect as acking it, but some brokers may implement additional functionality that can be used. For example RabbitMQ supports the concept of Dead Letter Exchanges where a queue can be configured to use a dead letter exchange that rejected messages are redelivered to.
Reject can also be used to requeue messages, but please be very careful when using this as it can easily result in an infinite message loop.
Example using reject when a task causes an out of memory condition:
import errno
from celery.exceptions import Reject

@app.task(bind=True, acks_late=True)
def render_scene(self, path):
    file = get_file(path)
    try:
        renderer.render_scene(file)
    # if the file is too big to fit in memory
    # we reject it so that it's redelivered to the dead letter exchange
    # and we can manually inspect the situation.
    except MemoryError as exc:
        raise Reject(exc, requeue=False)
    except OSError as exc:
        if exc.errno == errno.ENOMEM:
            raise Reject(exc, requeue=False)
    # For any other error we retry after 10 seconds.
    except Exception as exc:
        raise self.retry(exc=exc, countdown=10)
I use funcargs in my tests:

def test_name(fooarg1, fooarg2):

All of them have pytest_funcarg__ factories, which return request.cached_setup, so all of them have setup/teardown sections.
Sometimes I have a problem with the fooarg2 teardown, so I raise an exception there. In that case pytest ignores all the other teardowns (fooarg1's teardown, teardown_module, etc.) and just goes to the pytest_sessionfinish section.
Is there any option in pytest to not collect the exceptions and instead execute all remaining teardown functions?
Are you using pytest-2.5.1? pytest-2.5, and in particular issue287, is supposed to have brought support for running all finalizers and re-raising the first failed exception, if any.
In your teardown function you can catch the error and print it.
def teardown():  # this is the teardown function with the error
    try:
        ...  # the original code with the error
    except Exception:
        import traceback
        traceback.print_exc()  # no error should pass silently
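The same pattern outside pytest, to show that the traceback is printed but not re-raised, so the later teardowns in the chain still run (the teardown functions here are hypothetical):

```python
import traceback

log = []

def teardown_fooarg2():
    try:
        raise RuntimeError("teardown failed")
    except Exception:
        traceback.print_exc()  # report the error without stopping the chain
        log.append("fooarg2 reported")

def teardown_fooarg1():
    log.append("fooarg1 ran")

for fn in (teardown_fooarg2, teardown_fooarg1):
    fn()  # every teardown runs even though the first one failed

print(log)
```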
I want to know the best way of checking a condition in a Python function and preventing further execution if the condition is not satisfied. Right now I'm following the scheme below, but it actually prints the whole stack trace. I want it to print only an error message and not execute the rest of the code. Is there any cleaner solution for doing this?
def Mydef(n1, n2):
    if n1 > n2:
        raise ValueError("Arg1 should be less than Arg2")
    # Some Code

Mydef(2, 1)
That is what exceptions were created for. Your scheme of raising an exception is good in general; you just need to add some code to catch it and process it:
try:
    Mydef(2, 1)
except ValueError as e:
    # Do some stuff when the exception is raised; str(e) will contain your message
    print(e)
In this case, execution of Mydef stops when it reaches the raise ValueError line, and control goes to the code block under except.
You can read more about exceptions processing in the documentation.
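Put together, a runnable sketch of the raise-and-catch flow (the body after the check is a stand-in for "Some Code"):

```python
def Mydef(n1, n2):
    if n1 > n2:
        raise ValueError("Arg1 should be less than Arg2")
    return n1 + n2  # stand-in for the rest of the function

try:
    result = Mydef(2, 1)  # n1 > n2, so the rest of the body never runs
except ValueError as e:
    result = str(e)

print(result)
```

Calling Mydef(1, 2) instead would skip the except block entirely and return 3.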
If you don't want to deal with exception processing, you can gracefully stop the function from executing further code with a return statement:
def Mydef(n1, n2):
    if n1 > n2:
        return
def Mydef(n1, n2):
    if n1 > n2:
        print("Arg1 should be less than Arg2")
        return None
    # Some Code

Mydef(2, 1)
Functions stop executing when they reach a return statement or run to the end of the definition. You should read about flow control in general (not specifically to Python).