pytest: execute all teardown modules - python

I use funcargs in my tests:

def test_name(fooarg1, fooarg2):

All of them have pytest_funcarg__ factories which return request.cached_setup, so all of them have setup/teardown sections.
Sometimes I have a problem in the fooarg2 teardown, so I raise an exception there. In this case pytest skips all the other teardowns (fooarg1's teardown, teardown_module, etc.) and just goes to pytest_sessionfinish.
Is there any option in pytest to not stop at the first teardown exception and instead execute all the remaining teardown functions?

Are you using pytest-2.5.1? pytest-2.5, and in particular issue287, is supposed to have brought support for running all finalizers and re-raising the first failed exception, if any.
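For reference, here is a minimal sketch of what that behaviour looks like with the legacy funcarg API from the question (the fooarg names and the RuntimeError are illustrative only); on pytest >= 2.5 both teardowns run, and the fooarg2 failure is re-raised afterwards:

# conftest.py or test module -- a sketch, assuming pytest >= 2.5 and the
# old pytest_funcarg__ / request.cached_setup API used in the question

def pytest_funcarg__fooarg1(request):
    def teardown(val):
        print("fooarg1 teardown still runs")   # executed even if fooarg2's teardown fails
    return request.cached_setup(setup=lambda: "resource1",
                                teardown=teardown,
                                scope="function")

def pytest_funcarg__fooarg2(request):
    def teardown(val):
        raise RuntimeError("fooarg2 teardown failed")   # re-raised after all finalizers ran
    return request.cached_setup(setup=lambda: "resource2",
                                teardown=teardown,
                                scope="function")

def test_name(fooarg1, fooarg2):
    assert fooarg1 and fooarg2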

In your teardown function you can catch the error and print it:

def teardown():  # this is the teardown function with the error
    try:
        ...  # the original code with the error goes here
    except Exception:
        import traceback
        traceback.print_exc()  # no error should pass silently

Related

Selenium Python: execute code even after failure

I am using Python unittest and trying to catch the exceptions. I have tried self.fail, but in that case, once there is an exception it counts as a failure and the rest of the code stops running.
What can I try so that even if one case fails, it still executes the rest of the cases?
I am trying to avoid just printing the exceptions.
Code currently in use:

if 'Anonymous' in elem_welcome.text:
    pass
else:
    self.fail('Test Failed: Logout Failed')
Use a try and except block and name the expected error in the except clause.
E.g. if the error is NameError:

try:
    ...  # cases
except NameError:
    ...  # other cases

You can read more here: Handling Exceptions
You can make every test a function and call each one from the main block. This way every one of them gets executed, and the failed ones report their failure reason back to the main block, as in the sketch below.
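A minimal sketch of that idea (the check function names and the collect-and-report loop here are hypothetical, not from the original question):

import traceback

def check_login():
    ...  # Selenium steps for the login case

def check_logout():
    ...  # Selenium steps for the logout case

if __name__ == '__main__':
    failures = []
    for check in (check_login, check_logout):
        try:
            check()                      # run each case independently
        except Exception:
            failures.append((check.__name__, traceback.format_exc()))
    for name, tb in failures:            # report all failures at the end
        print("FAILED: %s\n%s" % (name, tb))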

How to handle a specific uncaught exception?

I would like to handle one specific exception in my script in a single place, without resorting to a try/except every time*. I was hoping that the code below would do this:
import sys

def handle(exc_type, exc_value, exc_traceback):
    if issubclass(exc_type, ValueError):
        print("ValueError handled here and the script continues")
        return
    # follow default behaviour for the exception
    sys.__excepthook__(exc_type, exc_value, exc_traceback)

sys.excepthook = handle

print("hello")
raise ValueError("wazaa")
print("world")
a = 1/0
The idea was that ValueError would be handled "manually" and the script would continue running (return to the script). For any other error (ZeroDivisionError in the case above), the normal traceback and script crash would ensue.
What happens is
$ python scratch_13.py
hello
ValueError handled here and the script continues
Process finished with exit code 1
The documentation mentions that (emphasis mine)
When an exception is raised and uncaught, the interpreter calls
sys.excepthook with three arguments, the exception class, exception
instance, and a traceback object. In an interactive session this
happens just before control is returned to the prompt; in a Python
program this happens just before the program exits.
which would mean that by the time I am in handle() it is already too late: the script has decided to die anyway and my only possibility is to influence what the traceback will look like.
Is there a way to ignore a specific exception globally in a script?
* this is for a debugging context where the exception would normally be raised and crash the script (in production), but in some specific cases (a dev platform, for instance), this specific exception needs to just be discarded. Otherwise I would have to put a try/except clause everywhere the issue could arise.
One way to do it is to use contextlib.suppress and have a global tuple of suppressed Exceptions:
suppressed = (ValueError,)
And then anywhere the error might occur you just wrap it in with suppress(*suppressed):
print("hello")
with suppress(*suppressed): # gets ignored
raise ValueError("wazaa")
print("world")
a = 1/0 # raise ZeroDivisionError
And then in production you just change suppressed to ():
suppressed = ()

print("hello")
with suppress(*suppressed):
    raise ValueError("wazaa")  # raises the error
print("world")
a = 1/0  # doesn't get executed
I think this is the best you can do. You can't ignore the exception completely globally, but you can make it so you only have to change one place.
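One way to keep that single switch in one place is a tiny settings module; the DEBUG flag, the module split and the messages below are hypothetical, just a sketch of the idea:

# settings.py -- hypothetical module holding the single switch
DEBUG = True
suppressed = (ValueError,) if DEBUG else ()

# elsewhere, every call site imports the same tuple
from contextlib import suppress
# from settings import suppressed   # (when split across files)

print("hello")
with suppress(*suppressed):
    raise ValueError("discarded while DEBUG is True")
print("world")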

Setting an exit code for a custom exception in python

I'm using custom exceptions to distinguish my exceptions from Python's built-in exceptions.
Is there a way to define a custom exit code when I raise the exception?
class MyException(Exception):
    pass

def do_something_bad():
    raise MyException('This is a custom exception')

if __name__ == '__main__':
    try:
        do_something_bad()
    except:
        print('Oops')  # Do some exception handling
        raise
In this code, the main function runs a few functions inside a try block.
After I catch an exception I want to re-raise it to preserve the traceback.
The problem is that raise always exits with code 1.
I want to exit the script with a custom exit code (for my custom exception), and exit with 1 in any other case.
I've looked at this solution but it's not what I'm looking for:
Setting exit code in Python when an exception is raised
This solution forces me to check in every script I use whether the exception is a default or a custom one.
I want my custom exception to be able to tell the raise function what exit code to use.
You can override sys.excepthook to do what you want yourself:

import sys

class ExitCodeException(Exception):
    "base class for all exceptions which shall set the exit code"
    def getExitCode(self):
        "meant to be overridden in a subclass"
        return 3

def handleUncaughtException(exctype, value, trace):
    oldHook(exctype, value, trace)
    if isinstance(value, ExitCodeException):
        sys.exit(value.getExitCode())

sys.excepthook, oldHook = handleUncaughtException, sys.excepthook
This way you can put this code in a special module which all your code just needs to import.
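A hypothetical usage sketch, assuming the hook code above lives in a module called exitcodes.py; the ConfigError subclass and the exit code 4 are illustrative only:

# myscript.py -- illustrative only
from exitcodes import ExitCodeException   # importing the module installs the excepthook

class ConfigError(ExitCodeException):
    def getExitCode(self):
        return 4                           # the exit status this exception should produce

raise ConfigError("missing configuration file")
# $ python myscript.py; echo $?   ->   traceback printed by the old hook, then exit status 4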

Using sys.exit or SystemExit; when to use which?

Some programmers use sys.exit, others use SystemExit.
What is the difference?
When do I need to use SystemExit or sys.exit inside a function?
Example:

ref = osgeo.ogr.Open(reference)
if ref is None:
    raise SystemExit('Unable to open %s' % reference)

or:

ref = osgeo.ogr.Open(reference)
if ref is None:
    print('Unable to open %s' % reference)
    sys.exit(-1)
No practical difference, but there's another difference in your example code - print goes to standard out, but the exception text goes to standard error (which is probably what you want).
sys.exit(s) is just shorthand for raise SystemExit(s), as described in the former's docstring; try help(sys.exit). So, instead of either one of your example programs, you can do
sys.exit('Unable to open %s' % reference)
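A quick way to convince yourself of the equivalence (an illustrative snippet, not from the answer): both forms raise SystemExit, which can be caught, and the argument is available as the exception's code attribute.

import sys

try:
    sys.exit('Unable to open %s' % 'reference.shp')   # same as: raise SystemExit('Unable to open reference.shp')
except SystemExit as e:
    print(e.code)        # -> Unable to open reference.shp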
There are 3 exit functions, in addition to raising SystemExit.
The underlying one is os._exit, which requires 1 int argument, and exits immediately with no cleanup. It's unlikely you'll ever want to touch this one, but it is there.
sys.exit is defined in sysmodule.c and just runs PyErr_SetObject(PyExc_SystemExit, exit_code);, which is effectively the same as directly raising SystemExit. In fine detail, raising SystemExit is probably faster, since sys.exit requires LOAD_ATTR and CALL_FUNCTION opcodes versus a single RAISE_VARARGS. Also, raise SystemExit produces slightly smaller bytecode (4 bytes less) (and 1 byte extra if you use from sys import exit, since sys.exit is expected to return None, so it includes an extra POP_TOP).
The last exit function is defined in site.py and aliased to exit or quit in the REPL. It's actually an instance of the Quitter class (so it can have a custom __repr__), so it is probably the slowest running. Also, it closes sys.stdin prior to raising SystemExit, so it's recommended for use only in the REPL.
As for how SystemExit is handled, it eventually causes the VM to call os._exit, but before that, it does some cleanup. It also runs atexit._run_exitfuncs() which runs any callbacks registered via the atexit module. Calling os._exit directly bypasses the atexit step.
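A small sketch of that cleanup difference (file name and messages are illustrative):

# exit_demo.py -- illustrative only
import atexit, os, sys

atexit.register(lambda: print("atexit callback ran"))

if "--hard" in sys.argv:
    os._exit(1)    # exits immediately: the atexit callback is skipped
else:
    sys.exit(1)    # raises SystemExit: the atexit callback runs during interpreter shutdown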
My personal preference is that at the very least SystemExit is raised (or even better, a more meaningful and well documented custom exception) and then caught as close to the "main" function as possible, which can then have a last chance to deem it a valid exit or not. Libraries/deeply embedded functions that call sys.exit are just plain nasty from a design point of view. (Generally, exiting should happen "as high up" as possible.)
According to documentation sys.exit(s) effectively does raise SystemExit(s), so it's pretty much the same thing.
While the difference has been answered by many answers, Cameron Simpson makes an interesting point in https://mail.python.org/pipermail/python-list/2016-April/857869.html:
TL;DR: It's better to just raise a "normal" exception, and use SystemExit or sys.exit only at the top levels of a script.
I'm on Python 2.7 and Linux. I have some simple code and need a suggestion on whether I could replace sys.exit(1) with raise SystemExit.
==Actual code==

def main():
    try:
        create_logdir()
        create_dataset()
        unittest.main()
    except Exception as e:
        logging.exception(e)
        sys.exit(EXIT_STATUS_ERROR)

if __name__ == '__main__':
    main()
==Changed Code==

def main():
    try:
        create_logdir()
        create_dataset()
        unittest.main()
    except Exception as e:
        logging.exception(e)
        raise SystemExit

if __name__ == '__main__':
    main()
I am against both of these personally. My preferred pattern is like this:

def main(argv):
    try:
        ...
    except Exception as e:
        logging.exception(e)
        return 1

if __name__ == '__main__':
    sys.exit(main(sys.argv))

Notice that main() is back to being a normal function with normal returns.
Also, most of us would avoid the "except Exception" and just let a top level except bubble out: that way you get a stack backtrace for debugging. I agree it prevents logging the exception and makes for uglier console output, but I think it is a win. And if you do want to log the exception there is always this:
try:
    ...
except Exception as e:
    logging.exception(e)
    raise

to recite the exception into the log and still let it bubble out normally.
The problem with the "except Exception" pattern is that it catches and hides every exception, not merely the narrow set of specific exceptions that you understand.
Finally, it is frowned upon to raise a bare Exception class. In Python 3 I believe it is actually forbidden, so it is nonportable anyway. But even in Python 2 it is best to supply an Exception instance, not the class:
raise SystemExit(1)
All the functions in the try block have their exceptions bubble out using raise. For example, here is the definition of create_logdir():
def create_logdir():
    try:
        os.makedirs(LOG_DIR)
    except OSError as e:
        sys.stderr.write("Failed to create log directory...Exiting !!!")
        raise
    print "log file: " + corrupt_log
    return True
def main():
    try:
        create_logdir()
    except Exception as e:
        logging.exception(e)
        raise SystemExit
(a) In case create_logdir() fails we will get the error below; is this fine, or do I need to improve this code?
Failed to create log directory...Exiting !!!ERROR:root:[Errno 17] File exists: '/var/log/dummy'
Traceback (most recent call last):
  File "corrupt_test.py", line 245, in main
    create_logdir()
  File "corrupt_test.py", line 53, in create_logdir
    os.makedirs(LOG_DIR)
  File "/usr/local/lib/python2.7/os.py", line 157, in makedirs
OSError: [Errno 17] File exists: '/var/log/dummy'
I prefer the bubble-out approach, perhaps with a log or warning message as you have done, e.g.:

logging.exception("create_logdir failed: makedirs(%r): %s" % (LOG_DIR, e))
raise

(Also note that that log message records more context: context is very useful when debugging problems.)
For very small scripts sys.stderr.write is ok, but in general any of your functions that turn out to be generally useful might migrate into a library in order to be reused; consider that stderr is not always the place for messages; instead reach for the logging module with error() or warn() or exception() as appropriate. There is more scope for configuring where the output goes that way, without wiring it into your inner functions.
Can I have just raise, instead of SystemExit or sys.exit(1)? This looks wrong to me:

def main():
    try:
        create_logdir()
    except Exception as e:
        logging.exception(e)
        raise
This is what I would do, myself.
Think: has the exception been "handled", meaning has the situation
been dealt with because it was expected? If not, let the exception
bubble out so that the user knows that something not understood by
the program has occurred.
Finally, it is generally bad to raise SystemExit or call sys.exit() from inside
anything other than the outermost main() function. And I resist it
even there; the main function, if written well, may often be called
from somewhere else usefully, and that makes it effectively a library
function (it has been reused). Such a function should not
unilaterally abort the program. How rude! Instead, let the exception
bubble out: perhaps the caller of main() expects it and can handle
it. By aborting and not "raise"ing, you have deprived the caller of
the chance to do something appropriate, even though you yourself
(i.e. "main") do not know enough context to handle the exception.
So I am for "raise" myself. And then only because you want to log the
error. If you didn't want to log the exception you could avoid the
try/except entirely and have simpler code: let the caller worry
about unhandled exceptions!
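To make that "deprived the caller" point concrete, here is a hypothetical sketch of a caller reusing the question's main() (the module name and the retry logic are invented); this is only possible because main() raises instead of exiting:

# batch_runner.py -- hypothetical caller reusing the question's main()
import corrupt_test

for attempt in range(3):
    try:
        corrupt_test.main()            # main() lets failures bubble out
        break                          # success, stop retrying
    except Exception:
        print("attempt %d failed, retrying" % attempt)
else:
    raise SystemExit("all attempts failed")   # only the outermost caller exits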
SystemExit is an exception, which basically means that your program had a behaviour such that you want to stop it and raise an error. sys.exit is the function that you can call to exit from your program, possibly giving a return code to the system.
EDIT: they are indeed the same thing, so the only difference is in the logic behind it in your program. An exception is some kind of "unwanted" behaviour, whereas a call to a function is, from a programmer's point of view, more of a "standard" action.

Handling failures with Fabric

I'm trying to handle failure in Fabric, but the example I saw in the docs was too localized for my taste. I need to execute rollback actions if any of a number of actions fail. I tried, then, to use contexts to handle it, like this:
@contextmanager
def failwrapper():
    with settings(warn_only=True):
        result = yield
        if result.failed:
            rollback()
            abort("********* Failed to execute deploy! *********")
And then
@task
def deploy():
    with failwrapper():
        updateCode()
        migrateDb()
        restartServer()
Unfortunately, when one of these tasks fails, I do not get anything in result.
Is there any way of accomplishing this? Or is there another way of handling such situations?
According to my tests, you can accomplish that with this:

from contextlib import contextmanager

@contextmanager
def failwrapper():
    try:
        yield
    except SystemExit:
        rollback()
        abort("********* Failed to execute deploy! *********")
As you can see I got rid of the warn_only setting as I suppose you don't need it if the rollback can be executed and you're aborting the execution anyway with abort().
Fabric raises SystemExit exception when encountering errors and warn_only setting is not used. We can just catch the exception and do the rollback.
Following on from Henri's answer, this also handles keyboard interrupts (Ctrl-C) and other exceptions:
@contextmanager
def failwrapper():
    try:
        yield
    except:
        rollback()
        raise
