How to change error and failure detection in pytest? - python

In pytest, I want to report all uncaught AssertionError exceptions as Failures and all other uncaught exceptions as Errors (instead of the default behavior, where all uncaught exceptions in setup methods are reported as Errors, while all uncaught exceptions in test cases and the UUT are reported as Failures). I thought this could be done with pytest hooks. However, "passed", "skipped", and "failed" seem to be the only valid outcome values in the TestReport object.
So:
1. Is it possible to add "error" as a valid outcome and let the rest of pytest do the appropriate reporting, i.e., display E/ERROR instead of F/FAILURE in the console output?
2. If so, what would be the ideal part of the source to do this?
3. If "error" cannot be added as a valid outcome, what would be the best way to inject this behavior?

[Self answer]
The pytest-finer-verdicts plugin achieves this behavior :)

Related

Python: A better way to run a function when an error occurs in the program?

I created a big program that does a lot of different stuff. In this program I added some error management, but I would like to add management for critical errors, which should start the critical_error_function().
So basically, I've used:
try:
    ...  # some fabulous code
except:
    critical_error_function(error_type)
But I am here to ask if there is a better way to do this...
In Python, exceptions are the intended way of error handling. Assuming you currently wrap your whole program in one try-except block, a better way would be to:
- only wrap the lines that can actually raise exceptions in try-except, instead of your complete program
- catch a specific exception such as ValueError, or even your own custom exception, instead of using a bare except clause
- handle them appropriately. Handling could mean skipping the value, logging the error, or calling your critical_error_function.
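A minimal sketch of that advice (the parse_value function and the int conversion are illustrative, not from the original program):

```python
import logging

def parse_value(raw):
    # Wrap only the statement that can raise, not the whole program.
    try:
        return int(raw)
    except ValueError:
        # Catch a specific exception and handle it: here, log and skip.
        logging.error("Could not parse %r, skipping", raw)
        return None

print(parse_value("42"))    # 42
print(parse_value("oops"))  # None, after logging the error
```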

How do I make pytest crash on a warning?

this is an expansion of How do I find where a "Sorting because non-concatenation" warning is coming from?.
I'm still getting the same warning, in my pytest. I've looked at several questions here, and done:
import warnings
warnings.filterwarnings('error')
which is suggested in How do I catch a numpy warning like it's an exception (not just for testing)?
However, when I run pytest, it still shows me the warning, but nothing actually errors...
Try passing the -W flag when you run pytest, like this:
pytest -W error::RuntimeWarning
Specify the kind of warning you want to turn into an error, e.g. DeprecationWarning, FutureWarning, UserWarning.
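If you want the filter applied on every run without passing -W each time, the same filter strings can go in pytest's configuration file (a standard pytest option; the RuntimeWarning choice here just mirrors the command above):

```ini
# pytest.ini
[pytest]
filterwarnings =
    error::RuntimeWarning
```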
Wanted to share another solution in hopes that it will help others as I spent way too long trying to solve this.
I specifically only wanted a single test to fail on a warning, not all of them. In my case, an exception was being raised within a thread I wanted to test, and I discovered that the pytest.mark.filterwarnings decorator can be used for this purpose.
The traceback:
raise SerialException(
serial.serialutil.SerialException: device reports readiness to read but returned no data (device disconnected or multiple access on port?)
warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg))
-- Docs: https://docs.pytest.org/en/stable/warnings.html
The decorator to catch it:
@pytest.mark.filterwarnings("error::pytest.PytestUnhandledThreadExceptionWarning")
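For example (the thread body and test name are hypothetical; the mark turns pytest's unhandled-thread-exception warning into a failure for this one test):

```python
import threading
import pytest

@pytest.mark.filterwarnings("error::pytest.PytestUnhandledThreadExceptionWarning")
def test_worker_thread_must_not_crash():
    # If the worker thread raised, pytest would normally only warn;
    # with the mark above, that warning becomes an error for this test.
    worker = threading.Thread(target=lambda: None)
    worker.start()
    worker.join()
```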

Logging a just-raised exception

Is it idiomatic/pythonic to do it like this, or is there a better way? I want all the errors to end up in the log in case I don't have access to the console output. Also, I want to abort this code path if the problem arises.
try:
    with open(self._file_path, "wb") as out_f:
        out_f.write(...)
    ...
except OSError as e:
    log("Saving %s failed: %s" % (self._file_path, str(e)))
    raise
EDIT: this question is about handling exceptions in a correct place/with correct idiom. It is not about logging class.
A proven, working scheme is to have a generic except clause at the top level of your application code to make sure any unhandled error will be logged (and re-raised, of course); it also gives you an opportunity to try some cleanup before crashing.
Once you have this, adding specific "log and re-raise" exception handlers in your code makes sense if and when you want to capture more contextual information in your log message, as in your snippet. This means the exception might end up logged twice, but that is hardly an issue.
If you really want to be pythonic (or if you value your error logs), use the stdlib's logging module and its logger.exception() method, which will automagically add the full traceback to the log.
Some other benefits of the logging module: the ability to decouple the logging configuration (which should be handled by the app itself and can be quite fine-grained) from the logging calls (which most often happen at library-code level); compatibility with well-written libs (which already use logging, so you just have to configure your loggers to get info from third-party libs, and this can really save your ass); and the ability to use different logging mechanisms (stderr, file, syslog, email alerts, whatever, and you're not restricted to a single handler) according to the log source, severity, and deployment environment.
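A sketch of the snippet rewritten with logger.exception (the save function and its names are illustrative):

```python
import logging

logger = logging.getLogger(__name__)

def save(path, data):
    try:
        with open(path, "wb") as out_f:
            out_f.write(data)
    except OSError:
        # Logs at ERROR level and automatically appends the traceback,
        # then re-raises so callers can still react to the failure.
        logger.exception("Saving %s failed", path)
        raise
```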
Update:
What would you say about re-raising the same exception (as in example) or re-raising custom exception (MyLibException) instead of original one?
This is a common pattern indeed, but beware of overdoing it: you only want to do this for exceptions that are actually expected and where you really know the cause. Some exception classes can have different causes (cf. OSError, IOError and RuntimeError), so never assume anything about what really caused the exception; either check it with a decently robust condition (for example the .errno field for IOError), or let the exception propagate. I once wasted a couple of hours trying to understand why some lib complained about a malformed input file when the real reason was a permission issue (which I found out by tracing the library code...).
Another possible issue with this pattern is that (in Python 2 at least) you will lose the original exception and traceback, so it's better to log them appropriately before raising your own exception. Python 3 handles this situation in a cleaner way through exception chaining, which preserves the original exception's information.
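In Python 3 that mechanism is raise ... from, which keeps the original exception attached as __cause__ (MyLibException and load_config are hypothetical names for illustration):

```python
class MyLibException(Exception):
    pass

def load_config(path):
    try:
        with open(path, "rb") as f:
            return f.read()
    except FileNotFoundError as exc:
        # The original exception survives as the new one's __cause__,
        # and both tracebacks are shown if this propagates uncaught.
        raise MyLibException("could not load config %s" % path) from exc
```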

What's the purpose of raising an error?

What's the point of using raise if it exits the program?
Wouldn't it be just as effective to allow the crash to happen?
If I leave out the try-except block, the function crashes when I divide by zero and displays the reason. Or is there some other use that I don't know about?
def div(x, y):
    try:
        return x / y
    except ZeroDivisionError as problem:
        raise problem
In your case the effect would be the same. But you may want to perform some additional logic in case of an error (cleanup etc.) and perhaps raise a different (perhaps custom) error instead of the original low-level one, for example with the message "Incorrect data, please check your input". And this can be done by catching the error and raising a different one.
There is no point (in this case) in using raise. Normally, you'd have some code in there to do "something else" - that could include outputting some more debug information, writing some log data out, retrying the operation with a different set of parameters, etc. etc. etc.
I'm not sure there's much value in your case, where when an exception occurs it just re-raises it - it seems like someone (perhaps) intended to write some sort of handling code there, but just never got around to it.
Some great examples of the use cases for exception handling are in the Python Exception Handling Wiki --> http://wiki.python.org/moin/HandlingExceptions
The reason to re-raise an exception is to allow whatever code is calling you the opportunity to handle it after you have done something to handle it yourself. For example, you have closed a file that you were using (because cleanliness is a virtue) but your code cannot continue.
If you are not going to do anything to handle the exception, then no, there is no reason to write an exception handler for it!
The correct way to re-raise an exception is to simply use raise without any arguments. This way, whoever catches the exception (or the user of the script, if nobody catches it) gets a correct stack trace that tells where the exception was originally raised.
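Applied to the div example above, a bare raise after the extra work looks like this (the print stands in for whatever cleanup or logging you actually need):

```python
def div(x, y):
    try:
        return x / y
    except ZeroDivisionError:
        # Do whatever extra work justified the handler...
        print("dividing %r by %r failed, cleaning up" % (x, y))
        raise  # bare raise: the original traceback is preserved
```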

Stop Piston's error catching

I'm using Piston with Django. Anytime there's an error in my handler code, I get a simplified, text-only description of the error in my HTTP response, which gives me much less information than Django does when it's reporting errors. How can I stop Piston catching errors in this way?
In your settings.py file, add PISTON_DISPLAY_ERRORS = False. This will cause exceptions to be raised, allowing them to be shown as expected in the Django debug error page when you are using DEBUG = True.
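In settings terms (standard Django DEBUG plus the Piston flag named in the answer):

```python
# settings.py (excerpt)
DEBUG = True                   # show Django's full debug error page
PISTON_DISPLAY_ERRORS = False  # let handler exceptions propagate to Django
```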
There are a few cases where the exception won't propagate properly. I've seen it happen when Piston says that the function definition doesn't match, but I haven't looked to see why...
Maybe you could try to override Resource.error_handler, and instead of using the default implementation:
https://bitbucket.org/jespern/django-piston/src/c4b2d21db51a/piston/resource.py#cl-248
just re-raise the original exception.
