How do I make pytest crash on a warning?

This is an expansion of How do I find where a "Sorting because non-concatenation" warning is coming from?.
I'm still getting the same warning in my pytest run. I've looked at several questions here, and done:
import warnings
warnings.filterwarnings('error')
which is suggested in How do I catch a numpy warning like it's an exception (not just for testing)?
However, when I run pytest, it still shows me the warning, but nothing actually errors out...

Try passing the -W flag when you run pytest, like this:
pytest -W error::RuntimeWarning
Specify the kind of warning you want to turn into an error, e.g. DeprecationWarning, FutureWarning, UserWarning.

Wanted to share another solution in hopes that it will help others as I spent way too long trying to solve this.
I specifically wanted only a single test to fail on a warning, not all of them. In my case an exception was being raised within a thread I wanted to test, and I discovered that the pytest.mark.filterwarnings decorator can be used for this purpose.
The traceback:
raise SerialException(
serial.serialutil.SerialException: device reports readiness to read but returned no data (device disconnected or multiple access on port?)
warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg))
-- Docs: https://docs.pytest.org/en/stable/warnings.html
The decorator to catch it:
@pytest.mark.filterwarnings("error::pytest.PytestUnhandledThreadExceptionWarning")
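For context, here is roughly how that marker is applied to a single test; the test body below is a hypothetical placeholder for the threaded code being exercised:

import threading

import pytest

@pytest.mark.filterwarnings("error::pytest.PytestUnhandledThreadExceptionWarning")
def test_serial_reader_thread():
    # Hypothetical worker standing in for the code that talks to the serial device.
    # If it raises, pytest normally only emits PytestUnhandledThreadExceptionWarning;
    # the marker above turns that warning into an error, failing just this test.
    worker = threading.Thread(target=lambda: None)
    worker.start()
    worker.join()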

Related

Logging a just-raised exception

Is it idiomatic/Pythonic to do it like this, or is there a better way? I want all the errors to end up in the log in case I don't have access to the console output. I also want to abort this code path if the problem arises.
try:
    with open(self._file_path, "wb") as out_f:
        out_f.write(...)
        ...
except OSError as e:
    log("Saving %s failed: %s" % (self._file_path, str(e)))
    raise
EDIT: this question is about handling exceptions in the correct place/with the correct idiom. It is not about a logging class.
A proven, working scheme is to have a generic except clause at the top level of your application code to make sure any unhandled error will be logged (and re-raised, of course); it also gives you an opportunity to try to do some cleanup before crashing.
Once you have this, adding specific "log and re-raise" exception handlers in your code makes sense if and when you want to capture more contextual information in your log message, as in your snippet. This means the exception might end up logged twice, but that is hardly an issue.
If you really want to be Pythonic (or if you value your error logs), use the stdlib's logging module and its logger.exception() method, which will automagically add the full traceback to the log.
Some (other) benefits of the logging module: it decouples the logging configuration (which should be handled by the app itself, and can be quite fine-grained) from the logging calls (which most often happen at library-code level); it plays well with well-written libs (which already use logging, so you just have to configure your loggers to get info from third-party libs - and this can really save your ass); and it lets you use different logging mechanisms (to stderr, to file, to syslog, via email alerts, whatever - you're not restricted to a single handler) according to the log source, severity and deployment environment.
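A minimal sketch of that top-level scheme, assuming a main() entry point (the function and logger names are illustrative):

import logging

logger = logging.getLogger(__name__)

def main():
    ...  # application code

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)  # configuration belongs to the app itself
    try:
        main()
    except Exception:
        # logger.exception() logs the message plus the full traceback
        logger.exception("Unhandled error, aborting")
        raise  # re-raise so the process still exits with a failure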
Update:
What would you say about re-raising the same exception (as in the example) versus re-raising a custom exception (MyLibException) instead of the original one?
This is indeed a common pattern, but beware of overdoing it - you only want to do this for exceptions that are actually expected and whose cause you really know. Some exception classes can have different causes - cf. OSError, IOError and RuntimeError - so never assume anything about what really caused the exception: either check it with a decently robust condition (for example the .errno field for IOError) or let the exception propagate. I once wasted a couple of hours trying to understand why some lib complained about a malformed input file when the real reason was a permission issue (which I only found out by tracing the library code...).
Another possible issue with this pattern is that (in Python 2 at least) you will lose the original exception and traceback, so it is better to log them appropriately before raising your own exception. Python 3 has a mechanism (exception chaining with raise ... from ...) that handles this situation in a cleaner way and lets you preserve the original exception info.
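For reference, a sketch of that Python 3 chaining mechanism, where MyLibException and load_config are hypothetical names:

import logging

logger = logging.getLogger(__name__)

class MyLibException(Exception):
    """Hypothetical library-specific exception."""

def load_config(path):
    try:
        with open(path, "rb") as f:
            return f.read()
    except OSError as exc:
        logger.exception("Could not read %s", path)
        # "raise ... from ..." keeps the original exception and traceback
        # attached as __cause__, so they are preserved when re-raising.
        raise MyLibException("failed to load config") from exc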

Calling pytest from inside Python code

I am writing a Python script for collecting data from running tests under different conditions. At the moment, I am interested in adding support for Py.Test.
The Py.Test documentation clearly states that running pytest inside Python code is supported:
You can invoke pytest from Python code directly... acts as if you would call “pytest” from the command line...
However, the documentation does not describe in detail the return value of calling pytest.main() as prescribed. The documentation only seems to indicate how to read the exit code of the test run.
What are the limits of the data resolution available through this interface? Does this method simply return a string indicating the results of the tests? Is there support for friendlier data structures (e.g., the outcome of each test case assigned to a key-value pair)?
Update: Examining the return value in the REPL reveals that calling pytest.main yields an integer return type indicating the system exit code and directs a side effect (a stream of text detailing test results) to standard out. Given that this is the case, does Py.Test provide an alternative interface for accessing the results of tests run from within Python code through some native data structure (e.g., a dictionary)? I would like to avoid catching and parsing the stdout result because that approach seems error-prone.
I don't think so; the official documentation tells us that pytest.main
returns an OS exit code, as described in the example
here.
You can use the pytest flags if you want, even the traceback (--tb) option, to see if any of those help you.
On your other point, about parsing the stdout result because that approach seems error-prone:
it really depends on what you are doing. Python has a lot of packages to do it, subprocess for example.
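One way to get structured results without parsing stdout is to pass a plugin object to pytest.main() and record the reports it receives through the pytest_runtest_logreport hook; a sketch (the collector class, its attribute names and the tests/ path are my own):

import pytest

class ResultCollector:
    """Collects the outcome of each test via the pytest_runtest_logreport hook."""

    def __init__(self):
        self.results = {}

    def pytest_runtest_logreport(self, report):
        if report.when == "call":  # only the main test phase; skips surface in setup
            self.results[report.nodeid] = report.outcome  # "passed" or "failed"

collector = ResultCollector()
exit_code = pytest.main(["tests/"], plugins=[collector])
print(exit_code)          # integer exit status, as the update above observed
print(collector.results)  # e.g. {"tests/test_x.py::test_y": "passed", ...}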

How to change error and failure detection in pytest?

In pytest, I want to report all uncaught AssertionError exceptions as Failures and all other uncaught exceptions as Errors (instead of the default behavior of reporting all uncaught exceptions in setup methods as Errors, while all uncaught exceptions in test cases and the UUT are Failures). I thought it could be done with pytest hooks. However, "passed", "skipped", and "failed" seem to be the only valid outcome values in the TestReport object.
So,
Is it possible to add "error" as a valid outcome and let the rest of pytest do the appropriate reporting, i.e., display E/ERROR instead of F/FAILURE on console output?
If so, what would be the ideal part of the source to do this?
If we cannot add "error" as a valid outcome, then what would be the best way to inject this behavior?
[Self answer]
The pytest-finer-verdicts plugin achieves this behavior :)

Timeout on tests with nosetests

I'm setting up my nosetests environment but can't seem to get the timeout to work properly. I would like to have an x second (say 2) timeout on each test discovered by nose.
I tried the following:
nosetests --processes=-1 --process-timeout=2
This works just fine but I noticed the following:
Parallel testing takes longer for a few simple test cases
Nose does not report back when a test has timed out (and thus failed)
Does anyone know how I can get such a timeout to work? I would prefer it to work without parallel testing but this would not be an issue as long as I get the feedback that a test has timed out.
I do not know if this will make your life easier, but there is similar functionality in nose.tools that will fail on a timeout, and you do not have to use parallel testing for it:
from time import sleep
from nose.tools import timed

@timed(2)
def test_a():
    sleep(3)
You can probably auto-decorate all your tests in a module using a script/plugin (see the sketch below) if manually adding the decorator to each test is an issue, but I personally prefer clarity over magic.
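For completeness, here is what that "magic" route might look like: a sketch that wraps every test_* function in the current module with timed(2) at import time, so nose collects the already-decorated versions. The approach is only an illustration; the module-introspection calls are standard library.

import sys
from time import sleep
from nose.tools import timed

def test_a():
    sleep(3)

def test_b():
    sleep(1)

# Auto-decoration: replace every test_* function in this module with a
# timed(2)-wrapped version before nose collects them.
_this_module = sys.modules[__name__]
for _name in dir(_this_module):
    _obj = getattr(_this_module, _name)
    if _name.startswith("test_") and callable(_obj):
        setattr(_this_module, _name, timed(2)(_obj))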
Looking through the Lib/site-packages/nose/plugins/multiprocess.py source, it looks like the process-timeout option you are using is somewhat specific to managing "hanging" subprocesses that may be preventing the test run from completing.

Keep simpleserver active even on syntax errors

Is there a way to configure the simple-server that Flask uses to not exit on every single syntax error?
from flask import Flask

app = Flask(__name__)
app.run(host='0.0.0.0', debug=True, use_debugger=True, passthrough_errors=False)
Currently I'm using this setup for the simple-server.
Setting passthrough_errors to False means most errors actually keep the process alive so that I can use the interactive debugger; syntax errors still exit the program, though. I've tried different configuration values but I have not found anything that works. Thanks!
I just posted a Flask-Failsafe extension to solve this exact issue.
I hit this all the time and ran across your post earlier while looking for a solution. After a bit of experimenting I hacked up a decorator you can use to wrap your initialization code, so that if it fails the reloader will keep working. Check it out and let me know what you think.
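As a sketch of how a factory-wrapping decorator like that is typically used (the import path, decorator name and myapp module below are assumptions based on the extension's description; check the project's README for the exact API):

from flask_failsafe import failsafe  # assumed import; verify against the extension's docs

@failsafe
def create_app():
    # Imports happen inside the factory so that a syntax error in myapp
    # is caught by the decorator instead of killing the reloader process.
    from myapp import app  # hypothetical application module
    return app

if __name__ == "__main__":
    create_app().run(host='0.0.0.0', debug=True)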
According to the Python documentation there are two types of errors:
Syntax Errors
Exceptions
Syntax errors are produced at parse time (at that point your code is not actually executing, so you have no way to catch them; parse time is not run time, when your code actually executes).
The only way you can catch a SyntaxError is when it happens inside a piece of code given as an argument to the exec() function (which executes a string of Python code):
>>> try:
...     exec('x===6')
... except SyntaxError:
...     print('Hello!')
...
Hello!
But you must remember to use exec() only when you really know what you are doing. It's not recommended to use exec() at all, especially when it depends on user input.
