Making Nose fail slow tests - python

I want my tests to fail if they take longer than a certain time to run (say 500ms), because it sucks when a load of slightly slow tests mounts up and suddenly you have this big delay every time you run the test suite. Are there any plugins or anything for Nose that already do this?

For cases where timing is important (e.g. realtime requirements):
http://nose.readthedocs.org/en/latest/testing_tools.html
nose.tools.timed(limit)
Test must finish within specified time limit to pass.
Example use:
from nose.tools import timed
import time

@timed(.1)
def test_that_fails():
    time.sleep(.2)

I respectfully suggest that changing the meaning of "broken" is a bad idea.
The meaning of a failed/"red" test should never be anything other than "this functionality is broken". To do anything else risks diluting the value of the tests.
If you implement this and then next week a handful of tests fail, would it be an indicator that
Your tests are running slowly?
The code is broken?
Both of the above at the same time?
I suggest it would be better to gather MI from your build process and monitor it in order to spot slow tests building up, but let red mean "broken functionality" rather than "broken functionality and/or slow test."
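If you go the monitoring route, nose's xunit output (nosetests --with-xunit) already records per-test durations, so the monitoring can be a small script over the XML. A minimal sketch, assuming the default nosetests.xml output file and a 500ms budget:

import xml.etree.ElementTree as ET

THRESHOLD = 0.5  # seconds; the assumed per-test budget

def report_slow_tests(xml_path="nosetests.xml"):
    # Each <testcase> element carries classname, name and time attributes.
    root = ET.parse(xml_path).getroot()
    slow = [(c.get("classname"), c.get("name"), float(c.get("time", 0)))
            for c in root.iter("testcase")
            if float(c.get("time", 0)) > THRESHOLD]
    for classname, name, took in sorted(slow, key=lambda t: -t[2]):
        print("SLOW %.3fs %s.%s" % (took, classname, name))
    return slow

if __name__ == "__main__":
    report_slow_tests()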

Related

Run Unittests in Python with highlighting of Keywords in Terminal (determined during the run)

I'm administering quite a large number of Python unit tests, and as the stack traces are sometimes weirdly long, I had a thought on optimizing the output in order to spot the file I'm interested in faster.
Currently I am running tests with ack:
python3 unittests.py | ack --passthru 'GraphTests.py|Versioning/Testing/Tests.py'
This works as I desired. But as the number of tests keeps growing and I want to keep it dynamic, I want to read the classes from whatever I've set in the testing suite:
suite.addTest(loader.loadTestsFromModule(UC3))
What would be the best way to accomplish this?
My first thought was to split the unit tests into two files, one loader and one caller. The loader adds all the unit tests as before and then executes caller.py through ack, passing along the list of files.
Is that a reasonable approach? Are there better ways? I don't think it is possible to fill in the ack patterns after I have executed the line.
There's also the idea of just piping the results into a file that I read afterwards, but from my experience so far I am not sure whether that will work as planned rather than just causing extra trouble. (I use an experimental unittest framework that adds colouring during the execution of unit tests using format characters like '\033[91m' - see: https://stackoverflow.com/questions/15580303/python-output-complex-line-with-floats-colored-by-value)
Is there an option I don't see?
Edit (1): I also forgot to add: getting into the debugger is a different experience. With ack it doesn't seem to work properly any more.
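One possible sketch of the loader-side idea, assuming you can get hold of the assembled suite object before running it (iter_test_cases and build_ack_pattern are made-up names for illustration):

import inspect
import os
import unittest

def iter_test_cases(suite):
    # Recursively yield every test case inside a (nested) TestSuite.
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            for case in iter_test_cases(item):
                yield case
        else:
            yield item

def build_ack_pattern(suite):
    # Collect the source file of each loaded test class, producing
    # something like 'GraphTests.py|Tests.py' for ack --passthru.
    files = set()
    for case in iter_test_cases(suite):
        source = inspect.getsourcefile(type(case))
        if source:
            files.add(os.path.basename(source))
    return "|".join(sorted(files))

The loader could print this pattern so a wrapper script can feed it to ack, which would amount to the loader/caller split described above.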

How to pass a failed testsuite run (junit.xml parsing)?

We are working in an automated continuous integration/deployment pipeline. We have hundreds of test cases and 4 stages (Red, Orange, Yellow, Green).
The issue I'm facing is that a single test can fail (bug, timings, stuck process, etc.) and fail the entire regression run.
I think we need some sort of weighting to determine what ratio of passed/failed tests should mark a build as 'failed'.
Any ideas? Something you have built into your own pipeline?
Thanks,
-M
A failed build does not always reflect the quality of the product, especially when the failures stem from testing infrastructure issues.
Reduce the risk of unwanted failures that are not related to application bugs (timings, stuck processes) by building a strong and stable framework that can be easily maintained and extended.
When it comes to failures related to application bugs, the type of test that failed matters more than the number of failures. This is defect severity. You can have 3 trivial failures with little impact, and you can also have a single failure that is critical. You need to flag your tests accordingly.
In addition, there is a Jenkins plugin that creates an easy-to-follow test run history, where you can see which tests have failed most often over the last runs.
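If you do adopt the weighting idea from the question, a small gate script over junit.xml can combine a failure ratio with the severity flags described above. A rough sketch, assuming severity is encoded in the test name (the 'critical' marker and the 5% threshold are illustrative assumptions):

import sys
import xml.etree.ElementTree as ET

MAX_FAIL_RATIO = 0.05  # assumed: tolerate up to 5% non-critical failures

def gate(junit_path="junit.xml"):
    root = ET.parse(junit_path).getroot()
    cases = list(root.iter("testcase"))
    failed = [c for c in cases
              if c.find("failure") is not None or c.find("error") is not None]
    # Assumed convention: critical tests carry 'critical' in their name.
    if any("critical" in (c.get("name") or "") for c in failed):
        return 1  # a critical test failed: always fail the build
    if cases and len(failed) / len(cases) > MAX_FAIL_RATIO:
        return 1  # too many failures overall
    return 0

if __name__ == "__main__":
    sys.exit(gate())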

Python property testing with timeout

I have a certain amount of time to test a system. Can I write a Python property test that keeps running property tests until one hour is up? I looked for a solution in Hypothesis but couldn't find one.
I imagine that property-testing libraries have some kind of test-case generator from which I could just pull and execute cases until the timeout is up. This would be an acceptable lazy solution.
Hypothesis does not have a generator you can pull from - the internals are implemented quite differently from the usual lazy-infinite-list construction used by QuickCheck (because... Haskell).
We have an open issue to add a fuzzing mode, which will (hopefully) be the subject of an undergrad group project at Imperial College in early 2019.
Until that's ready, you can add @settings(timeout=60*60, suppress_health_check=HealthCheck.all(), max_examples=10**9) to a test and it should run for an hour (or until it finds a bug).
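Spelled out as a complete test, that suggestion looks roughly like this (a sketch; the property itself is a placeholder, and note that newer Hypothesis releases have since deprecated the timeout setting):

from hypothesis import HealthCheck, given, settings, strategies as st

@settings(timeout=60 * 60,                      # run for up to one hour
          suppress_health_check=HealthCheck.all(),
          max_examples=10 ** 9)                 # effectively unbounded
@given(st.integers())
def test_system_property(x):
    assert x + 1 > x  # placeholder property; replace with your own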

how to handle time-consuming tests

I have tests which have a huge variance in their runtime. Most take much less than a second, some maybe a few seconds, and some could take several minutes.
Can I somehow specify that in my Nosetests?
In the end, I want to be able to run only the subset of my tests that takes, e.g., less than 1 second (based on my specified runtime estimate).
Have a look at this write-up about the attribute plugin for nose, where you can manually tag tests as @attr('slow') and @attr('fast'). You can run nosetests -a '!slow' afterwards to run your tests quickly.
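For reference, the tagging looks like this (a minimal sketch; the test bodies are placeholders):

from time import sleep
from nose.plugins.attrib import attr

@attr('slow')
def test_full_reindex():  # hypothetical slow test
    sleep(5)

def test_quick_lookup():  # untagged tests still run with -a '!slow'
    assert 1 + 1 == 2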
It would be great if you could do it automatically, but I'm afraid you would have to write additional code to do it on the fly. If you are into rapid development, I would run nose with xunit XML output enabled (which records the runtime of each test). Your test module can then dynamically read the XML output from previous runs and set attributes on tests accordingly to filter out the quick ones; see the sketch below. This way you do not have to do it manually, alas with more work (and you have to run all tests at least once).
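A rough sketch of that dynamic approach, assuming a previous nosetests --with-xunit run left its timings in the default nosetests.xml:

import xml.etree.ElementTree as ET

def slow_test_names(xml_path="nosetests.xml", limit=1.0):
    # Collect names of tests that took at least `limit` seconds last run.
    try:
        root = ET.parse(xml_path).getroot()
    except (OSError, ET.ParseError):
        return set()  # no previous run yet: treat everything as fast
    return {case.get("name")
            for case in root.iter("testcase")
            if float(case.get("time", 0)) >= limit}

# Your test module could then mark matching functions at import time,
# mimicking what @attr('slow') does (it simply sets func.slow = True):
#   for fn_name in slow_test_names():
#       fn = globals().get(fn_name)
#       if fn is not None:
#           fn.slow = True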

Timeout on tests with nosetests

I'm setting up my nosetests environment but can't seem to get the timeout to work properly. I would like to have an x second (say 2) timeout on each test discovered by nose.
I tried the following:
nosetests --processes=-1 --process-timeout=2
This works just fine but I noticed the following:
Parallel testing takes longer for a few simple testcases
Nose does not report back when a test has timed out (and thus failed)
Does anyone know how I can get such a timeout to work? I would prefer it to work without parallel testing but this would not be an issue as long as I get the feedback that a test has timed out.
I do not know if this will make your life easier, but there is similar functionality in nose.tools that will fail on timeout, and you do not need parallel testing for it:
from time import sleep
from nose.tools import timed

@timed(2)
def test_a():
    sleep(3)
You can probably auto-decorate all your tests in a module using a script/plugin (see the sketch below), if manually adding an attribute is an issue, but I personally prefer clarity over magic.
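For completeness, such auto-decoration could look roughly like this (a sketch for function-based tests only; apply_timeout_to_module is a made-up helper name):

import inspect
import sys
from nose.tools import timed

def apply_timeout_to_module(module_name, limit=2):
    # Wrap every test_* function so it fails after `limit` seconds.
    module = sys.modules[module_name]
    for name in dir(module):
        func = getattr(module, name)
        if name.startswith("test") and inspect.isfunction(func):
            setattr(module, name, timed(limit)(func))

# Call it at the bottom of your test module:
#   apply_timeout_to_module(__name__, limit=2)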
By looking through the Lib/site-packages/nose/plugins/multiprocess.py source, it looks like the process-timeout option you are using is somewhat specific to managing "hanging" subprocesses that may be preventing a test from completing.
