During the "Coverage.py with Ned Batchelder" episode of the Python testing podcast, Brian and Ned briefly discussed that, if you need to run tests with coverage measurement, it is preferable to run the tests from coverage.py with coverage run rather than invoking a test runner with a coverage plugin. Why is that, and what is the difference?
To put some context into this: currently I'm using the nose test runner and executing the tests with the nosetests command-line tool and its --with-coverage option:
$ nosetests --with-coverage --cover-html
Should I do it via coverage run instead?
$ coverage run -m nose
$ coverage report
I guess I am uniquely qualified to answer this question :)
mwchase and mgilson have it right in their comments: using a plugin means you are depending on that plugin's behavior being correct and understandable. In the name of being helpful, plugins will have their own logic that may have been the best idea when they were written, but the test runner and/or coverage.py may have changed in the meantime. The plugins tend not to be as well-maintained as the other components. If you can avoid them, you have one less thing to think about.
True fact: the reason I added support for .coveragerc configuration files in the first place was because I wanted to add features to coverage.py and didn't want to wait for plugin UIs to be updated to support them.
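As a side note, a minimal .coveragerc along those lines might look like the sketch below; the package name mypackage and the omit pattern are placeholders for your own layout, not anything from the podcast:
[run]
branch = True
source = mypackage

[report]
omit =
    */tests/*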
Related
I found out that I can discover and run unit tests under my directory tree by doing this:
python3 -m test
The above works, but the documented method to discover and run all tests finds hundreds more, including a new one that was not found by the previous method:
python3 -m unittest
What exactly is -m test and why can't I find documentation on it after a quick search, except the following page which seems to be about CPython?
https://devguide.python.org/runtests/
The test package is intended to test the Python API itself. According to the documentation:
Note: The test package is meant for internal use by Python only. It is documented for the benefit of the core developers of Python. Any use of this package outside of Python’s standard library is discouraged as code mentioned here can change or be removed without notice between releases of Python.
The link to this documentation appears in the TOC under Development Tools. While it is not entirely surprising that the python3 -m test command discovers and runs tests, it is not really designed to discover and run the tests that you write for your own code.
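For your own code, the documented route is unittest discovery. As a sketch, assuming your tests live under a tests/ directory and follow the test_*.py naming convention:
python3 -m unittest discover -s tests -p "test_*.py"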
I am working on a project that has many "unit tests" with hard dependencies: they need to interact with the database and other APIs. The tests are a valuable and useful resource to our team, but they simply cannot be run independently, without relying on the functionality of other services in the test environment. Personally I would call these "functional tests", but that is just the semantics already established within our team.
The problem is that, now that we are beginning to introduce more pure unit tests into our code, we have a medley of tests that do or do not have external dependencies. The pure unit tests can be run immediately after checking out the code, with no need to install or configure other tools. They can also be run in a continuous integration environment like Jenkins.
So my question is: how can I denote which is which for a cleaner separation? Is there an existing decorator in the unittest library?
You can define which tests should be skipped with the skipIf decorator. In combination with an environment variable, you can skip tests in some environments. An example:
import os
from unittest import TestCase, skipIf

class MyTest(TestCase):
    @skipIf(os.environ.get('RUNON') == 'jenkins', 'Does not run in Jenkins')
    def test_my_code(self):
        ...
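Then, in the environment where those tests should not run, set the variable before starting the suite (RUNON=jenkins is just the value checked in the example above):
RUNON=jenkins python -m unittest discover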
Here's another option. You could separate different test categories by directory. If you wanted to try this strategy, it may look something like:
python/
    (modules)
unit/
    (pure unit test modules)
functional/
    (other unit test modules)
In your testing pipeline, you can call your testing framework to only execute the desired tests. For example, with Python's unittest, you could run your 'pure unit tests' from within the python directory with
python -m unittest discover --start-directory ../unit
and the functional/other unit tests with
python -m unittest discover --start-directory ../functional
An advantage of this setup is that your tests are easily categorized, and you can set up any scaffolding or mocked-up services that you need in each testing environment. Someone with a little more Python experience might be able to help you run the tests regardless of the current directory, too.
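On that last point, unittest discovery also accepts a top-level directory option, so a sketch that runs each category from the project root (assuming the sibling layout shown above) could be:
python -m unittest discover -s unit -t .
python -m unittest discover -s functional -t .
Here -t sets the top-level import directory, so the same commands work without first changing into the python directory.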
We are using the Behave BDD tool for automating APIs. Is there any tool that gives code coverage using our Behave cases?
We tried using the coverage module, but it didn't work with Behave.
You can run any module under coverage to see which code gets exercised.
In your case it should be close to:
coverage run --source='.' -m behave
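After the run finishes, the usual reporting commands should work; for example:
coverage report -m
coverage html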
Tracking code coverage for acceptance/integration/behaviour tests will easily give a high coverage number, but it can lead to the mistaken idea that the code is properly tested.
Those tests exist to see things working together, not to track how much of the code is well 'covered'.
Tying coverage to unit tests makes more sense to me.
What I want
I would like to create a set of benchmarks for my Python project. I would like to see the performance of these benchmarks change as I introduce new code. I would like to do this the same way that I test Python code: by running a utility command like nosetests and getting a nicely formatted readout.
What I like about nosetests
The nosetests tool works by searching through my directory structure for any files named test_foo.py and running every function test_bar() contained within. It runs all of those functions and prints out whether or not they raised an exception.
I'd like something similar that searches for all files bench_foo.py, runs all contained functions bench_bar(), and reports their runtimes.
Questions
Does such a tool exist?
If not, what are some good starting points? Is some of the nose source appropriate for this?
nosetests can run any type of test, so you can decide if they test functionality, input/output validity etc., or performance or profiling (or anything else you'd like). The Python Profiler is a great tool, and it comes with your Python installation.
import unittest
import cProfile

class ProfileTest(unittest.TestCase):
    def test_run_profiler(self):
        # foo, bar and baz stand in for the calls you want to profile
        cProfile.run('foo(bar)')
        cProfile.run('baz(bar)')
You just add a line to the test, or add a test to the test case for all the calls you want to profile, and your main source is not polluted with test code.
If you only want to time execution and not all the profiling information, timeit is another useful tool.
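For example, a minimal timeit sketch; foo and bar are the same placeholder names as in the profiling example, and mymodule stands in for your own module:
import timeit

# time 1000 calls of the placeholder function and print the total elapsed seconds
elapsed = timeit.timeit('foo(bar)', setup='from mymodule import foo, bar', number=1000)
print(elapsed)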
The wheezy documentation has a good example of how to do this with nose. The important part, if you just want the timings, is to use the options -q for a quiet run, -s for not capturing the output (so you will see the output of the report), and -m benchmark to run only the 'timing' tests.
I recommend using py.test for testing instead. To run the example from wheezy with it, change the name of the runTest method to test_bench_run and run only this benchmark with:
py.test -qs -k test_bench benchmark_hello.py
(-q and -s have the same effect as with nose, and -k selects tests by name pattern).
If you put your benchmark tests in a separate file or directory from the normal tests, they are of course easier to select and don't need special names.
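For instance, a rough sketch of such a benchmark test; the timed loop is a placeholder for your own code, the test_bench prefix matches the -k pattern above, and -s lets the print output show up:
import time

def test_bench_hello():
    # crude timing benchmark, collected by py.test and selected with -k test_bench
    start = time.perf_counter()
    for _ in range(10000):
        "hello %s" % "world"   # placeholder for the code being measured
    print("test_bench_hello: %.6f s" % (time.perf_counter() - start))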
I have some pyunit unit tests for a simple command line programme I'm writing. Is it possible for me to generate test coverage numbers? I want to see what lines aren't being covered by my tests.
I regularly use Ned Batchelder's coverage.py tool for exactly this purpose.
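A typical invocation for pyunit-style tests is a sketch like this (assuming your tests are discoverable by unittest; coverage report -m lists the line numbers that were missed):
coverage run -m unittest discover
coverage report -m
coverage html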
If you run your tests with testoob you can get a coverage report with --coverage. You can install it with easy_install. No changes to your tests are necessary:
testoob alltests.py --coverage