I'm running a set of tests with pytest and also want to measure their coverage with the coverage package.
I have some decorators in my code, dividing the tests into several suites, using something like:
@pytest.mark.firstgroup
def test_myfirsttest():
    assert True is True

@pytest.mark.secondgroup
def test_mysecondtest():
    assert False is False
I need to find something like:
coverage run --suite=firstgroup -m pytest
So that, in this case, only the first test (the one in the requested suite) will be used to measure coverage.
The suites work correctly when I use pytest directly, but I don't know how to use them with coverage.
Is there any way to configure coverage to achieve this?
You can pass whatever arguments you want to pytest. If this line works:
pytest --something --also=another 1 2 foo bar
then you can run it under coverage like this:
coverage run -m pytest --something --also=another 1 2 foo bar
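Applied to the question: the suites are pytest marks, so the selection flag belongs to pytest, not to coverage. To measure coverage for the firstgroup suite only:
coverage run -m pytest -m firstgroup
The two -m flags mean different things here: the first (for coverage run) means "run a module", while the second (for pytest) selects tests by marker.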
Related
I wish to configure pytest so that it excludes some tests by default, but makes it easy to include them again with some command-line option. I only found -k; I have the impression that it allows complex specifications, but I am not sure how to apply it to my specific need...
The exclusion should be part of the source or a config file (it's permanent: think of very long-running tests that should only be included as a conscious choice, and certainly never in a build pipeline...).
Bonus question: if that is not possible, how would I use -k to exclude specific tests? Again, I saw hints in the documentation about a way to use not as a keyword, but that doesn't seem to work for me. I.e., -k "not longrunning" gives an error about not being able to find a file "notrunning", but does not exclude anything...
Goal: by default, skip tests marked with @pytest.mark.integration
conftest.py
import pytest

# executed right after test items are collected, but before the test run
def pytest_collection_modifyitems(config, items):
    if not config.getoption('-m'):
        skip_me = pytest.mark.skip(reason="use `-m integration` to run this test")
        for item in items:
            if "integration" in item.keywords:
                item.add_marker(skip_me)
pytest.ini
[pytest]
markers =
    integration: integration test that requires environment
Now all tests marked with @pytest.mark.integration are skipped unless you use
pytest -m integration
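For illustration, here is a hypothetical test module that works with the conftest.py hook above (file and test names are made up):
# test_service.py (hypothetical file name)
import pytest

@pytest.mark.integration
def test_db_roundtrip():
    # skipped on a plain `pytest` run; executed with `pytest -m integration`
    assert True

def test_pure_unit():
    # always runs
    assert 1 + 1 == 2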
You can use pytest marks to tag some tests and then use the -m arg to skip or include them.
For example, consider the following tests:
import pytest

def test_a():
    assert True

@pytest.mark.never_run
def test_b():
    assert True

def test_c():
    assert True

@pytest.mark.never_run
def test_d():
    assert True
You can run pytest like this to run all the tests:
pytest
To skip the marked tests, run pytest like this:
pytest -m "not never_run"
If you want to run only the marked tests:
pytest -m "never_run"
What I have done in the past is create custom markers for these tests, so that I can exclude them using the -m command-line flag when running the tests. For example, place the following content in your pytest.ini file:
[pytest]
markers =
    longrunning: marks a test as longrunning
Then we just have to mark our long-running tests with this marker:
import time
import pytest

@pytest.mark.longrunning
def test_my_long_test():
    time.sleep(100000)
Then when we run the tests we would do pytest -m "not longrunning" tests/ to exclude them, and pytest tests/ to run everything as intended.
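If the exclusion should be the default, as the question asks, one option is to put the filter into addopts in pytest.ini. This is a sketch relying on the documented behaviour that command-line arguments are added after addopts, so a later -m replaces the configured one:
[pytest]
addopts = -m "not longrunning"
markers =
    longrunning: marks a test as longrunning
With this, a plain pytest skips the long-running tests, pytest -m longrunning runs only them, and pytest -m "" should run everything (the empty expression overrides the configured filter).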
I want to run pytest multiple times from within Python without restarting the script/interpreter.
The problem is that pytest is caching test contents/results. That is, if you modify a test file between two runs, pytest doesn't pick up the changes, showing the same results as before. (Unless you restart the script/exit the interpreter, which you'd naturally do when using pytest from the command line.)
Reproduction
test_foo.py:
def test_me():
    assert False
In the Python shell:
>>> import pytest
>>> pytest.main(['test_foo.py'])
(...)
def test_me():
> assert False
E assert False
test_foo.py:2: AssertionError
Good so far. Now don't exit the interpreter but change the test to assert True and re-run pytest.
>>> pytest.main(['test_foo.py'])
(...)
def test_me():
> assert True
E assert False
test_foo.py:2: AssertionError
Expected result
Pytest should have picked up the change in the file and passed the rewritten test.
Solutions that don't work
Using importlib.reload(pytest) to reload pytest between the runs.
Running pytest with cleared caches: pytest.main(['--cache-clear', 'test_foo.py'])
(Running pytest as a subprocess isn't an option because I want to have a reference to the pytest module from within my application.)
Any hints on how to make pytest pick up these changes, or how to properly reload the module?
For anyone landing here: another answer that may help is to install the pytest-xdist plugin and call pytest --looponfail.
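The root cause here is Python's module cache rather than pytest's result cache: test_foo stays in sys.modules, so pytest.main() re-runs the already-imported code. A minimal sketch of a workaround along those lines (module and file names taken from the question):
import sys
import pytest

def run_fresh():
    # evict the previously imported test module so the next run
    # re-imports (and re-rewrites) the current file contents
    sys.modules.pop('test_foo', None)
    return pytest.main(['test_foo.py'])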
I have a class with 3 methods (in Python):
class MyClass:
    def A(self):
        ...

    def B(self):
        ...

    def C(self):
        ...
I have written a unit test for only one method, A. This unit test covers every line of method A; that is, there are no if/else or other branching constructs.
What would the code coverage percentage be?
And if I write another unit test for the 2nd method of the class, covering all of its lines, what would the code coverage percentage be then?
I got the answer myself :-)
Code coverage depends entirely on which module or which files you run the coverage for. Let's say we run coverage for one file, the way I framed my question: every line in every method is accounted for in the coverage.
As per my question, I am covering only one method, containing 20 lines. The other 2 methods have another 80 lines (100 lines in total across the 3 methods). So if I run coverage for my file, I get code coverage of only 20%.
In Python you can run it (e.g. in the PyCharm terminal) like: coverage run -m py.test my_file.py
To get the report, run the command: coverage report -m (here -m means "show missing lines", not "module")
To run it for the entire module (all packages), use coverage run -m py.test and then coverage report -m
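As a concrete sketch of the arithmetic (file and test names hypothetical):
# my_file.py
class MyClass:
    def A(self):
        return 1

    def B(self):
        return 2

    def C(self):
        return 3

# test_my_file.py
from my_file import MyClass

def test_a():
    assert MyClass().A() == 1
Running coverage run -m py.test test_my_file.py followed by coverage report -m lists the bodies of B and C as missing. One caveat: the class and def lines of B and C execute at import time and count as covered, so the reported percentage usually lands somewhat above the naive "lines in A divided by total lines" estimate.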
I have a few very slow tests and many short unit tests. I would like to be able to run only the short unit tests with a plain nosetests command and, if I decide it's time to run the slow tests, to be able to call them explicitly.
What I'd like to be able to do:
run unittests but not the slow tests
$ nosetests
No special command is used - anyone who enters nosetests just out of curiosity would be satisfied in a few seconds.
explicitly request the slow tests:
$ nosetests --somehow-magically-invoke-slow-tests
There is no ambiguity - I want the slow tests (or unittests and the slow test - it does not matter)
What have I tried:
I tried using nose.plugins.attrib:
from nose.plugins.attrib import attr
import time

@attr('slow')
def test_slow_one():
    time.sleep(5)

def test_unittest():
    assert True
but in fact it does almost the opposite of what I'm trying to accomplish: I have to specify extra command-line parameters to not run the slow tests. The commands are:
Run unittests but not the slow tests
$ nosetests -v -a '!slow'
test_a.test_unittest ... ok
----------------------------------------------------------------------
Ran 1 test in 0.001s
OK
explicitly request the slow tests:
$ nosetests -v -a 'slow'
test_a.test_slow_one ... ok
----------------------------------------------------------------------
Ran 1 test in 5.005s
OK
And to make things worse, when someone runs just
$ nosetests -v
test_a.test_slow_one ... ok
test_a.test_unittest ... ok
----------------------------------------------------------------------
Ran 2 tests in 5.007s
OK
all tests, including the slow ones, will be run.
The question:
Is there a way to disable some tests so they won't get called with plain nosetests command, but can be run with some additional commandline parameters?
Current "solution":
I just moved all the slow tests into separate files and named the files check_*.py instead of test_*.py, so nosetests won't pick them up. When I want to run the slow tests, I just specify the full paths to the check_*.py files, something like:
$ nosetests test/check_foo.py test/check_bar.py [...]
which is not very elegant.
You can easily use unittest.skipUnless() to that effect.
Skipping single tests
Simply decorate the methods of your test case that you want to conditionally skip with
@unittest.skipUnless(condition, reason)
For example, you could check for an environment variable SLOW_TESTS that you simply don't set in your automated CI environment, but set if and when you want to run your slow tests locally:
import os
import time
import unittest

SLOW_TESTS = os.environ.get('SLOW_TESTS', False)

class TestFoo(unittest.TestCase):
    def test_upper(self):
        self.assertEqual('foo'.upper(), 'FOO')

    @unittest.skipUnless(SLOW_TESTS, "slow test")
    def test_something_slow(self):
        time.sleep(5)
        self.assertTrue(True)
Output for a regular run:
$ nosetests -v
test_something_slow (test_foo.TestFoo) ... SKIP: slow test
test_upper (test_foo.TestFoo) ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.002s
OK (SKIP=1)
Output when setting the environment variable:
$ SLOW_TESTS=1 nosetests -v
test_something_slow (test_foo.TestFoo) ... ok
test_upper (test_foo.TestFoo) ... ok
----------------------------------------------------------------------
Ran 2 tests in 5.003s
OK
(Note that the Nose test runner still says Ran 2 tests even though it skipped one. The skipped tests are indicated at the very end with (SKIP=n), and in the test results with SKIP, or with S in non-verbose mode.)
Of course, you could invert the behavior by using skipIf() with an environment variable like FAST_TESTS that you set in your CI setup.
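A sketch of that inversion (FAST_TESTS is an assumed variable name, mirroring the sentence above):
import os
import time
import unittest

FAST_TESTS = os.environ.get('FAST_TESTS', False)

class TestFoo(unittest.TestCase):
    @unittest.skipIf(FAST_TESTS, "slow test, skipped in fast/CI runs")
    def test_something_slow(self):
        time.sleep(5)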
Skipping an entire TestCase
If you want to skip all tests in a TestCase that you know to be slow or to have some heavy setup, it may be more convenient to call TestCase.skipTest() explicitly (you can also do that from a single test if you need more fine-grained control):
class TestFoo(unittest.TestCase):
    def setUp(self):
        if not SLOW_TESTS:
            self.skipTest("slow test")
        # some expensive setup
See Skipping tests and expected failures for more details on skipping tests.
nose discovers tests beginning with test_, as well as subclasses of unittest.TestCase.
If one wishes to run a single TestCase test, e.g.:
# file tests.py
import unittest

class T(unittest.TestCase):
    def test_something(self):
        1/0
This can be done on the command line with:
nosetests tests:T.test_something
I sometimes prefer to write a simple function and skip all the unittest boilerplate:
def test_something_else():
    assert False
In that case, the test will still be run by nose when it runs all my tests. But how can I tell nose to run only that test, from the (Unix) command line?
That would be:
nosetests tests:test_something_else
An additional tip is to use attributes
from nose.plugins.attrib import attr

@attr('now')
def test_something_else():
    pass
To run all tests tagged with the attribute, execute:
nosetests -a now
Inversely, to avoid running those tests (quote the ! so the shell does not interpret it):
nosetests -a '!now'