I am trying to achieve 100% coverage for a basic Python module.
I use Ned Batchelder's coverage.py module to measure it.
1 class account(object):
2     def __init__(self, initial_balance=0):
3         self.balance = initial_balance
4     def add_one(self):
5         self.balance = self.balance + 1
These are the tests.
class TestAccount(unittest.TestCase):
    def test_create_edit_account(self):
        a = account1.account()
        a.add_one()
Here is the coverage report I get.
COVERAGE REPORT
Name       Stmts   Miss  Cover   Missing
-----------------------------------------
__init__       1      1     0%   1
account1       5      3    40%   1-2, 4
account2       7      7     0%   1-7
As we can see, lines 1-2 and 4 are not covered; these are the definition lines.
The rest of the lines are executed.
I think your problem is described in the FAQ:
Q: Why do the bodies of functions (or classes) show as executed, but
the def lines do not?
This happens because coverage is started after the functions are
defined. The definition lines are executed without coverage
measurement, then coverage is started, then the function is called.
This means the body is measured, but the definition of the function
itself is not.
To fix this, start coverage earlier. If you use the command line to
run your program with coverage, then your entire program will be
monitored. If you are using the API, you need to call coverage.start()
before importing the modules that define your functions.
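For illustration, here is a minimal sketch of the API approach described in the FAQ (module and test names are taken from the question; the exact test-running call is an assumption):

import coverage

cov = coverage.Coverage()
cov.start()            # start measurement BEFORE importing the code under test

import account1        # the class/def lines are now recorded
import unittest

# ... define or import TestAccount here and run the tests,
# e.g. unittest.main(exit=False) ...

cov.stop()
cov.save()
cov.report(show_missing=True)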
Following jcollado's answer:
I had this problem with django-nose, which only covers lines used by the tests.
To fix it, I first launch manage.py with coverage, and then I launch the tests.
The .coverage file will then contain both reports.
My first command is a custom one that prints my project settings. Example:
coverage run ./manage.py settings && ./manage.py test myapp
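If you are combining two plain coverage.py runs yourself (an assumption about this setup, since django-nose may handle its own coverage data), the second run can be appended to the same .coverage file like this:

coverage run ./manage.py settings
coverage run --append ./manage.py test myapp
coverage report -m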
Related
I'm trying to get the executed code out of each pytest test case (in order to build a debugger). This is possible with the coverage module API; I currently use the following code:
import coverage

cov = coverage.Coverage()
cov.start()
assert nrm.mySqrt(x) == 3  # this is a test case using pytest
cov.stop()
cov.json_report()
cov.erase()  # after this line I repeat the steps above
But this is not practical. If I use the command coverage run -m pytest fileName instead, it does not give me a report for each case. I'm trying to use the plugin API to handle a special case: whenever the executed line includes an assert, I want to get a JSON report and then erase the data. Will I be able to do that, or should I try to find a way to embed the code above in each test case?
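One possible approach (a sketch only, with an assumed output file name pattern) is a small conftest.py plugin that wraps each test call in its own coverage measurement and writes one JSON report per test:

# conftest.py -- sketch of per-test coverage collection
import coverage
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    cov = coverage.Coverage()
    cov.start()
    yield                      # the actual test body runs here
    cov.stop()
    # one JSON report per test case (the file name pattern is an assumption)
    cov.json_report(outfile="coverage_{}.json".format(item.name))
    cov.erase()

With this conftest.py in place, running pytest directly (without coverage run) would produce one JSON file per executed test.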
I have a class with 3 methods (in Python).
class MyClass:
    def A(self):
        ...
    def B(self):
        ...
    def C(self):
        ...
I have written a unit test for only one method, A. This unit test covers every line of method A; that is, there are no if...else or other branching constructs.
What would be the code coverage percentage?
Again, if I write another unit test for the second method of the class, covering all of its lines, what would the code coverage percentage be then?
I got the answer myself :-)
Code coverage depends entirely on which module or which files you run coverage for. Say we run coverage for one file, the way I framed my question: every line in every method is accounted for in the coverage.
As per my question, I am covering only one method, containing 20 lines. The other 2 methods have another 80 lines (100 lines in total across the 3 methods). So if I ran coverage for my file, I would get code coverage of only 20%.
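As a rough sketch of that situation (the module name my_module is hypothetical; the method names match the example above and the 20/80 line split is assumed), a test that touches only A looks like this, and B and C count entirely as missed:

import unittest
from my_module import MyClass   # hypothetical module containing MyClass

class TestOnlyA(unittest.TestCase):
    def test_a(self):
        obj = MyClass()
        obj.A()   # every line of A runs; B and C are never called

if __name__ == "__main__":
    unittest.main()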
In Python we can run it (e.g. in the PyCharm terminal) like: coverage run -m pytest my_file.py
To get the report, run the command: coverage report -m
To run it for the entire module (all packages), use coverage run -m pytest and then coverage report -m
My tests clearly execute each function, and there are no unused imports either. Yet, according to the coverage report, 62% of the code was never executed in the following file:
Can someone please point out what I might be doing wrong?
Here's how I initialise the test suite and the coverage:
cov = coverage(branch=True, omit=['website/*', 'run_test_suite.py'])
cov.start()
try:
    unittest.main(argv=[sys.argv[0]])
except:
    pass
cov.stop()
cov.save()
print "\n\nCoverage Report:\n"
cov.report()
print "HTML version: " + os.path.join(BASEDIR, "tmp/coverage/index.html")
cov.html_report(directory='tmp/coverage')
cov.erase()
This is the third question in the coverage.py FAQ:
Q: Why do the bodies of functions (or classes) show as executed, but
the def lines do not?
This happens because coverage is started after the functions are
defined. The definition lines are executed without coverage
measurement, then coverage is started, then the function is called.
This means the body is measured, but the definition of the function
itself is not.
To fix this, start coverage earlier. If you use the command line to
run your program with coverage, then your entire program will be
monitored. If you are using the API, you need to call coverage.start()
before importing the modules that define your functions.
The simplest thing to do is run your tests under coverage:
$ coverage run -m unittest discover
Your custom test script isn't doing much beyond what the coverage command line would do; it will be simpler just to use the command line.
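For reference, rough command-line equivalents of what the script does (branch measurement, the same omit patterns, a text report, an HTML report, and cleanup) would be:

coverage run --branch --omit='website/*,run_test_suite.py' -m unittest discover
coverage report
coverage html -d tmp/coverage
coverage erase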
For excluding the import statements, you can add the following lines to .coveragerc:
[report]
exclude_lines =
    # Ignore imports
    from
    import
But when I tried to add '@' for decorators, the source code within the scope of the decorators was excluded as well, and the coverage rate was wrong.
There may be some other way to exclude decorators.
Say I want to grade some student Python code using tests, something like this (pseudo-code I wish I could write):
code = __import__("student_code")  # Import the code to be tested
grade = 100
for test in all_tests():           # Loop over the tests that were gathered
    good = perform(test, code)     # Perform the test individually on the code
    if not good:                   # Do something if the code gives the wrong result
        grade -= 1
For that, I would like to use pytest (easy to write good tests), but there are many things I don't know how to do:
how to run tests on external code? (here the code imported from student's code)
how to list all the tests available? (here all_tests())
how to run them individually on code? (here perform(test, code))
I couldn't find anything related to this use-case (pytest.main() does not seem to do the trick anyhow...)
I hope you see my point, cheers!
EDIT
I finally found out how to handle my 1st point (applying tests on external code). In the repository where you want to run the tests, create a conftest.py file with:
import imp  # Import standard library
import pytest

def pytest_addoption(parser):
    """Add a custom command-line option to py.test."""
    parser.addoption("--module", help="Code file to be tested.")

@pytest.fixture(scope='session')
def module(request):
    """Import code specified with the command-line custom option '--module'."""
    codename = request.config.getoption("--module")
    # Import module (standard __import__ does not support import by filename)
    try:
        code = imp.load_source('code', codename)
    except Exception as err:
        print "ERROR while importing {!r}".format(codename)
        raise
    return code
Then, gather your tests in a tests.py file, using the module fixture:
def test_sample(module):
    assert module.add(1, 2) == 3
Finally, run the tests with py.test tests.py --module student.py.
I'm still working on points 2 and 3.
EDIT 2
I uploaded my (incomplete) take at this question:
https://gitlab.in2p3.fr/ycopin/pyTestExam
Help & contributions are welcome!
Very cool and interesting project. It's difficult to answer without knowing more.
Basically you should be able to do this by writing a custom plugin. Probably something you could place in a conftest.py in a test or project folder, alongside your unittest subfolder and a subfolder for each student.
You would probably want to write two plugins:
One to allow weighting of tests (e.g. test_foo_10 and test_bar_5) and calculation of a final grade (e.g. 490/520) (teamcity-messages is an example that uses the same hooks)
Another to allow distribution of tests to separate processes (xdist is an example)
I know this is not a very complete answer, but I wanted to at least point out the last point. Since there is a very high probability that students will use overlapping module names, these would clash in a pytest world where tests are collected first and then run in a process that attempts not to re-import modules with a common name.
Even if you attempt to control for that, you will eventually have a student manipulate the global namespace in a sloppy way that could cause another student's code to fail. For that reason, you will need either a bash script to run each student's file or a plugin that runs them in separate processes.
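A rough sketch of the separate-process idea (the submissions/ directory layout and the pass/fail reporting are assumptions), reusing the --module option and tests.py from the question above:

# grade_all.py -- sketch: run the test suite once per student submission,
# each in its own pytest process, so student module names never clash
import glob
import subprocess

for student_file in sorted(glob.glob("submissions/*.py")):   # assumed layout
    ret = subprocess.call(["pytest", "tests.py", "--module", student_file, "-q"])
    # exit code 0 means every test passed for this submission
    print("{}: {}".format(student_file, "PASS" if ret == 0 else "FAIL"))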
Make this use case a graded take-home exam and see what they come up with :-) ... naaah ... but you could ;-)
I came up with something like this (where it is assumed that the sum function is the student code):
import unittest

score = 0

class TestAndGrade(unittest.TestCase):
    def test_correctness(self):
        self.assertEqual(sum([2, 2]), 4)
        global score; score += 6  # Increase score

    def test_edge_cases(self):
        with self.assertRaises(Exception):
            sum([2, 'hello'])
        global score; score += 1  # Increase score

    # Print the score
    @classmethod
    def tearDownClass(cls):
        global score
        print('\n\n-------------')
        print('| Score: {} |'.format(score))
        print('-------------\n')

# Run the tests
unittest.main()
I have a minor issue using py.test for my unit tests.
I use py.test to run my tests and output a junitxml report of the tests.
This xml report is imported in jenkins and generates nice statistics.
When I use a test class which derives from unittest.TestCase,
I skip expected failures using:
@unittest.skip("Bug 1234 : This does not work")
This message also shows up in jenkins when selecting this test.
When I don't use a unittest.TestCase class, e.g. to use py.test parametrize functionality,
I skip expected failures using:
@pytest.mark.xfail(reason="Bug 1234 : This does not work", run=False)
But then this reason is not actually displayed in jenkins; instead it will say:
Skip Message
expected test failure
How can I fix this?
I solved it using this line as the first line of the test:
pytest.skip("Bug 1234: This does not work")
I'd rather have used one of the pytest decorators, but this'll do.
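If a decorator is preferred, one alternative worth trying (an assumption on my part, not verified against this particular jenkins setup) is a skip mark with an explicit reason, which should end up in the junitxml skip message:

import pytest

@pytest.mark.skip(reason="Bug 1234 : This does not work")
def test_known_bug():                     # hypothetical test name
    assert broken_feature() == "fixed"    # hypothetical call, never executed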
I had a similar problem, except I had a different jenkins message and could not tell which test was skipped.
It turns out that if the only test in the module is a skipped test, then jenkins would not show the test in the test result list (using either the decorator or jr-be's solution). You could see that there was a skipped test in the total results, but could not tell which test, or which module the skipped test was in.
To solve this (ok, hack-solve it), I went back to using the decorator on my test and added a dummy test (so I have 1 test that runs and 1 test that gets skipped):
@pytest.mark.skip('SONIC-3218')
def test_segments_create_delete(self, api):
    logging.info('TestCreateDeleteSegments.test_segments_create_delete')

def test_dummy(self, api):
    '''
    Dummy test to see if the suite will display in jenkins if one
    test is run and 1 is skipped (instead of having only skipped tests)
    '''
    logging.info('TestCreateDeleteSegments.test_dummy')
For me that works since I would rather have 1 extra dummy test and be able to find my skipped tests.