In the following code,
1. If we assert the end result of a function, is it right to say that we have covered all lines of the code while testing, or do we have to test each line of the code explicitly? If so, how?
2. Also, is it fine to have positive, negative, and further assert statements in one single test function? If not, please give examples.
def get_wcp(book_id):
    update_tracking_details(book_id)
    c_id = get_new_supreme(book_id)
    if book_id == c_id:
        return True
    return False


class BookingEntityTest(TestCase):
    def setUp(self):
        self.booking_id = 123456789  # 9-digit number, positive test case

    def test_get_wcp(self):
        retVal = get_wcp(self.booking_id)
        self.assertTrue(retVal)
        self.booking_id = 1  # 1-digit number, negative test case
        retVal = get_wcp(self.booking_id)
        self.assertFalse(retVal)
No, just because you asserted the final result of a function doesn't mean all paths through your code have been exercised. You don't need to test each line individually, but you do need tests that go through every possible code path.
As a general guideline, keep the number of assert statements in a test to a minimum. When one assert fails, the remaining asserts are not executed, so their failures go unreported, which is usually not what you want.
Besides, try to write your tests as elegantly as possible, even more so than the code they are testing. We wouldn't want to write tests to test our tests, now would we?
def test_get_wcp_returns_true_when_valid_booking_id(self):
    self.assertTrue(get_wcp(self.booking_id))

def test_get_wcp_returns_false_if_invalid_booking_id(self):
    self.assertFalse(get_wcp(1))
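If you want evidence that both return paths are actually exercised, rather than reasoning about it by hand, coverage.py can measure branch coverage while the tests run. A minimal sketch using its Python API (the module name test_booking is an assumption; point it at wherever the test case lives):
import coverage
import unittest

# Run the test module under branch coverage, then report lines and branches that were never hit.
cov = coverage.Coverage(branch=True)
cov.start()
unittest.main(module="test_booking", argv=["prog"], exit=False)
cov.stop()
cov.save()
cov.report(show_missing=True)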
Completely bug-free code is not possible.
"Testing shows the presence, not the absence of bugs" - Edsger Dijkstra.
I have to edit a Python file such that after every if condition, I add a line which says:
if condition_check:
    if self.debug == 1: print "COVERAGE CONDITION #8.3 True (condition_check)"
    # some other code
else:
    if self.debug == 1: print "COVERAGE CONDITION #8.4 False (condition_check)"
    # some other code
The number 8.4 (generally y.x) refers to the fact that this if condition is in function number 8 (y); the functions are just numbered sequentially, nothing special about 8, and x means it is the xth if condition in the yth function.
Of course, the added line has to use the proper indentation. condition_check is the condition being checked.
For example:
if (self.order_in_cb):
    self.ccu_process_crossing_buffer_order()
becomes:
if (self.order_in_cb):
    if self.debug == 1: print "COVERAGE CONDITION #8.2 TRUE (self.order_in_cb)"
    self.ccu_process_crossing_buffer_order()
How do I achieve this?
EXTRA BACKGROUND:
I have about 1200 lines of Python code with about 180 if conditions, and I need to see whether every if condition is hit during the execution of 47 test cases.
In other words, I need to do code coverage. The complication is that I am working with cocotb stimulus for RTL verification. As a result, there is no direct way to drive the stimulus, so I don't see an easy way to use the standard coverage.py approach to measure coverage.
Is there another way to check the coverage? I feel I am missing something.
If you truly can't use coverage.py, then I would write a helper function that uses inspect.stack to find the caller and linecache to read the source line, and logs that way. Then you only have to change if something: to if condition(something): throughout your file, which should be fairly easy.
Here's a proof of concept:
import inspect
import linecache
import re

debug = True

def condition(label, cond):
    if debug:
        caller = inspect.stack()[1]
        line = linecache.getline(caller.filename, caller.lineno)
        condcode = re.search(r"if condition\(.*?,(.*)\):", line).group(1)
        print("CONDITION {}: {}".format(label, condcode))
    return cond

x = 1
y = 1
if condition(1.1, x + y == 2):
    print("it's two!")
This prints:
CONDITION 1.1: x + y == 2
it's two!
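If you also want a summary of which conditions were hit across the whole run (for example, over your 47 test cases), one option is to record each label inside condition() and report the ones that were never reached when the process exits. A sketch building on the proof of concept above; the set of expected labels is an assumption you would fill in from your own y.x numbering:
import atexit

all_labels = {1.1, 8.2, 8.3, 8.4}   # assumption: the full set of y.x labels in your file
hit_labels = set()                  # add `hit_labels.add(label)` at the top of condition()

def report_conditions():
    missed = sorted(all_labels - hit_labels)
    print("CONDITIONS hit: {}/{}".format(len(hit_labels), len(all_labels)))
    if missed:
        print("CONDITIONS never hit: {}".format(missed))

atexit.register(report_conditions)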
I have about 1200 lines of Python code with about 180 if conditions - I need to see if every if condition is hit during the execution of 47 test cases. In other words, I need to do code coverage. The complication is that I am working with cocotb stimulus for RTL verification.
Cocotb has support for coverage built in (docs)
export COVERAGE=1
# run cocotb however you currently invoke it
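Once the simulation has finished, the collected data can be turned into a report with the ordinary coverage.py tooling. A small sketch, assuming the run left a standard .coverage data file in the working directory:
import coverage

# Load the data written during the cocotb run and print a per-line report,
# or write an HTML report for browsing.
cov = coverage.Coverage()
cov.load()
cov.report(show_missing=True)
cov.html_report(directory="htmlcov")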
My program produces natural language sentences.
I would like to test it properly by setting the random seed to a fixed value and then:
producing expected results;
comparing generated sentences with expected results;
if they differ, asking the user whether the generated sentences were actually the expected results, and in that case updating the expected results.
I have already seen such systems in JS, so I am surprised not to find one in Python. How do you deal with such situations?
There are many testing frameworks in Python, with two of the most popular being PyTest and Nose. PyTest tends to cover all the bases, but Nose has a lot of nice extras as well.
With nose, fixtures are covered early on in the docs. The example they give looks something like this:
def setup_func():
    "set up test fixtures"

def teardown_func():
    "tear down test fixtures"

@with_setup(setup_func, teardown_func)
def test():
    "test ..."
In your case, with the manual review, you may need to build that logic directly into the test itself.
Edit with a more specific example
Building on the example from Nose, one way you could address this is by writing the test
from nose.tools import eq_, with_setup

def setup_func():
    "set your random seed"

def teardown_func():
    "whatever teardown you need"

@with_setup(setup_func, teardown_func)
def test():
    expected = "the correct answer"
    actual = "make a prediction ..."
    eq_(expected, actual, "prediction did not match!")
When you run your tests, if the model does not produce the correct output, the tests will fail with "prediction did not match!". In that case, you should go to your test file and update expected with the expected value. This procedure isn't as dynamic as typing it in at runtime, but it has the advantage of being easily versioned and controlled.
One drawback of asking the user to replace the expected answer is that the test can no longer run unattended; that is why test frameworks do not allow reading from input.
I really wanted this feature, so my implementation looks like:
import filecmp
import logging
import os
import random
import sys
from pathlib import Path

def compare_results(expected, results):
    if not os.path.isfile(expected):
        logging.warning("The expected file does not exist.")
    elif filecmp.cmp(expected, results):
        logging.debug("%s is accepted." % expected)
        return
    content = Path(results).read_text()
    print("The test %s failed." % expected)
    print("Should I accept the results?")
    print(content)
    while True:
        try:
            keep = input("[y/n]")
        except OSError:
            assert False, "The test failed. Run this file directly to accept the result."
        if keep.lower() in ["y", "yes"]:
            Path(expected).write_text(content)
            break
        elif keep.lower() in ["n", "no"]:
            assert False, "The test failed and you did not accept the answer."
        else:
            print("Please answer yes or no.")

def test_example_iot_root(setup):
    ...
    compare_results(EXPECTED_DIR / "iot_root.test", tmp.name)

if __name__ == "__main__":
    from inspect import getmembers, isfunction

    def istest(o):
        return isfunction(o[1]) and o[0].startswith("test")

    for o in getmembers(sys.modules[__name__]):
        if istest(o):
            random.seed(1)
            o[1](setup)
When I run this file directly, it asks me whether or not it should replace the expected results. When I run it from pytest, input raises an OSError, which lets the test exit the loop. Definitely not perfect.
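A somewhat cleaner variant is to let pytest itself carry the "accept the new output" switch instead of relying on input() raising OSError under the test runner. A sketch; the --accept option and the accept_results fixture are names I made up, not an existing plugin:
# conftest.py
import pytest

def pytest_addoption(parser):
    # hypothetical flag: `pytest --accept` stores the generated output as the new expected output
    parser.addoption("--accept", action="store_true", default=False,
                     help="accept generated output as the new expected output")

@pytest.fixture
def accept_results(request):
    return request.config.getoption("--accept")

# in the test module
from pathlib import Path

def compare_results(expected, results, accept):
    actual = Path(results).read_text()
    if Path(expected).is_file() and Path(expected).read_text() == actual:
        return
    if accept:
        Path(expected).write_text(actual)   # overwrite the expected file
    else:
        assert False, "The test %s failed; rerun with --accept to take the new output." % expected

def test_example_iot_root(setup, accept_results):
    ...
    compare_results(EXPECTED_DIR / "iot_root.test", tmp.name, accept_results)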
In my test suite, I have different tests for integration and for stability.
For example,
@pytest.mark.integration
def test_integration_total_devices(settings, total_devices):
    assert total_devices == settings['integration']['nodes']['total']

@pytest.mark.stability
def test_stability_total_devices(settings, total_devices):
    assert total_devices == settings['stability']['nodes']['total']
As you can see, it's exactly the same code, just reading a different parameter from the config.
How can I avoid duplicating the code? The settings value is different, so I can't just do:
@pytest.mark.integration
@pytest.mark.stability
def test_integration_total_devices(settings, total_devices):
    assert total_devices == settings['nodes']['total']
I forgot to mention (thanks @dzejdzej for reminding me) that pytest parametrize doesn't seem to do the trick. It works when I want to run both "marks", but the purpose of the marks is to be able to run the tests of just one of them independently, for example with pytest -m integration. However, as far as I have tested, whenever I use parametrize it runs both.
@pytest.mark.parametrize('type', (
    pytest.param('stability', marks=pytest.mark.stability),
    pytest.param('integration', marks=pytest.mark.integration),
))
@pytest.mark.integration
@pytest.mark.stability
def test_total_devices(settings, total_devices, type):
    assert total_devices == settings[type]['nodes']['total']
Please take a look at pytest parametrize https://docs.pytest.org/en/latest/parametrize.html
It should be possible to do something along these lines:
@pytest.mark.parametrize('area,total_devices', (
    pytest.param('stability', 10, marks=pytest.mark.stability),
    pytest.param('integration', 15, marks=pytest.mark.integration),
))
def test_integration_total_devices(area, total_devices):
    assert total_devices == settings.get(area)['nodes']['total']
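For the setup in the question, where settings is a fixture, the same idea looks roughly like this. Note that the marks are attached only through pytest.param, without the extra function-level @pytest.mark.integration / @pytest.mark.stability decorators; with those function-level marks every parametrized case carries both marks, which would explain why -m selected both cases in the attempt above. A sketch, not tested against your fixtures:
import pytest

@pytest.mark.parametrize('area', (
    pytest.param('stability', marks=pytest.mark.stability),
    pytest.param('integration', marks=pytest.mark.integration),
))
def test_total_devices(settings, total_devices, area):
    assert total_devices == settings[area]['nodes']['total']
Running pytest -m integration should then select only the integration case and deselect the stability one.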
Occasionally I will handle some condition that I'm fairly certain would be an edge case, but I can't think of an example where it would come up, so I can't come up with a test case for it. I'm wondering if there's a way to add a pragma to my code such that if, in the future, some change in the tests accidentally starts covering this line, I would be alerted to this fact (since such accidental coverage will produce the needed test case, but possibly as an implementation detail, leaving the coverage of this line fragile). I've come up with a contrived example of this:
In mysquare.py:
def mysquare(x):
    ov = x * x
    if abs(ov) != ov or type(abs(ov)) != type(ov):
        # Always want this to be positive, though why would this ever fail?!
        ov = abs(ov)  # pragma: nocover
    return ov
Then in my test suite I start with:
from hypothesis import given
from hypothesis.strategies import one_of, integers, floats

from mysquare import mysquare

NUMBERS = one_of(integers(), floats()).filter(lambda x: x == x)

@given(NUMBERS)
def test_mysquare(x):
    assert mysquare(x) == abs(x * x)

@given(NUMBERS)
def test_mysquare_positive(x):
    assert mysquare(x) == abs(mysquare(x))
The nocover line is never hit, but that's only because I can't think of a way to reach it! However, at some time in the far future, I decide that mysquare should also support complex numbers, so I change NUMBERS:
NUMBERS = one_of(integers(), floats(),
                 complex_numbers()).filter(lambda x: x == x)
Now I'm suddenly, unexpectedly covering the line, but I'm not alerted to this fact. Is there something like nocover that works more like pytest.xfail - a positive assertion that that particular line is covered by no tests? Preferably compatible with pytest.
Coverage.py doesn't yet have this feature, but it's an interesting one: a warning that a line marked as not covered was actually covered. As a difficult hack, you could do something like disable your nocover pragma, collect the lines covered, and compare them to the line numbers carrying the pragma.... Ick.
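For what it is worth, a rough sketch of that hack, assuming coverage.py 5+ so that `coverage json` is available: run the suite with the default pragma exclusion disabled (for example by overriding exclude_lines in the config with a pattern that never matches), generate coverage.json, and then flag any pragma-marked line that shows up as executed:
import json
import re

# Flag lines carrying a no-cover pragma that were nevertheless executed,
# based on the executed_lines recorded in coverage.json.
PRAGMA = re.compile(r"#\s*pragma:\s*no\s*cover")

with open("coverage.json") as f:
    report = json.load(f)

for path, info in report["files"].items():
    executed = set(info["executed_lines"])
    with open(path) as src:
        for lineno, line in enumerate(src, start=1):
            if PRAGMA.search(line) and lineno in executed:
                print("{}:{} is marked no-cover but was executed".format(path, lineno))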
I am writing some Python unit tests using the "unittest" framework and running them in PyCharm. Some of the tests compare a long generated string to a reference value read from a file. If this comparison fails, I would like to see the diff of the two compared strings in PyCharm's diff viewer.
So the code is like this:
actual = open("actual.csv").read()
expected = pkg_resources.resource_string('my_package', 'expected.csv').decode('utf8')
self.assertMultiLineEqual(actual, expected)
And PyCharm nicely identifies the test as a failure and provides a link in the results window that opens the diff viewer. However, due to how unittest shortens the results, I get output such as this in the diff viewer:
Left side:
'time[57 chars]ercent
0;1;1;1;1;1;1;1
0;2;1;3;4;2;3;1
0;3;[110 chars]32
'
Right side:
'time[57 chars]ercen
0;1;1;1;1;1;1;1
0;2;1;3;4;2;3;1
0;3;2[109 chars]32
'
Now, I would like to get rid of all the [X chars] parts and just see the whole file(s) and the actual diff fully visualized by PyCharm.
I tried to look into the unittest code but could not find a configuration option to print full results. There are some variables such as maxDiff and _diffThreshold, but they have no impact on this output.
Also, I tried running this with py.test, but its support in PyCharm was even weaker (not even links to the failed tests).
Is there some trick using difflib with unittest, or maybe some other trick with another Python test framework, to do this?
The TestCase.maxDiff = None answers given in many places only make sure that the diff shown in the unittest output is of full length. In order to also get the full diff behind the <Click to see difference> link, you have to set unittest.util._MAX_LENGTH.
import unittest
# Show full diff in unittest
unittest.util._MAX_LENGTH=2000
Source: https://stackoverflow.com/a/23617918/1878199
Well, I managed to hack my way around this for my test purposes. Instead of using the assertEqual method from unittest, I wrote my own and use that inside the unittest test cases. On failure, it gives me the full texts, and the PyCharm diff viewer also shows the full diff correctly.
My assert helper lives in a module of its own (t_assert.py) and looks like this:
def equal(expected, actual):
    msg = "'" + actual + "' != '" + expected + "'"
    assert expected == actual, msg
In my test I then call it like this
def test_example(self):
    actual = open("actual.csv").read()
    expected = pkg_resources.resource_string('my_package', 'expected.csv').decode('utf8')
    t_assert.equal(expected, actual)
    # self.assertEqual(expected, actual)
Seems to work so far..
A related problem here is that unittest.TestCase.assertMultiLineEqual is implemented with difflib.ndiff(). This generates really big diffs that contain all shared content along with the differences. If you monkey-patch it to use difflib.unified_diff() instead, you get much smaller diffs that are less often truncated. This often avoids the need to set maxDiff.
import unittest
from unittest.case import _common_shorten_repr
import difflib

def assertMultiLineEqual(self, first, second, msg=None):
    """Assert that two multi-line strings are equal."""
    self.assertIsInstance(first, str, 'First argument is not a string')
    self.assertIsInstance(second, str, 'Second argument is not a string')

    if first != second:
        firstlines = first.splitlines(keepends=True)
        secondlines = second.splitlines(keepends=True)
        if len(firstlines) == 1 and first.strip('\r\n') == first:
            firstlines = [first + '\n']
            secondlines = [second + '\n']
        standardMsg = '%s != %s' % _common_shorten_repr(first, second)
        diff = '\n' + ''.join(difflib.unified_diff(firstlines, secondlines))
        standardMsg = self._truncateMessage(standardMsg, diff)
        self.fail(self._formatMessage(msg, standardMsg))

unittest.TestCase.assertMultiLineEqual = assertMultiLineEqual
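To apply the patch everywhere, the snippet above can live in a small helper module that is imported before the tests run (for pytest, a conftest.py works; with plain unittest, importing it from the test modules or a shared base module has the same effect). A minimal usage sketch, with made-up module and test names:
# test_example.py
import unittest
import patch_unified_diff   # assumed name of the module containing the patch above

class DiffDemoTest(unittest.TestCase):
    def test_csv_matches_reference(self):
        actual = "time;percent\n0;1;1\n0;2;1\n"
        expected = "time;percent\n0;1;1\n0;3;1\n"
        # assertEqual on two str values dispatches to assertMultiLineEqual,
        # so the failure message now contains a unified diff.
        self.assertEqual(actual, expected)

if __name__ == "__main__":
    unittest.main()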