In my test suite, I have different tests for integration and for stability.
For example,
@pytest.mark.integration
def test_integration_total_devices(settings, total_devices):
    assert total_devices == settings['integration']['nodes']['total']

@pytest.mark.stability
def test_stability_total_devices(settings, total_devices):
    assert total_devices == settings['stability']['nodes']['total']
As you can see, it's exactly the same code, just reading a different parameter from the config.
How can I avoid this duplication? The settings values are different, so I can't just do:
@pytest.mark.integration
@pytest.mark.stability
def test_integration_total_devices(settings, total_devices):
    assert total_devices == settings['nodes']['total']
I forgot to mention (thanks @dzejdzej for reminding me) that pytest's parametrize doesn't seem to do the trick. It works when I want to run both "marks", but the purpose of the marks is to be able to run just one of them independently, for example pytest -m integration. However, as far as I tested, whenever I use parametrize it runs both.
@pytest.mark.parametrize('type', (
    pytest.param('stability', marks=pytest.mark.stability),
    pytest.param('integration', marks=pytest.mark.integration),
))
@pytest.mark.integration
@pytest.mark.stability
def test_total_devices(settings, total_devices, type):
    assert total_devices == settings[type]['nodes']['total']
Please take a look at pytest parametrize https://docs.pytest.org/en/latest/parametrize.html
It should be possible to do something along these lines:
@pytest.mark.parametrize('area,total_devices', (
    pytest.param('stability', 10, marks=pytest.mark.stability),
    pytest.param('integration', 15, marks=pytest.mark.integration),
))
def test_integration_total_devices(area, total_devices):
    assert total_devices == settings.get(area)['nodes']['total']
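With the marks attached to the individual pytest.param entries (and no function-level marks applied to every generated test), -m selection should deselect the parameter sets that do not carry the requested mark, so each suite can still be run on its own:
pytest -m integration   # runs only the 'integration' parameter set
pytest -m stability     # runs only the 'stability' parameter set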
I might be guilty of using pytest in a way I'm not supposed to, but let's say I want to generate and display some short message on successful test completion. Like I'm developing a compression algorithm, and instead of "PASSED" I'd like to see "SQUEEZED BY 65.43%" or something like that. Is this even possible? Where should I start with the customization, or maybe there's a plugin I might use?
I've stumbled upon pytest-custom-report, but it provides only static messages that are set up before the tests run. That's not what I need.
I might be guilty of using pytest in a way I'm not supposed to
Not at all - this is exactly the kind of use case the pytest plugin system is supposed to solve.
To answer your actual question: it's not clear where the percentage value comes from. Assuming it is returned by a function squeeze(), I would first store the percentage in the test, for example using the record_property fixture:
from mylib import squeeze

def test_spam(record_property):
    value = squeeze()
    record_property('x', value)
    ...
To display the stored percentage value, add a custom pytest_report_teststatus hookimpl in a conftest.py in your project or tests root directory:
# conftest.py

def pytest_report_teststatus(report, config):
    if report.when == 'call' and report.passed:
        percentage = dict(report.user_properties).get('x', float("nan"))
        short_outcome = f'{percentage * 100}%'
        long_outcome = f'SQUEEZED BY {percentage * 100}%'
        return report.outcome, short_outcome, long_outcome
Now running test_spam in default output mode yields
test_spam.py 10.0% [100%]
Running in verbose mode yields
test_spam.py::test_spam SQUEEZED BY 10.0% [100%]
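If you want the two-decimal formatting from the question (e.g. SQUEEZED BY 65.43%), a % format spec handles the scaling and rounding, assuming the stored value is a fraction such as 0.6543:
short_outcome = f'{percentage:.2%}'
long_outcome = f'SQUEEZED BY {percentage:.2%}'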
My program is producing natural language sentences.
I would like to test it properly by setting the random seed to a fixed value and then:
producing expected results;
comparing generated sentences with expected results;
if they differ, asking the user if the generated sentences were actually the expected results, and in this case, updating the expected results.
I have already seen such systems in JS, so I am surprised not to find one in Python. How do you deal with such situations?
There are many testing frameworks in Python, with two of the most popular being PyTest and Nose. PyTest tends to cover all the bases, but Nose has a lot of nice extras as well.
With nose, fixtures are covered early on in the docs. The example they give looks something like
from nose.tools import with_setup

def setup_func():
    "set up test fixtures"

def teardown_func():
    "tear down test fixtures"

@with_setup(setup_func, teardown_func)
def test():
    "test ..."
In your case, with the manual review, you may need to build that logic directly into the test itself.
Edit with a more specific example
Building on the example from Nose, one way you could address this is by writing the test
from nose.tools import eq_, with_setup

def setup_func():
    "set your random seed"

def teardown_func():
    "whatever teardown you need"

@with_setup(setup_func, teardown_func)
def test():
    expected = "the correct answer"
    actual = "make a prediction ..."
    eq_(expected, actual, "prediction did not match!")
When you run your tests, if the model does not produce the correct output, the tests will fail with "prediction did not match!". In that case, you should go to your test file and update expected with the expected value. This procedure isn't as dynamic as typing it in at runtime, but it has the advantage of being easily versioned and controlled.
One drawback of asking the user to replace the expected answer is that the test can no longer run unattended. Therefore, test frameworks do not allow reading from input.
I really wanted this feature, so my implementation looks like:
import filecmp
import logging
import os
import random
import sys
from pathlib import Path

def compare_results(expected, results):
    if not os.path.isfile(expected):
        logging.warning("The expected file does not exist.")
    elif filecmp.cmp(expected, results):
        logging.debug("%s is accepted." % expected)
        return
    content = Path(results).read_text()
    print("The test %s failed." % expected)
    print("Should I accept the results?")
    print(content)
    while True:
        try:
            keep = input("[y/n]")
        except OSError:
            assert False, "The test failed. Run this file directly to accept the result."
        if keep.lower() in ["y", "yes"]:
            Path(expected).write_text(content)
            break
        elif keep.lower() in ["n", "no"]:
            assert False, "The test failed and you did not accept the answer."
        else:
            print("Please answer yes or no.")

def test_example_iot_root(setup):
    ...
    compare_results(EXPECTED_DIR / "iot_root.test", tmp.name)

if __name__ == "__main__":
    from inspect import getmembers, isfunction

    def istest(o):
        return isfunction(o[1]) and o[0].startswith("test")

    # seed the RNG before each test so the generated sentences are reproducible
    for o in getmembers(sys.modules[__name__]):
        if istest(o):
            random.seed(1)
            o[1](setup)
When I run this file directly, it asks me whether or not it should replace the expected results. When I run it from pytest, input() raises an OSError, which exits the loop. Definitely not perfect.
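One way to make that fallback explicit, instead of relying on the OSError, is to check whether stdin is an interactive terminal before prompting; a minimal sketch (under a normal pytest run, capture is enabled and isatty() returns False):
import sys

def can_prompt():
    # only ask the user when stdin is a real terminal;
    # under pytest's default capture this is False, so the test simply fails
    return sys.stdin.isatty()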
In the following code,
1. If we assert the end result of a function, is it right to say that we have covered all lines of the code while testing, or do we have to test each line of the code explicitly? If so, how?
2. Also, is it fine to have positive, negative, and further assert statements in one single test function? If not, please give examples.
def get_wcp(book_id):
    update_tracking_details(book_id)
    c_id = get_new_supreme(book_id)
    if book_id == c_id:
        return True
    return False
class BookingentitntyTest(TestCase):

    def setUp(self):
        self.booking_id = 123456789  # 9-digit number, positive test case

    def test_get_wcp(self):
        retVal = get_wcp(self.booking_id)
        self.assertTrue(retVal)

        self.booking_id = 1  # 1-digit number, negative test case
        retVal = get_wcp(self.booking_id)
        self.assertFalse(retVal)
No, just because you asserted the final result doesn't mean all paths of your code have been evaluated. You don't need to "test each line" explicitly; you do need to exercise all the possible code paths.
As a general guideline, try to keep the number of assert statements in a test to a minimum. When one assert statement fails, the remaining statements are not executed and are not counted as failures, which is usually not what you want.
Besides, try to write your tests as elegantly as possible, even more so than the code they are testing. We wouldn't want to write tests to test our tests, now would we?
def test_get_wcp_returns_true_when_valid_booking_id(self):
    self.assertTrue(get_wcp(self.booking_id))

def test_get_wcp_returns_false_if_invalid_booking_id(self):
    self.assertFalse(get_wcp(1))
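If you also want to see which lines and branches your tests actually execute, a coverage tool reports it for you; for example, with the pytest-cov plugin (the module name booking below is just a placeholder for wherever get_wcp lives):
pip install pytest-cov
pytest --cov=booking --cov-branch --cov-report=term-missing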
Complete bug-free code is not possible.
"Testing shows the presence, not the absence of bugs" - Edsger Dijkstra.
I have a Python unittest like the one below, and I want to run this whole test N number of times.
class Test(TestCase):

    def test_0(self):
        .........
        .........
        .........

Test.Run(name=__name__)
Any Suggestions?
You can use parameterized tests. There are different modules to do that. I use nose to run my unittests (more powerful than the default unittest module) and there's a package called nose-parameterized that allows you to write a factory test and run it a number of times with different values for variables you want.
If you don't want to use nose, there are several other options for running parameterized tests.
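For example, with the parameterized package (the maintained successor of nose-parameterized; the test body below is just a placeholder), a factory test looks roughly like this:
from parameterized import parameterized
from unittest import TestCase

class Test(TestCase):

    @parameterized.expand([(i,) for i in range(10)])
    def test_0(self, i):
        # placeholder check: put your real assertions here
        self.assertGreaterEqual(i, 0)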
Alternatively, you can execute any number of test conditions in a single test (as soon as one fails, the test will report an error). In your particular case, maybe this makes more sense than parameterized tests, because in reality it's only one test; it just needs a large number of runs of the function to reach some level of confidence that it's working properly. So you can do:
import random

class Test(TestCase):

    def test_myfunc(self):
        for _ in range(100):
            value = random.random()
            # replace this with your real check on the random input
            self.assertEqual(value, value + 2)

Test.Run(name=__name__)
Why? Because the test_0 method contains a random element, so each time it runs it selects a random configuration and tests against it. So I am not testing the same thing multiple times.
Randomness in a test makes it non-reproducible. One day you might get 1 failure out of 100, and when you run it again, it’s already gone.
Use a modern testing tool to parametrize your test with a sequential number, then use random.seed to have a random but reproducible test case for each number in a sequence.
portusato suggests nose, but pytest is a more modern and popular tool:
import random, pytest

@pytest.mark.parametrize('i', range(100))
def test_random(i):
    orig_state = random.getstate()
    try:
        random.seed(i)
        data = generate_random_data()
        assert my_algorithm(data) == works
    finally:
        random.setstate(orig_state)
pytest.mark.parametrize “explodes” your single test_random into 100 individual tests — test_random[0] through test_random[99]:
$ pytest -q test.py
....................................................................................................
100 passed in 0.14 seconds
Each of these tests generates different, random, but reproducible input data to your algorithm. If test_random[56] fails, it will fail every time, so you will then be able to debug it.
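To rerun just that one case while debugging, select it by its node id (the quotes keep the shell from interpreting the brackets):
pytest "test.py::test_random[56]"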
If you don't want your test to stop after the first failure, you can use subTest.
class Test(TestCase):

    def test_0(self):
        for i in [1, 2, 3]:
            with self.subTest(i=i):
                self.assertEqual(squared(i), i**2)
Docs
I am writing some Python unit tests using the "unittest" framework and run them in PyCharm. Some of the tests compare a long generated string to a reference value read from a file. If this comparison fails, I would like to see the diff of the two compared strings using PyCharms diff viewer.
So the code is like this:
actual = open("actual.csv").read()
expected = pkg_resources.resource_string('my_package', 'expected.csv').decode('utf8')
self.assertMultiLineEqual(actual, expected)
And PyCharm nicely identifies the test as a failure and provides a link in the results window to click which opens the diff viewer. However, due to how unittest shortens the results, I get results such as this in the diff viewer:
Left side:
'time[57 chars]ercent
0;1;1;1;1;1;1;1
0;2;1;3;4;2;3;1
0;3;[110 chars]32
'
Right side:
'time[57 chars]ercen
0;1;1;1;1;1;1;1
0;2;1;3;4;2;3;1
0;3;2[109 chars]32
'
Now, I would like to get rid of all the [X chars] parts and just see the whole file(s) and the actual diff fully visualized by PyCharm.
I tried to look into unittest code but could not find a configuration option to print full results. There are some variables such as maxDiff and _diffThreshold but they have no impact on this print.
Also, I tried to run this with py.test, but there the PyCharm support was even weaker (not even links to the failed tests).
Is there some trick using the difflib with unittest or maybe some other tricks with another Python test framework to do this?
The TestCase.maxDiff = None answers given in many places only make sure that the diff shown in the unittest output is of full length. In order to also get the full diff in the <Click to see difference> link, you have to set unittest.util._MAX_LENGTH.
import unittest

# Show full diff in unittest
unittest.util._MAX_LENGTH = 2000
Source: https://stackoverflow.com/a/23617918/1878199
Well, I managed to hack my way around this for my test purposes. Instead of using the assertEqual method from unittest, I wrote my own and use that inside the unittest test cases. On failure, it gives me the full texts, and the PyCharm diff viewer also shows the full diff correctly.
My assert statement is in a module of its own (t_assert.py), and looks like this
def equal(expected, actual):
    msg = "'" + actual + "' != '" + expected + "'"
    assert expected == actual, msg
In my test I then call it like this
def test_example(self):
    actual = open("actual.csv").read()
    expected = pkg_resources.resource_string('my_package', 'expected.csv').decode('utf8')
    t_assert.equal(expected, actual)
    # self.assertEqual(expected, actual)
Seems to work so far..
A related problem here is that unittest.TestCase.assertMultiLineEqual is implemented with difflib.ndiff(). This generates really big diffs that contain all shared content along with the differences. If you monkey patch to use difflib.unified_diff() instead, you get much smaller diffs that are less often truncated. This often avoids the need to set maxDiff.
import difflib
import unittest
from unittest.case import _common_shorten_repr

def assertMultiLineEqual(self, first, second, msg=None):
    """Assert that two multi-line strings are equal."""
    self.assertIsInstance(first, str, 'First argument is not a string')
    self.assertIsInstance(second, str, 'Second argument is not a string')
    if first != second:
        firstlines = first.splitlines(keepends=True)
        secondlines = second.splitlines(keepends=True)
        if len(firstlines) == 1 and first.strip('\r\n') == first:
            firstlines = [first + '\n']
            secondlines = [second + '\n']
        standardMsg = '%s != %s' % _common_shorten_repr(first, second)
        diff = '\n' + ''.join(difflib.unified_diff(firstlines, secondlines))
        standardMsg = self._truncateMessage(standardMsg, diff)
        self.fail(self._formatMessage(msg, standardMsg))

unittest.TestCase.assertMultiLineEqual = assertMultiLineEqual
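As a quick sanity check that the patch is in effect (the module and test names here are made up, and the patch is assumed to have been imported first, e.g. via a conftest.py), any TestCase that calls assertMultiLineEqual will now produce a unified diff in its failure message:
# test_patch_demo.py (hypothetical)
import unittest

class CsvDiffTest(unittest.TestCase):
    def test_diff_style(self):
        # fails on purpose so you can inspect the unified-diff output
        self.assertMultiLineEqual("a\nb\nc\n", "a\nB\nc\n")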