How to rewrite a Django test case to avoid unpredictable occasional failures - Python

I have a test case that's written exactly like this
def test_material_search_name(self):
    """
    Tests for `LIKE` condition in searches.
    For both name and serial number.
    """
    material_one = MaterialFactory(name="Eraenys Velinarys", serial_number="SB2341")
    material_two = MaterialFactory(name="Nelaerla Velnaris", serial_number="TB7892")
    response = self.client.get(reverse('material-search'), {'q': 'vel'})
    self.assertEqual(response.status_code, status.HTTP_200_OK)
    self.assertEqual(response.data['count'], 2)
    self.assertEqual(response.data['results'][0]['name'], material_one.name)
    self.assertEqual(response.data['results'][1]['name'], material_two.name)
My error message is:
line 97, in test_material_search_name
self.assertEqual(response.data['results'][0]['name'], material_one.name)
AssertionError: 'Nelaerla Velnaris' != 'Eraenys Velinarys'
- Nelaerla Velnaris
+ Eraenys Velinarys
When I re-run without changing any code, the test passes.
This error happens occasionally, not always.
I was wondering if there's a better way to achieve the objectives of the test case without hitting that weird failure once in a while.
The error occurs roughly once in every 50 runs of the test.
The typical test command I use is:
python manage.py test app_name.tests --keepdb

The search endpoint presumably applies no explicit ordering, so the database is free to return the two matching rows in either order. Here are a few options:
Order the results you get back by name before doing the assertEqual calls
Collect all the names out of the results first and then for each expected name do self.assertIn(name, names)
Order the results the back end returns (add an explicit ordering to the queryset)
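For example, a test-side fix along the lines of the first two options could look like this (a sketch only, reusing the factories and URL name from the question; assertCountEqual is standard unittest):
def test_material_search_name(self):
    """
    Tests for `LIKE` condition in searches.
    For both name and serial number.
    """
    material_one = MaterialFactory(name="Eraenys Velinarys", serial_number="SB2341")
    material_two = MaterialFactory(name="Nelaerla Velnaris", serial_number="TB7892")
    response = self.client.get(reverse('material-search'), {'q': 'vel'})
    self.assertEqual(response.status_code, status.HTTP_200_OK)
    self.assertEqual(response.data['count'], 2)
    # Compare the names order-independently instead of by index.
    names = [result['name'] for result in response.data['results']]
    self.assertCountEqual(names, [material_one.name, material_two.name])
For the third option, adding an explicit ordering (e.g. .order_by('name')) to the search queryset in the view makes the index-based assertions deterministic as well.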

Related

How to test an endpoint exception using flask and pytest?

I have an endpoint that returns a list from my database. If something goes wrong along the way, I return an internal_server_error response, which has a 500 status code and takes a message as a parameter.
def get_general_ranking():
    try:
        ranking_list = GamificationService.get_general_ranking()
        return basic_response(ranking_list, 200)
    except Exception as e:
        logging.error(str(e))
        cache.delete()
        return internal_server_error_response('Could not get ranking. Check log for reasons.')
I am implementing a unit test for this endpoint. So, right now, I have this implementation:
class TestGamificationController(unittest.TestCase):
    def setUp(self):
        """
        Function called when the class is initialized.
        """
        test_app = app.test_client()
        self.general_ranking = test_app.get('/v1/gamification/general_ranking')

    def test_endpoint_general_ranking(self):
        """
        Testing the endpoint '/v1/gamification/general_ranking'.
        """
        assert self.general_ranking.status_code == 200, "Wrong status code."
        assert len(self.general_ranking.json) > 0, "/v1/gamification/general_ranking is returning an empty list."
        assert self.general_ranking.content_type == 'application/json', "Wrong content_type"
But, as you can see below, when I run the test with coverage to check if I am covering 100% of my code, I get 75%. The missing lines are the exception ones.
---------- coverage: platform darwin, python 3.8.0-final-0 -----------
Name                                        Stmts   Miss  Cover   Missing
---------------------------------------------------------------------------
api/controller/GamificationController.py      16      4    75%    18-21
Missing lines:
    except Exception as e:
        logging.error(str(e))
        cache.delete()
        return internal_server_error_response('Could not get ranking. Check log for reasons.')
How can I cover this exception too using pytest? Or should I use something else?
I see three possible fixes for this:
Create a custom route in your app that just raises said exception.
Programmatically add this custom route in your app when you start your tests.
Move your global error handler function to its own file and ignore it from your coverage.
Personally, option 1 is the easiest: guard the extra route with a debug/dev environment check so it just returns a route-not-found error when that flag is off.
Option 2 is doable if you use your Flask app factory to create the app and register the custom route during test execution, although I'm not sure that is enough to avoid the AssertionError Flask raises if you modify its routes after the first request has been handled, which is usually the case when you instantiate your app in your conftest.py.
Option 3 is kind of cheating, I suppose.
Hope you got this sorted out. Would like to know how you solved this.
Update: The best way I've found of doing this is the pytest sessionstart hook, which runs before any tests. I use it to register the custom error endpoints. This approach works best since you don't have to pollute the codebase with test-specific logic.
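A minimal sketch of that hook, assuming an app factory named create_app and a made-up route path (only the hook name pytest_sessionstart comes from pytest itself, everything else is illustrative):
# conftest.py
from my_api import create_app  # assumed app factory; adjust to your project

app = create_app()

def pytest_sessionstart(session):
    # Register a throw-away endpoint before the first request is handled,
    # so Flask does not reject the late route registration.
    @app.route('/v1/test/raise_error')
    def raise_error():
        raise Exception('forced failure to exercise the error path')
A test can then request /v1/test/raise_error and assert on the 500 response produced by the error handler.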

python testing code coverage and multiple asserts in one test function

In the following code:
1. If we assert the end result of a function, is it right to say that we have covered all lines of the code while testing, or do we have to test each line of the code explicitly? If so, how?
2. Also, is it fine to have positive, negative, and further assert statements in one single test function? If not, please give examples.
def get_wcp(book_id):
    update_tracking_details(book_id)
    c_id = get_new_supreme(book_id)
    if book_id == c_id:
        return True
    return False

class BookingentitntyTest(TestCase):
    def setUp(self):
        self.booking_id = 123456789  # 9 digit number, positive test case

    def test_get_wcp(self):
        retVal = get_wcp(self.booking_id)
        self.assertTrue(retVal)

        self.booking_id = 1  # 1 digit number, negative test case
        retVal = get_wcp(self.booking_id)
        self.assertFalse(retVal)
No, just because you asserted the final result doesn't mean all paths of your code have been evaluated. You don't need to "test each line" explicitly; you only need to go through all the possible code paths.
As a general guideline, keep the number of assert statements in a test as small as possible. When one assert statement fails, the following statements are not executed and don't count as failures, which is usually not what you want.
Besides, try to write your tests as elegantly as possible, even more so than the code they are testing. We wouldn't want to write tests to test our tests, now would we?
def test_get_wcp_returns_true_when_valid_booking_id(self):
    self.assertTrue(get_wcp(self.booking_id))

def test_get_wcp_returns_false_if_invalid_booking_id(self):
    self.assertFalse(get_wcp(1))
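If get_new_supreme depends on external state, one way to make both paths deterministic is to patch it in each test. A sketch, where the patch targets ('myapp.views...') are assumptions that need to point at wherever get_wcp looks those functions up:
from unittest import mock

def test_get_wcp_returns_true_when_ids_match(self):
    # Force get_new_supreme to return the same id, covering the True branch.
    with mock.patch('myapp.views.update_tracking_details'), \
         mock.patch('myapp.views.get_new_supreme', return_value=self.booking_id):
        self.assertTrue(get_wcp(self.booking_id))

def test_get_wcp_returns_false_when_ids_differ(self):
    # Force a mismatch, covering the False branch.
    with mock.patch('myapp.views.update_tracking_details'), \
         mock.patch('myapp.views.get_new_supreme', return_value=None):
        self.assertFalse(get_wcp(self.booking_id))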
Complete bug-free code is not possible.
"Testing shows the presence, not the absence of bugs" - Edsger Dijkstra.

PyCharm show full diff when unittest fails for multiline string?

I am writing some Python unit tests using the "unittest" framework and run them in PyCharm. Some of the tests compare a long generated string to a reference value read from a file. If this comparison fails, I would like to see the diff of the two compared strings using PyCharms diff viewer.
So the code is like this:
actual = open("actual.csv").read()
expected = pkg_resources.resource_string('my_package', 'expected.csv').decode('utf8')
self.assertMultiLineEqual(actual, expected)
And PyCharm nicely identifies the test as a failure and provides a link in the results window that opens the diff viewer. However, due to how unittest shortens the results, I get output such as this in the diff viewer:
Left side:
'time[57 chars]ercent
0;1;1;1;1;1;1;1
0;2;1;3;4;2;3;1
0;3;[110 chars]32
'
Right side:
'time[57 chars]ercen
0;1;1;1;1;1;1;1
0;2;1;3;4;2;3;1
0;3;2[109 chars]32
'
Now, I would like to get rid of all the [X chars] parts and just see the whole file(s) and the actual diff fully visualized by PyCharm.
I tried to look into unittest code but could not find a configuration option to print full results. There are some variables such as maxDiff and _diffThreshold but they have no impact on this print.
Also, I tried running this under py.test, but there the PyCharm support was even weaker (not even links to the failed tests).
Is there some trick using difflib with unittest, or maybe some other trick with another Python test framework, to do this?
The TestCase.maxDiff = None answers given in many places only make sure that the diff shown in the unittest output is of full length. In order to also get the full diff in the <Click to see difference> link, you have to set unittest.util._MAX_LENGTH.
import unittest
# Show full diff in unittest
unittest.util._MAX_LENGTH=2000
Source: https://stackoverflow.com/a/23617918/1878199
Well, I managed to hack myself around this for my test purposes. Instead of using the assertEqual method from unittest, I wrote my own and use that inside the unittest test cases. On failure, it gives me the full texts and the PyCharm diff viewer also shows the full diff correctly.
My assert statement is in a module of its own (t_assert.py), and looks like this
def equal(expected, actual):
    msg = "'" + actual + "' != '" + expected + "'"
    assert expected == actual, msg
In my test I then call it like this
def test_example(self):
    actual = open("actual.csv").read()
    expected = pkg_resources.resource_string('my_package', 'expected.csv').decode('utf8')
    t_assert.equal(expected, actual)
    # self.assertEqual(expected, actual)
Seems to work so far..
A related problem here is that unittest.TestCase.assertMultiLineEqual is implemented with difflib.ndiff(). This generates really big diffs that contain all shared content along with the differences. If you monkey patch to use difflib.unified_diff() instead, you get much smaller diffs that are less often truncated. This often avoids the need to set maxDiff.
import unittest
from unittest.case import _common_shorten_repr
import difflib

def assertMultiLineEqual(self, first, second, msg=None):
    """Assert that two multi-line strings are equal."""
    self.assertIsInstance(first, str, 'First argument is not a string')
    self.assertIsInstance(second, str, 'Second argument is not a string')
    if first != second:
        firstlines = first.splitlines(keepends=True)
        secondlines = second.splitlines(keepends=True)
        if len(firstlines) == 1 and first.strip('\r\n') == first:
            firstlines = [first + '\n']
            secondlines = [second + '\n']
        standardMsg = '%s != %s' % _common_shorten_repr(first, second)
        diff = '\n' + ''.join(difflib.unified_diff(firstlines, secondlines))
        standardMsg = self._truncateMessage(standardMsg, diff)
        self.fail(self._formatMessage(msg, standardMsg))

unittest.TestCase.assertMultiLineEqual = assertMultiLineEqual

Python unittest, results vary depending on print statement

I'm mildly confused. I'm testing a Django application with Python's unittest library. All of a sudden, after my tests had been passing 100% for some minutes, an error appeared. OK, I thought, I must have just added some stupid syntax error. I started looking at the test and then at my code, and then I tried printing the values that are compared with assertEqual just before the comparison. Suddenly, if I do that, the test passes!!! :o
Why is this? Has anyone experienced this before? I swear, the only change I made was adding a print statement inside my test function. I'll post the function before and after.
Before (Fails)
def test_swap_conditionals(self):
    """
    Test conditional template keys
    """
    testStr = "My email is: {?email}"
    swapStr = self.t.swap(testStr)

    # With email
    self.assertEqual(swapStr, "My email is: john#baconfactory.com")

    # Without email
    self.t.template_values = {"phone": "00458493"}
    swapStr = self.t.swap(testStr)
    self.assertEqual(swapStr, "My email is: ")
After (Success)
def test_swap_conditionals(self):
    """
    Test conditional template keys
    """
    testStr = "My email is: {?email}"
    swapStr = self.t.swap(testStr)
    print(swapStr)  # diff here

    # With email
    self.assertEqual(swapStr, "My email is: john#baconfactory.com")

    # Without email
    self.t.template_values = {"phone": "00458493"}
    swapStr = self.t.swap(testStr)
    self.assertEqual(swapStr, "My email is: ")
It looks like there is some external reason.
What you can check:
Rerun the test several times under the same conditions. Does it always fail, or always pass? Or is it a "flipper" (flaky) test? That might be caused by timing issues (although this is unlikely).
Put the test in its own class, so there are no side effects from other unit tests.
If the test passes when run in its own test class, the failure is a side effect of:
other unit tests
startup/teardown functionality
OK, well, embarrassing, but this was completely my fault. The swap function was finding every conditional template variable on the line and then iterating over that list while comparing against one template value key at a time, so it either missed keys it already had, or it got lucky and happened to hit the right key.
Example
line: "This is my {?email}, and this is my {?phone}"
finds:
[{?email}, {?phone}]
iterates over [{?email}, {?phone}]
1. {?email}
   key being compared = phone : '00549684'
   It has phone as a key, but it completely disregards it and does not swap it out,
   because it's just holding {?email}, so it simply returns "".
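A minimal sketch of the kind of fix being described, assuming template_values is a dict and placeholders look like {?key} (both names come from the question; the regex and helper are illustrative):
import re

def swap(self, line):
    # Replace every {?key} on the line by looking the key up in
    # template_values, instead of comparing against one key at a time.
    def replace(match):
        key = match.group(1)
        return str(self.template_values.get(key, ""))
    return re.sub(r"\{\?(\w+)\}", replace, line)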
I'm sincerely sorry to waste all your time here. Thanks for good answers. I am refactoring the code now for the better, and definitely taking a coffee break :D

Can i test all assertions of a test after the first assertion failed?

I have a list of different audio formats to which a certain file should be converted. The conversion function I have written should convert the file and return the path to the newly created file on success, or some failure information otherwise.
self.AUDIO_FORMATS = ({'format': 'wav', 'samplerate': 44100, 'bitdepth': 16},
                      {'format': 'aac', 'samplerate': 44100, 'bitdepth': 16},
                      {'format': 'ogg', 'samplerate': 44100, 'bitdepth': 16},
                      {'format': 'mp3', 'samplerate': 44100, 'bitdepth': 16})
As one possible reason for a conversion failing is a missing library, or some bug in such a library or in my implementation of it, I want to test each of the conversions so that I end up with a list of passed and failed tests, where the failed ones tell me exactly which conversion caused the trouble. This is what I tried (a bit simplified):
def test_convert_to_formats(self):
    for options in self.AUDIO_FORMATS:
        created_file_path, errors = convert_audiofile(self.audiofile, options)
        self.assertFalse(errors)
        self.assertTrue(os.path.isfile(created_file_path))
Now this, of course, aborts the test as soon as the first conversion fails. I could write a test function for each of the conversions, but that would mean writing a new test for each added format, whereas now I just have to add a new dictionary to my AUDIO_FORMATS tuple.
Instead of asserting, store the errors in an array. At the end of your iteration, assert that the errors array is empty and potentially dump the contents of the array as the assertion failure reason.
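A minimal sketch of that idea, reusing the names from the question:
def test_convert_to_formats(self):
    failures = []
    for options in self.AUDIO_FORMATS:
        created_file_path, errors = convert_audiofile(self.audiofile, options)
        # Record the problem instead of failing immediately, so every format gets tried.
        if errors:
            failures.append("%s: %s" % (options['format'], errors))
        elif not os.path.isfile(created_file_path):
            failures.append("%s: no file created" % options['format'])
    self.assertFalse(failures, "Failed conversions:\n" + "\n".join(failures))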
Why not use try...except...?
errors = []
for option in optionlist:
    try:
        assert_and_raise1(option)
        assert_and_raise2(...)
    except Exception as e:
        errors.append("[%s] fail: %s" % (option, e))

for e in errors:
    print(e)
