Why does order matter in kwarg parameters in MagicMock asserts?

I have a test where I am mocking a filter call on a manager. The assert looks like this:
filter_mock.assert_called_once_with(type_id__in=[3, 4, 5, 6], finance=mock_finance, parent_transaction__date_posted=tran_date_posted)
and the code being tested looks like this:
aggregates = Balance.objects.filter(
    finance=self.finance, type_id__in=self.balance_types,
    parent_transaction__date_posted__lte=self.transaction_date_posted
)
I thought that since these are kwargs, order shouldn't matter, but the test is failing even though the values for each pair DO match. Below is the error I am seeing:
AssertionError: Expected call: filter(type_id__in=[3, 4, 5, 6],
parent_transaction__date_posted=datetime.datetime(2015, 5, 29, 16, 22, 59, 532772),
finance=<MagicMock id='...'>)
Actual call: filter(type_id__in=[3, 4, 5, 6], finance=<MagicMock id='...'>,
parent_transaction__date_posted__lte=datetime.datetime(2015, 5, 29, 16, 22, 59, 532772))
What the heck is going on? Kwarg order should not matter, and even if I reorder the kwargs to match what the test asserts, the test still fails.

Your keys are not exactly the same. In your assert_called_once_with you have the key parent_transaction__date_posted, but in your code you are using the key parent_transaction__date_posted__lte. That mismatch is what is causing your test to fail, not the keyword ordering. Here is my own test as a proof of concept:
>>> myobject.test(a=1, b=2)
>>> mock_test.assert_called_with(b=2, a=1)
OK
>>> myobject.test(a=1, b__lte=2)
>>> mock_test.assert_called_with(b=2, a=1)
AssertionError: Expected call: test(a=1, b=2)
Actual call: test(a=1, b__lte=2)
You will need to correct either your test or your code so that they match (include __lte in both or in neither, depending on what you need).
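To see both points in isolation, here is a minimal, self-contained sketch (a throwaway MagicMock rather than the asker's Django manager mock): keyword order is irrelevant, but every key name, including the __lte suffix, must match exactly.
from unittest.mock import MagicMock

filter_mock = MagicMock()
filter_mock(type_id__in=[3, 4], finance='f', parent_transaction__date_posted__lte='2015-05-29')

# Passes: same keys and values, different keyword order.
filter_mock.assert_called_once_with(
    finance='f',
    parent_transaction__date_posted__lte='2015-05-29',
    type_id__in=[3, 4],
)

# Raises AssertionError: the key is missing its __lte suffix.
filter_mock.assert_called_once_with(
    finance='f',
    parent_transaction__date_posted='2015-05-29',
    type_id__in=[3, 4],
)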

Related

Parameters with Default Values and Variadic Parameters Together

I'm trying to learn about the different special function parameters in Python and am somewhat confused by them.
Consider this function bar:
def bar(arg, default_arg=99, *pos_vari_arg, **kywd_vari_arg):
    print(arg, default_arg, pos_vari_arg, kywd_vari_arg)
When I call it with
bar(0, 1, 2, 3, 4, foo=5, test=6)
It prints
0 1 (2, 3, 4) {'foo': 5, 'test': 6}
However, I actually want default_arg to just stay 99 and pass 1, 2, 3, 4 to pos_vari_arg. To achieve this, I tried passing default_arg as a keyword argument instead:
bar(0, 1, 2, 3, 4, foo=5, test=6, default_arg=99)
But that simply gives me a TypeError: bar() got multiple values for argument 'default_arg', because Python is not sure whether my default_arg=99 should go to the parameter default_arg or the dictionary kywd_vari_arg.
I also tried to force default_arg to be a keyword-only parameter by inserting a *,:
def bar(arg, *, default_arg=99, *pos_vari_arg, **kywd_vari_arg):
    print(arg, default_arg, pos_vari_arg, kywd_vari_arg)
But Python throws an error:
File "<string>", line 1
def bar(arg, * , default_arg = 99, *pos_vari_arg, **kywd_vari_arg):
^
SyntaxError: invalid syntax
After some research, I found that *vari_arg is a var-positional parameter and thus cannot be placed after the bare *, hence the syntax error. I also found that all parameters after *vari_arg are forced to be keyword-only parameters, so I do not need the bare * for this at all. Instead, I can just swap the places of the parameters:
def bar(arg, *vari_arg, default_arg=99, **vari_kwarg):
    print(arg, default_arg, vari_arg, vari_kwarg)
Now, bar(0, 1, 2, 3, 4, foo=5, test=6) will print 0 99 (1, 2, 3, 4) {'foo': 5, 'test': 6}, which is what I intended to achieve originally.
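For example, with this signature default_arg can only be supplied by keyword, so it no longer collides with the positional arguments:
bar(0, 1, 2, 3, 4, foo=5, test=6)
# prints: 0 99 (1, 2, 3, 4) {'foo': 5, 'test': 6}
bar(0, 1, 2, 3, 4, default_arg=7, foo=5)
# prints: 0 7 (1, 2, 3, 4) {'foo': 5}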
Question:
Is this the only way to achieve this effect?
Is my understanding above correct, especially the claim that all parameters after *vari_arg are forced to be keyword-only?
What is the best practice to follow if a function has standard parameters, parameters with default values, and variadic parameters?
Thank you!

How to (log) transform *args arguments without losing structure

I am attempting to apply statistical tests to some datasets with a variable number of groups. This causes a problem when I try to log-transform the groups while still being able to call the test function (in this case scipy's kruskal()), which takes a variable number of arguments, one for each group of data.
The code below is an idea of what I want. Naturally stats.kruskal([np.log(i) for i in args]) does not work, as kruskal() does not expect a list of arrays but one argument for each array. How do I perform the log transformation (or any kind of alteration, really) while still being able to use the function?
import scipy.stats as stats
import numpy as np
def t(*args):
    test = stats.kruskal([np.log(i) for i in args])
    return test
a = [11, 12, 4, 42, 12, 1, 21, 12, 6]
b = [1, 12, 4, 3, 14, 8, 8, 6]
c = [2, 2, 3, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 6, 6, 6, 7, 7, 7, 8, 8]
print(t(a, b, c))
IIUC, a * in front of the list you are building in the kruskal call should do the trick:
test = stats.kruskal(*[np.log(i) for i in args])
The asterisk unpacks the list and passes each entry as a separate positional argument to the function being called, i.e. kruskal here.
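Putting that into the original function, the whole fix is a one-character change (a sketch, assuming scipy and numpy are installed and all values are positive so the log is defined):
import numpy as np
import scipy.stats as stats

def t(*args):
    # Unpack the log-transformed groups so kruskal receives one argument per group.
    return stats.kruskal(*[np.log(i) for i in args])

a = [11, 12, 4, 42, 12, 1, 21, 12, 6]
b = [1, 12, 4, 3, 14, 8, 8, 6]
c = [2, 2, 3, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 6, 6, 6, 7, 7, 7, 8, 8]
print(t(a, b, c))  # KruskalResult(statistic=..., pvalue=...)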

How to avoid code duplication in unit tests

Suppose I have a function called "factorial" and I want to test it. I often find myself rewriting unit tests like the one shown below, where I define some test cases (possibly including edge cases) and run the test for each of them. This common pattern, defining the test values and expected outputs and running the tests over them, leaves me with the boilerplate code below. Essentially, I would like one function to which I pass the list of test values, the list of expected values, and the function under test, and let the framework handle the rest. Does something like that exist, and what would speak against such a simplified approach?
import unittest

class TestRecursionAlgorithms(unittest.TestCase):

    def test_factorial(self):
        input_values = [1, 2, 3, 4, 5]
        solutions = [1, 2, 6, 24, 120]
        for idx, (input_value, expected_solution) in enumerate(zip(input_values, solutions)):
            with self.subTest(test_case=idx):
                self.assertEqual(expected_solution, factorial(input_value))
Cheers
You could use a variation of this.
input_values = [1, 2, 3, 4, 5]
solutions = [1, 2, 6, 24, 120]
result = dict(zip(input_values, solutions)) # Key:Value
print(result)
match = {i: k for i, k in result.items() if i == k} # Key Value comparison
print(match)
result:
{1: 1, 2: 2, 3: 6, 4: 24, 5: 120}
{1: 1, 2: 2}
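As to the "does something like that exist" part of the question: pytest's parametrize decorator covers exactly this pattern. A minimal sketch, assuming pytest is installed; the import path for factorial is hypothetical:
import pytest
from mymodule import factorial  # hypothetical module holding the function under test

@pytest.mark.parametrize("input_value, expected", [
    (1, 1),
    (2, 2),
    (3, 6),
    (4, 24),
    (5, 120),
])
def test_factorial(input_value, expected):
    assert factorial(input_value) == expected
Each tuple becomes its own test case, so failures are reported per value, much like subTest does.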

Strange output from ndarray.shape

I am trying to convert the values of a dictionary to a 1D array using np.asarray(dict.values()), but when I try to print the shape of the output array, I run into a problem.
My array looks like this:
dict_values([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26])
but the output of array.shape is:
()
whereas I was expecting (27, 1) or (27,).
After I changed the code to np.asarray(dict.values()).flatten(), the output of array.shape became
(1,)
I have read the documentation of numpy.ndarray.shape, but I can't see why the outputs look like this. Can someone explain it to me? Thanks.
This must be Python 3.
From the docs:
The objects returned by dict.keys(), dict.values() and dict.items()
are view objects. They provide a dynamic view on the dictionary’s
entries, which means that when the dictionary changes, the view
reflects these changes.
The issue is that dict.values() only returns a dynamic view of the dictionary's values, which leads to the behaviour you see.
dict_a = {'1': 1, '2': 2}
res = np.array(dict_a.values())
res.shape   # ()
res
# Output: array(dict_values([1, 2]), dtype=object)
Notice that numpy isn't resolving the view object into the actual integers; it just wraps the whole view in a 0-dimensional array with dtype=object, which is why the shape is ().
To avoid this issue, consume the view to get a list, as follows:
dict_a = {'1': 1, '2': 2}
res = np.array(list(dict_a.values()))
res.shape   # (2,)
res         # array([1, 2])
res.dtype   # dtype('int32')
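If you would rather not build the intermediate list, np.fromiter can consume the view directly (a small alternative sketch; note that you have to supply the dtype yourself):
dict_a = {'1': 1, '2': 2}
res = np.fromiter(dict_a.values(), dtype=int)
res.shape   # (2,)
res         # array([1, 2])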

Python - Counter merges elements from list together

I have a list full of Windows API calls:
listOfSequences = [
    'GetSystemDirectoryA',
    'IsDBCSLeadByte',
    'LocalAlloc',
    'CreateSemaphoreW',
    'CreateSemaphoreA',
    'GlobalAddAtomW',
    'lstrcpynW',
    'LoadLibraryExW',
    'SearchPathW',
    'CreateFileW',
    'CreateFileMappingW',
    'MapViewOfFileEx',
    'GetSystemMetrics',
    'RegisterClipboardFormatW',
    'SystemParametersInfoW',
    'GetDC',
    'GetDeviceCaps',
    'ReleaseDC', ...... and so on .....]
Since some of them occur several times, I wanted to count their occurrences. Thus, I used collections.Counter.
But it concatenates some APIs together:
lCountedAPIs = Counter(listOfSequences)
When I print lCountedAPIs I get the following:
Counter({'IsRectEmptyLocalAlloc': 2,
'DdePostAdvise': 3,
'DispatchMessageWGetModuleFileNameA': 2,
'FindResourceExW': 50318,
'ReleaseDCGetModuleFileNameW': 7,
'DefWindowProcAGetThreadLocale': 1,
'CoGetCallContext': 40,
'CoGetTreatAsClassGetCommandLineA': 1,
'GetForegroundWindowGetSystemDirectoryW': 1,
'GetModuleHandleWGetSystemTimeAsFileTime': 2,
'WaitForSingleObjectExIsChild': 1,
'LoadIconAGetWindowsDirectoryW': 2,
'GlobalFreeLocalAlloc': 10,
'GetMapModeCreateSemaphoreW': 1,
'HeapLock': 11494, <---------- A
'CharNextAGetCurrentProcessId': 11, <---------- B
'RemovePropWGetStartupInfoA': 1,
'GetTickCountGetVersionExW': 55,
So, for example:
HeapLock (see A) was not merged with another API,
but CharNextA was concatenated with GetCurrentProcessId (see B).
Can somebody tell me why this happens and how to fix it?
Thanks in advance & best regards :)
Check your list definition. Python concatenates adjacent string literals, so you must have missed a comma somewhere in the middle:
listOfSequences = [
    'GetSystemDirectoryA',
    'IsDBCSLeadByte',
    'LocalAlloc',
    ...
    'CharNextA'
    # ^ comma missing here
    'GetCurrentProcessId',
    ...
]
This has bitten me several times.
Nothing in Counter does that. You must necessarily have 11 occurrences of 'CharNextAGetCurrentProcessId' in listOfSequences. You can check this by running 'CharNextAGetCurrentProcessId' in listOfSequences.
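A quick sketch tying both answers together: Python joins the adjacent literals at parse time, before Counter ever sees the list, and the membership test confirms it:
from collections import Counter

calls = [
    'HeapLock',
    'CharNextA'            # <- missing comma: the two literals become one string
    'GetCurrentProcessId',
]
print(calls)                                    # ['HeapLock', 'CharNextAGetCurrentProcessId']
print(Counter(calls))                           # Counter({'HeapLock': 1, 'CharNextAGetCurrentProcessId': 1})
print('CharNextAGetCurrentProcessId' in calls)  # True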
