I'm writing a bunch of unit tests for methods in my code, and I've realized I could condense this code considerably if I had some sort of macro that would take care of all the insertions for me.
Here is an example of one such method; the only differences between these methods are the parameters and the functions like c_unit_case and c_test_case, which depend on the method in lib being tested.
def test_case(filepath):
    tests = parsedfile(filepath)
    for unittests in tests:
        print(lib.c_test_case(unittests.params[0]))
I'm looking for something sort of like this.
GENERIC_CASE(method_name, params, filepath)
    (tests) = parsedfile(filepath)
    for unittests in (tests):
        args += unittests.(params)
        print lib.(method_name)(args)
Is it possible to do this type of thing in Python?
Try something like this and see if it works:
def test_case(method_name, filepath, *args):
    tests = parsedfile(filepath)
    print(method_name(*args))
*args in the signature of the function lets you pass in as many additional arguments as you'd like. Calling method_name(*args) unpacks the arguments back into individual parameters.
This should handle a variable number of arguments.
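Here is a tiny, self-contained demonstration of that unpacking (the name call_with and the sample call are made up for illustration):
def call_with(func, *args):
    # *args in the signature collects the extra positional arguments into a tuple;
    # func(*args) unpacks that tuple back into separate positional arguments
    return func(*args)

print(call_with(max, 3, 1, 2))  # prints 3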
This is roughly what it would look like in the form you had:
GENERIC_CASE(method_name, filepath, *params)
    (tests) = parsedfile(filepath)
    for unittests in (tests):
        args += unittests.(*params)
        print lib.(method_name)(args)
I'm not 100% sure if this is what you're looking for.
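To make that concrete, here is a runnable sketch of the whole idea; parsedfile and lib below are stand-ins I invented to mirror the question's setup, so swap in your real ones:
import types

# Stand-ins for the question's lib and parsedfile, just to make the sketch run
lib = types.SimpleNamespace(c_test_case=lambda x: "tested %s" % x)

class ParsedTest:
    def __init__(self, params):
        self.params = params

def parsedfile(filepath):
    return [ParsedTest(["case1"]), ParsedTest(["case2"])]

def generic_case(method_name, filepath, param_index=0):
    # look the lib method up by its name, then call it once per parsed test
    method = getattr(lib, method_name)
    for unittest in parsedfile(filepath):
        print(method(unittest.params[param_index]))

generic_case("c_test_case", "tests.txt")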
I am adding some tests to existing, not-so-test-friendly code. As the title suggests, I need to test whether a complex method actually calls another method, e.g.:
class SomeView(...):
    def verify_permission(self, ...):
        # some logic to verify permission
        ...

    def get(self, ...):
        # some code here that I am not interested in for this test case
        ...
        if some_condition:
            self.verify_permission(...)
        # some other code here that I am not interested in for this test case
        ...
I need to write some test cases to verify that self.verify_permission is called when the condition is met.
Do I need to mock all the way down to the point where self.verify_permission is executed, or do I need to refactor the get() method to abstract the code out and make it more test friendly?
There are a number of points made in the comments that I strongly disagree with, but to your actual question first.
This is a very common scenario. The suggested approach with the standard library's unittest package is to utilize the Mock.assert_called... methods.
I added some fake logic to your example code, just so that we can actually test it.
code.py
class SomeView:
    def verify_permission(self, arg: str) -> None:
        # some logic to verify permission
        print(self, f"verify_permission({arg=})")

    def get(self, arg: int) -> int:
        # some code here that is not of interest to this test case
        ...
        some_condition = arg % 2 == 0
        ...
        if some_condition:
            self.verify_permission(str(arg))
        # some other code here that is not of interest to this test case
        ...
        return arg * 2
test.py
from unittest import TestCase
from unittest.mock import MagicMock, patch

from . import code


class SomeViewTestCase(TestCase):
    def test_verify_permission(self) -> None:
        ...

    @patch.object(code.SomeView, "verify_permission")
    def test_get(self, mock_verify_permission: MagicMock) -> None:
        obj = code.SomeView()

        # Odd `arg`:
        arg, expected_output = 3, 6
        output = obj.get(arg)
        self.assertEqual(expected_output, output)
        mock_verify_permission.assert_not_called()

        # Even `arg`:
        arg, expected_output = 2, 4
        output = obj.get(arg)
        self.assertEqual(expected_output, output)
        mock_verify_permission.assert_called_once_with(str(arg))
You use a patch variant as a decorator to inject a MagicMock instance to replace the actual verify_permission method for the duration of the entire test method. In this example that method has no return value, just a side effect (the print). Thus, we just need to check if it was called under the correct conditions.
In the example, the condition depends directly on the arg passed to get, but this will obviously be different in your actual use case. But this can always be adapted. Since the fake example of get has exactly two branches, the test method calls it twice to traverse both of them.
When doing unit tests, you should always isolate the unit (i.e. function) under testing from all your other functions. That means, if your get method calls other methods of SomeView or any other functions you wrote yourself, those should be mocked out during test_get.
You want your test of get to be completely agnostic to the logic inside verify_permission or any other of your functions used inside get. Those are tested separately. You assume they work "as advertised" for the duration of test_get and by replacing them with Mock instances you control exactly how they behave in relation to get.
Note that the point about mocking out "network requests" and the like is completely unrelated. That is an entirely different but equally valid use of mocking.
Basically: 1) always mock your own functions, and 2) usually mock external/built-in functions with side effects (e.g. network or disk I/O). That is it.
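As a small, generic illustration of point 2 (this sketch is mine, not taken from the question's code; the function names are invented):
import time
from unittest import TestCase
from unittest.mock import patch

def slow_double(x):
    time.sleep(10)  # expensive side effect we do not want in a unit test
    return x * 2

class SlowDoubleTestCase(TestCase):
    @patch("time.sleep")  # replaces time.sleep with a MagicMock for this test only
    def test_slow_double(self, mock_sleep):
        self.assertEqual(6, slow_double(3))
        mock_sleep.assert_called_once_with(10)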
Also, writing tests for existing code absolutely has value. Of course it is better to write tests alongside your code. But sometimes you are just put in charge of maintaining a bunch of existing code that has no tests. If you want/can/are allowed to, you can refactor the existing code and write your tests in sync with that. But if not, it is still better to add tests retroactively than to have no tests at all for that code.
And if you write your unit tests properly, they still do their job, if you or someone else later decides to change something about the code. If the change breaks your tests, you'll notice.
As for the exception hack to interrupt the tested method early... Sure, if you want. It's lazy and calls into question the whole point of writing tests, but you do you.
No, seriously, that is a horrible approach. Why on earth would you test just part of a function? If you are already writing a test for it, you may as well cover it to the end. And if it is so complex that it has dozens of branches and/or calls 10 or 20 other custom functions, then yes, you should definitely refactor it.
I am currently developing an application (my_app) that uses logic from another Python package (internal_package) that is being developed in parallel.
Let's say I use a function func1 from internal_package in my code. In internal_package, it is defined like this:
def func1(a, b):
    # do something
    ...
However, if the function signature of func1 (i.e. the parameters with which the function is called) changes, let's say to
def func1(a, b, c):
    # do something
    ...
my code would of course raise an exception, since I did not pass parameter c.
To become aware of that immediately, I want to write a test in pytest. In my understanding, this kind of test is not a unit test, but an integration test.
My approach would be something like this:
import inspect
import internal_package

def test_func1_signature():
    expected_signature = ?  # not sure how to correctly provide the expected signature here
    actual_signature = inspect.signature(internal_package.func1)
    assert actual_signature == expected_signature
I would expect this to be a common issue, but I was not able to find anything related.
What would be the most pythonic way of performing this sort of test?
In case anyone is interested, I ended up using getfullargspec from the built-in inspect library. The documentation says:
Get the names and default values of a Python function’s parameters. A named tuple is returned:
FullArgSpec(args, varargs, varkw, defaults, kwonlyargs, kwonlydefaults, annotations)
As I am interested in the args part, I retrieve them from func1 as follows:
inspect.getfullargspec(func1)[0]
Which then returns ['a', 'b']. I can then write a simple test in pytest like this:
import inspect
from internal_package import func1

def test_func1_signature():
    expected_signature = ['a', 'b']  # I specify what I expect
    actual_signature = inspect.getfullargspec(func1)[0]
    assert actual_signature == expected_signature
If anyone knows a better/more pythonic/more correct way, please let me know.
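One alternative that compares the whole signature rather than just the argument names is to build the expected inspect.Signature explicitly and compare it against inspect.signature(func1). A sketch (func1 is defined inline here so the example is self-contained):
import inspect

def func1(a, b):
    ...

def test_func1_signature():
    # build the expected Signature from the expected parameter names
    expected = inspect.Signature([
        inspect.Parameter(name, inspect.Parameter.POSITIONAL_OR_KEYWORD)
        for name in ("a", "b")
    ])
    assert inspect.signature(func1) == expected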
In Python 3.4, I want to set up a very simple dispatch table for testing purposes. The idea is to have a dictionary with the key being a string with the name of the function to be tested and the value being the test function itself.
For example:
myTestList = (
    "myDrawFromTo",
    "myDrawLineDir"
)

myTestDict = {
    "myDrawFromTo": test_myDrawFromTo,
    "myDrawLineDir": test_myDrawLineDir
}

for myTest in myTestList:
    result = myTestDict[myTest]()
The idea is that I have a list of function names someplace. In this example, I manually create a dictionary that maps those names to the names of test functions. The test function names are a simple extension of the function name. I'd like to compute the entire dictionary from the list of function names (here it is myTestList).
Alternately, if I can do the same thing without the dictionary, that'd be fine, too. I tried just building a new string from the entries in myTestList and then using locals() to set up the call, but didn't have any luck. The dictionary idea came from the Python 3.x documentation.
There are two parts to the problem.
The easy part is just prefixing 'test_' onto each string:
tests = {test: 'test_' + test for test in myTestList}
The harder part is actually looking up the functions by name. That kind of thing is usually a bad idea, but you've hit on one of the cases (generating tests) where it often makes sense. You can do this by looking them up in your module's global dictionary, like this:
tests = {test: globals()['test_'+test] for test in myTestList}
There are variations on the same idea if the tests live somewhere other than the module's global scope. For example, it might be a good idea to make them all methods of a class, in which case you'd do:
tester = TestClass()
tests = {test: getattr(tester, 'test_'+test) for test in myTestList}
(Although more likely that code would be inside TestClass, so it would be using self rather than tester.)
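For illustration, here is a runnable sketch of that in-class variant (the test method bodies and the run_tests helper are invented):
class TestClass:
    def test_myDrawFromTo(self):
        print("testing myDrawFromTo")

    def test_myDrawLineDir(self):
        print("testing myDrawLineDir")

    def run_tests(self, names):
        # look each test method up on self by its computed name
        for name in names:
            getattr(self, 'test_' + name)()

TestClass().run_tests(["myDrawFromTo", "myDrawLineDir"])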
If you don't actually need the dict, of course, you can change the comprehension to an explicit for statement:
for test in myTestList:
    globals()['test_' + test]()
One more thing: Before reinventing the wheel, have you looked at the testing frameworks built into the stdlib, or available on PyPI?
abarnert's answer seems useful, but to answer your original question of how to call all test functions given a list of function names:
def test_f():
    print("testing f...")

def test_g():
    print("testing g...")

myTestList = ['f', 'g']

for funcname in myTestList:
    eval('test_' + funcname + '()')
I have written several functions that run sequentially, each one taking as its input the output of the previous function, so in order to run them I have to run this line of code:
make_list(cleanup(get_text(get_page(URL))))
I just find that ugly and inefficient. Is there a better way to do sequential function calls?
Really, this is the same as any case where you want to refactor commonly-used complex expressions or statements: just turn the expression or statement into a function. The fact that your expression happens to be a composition of function calls doesn't make any difference (but see below).
So, the obvious thing to do is to write a wrapper function that composes the functions together in one place, so everywhere else you can make a simple call to the wrapper:
def get_page_list(url):
    return make_list(cleanup(get_text(get_page(url))))

things = get_page_list(url)
stuff = get_page_list(another_url)
spam = get_page_list(eggs)
If you don't always call the exact same chain of functions, you can factor out the pieces that you frequently call together. For example:
def get_clean_text(page):
    return cleanup(get_text(page))

def get_clean_page(url):
    return get_clean_text(get_page(url))
This refactoring also opens the door to making the code a bit more verbose but a lot easier to debug, since it only appears once instead of multiple times:
def get_page_list(url):
    page = get_page(url)
    text = get_text(page)
    cleantext = cleanup(text)
    return make_list(cleantext)
If you find yourself needing to do exactly this kind of refactoring of composed functions very often, you can always write a helper that generates the refactored functions. For example:
from functools import wraps

def compose1(*funcs):
    @wraps(funcs[0])
    def composed(arg):
        for func in reversed(funcs):
            arg = func(arg)
        return arg
    return composed
get_page_list = compose1(make_list, cleanup, get_text, get_page)
If you want a more complicated compose function (that, e.g., allows passing multiple args/return values around), it can get a bit complicated to design, so you might want to look around on PyPI and ActiveState for the various existing implementations.
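As one sketch of such a variant (my own design, under the assumption that only the innermost function ever needs multiple arguments):
from functools import wraps

def compose(*funcs):
    # every function except the last must take one argument;
    # the last (innermost) function may take anything
    *rest, innermost = funcs

    @wraps(innermost)
    def composed(*args, **kwargs):
        result = innermost(*args, **kwargs)
        for func in reversed(rest):
            result = func(result)
        return result
    return composed

describe_max = compose(str, max)
print(describe_max(3, 1, 2))  # prints 3, i.e. str(max(3, 1, 2))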
You could try something like this. I always like breaking up train wrecks (the book "Clean Code" calls these nested calls train wrecks). This is easier to read and debug. Remember, you probably spend twice as long reading your code as writing it, so make it easy to read. You will thank yourself later.
page = get_page(URL)
page_text = get_text(page)
make_list(cleanup(page_text))

# you can also encapsulate that into its own function
def build_page_list_from_url(url):
    page = get_page(url)
    page_text = get_text(page)
    return make_list(cleanup(page_text))
Options:
Refactor: implement this series of function calls as one, aptly-named method.
Look into decorators. They're syntactic sugar for 'chaining' functions in this way, e.g. implement cleanup and make_list as decorators, then decorate get_text with them (see the sketch after this list).
Compose the functions. See code in this answer.
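For option 2, here is a minimal sketch of the decorator idea; the bodies of cleanup, make_list, and get_text are stand-ins I made up so the example runs:
from functools import wraps

def pipe_through(outer):
    # decorator factory: feed the decorated function's result into outer
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            return outer(func(*args, **kwargs))
        return wrapper
    return decorator

def cleanup(text):
    return text.strip()

def make_list(text):
    return text.split()

@pipe_through(make_list)
@pipe_through(cleanup)
def get_text(page):
    return "  contents of %s  " % page

print(get_text("index.html"))  # prints ['contents', 'of', 'index.html']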
You could shorten constructs like that with something like the following:
class ChainCalls(object):
    def __init__(self, *funcs):
        self.funcs = funcs

    def __call__(self, *args, **kwargs):
        result = self.funcs[-1](*args, **kwargs)
        for func in self.funcs[-2::-1]:
            result = func(result)
        return result
def make_list(arg): return 'make_list(%s)' % arg
def cleanup(arg): return 'cleanup(%s)' % arg
def get_text(arg): return 'get_text(%s)' % arg
def get_page(arg): return 'get_page(%r)' % arg
mychain = ChainCalls(make_list, cleanup, get_text, get_page)
print( mychain('http://is.gd') )
Output:
make_list(cleanup(get_text(get_page('http://is.gd'))))
I have been doing a lot of searching, and I don't think I've really found what I have been looking for. I will try my best to explain what I am trying to do, and hopefully there is a simple solution, and I'll be glad to have learned something new.
This is ultimately what I am trying to accomplish: using nosetests, decorate some test cases with the attribute selector plugin, then execute the test cases that match a criterion via the -a switch on the command line. The attribute values for the tests that are executed are then stored in an external location. The command-line call I'm using looks like this:
nosetests \testpath\ -a attribute='someValue'
I have also created a customized nosetests plugin, which stores the test cases' attributes and writes them to an external location. The idea is that I can select a batch of tests, and by storing their attributes I can filter on these results later for reporting purposes. I am accessing the method attributes in my plugin by overriding the "wantMethod" method with code similar to the following:
def set_attribs(self, method, attribute):
    if hasattr(method, attribute):
        if not self.method_attributes.has_key(method.__name__):
            self.method_attributes[method.__name__] = {}
        self.method_attributes[method.__name__][attribute] = getattr(method, attribute)

def wantMethod(self, method):
    self.set_attribs(method, "attribute1")
    self.set_attribs(method, "attribute2")
    pass
I have this working for pretty much all the tests, except for one case where the test is using the "yield" keyword. What is happening is that the generated methods execute fine, but the method attributes are empty for each of the generated functions.
Below is an example of what I am trying to achieve. The test below retrieves a list of values and, for each of those values, yields the results from another function:
@attr(attribute1='someValue', attribute2='anotherValue')
def sample_test_generator(self):
    for (key, value) in _input_dictionary.items():
        f = partial(self._do_test, key, value)
        f.attribute1 = 'someValue'
        yield (lambda x: f(), key)

def _do_test(self, input1, input2):
    # Some code
From what I have read, and think I understand, when yield is called, it creates a new callable which then gets executed. I have been trying to figure out how to retain the attribute values from my sample_test_generator method, but I have not been successful. I thought I could create a partial function and then add the attribute to that, but no luck. The tests execute without any errors; it just seems that, from my plugin's perspective, the method attributes aren't present, so they don't get recorded.
I realize this is a pretty involved question, but I wanted to make sure that the context for what I am trying to achieve is clear. I have been trying to find information that could help me with this particular case, but I feel like I've hit a stumbling block, so I would really like to ask the experts for some advice.
Thanks.
** Update **
After reading through the feedback and playing around some more, it looks like if I modified the lambda expression, it would achieve what I am looking for. In fact, I didn't even need to create the partial function:
def sample_test_generator(self):
    for (key, value) in _input_dictionary.items():
        yield (lambda: self._do_test)
The only downside to this approach is that the test name does not change. Playing around some more, it looks like when a test generator is used, nosetests actually changes the test name in the results based on the keywords it contains. The same thing was happening when I was using the lambda expression with a parameter.
For example:
Using a lambda expression with a parameter:
yield (lambda x: self._do_test, "value1")
In the nosetests plugin, when you access the test case name, it is displayed as "sample_test_generator(value1)".
Using a lambda expression without a parameter:
yield (lambda: self._do_test)
The test case name in this case is "sample_test_generator". In my example above, if there are multiple values in the dictionary, the yield call occurs multiple times, yet the test name always remains "sample_test_generator". This is not as bad as getting unique test names but then not being able to store the attribute values at all. I will keep playing around, but thanks for the feedback so far!
EDIT
I forgot to come back and provide my final update on how I got this to work in the end. There was a little confusion on my part at first, but after I looked through it some more, I figured out that it had to do with how the tests are recognized:
My original implementation assumed that every test that gets picked up for execution goes through the "wantMethod" call from the plugin's base class. This is not true when "yield" is used to generate the test, because at that point the test method has already passed the "wantMethod" call.
However, once the test case is generated through the "yield" call, it does go through the "startTest" call from the plugin base class, and this is where I was finally able to store the attribute successfully.
So in a nutshell, my test execution order looked like this:
nose -> wantMethod(method_name) -> yield -> startTest(yielded_test_name)
In my override of the startTest method, I have the following:
def startTest(self, test):
    # If a test is spawned by using the 'yield' keyword, its name is the parent
    # test's name followed by a '(' character.
    # Example: if the parent test is "smoke_test", the test generated from yield
    # would be "smoke_test('input')".
    test_name = str(test)  # derive the test's display name from the test object
    parent_test_name = test_name.split('(')[0]
    if self.method_attributes.has_key(test_name):
        self._test_attrib = self.method_attributes[test_name]
    elif self.method_attributes.has_key(parent_test_name):
        self._test_attrib = self.method_attributes[parent_test_name]
    else:
        self._test_attrib = None
With this implementation, along with my override of wantMethod, each test spawned by the parent test case also inherits attributes from the parent method, which is what I needed.
Again, thanks to all who sent replies. This was quite a learning experience.
Would this fix your name issue?
def _actual_test(x, y):
    assert x == y

def test_yield():
    _actual_test.description = "test_yield_%s_%s" % (5, 5)
    yield _actual_test, 5, 5
    _actual_test.description = "test_yield_%s_%s" % (4, 8)  # fail
    yield _actual_test, 4, 8
    _actual_test.description = "test_yield_%s_%s" % (2, 2)
    yield _actual_test, 2, 2
The rename survives @attr too.
Does this work?
@attr(attribute1='someValue', attribute2='anotherValue')
def sample_test_generator(self):
    def get_f(f, key):
        return lambda x: f(), key

    for (key, value) in _input_dictionary.items():
        f = partial(self._do_test, key, value)
        f.attribute1 = 'someValue'
        yield get_f(f, key)

def _do_test(self, input1, input2):
    # Some code
The problem is that the local variables change after you create the lambda.
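To see the late-binding issue in isolation (a generic sketch, nothing nose-specific):
# every lambda closes over the same loop variable, which is 2 once the loop ends
late = [lambda: i for i in range(3)]
print([f() for f in late])    # prints [2, 2, 2]

# a default argument captures the current value at definition time instead
early = [lambda i=i: i for i in range(3)]
print([f() for f in early])   # prints [0, 1, 2]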