I am writing a test script which contains different functions for different tests. I would like to be able to randomly select a test to run. I have already achieved this with the following function...
test_options = ("AOI", "RMODE")

def random_test(test_options, control_obj):
    ran_test_opt = choice(test_options)
    if ran_test_opt.upper() == "AOI":
        logging.debug("Random AOI Test selected")
        random_aoi()
    elif ran_test_opt.upper() == "RMODE":
        logging.debug("Random Read Mode Test selected")
        random_read_mode(control_obj)
However, I want to be able to add further test functions without modifying the random test selection function; all I should have to do is add the new test function to the script. I would also like a way of selecting which tests are included in the random selection, which is what the test_options variable is for. How would I go about changing my random selection function to achieve this?
EDIT: I got around the fact that the tests might need different arguments by including them all in a test class. All the arguments are passed into __init__, and the test functions refer to them using "self." when they need a specific variable...
class Test(object):
    """A class that contains and keeps track of the tests and the different modes"""
    def __init__(self, parser, control_obj):
        self.parser = parser
        self.control_obj = control_obj

    def random_test(self):
        test_options = []
        for name in self.parser.options('Test_Selection'):
            if self.parser.getboolean('Test_Selection', name):
                test_options.append(name.lower())
        ran_test_opt = choice(test_options)
        ran_test_func = getattr(self, ran_test_opt)
        ran_test_func()

    #### TESTS ####
    def random_aoi(self):
        logging.info("Random AOI Test")
        self.control_obj.random_readout_size()

    def random_read_mode(self):
        logging.info("Random Readout Mode Test")
        self.control_obj.random_read_mode()
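For context, a self-contained sketch of how this class might be driven from a config file; the in-memory ConfigParser section stands in for an .ini file, and the section/option names are taken from the snippet above:

```python
import logging
from random import choice

try:
    from configparser import ConfigParser   # Python 3
except ImportError:
    from ConfigParser import ConfigParser   # Python 2

class Test(object):
    """Holds the tests; random_test runs one that is enabled in the config."""
    def __init__(self, parser, control_obj):
        self.parser = parser
        self.control_obj = control_obj

    def random_test(self):
        # option names whose value is true are eligible for selection
        enabled = [name for name in self.parser.options('Test_Selection')
                   if self.parser.getboolean('Test_Selection', name)]
        chosen = choice(enabled)
        getattr(self, chosen)()
        return chosen

    def random_aoi(self):
        logging.info("Random AOI Test")

    def random_read_mode(self):
        logging.info("Random Readout Mode Test")

# hypothetical in-memory stand-in for a [Test_Selection] section in an .ini file
parser = ConfigParser()
parser.add_section('Test_Selection')
parser.set('Test_Selection', 'random_aoi', 'true')
parser.set('Test_Selection', 'random_read_mode', 'false')

chosen = Test(parser, control_obj=None).random_test()  # only random_aoi is enabled
```

Because the eligible tests are looked up by name with getattr, adding a new test only requires a new method and a matching config option.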
You can create a list of functions in Python which you can call:

test_options = (random_aoi, random_read_mode)

def random_test(test_options, control_obj):
    ran_test_opt = choice(test_options)
    ran_test_opt(control_obj)  # call the randomly selected function
You have to make each function take the same arguments this way, so you can call them all the same way.
If you need to have some human readable names for the functions, you can store them in a dictionary, together with the function. I expect that you pass the control_obj to every function.
EDIT: This seems to be identical to the answer by @Ikke, but uses a dictionary instead of a list of functions.
>>> test_options = {'AOI': random_aoi, 'RMODE': random_read_mode}
>>> def random_test(test_options, control_obj):
...     ran_test_opt = test_options[choice(test_options.keys())]
...     ran_test_opt(control_obj)
Or you could pick out a test from test_options.values(). The 'extra' call to list() is because in python 3.x, dict.values() returns an iterator.
>>> ran_test_opt = choice(list(test_options.values()))
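Putting the dictionary approach together, a runnable sketch (the two test functions are stand-ins, not the asker's real tests):

```python
from random import choice

def random_aoi(control_obj):
    return 'aoi:%s' % control_obj

def random_read_mode(control_obj):
    return 'rmode:%s' % control_obj

test_options = {'AOI': random_aoi, 'RMODE': random_read_mode}

def random_test(test_options, control_obj):
    # pick a name at random, then call the function stored under it;
    # list() is needed in Python 3, where dict views aren't indexable
    name = choice(list(test_options))
    return test_options[name](control_obj)

result = random_test(test_options, 'ctrl')
```

Adding a test is then just one new dictionary entry; random_test never changes.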
I'm gonna go out on a limb here and suggest that you actually use a real unit-testing framework for this. Python provides one -- conveniently named unittest. To randomly run a test, it would work like this:
import unittest

class Tests(unittest.TestCase):
    def test_something(self):
        ...

    def test_something_else(self):
        ...

if __name__ == '__main__':
    tests = list(unittest.defaultTestLoader.loadTestsFromTestCase(Tests))

    import random
    # repeat as many times as you would like...
    test = random.choice(tests)
    print(test)
    test()
To run all the tests instead, you would use unittest.main() -- I suppose you could toggle which happens via a simple command-line switch.
This has the huge advantage that you don't need to keep an up-to-date list of tests separate from the tests themselves. If you want to add a new test, just add one and unittest will find it (as long as the method name starts with test). It will also tell you information about which test runs and which one fails, etc.
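One way that command-line toggle might look (the --random flag name and the stub tests are made up for illustration):

```python
import random
import sys
import unittest

class Tests(unittest.TestCase):
    def test_something(self):
        self.assertTrue(True)

    def test_something_else(self):
        self.assertEqual(1 + 1, 2)

def run_one_random_test():
    # load the test methods from the class and run a single random one
    tests = list(unittest.defaultTestLoader.loadTestsFromTestCase(Tests))
    test = random.choice(tests)
    result = test()  # running a TestCase instance returns a TestResult
    return result.wasSuccessful()

def main(argv):
    if '--random' in argv:       # hypothetical flag name
        return run_one_random_test()
    unittest.main()              # otherwise run the whole suite

ok = main(['--random'])
```

The loader discovers every method whose name starts with "test", so new tests join the random pool automatically.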
If you wrapped your test selecting function and the tests themselves in a class, you could do the following:
from random import choice

class Test(object):
    """ Randomly selects a test from the methods with 'random' in the name """
    def random_aoi(self):
        print 'aoi'

    def random_read_mode(self):
        print 'read_mode'

    def random_text(self):
        print 'test'

    # add as many tests as needed here

    # important that this function doesn't have 'random' in the name (in my code anyway)
    def run_test(self):  # you can add control_obj to the args
        methods = [m for m in dir(self) if callable(getattr(self, m)) and 'random' in m]
        test = choice(methods)
        getattr(self, test)()  # you would add control_obj between these parens also

app = Test()
app.run_test()
This makes it easy to add tests without the need to change any other code.
Here is info on getattr
In addition to the other options, look at functools.partial. It allows you to create closures over functions:

from functools import partial

def test_function_a(x, y):
    return x + y

def other_function(b, c=5, d=4):
    return (b * c) ** d

def my_test_functions(some_input):
    funclist = (partial(test_function_a, x=some_input, y=4),
                partial(other_function, b=some_input, d=9))
    return funclist

random.choice(my_test_functions(some_input))()
This lets you normalize the argument list for each test function.
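A toy demonstration of that normalization (function names and values are invented):

```python
import random
from functools import partial

def add(x, y):
    return x + y

def power(b, c=5, d=4):
    return (b * c) ** d

def make_tests(some_input):
    # each entry becomes a zero-argument callable,
    # regardless of the original signatures
    return (partial(add, x=some_input, y=4),
            partial(power, b=some_input, d=2))

tests = make_tests(3)
chosen = random.choice(tests)()   # either add(3, 4) or power(3, d=2)
```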
# Message Object Creation File
messageList = []

def getMessageObjectA():
    msg = MessageCreator(msgAttribute1, msgAttribute2)
    messageList.append(msg)
    return msg

def getMessageObjectB():
    msg = MessageCreator(msgAttribute3, msgAttribute4)
    messageList.append(msg)
    return msg

def getMessageObjectC():
    msg = MessageCreator(msgAttribute5, msgAttribute6)
    messageList.append(msg)
    return msg

def clearMessages():
    for msg in messageList:
        pass  # logic to clear messages
# Test Script #1
import MessageObjects as MsgObj

a = MsgObj.getMessageObjectA()
c = MsgObj.getMessageObjectC()
# Do stuff
MsgObj.clearMessages()
# Do more stuff
# Test Script #223423423
import MessageObjects as MsgObj

e = MsgObj.getMessageObjectE()
u = MsgObj.getMessageObjectU()
y = MsgObj.getMessageObjectY()
# Do stuff
MsgObj.clearMessages()
# Do more stuff
In the actual code, I will have over a hundred getMessageObject() functions. And in certain places, I will only call some of those getMessageObject() functions depending on what is needed, which is why I have those getters.
Adding the line messageList.append(msg) inside every function invites human error and needlessly lengthens the source file.
How do I have every getter call messageList.append(msg)? Is there some fancy way to wrap all of this logic in a wrapper function that I'm not thinking of? I'm pretty sure decorators won't work because they don't see the variables inside the function, and I would have to repeat the decorator on every function I make.
NOTE: Answer has to be in Python2. Can't use Python3 at work.
NOTE: The intent is for these getters() to be inside a Constants-like file, where our many different test scripts call these getters.
The simplest solution is just to generalize the function. The only difference between the various getItem functions is the arguments passed to GenerateItem, so just pass that data in to getItem:

def getItem(arg1, arg2):
    item = GenerateItem(arg1, arg2)
    itemList.append(item)
    return item

a = getItem(val1, val2)
b = getItem(val3, val4)
If you need functions with specific names, just create aliases. This can be done easily using functools.partial:
from functools import partial

getItemA = partial(getItem, val1, val2)
getItemB = partial(getItem, val3, val4)

a = getItemA()
b = getItemB()
The arguments are partially applied to getItem, and a 0-arity function is returned and placed in the alias.
Of course, manually hardcoding all of these aliases is itself a source of error. You may want to reconsider how things are set up if this is necessary.
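As an aside on the original question: a registering decorator is in fact one way to avoid repeating the append in every getter, despite the worry that decorators "won't work". A sketch with stand-in message values (there is no real MessageCreator here):

```python
messageList = []

def registered(func):
    """Decorator: append whatever the getter returns to messageList."""
    def wrapper(*args, **kwargs):
        msg = func(*args, **kwargs)
        messageList.append(msg)
        return msg
    return wrapper

@registered
def getMessageObjectA():
    return ('attr1', 'attr2')   # stand-in for MessageCreator(...)

@registered
def getMessageObjectB():
    return ('attr3', 'attr4')

a = getMessageObjectA()
b = getMessageObjectB()
```

The decorator only needs the getter's return value, not its internal variables, so each getter stays a one-liner plus a decorator line.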
Why should the itemList be populated inside getters in the first place?
What you can do is, when you call such a getter, add a line appending the respective item to the list:

a = getItemA()
itemList.append(a)
I have a program that defines the function verboseprint to either print or not print to the screen based on a boolean:
# define verboseprint based on whether we're running in verbose mode or not
if in_verbose_mode:
    def verboseprint(*args):
        for arg in args:
            print arg,
        print
    print "Done defining verbose print."
else:
    # if we're not in verbose mode, do nothing
    verboseprint = lambda *a: None
My program uses multiple files, and I'd like to use this definition of verboseprint in all of them. All of the files will be passed the in_verbose_mode boolean. I know that I could just define verboseprint by itself in a file and then import it into all of my other files, but I need the function definition to be able to be declared two different ways based on a boolean.
So in summary: I need a function that can declare another function in two different ways, that I can then import into multiple files.
Any help would be appreciated.
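One common pattern is a factory function that builds verboseprint once from the boolean; every file can then import the factory and call it with the flag it was passed. A sketch (the function also returns the printed line, purely so the behaviour is easy to check):

```python
def make_verboseprint(in_verbose_mode):
    """Return a verboseprint function whose behaviour is fixed at creation."""
    if in_verbose_mode:
        def verboseprint(*args):
            line = ' '.join(str(arg) for arg in args)
            print(line)
            return line
    else:
        def verboseprint(*args):
            return None  # silently ignore everything
    return verboseprint

verboseprint = make_verboseprint(True)
verboseprint('hello', 42)  # prints: hello 42
```

Each file does its own verboseprint = make_verboseprint(in_verbose_mode), so the two definitions live in one importable place.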
You should look up the factory design pattern. It's basically designed to do exactly what you are talking about, though with classes rather than functions. That said, you can get the behavior you want by having a class that returns one of two possible objects (based on your boolean). Both have the same method, but it operates differently (just like your two functions).
# A.py
class A:
    def method(self):
        # do things one way
        pass

# B.py
class B:
    def method(self):
        # do things another way
        pass

# Factory.py
import A, B

class Factory:
    def __init__(self, flag):
        # pick the class based on the boolean, then instantiate it
        self.printer = A.A() if flag else B.B()

    def do_thing(self):
        self.printer.method()

# main.py
from Factory import Factory

fac = Factory(True)
fac.do_thing()   # does A's thing
fac = Factory(False)
fac.do_thing()   # does B's thing
Usually you don't want to define a function this way. I think the easier approach is to pass the boolean as a function parameter and choose the behavior based on it:
def verboseprint(mode, *args):
    # only print when in verbose mode; otherwise do nothing
    if mode:
        for arg in args:
            print arg,
        print
And then import this function to use in your other files.
I wrote code for a data analysis project, but it's becoming unwieldy and I'd like to find a better way of structuring it so I can share it with others.
For the sake of brevity, I have something like the following:
def process_raw_text(txt_file):
    # do stuff
    return token_text

def tag_text(token_text):
    # do stuff
    return tagged

def bio_tag(tagged):
    # do stuff
    return bio_tagged

def restructure(bio_tagged):
    # do stuff
    return restructured

print(restructured)
Basically I'd like the program to run through all of the functions sequentially and print the output.
In looking into ways to structure this, I read up on classes like the following:
class Calculator():
    def add(x, y):
        return x + y

    def subtract(x, y):
        return x - y
This seems useful when structuring a project to allow individual functions to be called separately, such as the add function with Calculator.add(x,y), but I'm not sure it's what I want.
Is there something I should be looking into for a sequential run of functions (that are meant to structure the data flow and provide readability)? Ideally, I'd like all functions to be within "something" I could call once, that would in turn run everything within it.
Chain together the output from each function as the input to the next:
def main():
    print restructure(bio_tag(tag_text(process_raw_text(txt_file))))

if __name__ == '__main__':
    main()
@SvenMarnach makes a nice suggestion. A more general solution is to realise that this idea of repeatedly using the output as the input for the next function in a sequence is exactly what the reduce function does. We want to start with the input txt_file:

def main():
    pipeline = [process_raw_text, tag_text, bio_tag, restructure]
    print reduce(lambda data, func: func(data), pipeline, txt_file)
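A runnable version of the reduce-style pipeline with placeholder stages (the stage bodies are invented):

```python
from functools import reduce  # a builtin in Python 2; needs the import in Python 3

def process_raw_text(txt):
    return txt.split()

def tag_text(tokens):
    return [(tok, 'TAG') for tok in tokens]

def bio_tag(tagged):
    return [(tok, tag, 'O') for tok, tag in tagged]

def restructure(bio_tagged):
    return {tok: (tag, bio) for tok, tag, bio in bio_tagged}

def main(txt_file):
    pipeline = [process_raw_text, tag_text, bio_tag, restructure]
    # feed each stage's output into the next stage
    return reduce(lambda data, func: func(data), pipeline, txt_file)

result = main('some raw text')
```

Adding a stage is just one more entry in the pipeline list.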
There's nothing preventing you from creating a class (or set of classes) that represents the workflow you want to manage, with an implementation that calls the functions you need in sequence.
class DataAnalyzer():
    # ...
    def your_method(self, **kwargs):
        # call sequentially, or use the 'magic' proposed by others,
        # but internally to your class and not visible to clients
        pass
The functions themselves could remain private within the module, since they seem to be implementation details.
You can implement a simple dynamic pipeline using just modules and functions.
my_module.py:

def p01_process_raw_text(txt_file):
    # do stuff
    return token_text

def p02_tag_text(token_text):
    # do stuff
    return tagged

my_runner.py:

import re
import my_module

if __name__ == '__main__':
    # Python names can't start with a digit, hence the sortable 'pNN_' prefix
    funcs = sorted(x for x in my_module.__dict__ if re.match(r'p\d+_', x))
    data = initial_data
    for f in funcs:
        data = my_module.__dict__[f](data)
I'd like to modify the arguments passed to a method in a module, as opposed to replacing its return value.
I've found a way around this, but it seems like something useful and has turned into a lesson in mocking.
module.py
from third_party import ThirdPartyClass
ThirdPartyClass.do_something('foo', 'bar')
ThirdPartyClass.do_something('foo', 'baz')
tests.py
@mock.patch('module.ThirdPartyClass.do_something')
def test(do_something):
    # Instead of directly overriding its return value,
    # I'd like to modify the arguments passed to this function.

    # change return value, no matter the inputs
    do_something.return_value = 'foo'

    # change return value based on inputs, but with no access to the original function
    do_something.side_effect = lambda x, y: (y, x)

    # how can I wrap do_something, so that I can modify its inputs
    # and pass them on to the original function? much like a decorator?
I've tried something like the following, but not only is it repetitive and ugly, it doesn't work. After some pdb introspection, I'm wondering if it's simply down to how this third-party library works, as I do see the original functions being called successfully when I drop a pdb inside the side_effect.
Either that, or some auto mocking magic I'm just not following that I'd love to learn about.
def test():
    from third_party import ThirdPartyClass
    original_do_something = ThirdPartyClass.do_something
    with mock.patch('module.ThirdPartyClass.do_something') as mocked_do_something:
        def side_effect(arg1, arg2):
            return original_do_something(arg1, 'overridden')
        mocked_do_something.side_effect = side_effect
        # execute module.py
Any guidance is appreciated!
You may want to use the wraps parameter of the mock call (see the docs for reference). This way the original function will be called, but it will still have everything from the Mock interface.
So to change the parameters passed to the original function, you could try it like this:
org.py:

def func(x):
    print(x)

main.py:

from unittest import mock

import org

of = org.func

def wrapped(a):
    of('--{}--'.format(a))

with mock.patch('org.func', wraps=wrapped):
    org.func('x')
    org.func.assert_called_with('x')
result:
--x--
The trick is to pass the original underlying function that you still want to access as a parameter to the function.
Eg, for race condition testing, have tempfile.mktemp return an existing pathname:
def mock_mktemp(*, orig_mktemp=tempfile.mktemp, **kwargs):
    """Ensure mktemp returns an existing pathname."""
    temp = orig_mktemp(**kwargs)
    open(temp, 'w').close()
    return temp
Above, orig_mktemp is evaluated when the function is declared, not when it is called, so all invocations will have access to the original method of tempfile.mktemp via orig_mktemp.
I used it as follows:
@unittest.mock.patch('tempfile.mktemp', side_effect=mock_mktemp)
def test_retry_on_existing_temp_path(self, mock_mktemp):
    # Simulate race condition: creation of temp path after tempfile.mktemp
    ...
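The early-binding behaviour that makes this work can be seen in isolation (toy names):

```python
value = 'original'

def snapshot(current=value):
    # 'current' was bound when the def statement ran,
    # not when snapshot() is called
    return current

value = 'replaced'
```

Even after the global is rebound, the default argument still holds the original object, which is exactly how orig_mktemp keeps a handle on the unpatched tempfile.mktemp.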
I have a dictionary of data, the key is the file name and the value is another dictionary of its attribute values. Now I'd like to pass this data structure to various functions, each of which runs some test on the attribute and returns True/False.
One approach would be to call each function one by one explicitly from the main code. However, I can instead do something like this:
# MYmodule.py
class Mymodule:
    def MYfunc1(self):
        ...

    def MYfunc2(self):
        ...

# main.py
import Mymodule
...
# fill the data structure
...
# Now call all the functions in Mymodule one by one
for funcs in dir(Mymodule):
    if funcs[:2] == 'MY':
        result = Mymodule.__dict__.get(funcs)(dataStructure)
The advantage of this approach is that implementation of main class needn't change when I add more logic/tests to MYmodule.
Is this a good way to solve the problem at hand? Are there better alternatives to this solution?
I'd say a better and much more Pythonic approach would be to define a decorator to indicate which functions you want to use:
class MyFunc(object):
    funcs = []
    def __init__(self, func):
        self.funcs.append(func)

@MyFunc
def foo():
    return 5

@MyFunc
def bar():
    return 10

def quux():
    # Not decorated, so will not be in MyFunc
    return 20

for func in MyFunc.funcs:
    print func()
Output:
5
10
Essentially you're performing the same logic: taking only functions that were defined in a particular manner and applying them to a specific set of data.
Sridhar, the method you proposed is very similar to the one used in the unittest module.
For example, this is how unittest.TestLoader finds the names of all the test methods to run (lifted from /usr/lib/python2.6/unittest.py):
def getTestCaseNames(self, testCaseClass):
    """Return a sorted sequence of method names found within testCaseClass
    """
    def isTestMethod(attrname, testCaseClass=testCaseClass,
                     prefix=self.testMethodPrefix):
        return attrname.startswith(prefix) and hasattr(getattr(testCaseClass, attrname), '__call__')
    testFnNames = filter(isTestMethod, dir(testCaseClass))
    if self.sortTestMethodsUsing:
        testFnNames.sort(key=_CmpToKey(self.sortTestMethodsUsing))
    return testFnNames
Just like your proposal, unittest uses dir to list all the attributes of testCaseClass, and filters the list for those whose name starts with prefix (which is set elsewhere to equal 'test').
I suggest a few minor changes:
If you place the functions in MYmodule.py, then (of course) the import statement must be
import MYmodule
Use getattr instead of .__dict__.get. Not only is it shorter, but it continues to work if you subclass Mymodule. That might not be your intention at this point, but using getattr is probably a good default habit anyway.
for funcs in dir(MYmodule.Mymodule):
    if funcs.startswith('MY'):
        result = getattr(MYmodule.Mymodule, funcs)(dataStructure)
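Put together, a runnable sketch of the prefix-based dispatch; a class with static methods stands in for the module, and all names are illustrative:

```python
class Mymodule(object):
    @staticmethod
    def MYfunc1(data):
        return ('func1', sorted(data))

    @staticmethod
    def MYfunc2(data):
        return ('func2', sorted(data))

    @staticmethod
    def helper(data):
        return 'never selected'  # no 'MY' prefix, so it is skipped

def run_all(module_like, dataStructure):
    # dir() returns names sorted alphabetically, so MYfunc1 runs before MYfunc2
    results = []
    for name in dir(module_like):
        if name.startswith('MY'):
            results.append(getattr(module_like, name)(dataStructure))
    return results

results = run_all(Mymodule, {'a.txt': {'size': 1}, 'b.txt': {'size': 2}})
```

New tests join the run automatically as long as their names carry the agreed prefix, which is the same convention unittest uses with "test".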