An example (which does not work):
#pytest.fixture(scope="function")
def my_method_user(db):
user = User(name="method_user").save()
return user
#pytest.fixture(scope="class")
def my_class_user(db):
user = User(name="class_user").save()
return user
#pytest.mark.usefixtures("my_class_user")
class TestClass:
def test_user_count(self):
assert len(User.objects.all()) == 1
def test_user_count_2(self, my_method_user):
assert len(User.objects.all()) == 2
However, doing this (or anything similar where I tried to access the DB directly through a queryset, etc.) basically always gives me ScopeMismatch.
Basically, what I'm trying to do is:
Create a user that is used "globally" within the class: a fixture with class scope (or rather, some rows that are set up before the class's tests run).
In some of the test functions, have a user created temporarily (under a certain condition, hence the fixture). When the tested function is done, the user is automatically removed.
Later on, the data may be more nested than a simple class/function split; all five scope layers might be used. So rows created by class-scoped fixtures should also be deleted afterwards, but rows created by module/package/session-scoped fixtures should stay.
I've researched using django_db_blocker.unblock() to avoid the ScopeMismatch and keep data throughout multiple tests; however, the data is then not removed after the tests are done (unless I remove it manually, which can be quite error-prone).
Am I fundamentally wrong about how I'm supposed to use pytest fixtures, or am I missing something? Is there a different way to achieve this?
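For illustration, a class-scoped yield fixture that pairs django_db_blocker with explicit teardown might look like this (a sketch, assuming pytest-django's session-scoped django_db_setup and django_db_blocker fixtures and the User model from the code above):

import pytest

@pytest.fixture(scope="class")
def my_class_user(django_db_setup, django_db_blocker):
    # Class-scoped fixtures cannot request the function-scoped `db`
    # fixture, so database access has to be unblocked manually.
    with django_db_blocker.unblock():
        user = User.objects.create(name="class_user")
    yield user
    # Teardown: remove the row once the whole class has finished.
    with django_db_blocker.unblock():
        user.delete()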
When I pass options to my program (a computational biology experiment), I usually pass them through a .py file.
So I have this .py file that reads like:
starting_length=9
starting_cell_size=1000
LengthofExperiments=5000000
Then I execute the file and get the data. Since the program is all on my machine and no one else has access to it, it is secure in a trivial way.
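(As an aside, a sketch of how such an options.py might be consumed; the question does not show this step, so the exec approach is an assumption:)

namespace = {}
with open("options.py") as fh:
    exec(fh.read(), namespace)  # hypothetical: load the options into a dict

print(namespace["starting_length"])  # 9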
I can also write a similar file very easily:
import os

def writeoptions(directory):
    options = ""
    options += "starting_length=%s%s" % (starting_length, os.linesep)
    options += "starting_cell_size=%s%s" % (starting_cell_size, os.linesep)
    options += "LengthofExperiments=%s%s" % (LengthofExperiments, os.linesep)
    ...
    open("%s%soptions.py" % (directory, os.sep), 'w').write(options)
I want to pass a function as one of the parameters:
starting_length=9
starting_cell_size=1000
LengthofExperiments=5000000
def pippo(a, b):
    return a + b
functionoperator=pippo
And of course, in the real experiment, the function pippo will be much more complex, and different from experiment to experiment.
But what I am unable to do is write the function automatically. In short, I don't know how to generalise the writeoptions function so that it keeps writing the options when one of the options is a function. I could of course copy the original file, but this is inelegant, inefficient (it contains a lot of extra options that are not being used), and generally does not solve the question.
How do you get Python to write down the code of a function, the way it writes down the value of a variable?
vinko@mithril$ more a.py
def foo(a):
    print a

vinko@mithril$ more b.py
import a
import inspect

a.foo(89)
print inspect.getsource(a.foo)

vinko@mithril$ python b.py
89
def foo(a):
    print a
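Applied to the writeoptions use case, a minimal sketch (pippo and the option names come from the question; the rest is illustrative):

import inspect
import os

def pippo(a, b):
    return a + b

def writeoptions(directory, functionoperator=pippo):
    options = "starting_length=9" + os.linesep
    # inspect.getsource() returns the function's source text (it needs the
    # original .py file to still be available), so it can be written into
    # the generated options.py verbatim.
    options += inspect.getsource(functionoperator)
    options += "functionoperator=%s%s" % (functionoperator.__name__, os.linesep)
    with open(os.path.join(directory, "options.py"), "w") as fh:
        fh.write(options)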
You might also consider some other means of data persistence. In my own (astronomy) research, I've been experimenting with two different means of storing scripts for reproducibility. The first is to have them exclusively inside a subversion repository, and then have the job submission script automatically commit them. For instance, if you just wanted to do this in bash:
alias run_py='svn ci -m "Commit before running"; python2.5 $*'
and, inside the script, have the output prefixed by the current subversion revision number for that file, you'd have a record of each script that was run and what its input was. You could pull this back out of subversion as needed.
Another, substantially less full-featured, means of tracking the input to a function could be via something like LodgeIt, a pastebin that accepts XML-RPC input and comes with Python bindings. (It can be installed locally, and has support for replying to and updating existing pastes.)
But if you are looking for a relatively small amount of code to be included, Vinko's solution using inspect should work quite well. Doug Hellmann covered the inspect module in his Python Module of the Week series. You could create a decorator that examines each option and argument and then prints it out as appropriate (I'll use inspect.getargspec to get the names of the positional arguments).
import inspect
import types
from functools import wraps

def option_printer(func):
    @wraps(func)
    def run_func(*args, **kwargs):
        # Pair each positional argument with its parameter name, then
        # append the keyword arguments in sorted order.
        for name, arg in zip(inspect.getargspec(func)[0], args) \
                         + sorted(kwargs.items()):
            if isinstance(arg, types.FunctionType):
                print "Function argument '%s' named '%s':\n" % (name, arg.func_name)
                print inspect.getsource(arg)
            else:
                print "%s: %s" % (name, arg)
        return func(*args, **kwargs)
    return run_func
This could probably be made a bit more elegant, but in my tests it works for simple sets of arguments and variables. Additionally, it might have some trouble with lambdas.
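A hypothetical usage (run_experiment is an invented name for illustration):

def pippo(a, b):
    return a + b

@option_printer
def run_experiment(starting_length, functionoperator):
    return functionoperator(starting_length, 1)

# Prints "starting_length: 9", then the name and source of pippo,
# then runs the experiment.
run_experiment(9, functionoperator=pippo)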
Are you asking about this?
def writeoptions(directory):
    options = ""
    options += "starting_length=%s%s" % (starting_length, os.linesep)
    options += "starting_cell_size=%s%s" % (starting_cell_size, os.linesep)
    options += "LengthofExperiments=%s%s" % (LengthofExperiments, os.linesep)
    options += "def pippo(a,b):%s" % (os.linesep,)
    options += "    '''Some version of pippo'''%s" % (os.linesep,)
    options += "    return 2*a+b%s" % (os.linesep,)
    open("%s%soptions.py" % (directory, os.sep), 'w').write(options)
Or something else?
While it is possible to do what you ask (as Vinko has shown), I'd say it is cleaner to share code. Put pippo and his buddies in a submodule that both programs can access.
Instead of diving into the subject of disassemblers and bytecode (e.g. inspect), why don't you just save the generated Python source in a module (file.py) and later import it?
I would suggest looking into a more standard way of handling what you call options. For example, you can use the json module to save and restore your data, or look into the marshal and pickle modules.
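For the plain (non-function) options, a sketch with the json module (the file name is illustrative):

import json

options = {
    "starting_length": 9,
    "starting_cell_size": 1000,
    "LengthofExperiments": 5000000,
}

# Save the options next to the experiment's output...
with open("options.json", "w") as fh:
    json.dump(options, fh, indent=2)

# ...and restore them later.
with open("options.json") as fh:
    restored = json.load(fh)

Note that json stores data, not function bodies, and pickle stores functions by reference only; for the function source itself you are back to inspect.getsource or a shared module.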
In Python 3.4, I want to be able to do a very simple dispatch table for testing purposes. The idea is to have a dictionary with the key being a string of the name of the function to be tested and the value being the corresponding test function.
For example:
myTestList = (
    "myDrawFromTo",
    "myDrawLineDir"
)

myTestDict = {
    "myDrawFromTo": test_myDrawFromTo,
    "myDrawLineDir": test_myDrawLineDir
}

for myTest in myTestList:
    result = myTestDict[myTest]()
The idea is that I have a list of function names someplace. In this example, I manually create a dictionary that maps those names to the names of test functions. The test function names are a simple extension of the function name. I'd like to compute the entire dictionary from the list of function names (here it is myTestList).
Alternately, if I can do the same thing without the dictionary, that'd be fine too. I tried building a new string from the entries in myTestList and then using locals() to set up the call, but didn't have any luck. The dictionary idea came from the Python 3.x documentation.
There are two parts to the problem.
The easy part is just prefixing 'test_' onto each string:
tests = {test: 'test_' + test for test in myTestList}
The harder part is actually looking up the functions by name. That kind of thing is usually a bad idea, but you've hit on one of the cases (generating tests) where it often makes sense. You can do this by looking them up in your module's global dictionary, like this:
tests = {test: globals()['test_'+test] for test in myTestList}
There are variations on the same idea if the tests live somewhere other than the module's global scope. For example, it might be a good idea to make them all methods of a class, in which case you'd do:
tester = TestClass()
tests = {test: getattr(tester, 'test_'+test) for test in myTestList}
(Although more likely that code would be inside TestClass, so it would be using self rather than tester.)
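A minimal self-contained sketch of the class-based variant (TestClass and the test method bodies are illustrative; the names come from the question):

class TestClass:
    def test_myDrawFromTo(self):
        print("testing myDrawFromTo...")

    def test_myDrawLineDir(self):
        print("testing myDrawLineDir...")

myTestList = ["myDrawFromTo", "myDrawLineDir"]

tester = TestClass()
# Look up each bound method by its computed name.
tests = {test: getattr(tester, 'test_' + test) for test in myTestList}

for name, func in tests.items():
    func()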
If you don't actually need the dict, of course, you can change the comprehension to an explicit for statement:
for test in myTestList:
    globals()['test_' + test]()
One more thing: Before reinventing the wheel, have you looked at the testing frameworks built into the stdlib, or available on PyPI?
abarnert's answer seems useful, but to answer your original question of how to call all the test functions for a list of function names:
def test_f():
    print("testing f...")

def test_g():
    print("testing g...")

myTestList = ['f', 'g']

for funcname in myTestList:
    eval('test_' + funcname + '()')
I have been doing a lot of searching, and I don't think I've really found what I have been looking for. I will try my best to explain what I am trying to do, and hopefully there is a simple solution, and I'll be glad to have learned something new.
This is ultimately what I am trying to accomplish: using nosetests, decorate some test cases with the attribute selector plugin, then execute the tests that match a criterion via the -a switch on the command line. The attribute values of the executed tests are then stored in an external location. The command-line call I'm using looks like this:
nosetests \testpath\ -a attribute='someValue'
I have also created a customized nosetest plugin, which stores the test cases' attributes and writes them to an external location. The idea is that I can select a batch of tests, and by storing their attributes, I can filter the results later for reporting purposes. I am accessing the method attributes in my plugin by overriding the wantMethod method with code similar to the following:
def set_attribs(self, method, attribute):
    if hasattr(method, attribute):
        if not self.method_attributes.has_key(method.__name__):
            self.method_attributes[method.__name__] = {}
        self.method_attributes[method.__name__][attribute] = getattr(method, attribute)

def wantMethod(self, method):
    self.set_attribs(method, "attribute1")
    self.set_attribs(method, "attribute2")
I have this working for pretty much all the tests, except for one case where the test uses the yield keyword. The generated methods execute fine, but the method attributes are empty for each of the generated functions.
Below is an example of what I am trying to achieve. The test below retrieves a list of values and, for each of those values, yields the result of another function:
from functools import partial

@attr(attribute1='someValue', attribute2='anotherValue')
def sample_test_generator(self):
    for (key, value) in _input_dictionary.items():
        f = partial(self._do_test, key, value)
        f.attribute1 = 'someValue'
        yield (lambda x: f(), key)

def _do_test(self, input1, input2):
    # Some code
    pass
From what I have read, and think I understand, when yield is called, it creates a new callable which then gets executed. I have been trying to figure out how to retain the attribute values from my sample_test_generator method, but I have not been successful. I thought I could create a partial function and then add the attribute to it, but no luck. The tests execute without errors at all; it just seems that from my plugin's perspective the method attributes aren't present, so they don't get recorded.
I realize this a pretty involved question, but I wanted to make sure that the context for what I am trying to achieve is clear. I have been trying to find information that could help me for this particular case, but I feel like I've reached a stumbling block now, so I would really like to ask the experts for some advice.
Thanks.
Update:
After reading through the feedback and playing around some more, it looks like modifying the lambda expression achieves what I am looking for. In fact, I didn't even need to create the partial function:
def sample_test_generator(self):
    for (key, value) in _input_dictionary.items():
        yield (lambda: self._do_test)
The only downside to this approach is that the test name does not change. As I play around more, it looks like when a test generator is used, nosetests actually changes the test name in the results based on the arguments it contains. The same thing was happening when I was using the lambda expression with a parameter.
For example:
Using a lambda expression with a parameter:
yield (lambda x: self._do_test, "value1")
In a nosetests plugin, when you access the test case name, it is displayed as "sample_test_generator(value1)".
Using a lambda expression without a parameter:
yield (lambda: self._do_test)
The test case name in this case is "sample_test_generator". In my example above, if there are multiple values in the dictionary, the yield call occurs multiple times; however, the test name always remains "sample_test_generator". This is not as bad as getting unique test names but then not being able to store the attribute values at all. I will keep playing around, but thanks for the feedback so far!
EDIT
I forgot to come back and provide my final update on how I got this to work in the end. There was a little confusion on my part at first, and after looking through it some more, I figured out that it had to do with how the tests are recognized:
My original implementation assumed that every test picked up for execution goes through the wantMethod call from the plugin's base class. This is not true when yield is used to generate the test, because at that point the test method has already passed the wantMethod call.
However, once the test case is generated through the yield call, it does go through the startTest call from the plugin base class, and this is where I was finally able to store the attribute successfully.
So, in a nutshell, my test execution order looked like this:
nose -> wantMethod(method_name) -> yield -> startTest(yielded_test_name)
In my override of the startTest method, I have the following:
def startTest(self, test):
    # If a test is spawned by using the 'yield' keyword, its name is the
    # parent test name followed by a '(' character; for example, if the
    # parent test is "smoke_test", the generated test is "smoke_test('input')".
    test_name = test.id()  # assumption: derive the name from nose's test wrapper
    parent_test_name = test_name.split('(')[0]
    if self.method_attributes.has_key(test_name):
        self._test_attrib = self.method_attributes[test_name]
    elif self.method_attributes.has_key(parent_test_name):
        self._test_attrib = self.method_attributes[parent_test_name]
    else:
        self._test_attrib = None
With this implementation, along with my override of wantMethod, each test spawned by the parent test case also inherits the attributes of the parent method, which is what I needed.
Again, thanks to all who sent replies. This was quite a learning experience.
Would this fix your name issue?
def _actual_test(x, y):
    assert x == y

def test_yield():
    _actual_test.description = "test_yield_%s_%s" % (5, 5)
    yield _actual_test, 5, 5
    _actual_test.description = "test_yield_%s_%s" % (4, 8)  # fail
    yield _actual_test, 4, 8
    _actual_test.description = "test_yield_%s_%s" % (2, 2)
    yield _actual_test, 2, 2
The renaming survives @attr too.
Does this work?
from functools import partial

@attr(attribute1='someValue', attribute2='anotherValue')
def sample_test_generator(self):
    def get_f(f, key):
        # Bind f and key now, so the yielded callable does not close over
        # the loop variables.
        return lambda x: f(), key
    for (key, value) in _input_dictionary.items():
        f = partial(self._do_test, key, value)
        f.attribute1 = 'someValue'
        yield get_f(f, key)

def _do_test(self, input1, input2):
    # Some code
    pass
The problem is that the local variables change after you create the lambda.
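This late-binding behaviour is easy to demonstrate on its own (a minimal sketch, independent of nose):

# Each of these lambdas closes over the variable i itself, so by the
# time they are called, i has its final value.
late = [lambda: i for i in range(3)]
print([f() for f in late])    # prints [2, 2, 2]

# Binding the current value, e.g. through a default argument or a
# factory function like get_f above, captures it at definition time.
bound = [lambda i=i: i for i in range(3)]
print([f() for f in bound])   # prints [0, 1, 2]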
I know that the parameters can be any object, but for the documentation it is quite important to specify what you expect.
First, how do you specify parameter types like the ones below?
str (or use String or string?)
int
list
dict
function()
tuple
object instance of class MyClass
Second, how do you specify params that can be of multiple types, like a function that can handle a single parameter that can be either int or str?
Please use the example below to demonstrate the syntax needed for documenting this with your proposed solution. Mind that it is desired to be able to hyperlink the reference to the "Image" class from inside the documentation.
def myMethod(self, name, image):
    """
    Does something ...

    name String: name of the image
    image Image: instance of Image class or a string indicating the filename.

    Return True if operation succeeded or False.
    """
    return True
Note, you are welcome to suggest the usage of any documentation tool (Sphinx, Doxygen, ...) as long as it is able to deal with the requirements.
Update:
It seems that there is some kind of support for documenting parameter types in Doxygen in general. The code below works, but it adds an annoying $ to the param name (because the syntax was originally made for PHP):
@param str $arg description
@param str|int $arg description
There is a better way. We use
def my_method(x, y):
    """
    my_method description

    @type x: int
    @param x: An integer
    @type y: int|string
    @param y: An integer or string

    @rtype: string
    @return: Returns a sentence with your variables in it
    """
    return "Hello World! %s, %s" % (x, y)
That's it. In the PyCharm IDE this helps a lot. It works like a charm ;-)
You need to add an exclamation mark at the start of the Python docstring for Doxygen to parse it correctly.
def myMethod(self, name, image):
    """!
    Does something ...

    @param name String: name of the image
    @param image Image: instance of Image class or a string indicating the filename.
    @return True if operation succeeded or False.
    """
    return True
If using Python 3, you can use the function annotations described in PEP 3107.
def compile(source: "something compilable",
            filename: "where the compilable thing comes from",
            mode: "is this a single statement or a suite?"):
    ...
See also function definitions.
Figured I'd post this little tidbit here since IDEA showed me this was possible, and I was never told nor read about this.
>>> def test( arg: bool = False ) -> None: print( arg )
>>> test(10)
10
When you type test(, IDLE's doc-tip appears with (arg: bool=False) -> None, which was something I thought only Visual Studio did.
It's not exactly doxygen material, but it's good for documenting parameter-types for those using your code.
Yup, @docu is right: this is (IMHO) the best way to combine both documentation schemes more or less seamlessly. If, on the other hand, you also want to do something like putting text on the doxygen-generated index page, you would add
##
# @mainpage (Sub)Heading for the doxygen-generated index page
# Text that goes right onto the doxygen-generated index page
somewhere at the beginning of your Python code.
In other words, where doxygen does not expect Python comments, use ## to alert it that there are tags for it. Where it expects Python comments (e.g. at the beginning of functions or classes), use """!.
Doxygen is great for C++, but if you are working mostly with Python code you should give Sphinx a try. If you choose Sphinx, then all you need to do is follow PEP 8.
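For reference, the Sphinx (reStructuredText field list) version of the example from the question might look like this; it is a sketch, and the :class: cross-reference assumes the Image class is documented elsewhere:

def myMethod(self, name, image):
    """Does something ...

    :param name: name of the image
    :type name: str
    :param image: instance of :class:`Image` or a filename
    :type image: Image or str
    :return: True if the operation succeeded, False otherwise
    :rtype: bool
    """
    return True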