How can I be sure of the unittest methods' order? Are alphabetical or numeric prefixes the proper way?
class TestFoo(TestCase):
    def test_1(self):
        ...
    def test_2(self):
        ...
or
class TestFoo(TestCase):
    def test_a(self):
        ...
    def test_b(self):
        ...
You can disable it by setting sortTestMethodsUsing to None:
import unittest
unittest.TestLoader.sortTestMethodsUsing = None
For pure unit tests, you folks are right; but for component tests and integration tests...
I do not agree that you should assume nothing about the state.
What if you are testing the state?
For example, your test validates that a service is auto-started upon installation. If you start the service in your setup and then do the assertion, you are no longer testing the state, but the "service start" functionality.
Another example is when your setup takes a long time or requires a lot of space and it just becomes impractical to run the setup frequently.
Many developers tend to use "unit test" frameworks for component testing...so stop and ask yourself, am I doing unit testing or component testing?
No reason is given why you can't build on what was done in a previous test, or why you should rebuild it all from scratch for the next test. At least, no reason is usually offered; people just confidently say "you shouldn't". That isn't helpful.
In general I am tired of reading too many answers here that basically say "you shouldn't do that" instead of giving any information on how best to do it if, in the questioner's judgment, there is good reason to do so. If I wanted someone's opinion on whether I should do something, I would have asked for opinions on whether doing it is a good idea.
With that out of the way, if you read, say, loadTestsFromTestCase and what it calls, it ultimately scans for methods with some name pattern in whatever order they are encountered in the class's method dictionary, so basically in key order. It takes this information and makes a test suite that maps it onto the TestCase class. Giving it instead a list ordered as you would like is one way to do this. I am not sure it is the most efficient/cleanest way, but it does work.
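A minimal sketch of that idea (the TestFoo class and its method names here are made up for illustration): build the suite from an explicitly ordered list of names instead of letting the loader sort them.
import unittest

class TestFoo(unittest.TestCase):
    def test_setup_data(self):
        ...

    def test_use_data(self):
        ...

def suite():
    # The suite runs in exactly this order, not in alphabetical order.
    ordered_names = ["test_setup_data", "test_use_data"]
    return unittest.TestSuite(TestFoo(name) for name in ordered_names)

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(suite())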
If you use 'nose' and you write your test cases as functions (and not as methods of some TestCase derived class), 'nose' doesn't fiddle with the order, but uses the order of the functions as defined in the file.
In order to have the assert_* methods handy without needing to subclass TestCase I usually use the testing module from NumPy. Example:
from numpy.testing import *

def test_aaa():
    assert_equal(1, 1)

def test_zzz():
    assert_equal(1, 1)

def test_bbb():
    assert_equal(1, 1)
Running that with nosetests -vv gives:
test_it.test_aaa ... ok
test_it.test_zzz ... ok
test_it.test_bbb ... ok
----------------------------------------------------------------------
Ran 3 tests in 0.050s
OK
Note to all those who contend that unit tests shouldn't be ordered: while it is true that unit tests should be isolated and can run independently, your functions and classes are usually not independent.
Rather, they build on one another, from simpler/low-level functions to more complex/high-level functions. When you start optimising your low-level functions and mess something up (for my part, I do that frequently; if you don't, you probably don't need unit tests anyway ;-), it is much easier to diagnose the cause when the tests for the simple functions come first and the tests for the functions that depend on them come later.
If the tests are sorted alphabetically, the real cause usually gets drowned among a hundred failed assertions, which are not there because the function under test has a bug, but because the low-level function it relies on does.
That's why I want to have my unit tests sorted the way I specified them: not to use state that was built up in early tests in later tests, but as a very helpful tool in diagnosing problems.
I half agree with the idea that tests shouldn't be ordered. In some cases it helps (it's easier, damn it!) to have them in order... after all, that's the reason for the 'unit' in UnitTest.
That said, one alternative is to use mock objects to mock out and patch the items that should run before that specific code under test. You can also put a dummy function in there to monkey patch your code. For more information, check out Mock, which is part of the standard library now.
Here are some YouTube videos if you haven't used Mock before.
Video 1
Video 2
Video 3
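For instance, patching the prerequisite instead of relying on an earlier test to have run it might look roughly like the following sketch. The Service class, its start() and status() methods, and the failure it raises are all made up for illustration.
import unittest
from unittest import mock

class Service:
    """Hypothetical class whose start() is slow or needs a real environment."""
    def start(self):
        raise RuntimeError("needs a real environment")

    def status(self):
        return "running" if self.start() else "stopped"

class TestStatus(unittest.TestCase):
    def test_status_without_real_start(self):
        svc = Service()
        # Patch the prerequisite instead of depending on an earlier test.
        with mock.patch.object(svc, "start", return_value=True) as fake_start:
            self.assertEqual(svc.status(), "running")
        fake_start.assert_called_once()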
More to the point, try using class methods to structure your code, and then place all the class methods in one main test method.
import unittest
import sqlite3

class MyOrderedTest(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        cls.create_db()
        cls.setup_draft()
        cls.draft_one()
        cls.draft_two()
        cls.draft_three()

    @classmethod
    def create_db(cls):
        cls.conn = sqlite3.connect(":memory:")

    @classmethod
    def setup_draft(cls):
        cls.conn.execute("CREATE TABLE players ('draftid' INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, 'first', 'last')")

    @classmethod
    def draft_one(cls):
        player = ("Hakeem", "Olajuwon")
        cls.conn.execute("INSERT INTO players (first, last) VALUES (?, ?)", player)

    @classmethod
    def draft_two(cls):
        player = ("Sam", "Bowie")
        cls.conn.execute("INSERT INTO players (first, last) VALUES (?, ?)", player)

    @classmethod
    def draft_three(cls):
        player = ("Michael", "Jordan")
        cls.conn.execute("INSERT INTO players (first, last) VALUES (?, ?)", player)

    def test_unordered_one(self):
        cur = self.conn.execute("SELECT * from players")
        draft = [(1, u'Hakeem', u'Olajuwon'), (2, u'Sam', u'Bowie'), (3, u'Michael', u'Jordan')]
        query = cur.fetchall()
        print query
        self.assertListEqual(query, draft)

    def test_unordered_two(self):
        cur = self.conn.execute("SELECT first, last FROM players WHERE draftid=3")
        result = cur.fetchone()
        third = " ".join(result)
        print third
        self.assertEqual(third, "Michael Jordan")
Why do you need a specific test order? The tests should be isolated and therefore it should be possible to run them in any order, or even in parallel.
If you need to test something like user unsubscribing, the test could create a fresh database with a test subscription and then try to unsubscribe. This scenario has its own problems, but in the end it’s better than having tests depend on each other. (Note that you can factor out common test code, so that you don’t have to repeat the database setup code or create testing data ad nauseam.)
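A minimal sketch of factoring out that setup (the table, column, and helper names here are assumptions): each test builds its own fresh database from a shared helper instead of inheriting state from another test.
import sqlite3
import unittest

def make_subscribed_db():
    # Shared helper: every test gets a fresh database with one subscription.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE subscriptions (email TEXT)")
    conn.execute("INSERT INTO subscriptions VALUES ('user@example.com')")
    return conn

class TestUnsubscribe(unittest.TestCase):
    def setUp(self):
        self.conn = make_subscribed_db()

    def test_unsubscribe_removes_row(self):
        self.conn.execute("DELETE FROM subscriptions WHERE email = ?", ("user@example.com",))
        rows = self.conn.execute("SELECT * FROM subscriptions").fetchall()
        self.assertEqual(rows, [])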
There are a number of reasons for prioritizing tests, not the least of which is productivity, which is what JUnit Max is geared for. It's sometimes helpful to keep very slow tests in their own module so that you can get quick feedback from those tests that don't suffer from the same heavy dependencies. Ordering is also helpful in tracking down failures from tests that are not completely self-contained.
Don't rely on the order. If they use some common state, like the filesystem or database, then you should create setUp and tearDown methods that get your environment into a testable state, and then clean up after the tests have run.
Each test should assume that the environment is as defined in setUp, and should make no further assumptions.
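As a small sketch of that pattern with the filesystem (the directory and file names are made up): setUp creates a scratch directory, tearDown removes it, and each test assumes only what setUp provides.
import os
import shutil
import tempfile
import unittest

class TestWithScratchDir(unittest.TestCase):
    def setUp(self):
        # Bring the environment into a known state before every test.
        self.workdir = tempfile.mkdtemp()

    def tearDown(self):
        # Clean up whatever the test created.
        shutil.rmtree(self.workdir)

    def test_writes_file(self):
        path = os.path.join(self.workdir, "out.txt")
        with open(path, "w") as fh:
            fh.write("data")
        self.assertTrue(os.path.exists(path))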
You should try the proboscis library. It lets you order tests as well as set up test dependencies. I use it, and this library is truly awesome.
For example, if test case #1 from module A should depend on test case #3 from module B you CAN set this behaviour using the library.
Here is a simpler method that has the following advantages:
No need to create a custom TestCase class.
No need to decorate every test method.
Uses the standard unittest load_tests protocol. See the Python docs here.
The idea is to go through all the test cases of the test suites given to the test loader protocol and create a new suite but with the tests ordered by their line number.
Here is the code:
import unittest

def load_ordered_tests(loader, standard_tests, pattern):
    """
    Test loader that keeps the tests in the order they were declared in the class.
    """
    ordered_cases = []
    for test_suite in standard_tests:
        ordered = []
        for test_case in test_suite:
            test_case_type = type(test_case)
            method_name = test_case._testMethodName
            testMethod = getattr(test_case, method_name)
            line = testMethod.__code__.co_firstlineno
            ordered.append((line, test_case_type, method_name))
        ordered.sort()
        for line, case_type, name in ordered:
            ordered_cases.append(case_type(name))
    return unittest.TestSuite(ordered_cases)
You can put this in a module named order_tests and then in each unittest Python file, declare the test loader like this:
from order_tests import load_ordered_tests
# This orders the tests to be run in the order they were declared.
# It uses the unittest load_tests protocol.
load_tests = load_ordered_tests
Note: the often suggested technique of setting the test sorter to None no longer works because Python now sorts the output of dir() and unittest uses dir() to find tests. So even though you have no sorting method, they still get sorted by Python itself!
From unittest — Unit testing framework
Note that the order in which the various test cases will be run is determined by sorting the test function names with respect to the built-in ordering for strings.
If you need to set the order explicitly, use a monolithic test.
class Monolithic(TestCase):
    def step1(self):
        ...

    def step2(self):
        ...

    def steps(self):
        for name in sorted(dir(self)):
            if name.startswith("step"):
                yield name, getattr(self, name)

    def test_steps(self):
        for name, step in self.steps():
            try:
                step()
            except Exception as e:
                self.fail("{} failed ({}: {})".format(step, type(e), e))
Check out this Stack Overflow question for details.
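A variant of the same pattern, not from the question above, uses subTest so each step is reported individually while still running in the declared order. The step names and the trivial list operations are just placeholders. Note that, unlike the fail() version, a failing step does not stop the remaining steps, so expect follow-on failures if a later step depends on an earlier one.
import unittest

class MonolithicFlow(unittest.TestCase):
    def _step_create(self):
        self.items = []

    def _step_append(self):
        self.items.append("x")

    def _step_check(self):
        self.assertEqual(self.items, ["x"])

    def test_steps(self):
        # The tuple fixes the order; subTest reports each step separately.
        for step in (self._step_create, self._step_append, self._step_check):
            with self.subTest(step=step.__name__):
                step()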
There are scenarios where the order can be important and where setUp and tearDown are too limited. There is only one setUp and one tearDown method, which is logical, but you can only put so much into them before it becomes unclear what setUp or tearDown is actually doing.
Take this integration test as an example:
You are writing tests to see if the registration form and the login form are working correctly. In such a case the order is important, as you can't log in without an existing account.
More importantly, the order of your tests represents some kind of user interaction, where each test might represent a step in the whole process or flow you're testing.
Dividing your code into those logical pieces has several advantages.
It might not be the best solution, but I often use one method that kicks off the actual tests:
def test_registration_login_flow(self):
    self._test_registration_flow()
    self._test_login_flow()
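A fuller sketch of that pattern might look like the following; the class name, the form helpers, and the assertions are all hypothetical placeholders.
import unittest

class TestSignupFlow(unittest.TestCase):
    def _test_registration_flow(self):
        # Pretend this drives the registration form.
        self.account = {"email": "user@example.com", "password": "secret"}

    def _test_login_flow(self):
        # Depends on the account created in the previous step.
        self.assertEqual(self.account["email"], "user@example.com")

    def test_registration_login_flow(self):
        # One public test method drives the ordered steps.
        self._test_registration_flow()
        self._test_login_flow()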
A simple method for ordering "unittest" tests is to follow the init.d mechanism of giving them numeric names:
def test_00_createEmptyObject(self):
    obj = MyObject()
    self.assertEqual(obj.property1, 0)
    self.assertEqual(obj.dict1, {})

def test_01_createObject(self):
    obj = MyObject(property1="hello", dict1={"pizza": "pepperoni"})
    self.assertEqual(obj.property1, "hello")
    self.assertDictEqual(obj.dict1, {"pizza": "pepperoni"})

def test_10_reverseProperty(self):
    obj = MyObject(property1="world")
    obj.reverseProperty1()
    self.assertEqual(obj.property1, "dlrow")
However, in such cases, you might want to consider structuring your tests differently so that you can build on previous construction cases. For instance, in the above, it might make sense to have a "construct and verify" function that constructs the object and validates its assignment of parameters.
def make_myobject(self, property1, dict1):  # Must be specified by caller
    obj = MyObject(property1=property1, dict1=dict1)
    if property1:
        self.assertEqual(obj.property1, property1)
    else:
        self.assertEqual(obj.property1, 0)
    if dict1:
        self.assertDictEqual(obj.dict1, dict1)
    else:
        self.assertEqual(obj.dict1, {})
    return obj

def test_00_createEmptyObject(self):
    obj = self.make_myobject(None, None)

def test_01_createObject(self):
    obj = self.make_myobject("hello", {"pizza": "pepperoni"})

def test_10_reverseProperty(self):
    obj = self.make_myobject("world", None)
    obj.reverseProperty1()
    self.assertEqual(obj.property1, "dlrow")
I agree with the statement that a blanket "don't do that" answer is a bad response.
I have a similar situation where I have a single data source and one test will wipe the data set causing other tests to fail.
My solution was to use the operating system environment variables in my Bamboo server...
(1) The test for the "data purge" functionality starts with a while loop that checks the state of an environment variable "BLOCK_DATA_PURGE." If the "BLOCK_DATA_PURGE" variable is greater than zero, the loop will write a log entry to the effect that it is sleeping 1 second. Once the "BLOCK_DATA_PURGE" has a zero value, execution proceeds to test the purge functionality.
(2) Any unit test which needs the data in the table simply increments "BLOCK_DATA_PURGE" at the beginning (in setup()) and decrements the same variable in teardown().
The effect of this is to allow various data consumers to block the purge functionality so long as they need without fear that the purge could execute in between tests. Effectively the purge operation is pushed to the last step...or at least the last step that requires the original data set.
Today I am going to extend this to add more functionality to allow some tests to REQUIRE_DATA_PURGE. These will effectively invert the above process to ensure that those tests only execute after the data purge to test data restoration.
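A minimal, in-process sketch of the blocking idea described above. BLOCK_DATA_PURGE comes from the answer; the helper names, polling loop, and timings are assumptions, and plain os.environ only works when all tests run in the same process.
import os
import time
import unittest

def _block_purge():
    count = int(os.environ.get("BLOCK_DATA_PURGE", "0"))
    os.environ["BLOCK_DATA_PURGE"] = str(count + 1)

def _unblock_purge():
    count = int(os.environ.get("BLOCK_DATA_PURGE", "0"))
    os.environ["BLOCK_DATA_PURGE"] = str(max(count - 1, 0))

class TestDataConsumer(unittest.TestCase):
    def setUp(self):
        _block_purge()      # hold off the purge while this test needs the data

    def tearDown(self):
        _unblock_purge()

    def test_reads_shared_data(self):
        ...

class TestDataPurge(unittest.TestCase):
    def test_purge(self):
        # Wait until no consumer still needs the shared data set.
        while int(os.environ.get("BLOCK_DATA_PURGE", "0")) > 0:
            print("data purge blocked; sleeping 1 second")
            time.sleep(1)
        # ...now exercise the purge functionality.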
See the example of WidgetTestCase on Organizing test code. It says that
Class instances will now each run one of the test_*() methods, with self.widget created and destroyed separately for each instance.
So specifying the order of test cases may be of no use if you do not access global variables.
I have implemented a plugin, nosedep, for Nose which adds support for test dependencies and test prioritization.
As mentioned in the other answers/comments, this is often a bad idea; however, there can be exceptions where you would want to do this (in my case it was performance for integration tests, with a huge overhead for getting into a testable state: minutes vs. hours).
A minimal example is:
def test_a():
    pass

@depends(before=test_a)
def test_b():
    pass
To ensure that test_b is always run before test_a.
The philosophy behind unit tests is to make them independent of each other. This means the first step should always be to rethink how you are testing each piece so that it matches that philosophy. This can involve changing how you approach testing and being creative about narrowing your tests to smaller scopes.
However, if you still find that you need tests in a specific order (which is a viable need), you could check out the answer to Python unittest.TestCase execution order.
It seems they are executed in alphabetical order by test name (using the comparison function between strings).
Since tests in a module are also only executed if they begin with "test", I put in a number to order the tests:
class LoginTest(unittest.TestCase):
    def setUp(self):
        driver.get("http://localhost:2200")

    def tearDown(self):
        # self.driver.close()
        pass

    def test1_check_at_right_page(self):
        ...
        assert "Valor" in driver.page_source

    def test2_login_a_manager(self):
        ...
        submit_button.click()
        assert "Home" in driver.title

    def test3_is_manager(self):
        ...
Note that numeric order is not the same as alphabetical (string) order: "9" > "10" is True in the Python shell, for instance. Consider using decimal strings with fixed zero padding, which avoids this problem, such as "000", "001", ..., "010", ..., "099", "100", ..., "999".
Contrary to what was said here:
tests have to run in isolation (order must not matter for that)
and
ordering them is important, because it describes what the system does and how the developer implements it.
In other words, each test brings you information about the system and the developer's logic.
So if this information is not ordered, it can make your code difficult to understand.
To randomise the order of test methods you can monkey patch the unittest.TestLoader.sortTestMethodsUsing attribute
if __name__ == '__main__':
    import random
    unittest.TestLoader.sortTestMethodsUsing = lambda self, a, b: random.choice([1, 0, -1])
    unittest.main()
The same approach can be used to enforce whatever order you need.
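For instance, a sketch of using the same hook to enforce a specific order rather than a random one; the method names in desired_order are assumptions, and every test method must appear in the list or index() will raise ValueError.
import unittest

desired_order = ["test_create", "test_update", "test_delete"]

if __name__ == '__main__':
    unittest.TestLoader.sortTestMethodsUsing = (
        lambda self, a, b: desired_order.index(a) - desired_order.index(b)
    )
    unittest.main()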
I am trying to solve one problem that I have in my pytest setup. I am trying to create a pytest test that will run one TestClass several times with different parameters. I know that I can use @pytest.mark.parametrize, but there is a problem with the test order.
When I use @pytest.mark.parametrize, the order of the tests is:
Test1[Param1]
Test1[Param2]
Test2[Param1]
Test2[Param2]
I am looking for some solution that will run the test in this order:
Test1[Param1]
Test2[Param1]
Test1[Param2]
Test2[Param2]
Thanks for any suggestions!
In principle, the order of the test data supplied to parametrize should not matter. So if you are at the point where the order of execution is important, your test should state that explicitly. For example:
@pytest.mark.parametrize("data_1, data_2", [
    (...),
    (...),
])
def my_test(data_1, data_2):
    """
    Explain what test is about.
    """
    # Step 1 - Do something first with data_1:
    ...
    # Step 2 - Do something next with data_2:
    ...
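If the interleaved order really is what is needed, one option (a sketch, not part of the answer above) is to parametrize a class-scoped fixture instead of each test; pytest groups tests by the parameters of higher-scoped fixtures, so both tests run with the first parameter before either runs with the second. The fixture name, parameter values, and assertions here are assumptions.
import pytest

@pytest.fixture(scope="class", params=["Param1", "Param2"])
def config(request):
    return request.param

class TestSuite:
    def test_one(self, config):
        assert config in ("Param1", "Param2")

    def test_two(self, config):
        assert config in ("Param1", "Param2")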
Let's say I have a python function like this:
class Something:
    def my_function(...):   # <---- start fold
        ...
        return None         # <---- end fold

    def my_function2(...):
        ...
If I am on the first function line, def my_function -- and let's suppose that function is ~50 lines long, how would I fold that function in vim? The first thing I thought of doing is zf/return -- but this is quite flawed, as (1) lots of functions won't have return statements; or, even more commonly, there will be multiple return statements within a single function.
What would be the best way to do this?
Try zf]M. ]M should act as a motion to take you to the end of the current method.
Try :set foldmethod=indent. It may work for you. VimWiki can be quite helpful though.
The problem with Python is the lack of explicit block delimiters. So you may want to use a plugin like SimpylFold.
I have been doing a lot of searching, and I don't think I've really found what I have been looking for. I will try my best to explain what I am trying to do, and hopefully there is a simple solution, and I'll be glad to have learned something new.
This is ultimately what I am trying to accomplish: Using nosetests, decorate some test cases using the attribute selector plugin, then execute test cases that match a criteria by using the -a switch during commandline invocation. The attribute values for the tests that are executed are then stored in an external location. The command line call I'm using is like below:
nosetests \testpath\ -a attribute='someValue'
I have also created a customized nosetest plugin, which stores the test cases' attributes and writes them to an external location. The idea is that I can select a batch of tests, and by storing the attributes of these tests, I can do filtering on these results later for reporting purposes. I am accessing the method attributes in my plugin by overriding the "wantMethod" method with code similar to the following:
def set_attribs(self, method, attribute):
    if hasattr(method, attribute):
        if not self.method_attributes.has_key(method.__name__):
            self.method_attributes[method.__name__] = {}
        self.method_attributes[method.__name__][attribute] = getattr(method, attribute)

def wantMethod(self, method):
    self.set_attribs(method, "attribute1")
    self.set_attribs(method, "attribute2")
    pass
I have this working for pretty much all the tests, except for one case, where the test is using the "yield" keyword. What is happening is that the methods that are generated are being executed fine, but the method attributes are empty for each of the generated functions.
Below is an example of what I am trying to achieve. The test below retrieves a list of values, and for each of those values, yields the results from another function:
@attr(attribute1='someValue', attribute2='anotherValue')
def sample_test_generator(self):
    for (key, value) in _input_dictionary.items():
        f = partial(self._do_test, key, value)
        f.attribute1 = 'someValue'
        yield (lambda x: f(), key)

def _do_test(self, input1, input2):
    # Some code
From what I have read, and think I understand, when yield is called, it creates a new callable function which then gets executed. I have been trying to figure out how to retain the attribute values from my sample_test_generator method, but I have not been successful. I thought I could create a partial function and then add the attribute to it, but no luck. The tests execute without any errors; it just seems that from my plugin's perspective, the method attributes aren't present, so they don't get recorded.
I realize this a pretty involved question, but I wanted to make sure that the context for what I am trying to achieve is clear. I have been trying to find information that could help me for this particular case, but I feel like I've reached a stumbling block now, so I would really like to ask the experts for some advice.
Thanks.
Update
After reading through the feedback and playing around some more, it looks like if I modified the lambda expression, it would achieve what I am looking for. In fact, I didn't even need to create the partial function:
def sample_test_generator(self):
    for (key, value) in _input_dictionary.items():
        yield (lambda: self._do_test)
The only downside to this approach is that the test name will not change. As I am playing around more, it looks like in nosetests, when a test generator is used, it would actually change the test name in the result based on the keywords it contains. Same thing was happening when I was using the lambda expression with a parameter.
For example:
Using a lambda expression with a parameter:
yield (lambda x: self._do_test, "value1")
In the nosetests plugin, when you access the test case name, it is displayed as "sample_test_generator(value1)".
Using a lambda expression without a parameter:
yield (lambda: self._do_test)
The test case name in this case would be "sample_test_generator". In my example above, if there are multiple values in the dictionary, the yield call occurs multiple times, but the test name always remains "sample_test_generator". This is not as bad as getting unique test names but then not being able to store the attribute values at all. I will keep playing around, but thanks for the feedback so far!
EDIT
I forgot to come back and provide my final update on how I was able to get this to work in the end. There was a little confusion on my part at first, but after I looked through it some more, I figured out that it had to do with how the tests are recognized:
My original implementation assumed that every test that gets picked up for execution goes through the "wantMethod" call from the plugin's base class. This is not true when "yield" is used to generate the test, because at this point, the test method has already passed the "wantMethod" call.
However, once the test case is generated through the "yield" call, it does go through the "startTest" call from the plug-in base class, and this is where I was finally able to store the attribute successfully.
So, in a nutshell, my test execution order looked like this:
nose -> wantMethod(method_name) -> yield -> startTest(yielded_test_name)
In my override of the startTest method, I have the following:
def startTest(self, test):
    # If a test is spawned by using the 'yield' keyword, the test name would be the
    # parent test name, appended by the '(' character.
    # Example: if the parent test is "smoke_test", the generated test from yield
    # would be "smoke_test('input')".
    parent_test_name = test_name.split('(')[0]
    if self.method_attributes.has_key(test_name):
        self._test_attrib = self.method_attributes[test_name]
    elif self.method_attributes.has_key(parent_test_name):
        self._test_attrib = self.method_attributes[parent_test_name]
    else:
        self._test_attrib = None
With this implementation, along with my override of wantMethod, each test spawned by the parent test case also inherits attributes from the parent method, which is what I needed.
Again, thanks to all who sent replies. This was quite a learning experience.
Would this fix your name issue?
def _actual_test(x, y):
    assert x == y

def test_yield():
    _actual_test.description = "test_yield_%s_%s" % (5, 5)
    yield _actual_test, 5, 5
    _actual_test.description = "test_yield_%s_%s" % (4, 8)  # fail
    yield _actual_test, 4, 8
    _actual_test.description = "test_yield_%s_%s" % (2, 2)
    yield _actual_test, 2, 2
The rename survives @attr too.
Does this work?
@attr(attribute1='someValue', attribute2='anotherValue')
def sample_test_generator(self):
    def get_f(f, key):
        return lambda x: f(), key

    for (key, value) in _input_dictionary.items():
        f = partial(self._do_test, key, value)
        f.attribute1 = 'someValue'
        yield get_f(f, key)

def _do_test(self, input1, input2):
    # Some code
The problem is that the local variables change after you create the lambda.
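As a quick, generic illustration of that late-binding behaviour (not part of the answer itself):
# All three lambdas close over the same variable `i` and see its final value.
late = [lambda: i for i in range(3)]
print([f() for f in late])    # [2, 2, 2]

# Binding the current value via a default argument (or a factory function like
# get_f above) captures it at creation time instead.
bound = [lambda i=i: i for i in range(3)]
print([f() for f in bound])   # [0, 1, 2]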
I'm trying to write a freeze decorator for Python.
The idea is as follows:
(In response to the two comments)
I might be wrong, but I think there are two main uses of test cases.
One is test-driven development: ideally, developers write test cases before writing the code. It usually helps define the architecture, because this discipline forces you to define the real interfaces before development. One may even consider that in some cases the person who dispatches jobs between developers writes the test cases and uses them to illustrate the specification he has in mind. I don't have any experience with using test cases like that.
The second is the idea that every project of a decent size with several programmers suffers from broken code. Something that used to work may get broken by a change that looked like an innocent refactoring. Though good architecture and loose coupling between components may help fight this phenomenon, you will sleep better at night if you have written some test cases to make sure that nothing will break your program's behavior.
HOWEVER, nobody can deny the overhead of writing test cases. In the first case one may argue that the test cases actually guide development and are therefore not to be considered an overhead. Frankly speaking, I'm a pretty young programmer, and if I were you, my word on this subject would not really be valuable... Anyway, I think that most companies/projects do not work like that, and that unit tests are mainly used in the second case...
In other words, rather than ensuring that the program is working correctly, they aim at checking that it will work the same in the future.
This need can be met without the cost of writing tests, by using this freezing decorator.
Let's say you have a function
def pow(n, k):
    if k == 0: return 1
    else: return n * pow(n, k - 1)
It is perfectly nice, and you want to rewrite it as an optimized version.
It is part of a big project, and you want it to give back the same result for a few values.
Rather than going through the pain of test cases, one could use some
kind of freeze decorator.
Something such that the first time the decorator is run, it runs the function with the defined args (below, 0 and 7) and saves the result in a map (f --> args --> result).
@freeze(2, 0)
@freeze(1, 3)
@freeze(3, 5)
@freeze(0, 0)
def pow(n, k):
    if k == 0: return 1
    else: return n * pow(n, k - 1)
The next time the program is executed, the decorator will load this map and check that the result of this function for these args has not changed.
I already quickly wrote the decorator (see below), but hit a few problems about which I need your advice...
from __future__ import with_statement
from collections import defaultdict
from types import GeneratorType
import cPickle

def __id_from_function(f):
    return ".".join([f.__module__, f.__name__])

def generator_firsts(g, N=100):
    try:
        if N == 0:
            return []
        else:
            return [g.next()] + generator_firsts(g, N - 1)
    except StopIteration:
        return []

def __post_process(v):
    specialized_postprocess = [
        (GeneratorType, generator_firsts),
        (Exception, str),
    ]
    try:
        val_mro = v.__class__.mro()
        for (ancestor, specialized) in specialized_postprocess:
            if ancestor in val_mro:
                return specialized(v)
        raise ""
    except:
        print "Cannot accept this as a value"
        return None

def __eval_function(f):
    def aux(args, kargs):
        try:
            return (True, __post_process(f(*args, **kargs)))
        except Exception, e:
            return (False, __post_process(e))
    return aux

def __compare_behavior(f, past_records):
    for (args, kargs, result) in past_records:
        assert __eval_function(f)(args, kargs) == result

def __record_behavior(f, past_records, args, kargs):
    registered_args = [(a, k) for (a, k, r) in past_records]
    if (args, kargs) not in registered_args:
        res = __eval_function(f)(args, kargs)
        past_records.append((args, kargs, res))

def __open_frz():
    try:
        with open(".frz", "r") as __open_frz:
            return cPickle.load(__open_frz)
    except:
        return defaultdict(list)

def __save_frz(past_records):
    with open(".frz", "w") as __open_frz:
        return cPickle.dump(past_records, __open_frz)

def freeze_behavior(*args, **kvargs):
    def freeze_decorator(f):
        past_records = __open_frz()
        f_id = __id_from_function(f)
        f_past_records = past_records[f_id]
        __compare_behavior(f, f_past_records)
        __record_behavior(f, f_past_records, args, kvargs)
        __save_frz(past_records)
        return f
    return freeze_decorator
Dumping and comparing results is not trivial for all types. Right now I'm thinking about using a function (I call it postprocess here) to solve this problem.
Basically, instead of storing res I store postprocess(res), and I compare postprocess(res1) == postprocess(res2) instead of comparing res1 and res2.
It is important to let the user overload the predefined postprocess function.
My first question is :
Do you know a way to check if an object is dumpable or not?
Defining a key for the decorated function is a pain. In the snippets I am using the function's module and its name.
Can you think of a smarter way to do that?
The snippet is kind of working, but it opens and closes the file both when testing and when recording. This is just a stupid prototype... but do you know a nice way to open the file, process the decorator for all functions, and close the file?
I intend to add some functionality to this. For instance, add the possibility to define an iterable to browse a set of arguments, record arguments from real use, etc.
What would you expect from such a decorator?
In general, would you use such a feature, knowing its limitations... especially when trying to use it with OOP?
"In general, would you use such a feature, knowing its limitation...?"
Frankly speaking -- never.
There are no circumstances under which I would "freeze" results of a function in this way.
The use case appears to be based on two wrong ideas: (1) that unit testing is either hard or complex or expensive; and (2) it could be simpler to write the code, "freeze" the results and somehow use the frozen results for refactoring. This isn't helpful. Indeed, the very real possibility of freezing wrong answers makes this a bad idea.
First, on "consistency vs. correctness". This is easier to preserve with a simple mapping than with a complex set of decorators.
Do this instead of writing a freeze decorator.
print "frozen_f=", dict( (i,f(i)) for i in range(100) )
The dictionary object that's created will work perfectly as a frozen result set. No decorator. No complexity to speak of.
Second, on "unit testing".
The point of a unit test is not to "freeze" some random results. The point of a unit test is to compare real results with results developed in another (simpler, more obvious, poorly-performing) way. Usually unit tests compare hand-developed results. Other times unit tests use obvious but horribly slow algorithms to produce a few key results.
The point of having test data around is not that it's a "frozen" result. The point of having test data is that it is an independent result. Done differently -- sometimes by different people -- that confirms that the function works.
Sorry. This appears to me to be a bad idea; it looks like it subverts the intent of unit testing.
"HOWEVER, Nobody can deny the overhead of writting test cases"
Actually, many folks would deny the "overhead". It isn't "overhead" in the sense of wasted time and effort. For some of us, unittests are essential. Without them, the code may work, but only by accident. With them, we have ample evidence that it actually works; and the specific cases for which it works.
Are you looking to implement invariants or post conditions?
You should specify the result explicitly; this will remove most of your problems.