Error when running Python parameterized test method

IDE: PyCharm Community Edition 3.1.1
Python: 2.7.6
I'm using DDT for test parameterization: http://ddt.readthedocs.org/en/latest/example.html
I want to select and run a parameterized test method from a test class in PyCharm; see this example:
from unittest import TestCase
from ddt import ddt, data

@ddt
class Test_parameterized(TestCase):

    def test_print_value(self):
        print 10
        self.assertIsNotNone(10)

    @data(10, 20, 30, 40)
    def test_print_value_parametrized(self, value):
        print value
        self.assertIsNotNone(value)
When I navigate to the first test method, test_print_value, in the code and hit Ctrl+Shift+F10 (or use the Run 'Unittest test_print...' option from the context menu),
the test is executed.
When I try the same with the parameterized test, I get an error:
Test framework quit unexpectedly
And output contains:
/usr/bin/python2 /home/s/App/pycharm-community-3.1.1/helpers/pycharm/utrunner.py
/home/s/Documents/Py/first/fib/test_parametrized.py::Test_parameterized::test_print_value_parametrized true
Testing started at 10:35 AM ...
Traceback (most recent call last):
File "/home/s/App/pycharm-community-3.1.1/helpers/pycharm/utrunner.py", line 148, in <module>
testLoader.makeTest(getattr(testCaseClass, a[2]), testCaseClass))
AttributeError: 'TestLoader' object has no attribute 'makeTest'
Process finished with exit code 1
However, when I run all tests in the class (by navigating to the test class name in the code and using the same run option), all parameterized and non-parameterized tests are executed together without errors.
The problem is how to run a parameterized method from the test class independently; the workaround of putting one parameterized test per test class works, but it is a rather messy solution.

Actually this is an issue in PyCharm's utrunner.py, which runs the unittests. If you are using DDT, the @ddt and @data decorators are responsible for creating a separate test for each data entry. In the background these tests get different names, e.g.
@ddt
class MyTestClass(unittest.TestCase):

    @data(1, 2)
    def test_print(self, command):
        print command
This would create tests named:
- test_print_1_1
- test_print_2_2
When you try to run one test from the class (Right Click -> Run 'Unittest test_print'), PyCharm has a problem loading your tests test_print_1_1 and test_print_2_2, as it is trying to load a test named test_print.
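If you want to confirm the generated names yourself, here is a minimal sketch (assuming the ddt package is installed) that asks the standard loader for the test case names:
import unittest
from ddt import ddt, data

@ddt
class MyTestClass(unittest.TestCase):

    @data(1, 2)
    def test_print(self, command):
        print command

# The loader reports the names DDT generated, not 'test_print' itself:
print unittest.TestLoader().getTestCaseNames(MyTestClass)
# e.g. ['test_print_1_1', 'test_print_2_2']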
When you look at the code of utrunner.py:
if a[1] == "":
    # test function, not method
    all.addTest(testLoader.makeTest(getattr(module, a[2])))
else:
    testCaseClass = getattr(module, a[1])
    try:
        all.addTest(testCaseClass(a[2]))
    except:
        # class is not a testcase inheritor
        all.addTest(
            testLoader.makeTest(getattr(testCaseClass, a[2]), testCaseClass))
and debug it, you will see the issue.
OK. So my fix is to load the proper tests from the class. It is just a workaround and it is not perfect; however, as DDT adds each test case as another method on the class, it is hard to find a better way to detect the right test cases than comparing by string. So instead of:
try:
    all.addTest(testCaseClass(a[2]))
you can try to use:
try:
    all_tests = testLoader.getTestCaseNames(getattr(module, a[1]))
    for test in all_tests:
        if test.startswith(a[2]):
            if test.split(a[2])[1][1].isdigit():
                all.addTest(testLoader.loadTestsFromName(test, getattr(module, a[1])))
Checking whether a digit follows the main name is a workaround to exclude similar test cases such as:
test_print
test_print_another_case
But of course it would not exclude cases:
test_if_prints_1
test_if_prints_2
So in the worst case, if we haven't got a good naming convention, we will run some similar tests as well, but in most cases it should just work for you.
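A slightly stricter variant of that string comparison, hedged on the assumption that DDT always appends underscore-plus-digit suffixes like _1_1 to the base name (as in the example above), would be a regular expression match. This is my variation, not part of the original fix:
import re

# Accept only names DDT could have generated from a[2]: the base name
# followed by suffixes like _1_1; plain 'test_print' or
# 'test_print_another_case' will not match.
ddt_name = re.compile(re.escape(a[2]) + r'(_\d+)+$')
for test in testLoader.getTestCaseNames(getattr(module, a[1])):
    if ddt_name.match(test):
        all.addTest(testLoader.loadTestsFromName(test, getattr(module, a[1])))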

When I ran into this error, it was because I had implemented an __init__ function as follows:
def __init__(self):
    super(ClassInheritingTestCase, self).__init__()
When I changed it to the following, it worked properly:
def __init__(self, *args, **kwargs):
    super(ClassInheritingTestCase, self).__init__(*args, **kwargs)
The problem was caused by me not propagating *args and **kwargs through properly.
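To make the contrast concrete, here is a minimal self-contained sketch (the fixture attribute is illustrative) of why the forwarding matters: unittest instantiates each test as TestCase(methodName), and that argument must reach the base class:
import unittest

class ClassInheritingTestCase(unittest.TestCase):
    def __init__(self, *args, **kwargs):
        # unittest creates one instance per test, passing the test method
        # name: ClassInheritingTestCase('test_something'). Forward it.
        super(ClassInheritingTestCase, self).__init__(*args, **kwargs)
        self.fixture = 42

    def test_something(self):
        self.assertEqual(self.fixture, 42)

if __name__ == '__main__':
    unittest.main()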

Related

How to use Nose tests to test an individual method on a Mac?

I'm using Nose tests to test a particular function. After entering the correct file directory, I run the following command in the Mac terminal: nosetests test_hardening.py: TestVoceIsotropicThetaHardening.test_dhistory.
test_hardening.py is a Python file, TestVoceIsotropicThetaHardening is a Python class, and test_dhistory is the particular method I am running tests on.
I am consistently getting the following error: ModuleNotFoundError: No module named 'TestVoceIsotropicThetaHardening'.
For your reference, here is a snippet of my code:
import unittest
import numpy as np

class HardeningBase:
    # mixin: gains the assert* methods when combined with unittest.TestCase below
    def test_dhistory(self):
        ...  # some code to calculate rv1, rv2, rv3, exact, and numer
        print(rv1)
        print(rv2)
        print(rv3)
        self.assertTrue(np.allclose(exact, numer, rtol=1.0e-3))

class TestVoceIsotropicThetaHardening(unittest.TestCase, HardeningBase):
    def setUp(self):
        self.a = 1
        self.b = 2
        self.c = 3
Is there a particular way for me to test test_dhistory of the child class TestVoceIsotropicThetaHardening using Nose on a Mac terminal?
Resolved the problem:
nosetests test_hardening.py:TestVoceIsotropicThetaHardening.test_dhistory
instead of
nosetests test_hardening.py: TestVoceIsotropicThetaHardening.test_dhistory
You can't leave a space between the colon and the class name.
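As a side note (not from the original answer): once the class is importable, the stock unittest runner can address the same method with dots instead of a colon, e.g. python -m unittest test_hardening.TestVoceIsotropicThetaHardening.test_dhistory, assuming test_hardening.py is on the module search path.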

Separate test cases per input files?

Most test frameworks assume that "1 test = 1 Python method/function",
and consider a test as passed when the function executes without
raising assertions.
I'm testing a compiler-like program (a program that reads *.foo
files and processes their contents), for which I want to execute the same test on many input (*.foo) files. IOW, my test looks like:
class Test(unittest.TestCase):
    def one_file(self, filename):
        ...  # do the actual test
    def list_testcases(self):
        ...  # essentially os.listdir('tests/') and filter *.foo files
    def test_all(self):
        for f in self.list_testcases():
            self.one_file(f)
My current code uses unittest from Python's standard library, i.e. one_file uses self.assert...(...) statements to check whether the test passes.
This works, in the sense that I do get a program which succeeds/fails when my code is OK/buggy, but I'm losing a lot of the advantages of the testing framework:
I don't get relevant reporting like "X failures out of Y tests", nor the list of passed/failed tests. (I'm planning to use such a system not only to test my own development but also to grade students' code as a teacher, so reporting is important for me.)
I don't get test independence. The second test runs in the environment left by the first, and so on. The first failure stops the test suite: test cases coming after a failure are not run at all.
I get the feeling that I'm abusing my test framework: there's only one test function, so the automatic test discovery of unittest sounds like overkill, for example. The same code could (should?) be written in plain Python with a basic assert.
An obvious alternative is to change my code to something like
class Test(unittest.TestCase):
    def one_file(self, filename):
        ...  # do the actual test
    def test_file1(self):
        self.one_file("first-testcase.foo")
    def test_file2(self):
        self.one_file("second-testcase.foo")
Then I get all the advantages of unittest back, but:
It's a lot more code to write.
It's easy to "forget" a testcase, i.e. create a test file in
tests/ and forget to add it to the Python test.
I can imagine a solution where I would generate one method per testcase dynamically (along the lines of setattr(self, 'test_file' + str(n), ...)), to generate the code of the second solution without having to write it by hand. But that sounds like overkill for a use case which doesn't seem so complex.
How could I get the best of both, i.e.
automatic testcase discovery (list tests/*.foo files), test
independence and proper reporting?
If you can use pytest as your test runner, then this is actually pretty straightforward using the parametrize decorator:
import pytest, glob

all_files = glob.glob('some/path/*.foo')

@pytest.mark.parametrize('filename', all_files)
def test_one_file(filename):
    ...  # do the actual test
This will also automatically name the tests in a useful way, so that you can see which files have failed:
$ py.test
================================== test session starts ===================================
platform darwin -- Python 3.6.1, pytest-3.1.3, py-1.4.34, pluggy-0.4.0
[...]
======================================== FAILURES ========================================
_____________________________ test_one_file[some/path/a.foo] _____________________________

filename = 'some/path/a.foo'

    @pytest.mark.parametrize('filename', all_files)
    def test_one_file(filename):
>       assert False
E       assert False

test_it.py:7: AssertionError
_____________________________ test_one_file[some/path/b.foo] _____________________________

filename = 'some/path/b.foo'

    @pytest.mark.parametrize('filename', all_files)
    def test_one_file(filename):
[...]
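As a footnote to this approach (not part of the original answer): if the full paths make for long test ids, parametrize also accepts an ids argument, which is standard pytest API; a short sketch:
import glob
import os

import pytest

all_files = glob.glob('some/path/*.foo')

# Label each case by its basename, so failures read test_one_file[a.foo]
@pytest.mark.parametrize('filename', all_files,
                         ids=[os.path.basename(f) for f in all_files])
def test_one_file(filename):
    ...  # do the actual test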
Here is a solution, although it might be considered not very beautiful... The idea is to dynamically create new functions, add them to the test class, and use the function names as arguments (e.g., filenames):
# imports
import inspect
import unittest

# test class
class Test(unittest.TestCase):
    # example test case
    def test_default(self):
        print('test_default')
        self.assertEqual(2, 2)

# set string for creating new function
func_string = """def test(cls):
    # get function name and use it to pass information
    filename = inspect.stack()[0][3]
    # print function name for demonstration purposes
    print(filename)
    # dummy test for demonstration purposes
    cls.assertEqual(type(filename), str)"""

# add new test for each item in list
for f in ['test_bla', 'test_blu', 'test_bli']:
    # set name of new function
    name = func_string.replace('test', f)
    # create new function
    exec(name)
    # add new function to test class
    setattr(Test, f, eval(f))

if __name__ == "__main__":
    unittest.main()
This correctly runs all four tests and returns:
> test_bla
> test_bli
> test_blu
> test_default
> Ran 4 tests in 0.040s
> OK
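For comparison, here is a sketch of the closure-plus-setattr variant that the question itself hinted at, avoiding exec; the tests/*.foo layout is taken from the question, the helper names and the size check are mine:
import glob
import os
import unittest

class Test(unittest.TestCase):
    def one_file(self, filename):
        # do the actual test; this size check is just a stand-in
        self.assertTrue(os.path.getsize(filename) > 0)

def _make_test(filename):
    # the closure captures one filename per generated test method
    def test(self):
        self.one_file(filename)
    return test

# one test method per input file, named after the file itself
for path in sorted(glob.glob('tests/*.foo')):
    name = 'test_' + os.path.splitext(os.path.basename(path))[0]
    setattr(Test, name, _make_test(path))

if __name__ == '__main__':
    unittest.main()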

How to decide where Python debugger stops and which line is to be blamed?

Background:
I write Squish GUI tests in Python. I tried to make the test code as Pythonic and DRY as I could, and hence I moved all repeating code into separate classes/modules.
Problem definition:
A test.verify or assert statement tells the debugger to stop at the very line where the statement is, which in most cases is in the module with the details of a single test step. This line is shown in Eclipse during a manual run and output by the automatic test in Jenkins.
To actually see what failed in the test, it would be far better to stop the debugger at the invocation point of the procedures with the asserts inside. Then the tester / GUI developer can spot which actions on the GUI lead to a problem and what was checked.
Example:
test_abstract.py
class App():
    def open_file(self, filename):
        pass # example

    def test_file_content(self, content):
        # squish magic to get file content from textbox etc.
        # ...
        test.verify(content in textBoxText)
test_file_opening.py
def main():
    app = App()
    app.open_file('filename.txt')
    app.test_file_content('lorem')
As the test fails on the test.verify() invocation, the debugger stops and points to the test_abstract.py file. It actually says nothing about the test steps that led to this failure.
Is there a way to tell the debugger to ignore the direct place of the test failure and show where the procedure containing the test was invoked? I'm looking for an elegant way that would not need too much code in the generic test file itself.
A not-ideal solution which works:
For now I'm not using test.verify inside the abstract modules; I invoke it in the particular test case code instead. Generalized test functions return a tuple (test_result, test_descriptive_message_with_error) which is unpacked with *:
def test_file_content(content):
    # test code
    return (result, 'Test failed because...')
and test case code contains:
test.verify(*test_file_content('lorem'))
which works fine, but each and every test case has to contain a lot of test.verify(*...) calls, and test developers have to remember to do it. Not to mention that it looks WET... (not DRY).
Yes! If you have access to Squish 6, there is some new functionality to do exactly that. The fixateResultContext() function will cause all results to be rewritten such that they appear to originate at an ancestor frame. See the documentation.
If you are using Python, this can be wrapped into a handy context manager:
def resultsReportedAtCallsite(ancestorLevel = 1):
    class Ctx:
        def __enter__(self):
            test.fixateResultContext(ancestorLevel + 1)
        def __exit__(self, exc_type, exc_value, traceback):
            test.restoreResultContext()
    return Ctx()

def libraryFunction():
    with resultsReportedAtCallsite():
        test.compare("Apples", "Oranges")
Any later call to libraryFunction() that fails will point at the line of code containing libraryFunction(), and not the test.compare() within.
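If helper functions nest, my reading of the ancestorLevel parameter in the wrapper above is that it can be raised to skip additional frames; a hypothetical sketch (not from the original answer):
def innerHelper():
    # ancestorLevel=2 skips innerHelper and outerHelper, so a failure is
    # reported at whatever line called outerHelper()
    with resultsReportedAtCallsite(ancestorLevel = 2):
        test.compare("Apples", "Oranges")

def outerHelper():
    innerHelper()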

Preventing unittest from calling sys.exit()

I'm having trouble preventing unittest from calling sys.exit(). I found Unittest causing sys.exit() while searching for an answer. I modified the code to:
unittest.TextTestRunner().run(unittest.TestLoader().loadTestsFromTestCase(run_tests.Compare))
and put it in my main. I believe the only thing I changed is that my test is in a separate file named run_tests.py. It looks as such:
import unittest
from list_of_globals import globvar

value1 = globvar.relerror
value2 = globvar.tolerance

class Compare(unittest.TestCase):
    def __init__(self, value1, value2):
        super(Compare, self).__init__()
        self.value1 = value1
        self.value2 = value2

    def runTest(self):
        self.assertTrue(self.value1 < self.value2)
When I run my main function I receive the following error
File "./main.py", line 44, in <module> unittest.TextTestRunner().run(unittest.TestLoader().loadTestsFromTestCase(run_tests.Compare))
File "/usr/lib64/python2.6/unittest.py", line 550, in loadTestsFromTestCase
return self.suiteClass(map(testCaseClass, testCaseNames))
TypeError: __init__() takes exactly 3 arguments (2 given)
I don't understand why this error occurs or how to go about fixing it. Any help would be much appreciated. I am using Python 2.6 on Linux.
Your problem is with your unit test class.
When writing unit tests you are not meant to override __init__, as the __init__ method is used to process which tests are to be run. For test configuration, such as setting variables, you should write a method called setUp, which takes no parameters. For instance:
class Compare(unittest.TestCase):
    def setUp(self):
        self.value1 = globvar.relerror
        self.value2 = globvar.tolerance

    def runTest(self):
        self.assertTrue(self.value1 < self.value2)
The problem in the linked question is that unittest.main exits Python after all tests have been run. That was not the desired outcome for that user, as they were running tests from IPython Notebook, which is essentially an enhanced interpreter, so it too was terminated by sys.exit. That is, exit was called outside of the tests rather than in a test. I'd assumed a function under test was calling exit.
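For completeness, a minimal sketch of the corrected flow, assuming run_tests.py now defines the setUp-based Compare class above:
# main.py
import unittest
import run_tests

suite = unittest.TestLoader().loadTestsFromTestCase(run_tests.Compare)
unittest.TextTestRunner().run(suite)  # runs the suite without calling sys.exit()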

PyCharm and unittest won't run

I have a problem with PyCharm 3.0.1: I can't run basic unittests.
Here is my code:
import unittest
from MysqlServer import MysqlServer
class MysqlServerTest(unittest.TestCase):
    def setUp(self):
        self.mysqlServer = MysqlServer("ip", "username", "password", "db", port)

    def test_canConnect(self):
        self.mysqlServer.connect()
        self.fail()

if __name__ == '__main__':
    unittest.main()
Here is all the stuff PyCharm gives me:
Unable to attach test reporter to test framework or test framework quit unexpectedly
It also says
AttributeError: class TestLoader has no attribute '__init__'
And the event log:
2:14:28 PM Empty test suite
The thing is, when I run the Python file manually (with PyCharm, as a script), I get:
Ran 1 test in 0.019s
FAILED (failures=1)
Which is normal, as I make the test fail on purpose. I am a bit clueless about what is going on.
Here is more information:
Setting->Python Integrated Tools->Package requirements file: <PROJECT_HOME>/src/test
Default test runner: Unittests
pyunit 1.4.1 Is installed
EDIT: The same thing happens with the basic usage example from unittest.py:
import unittest

class IntegerArithmenticTestCase(unittest.TestCase):
    def testAdd(self):  ## test method names begin 'test*'
        self.assertEquals((1 + 2), 3)
        self.assertEquals(0 + 1, 1)

    def testMultiply(self):
        self.assertEquals((0 * 10), 0)
        self.assertEquals((5 * 8), 40)

if __name__ == '__main__':
    unittest.main()
Although this wasn't the case with the original poster, I'd like to note that another thing that will cause this is test functions that don't begin with the word 'test':
class TestSet(unittest.TestCase):
    def test_will_work(self):
        pass

    def will_not_work(self):
        pass
This is probably because you did not set your testing framework correctly in the settings dialogue.
Definitely a PyCharm thing; repeating from above:
Run -> Edit Configurations, select the instances of the test, and press the red minus button.
I had the exact same problem. It turned out that whether an individual test was recognized was related to the file name. In my case, PyCharm didn't recognize test_calculate_kpi.py as a test; once renamed to test_calculate_kpis.py, it was immediately recognized.
4 steps to generate HTML reports with PyCharm and most default test runners (py.test, nosetest, unittest, etc.):
make sure you give your test methods a 'test' prefix (as stated before by others), e.g. def test_run1()
the widespread example code from the test report package's documentation is:
import unittest
import HtmlTestRunner

class TestGoodnessOfFitTests(unittest.TestCase):
    def test_run1(self):
        ...

if __name__ == '__main__':
    unittest.main(testRunner=HtmlTestRunner.HTMLTestRunner(output='t.html'))
This code is usually located in a test class file which contains all the unittests and the "main" catcher code. This code continuously yielded the warning Empty test suite for me. Therefore, remove all the if __name__ ... code, so that the file only contains the TestGoodnessOfFitTests class. Then, additionally:
create a new main.py in the same directory as the test class file and use the following code:
import unittest
import HtmlTestRunner
# import TestGoodnessOfFitTests from the test class file here

test_class = TestGoodnessOfFitTests()
unittest.main(module=test_class,
              testRunner=HtmlTestRunner.HTMLTestRunner(output='t.html'))
Remove your old run configurations, right-click on your main.py and press Run 'main'. Verify the correct settings under Preferences -> Python Integrated Tools -> Default test runner (in my case py.test and nose worked).
Output:
Running tests...
----------------------------------------------------------------------
test_gaussian_dummy_kolmogorov_cdf_1 (tests.evaluation_tests.TestGoodnessOfFitTests) ... OK (1.033877)s
----------------------------------------------------------------------
Ran 1 test in 0:00:01
OK
Generating HTML reports...
I had the same problem as well; I felt the workspace was not properly refreshed. I tried File -> Synchronize (Ctrl+Alt+Y), but that wasn't the solution. I just renamed my test Python file and ran the code again, and it started working fine.
I had the same problem. The file was named test_exercise_detectors.py (note the plural "detectors") and it was in a package named test_exercise_detectors. Changing the name of the file to test_exercise_detector.py (singular "detector") fixed the problem.
Adding
if __name__ == "__main__":
    unittest.main()
fixed the issue for me.
