Python nose processes parameter and dynamically generated tests

I have the minimal working example below for dynamically generating test cases and running them using nose.
import logging
import unittest

import nose


class RegressionTests(unittest.TestCase):
    """Our base class for dynamically created test cases."""

    def regression(self, input):
        """Method that runs the test."""
        self.assertEqual(1, 1)


def create(input):
    """Called by create_all below to create each test method."""
    def do_test(self):
        self.regression(input)
    return do_test


def create_all():
    """Create all of the unit test cases dynamically."""
    logging.info('Start creating all unit tests.')
    inputs = ['A', 'B', 'C']
    for input in inputs:
        testable_name = 'test_{0}'.format(input)
        testable = create(input)
        testable.__name__ = testable_name
        class_name = 'Test_{0}'.format(input)
        globals()[class_name] = type(class_name, (RegressionTests,), {testable_name: testable})
        logging.debug('Created test case %s with test method %s', class_name, testable_name)
    logging.info('Finished creating all unit tests.')


if __name__ == '__main__':
    # Create all the test cases dynamically
    create_all()

    # Execute the tests
    logging.info('Start running tests.')
    nose.runmodule(name='__main__')
    logging.info('Finished running tests.')
When I run the tests using python nose_mwe.py --nocapture --verbosity=2, they run fine and I get the output:
test_A (__main__.Test_A) ... ok
test_B (__main__.Test_B) ... ok
test_C (__main__.Test_C) ... ok
However, when I try to use the processes command line parameter to make the tests run in parallel, e.g. python nose_mwe.py --processes=3 --nocapture --verbosity=2, I get the following errors.
Failure: ValueError (No such test Test_A.test_A) ... ERROR
Failure: ValueError (No such test Test_B.test_B) ... ERROR
Failure: ValueError (No such test Test_C.test_C) ... ERROR
Is there something simple that I am missing here to allow the dynamically generated tests to run in parallel?

As far as I can tell, you just need to make sure that create_all() runs in every test process: the multiprocess plugin re-imports the test module in each worker and looks tests up by name, so classes created only inside the if __name__ == '__main__' block don't exist there. Just moving the call out of the __main__ block works for me, so the end of the file would look like:
# as above

# Create all the test cases dynamically
create_all()

if __name__ == '__main__':
    # Execute the tests
    logging.info('Start running tests.')
    nose.runmodule(name='__main__')
    logging.info('Finished running tests.')

How do I run a fixture only when the test fails?

I have the following example:
conftest.py:
@pytest.fixture
def my_fixture_1(main_device):
    yield
    if FAILED:
        ...  # -- code lines --
    else:
        pass
main.py:
def my_test(my_fixture_1):
    main_device = ...
    # -- code lines --
    assert 0
    # -- code lines --
    assert 1
When assert 0 fails, for example, the test should fail and the failure branch of my_fixture_1 should execute. If the test passes, that code must not execute. I tried using hookimpl but didn't find a solution; the fixture code always executes even if the test passes.
Note that main_device is the connected device on which my test is running.
You could use request as an argument to your fixture. From that, you can check the status of the corresponding tests, i.e. whether anything has failed or not. In case of failure, you can execute the code you want to run on failure. In code, that reads as:
@pytest.fixture
def my_fixture_1(request):
    yield
    if request.session.testsfailed:
        print("Only print if failed")
Of course, the fixture will always run but the branch will only be executed if the corresponding test failed.
In Simon Hawe's answer, request.session.testsfailed denotes the number of test failures in that particular test run.
Here is an alternative solution that I can think of.
import os

import pytest


@pytest.fixture(scope="module")
def main_device():
    return None


@pytest.fixture(scope='function', autouse=True)
def my_fixture_1(main_device):
    yield
    if os.environ["test_result"] == "failed":
        print("+++++++++ Test Failed ++++++++")
    elif os.environ["test_result"] == "passed":
        print("+++++++++ Test Passed ++++++++")
    elif os.environ["test_result"] == "skipped":
        print("+++++++++ Test Skipped ++++++++")


def pytest_runtest_logreport(report):
    if report.when == 'call':
        os.environ["test_result"] = report.outcome
You can do your implementation directly in the pytest_runtest_logreport hook itself, but the drawback is that you won't have access to any fixtures there other than the report.
So, if you need main_device, you have to go with a custom fixture like the one shown above.
Use @pytest.fixture(scope='function', autouse=True), which runs the fixture automatically for every test case; you don't have to add it as an argument to every test function.
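If you need the outcome of the individual test rather than the session-wide failure count or an environment variable, the pytest documentation describes a hookwrapper recipe that stores each phase's report on the test item; a minimal sketch (the fixture name and the print are just illustrative):
# conftest.py
import pytest


@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # store the report for each phase ("setup", "call", "teardown") on the item
    outcome = yield
    rep = outcome.get_result()
    setattr(item, "rep_" + rep.when, rep)


@pytest.fixture
def my_fixture_1(request):
    yield
    # run the failure branch only if this particular test's call phase failed
    if hasattr(request.node, "rep_call") and request.node.rep_call.failed:
        print("Only print if this test failed")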

Unable to run unittest test suite (using TestCase subclasses) in PyCharm, whereas it runs in Python console

I am following this answer to generate multiple test cases programmatically using the unittest approach.
Here's the code:
import unittest

import my_code

# Test cases (list of input/output pairs, not explicitly mentioned here)
known_values = [
    {'input': {}, 'output': {}},
    {'input': {}, 'output': {}}
]


# Subclass TestCase
class KnownGood(unittest.TestCase):
    def __init__(self, input_params, output):
        super(KnownGood, self).__init__()
        self.input_params = input_params
        self.output = output

    def runTest(self):
        self.assertEqual(
            my_code.my_func(self.input_params['a'], self.input_params['b']),
            self.output
        )


# Test suite
def suite():
    global known_values
    suite = unittest.TestSuite()
    suite.addTests(KnownGood(input_params=k['input'], output=k['output']) for k in known_values)
    return suite


if __name__ == '__main__':
    unittest.TextTestRunner().run(suite())
If I open a Python console in PyCharm and run the above code chunk (running unittest.TextTestRunner() without the if condition), the tests run successfully.
..
----------------------------------------------------------------------
Ran 2 tests in 0.002s
OK
<unittest.runner.TextTestResult run=2 errors=0 failures=0>
If I run the test by clicking on the green run button for the if __name__ block in PyCharm, I get the following error:
TypeError: __init__() missing 1 required positional argument: 'output'
Process finished with exit code 1
Empty suite
Empty suite
Python version: 3.7
Project structure: (- denotes folder and . denotes file)
- project_folder
    - tests
        . test_my_code.py
    . my_code.py
The problem is that, by default, PyCharm runs unittest or pytest (whatever you have configured as the test runner) on the module if it identifies it as containing tests, ignoring the part inside if __name__ == '__main__'.
That basically means that it executes unittest.main() instead of your customized version of running the tests.
The only solution I know to get the correct run configuration is to manually add it:
select Edit configurations... in the configuration list
add a new config using +
select Python as type
fill in the Script path by your test path (or use the browse button)
Maybe someone knows a more convenient way to force PyCharm to use "Run" instead of "Run Test"...
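As a further workaround (not from the original answers), generating one regular test_* method per known value keeps the default unittest loader happy, so PyCharm's test runner discovers the cases without a custom suite. A sketch with illustrative values, assuming my_code.my_func from the question:
import unittest

import my_code  # assumed from the question


# Illustrative values; replace with the real known_values pairs.
known_values = [
    {'input': {'a': 1, 'b': 2}, 'output': 3},
    {'input': {'a': 2, 'b': 3}, 'output': 5},
]


class KnownGood(unittest.TestCase):
    pass


def _make_test(params, expected):
    def test(self):
        self.assertEqual(my_code.my_func(params['a'], params['b']), expected)
    return test


# Attach one test_* method per case; the default loader (and PyCharm) picks these up.
for i, case in enumerate(known_values):
    setattr(KnownGood, 'test_known_value_{}'.format(i),
            _make_test(case['input'], case['output']))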

Using pytest.fixture to opt out of function

I have a function that I don't want to run every time I run tests against my Flask-RESTful API. This is an example of the setup:
class function(Resource):
    def post(self):
        print 'test'
        do_action()
        return {'success': True}
In my test I want to run this function, but ignore do_action(). How would I make this happen using pytest?
This seems like a good opportunity to mark the tests:
@pytest.mark.foo_test
class function(Resource):
    def post(self):
        print 'test'
        do_action()
        return {'success': True}
Then if you call with
py.test -v -m foo_test
It will run only those tests marked "foo_test"
If you call with
py.test -v -m "not foo_test"
It will run all tests not marked "foo_test"
You can mock do_action in your test:
def test_post(resource, mocker):
    m = mocker.patch.object(module_with_do_action, 'do_action')
    resource.post()
    assert m.call_count == 1
So the actual function will not be called in this test, with the added benefit that you can check whether the post implementation is actually calling the function. This requires pytest-mock to be installed (shameless plug).
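For comparison, pytest's built-in monkeypatch fixture can achieve the same thing without an extra plugin. A sketch, where my_api_module is a placeholder for whatever module defines function and do_action:
import my_api_module  # placeholder for the module that defines `function` and do_action


def test_post_without_side_effect(monkeypatch):
    calls = []
    # replace do_action with a stub that only records the call
    monkeypatch.setattr(my_api_module, 'do_action', lambda *args, **kwargs: calls.append(args))
    result = my_api_module.function().post()
    assert result == {'success': True}
    assert len(calls) == 1  # post() still tried to invoke do_action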

How to use Python Unittest TearDownClass with TestResult.wasSuccessful()

I wanted to use setUpClass and tearDownClass so that setup and teardown are performed only once for each test. However, it keeps failing for me when I call tearDownClass. I only want to record one test result: PASS if both tests passed, or FAIL if both tests failed. If I use only setUp and tearDown then everything works fine:
Calling setUpClass and tearDownClass:
#!/usr/bin/python
import datetime
import itertools
import logging
import os
import sys
import time
import unittest

LOGFILE = 'logfile.txt'


class MyTest(unittest.TestCase):
    global testResult
    testResult = None

    @classmethod
    def setUpClass(self):
        ## test result for DB Entry:
        self.dbresult_dict = {
            'SCRIPT': 'MyTest.py',
            'RESULT': testResult,
        }

    def test1(self):
        expected_number = 10
        actual_number = 10
        self.assertEqual(expected_number, actual_number)

    def test2(self):
        expected = True
        actual = True
        self.assertEqual(expected, actual)

    def run(self, result=None):
        self.testResult = result
        unittest.TestCase.run(self, result)

    @classmethod
    def tearDownClass(self):
        ok = self.testResult.wasSuccessful()
        errors = self.testResult.errors
        failures = self.testResult.failures
        if ok:
            self.dbresult_dict['RESULT'] = 'Pass'
        else:
            logging.info(' %d errors and %d failures',
                         len(errors), len(failures))
            self.dbresult_dict['RESULT'] = 'Fail'


if __name__ == '__main__':
    logger = logging.getLogger()
    logger.addHandler(logging.FileHandler(LOGFILE, mode='a'))
    stderr_file = open(LOGFILE, 'a')
    runner = unittest.TextTestRunner(verbosity=2, stream=stderr_file, descriptions=True)
    itersuite = unittest.TestLoader().loadTestsFromTestCase(MyTest)
    runner.run(itersuite)
    sys.exit()
    unittest.main(module=itersuite, exit=True)
    stderr_file.close()
Error:
test1 (__main__.MyTest) ... ok
test2 (__main__.MyTest) ... ok
ERROR
===================================================================
ERROR: tearDownClass (__main__.MyTest)
-------------------------------------------------------------------
Traceback (most recent call last):
File "testTearDownClass.py", line 47, in tearDownClass
ok = self.testResult.wasSuccessful()
AttributeError: type object 'MyTest' has no attribute 'testResult'
----------------------------------------------------------------------
Ran 2 tests in 0.006s
FAILED (errors=1)
Like @Marcin already pointed out, you're using the unittest framework in a way it isn't intended to be used.
To see whether the tests are successful, you compare the given values with the expected ones, like you already did: assertEqual(given, expected). unittest will then collect a summary of the failed ones; you don't have to do this manually.
If two tests need to pass or fail together, they should be combined into ONE test, maybe as an additional one if the individual tests also need to be checked on their own. This is not something you want to save and load afterwards; the tests themselves should be as stateless as possible.
When you say you want to run the setUp and tearDown 'once per test', do you mean once per test method or once per test run? This matters if you have more than one test method inside your class (see the sketch below):
- setUp() is called before each test method
- tearDown() is called after each test method
- setUpClass() is called once per class (before the first test method of the class)
- tearDownClass() is called once per class (after the last test method of the class)
Here's the official documentation
Here's a related answer
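A tiny sketch that makes this ordering visible when run with python -m unittest -v (my own illustration, not from the original answer):
import unittest


class Demo(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        print('setUpClass: once, before the first test in the class')

    def setUp(self):
        print('setUp: before each test method')

    def test_one(self):
        pass

    def test_two(self):
        pass

    def tearDown(self):
        print('tearDown: after each test method')

    @classmethod
    def tearDownClass(cls):
        print('tearDownClass: once, after the last test in the class')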
Change tearDownClass(self) to tearDownClass(cls) and setUpClass(self) to setUpClass(cls).
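If the goal is just to record a single Pass/Fail entry for the whole class, another option is to inspect the TestResult that the runner returns, outside the TestCase, instead of smuggling it into tearDownClass. A sketch reusing the question's dbresult_dict idea:
import unittest

if __name__ == '__main__':
    suite = unittest.TestLoader().loadTestsFromTestCase(MyTest)
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    dbresult_dict = {
        'SCRIPT': 'MyTest.py',
        'RESULT': 'Pass' if result.wasSuccessful() else 'Fail',
    }
    print(dbresult_dict)  # or write it to the database here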

nosetests not running all the tests in a file

I run
nosetests -v --nocapture --nologcapture tests/reward/test_stuff.py
and I get
----------------------------------------------------------------------
Ran 7 tests in 0.005s
OK
The tests without decorators run fine; however, I have some fixture tests set up like so which aren't run by the above command:
@use_fixtures(fixtures.ap_user_data, fixtures.ap_like_data, fixtures.other_reward_data)
def test_settings_are_respected(data):
    """
    Tests setting takes effect
    """
    res = func()
    assert len(res) > 0
the decorator is:
def use_fixtures(*fixtures):
    """
    Decorator for tests, abstracts the build-up required when using fixtures
    """
    def real_use_fixtures(func):
        def use_fixtures_inner(*args, **kwargs):
            env = {}
            for cls in fixtures:
                name = cls.__name__
                table = name[:-5]  # curtails _data
                env[table] = get_orm_class(table)
            fixture = SQLAlchemyFixture(env=env, engine=engine, style=TrimmedNameStyle(suffix="_data"))
            return fixture.with_data(*fixtures)(func)()
        return use_fixtures_inner
    return real_use_fixtures
Is my use of decorators stopping nosetests from running my tests?
I don't know whether your decorator is causing nose to miss tests, but what if you take one decorated test and remove the decorator long enough to see whether it's still missed?
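One likely cause worth checking (my guess, not confirmed in the thread): nose collects test functions by name, and the wrapper your decorator returns is called use_fixtures_inner, which doesn't match nose's test name pattern. Wrapping it with functools.wraps preserves the original test_* name. A sketch of the question's decorator with that one change (get_orm_class, engine, SQLAlchemyFixture and TrimmedNameStyle come from the question's module):
import functools


def use_fixtures(*fixtures):
    """
    Decorator for tests, abstracts the build-up required when using fixtures
    """
    def real_use_fixtures(func):
        @functools.wraps(func)  # keep the original test_* name so nose still collects it
        def use_fixtures_inner(*args, **kwargs):
            env = {}
            for cls in fixtures:
                name = cls.__name__
                table = name[:-5]  # curtails _data
                env[table] = get_orm_class(table)
            fixture = SQLAlchemyFixture(env=env, engine=engine, style=TrimmedNameStyle(suffix="_data"))
            return fixture.with_data(*fixtures)(func)()
        return use_fixtures_inner
    return real_use_fixtures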
