I'm creating a solution that scales based on the number of tests in a Python unittest test suite. I need to access the number of tests I've selected to run, whether that's just one test or a whole class of tests. When running in debug mode I can see these test names under _handleClassSetUp (suite.py:163), in the variable self._tests, but I've been unable to interact with it or reference it from my setUpClass(cls) method.
I am able to get the tests from the class by using number_tests = cls.countTestCases(cls) in the setUpClass method, but this value doesn't change if I run only one test; it always counts all the tests from the class I'm running.
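For reference, a minimal sketch of what I have in setUpClass (class and test names are illustrative; shown here with TestLoader rather than countTestCases):

import unittest

class ExampleTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Counts every test method defined on the class -- this number
        # does not shrink when only a single test is selected to run.
        suite = unittest.TestLoader().loadTestsFromTestCase(cls)
        cls.number_tests = suite.countTestCases()

    def test_a(self):
        self.assertGreaterEqual(self.number_tests, 1)

    def test_b(self):
        pass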
Any help would be greatly appreciated.
I need to test the functions Initialize/Shutdown with different parameters. Each of these functions can be executed only once during the app's lifetime. Do I have to create 10 files with only one test function each, or can I define 10 tests in one file and mark each function to be run using a new instance of the Python interpreter?
Is this possible with either PyTest or the built-in unittest package?
I made it work with unittest. I created _runner.py (sources below), which runs all unit tests in the current directory using test discovery (unittest.TestLoader). It loops through all test suites and checks the test case names for the word "IsolatedTest"; those are run in a new Python instance by calling subprocess.check_output("python.."), while the others run normally in the current process. For example, I declare class FooIsolatedTest(unittest.TestCase). In isolated tests, as a replacement for unittest.main(), I use code like: import _runner; _runner.main(os.path.basename(__file__)). You can take a look at the sources here.
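The dispatch logic boils down to something like this (a simplified sketch, not the actual _runner.py; it assumes test ids resolve as module.Class.method from the discovery root):

import subprocess
import sys
import unittest

def flatten(suite):
    # Recursively yield the individual tests inside a TestSuite.
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            for test in flatten(item):
                yield test
        else:
            yield item

if __name__ == '__main__':
    discovered = unittest.TestLoader().discover('.')
    in_process = unittest.TestSuite()
    for test in flatten(discovered):
        if 'IsolatedTest' in test.id():
            # Fresh interpreter per isolated test, so Initialize/Shutdown
            # only ever happen once in each process.
            subprocess.check_output([sys.executable, '-m', 'unittest', test.id()])
        else:
            in_process.addTest(test)
    unittest.TextTestRunner().run(in_process)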
I have a set of unit tests in Python. Some of them open graphical objects using PyQt; others are standard standalone tests. My goal is to automatically run at least the tests that don't need to open a window, because otherwise the run will wait for user input and then fail.
Note that:
I can't remove the graphical tests (constraint from the project)
By default all tests should run, but when passing some parameter only the non-graphical ones will run
My test suite is built using unittest.TestLoader().discover
My best guess would be to pass a global parameter to the TestSuite, so that each test could check its value to know whether it should skip or not. But after reading the unittest documentation I could not find a way to do this.
I'm aware of this question: How To Send Arguments to a UnitTest Without Global Variables, but I would have expected some unittest configuration.
You could use unittest.skipIf(condition, reason) and an environment variable to skip the graphical tests.
Create a decorator like:
import os
import unittest

graphical_test = unittest.skipIf(
    os.environ.get('GRAPHICAL_TESTS', False), 'Non graphical tests only'
)
Then annotate your graphical tests with @graphical_test and set GRAPHICAL_TESTS=1 when you want only the non-graphical tests to run.
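For example, applied to a test class (names and bodies are illustrative; assumes the graphical_test decorator above is in scope):

class WidgetTests(unittest.TestCase):
    @graphical_test
    def test_opens_window(self):
        # Exercises the PyQt GUI; skipped whenever GRAPHICAL_TESTS is set.
        pass

    def test_pure_logic(self):
        # Always runs.
        self.assertEqual(1 + 1, 2)

Running GRAPHICAL_TESTS=1 python -m unittest then reports test_opens_window as skipped.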
Following is the structure of my tests in a file.
Class
    setup
    test01
    test02
    test03
    teardown
I have a requirement to run specific code before and after each test.
For before, I could invoke that code from the setup.
But for after the test, I am not able to figure out how to do it.
Obviously invoking the code from teardown would work for the last test, but how can I have it run for the tests in between?
Assuming that you're properly using a class descended from unittest.TestCase, then the setUp method is run before each test, and the tearDown method is run after each test. Check the documentation. So it's completely feasible to put your code in those two methods.
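For example, with the structure from the question:

import unittest

class MyTests(unittest.TestCase):
    def setUp(self):
        # Runs immediately before test01, test02, and test03.
        print('before each test')

    def tearDown(self):
        # Runs immediately after each test, even one that failed.
        print('after each test')

    def test01(self):
        pass

    def test02(self):
        pass

    def test03(self):
        pass

if __name__ == '__main__':
    unittest.main()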
I've created a GUI for running nosetests in PyQt.
GUI code: http://pastebin.com/uVhkdDZc
My code: http://pastebin.com/3MG8PJn0
My interface reads the files in a folder of unit tests, populates a combo box with those files, and in turn fills another combo box with the tests it finds in the selected test file.
Based on these docs I thought I could run nosetests /path/to/test/file.py:test_function
However, when I try to run a specific test within my unittest.py file, I get ValueError: No such test test_123
An example of the command that my interface generates is:
nosetests C:\path\to\my\unittest.py:test_123
And yet unittest.py contains def test_123():
So where am I going wrong? Do I need to add something to my test? The setup/teardown methods currently just pass.
This should perhaps have been more obvious than I thought, but as always when I follow docs to set things up I overlook the basics.
Because my tests are set up within a class, the class needs to be referenced when calling a single test from within that class.
So where I tried to call a test with
nosetests C:\path\to\my\unittest.py:test_123
I should have run it in relation to its class with
nosetests C:\path\to\my\unittest.py:tests.test_123
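i.e. the file presumably looks something like this (inferring the class name tests from the working command):

import unittest

class tests(unittest.TestCase):
    def setUp(self):
        pass

    def tearDown(self):
        pass

    def test_123(self):
        pass

nose addresses a method inside a class as file.py:ClassName.method_name, which is why the bare test_123 was not found.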
Short Question
Is it possible to select, at run time, which unit tests are going to be run when using an auto-discovery method in Python's unittest module?
Background
I am using the unittest module to run system tests on an external system. See below for an example pseudo-testcase. The unittest module allows me to create an arbitrary number of testcases that I can run using unittest's test runner. I have been using this method for roughly 6 months of constant use and it is working out very well.
At this point I want to try to make this more generic and user friendly. For all of the test suites I am running now, I have hard-coded which tests must run for every system. This is fine for an untested system, but when a test fails incorrectly (a user connects to the wrong test point, etc.) the user must re-run the entire test suite. As some of the complete suites can take up to 20 min, this is nowhere near ideal.
I know it is possible to create custom testsuite builders that could define which tests to run. My issue with this is that there are hundreds of testcases that can be run, and maintaining this would become a nightmare if test case names change, etc.
My hope was to use nose, or the built-in unittest module, to achieve this. The discovery part seems to be pretty straightforward for both options, but my issue is that the only way to select a subset of testcases to run is to define a pattern that exists in the testcase name. This means I would still have to hard-code a list of patterns to define which testcases to run. So if I have to hard-code this list, what is the point of using the auto-discovery (please note this is a rhetorical question)?
My end goal is to have a generic way to select which unit tests to run during execution, in the form of check boxes or a text field the user can edit. Ideally the solution would use Python 2.7 and would need to run on Windows, OSX, and Linux.
Edit
To help clarify, I do not want the tool to generate the list of choices or the check boxes. The tool ideally would return a list of all of the tests in a directory, including the full path and which suite (if any) the testcase belongs to. With this list, I would build the check boxes or a combo box the user interacts with, then pass the selected tests into a testsuite on the fly to run.
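Something like this flattening helper would produce the list I am describing (a sketch; the start directory is illustrative):

import unittest

def list_test_ids(start_dir):
    # Recursively flatten the discovered suite into test ids of the form
    # module.Class.method; these can later be fed back through
    # unittest.TestLoader().loadTestsFromNames() to build the run suite.
    ids = []
    def collect(suite):
        for item in suite:
            if isinstance(item, unittest.TestSuite):
                collect(item)
            else:
                ids.append(item.id())
    collect(unittest.TestLoader().discover(start_dir))
    return ids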
Example Testcase
test_measure_5v_reference
1) Connect to DC power supply via GPIB
2) Set DC voltage to 12.0 V
3) Connect to a Digital Multimeter via GPIB
4) Measure / Record the voltage at a given 5V reference test point
5) Use unittest's assert functions to make sure the value is within tolerance
Store each subset of tests in its own module. Get a list of module names by having the user select them using, as you stated, checkboxes or text entry. Once you have the list of module names, you can build a corresponding test suite doing something similar to the following.
testsuite = unittest.TestSuite()
for module in modules:
    testsuite.addTest(unittest.defaultTestLoader.loadTestsFromModule(module))
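where modules might come from the user's selection, for example (the module names and importlib lookup are assumptions about how the check-box choices map back to imports):

import importlib
import unittest

# Hypothetical module names gathered from the user's check boxes.
module_names = ['tests.test_power_supply', 'tests.test_multimeter']
modules = [importlib.import_module(name) for name in module_names]

testsuite = unittest.TestSuite()
for module in modules:
    testsuite.addTest(unittest.defaultTestLoader.loadTestsFromModule(module))

unittest.TextTestRunner(verbosity=2).run(testsuite)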