Short Question
Is it possible to select, at run time, which unittests are going to be run when using an auto-discovery method in Python's unittest module?
Background
I am using the unittest module to run system tests on an external system. See below for an example pseudo-testcase. The unittest module allows me to create an arbitrary number of testcases that I can run using unittest's testrunner. I have been using this method for roughly 6 months of constant use and it is working out very well.
At this point in time I am wanting to make this more generic and user friendly. For all of the test suites that I am running now, I have hard coded which tests must run for every system. This is fine for an untested system, but when a test fails incorrectly (a user connects to the wrong test point, etc.) they must re-run the entire test suite. As some of the complete suites can take up to 20 minutes, this is nowhere near ideal.
I know it is possible to create custom testsuite builders that could define which tests to run. My issue with this is that there are hundreds of testcases that can be run, and maintaining this would become a nightmare if test case names change, etc.
My hope was to use nose, or the built-in unittest module, to achieve this. The discovery part seems to be pretty straightforward for both options, but my issue is that the only way to select a subset of testcases to run is to define a pattern that exists in the testcase name. This means I would still have to hard code a list of patterns to define which testcases to run. So if I have to hard code this list, what is the point of using the auto-discovery (please note this is a rhetorical question)?
My end goal is to have a generic way to select which unittests to run during execution, in the form of check boxes or a text field the user can edit. Ideally the solution would use Python 2.7 and would need to run on Windows, OSX, and Linux.
Edit
To help clarify, I do not want the tool to generate the list of choices or the check boxes. The tool ideally would return a list of all of the tests in a directory, including the full path and which suite (if any) the testcase belongs to. With this list, I would build the check boxes or a combo box the user interacts with, and pass the selected tests into a testsuite on the fly to run.
Example Testcase
test_measure_5v_reference
1) Connect to DC power supply via GPIB
2) Set DC voltage to 12.0 V
3) Connect to a Digital Multimeter via GPIB
4) Measure / Record the voltage at a given 5V reference test point
5) Use unittest's assert functions to make sure the value is within tolerance
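As a rough sketch of what such a pseudo-testcase looks like in code (the GPIB helper functions and the 0.05 V tolerance below are made-up placeholders, not a real instrument API):

import unittest

class ReferenceVoltageTests(unittest.TestCase):

    def test_measure_5v_reference(self):
        supply = connect_power_supply()    # 1) hypothetical GPIB helper
        supply.set_voltage(12.0)           # 2) set DC voltage to 12.0 V
        dmm = connect_multimeter()         # 3) hypothetical GPIB helper
        measured = dmm.measure_voltage()   # 4) measure the 5 V reference test point
        # 5) assert the reading is within tolerance
        self.assertAlmostEqual(measured, 5.0, delta=0.05)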
Store each subset of tests in its own module. Get a list of module names by having the user select them using, as you stated, checkboxes or text entry. Once you have the list of module names, you can build a corresponding test suite by doing something similar to the following.
import unittest

# 'modules' holds the module objects the user selected
testsuite = unittest.TestSuite()
for module in modules:
    testsuite.addTest(unittest.defaultTestLoader.loadTestsFromModule(module))
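To get the flat list of tests the question asks for (so you can build the check boxes), you can also walk the suite returned by discovery. A minimal sketch, assuming the tests live in an importable 'tests' directory (the directory name and the selection slice are placeholders):

import unittest

def iter_test_ids(suite):
    # recursively flatten a TestSuite into individual dotted test ids
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            for test_id in iter_test_ids(item):
                yield test_id
        else:
            yield item.id()   # e.g. 'tests.test_psu.PsuTests.test_measure_5v_reference'

discovered = unittest.defaultTestLoader.discover('tests')
all_test_ids = list(iter_test_ids(discovered))   # present these as check boxes

# build a suite on the fly from whatever the user ticked (sliced here just for illustration)
selected = unittest.defaultTestLoader.loadTestsFromNames(all_test_ids[:3])
unittest.TextTestRunner(verbosity=2).run(selected)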
Related
import unittest
from teamcity import is_running_under_teamcity
from teamcity.unittestpy import TeamcityTestRunner
import HTMLTestRunner

if __name__ == '__main__':
    if is_running_under_teamcity():
        runner = TeamcityTestRunner()
    else:
        runner = HTMLTestRunner.HTMLTestRunner(
            stream=outfile,  # outfile: report file object opened elsewhere
            title='Test Report',
            description='This is an example.'
        )
    unittest.main(testRunner=runner)
I am currently running some tests using the unittest module in Python; my current code is above. I am deploying this test setup on TeamCity; the first module allows me to convert the output into teamcity-messages and the second creates an HTML report of the results. Is there a way I can run both of these runners while only running one set of tests? The only option I can see at the minute is to either try to combine both these modules into a hybrid, or to use another testing module that TeamCity supports. However, I would like to keep the dependencies as low as possible.
Any ideas would be great :)
Looks like you'll have to hand-roll it. Looking at the code, TeamcityTestRunner is a pretty simple extension of the standard TextTestRunner; HTMLTestRunner, however, is a way more complex beast.
Sadly this is one area of the stdlib which is really badly architected: one could expect the test runner to be concerned solely with discovering and running tests, but it's also tasked with part of the test reporting rather than there being an entirely separate test reporter (and this test reporting is furthermore a responsibility split with the test result, which shouldn't be part of that one's job description either).
Frankly, if you don't have any further customisation, I'd suggest just using pytest as your test runner instead of unittest with a custom runner:
- it should be able to run unittest tests fine
- IME it has better separation of concerns and pluggability, so having multiple reporters / formatters should work out of the box
- pytest-html certainly has no issue generating its reports without affecting the normal text output
- according to the teamcity-messages readme, its reporting gets automatically enabled and used under pytest
- so I'd assume generating HTML reports during your TeamCity builds would work fine (to be tested; see the example invocation below)
- and you can eventually migrate to using pytest tests (which are so much better it's not even funny)
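As a rough illustration only (not verified against your setup), pytest-html is driven by a command-line option, so a TeamCity build step could run something like:

pytest tests/ --html=report.html --self-contained-html

The tests/ path and the report name are placeholders; with teamcity-messages installed, its pytest plugin should emit the TeamCity service messages for the same run without extra configuration.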
I am trying to reproduce some functionality from Python in C++. It is an involved numerical method with a bunch of subfunctions.
Is there a nice way to compare the values of the Python functions with the C++ functions that I am writing and that should mirror them?
Could someone paste some code or give a mini tutorial?
Or some pointers or references?
Many thanks!
Artabalt
Testing.
Write tests for the functionality in Python (unless they already exist; Python code is usually tested).
Port the tests to C++. Then make your C++ functions pass the tests. Ideally, make the tests a target in your makefile and run them whenever you can.
Edit:
You can test randomly; whether or not that's a good idea might depend on your particular case.
The rule of thumb is to use a couple of each border case (you can use code coverage to see if there's a case that you're missing).
As an example, at my work I use an integration test that requires a lot of components. We simulate a betting process (the whole process excluding the user interface) at a given point in time, with a hardcoded server response, and we simulate the hardware printer by redirecting its output to a bmp.
Now, you can't test everything. I can't test every valid combination of numbers (6 numbers from 0 to 36?), in every possible order. Neither can I test every invalid combination (infinite?). You test a couple. The meaningful ones.
So we always give it the same numbers. The test is that it always has to produce the same bmp (it has to render fonts the same way, have the same texts, the same betting date, etc.).
Now, the level of automation you can get when generating these tests depends on your code and your application.
I don't know your particular case, but I would start by creating a little program (the simpler, the better) that uses your library: one implementation in Python, one implementation in C++. If you were testing a string library, for example, you would write a small program that deletes a letter inside a string, appends some text, erases every other letter, etc.
Then you automate it for a couple of test cases:
cat largetext.txt | ./my_python_program > output.python_program.txt
cat largetext.txt | ./my_cpp_program > output.cpp_program.txt
diff output.python_program.txt output.cpp_program.txt || echo "ERROR"
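If the outputs are numeric rather than text, the same idea can be driven from Python, comparing the reference implementation against the C++ executable within a tolerance. A sketch under the assumption that ./my_cpp_program is a hypothetical port reading one number per line from stdin and writing one result per line to stdout (this subprocess usage needs Python 3.7+):

import math
import subprocess

def my_python_function(x):
    # hypothetical reference implementation being ported to C++
    return math.sqrt(x) + 1.0

inputs = [0.0, 1.0, 2.5, 100.0]
proc = subprocess.run(
    ['./my_cpp_program'],                         # hypothetical C++ port
    input='\n'.join(str(x) for x in inputs),
    capture_output=True, text=True, check=True,
)
cpp_values = [float(line) for line in proc.stdout.split()]

for x, cpp_value in zip(inputs, cpp_values):
    py_value = my_python_function(x)
    # numerical code rarely matches bit-for-bit, so compare within a tolerance
    assert abs(py_value - cpp_value) < 1e-9, (x, py_value, cpp_value)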
The ideal case is that the tests get run every time you compile (this can be done if the code is simple; if not, you can run the tests at the end of the day, for example).
This doesn't guarantee the programs are the same (you can't test every possible input), but it gives you some confidence. If you see something is not right, you first add it to your tests, then make it fail, then fix it until it passes.
Regards.
I have a set of unit tests in Python. Some of them open graphical objects using PyQt, some are just standard standalone tests. My goal is to automatically run at least the tests that don't need to open a window, because otherwise the run will wait for user input and then fail.
Note that:
- I can't remove the graphical tests (constraint from the project)
- By default all tests should run, but when passing some parameter only the non-graphical ones will run
- My test suite is built using unittest.TestLoader().discover
My best guess would be to pass a global parameter to the TestSuite, so that each test could check the value to know whether it should skip or not. But after reading the unittest documentation I could not find a way to do this.
I'm aware of this question: How To Send Arguments to a UnitTest Without Global Variables, but I would have expected some unittest configuration.
You could use unittest.skipIf(condition, reason) and an environment variable to skip the graphical tests.
Create a decorator like:
import os
import unittest

graphical_test = unittest.skipIf(
    os.environ.get('GRAPHICAL_TESTS', False), 'Non graphical tests only'
)
Then annotate your graphical tests with @graphical_test and run your tests after setting GRAPHICAL_TESTS=1.
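For example (the test class below is made up for illustration and reuses the graphical_test decorator defined above), setting the variable skips the graphical tests and runs the rest:

class WidgetTests(unittest.TestCase):

    @graphical_test
    def test_opens_main_window(self):
        # would open a PyQt window; skipped when GRAPHICAL_TESTS is set
        pass

    def test_pure_logic(self):
        self.assertEqual(1 + 1, 2)   # always runs

and on the command line:

GRAPHICAL_TESTS=1 python -m unittest discover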
I have tests which have a huge variance in their runtime. Most take much less than a second, some maybe a few seconds, and some of them could take up to minutes.
Can I somehow specify that in my Nosetests?
In the end, I want to be able to run only a subset of my tests which take e.g. less than 1 second (via my specified expected runtime estimate).
Have a look at this write-up about the attribute plugin for nose tests, where you can manually tag tests as @attr('slow') and @attr('fast'). You can run nosetests -a '!slow' afterwards to run your tests quickly.
It would be great if you could do this automatically, but I'm afraid you would have to write additional code to do it on the fly. If you are into rapid development, I would run nose with xunit XML output enabled (which tracks the runtime of each test). Your test module can then dynamically read in the XML output file from previous runs and set attributes on tests accordingly to filter for the quick tests. This way you do not have to do it manually, albeit with more work (and you have to run all tests at least once).
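A rough sketch of that "read the previous run" idea, assuming nose was run with --with-xunit so a nosetests.xml file exists (the file name and the 1-second threshold are just placeholders):

import xml.etree.ElementTree as ET

def slow_test_names(xml_path='nosetests.xml', threshold=1.0):
    # collect 'ClassName.test_name' ids for tests slower than the threshold
    slow = set()
    for case in ET.parse(xml_path).getroot().iter('testcase'):
        if float(case.get('time', 0)) > threshold:
            slow.add('%s.%s' % (case.get('classname'), case.get('name')))
    return slow

print(slow_test_names())

The returned names could then be used to tag the slow tests with attr('slow') dynamically, or simply to build an exclusion list for the next run.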
I have to test a piece of hardware using its provided Python API. The hardware has two interfaces, one of which has to be programmed using its API and then checked, via the other working interface, to see whether values are read/written correctly.
Is there a Python library I can use?
It's something like this:
Test1
- write using the Interface under Test
- check if it was written correctly via the working interface
- program the hardware using the working interface, then
Test2
- write using the Interface under Test and check
Also try out various ranges of values for writing within the test, at various speeds set through the API,
and so on...
A log or results file should be created at the end of this series of tests, detailing all of these tests, whether they passed or failed, and some other results from the tests.
Try the unittest module from the standard library (formerly known as PyUnit).
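As a loose illustration of how one of those tests could be phrased with unittest (the interface helpers, register addresses and values below are invented placeholders, since the real hardware API isn't shown):

import unittest

class InterfaceUnderTestCase(unittest.TestCase):

    def setUp(self):
        # hypothetical handles to the two interfaces
        self.iut = open_interface_under_test()
        self.working = open_working_interface()

    def test_write_is_visible_on_working_interface(self):
        for value in (0x00, 0x7F, 0xFF):   # a small range of values
            self.iut.write_register(0x10, value)
            self.assertEqual(self.working.read_register(0x10), value)

if __name__ == '__main__':
    # plain text log of passes/failures; HTMLTestRunner or unittest-xml-reporting
    # can produce richer results files if needed
    with open('results.log', 'w') as log:
        unittest.main(testRunner=unittest.TextTestRunner(stream=log, verbosity=2))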
I'd recommend py.test. It features auto discovery of tests, is non-invasive and you can easily log test results to a file (though that should be possible with every test framework).
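For instance, a machine-readable results file can be written with py.test's built-in JUnit XML option (the file name is arbitrary):

py.test --junitxml=results.xml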
Just to be complete, another of these auto-discovery test suites is nose (http://code.google.com/p/python-nose/). I normally just use straight-up unittest (http://docs.python.org/library/unittest.html), but I am in a possibly more formal environment.
If you want a simple test library for auto-logging that also gives you the ability to try out speed, retries, and other test-related settings, you could try the test_steps package, which can be used independently or together with the py.test / nose platforms.