I have to test a piece of hardware using its provided Python API.
The hardware has two interfaces: one of them has to be programmed through its API, and the other (working) interface is used to check whether values are read/written correctly.
Is there a Python library I can use?
It's something like this:
Test 1
write using the interface under test
check whether it was written correctly using the working interface
program the hardware using the working interface, then
Test 2
write using the interface under test and check
Also try out a range of values for writing within the tests, at various speeds set through the API, and so on...
At the end of this series of tests, a log or results file should be created that details all these tests, whether they passed or failed, and some other results from the tests.
Try the unittest module from the standard library (formerly known as PyUnit).
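For example, a minimal sketch of what one such test could look like with unittest. The hardware_api module, its open_interface function, and the register address are placeholders for whatever the vendor API actually provides:

import unittest

import hardware_api  # placeholder for the vendor-provided API

class WriteReadBackTest(unittest.TestCase):
    def setUp(self):
        # Placeholder calls: open each interface however the real API does it.
        self.iface_under_test = hardware_api.open_interface(1)
        self.working_iface = hardware_api.open_interface(2)

    def test_write_is_visible_on_working_interface(self):
        for value in (0x00, 0x55, 0xAA, 0xFF):  # sample range of values
            self.iface_under_test.write(0x10, value)  # 0x10 is a placeholder address
            self.assertEqual(self.working_iface.read(0x10), value)

if __name__ == '__main__':
    unittest.main()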
I'd recommend py.test. It features auto-discovery of tests, is non-invasive, and you can easily log test results to a file (though that should be possible with every test framework).
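For example, pytest can write a JUnit-style XML results file alongside the normal console output; a quick sketch, where the test directory and the output path are placeholders:

import pytest

# Writes per-test pass/fail details to results.xml in addition to the console output.
pytest.main(['tests/', '--junitxml=results.xml'])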
Just to be complete, another of these auto-discovery test frameworks is nose (http://code.google.com/p/python-nose/). I normally just use straight-up unittest (http://docs.python.org/library/unittest.html), but I am in a possibly more formal environment.
If you want a simple test library that handles auto-logging and lets you vary things like speed and retries within a test, you could try the test_steps package, which can be used on its own or together with py.test / nose.
import unittest

import HTMLTestRunner
from teamcity import is_running_under_teamcity
from teamcity.unittestpy import TeamcityTestRunner

if __name__ == '__main__':
    if is_running_under_teamcity():
        # Emit TeamCity service messages when running on the CI server.
        runner = TeamcityTestRunner()
    else:
        # Otherwise write an HTML report to a file.
        outfile = open('report.html', 'w')
        runner = HTMLTestRunner.HTMLTestRunner(
            stream=outfile,
            title='Test Report',
            description='This is an example.'
        )
    unittest.main(testRunner=runner)
I am currently running some tests using the unittest module in Python; my current code is above. I am deploying this test setup on TeamCity: the first module allows me to convert the output into TeamCity messages, and the second creates an HTML report of the results. Is there a way I can run both of these runners while only running one set of tests? The only option I can see at the minute is either to try to combine both of these modules into a hybrid, or to use another testing module that TeamCity supports. However, I would like to keep the dependencies as low as possible.
Any ideas would be great :)
Looks like you'll have to hand-roll it. Looking at the code, TeamcityTestRunner is a pretty simple extension of the standard TextTestRunner, however HTMLTestRunner is a way more complex beast.
Sadly this is one area of the stdlib which is really badly architected: one could expect the test runner to be concerned solely with discovering and running tests, however it's also tasked with part of the test reporting rather than leaving that to an entirely separate test reporter (and this reporting responsibility is furthermore split with the test result, which shouldn't be part of that one's job description either).
Frankly if you don't have any further customisation I'd suggest just using pytest as your test runner instead of unittest with a custom runner:
it should be able to run unittest tests just fine
in my experience it has better separation of concerns and pluggability, so having multiple reporters / formatters should work out of the box
pytest-html certainly has no issue generating its reports without affecting the normal text output (see the sketch after this list)
according to its readme, teamcity-messages gets automatically enabled and used under pytest
so I'd assume generating HTML reports during your TeamCity builds would work fine (to be tested)
and you can eventually migrate to using pytest tests (which are so much better it's not even funny)
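A minimal sketch of what that could look like, assuming the pytest-html and teamcity-messages plugins are installed; the test directory and report path are placeholders:

import pytest

if __name__ == '__main__':
    # teamcity-messages activates automatically under TeamCity, so the TeamCity
    # service messages and the HTML report are produced in the same run.
    exit_code = pytest.main(['tests/', '--html=report.html', '--self-contained-html'])
    raise SystemExit(exit_code)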
I'm trying to make a testing system for continuous integration and I want to be able to have checks that do specific operations, like formatting files in a specific way, and those checks need to have a flow to them.
Example:
FormatFilesCheck(files):
    Test(see if formatter is installed)
    for prep in prework:
        Test(do prep)
    for file in files:
        Test(try formatting file)
    Test(clean up work)
I was wondering if there is a Python library out there that handles this kind of test control. Obviously there is unittest, but from what I can tell it's meant for linear testing, i.e. not for tests with dependencies.
If unittest has this capability, an example would be amazing.
Edit: What I'm really looking for is a framework with unittest-like capabilities for error handling and assertions which allows me to write checks and have them plug into it.
Thanks ahead of time
If I'm understanding what you're asking, you're looking for something along the lines of Chai JS. Fortunately, someone has written a Python library that approximates it, also called Chai. Your testcase would look something like this:
FormatFilesCheck(files):
    expect(formatter.status).equals(installed)
    for prep in prework:
        do prep
        expect(prep).equals(what_prep_should_look_like)
    for file in files:
        format file
        expect(file).equals(what_file_should_look_like)
    clean up work
    # your expect/assert statements to make sure everything was cleaned up
Earlier I was using Python's unittest in my project, and with it came unittest.TextTestRunner and unittest.defaultTestLoader.loadTestsFromTestCase. I used them for the following reasons:
Control the execution of the tests using a wrapper function which calls the test suite's run method; I did not want the command-line approach.
Read the test output from the result object and upload the results to a bug tracking system, which allows us to generate some complex reports on code stability.
Recently a decision was made to switch to py.test. How can I do the above using py.test? I don't want to parse any CLI/HTML output to get the results from py.test. I also don't want to write too much code in my unit test files to do this.
Can someone help me with this ?
You can use one of pytest's hooks to intercept the test result reporting:
conftest.py:
import pytest
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_logreport(report):
    yield
    # Define when you want to report:
    # when=setup/call/teardown,
    # fields: .failed/.passed/.skipped
    if report.when == 'call' and report.failed:
        # Add to the database or an issue tracker or wherever you want.
        print(report.longreprtext)
        print(report.sections)
        print(report.capstdout)
        print(report.capstderr)
Similarly, you can intercept one of these hooks to inject your code at the needed stage (in some cases, with the try-except around yield):
pytest_runtest_protocol(item, nextitem)
pytest_runtest_setup(item)
pytest_runtest_call(item)
pytest_runtest_teardown(item, nextitem)
pytest_runtest_makereport(item, call)
pytest_runtest_logreport(report)
Read more: Writing pytest plugins
All of this can be easily done either with a tiny plugin made as a simple installable library, or as a pseudo-plugin conftest.py which just lies around in one of the directories with the tests.
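For example, a hookwrapper around pytest_runtest_makereport picks up the TestReport for each phase as soon as it is built; a minimal sketch, where upload_to_tracker is a placeholder for your own reporting code:

import pytest

def upload_to_tracker(nodeid, outcome, longrepr):
    # Placeholder: replace with your bug-tracker / database upload code.
    print(nodeid, outcome, longrepr)

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield                  # let pytest build the TestReport first
    report = outcome.get_result()
    if report.when == 'call':
        upload_to_tracker(item.nodeid, report.outcome, report.longreprtext)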
It looks like pytest lets you launch tests from Python code instead of using the command line; you just pass the same arguments to the function call that you would use on the command line.
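A quick sketch of that, where the test path is a placeholder and the small plugin object is just one way to collect results programmatically:

import pytest

class ResultCollector:
    # A tiny pytest plugin object: pytest calls hook methods defined on it.
    def __init__(self):
        self.reports = []

    def pytest_runtest_logreport(self, report):
        if report.when == 'call':
            self.reports.append((report.nodeid, report.outcome))

collector = ResultCollector()
# Same arguments as on the command line; 'tests/' is a placeholder path.
exit_code = pytest.main(['-q', 'tests/'], plugins=[collector])
print(collector.reports)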
Pytest will create resultlog format files, but the feature is deprecated. The documentation suggests using the pytest-tap plugin that produces files in the Test Anything Protocol.
Short Question
Is it possible to select, at run time, which unit tests are going to be run when using an auto-discovery method in Python's unittest module?
Background
I am using the unittest module to run system tests on an external system. See below for an example pseudo-testcase. The unittest module allows me to create an arbitrary number of testcases that I can run using unittest's test runner. I have been using this method for roughly 6 months of constant use and it is working out very well.
At this point in time I am wanting to try to make this more generic and user friendly. For all of the test suites that I am running now, I have hard-coded which tests must run for every system. This is fine for an untested system, but when a test fails incorrectly (a user connects to the wrong test point, etc...) they must re-run the entire test suite. As some of the complete suites can take up to 20 min, this is nowhere near ideal.
I know it is possible to create custom testsuite builders that could define which tests to run. My issue with this is that there are hundreds of testcases that can be run, and maintaining this would become a nightmare if test case names change, etc...
My hope was to use nose, or the built-in unittest module, to achieve this. The discovery part seems to be pretty straightforward for both options, but my issue is that the only way to select a subset of testcases to run is to define a pattern that exists in the testcase name. This means I would still have to hard-code a list of patterns to define which testcases to run. So if I have to hard-code this list, what is the point of using the auto-discovery (please note this is a rhetorical question)?
My end goal is to have a generic way to select which unit tests to run during execution, in the form of check boxes or a text field the user can edit. Ideally the solution would use Python 2.7 and would need to run on Windows, OSX, and Linux.
Edit
To help clarify, I do not want the tool to generate the list of choices or the check boxes. The tool ideally would return a list of all of the tests in a directory, including the full path and which suite (if any) the testcase belongs to. With this list, I would build the check boxes or a combo box the user interacts with, and pass the selected tests into a testsuite on the fly to run.
Example Testcase
test_measure_5v_reference
1) Connect to DC power supply via GPIB
2) Set DC voltage to 12.0 V
3) Connect to a Digital Multimeter via GPIB
4) Measure / Record the voltage at a given 5V reference test point
5) Use unittest's assert functions to make sure the value is within tolerance
Store each subset of tests in its own module. Get a list of module names by having the user select them using, as you stated, checkboxes or text entry. Once you have the list of module names, you can build a corresponding test suite doing something similar to the following.
import unittest

testsuite = unittest.TestSuite()
for module in modules:
    testsuite.addTest(unittest.defaultTestLoader.loadTestsFromModule(module))
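A sketch of how the selected names could be resolved and the suite run; importlib is just one way to do it, and the module names here are placeholders:

import importlib
import unittest

# e.g. gathered from checkboxes or a text field; the names are placeholders
selected = ['tests.test_power_supply', 'tests.test_multimeter']
modules = [importlib.import_module(name) for name in selected]

testsuite = unittest.TestSuite()
for module in modules:
    testsuite.addTest(unittest.defaultTestLoader.loadTestsFromModule(module))

unittest.TextTestRunner(verbosity=2).run(testsuite)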
I have a Python command line script that connects to a database, generates files from the data and sends e-mails. I already have some unit-tests for the important components. Now I'd like to do tests that use several or all components together, load the test database with sample data and check for the correct output.
Are there Python libraries that support this kind of testing? Specifically, I'm looking for easy ways to
put sample data in the database
check for specific changes in the database
check for existence and content of specific files
Should I even do these tests in Python or should I just write a bunch of shell scripts?
I use nosetests
http://somethingaboutorange.com/mrl/projects/nose/1.0.0/
And it fits very well with another component, "coverage", which provides you with information about how much of your code is exercised by the tests.
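A minimal sketch of driving nose with its coverage plugin from Python, assuming nose and coverage are installed; the package name is a placeholder:

import nose

# Same flags as the nosetests command line; 'myproject' is a placeholder
# for the package whose coverage you want to measure.
nose.run(argv=['nosetests', '--with-coverage', '--cover-package=myproject'])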
Yes. The unittest library can be used for this.
Don't be fooled by the name. A "unit" is not always a single, standalone class (or function).
A "unit" can be a composite "unit" of one or more classes or modules or packages.