I have a set of unit tests in Python. Some of them open graphical objects using PyQt, some are just standard standalone tests. My goal is to automatically run at least the tests that don't need to open a window, because otherwise the suite will wait for user input and then fail.
Note that:
I can't remove the graphical tests (constraint from the project)
By default all tests should run, but when passing some parameter only the non-graphical ones should run
My test suite is built using unittest.TestLoader().discover
My best guess would be to pass a global parameter to the TestSuite, so that each test could check the value to know whether it should skip or not. But after reading the unittest documentation I could not find a way to do this.
I'm aware of this question: How To Send Arguments to a UnitTest Without Global Variables, but I would have expected some unittest configuration.
You could use unittest.skipIf(condition, reason) and an environment variable to skip the graphical tests.
Create a decorator like:
import os
import unittest

graphical_test = unittest.skipIf(
    os.environ.get('GRAPHICAL_TESTS', False), 'Non graphical tests only'
)
Then decorate your graphical tests with @graphical_test and run your tests after setting GRAPHICAL_TESTS=1.
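For example, continuing from the decorator above (the class and test names here are placeholders):

@graphical_test
class MainWindowTest(unittest.TestCase):
    # Skipped when GRAPHICAL_TESTS=1 is set, i.e. in "non graphical only" runs.
    def test_open_window(self):
        ...  # code that would open a PyQt window goes here

class PlainTest(unittest.TestCase):
    # Always runs, with or without GRAPHICAL_TESTS set.
    def test_addition(self):
        self.assertEqual(1 + 1, 2)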
I've built a series of tests (using pytest) for a codebase interacting with the Github API (both making calls to it, and receiving webhooks).
Currently, these tests run against a semi-realistic mock of github: calls to github are intercepted through Sentry's Responses and run through a fake github/git implementation (of the bits I need), which can also be interacted with directly from the test cases. Any webhook which needs to be triggered uses Werkzeug's test client to call back into the WSGI application used as the webhook endpoint.
This works nicely, fast (enough) and is an excellent default, but I'd like the option to run these same tests against github itself, and that's where I'm stumped: I need to switch out the current implementations of the systems under test (direct "library" access to the codebase & mock github) with different ones (API access to "externally" run codebase & actual github), and I'm not quite sure how to proceed.
I attempted to use pytest_generate_tests to switch out the fixture implementations (of the codebase & github) via a plugin but I don't quite know if that would even work, and so far my attempts to load a local file as plugin in pytest via pytest -p <package_name> have not been met with much success.
I'd like to know if I'm heading in the right direction, and in that case if anyone can help with using "local" plugins (not installed via setuptools and not conftest.py-based) in pytest.
Not sure if that has any relevance, but I'm using pytest 3.6.0 running on CPython 3.6, requests 2.18.4, responses 0.9.0 and Werkzeug 0.14.1.
There are several ways to approach this. The one I would go for is: by default run your mocked tests, and when a command line flag is present test against both the mock and the real version.
So first the easier part, adding a command line option:
def pytest_addoption(parser):
    parser.addoption('--github', action='store_true',
                     help='Also test against real github')
Now this is available via the pytestconfig fixture as pytestconfig.getoption('github'), often also indirectly available, e.g. via the request fixture as request.config.getoption('github').
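For example, a small fixture (the fixture name here is hypothetical) could expose the flag to tests:

import pytest

@pytest.fixture
def use_real_github(request):
    # True when the run was started with "pytest --github".
    return request.config.getoption('github')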
Now you need to use this to parametrize any test which needs to interact with the github API, so that it gets run both with the mock and with the real instance. Without knowing your code, it sounds like a good point would be the Werkzeug client: make this into a fixture, and it can then be parametrized to return either the real client or the test client you mention:
@pytest.fixture
def werkzeug_client(werkzeug_client_type):
    if werkzeug_client_type == 'real':
        return create_the_real_client()
    else:
        return create_the_mock_client()
def pytest_generate_tests(metafunc):
    if 'werkzeug_client_type' in metafunc.fixturenames:
        types = ['mock']
        if metafunc.config.getoption('github'):
            types.append('real')
        metafunc.parametrize('werkzeug_client_type', types)
Now if you write your test as:
def test_foo(werkzeug_client):
    assert werkzeug_client.whatever()
You will get one test normally and two tests when invoked with pytest --github.
(Be aware that hooks must be in conftest.py files, while fixtures can be anywhere. Be extra aware that the pytest_addoption hook should really only be used in the top-level conftest file, to avoid confusion about when the hook is used by pytest and when it is not. So you should really put all of this code in a top-level conftest.py file.)
I'm creating a solution that scales based on the number of tests in a Python unittest test suite. I need to access the number of tests I've selected to run, whether that's just one or a whole class of tests. I can see these test names when I run in debug mode under _handleClassSetUp (suite.py:163), in the variable named self._tests, but I've been unable to interact with it or select it when trying to reference it from my setUpClass(cls) method.
I am able to get the tests from the class by using number_tests = cls.countTestCases(cls) in the setUpClass method, but this doesn't change if I run only one test, as it selects all of the tests from the class I'm running.
Any help would be greatly appreciated.
Earlier I was using Python's unittest in my project, and with it came unittest.TextTestRunner and unittest.defaultTestLoader.loadTestsFromTestCase. I used them for the following reasons:
Control the execution of unittest using a wrapper function which calls the unittest's run method. I did not want the command line approach.
Read the unittest output from the result object and upload the results to a bug tracking system which allows us to generate some complex reports on code stability.
Recently there was a decision made to switch to py.test; how can I do the above using py.test? I don't want to parse any CLI/HTML to get the output from py.test. I also don't want to write too much code in my unit test file to do this.
Can someone help me with this ?
You can use pytest's hooks to intercept the test result reporting:
conftest.py:
import pytest
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_logreport(report):
    yield
    # Define when you want to report:
    # when=setup/call/teardown,
    # fields: .failed/.passed/.skipped
    if report.when == 'call' and report.failed:
        # Add to the database or an issue tracker or wherever you want.
        print(report.longreprtext)
        print(report.sections)
        print(report.capstdout)
        print(report.capstderr)
Similarly, you can intercept one of these hooks to inject your code at the needed stage (in some cases, with a try-except around the yield; see the sketch after the list below):
pytest_runtest_protocol(item, nextitem)
pytest_runtest_setup(item)
pytest_runtest_call(item)
pytest_runtest_teardown(item, nextitem)
pytest_runtest_makereport(item, call)
pytest_runtest_logreport(report)
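As a rough illustration of the try-except-around-yield idea with the hookwrapper style used above (the print is just a placeholder for whatever reporting you need; inspecting the outcome does not change the test result):

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    outcome = yield              # the test body runs during this yield
    try:
        outcome.get_result()     # re-raises the test's exception, if any
    except Exception as exc:
        # Log or upload the failure here; the test outcome itself is left intact.
        print(f'{item.nodeid} raised {exc!r}')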
Read more: Writing pytest plugins
All of this can be easily done either with a tiny plugin made as a simple installable library, or as a pseudo-plugin conftest.py which just lies around in one of the directories with the tests.
It looks like pytest lets you launch tests from Python code instead of using the command line: you pass the same arguments to the function call that you would put on the command line.
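For instance (a minimal sketch; 'tests/' is a placeholder path):

import pytest

# Equivalent to running "pytest -x tests/" on the command line.
# The return value is pytest's exit code (0 means everything passed).
exit_code = pytest.main(['-x', 'tests/'])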
Pytest will create resultlog format files, but the feature is deprecated. The documentation suggests using the pytest-tap plugin that produces files in the Test Anything Protocol.
I need to test functions Initialize/Shutdown with different parameters. Each of these functions can be executed only once during app lifetime. Do I have to create 10 files with only one test function each or can I define 10 tests in one file and mark each function to be run using new instance of python interpreter?
Is this possible with either PyTest or the built-in unittest package?
I made it work with unittest. I created _runner.py (sources below) which runs all unit tests in the current directory using test discovery (unittest.TestLoader). It loops through all test suites and checks the test case names for the word "IsolatedTest". Those are run in a new Python instance by calling subprocess.check_output("python.."); the others are run normally in the current process. For example, I declare class FooIsolatedTest(unittest.TestCase). In the isolated tests, as a replacement for unittest.main(), I use: import _runner; _runner.main(os.path.basename(__file__)). You can take a look at the sources here.
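A rough sketch of the partitioning idea (this is not the author's actual _runner.py; it assumes an isolated test file runs its own tests when executed directly, and only the "IsolatedTest" naming convention is taken from the description above):

import subprocess
import sys
import unittest

def iter_tests(suite):
    # Recursively flatten nested TestSuites into individual test cases.
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            yield from iter_tests(item)
        else:
            yield item

suite = unittest.TestLoader().discover('.')
in_process = unittest.TestSuite()
isolated_files = set()
for test in iter_tests(suite):
    if 'IsolatedTest' in type(test).__name__:
        # Remember the file so it can be run in its own interpreter.
        isolated_files.add(sys.modules[type(test).__module__].__file__)
    else:
        in_process.addTest(test)

unittest.TextTestRunner().run(in_process)                    # normal tests, current process
for path in sorted(isolated_files):
    print(subprocess.check_output([sys.executable, path]))  # one fresh interpreter each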
Short Question
Is it possible to select, at run time, which unittests are going to be run when using an auto-discovery method in Python's unittest module?
Background
I am using the unittest module to run system tests on an external system. See below for an example pseudo-testcase. The unittest module allows me to create an arbitrary number of testcases that I can run using unittest's test runner. I have been using this method for roughly 6 months of constant use and it is working out very well.
At this point in time I want to try to make this more generic and user friendly. For all of the test suites that I am running now, I have hard coded which tests must run for every system. This is fine for an untested system, but when a test fails incorrectly (a user connects to the wrong test point, etc.) they must re-run the entire test suite. As some of the complete suites can take up to 20 min, this is nowhere near ideal.
I know it is possible to create custom test suite builders that could define which tests to run. My issue with this is that there are hundreds of testcases that can be run, and maintaining this would become a nightmare if test case names change, etc.
My hope was to use nose, or the built-in unittest module, to achieve this. The discovery part seems to be pretty straightforward for both options, but my issue is that the only way to select a subset of testcases to run is to define a pattern that exists in the testcase name. This means I would still have to hard code a list of patterns to define which testcases to run. So if I have to hard code this list, what is the point of using the auto-discovery (please note this is a rhetorical question)?
My end goal is to have a generic way to select which unittests to run during execution, in the form of check boxes or a text field the user can edit. Ideally the solution would use Python 2.7 and would need to run on Windows, OSX, and Linux.
Edit
To help clarify, I do not want the tool to generate the list of choices or the check boxes. The tool ideally would return a list of all of the tests in a directory, including the full path and which suite (if any) the testcase belongs to. With this list, I would build the check boxes or a combo box the user interacts with, and pass the selected tests into a testsuite on the fly to run.
Example Testcase
test_measure_5v_reference
1) Connect to DC power supply via GPIB
2) Set DC voltage to 12.0 V
3) Connect to a Digital Multimeter via GPIB
4) Measure / Record the voltage at a given 5V reference test point
5) Use unittest's assert functions to make sure the value is within tolerance
Store each subset of tests in its own module. Get a list of module names by having the user select them using, as you stated, checkboxes or text entry. Once you have the list of module names, you can build a corresponding test suite by doing something similar to the following:
testsuite = unittest.TestSuite()
for module in modules:
    testsuite.addTest(unittest.defaultTestLoader.loadTestsFromModule(module))
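A slightly fuller sketch of the same idea, assuming the user's selection arrives as dotted module names (the names below are hypothetical), resolved to module objects via importlib before loading:

import importlib
import unittest

# Hypothetical module names coming back from the check boxes / text field.
selected = ['tests.test_power_supply', 'tests.test_multimeter']

testsuite = unittest.TestSuite()
for name in selected:
    module = importlib.import_module(name)
    testsuite.addTest(unittest.defaultTestLoader.loadTestsFromModule(module))

unittest.TextTestRunner(verbosity=2).run(testsuite)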