Running pytest tests against multiple backends?

I've built a series of tests (using pytest) for a codebase interacting with the Github API (both making calls to it, and receiving webhooks).
Currently, these tests run against a semi-realistic mock of github: calls to github are intercepted through Sentry's Responses and run through a fake github/git implementation (of the bits I need), which can also be interacted with directly from the tests cases. Any webhook which needs to be triggered uses Werkzeug's test client to call back into the WSGI application used as webhook endpoint.
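For concreteness, the interception works roughly like this (a minimal sketch; the URL and payload are illustrative, not from my codebase):

import requests
import responses

@responses.activate
def test_call_is_intercepted():
    # the fake github registers canned responses for the endpoints it implements
    responses.add(
        responses.GET,
        'https://api.github.com/repos/me/project',
        json={'full_name': 'me/project'},
        status=200,
    )
    r = requests.get('https://api.github.com/repos/me/project')
    assert r.json()['full_name'] == 'me/project'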
This works nicely, fast (enough) and is an excellent default, but I'd like the option to run these same tests against github itself, and that's where I'm stumped: I need to switch out the current implementations of the systems under test (direct "library" access to the codebase & mock github) with different ones (API access to "externally" run codebase & actual github), and I'm not quite sure how to proceed.
I attempted to use pytest_generate_tests to switch out the fixture implementations (of the codebase & github) via a plugin but I don't quite know if that would even work, and so far my attempts to load a local file as plugin in pytest via pytest -p <package_name> have not been met with much success.
I'd like to know if I'm heading in the right direction, and in that case if anyone can help with using "local" plugins (not installed via setuptools and not conftest.py-based) in pytest.
Not sure if that has any relevance, but I'm using pytest 3.6.0 running on CPython 3.6, requests 2.18.4, responses 0.9.0 and Werkzeug 0.14.1.

There are several ways to approach this. The one I would go for: by default, run your mocked tests; when a command-line flag is present, test against both the mock and the real version.
So first the easier part, adding a command line option:
def pytest_addoption(parser):
    parser.addoption('--github', action='store_true',
                     help='Also test against real github')
Now this is available via the pytestconfig fixture as pytestconfig.getoption('github'), and often also indirectly, e.g. via the request fixture as request.config.getoption('github').
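For instance, a fixture can read it through the request fixture (a tiny illustration; the fixture name is mine):

import pytest

@pytest.fixture
def use_real_github(request):
    # True when the suite was invoked with --github
    return request.config.getoption('github')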
Now you need to use this to parametrize any test which needs to interact with the GitHub API, so that each such test runs both against the mock and against the real instance. Without knowing your code, the Werkzeug client sounds like a good seam: make it a fixture, which can then be parametrized to return either a real client or the test client you mention:
import pytest

@pytest.fixture
def werkzeug_client(werkzeug_client_type):
    if werkzeug_client_type == 'real':
        return create_the_real_client()
    else:
        return create_the_mock_client()

def pytest_generate_tests(metafunc):
    if 'werkzeug_client_type' in metafunc.fixturenames:
        types = ['mock']
        if metafunc.config.getoption('github'):
            types.append('real')
        metafunc.parametrize('werkzeug_client_type', types)
Now if you write your test as:
def test_foo(werkzeug_client):
    assert werkzeug_client.whatever()
You will get one test normally and two tests when invoked with pytest --github.
(Be aware that hooks must live in conftest.py files, while fixtures can be anywhere. Be extra aware that the pytest_addoption hook should really only be used in the top-level conftest file, to avoid confusion about when pytest picks the hook up and when it does not. So you should really put all this code in a top-level conftest.py file.)

Related

Prevent any file system usage in Python's pytest

I have a program that, for data-security reasons, should never persist anything to local storage when deployed in the cloud. Instead, any input/output needs to go to the connected (encrypted) storage.
To allow deployment locally as well as to multiple clouds, I am using the very useful fsspec. However, other developers are working on the project as well, and I need a way to make sure that they aren't accidentally using local File I/O methods - which may pass unit tests, but fail when deployed to the cloud.
For this, my idea is basically to mock/replace any I/O methods in pytest with ones that don't work, making the test fail. However, this is probably not straightforward to implement. I am wondering whether anyone else has had this problem, and whether best practices or even a library already exist for this?
During my research, I found pyfakefs, which looks very close to what I am trying to do - except I don't want to simulate another file system, I want there to be no local file system at all.
Any input appreciated.
You cannot use any pytest add-ons to make this secure; there will always be ways to get around it. Even if you patch everything in the standard library, the code can still use third-party C libraries which can't be patched from the Python side.
And even if you somehow restricted every way the Python process can write a file, it would still be able to call the OS or another process to write something.
The only real options are to run only trusted code, or to run the process in some kind of sandbox.
On Unix-like operating systems, a workable solution may be to create a chroot and run the program inside it.
If you're okay with just preventing files from being opened via the open function, you can patch it in the builtins module:
import builtins

import pytest

_original_open = builtins.open

class FileSystemUsageError(Exception):
    pass

def patched_open(*args, **kwargs):
    raise FileSystemUsageError()

@pytest.fixture
def disable_fs():
    builtins.open = patched_open
    yield
    builtins.open = _original_open
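A test opting into the fixture then fails fast on any local file access; a minimal sketch, assuming the code above lives in conftest.py:

import pytest
from conftest import FileSystemUsageError  # assumes the snippet above is in conftest.py

def test_no_local_io(disable_fs):
    with pytest.raises(FileSystemUsageError):
        open('somefile.txt', 'w')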
I based this example on a pytest plugin written by the company I currently work for, which prevents network use in pytest tests. You can see the full example here: https://github.com/best-doctor/pytest_network/blob/4e98d816fb93bcbdac4593710ff9b2d38d16134d/pytest_network.py

Is it possible to implement multiple test runners in PyUnit while only running the test suite once?

import unittest
import HTMLTestRunner
from teamcity import is_running_under_teamcity
from teamcity.unittestpy import TeamcityTestRunner

outfile = open('test_report.html', 'w')  # assumed: where the HTML report goes

if __name__ == '__main__':
    if is_running_under_teamcity():
        runner = TeamcityTestRunner()
    else:
        runner = HTMLTestRunner.HTMLTestRunner(
            stream=outfile,
            title='Test Report',
            description='This is an example.'
        )
    unittest.main(testRunner=runner)
I am currently running some tests using the unittest module in Python; my current code is above. I am deploying this test setup on TeamCity: the first module allows me to convert the output into teamcity-messages, and the second creates an HTML report of the results. Is there a way I can run both of these runners while only running one set of tests? The only options I can see at the minute are to try to combine both of these modules into a hybrid, or to use another testing module that TeamCity supports. However, I would like to keep the dependencies as low as possible.
Any ideas would be great :)
Looks like you'll have to hand-roll it: looking at the code, TeamcityTestRunner is a pretty simple extension of the standard TextTestRunner, whereas HTMLTestRunner is a far more complex beast.
Sadly this is one area of the stdlib which is really badly architected: one would expect the test runner to be concerned solely with discovering and running tests, yet it is also tasked with part of the test reporting rather than there being an entirely separate test reporter (and that reporting responsibility is furthermore split with the test result, which shouldn't be part of that one's job description either).
Frankly if you don't have any further customisation I'd suggest just using pytest as your test runner instead of unittest with a custom runner:
- it should be able to run unittest tests fine
- IME it has better separation of concerns and pluggability, so having multiple reporters / formatters should work out of the box
- pytest-html certainly has no issue generating its reports without affecting the normal text output
- according to the readme, teamcity-messages gets automatically enabled and used under pytest, so I'd assume generating HTML reports during your TeamCity builds would work fine (to be tested; see the sketch below)
- and you can eventually migrate to using pytest tests (which are so much better it's not even funny)
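A minimal sketch of such a run, assuming pytest, pytest-html and teamcity-messages are installed (paths and filenames are illustrative):

import pytest

# pytest-html writes the HTML report; teamcity-messages detects the
# TEAMCITY_VERSION environment variable and switches its output on
# automatically, so a single run feeds both reporters.
pytest.main(['tests/', '--html=report.html', '--self-contained-html'])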

Read py.test's output as object

Earlier I was using Python's unittest in my project, and with it came unittest.TextTestRunner and unittest.defaultTestLoader.loadTestsFromTestCase. I used them for the following reasons:
1. Control the execution of the unittests using a wrapper function which calls the unittests' run method. I did not want the command-line approach.
2. Read the unittest output from the result object and upload the results to a bug-tracking system which allows us to generate some complex reports on code stability.
Recently a decision was made to switch to py.test; how can I do the above using py.test? I don't want to parse any CLI/HTML output to get the results from py.test. I also don't want to write too much code in my unit-test file to do this.
Can someone help me with this?
You can use pytest's hooks to intercept the test result reporting, e.g. in conftest.py:
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_logreport(report):
    yield
    # Define when you want to report:
    # when=setup/call/teardown,
    # fields: .failed/.passed/.skipped
    if report.when == 'call' and report.failed:
        # Add to the database or an issue tracker or wherever you want.
        print(report.longreprtext)
        print(report.sections)
        print(report.capstdout)
        print(report.capstderr)
Similarly, you can intercept any of these hooks to inject your code at the needed stage (in some cases, with a try-except around the yield):
pytest_runtest_protocol(item, nextitem)
pytest_runtest_setup(item)
pytest_runtest_call(item)
pytest_runtest_teardown(item, nextitem)
pytest_runtest_makereport(item, call)
pytest_runtest_logreport(report)
Read more: Writing pytest plugins
All of this can easily be done either as a tiny plugin packaged as a simple installable library, or as a pseudo-plugin conftest.py which simply sits in one of the directories with the tests.
It looks like pytest lets you launch from Python code instead of the command line; you just pass the same arguments to the function call that you would pass on the command line.
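For example, a small collector object can be passed in as a plugin and will receive the report objects directly (a sketch; the class and attribute names below are mine, apart from the pytest hook and the TestReport fields):

import pytest

class ResultCollector:
    # gathers TestReport objects as pytest produces them
    def __init__(self):
        self.reports = []

    def pytest_runtest_logreport(self, report):
        if report.when == 'call':
            self.reports.append(report)

collector = ResultCollector()
pytest.main(['tests/'], plugins=[collector])
for report in collector.reports:
    print(report.nodeid, report.outcome)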
pytest can create resultlog-format files, but the feature is deprecated. The documentation suggests using the pytest-tap plugin, which produces files in the Test Anything Protocol format.

Running the same tests with different configurations

I have some Python code abstracting the database and the business logic on top of it. This code is already covered by unit tests, but now I need to test it against different DBs (MySQL, SQLite, etc.).
What is the usual pattern for running the same set of tests with different configurations? My goal is making sure that the abstraction layer works as expected independently of the underlying database. If it helps, I'm using nosetests for running tests, but it seems to lack the Test Suite concept.
Best regards.
I like to use test fixtures for situations in which I have several similar tests. In Python, under Nose, I usually implement this as a common test module imported by other modules. For instance, I might use the following file structure:
db_fixtures.py:
import unittest

class BaseDB(unittest.TestCase):
    def testFirstOperation(self):
        self.db.query("Foo")

    def testSecondOperation(self):
        self.db.query("Blah")
database_tests.py:
import db_fixtures

class SQliteTest(db_fixtures.BaseDB):
    def setUp(self):
        self.db = createSqliteconnection()

class MySQLTest(db_fixtures.BaseDB):
    def setUp(self):
        self.db = createMySQLconnection()
This will run all tests defined in BaseDB on both MySQL and SQLite. Note that I named db_fixtures.py in such a way that it won't be picked up by Nose.
Nose supports test suites; just import and use unittest.TestSuite. In fact, Nose will happily run any tests written using the standard library's unittest module, so tests do not need to be written in the Nose style to be discovered by the Nose test runner.
However, I suspect you need more than test-suite support to do the kind of testing you are talking about, but more detail about your application is necessary to really address that.
Use the attrib plugin to tag the tests (see the sketch below), and on the command line:
1. nosetests -s -a 'sqlite'
2. nosetests -s -a 'mysql'
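The tagging side would look something like this (a sketch using nose's attrib plugin; the test bodies are placeholders):

from nose.plugins.attrib import attr

@attr('sqlite')
def test_sqlite_roundtrip():
    ...  # exercised only by: nosetests -s -a 'sqlite'

@attr('mysql')
def test_mysql_roundtrip():
    ...  # exercised only by: nosetests -s -a 'mysql'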

python test framework

I have to test a piece of hardware using its provided Python API.
The hardware has two interfaces, one of which has to be programmed using its API and then checked, via the other (working) interface, that values are read/written correctly.
Is there a Python library I can use?
It's something like this:
Test 1
- write using the interface under test
- check if written correctly via the working interface
- program the hardware using the working interface, then
Test 2
- write using the interface under test and check
Also try out various ranges of values for writing within the tests, at various speeds set through the API, and so on...
A log or results file should be created at the end of this series of tests, detailing each test, whether it passed or failed, and some other results from the test run.
Try the unittest module from the standard library (formerly known as PyUnit).
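As a rough sketch of the pattern described in the question (the hw module and its open_interface/write/read calls are hypothetical stand-ins for the vendor's API):

import unittest

import hw  # hypothetical: the hardware's provided Python API

class WriteReadTest(unittest.TestCase):
    def setUp(self):
        self.under_test = hw.open_interface('A')  # interface under test
        self.reference = hw.open_interface('B')   # known-good interface

    def test_written_values_read_back(self):
        for value in (0x00, 0x7F, 0xFF):  # vary the written values
            self.under_test.write(value)
            self.assertEqual(self.reference.read(), value)

if __name__ == '__main__':
    unittest.main()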
I'd recommend py.test. It features auto discovery of tests, is non-invasive and you can easily log test results to a file (though that should be possible with every test framework).
Just to be complete, another of these auto-discovery test suites is nose (http://code.google.com/p/python-nose/). I normally just use straight-up unittest (http://docs.python.org/library/unittest.html), but I am in a possibly more formal environment.
If you want a simple test library with auto-logging that lets you vary speed, retries, and other test-related parameters, you could try the test_steps package, which can be used independently or together with py.test / nose.
