I have some Python code that abstracts the database and the business logic on top of it. This code is already covered by unit tests, but now I need to test it against different databases (MySQL, SQLite, etc.).
What is the standard pattern for running the same set of tests with different configurations? My goal is to make sure that the abstraction layer works as expected independently of the underlying database. If it helps, I'm using nosetests to run the tests, but it seems to lack the concept of a test suite.
Best regards.
I like to use test fixtures for situations in which I have several similar tests. In Python, under Nose, I usually implement this as a common test module imported by other modules. For instance, I might use the following file structure:
db_fixtures.py:
import unittest

class BaseDB(unittest.TestCase):
    def testFirstOperation(self):
        self.db.query("Foo")

    def testSecondOperation(self):
        self.db.query("Blah")
database_tests.py:
import db_fixtures

class SQliteTest(db_fixtures.BaseDB):
    def setUp(self):
        self.db = createSqliteconnection()

class MySQLTest(db_fixtures.BaseDB):
    def setUp(self):
        self.db = createMySQLconnection()
This will run all the tests defined in BaseDB against both MySQL and SQLite. Note that I named db_fixtures.py in such a way that it won't be picked up and run by Nose on its own.
Nose supports test suites; just import and use unittest.TestSuite. In fact, nose will happily run any tests written using the standard library's unittest module, so tests do not need to be written in the nose style to be discovered by the nose test runner.
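For example (the module, class and test names here are purely illustrative), a plain unittest suite that nose's runner will execute as-is:
# suite_example.py -- a standard unittest.TestSuite; nose runs it like any other unittest code
import unittest

class MyDBTest(unittest.TestCase):
    def test_connect(self):
        self.assertTrue(True)

def suite():
    loader = unittest.TestLoader()
    return unittest.TestSuite(loader.loadTestsFromTestCase(MyDBTest))

if __name__ == '__main__':
    unittest.TextTestRunner().run(suite())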
However, I suspect you need more than test suite support to do the kind of testing you are talking about, but more detail about your application is necessary to really address that.
Use nose's attrib plugin: tag the tests with matching attributes (sketched below the commands) and run, on the command line:
1. nosetests -s -a 'sqlite'
2. nosetests -s -a 'mysql'
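A minimal sketch of the tagging side (class and method names are made up), so that -a 'sqlite' and -a 'mysql' actually select something:
# sketch: tagging tests so that nosetests -a 'sqlite' / -a 'mysql' selects them
import unittest
from nose.plugins.attrib import attr

class SQLiteTests(unittest.TestCase):
    @attr('sqlite')
    def test_query(self):
        pass

class MySQLTests(unittest.TestCase):
    @attr('mysql')
    def test_query(self):
        pass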
import unittest

import HTMLTestRunner
from teamcity import is_running_under_teamcity
from teamcity.unittestpy import TeamcityTestRunner

if __name__ == '__main__':
    if is_running_under_teamcity():
        runner = TeamcityTestRunner()
    else:
        # outfile is assumed to be an open file handle for the HTML report
        runner = HTMLTestRunner.HTMLTestRunner(
            stream=outfile,
            title='Test Report',
            description='This is an example.'
        )
    unittest.main(testRunner=runner)
I am currently running some tests using Python's unittest module; my current code is above. I am deploying this test setup on TeamCity: the first module allows me to convert the output into teamcity-messages, and the second creates an HTML report of the results. Is there a way I can run both of these runners while only running one set of tests? The only options I can see at the minute are to try to combine both modules into a hybrid, or to use another testing module that TeamCity supports. However, I would like to keep the dependencies as low as possible.
Any ideas would be great :)
Looks like you'll have to hand-roll it: looking at the code, TeamcityTestRunner is a pretty simple extension of the standard TextTestRunner, whereas HTMLTestRunner is a much more complex beast.
Sadly this is one area of the stdlib which is really badly architected: one would expect the test runner to be concerned solely with discovering and running tests; however, it is also tasked with part of the test reporting rather than there being an entirely separate test reporter (and that reporting is furthermore a responsibility split with the test result, which shouldn't be part of that one's job description either).
Frankly if you don't have any further customisation I'd suggest just using pytest as your test runner instead of unittest with a custom runner:
it should be able to run unittest tests fine
IME it has better separation of concerns and pluggability so having multiple reporters / formatters should work out of the box
pytest-html certainly has no issue generating its reports without affecting the normal text output
according to the teamcity-messages readme, TeamCity reporting gets automatically enabled when running under pytest
so I'd assume generating HTML reports during your TeamCity builds would work fine (to be tested)
and you can eventually migrate to using pytest tests (which are so much better it's not even funny)
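For instance, assuming pytest-html and teamcity-messages are both installed, a single invocation along these lines should emit the TeamCity service messages and write the HTML report in the same run (untested sketch):
pytest --html=report.html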
I've built a series of tests (using pytest) for a codebase interacting with the Github API (both making calls to it, and receiving webhooks).
Currently, these tests run against a semi-realistic mock of github: calls to github are intercepted through Sentry's Responses and run through a fake github/git implementation (of the bits I need), which can also be interacted with directly from the test cases. Any webhook which needs to be triggered uses Werkzeug's test client to call back into the WSGI application used as the webhook endpoint.
This works nicely, fast (enough) and is an excellent default, but I'd like the option to run these same tests against github itself, and that's where I'm stumped: I need to switch out the current implementations of the systems under test (direct "library" access to the codebase & mock github) with different ones (API access to "externally" run codebase & actual github), and I'm not quite sure how to proceed.
I attempted to use pytest_generate_tests to switch out the fixture implementations (of the codebase & github) via a plugin but I don't quite know if that would even work, and so far my attempts to load a local file as plugin in pytest via pytest -p <package_name> have not been met with much success.
I'd like to know if I'm heading in the right direction, and in that case if anyone can help with using "local" plugins (not installed via setuptools and not conftest.py-based) in pytest.
Not sure if that has any relevance, but I'm using pytest 3.6.0 running on CPython 3.6, requests 2.18.4, responses 0.9.0 and Werkzeug 0.14.1.
There are several ways to approach this. The one I would go for is to run your mocked tests by default, and then, when a command line flag is present, test against both the mock and the real version.
So first the easier part, adding a command line option:
def pytest_addoption(parser):
    parser.addoption('--github', action='store_true',
                     help='Also test against real github')
Now this is available via the pytestconfig fixture as pytestconfig.getoption('github'), often also indirectly available, e.g. via the request fixture as request.config.getoption('github').
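For illustration, a minimal (hypothetical) fixture that simply exposes the flag via request:
import pytest

@pytest.fixture
def use_real_github(request):
    # hypothetical helper: True when the run was started with --github
    return request.config.getoption('github')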
Now you need to use this to parametrize any test which needs to interact with the github API, so that it gets run both with the mock and with the real instance. Without knowing your code, it sounds like a good place to do this would be the Werkzeug client: make it into a fixture, and it can then be parametrized to return either a real client or the test client you mention:
@pytest.fixture
def werkzeug_client(werkzeug_client_type):
    if werkzeug_client_type == 'real':
        return create_the_real_client()
    else:
        return create_the_mock_client()

def pytest_generate_tests(metafunc):
    if 'werkzeug_client_type' in metafunc.fixturenames:
        types = ['mock']
        if metafunc.config.getoption('github'):
            types.append('real')
        metafunc.parametrize('werkzeug_client_type', types)
Now if you write your test as:
def test_foo(werkzeug_client):
    assert werkzeug_client.whatever()
You will get one test normally and two tests when invoked with pytest --github.
(Be aware that hooks must be in conftest.py files, while fixtures can be anywhere. Be extra aware that the pytest_addoption hook should really only be used in the top-level conftest file, to avoid confusion about when the hook is picked up by pytest and when it is not. So you should really put all this code in a top-level conftest.py file.)
I am using the Python Tornado web server. When I write tests, I want to do something before all of them run (such as preparing some data, resetting the database, ...). How can I achieve this in Python or in the Tornado web server?
In some languages I can easily do this. Example: in Golang, there is a file named main_test.go.
Thanks
In your test folder, create an __init__.py and initialize everything there.
# __init__.py
reset_database()
run_migration()
seed_data()
Note that you should configure your project so that tests are run from the root folder. For example, if your test lives in app/tests/api/sample_api.py, it should be run from app. In this case, __init__.py will always run before your sample_api.py. Here is the command line I usually use to run all the tests in a project:
python -m unittest discover
If you use unittest.TestCase or the tornado.testing.*TestCase classes (which are in fact subclasses of unittest.TestCase), look at the setUp() and tearDown() methods. You can wrap whatever you need like this:
class MyTests(unittest.TestCase):
    def setUp(self):
        load_data_to_db()

    def test_smth(self):
        self.assertIsInstance("setUp and tearDown are useful", str)

    def tearDown(self):
        cleanup_db()
Scenario:
Suppose I have some Python scripts that do maintenance on the DB. If I use unittest to run tests on those scripts, they will interact with the DB and clobber it. So I'm trying to find a way to use the native Django test machinery, which simulates the DB without clobbering the real one. From the docs it looks like you can run test scripts using manage.py test /path/to/someTests.py, but only on Django 1.6 or later (I'm on Django-Nonrel v1.5.5).
I found How do I run a django TestCase manually / against other database, which talks about running individual test cases, and it seems dated (it mentions Django v1.2). However, I'd like to manually kick off the entire suite of tests I've defined. Assume for now that I can't kick off the suite using manage.py test myApp (because that's not what I'm doing). I tried kicking off the tests using unittest.main(). This works, with one drawback... it completely clobbers the database. Thank goodness for backups.
someTest.py
import unittest
import myModule
from django.test import TestCase
class venvTest(TestCase):  # some test
    def test_outside_venv(self):
        self.failUnlessRaises(EnvironmentError, myModule.main)

if __name__ == '__main__':
    unittest.main()  # this fires off all tests, with the nasty side effect of
                     # totally clobbering the DB
How can I run python someTest.py and get it to fire off all the testcases using the Django test runner?
I can't even get this to work:
>>> venvTest.run()
Update
Since it's been mentioned in the comments, I asked a similar but different question here: django testing external script. That question concerns using manage.py test /someTestScript.py. This question concerns firing off test cases or a test suite separately from manage.py.
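For reference, a rough, untested sketch of driving Django's test runner from a plain script so that it builds and tears down a separate test database instead of touching the real one; the settings module and app label are placeholders, and the exact bootstrapping differs between Django versions:
# run_tests.py -- sketch only; 'myproject.settings' and 'myApp' are placeholders
import os
import sys

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

from django.conf import settings
from django.test.utils import get_runner

TestRunner = get_runner(settings)                    # resolves TEST_RUNNER from settings
runner = TestRunner(verbosity=1, interactive=False)
failures = runner.run_tests(['myApp'])               # sets up the test DB, runs the tests, tears it down
sys.exit(bool(failures))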
... and a pony! No, seriously. I am looking for a way to organize tests that "just works". Most things do work, but not all pieces fit together. So here is what I want:
Having tests automatically discovered. This includes doctests. Note that the sum of doctests must not appear as a single test. (i.e. not what py.test --doctest-modules does)
Being able to run tests in parallel. (Something like py.test -n from xdist)
Generating a coverage report.
Make python setup.py test just work.
My current approach involves a tests directory and the load_tests protocol. All files contained are named like test_*.py. This makes python -m unittest discover just work, if I create a file test_doctests.py with the following content.
import doctest
import mymodule1, mymodule2

def load_tests(loader, tests, ignore):
    tests.addTests(doctest.DocTestSuite(mymodule1))
    tests.addTests(doctest.DocTestSuite(mymodule2))
    return tests
This approach also has the upside that one can use setuptools and supply setup(test_suite="unittest2.collector").
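For example, a minimal setup.py along those lines (the package metadata is placeholder):
# setup.py (sketch): hook unittest2's collector into `python setup.py test`
from setuptools import setup

setup(
    name='mypackage',
    version='0.1',
    packages=['mypackage'],
    test_suite='unittest2.collector',
)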
However this approach has a few problems.
coverage.py expects to run a script. So I cannot use unittest2 discovery here.
py.test does not run load_tests functions, so it does not find the doctests and the --doctest-modules option is crap.
nosetests runs the load_tests functions, but does not supply any parameters. This appears totally broken on the side of nose.
How can I make things work better than this or fix some of the issues above?
This is an old question, but the problem still persists for some of us! I was just working through it and found a solution similar to kaapstorm's, but with much nicer output. I use py.test to run it, but I think it should be compatible across test runners:
import doctest
from mypackage import mymodule

def test_doctest():
    results = doctest.testmod(mymodule)
    if results.failed:
        raise Exception(results)
What I end up with in a failure case is the printed stdout output that you would get from running doctest manually, with an additional exception that looks like this:
Exception: TestResults(failed=1, attempted=21)
As kaapstorm mentioned, it doesn't count tests properly (unless there are failures), but I find that isn't worth a whole lot provided the coverage numbers come back high :)
I use nose, and found your question when I experienced the same problem.
What I've ended up going with is not pretty, but it does run the tests.
import doctest
import mymodule1, mymodule2

def test_mymodule1():
    assert doctest.testmod(mymodule1, raise_on_error=True)

def test_mymodule2():
    assert doctest.testmod(mymodule2, raise_on_error=True)
Unfortunately it runs all the doctests in a module as a single test. But if things go wrong, at least I know where to start looking. A failure results in a DocTestFailure, with a useful message:
DocTestFailure: <DocTest mymodule1.myfunc from /path/to/mymodule1.py:63 (4 examples)>