Scenario:
Suppose I have some Python scripts that do maintenance on the DB. If I use unittest to run tests on those scripts, they will interact with the DB and clobber it. So I'm trying to find a way to use the native Django test suite, which simulates the DB without clobbering the real one. From the docs it looks like you can run test scripts using manage.py test /path/to/someTests.py, but only on Django ≥ 1.6 (I'm on Django-Nonrel v1.5.5).
I found How do I run a django TestCase manually / against other database, which talks about running individual test cases, but it seems dated--it mentions Django v1.2. However, I'd like to manually kick off the entire suite of tests I've defined. Assume for now that I can't kick off the suite using manage.py test myApp (because that's not what I'm doing). I tried kicking off the tests using unittest.main(). This works, with one drawback... it completely clobbers the database. Thank goodness for backups.
someTest.py
import unittest
import myModule
from django.test import TestCase

class venvTest(TestCase):  # some test
    def test_outside_venv(self):
        self.failUnlessRaises(EnvironmentError, myModule.main)

if __name__ == '__main__':
    unittest.main()  # this fires off all tests, with the nasty side effect
                     # of totally clobbering the DB
How can I run python someTest.py and get it to fire off all the testcases using the Django test runner?
I can't even get this to work:
>>> venvTest.run()
Update
Since it's been mentioned in the comments, I asked a similar but different question here: django testing external script. That question concerns using manage.py test /someTestScript.py. This question concerns firing off test cases or a test suite separately from manage.py.
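For concreteness, this is roughly the entry point I'm after (a sketch only -- it assumes Django 1.5's get_runner() API and a configured DJANGO_SETTINGS_MODULE; I haven't verified it):

```python
# someTest.py -- sketch, not verified: assumes Django 1.5's test runner
# API and that DJANGO_SETTINGS_MODULE points at my settings module.
from django.conf import settings
from django.test.utils import get_runner

if __name__ == '__main__':
    TestRunner = get_runner(settings)  # DjangoTestSuiteRunner by default
    runner = TestRunner(verbosity=1, interactive=False)
    # run_tests() sets up the test database, runs the suite against it,
    # and tears the test database down afterwards
    failures = runner.run_tests(['myApp'])
```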
if __name__ == '__main__':
    if is_running_under_teamcity():
        runner = TeamcityTestRunner()
    else:
        runner = HTMLTestRunner.HTMLTestRunner(
            stream=outfile,
            title='Test Report',
            description='This is an example.'
        )
    unittest.main(testRunner=runner)
I am currently running some tests using the unittest module in Python; my current code is above. I am deploying this test setup on TeamCity: the first module lets me convert the output into teamcity-messages, and the second creates an HTML report of the results. Is there a way I can run both of these runners while only running one set of tests? The only options I can see at the minute are to try to combine both modules into a hybrid, or to use another testing module that TeamCity supports. However, I would like to keep the dependencies as low as possible.
Any ideas would be great :)
Looks like you'll have to handroll it. Looking at the code, TeamcityTestRunner is a pretty simple extension of the standard TextTestRunner, but HTMLTestRunner is a way more complex beast.
Sadly this is one area of the stdlib which is really badly architected: one could expect the test runner to be concerned solely with discovering and running tests; however, it's also tasked with part of the test reporting rather than delegating to an entirely separate test reporter (and that reporting is itself a responsibility split with the test result, which shouldn't be part of that one's job description either).
Frankly if you don't have any further customisation I'd suggest just using pytest as your test runner instead of unittest with a custom runner:
- it should be able to run unittest tests fine
- IME it has better separation of concerns and pluggability, so having multiple reporters / formatters should work out of the box
- pytest-html certainly has no issue generating its reports without affecting the normal text output
- according to the readme, teamcity gets automatically enabled and used for pytest, so I'd assume generating html reports during your teamcity builds would work fine (to test)
- and you can eventually migrate to using pytest tests (which are so much better it's not even funny)
I am using the Python Tornado web server. When I write tests, I want to do something before all the tests run (such as preparing data, resetting the database, and so on). How can I achieve this in Python or the Tornado web server?
In some languages I can do this easily. For example, in Golang there is a file named main_test.go.
Thanks
In your test folder, create __init__.py and initialize everything there.
# __init__.py
reset_database()
run_migration()
seed_data()
Note that you should configure your project to run tests from the root folder. For example, if your test lives at app/tests/api/sample_api.py, it should be run from app. In that case, __init__.py will always run before sample_api.py. Here is the command line I usually run for all the tests in a project:
python -m unittest discover
If you use unittest.TestCase or tornado.testing.*TestCase (which are in fact subclasses of unittest.TestCase), look at the setUp() and tearDown() methods. You can wrap everything you want like this:
class MyTests(unittest.TestCase):
    def setUp(self):
        load_data_to_db()

    def test_smth(self):
        self.assertIsInstance("setUp and tearDown are useful", str)

    def tearDown(self):
        cleanup_db()
I want to use the --keepdb option that allows keeping the database after running tests, because in my case the creation of the database takes a while (but running the tests is actually fast).
However, I would like to keep the database structure only, not the data.
Each time I run the tests, I need an empty database.
I know I could use the tearDown method to delete each object created, but that is a tedious and error-prone way to do it.
I just need a way to tell Django to flush the whole database (not destroy it) at the end of the unit tests.
I have thought of making a very simple script that would:
Run the tests keeping the db: manage.py test --keepdb
run manage.py flush --database test_fugodb
However, with the 2nd step I get django.db.utils.ConnectionDoesNotExist: The connection test_fugodb doesn't exist. What is the name of this test db? I took the one displayed when running the tests:
What's wrong?
Thanks!
Drop all the collections of the test database after the tests.
That way, the db doesn't need to be recreated, and it is empty after the tests.
For the name of the test database, see the Django docs:
https://docs.djangoproject.com/en/1.8/topics/testing/overview/#the-test-database
You could create your own test runner to replace your script:
django unit tests without a db
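One possible shape for such a runner (a sketch only, not verified: it assumes Django's DiscoverRunner API, and the module path in the settings line below is hypothetical):

```python
# myapp/test_runner.py -- sketch: flush the kept test database after
# the suite, so --keepdb preserves the schema but not the data
from django.core.management import call_command
from django.test.runner import DiscoverRunner

class FlushKeepDBRunner(DiscoverRunner):
    def teardown_databases(self, old_config, **kwargs):
        # 'flush' empties every table but leaves the structure in place;
        # at this point the connections still target the test database
        call_command('flush', interactive=False)
        super().teardown_databases(old_config, **kwargs)
```

Then point Django at it in settings.py with TEST_RUNNER = 'myapp.test_runner.FlushKeepDBRunner' and run manage.py test --keepdb as before.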
I have a test in Django 1.5 that passes in these conditions:
when run by itself in isolation
when the full TestCase is run
when all of my app's tests are run
But it fails when the full test suite is run with python manage.py test. Why might this be happening?
The aberrant test uses django.test.Client to POST some data to an endpoint, and then the test checks that an object was successfully updated. Could some other app be modifying the test client or the data itself?
I have tried some print debugging and I see all of the data being sent and received as expected. The specific failure is a does-not-exist exception that is raised when I try to fetch the to-be-updated object from the db. Strangely, in the exception handler itself, I can query for all objects of that type and see that the target object does in fact exist.
Edit:
My issue was resolved when I found that I was querying for the target object by id and User and not id and UserProfile, but it's still confusing to me that this would work in some cases but fail in others.
I also found that the test would fail with python manage.py test auth <myapp>
It sounds like your problem does not involve mocks, but I just spent all day debugging an issue with similar symptoms, and your question is the first one that came up when I was searching for a solution, so I wanted to share my solution here, just in case it will prove helpful for others. In my case, the issue was as follows.
I had a single test that would pass in isolation, but fail when run as part of my full test suite. In one of my view functions I was using the Django send_mail() function. In my test, rather than having it send me an email every time I ran my tests, I patched send_mail in my test method:
from mock import patch

...

def test_stuff(self):
    ...
    with patch('django.core.mail.send_mail') as mocked_send_mail:
        ...
That way, after my view function is called, I can test that send_mail was called with:
self.assertTrue(mocked_send_mail.called)
This worked fine when running the test on its own, but failed when run with other tests in the suite. The reason this fails is that when it runs as part of the suite other views are called beforehand, causing the views.py file to be loaded, causing send_mail to be imported before I get the chance to patch it. So when send_mail gets called in my view, it is the actual send_mail that gets called, not my patched version. When I run the test alone, the function gets mocked before it is imported, so the patched version ends up getting imported when views.py is loaded. This situation is described in the mock documentation, which I had read a few times before, but now understand quite well after learning the hard way...
The solution was simple: instead of patching django.core.mail.send_mail I just patched the version that had already been imported in my views.py - myapp.views.send_mail. In other words:
with patch('myapp.views.send_mail') as mocked_send_mail:
    ...
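This "patch where it is used" rule can be demonstrated without Django at all. Here is a self-contained toy (using the stdlib unittest.mock and two hypothetical modules, fakelib and fakeviews, built on the fly):

```python
import sys
import types
from unittest import mock

# "fakelib" defines send_mail; "fakeviews" imported it at load time
fakelib = types.ModuleType("fakelib")
fakelib.send_mail = lambda: "real mail sent"
sys.modules["fakelib"] = fakelib

fakeviews = types.ModuleType("fakeviews")
# the equivalent of `from fakelib import send_mail` inside views.py
fakeviews.send_mail = fakelib.send_mail
sys.modules["fakeviews"] = fakeviews

def call_view():
    # the "view" always calls the name bound in its own module
    return sys.modules["fakeviews"].send_mail()

# Patching the *source* module does nothing: fakeviews already holds
# a reference to the original function object.
with mock.patch("fakelib.send_mail", return_value="mocked"):
    print(call_view())  # real mail sent

# Patching the name *where it is used* works.
with mock.patch("fakeviews.send_mail", return_value="mocked"):
    print(call_view())  # mocked
```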
Try this to help you debug:
./manage.py test --reverse
In my case I realised that one test was updating certain data which would cause the following test to fail.
Another possibility is that you've disconnected signals in the setUp of a test class and did not re-connect in the tearDown. This explained my issue.
There is a lot of nondeterminism that can come from tests that involve the database.
For instance, most databases don't offer deterministic selects unless you do an order by. This leads to the strange behavior where when the stars align, the database returns things in a different order than you might expect, and tests that look like
result = pull_stuff_from_database()
assert result[0] == 1
assert result[1] == 2
will fail because result[0] == 2 and result[1] == 1.
Another source of strange nondeterministic behavior is the id autoincrement together with sorting of some kind.
Let's say each test creates two items and you sort by item name before you do assertions. When you run it by itself, "Item 1" and "Item 2" work fine and pass the test. However, when you run the entire suite, one of the tests generates "Item 9" and "Item 10". "Item 10" is sorted ahead of "Item 9", so your test fails because the order is flipped.
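That sorting trap is easy to reproduce in two lines:

```python
# Lexicographic vs. numeric ordering: "1" < "9" character-wise,
# so "Item 10" sorts before "Item 9" despite 10 > 9.
names = ["Item 9", "Item 10"]
print(sorted(names))  # ['Item 10', 'Item 9']
```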
So I first read #elethan's answer and went 'well this is certainly not my problem, I'm not patching anything'. But it turned out I was indeed patching a method in a different test suite, which stayed permanently patched for the rest of the time the tests were run.
I had something of this sort going on;
send_comment_published_signal_mock = MagicMock()
comment_published_signal.send = send_comment_published_signal_mock
You can see why this would be a problem if things are not cleaned up after running the test suite. The solution in my case was to use a with statement to restrict the patch's scope.
signal = 'authors.apps.comments.signals.comment_published_signal.send'
with patch(signal) as comment_published_signal_mock:
    do_your_test_stuff()
This is the easy part, though, once you know where to look. The guilty test could come from anywhere. The way to find it is to run the failing test together with other tests, progressively narrowing the set down, module by module.
Something like;
./manage.py test A C.TestModel.test_problem
./manage.py test B C.TestModel.test_problem
./manage.py test D C.TestModel.test_problem
Then recursively, if for example B is the problem child;
./manage.py test B.TestModel1 C.TestModel.test_problem
./manage.py test B.TestModel2 C.TestModel.test_problem
./manage.py test B.TestModel3 C.TestModel.test_problem
This answer gives a good explanation for all this.
This answer is in the context of django, but can really apply to any python tests.
Good luck.
This was happening to me too.
When running the tests individually they passed, but running all tests with ./manage.py test it failed.
The problem in my case was that I had some tests inheriting from unittest.TestCase instead of django.test.TestCase, so some tests were failing because there were records left in the database from previous tests.
After making all tests inherit from django.test.TestCase this problem has gone away.
I found the answer on https://stackoverflow.com/a/436795/6490637
I have some Python code abstracting the database and the business logic on it. This code is already covered by unit tests but now I need to test this code against different DBs (MySQL, SQLite, etc...)
What is the standard pattern for running the same set of tests with different configurations? My goal is making sure that the abstraction layer works as expected independently of the underlying database. If it helps, I'm using nosetests for running the tests, but it seems to lack the test suite concept.
Best regards.
I like to use test fixtures for situations in which I have several similar tests. In Python, under Nose, I usually implement this as a common test module imported by other modules. For instance, I might use the following file structure:
db_fixtures.py:
import unittest

class BaseDB(unittest.TestCase):
    def testFirstOperation(self):
        self.db.query("Foo")

    def testSecondOperation(self):
        self.db.query("Blah")

database_tests.py:
import db_fixtures

class SQliteTest(db_fixtures.BaseDB):
    def setUp(self):
        self.db = createSqliteconnection()

class MySQLTest(db_fixtures.BaseDB):
    def setUp(self):
        self.db = createMySQLconnection()
This will run all tests defined in BaseDB on both MySQL and SQlite. Note that I named db_fixtures.py in such a way that it won't be run by Nose.
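A variant of the same pattern that is runnable as-is: make the base a plain mixin (not a TestCase), so no loader ever collects it without a backend, and have each subclass supply its own connection in setUp(). To keep this sketch self-contained, both "backends" here are sqlite3, in-memory vs. on-disk:

```python
import os
import sqlite3
import tempfile
import unittest

class BaseDBTests:
    """Shared tests: a plain mixin rather than a TestCase subclass,
    so the test loader never tries to run it on its own."""

    def test_insert_and_count(self):
        self.db.execute("CREATE TABLE t (x INTEGER)")
        self.db.execute("INSERT INTO t VALUES (1)")
        (count,) = self.db.execute("SELECT COUNT(*) FROM t").fetchone()
        self.assertEqual(count, 1)

class SqliteMemoryTest(BaseDBTests, unittest.TestCase):
    def setUp(self):
        self.db = sqlite3.connect(":memory:")

    def tearDown(self):
        self.db.close()

class SqliteFileTest(BaseDBTests, unittest.TestCase):
    def setUp(self):
        fd, self.path = tempfile.mkstemp(suffix=".db")
        os.close(fd)
        self.db = sqlite3.connect(self.path)

    def tearDown(self):
        self.db.close()
        os.remove(self.path)

loader = unittest.defaultTestLoader
suite = unittest.TestSuite([
    loader.loadTestsFromTestCase(SqliteMemoryTest),
    loader.loadTestsFromTestCase(SqliteFileTest),
])
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun, result.wasSuccessful())  # 2 True
```

For a real MySQL backend you would swap sqlite3.connect for your MySQL driver's connect call in the corresponding setUp().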
Nose supports test suites; just import and use unittest.TestSuite. In fact, nose will happily run any tests written using the standard lib's unittest module, so tests do not need to be written in the nose style to be discovered by the nose test runner.
However, I suspect you need more than test suite support to do the kind of testing you are talking about, but more detail about your application is necessary to really address that.
Use the attrib plugin, and on the command line:
1. nosetests -s -a 'sqlite'
2. nosetests -s -a 'mysql'