Is there any difference between creating a TestSuite and adding all the TestCases to it, versus just running python -m unittest discover in the TestCases directory?
For example, for a directory with two TestCases: test_case_1.py and test_case_2.py:
import unittest
from test_case_1 import TestCaseClass as test1
from test_case_2 import TestCaseClass as test2

# makeSuite wraps every test method of the given TestCase class in a suite
suite = unittest.TestSuite()
suite.addTest(unittest.makeSuite(test1))
suite.addTest(unittest.makeSuite(test2))
unittest.TextTestRunner().run(suite)
Or just cd into that directory and run python -m unittest discover.
I'm getting the same result either way, but I'm interested in knowing whether one approach is preferred over the other, and why.
I think an obvious benefit in favor of discover is maintenance.
After a month, you get rid of test_case_2. Some of the code above will fail (the import) and you'll have to correct the script. That's annoying, but not the end of the world.
After two months, someone on your team adds test_case_3, but is unaware that they need to add it to the script above. No tests fail, and everyone is happy; the problem is that nothing from test_case_3 actually runs. However, you might counter that it's unreasonable to write new tests and not notice that they're not running. This brings us to the next scenario.
Even worse: after three months, someone merges two versions of your script above, and test_case_3 gets squeezed out again. This might go unnoticed. Until it's corrected, people can work all they want on the code that test_case_3 is supposed to check, but it's simply untested.
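If you do still want a script (for example, to pick a custom runner), you can get the same maintenance benefit by letting unittest do the discovery for you. A minimal sketch, assuming the script runs from (or points at) the directory containing the test_*.py files:

import unittest

# Discover every test_*.py file under the given directory, so new test cases are
# picked up automatically and deleted ones don't break a hand-maintained import list.
suite = unittest.TestLoader().discover(start_dir='.', pattern='test_*.py')
unittest.TextTestRunner().run(suite)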
import unittest

import HTMLTestRunner
from teamcity import is_running_under_teamcity
from teamcity.unittestpy import TeamcityTestRunner

if __name__ == '__main__':
    if is_running_under_teamcity():
        runner = TeamcityTestRunner()
    else:
        # outfile is the destination for the HTML report (the filename is just an example)
        outfile = open('test_report.html', 'w')
        runner = HTMLTestRunner.HTMLTestRunner(
            stream=outfile,
            title='Test Report',
            description='This is an example.'
        )
    unittest.main(testRunner=runner)
I am currently running some tests using the unittest module in Python; my current code is above. I am deploying this test setup on TeamCity: the first module allows me to convert the output into teamcity-messages, and the second creates an HTML report of the results. Is there a way I can run both of these runners while only running one set of tests? The only option I can see at the minute is to either try to combine both of these modules into a hybrid, or to use another testing module that TeamCity supports. However, I would like to keep the dependencies as low as possible.
Any ideas would be great :)
Looks like you'll have to hand-roll it. Looking at the code, TeamcityTestRunner is a pretty simple extension of the standard TextTestRunner; HTMLTestRunner, however, is a far more complex beast.
Sadly this is one area of the stdlib which is really badly architected: one could expect the test runner to be concerned solely with discovering and running tests; however, it's also tasked with part of the test reporting rather than there being an entirely separate test reporter (and that reporting is furthermore a responsibility split with the test result, which shouldn't be part of that one's job description either).
Frankly if you don't have any further customisation I'd suggest just using pytest as your test runner instead of unittest with a custom runner:
it should be able to run unittest tests fine
IME it has better separation of concerns and pluggability so having multiple reporters / formatters should work out of the box
pytest-html certainly has no issue generating its reports without affecting the normal text output
according to the README, TeamCity support gets automatically enabled and used for pytest
so I'd assume generating HTML reports during your TeamCity builds would work fine (to be verified)
and you can eventually migrate to using pytest tests (which are so much better it's not even funny)
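For example, something like this should get you both an HTML report and TeamCity-formatted output from a single run (a sketch, to be verified against your setup: the --html flag comes from pytest-html, and teamcity-messages activates itself when it detects a TeamCity build):

pip install pytest pytest-html teamcity-messages
pytest --html=test_report.html path/to/tests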
I have about 10 tests in the same file, and each one of them has the following marker set so that they execute in order:
import pytest
@pytest.mark.order1
.
.
.
@pytest.mark.order10
But the tests never run in the order they are assigned. They always run in the order in which they are arranged in the file. Am I missing anything?
Even @pytest.mark.tryfirst didn't work. One thing I noticed is that @pytest.mark.order never shows up in PyCharm's suggestions, while at least @pytest.mark.tryfirst was there.
It looks like you're using pytest-ordering. That package is indeed "alpha quality" -- I wrote it and I haven't spent much time keeping it updated.
Instead of decorating with @pytest.mark.order1, try decorating with @pytest.mark.run(order=1). I believe the Read the Docs documentation is out of date.
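For reference, a minimal sketch of that usage with pytest-ordering installed (the test names are just placeholders):

import pytest

@pytest.mark.run(order=2)
def test_runs_second():
    assert True  # placeholder

@pytest.mark.run(order=1)
def test_runs_first():
    assert True  # placeholder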
I faced the same issue; you can use this decorator to order the tests:
@pytest.mark.run(order=1)
But first, install the pytest-ordering package.
Then it will work fine.
The pytest marks do not do anything special by themselves; they only mark the tests. On their own, marks can be used only for filtering tests with the -m CLI option.
That is about all pytest alone can do with marks. Well, that and a few small things like parametrization and skipifs.
Specifically, there is no such special mark as tryfirst. It is a parameter to the hook declaration, but this is not applicable for the tests/marks.
Some external or internal plugins can add special behavior which is dependent on the marks.
Pytest executes the tests in the order they were found (collected). In some cases, pytest can reorder (regroup) the tests for better fixture usage. At least, this is declared; not sure if actually done.
The tests are assumed to be completely independent by design. If your tests depend on each other, e.g. they use state of the system under test left behind by previous test cases, you have a problem with the test design. That state should somehow be converted into fixture(s).
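For example, here is a rough sketch of moving shared state into a fixture instead of relying on an earlier test to create it (all names here are hypothetical):

import pytest

@pytest.fixture
def seeded_account():
    # Build the state that a previous test used to leave behind.
    return {"id": 1, "balance": 100}  # stand-in for real setup code

def test_withdrawal(seeded_account):
    seeded_account["balance"] -= 30
    assert seeded_account["balance"] == 70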
If you still want to force some dependencies or ordering of the tests (contrary to the test design principles), you have to install a plugin for mark-based test ordering, e.g. http://pytest-ordering.readthedocs.io/en/develop/, and mark the tests according to its supported mark names.
It looks like you're using pytest-ordering. Make sure you have pytest-ordering installed in your environment (docs here: https://github.com/ftobia/pytest-ordering) and try to use the following decorator:
@pytest.mark.run(order=1)
I was also facing the same issue multiple times and tried everything available online. What worked for me was to install the plugin with pip3 install pytest-ordering and restart PyCharm.
I have a test in Django 1.5 that passes in these conditions:
when run by itself in isolation
when the full TestCase is run
when all of my app's tests are run
But it fails when the full test suite is run with python manage.py test. Why might this be happening?
The aberrant test uses django.test.Client to POST some data to an endpoint, and then the test checks that an object was successfully updated. Could some other app be modifying the test client or the data itself?
I have tried some print debugging and I see all of the data being sent and received as expected. The specific failure is a does-not-exist exception that is raised when I try to fetch the to-be-updated object from the db. Strangely, in the exception handler itself, I can query for all objects of that type and see that the target object does in fact exist.
Edit:
My issue was resolved when I found that I was querying for the target object by id and User and not id and UserProfile, but it's still confusing to me that this would work in some cases but fail in others.
I also found that the test would fail with python manage.py test auth <myapp>
It sounds like your problem does not involve mocks, but I just spent all day debugging an issue with similar symptoms, and your question was the first one that came up when I was searching for a solution, so I wanted to share mine here in case it proves helpful for others. In my case, the issue was as follows.
I had a single test that would pass in isolation, but fail when run as part of my full test suite. In one of my view functions I was using the Django send_mail() function. In my test, rather than having it send me an email every time I ran my tests, I patched send_mail in my test method:
from mock import patch
...

def test_stuff(self):
    ...
    with patch('django.core.mail.send_mail') as mocked_send_mail:
        ...
That way, after my view function is called, I can test that send_mail was called with:
self.assertTrue(mocked_send_mail.called)
This worked fine when running the test on its own, but failed when it was run with other tests in the suite. The reason is that when it runs as part of the suite, other views are called beforehand, causing the views.py file to be loaded, which causes send_mail to be imported before I get the chance to patch it. So when send_mail gets called in my view, it is the actual send_mail that gets called, not my patched version. When I run the test alone, the function gets mocked before it is imported, so the patched version ends up getting imported when views.py is loaded. This situation is described in the mock documentation's "where to patch" section, which I had read a few times before, but now understand quite well after learning the hard way...
The solution was simple: instead of patching django.core.mail.send_mail I just patched the version that had already been imported in my views.py - myapp.views.send_mail. In other words:
with patch('myapp.views.send_mail') as mocked_send_mail:
...
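In other words, patch the name in the namespace where it is looked up, not where it is defined. A condensed sketch of the pattern (the module, view, and URL names are hypothetical):

# myapp/views.py
from django.core.mail import send_mail  # this import binds send_mail into myapp.views
from django.http import HttpResponse

def notify(request):
    send_mail('Subject', 'Body', 'from@example.com', ['to@example.com'])
    return HttpResponse('ok')

# myapp/tests.py
from django.test import TestCase
from mock import patch

class NotifyTests(TestCase):
    @patch('myapp.views.send_mail')  # patch the name the view actually looks up
    def test_notify_sends_mail(self, mocked_send_mail):
        self.client.get('/notify/')  # assumes a URL routed to the notify view
        self.assertTrue(mocked_send_mail.called)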
Try this to help you debug:
./manage.py test --reverse
In my case I realised that one test was updating certain data which would cause the following test to fail.
Another possibility is that you've disconnected signals in the setUp of a test class and did not reconnect them in tearDown. This explained my issue.
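If that's your situation, the fix is roughly the following (the receiver and model names are hypothetical):

from django.db.models.signals import post_save
from django.test import TestCase

from myapp.models import MyModel       # hypothetical model
from myapp.signals import my_receiver  # hypothetical signal receiver

class MyModelTests(TestCase):
    def setUp(self):
        # Silence the receiver for these tests...
        post_save.disconnect(my_receiver, sender=MyModel)

    def tearDown(self):
        # ...but reconnect it so later test classes see the normal behaviour.
        post_save.connect(my_receiver, sender=MyModel)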
There is a lot of nondeterminism that can come from tests that involve the database.
For instance, most databases don't offer deterministic SELECTs unless you add an ORDER BY. This leads to strange behavior where, when the stars align, the database returns rows in a different order than you might expect, and tests that look like
result = pull_stuff_from_database()
assert result[0] == 1
assert result[1] == 2
will fail because result[0] == 2 and result[1] == 1.
Another source of strange nondeterministic behavior is the id autoincrement together with sorting of some kind.
Let's say each test creates two items and you sort by item name before you do assertions. When you run it by itself, "Item 1" and "Item 2" work fine and pass the test. However, when you run the entire suite, one of the tests generates "Item 9" and "Item 10". "Item 10" is sorted ahead of "Item 9", so your test fails because the order is flipped.
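You can see that string-sort pitfall in isolation: "Item 10" comes before "Item 9" because the comparison is lexicographic, character by character ('1' < '9'):

>>> sorted(["Item 9", "Item 10"])
['Item 10', 'Item 9']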
So I first read @elethan's answer and thought, 'well, this is certainly not my problem, I'm not patching anything'. But it turned out I was indeed patching a method in a different test suite, and it stayed patched for the rest of the test run.
I had something of this sort going on:
from unittest.mock import MagicMock

send_comment_published_signal_mock = MagicMock()
# this replaces the signal's send method for every test that runs after this one
comment_published_signal.send = send_comment_published_signal_mock
You can see why this would be a problem if things are not cleaned up after the test suite runs. The solution in my case was to use a with statement to restrict the scope of the patch:
from unittest.mock import patch

signal = 'authors.apps.comments.signals.comment_published_signal.send'
with patch(signal) as comment_published_signal_mock:
    do_your_test_stuff()
This is the easy part, though, once you know where to look. The guilty test could come from anywhere. The solution is to run the failing test together with other tests until you find the cause, then progressively narrow it down, module by module.
Something like:
./manage.py test A C.TestModel.test_problem
./manage.py test B C.TestModel.test_problem
./manage.py test D C.TestModel.test_problem
Then recursively, if for example B is the problem child:
./manage.py test B.TestModel1 C.TestModel.test_problem
./manage.py test B.TestModel2 C.TestModel.test_problem
./manage.py test B.TestModel3 C.TestModel.test_problem
This answer gives a good explanation for all this.
This answer is in the context of django, but can really apply to any python tests.
Good luck.
This was happening to me too.
When running the tests individually they passed, but when running all tests with ./manage.py test they failed.
The problem in my case was that I had some tests inheriting from unittest.TestCase instead of django.test.TestCase, so some tests were failing because there were records left in the database from previous tests.
After making all tests inherit from django.test.TestCase, the problem went away.
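In other words, roughly this change (the class name is a placeholder). django.test.TestCase wraps each test in a transaction that is rolled back afterwards, so data created by one test can't leak into the next:

# Before: no automatic database cleanup between tests
import unittest

class MyModelTests(unittest.TestCase):
    ...

# After: each test runs inside a transaction that is rolled back
from django.test import TestCase

class MyModelTests(TestCase):
    ...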
I found the answer on https://stackoverflow.com/a/436795/6490637
... and a pony! No, seriously. I am looking for a way to organize tests that "just works". Most things do work, but not all pieces fit together. So here is what I want:
Having tests automatically discovered. This includes doctests. Note that the sum of doctests must not appear as a single test. (i.e. not what py.test --doctest-modules does)
Being able to run tests in parallel. (Something like py.test -n from xdist)
Generating a coverage report.
Make python setup.py test just work.
My current approach involves a tests directory and the load_tests protocol. All files contained are named like test_*.py. This makes python -m unittest discover just work, if I create a file test_doctests.py with the following content.
import doctest
import mymodule1, mymodule2

def load_tests(loader, tests, ignore):
    tests.addTests(doctest.DocTestSuite(mymodule1))
    tests.addTests(doctest.DocTestSuite(mymodule2))
    return tests
This approach also has the upside that one can use setuptools and supply setup(test_suite="unittest2.collector").
However this approach has a few problems.
coverage.py expects to run a script. So I cannot use unittest2 discovery here.
py.test does not run load_tests functions, so it does not find the doctests and the --doctest-modules option is crap.
nosetests runs the load_tests functions, but does not supply any parameters. This appears totally broken on the side of nose.
How can I make things work better than this or fix some of the issues above?
This is an old question, but the problem still persists for some of us! I was just working through it and found a solution similar to kaapstorm's, but with much nicer output. I use py.test to run it, but I think it should be compatible across test runners:
import doctest
from mypackage import mymodule

def test_doctest():
    results = doctest.testmod(mymodule)
    if results.failed:
        raise Exception(results)
What I end up with in a failure case is the printed stdout output that you would get from running doctest manually, with an additional exception that looks like this:
Exception: TestResults(failed=1, attempted=21)
As kaapstorm mentioned, it doesn't count tests properly (unless there are failures), but I find that isn't worth a whole lot provided the coverage numbers come back high :)
I use nose, and found your question when I experienced the same problem.
What I've ended up going with is not pretty, but it does run the tests.
import doctest
import mymodule1, mymodule2

def test_mymodule1():
    assert doctest.testmod(mymodule1, raise_on_error=True)

def test_mymodule2():
    assert doctest.testmod(mymodule2, raise_on_error=True)
Unfortunately it runs all the doctests in a module as a single test. But if things go wrong, at least I know where to start looking. A failure results in a DocTestFailure, with a useful message:
DocTestFailure: <DocTest mymodule1.myfunc from /path/to/mymodule1.py:63 (4 examples)>
This is more of a convenience than a real problem, but the project I'm working on has a lot of separate files, and I basically want to be able to run any of those files (which mostly only contain classes) and have it run the main file.
Now, in the middle of writing the first sentence of this question, I tried just importing main.py into each file, and that seemed to work fine and dandy, but I can't help feeling that:
it might cause problems, and
that I had problems with circular imports before and I am somewhat surprised that nothing came up.
First let me say: this is most likely a bad idea, and it's definitely not at all standard. It will likely lead to confusion and frustration down the road.
However, if you really want to do it, you can put:
if __name__ == "__main__":
    from mypackage import main
    main.run()
Which, assuming mypackage.main.run() is your main entry point, will let you run any file you want as if it were the main file.
You may still hit issues with circular imports, and those will be completely unavoidable unless mypackage.main doesn't import anything… which would make it fairly useless :)
As an alternative, you may wish to use a testing framework like doctest or unittest, then configure your IDE to run the unit tests from a hotkey. This way you're automatically building the repeatable tests as you develop your code.
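For instance, a minimal sketch of that alternative (the module and class names are hypothetical): keep a test file alongside your code and bind your IDE hotkey, or a shell alias, to running it, rather than making every module executable.

# tests/test_smoke.py
import unittest

from mypackage import somemodule  # hypothetical module under test

class SmokeTests(unittest.TestCase):
    def test_module_imports(self):
        self.assertTrue(hasattr(somemodule, 'SomeClass'))  # placeholder assertion

if __name__ == '__main__':
    unittest.main()  # so 'python tests/test_smoke.py' also works from a hotkey or alias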