Python unittest, running only the tests that failed

I have a large python test file using unittest that I run from the command line. Some tests take a while to run. This is a mild pain point because I'm often only concerned with the last test I added. What I want is this:
add test.
run tests (one fails because I haven't written the code to make it pass)
implement the behaviour
run only the test that failed last time
fix the silly error I made when implementing the code
run only the failing test, which passes this time
run all the tests to find out what I broke.
Is it possible to do this from the command line?

(Not a fully automated solution, but better than the existing one)
If you pass the name of a test class as an argument to the test script, only the tests in that class will be run. For example, if you only want to run the tests in the MyTest class in the script test_whatever.py:
python3 test_whatever.py MyTest
You can also specify an individual test as a member of that class. For example, suppose you want to run the test test_something in the class MyTest:
python3 test_whatever.py MyTest.test_something
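You can also list several test names at once (the names here are illustrative):
python3 test_whatever.py MyTest.test_something MyTest.test_something_else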

Every test function is declared like:
def test_something_something(self):
If you add an underscore in front, like:
def _test_something_something(self):
that test will be ignored, because unittest only collects methods whose names start with test. One thing you can do is a quick find and replace in vim: replace every "test_" with "_test_" (for example with :%s/def test_/def _test_/), then find the one test that failed and remove its leading underscore.

Just run the tests with the --last-failed option (you may need pytest for this; it can also run unittest-style tests).
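For example, with pytest's built-in cache plugin:
pytest test_whatever.py     # full run; failures are recorded
pytest --last-failed        # re-run only the tests that failed (short form: --lf)
pytest --failed-first       # or run the previous failures first, then the rest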

Related

Why do I only get a function as a return value when using a fixture (from pytest) in a test script?

I want to write test functions for my code and decided to use pytest. I had a look at this tutorial: https://semaphoreci.com/community/tutorials/testing-python-applications-with-pytest
My real code involves another script, written by me, so I made a toy example that reproduces the same problem but does not rely on my other code.
import pytest

@pytest.fixture()
def example():
    value = 10
    return value

def test_value(example):
    print(example)
    assert example == 10

test_value(example)
When I run the script with this toy example, print outputs a function:
<function example at 0x0391E540>
and the assertion fails.
If I try to call example() with parentheses, I get this:
Failed: Fixture "example" called directly. Fixtures are not meant to be called directly,
but are created automatically when test functions request them as parameters.
See https://docs.pytest.org/en/stable/fixture.html for more information about fixtures, and
https://docs.pytest.org/en/stable/deprecations.html#calling-fixtures-directly about how to update your code.
I am sure I am missing something important here, but searching Google did not help, which is why I hope somebody here can provide some assistance.
Remove this line from your script:
test_value(example)
and run the file with pytest file.py.
Fixtures are resolved automatically by pytest when it calls your test functions. In your example you call the test directly, so example is just a plain function object; that is what gets printed and compared, which is why the assertion fails.
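Put together, the corrected file could look like this (a minimal sketch of the fixed example):
import pytest

@pytest.fixture()
def example():
    return 10

def test_value(example):
    # pytest injects the fixture's return value (10) here,
    # not the fixture function itself
    print(example)
    assert example == 10
Running pytest file.py then collects test_value, and the assertion passes.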

Using parameters in unit tests in Python 3.x from IDEA

We run unit tests in Python that have previously been hard-coded with information such as which server the tests should run on. Instead, I'd like to pass that information to the test via command-line arguments. The problem is that, using the Python unit testing framework, I'm stuck passing my custom parameters as a single argument, which is then caught by utrunner.py, which assumes the argument specifies which tests to run (test discovery).
So running from IDEA I send out this command to start up the test suite:
C:\Users\glenp\AppData\Local\Programs\Python\Python36-32\python.exe C:\Users\glenp\.IntelliJIdea2016.3\config\plugins\python\helpers\pycharm\utrunner.py C:\Root\svn\trunk\src\test\python\test.py "server=deathStar language=klingon" true
These are the parameters that get read back to me from print(sys.argv):
['C:\\Users\\glenp\\.IntelliJIdea2016.3\\config\\plugins\\python\\helpers\\pycharm\\utrunner.py', 'C:\\Root\\svn\\trunk\\src\\test\\python\\schedulePollTest.py', 'server=deathStar language=klingon', 'true']
Note that I'm not actually calling my own test; I'm calling utrunner.py with my test as one of its arguments.
I get a FileNotFound error: FileNotFoundError: [Errno 2] No such file or directory: 'server=deathStar language=klingon' which kills the test before I get to run it.
I think I need to modify either this:
if __name__ == "__main__":
    unittest.main()
or this:
class testThatWontRun(unittest.TestCase):
I COULD modify imp.py, which is throwing the error, but I happen to be on a team and modifying core Python functionality isn't going to scale well at all. (And everyone on the team will be sad)
So, is there a way to phrase my arguments in a way that utrunner.py (and imp.py) will ignore those parameters?
Yes, there is a way to get utrunner.py to ignore the parameters: put a -- in front of each parameter you want it to ignore,
so server=deathStar becomes --server=deathStar.
Thank you rubber ducky :)
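Inside the test module you can then read those pass-through flags out of sys.argv yourself. A minimal sketch, assuming the runner leaves the unrecognized --key=value flags in sys.argv as shown above (get_arg is a hypothetical helper, not part of any framework):
import sys

def get_arg(name, default=None):
    # Scan sys.argv for a "--name=value" flag that the test
    # runner ignored and passed through
    prefix = '--%s=' % name
    for arg in sys.argv:
        if arg.startswith(prefix):
            return arg[len(prefix):]
    return default

SERVER = get_arg('server', 'localhost')     # e.g. --server=deathStar
LANGUAGE = get_arg('language', 'english')   # e.g. --language=klingon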

Avoid setUpClass running every time for nose cherry-picked tests

This is my tests class, in mymodule.foo:
class SomeTestClass(TestCase):
    @classmethod
    def setUpClass(cls):
        # Do the setup for my tests
        pass

    def test_Something(self):
        # Test something
        pass

    def test_AnotherThing(self):
        # Test another thing
        pass

    def test_DifferentStuff(self):
        # Test different stuff
        pass
I'm running the tests from Python with the following lines:
tests_to_run = ['mymodule.foo:test_AnotherThing', 'mymodule.foo:test_DifferentStuff']
result = nose.run(defaultTest=tests_to_run)
(This is obviously a bit more complicated and there's some logic to pick what tests I want to run)
Nose will run just the selected tests, as expected, but the setUpClass will run once for every test in tests_to_run. Is there any way to avoid this?
What I'm trying to achieve is to be able to run some dynamic set of tests while using nose in a Python script (not from the command line)
As @jonrsharpe mentioned, setUpModule is what I was after: it runs just once for the whole module where my tests reside.
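A minimal sketch of what that looks like in the test module (the class and test names follow the question; the bodies are placeholders):
# mymodule/foo.py
from unittest import TestCase

def setUpModule():
    # Runs once for this module per test run, no matter how many of
    # its tests are cherry-picked into defaultTest
    pass

class SomeTestClass(TestCase):
    def test_AnotherThing(self):
        pass

    def test_DifferentStuff(self):
        pass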

Can't get Nose to honor the attributes I set on my tests

I'm using Django 1.7 with django-nose 1.4 and nose 1.3.6.
According to the documentation, I should be able to select tests to be run by using attributes. I have a test set like this:
from nose.plugins.attrib import attr
from django_webtest import TransactionWebTest

@attr(isolation="menu")
class MenuTestCase(TransactionWebTest):
    def test_home(self):
        pass
When I try to run my tests with:
./manage.py test -a isolation
Nose eliminates all tests from the run. In other words, it does not run any test. Note that when I do not use -a, all the tests run fine. I've also tried:
-a=isolation
-a isolation=menu
-a=isolation=menu
-a '!isolation'
The last one should select almost all of my test suite, since the isolation attribute is used on only one class, but it does not select anything! I'm starting to think I just don't understand how the whole attributes system works.
It is unclear to me what causes the problem. It probably has to do with how Django passes the command line arguments to django-nose, which then passes them to nose. At any rate, using the long form of the command line arguments solves the problem:
$ ./manage.py test --attr=isolation
and similarly:
--attr=isolation=menu
--attr='!isolation' (with the single quotes to prevent the shell from interpreting !)
--eval-attr=isolation
--eval-attr='isolation=="menu"' (the single quotes prevent the shell from removing the double quotes)
etc...

Py.test skip messages don't show in Jenkins

I have a minor issue using py.test for my unit tests.
I use py.test to run my tests and output a junitxml report of the tests.
This xml report is imported in jenkins and generates nice statistics.
When I use a test class which derives from unittest.TestCase,
I skip expected failures using:
@unittest.skip("Bug 1234 : This does not work")
This message also shows up in jenkins when selecting this test.
When I don't use a unittest.TestCase class, e.g. to use py.test parametrize functionality,
I skip expected failures using:
@pytest.mark.xfail(reason="Bug 1234 : This does not work", run=False)
But then this reason is not actually displayed in jenkins, instead it will say:
Skip Message
expected test failure
How can I fix this?
I solved it using this line as the first line of the test:
pytest.skip("Bug 1234: This does not work")
I'd rather have used one of the pytest decorators, but this'll do.
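For context, that looks something like this inside a test (the test name here is illustrative):
import pytest

def test_broken_feature():
    # Skip at runtime; the reason string is carried into the
    # junitxml report, which is what Jenkins displays
    pytest.skip("Bug 1234 : This does not work")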
I had a similar problem, except I had a different Jenkins message and could not tell which test was skipped.
It turns out that if the only test in the module is a skipped test, then Jenkins would not show the test in the test result list (using either the decorator or jr-be's solution). You could see that there was a skipped test in the total results, but could not tell which test, or which module, the skipped test was in.
To solve this (OK, hack-solve), I went back to using the decorator on my test and added a dummy test (so I have one test that runs and one that gets skipped):
@pytest.skip('SONIC-3218')
def test_segments_create_delete(self, api):
    logging.info('TestCreateDeleteSegments.test_segments_create_delete')

def test_dummy(self, api):
    '''
    Dummy test to see if the suite will display in Jenkins if one
    test is run and one is skipped (instead of having only skipped tests)
    '''
    logging.info('TestCreateDeleteSegments.test_dummy')
For me that works since I would rather have 1 extra dummy test and be able to find my skipped tests.
