How to list available tests with Python?

How to just list all discovered tests?
I found this command:
python3.4 -m unittest discover -s .
But it's not exactly what I want, because the above command executes the tests. Suppose a project has a lot of tests and execution takes a few minutes; this forces me to wait until the tests are finished.
What I want is something like the above command's output:
test_choice (test.TestSequenceFunctions) ... ok
test_sample (test.TestSequenceFunctions) ... ok
test_shuffle (test.TestSequenceFunctions) ... ok
or even better, something more like this (the same output, reformatted):
test.TestSequenceFunctions.test_choice
test.TestSequenceFunctions.test_sample
test.TestSequenceFunctions.test_shuffle
but without executing anything, only printing the test "paths" for copy-and-paste purposes.

The command-line discover command is implemented using unittest.TestLoader. Here's a somewhat elegant solution:
import unittest

def print_suite(suite):
    if hasattr(suite, '__iter__'):
        for x in suite:
            print_suite(x)
    else:
        print(suite)

print_suite(unittest.defaultTestLoader.discover('.'))
Running example:
In [5]: print_suite(unittest.defaultTestLoader.discover('.'))
test_accounts (tests.TestAccounts)
test_counters (tests.TestAccounts)
# More of this ...
test_full (tests.TestImages)
This works because TestLoader.discover returns a TestSuite object. TestSuite implements the __iter__ method and is therefore iterable; iterating yields the nested suites and test cases.
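If you want the dotted "copy&paste" format from the question, a small variant of the function above (a sketch) can print each test's id(), which returns exactly that dotted path:

import unittest

def print_test_ids(suite):
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            print_test_ids(item)  # recurse into nested suites
        else:
            print(item.id())      # e.g. test.TestSequenceFunctions.test_choice

print_test_ids(unittest.defaultTestLoader.discover('.'))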

You could do something like:
from your_tests import TestSequenceFunctions
print('\n'.join(name for name in dir(TestSequenceFunctions) if name.startswith('test_')))
I'm not sure if there is an exposed method for this via unittest.main.
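unittest's TestLoader does expose something close for a single class, though: getTestCaseNames returns the sorted test method names of a TestCase subclass. A minimal sketch, reusing the your_tests module name from the snippet above:

import unittest
from your_tests import TestSequenceFunctions  # module name reused from the snippet above

loader = unittest.TestLoader()
print('\n'.join(loader.getTestCaseNames(TestSequenceFunctions)))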

pytest: custom mark with arguments

I would like to mark tests in the following fashion:
@pytest.mark.expectedruntime(100)
def test_function():
    blahblah()
And then run pytest with, for example, -m "not expectedruntime>50" (or some other syntax), so that only tests with an expected run time of 50 or less would be run, as well as tests without that mark.
Is there a way to do this with native pytest/with a plugin? If not, what would I need to do in order to accomplish this?
https://docs.pytest.org/en/latest/writing_plugins.html mentions a custom mark called "mark_with" which consumes arguments but doesn't mention how to actually use those arguments.
I hope this example helps: http://doc.pytest.org/en/latest/example/markers.html#custom-marker-and-command-line-option-to-control-test-runs
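In case the link moves, here is a rough sketch of how such a filter could look in a conftest.py. This is not an official plugin: the option name --max-expected-runtime is invented for this example, and item.get_closest_marker needs a reasonably recent pytest:

import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--max-expected-runtime", type=int, default=None,
        help="only run tests whose expectedruntime mark is <= this value "
             "(tests without the mark always run)")

def pytest_collection_modifyitems(config, items):
    limit = config.getoption("--max-expected-runtime")
    if limit is None:
        return  # option not given: run everything
    skip = pytest.mark.skip(reason="expected runtime exceeds --max-expected-runtime")
    for item in items:
        marker = item.get_closest_marker("expectedruntime")
        if marker is not None and marker.args and marker.args[0] > limit:
            item.add_marker(skip)

Recent pytest versions also warn about unknown marks, so you would additionally register expectedruntime under markers in pytest.ini.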

Python unittest, running only the tests that failed

I have a large python test file using unittest that I run from the command line. Some tests take a while to run. This is a mild pain point because I'm often only concerned with the last test I added. What I want is this:
1. Add a test.
2. Run the tests (the new one fails because I haven't written the code to make it pass).
3. Implement the behaviour.
4. Run only the test that failed last time.
5. Fix the silly error I made when implementing the code.
6. Run only the failing test, which passes this time.
7. Run all the tests to find out what I broke.
Is it possible to do this from the command line?
(Not a fully automated solution, but better than the existing one)
If you pass the name of a test class as an argument to the test script, only the tests in that class will be run. For example, if you only want to run the tests in the MyTest class in the script test_whatever.py:
python3 test_whatever.py MyTest
You can also specify an individual test as a member of that class. For example, suppose you want to run the test test_something in the class MyTest:
python3 test_whatever.py MyTest.test_something
Every test function is declared like:
def test_something_something(self):
If you add an underscore in front, like:
def _test_something_something(self):
that test will be ignored. One thing you can do is a quick find-and-replace in vim: replace all occurrences of "test_" with "_test_", then find the one test that failed and remove its underscore.
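For example (a sketch with made-up test names; note that unittest also has a built-in skip decorator, which avoids the renaming trick entirely):

import unittest

class MyTest(unittest.TestCase):
    def test_kept(self):        # starts with "test": collected and run
        pass

    def _test_hidden(self):     # leading underscore: ignored by the loader
        pass

    @unittest.skip("temporarily disabled")  # built-in alternative to renaming
    def test_skipped(self):
        pass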
Just run the tests with the --last-failed option (you will need pytest for this).

How to set nosetests to only log errors?

I have a hundred or so unit tests that I'm running with nose. When I change something in my models I obviously get failures, with some errors mixed in. Is there an easy way to tell nose to log only the errors? Then I wouldn't have to go through pages of failures to look for one error log.
nose provides tools for testing exceptions (like unittest does). Try this example (and read about the other tools in the Nose Testing Tools documentation):
from nose.tools import *

l = []
d = dict()

@raises(Exception)
def test_Exception1():
    '''this test should pass'''
    l.pop()

@raises(KeyError)
def test_Exception2():
    '''this test should pass'''
    d[1]
An alternative is to redirect stderr to stdout and use grep (adjust the number of context lines, 15 in this example, to your liking):
nosetests tests.py 2>&1 | grep "ERROR" -A 15
Another alternative is to use --pdb-errors to stop on every error and open the debugger.
It's not what you asked, but it's what I ended up using.
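For example:
nosetests tests.py --pdb-errors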

How to execute a python test using httpretty/sure

I'm new to python tests so don't hesitate to provide any obvious information.
Basically I want to do some RESTful tests using python, and found the httpretty and sure libraries which look really nice.
I have a python file containing:
#!/usr/bin/python
from sure import expect
import requests, httpretty

@httpretty.activate
def RestTest():
    httpretty.register_uri(httpretty.GET, "http://localhost:8090/test.json",
                           body='{"status": "ok"}',
                           content_type="application/json")
    response = requests.get("http://localhost:8090/test.json")
    expect(response.json()).to.equal({"status": "ok"})
Which is basically the same as the example code provided at https://github.com/gabrielfalcao/HTTPretty
My question is; how do I simply run this test to see it either passing or failing? I tried just executing it using ./pythonFile but that doesn't work.
If your test is implemented as a Python function, then of course simply trying to execute the file isn't going to run the test: nothing in that file actually calls RestTest.
You need some sort of test framework that will call your tests and collate the results.
One such solution is python-nose, which will look for functions and methods named test_* and run them. So if you were to rename RestTest to test_rest, you could run:
$ nosetests myfile.py
.
----------------------------------------------------------------------
Ran 1 test in 0.012s
OK
The nosetests command has a variety of options that control which tests are run, how errors are handled and reported, and more.
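For example, nose can run a single test function using its file:name selection syntax (the file and test names here are the ones from this answer):
nosetests -v myfile.py:test_rest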
Python 3 includes similar functionality in the unittest module, which is also available as a backport for Python 2 called unittest2. You could modify your code to take advantage of unittest like this:
#!/usr/bin/python
from sure import expect
import requests, httpretty
import unittest

class RestTest(unittest.TestCase):
    @httpretty.activate
    def test_rest(self):
        httpretty.register_uri(httpretty.GET, "http://localhost:8090/test.json",
                               body='{"status": "ok"}',
                               content_type="application/json")
        response = requests.get("http://localhost:8090/test.json")
        expect(response.json()).to.equal({"status": "ok"})

if __name__ == '__main__':
    unittest.main()
Running your file would now produce output similar to what we saw with nosetests:
$ python myfile.py
.
----------------------------------------------------------------------
Ran 1 test in 0.012s
OK
Have you tried calling your function? Or does the decorator mean you don't have to call it explicitly? If I call your function, it seems to work. If I change the value on one side of the expect, it complains properly about the values not matching.
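A sketch of that: since @httpretty.activate only wraps the function, executing the file directly still needs an explicit call at the bottom:

if __name__ == '__main__':
    RestTest()                # sure's expect raises AssertionError on a mismatch
    print('RestTest passed')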

Can tests with pytest fixtures be run interactively?

I have some tests written using pytest and fixtures, e.g.:
import os
import shutil
import tempfile

import pytest

class TestThing:
    @pytest.fixture()
    def temp_dir(self, request):
        my_temp_dir = tempfile.mkdtemp()
        def fin():
            shutil.rmtree(my_temp_dir)  # clean up after the test
        request.addfinalizer(fin)
        return my_temp_dir

    def test_something(self, temp_dir):
        with open(os.path.join(temp_dir, 'test.txt'), 'w') as f:
            f.write('test')
This works fine when the tests are invoked from the shell, e.g.
$ py.test
but I don't know how to run them from within a python/ipython session; trying e.g.
tt = TestThing()
tt.test_something(tt.temp_dir())
fails because temp_dir requires a request object to be passed in. So, how does one invoke a fixture with a request object injected?
Yes. You don't have to manually assemble any test fixtures or anything like that. Everything runs just like calling pytest in the project directory.
Method 1:
This is the best method because it gives you access to the debugger if your test fails.
In an IPython shell use:
ipython> run -m pytest prj/
This will run all your tests in the prj/tests directory.
It gives you access to the debugger, and you can set breakpoints if you have import ipdb; ipdb.set_trace() in your program (https://docs.pytest.org/en/latest/usage.html#setting-breakpoints).
Method 2:
Use !pytest while in the test directory. This won't give you access to the debugger. However, if you use
ipython> !pytest --pdb
then a test failure will drop you into the debugger (in a subshell), so you can run your post-mortem analysis (https://docs.pytest.org/en/latest/usage.html#dropping-to-pdb-python-debugger-on-failures).
Using these methods you can even run individual modules/test functions/test classes from IPython (https://docs.pytest.org/en/latest/usage.html#specifying-tests-selecting-tests):
ipython> run -m pytest prj/tests/test_module1.py::TestClass1::test_function1
You can bypass the pytest.fixture decorator and directly call the wrapped function:
tmp = tt.temp_dir.__pytest_wrapped__.obj(request=...)
Accessing internals is bad, but when you need it...
The best method I have, which is far from ideal, is to just %run the test file, then manually assemble the fixtures, and then simply call the tests. The problem with this is tracking down the modules where the default fixtures are defined and then calling them in their order of dependencies.
You can use two cells for this.
First:
def test_something():
    assert True
Second:
from tempfile import mktemp

test_file = mktemp('.py', 'test_')
open(test_file, 'w').write(_i)  # write the previous cell's input (_i is set by IPython)
!pytest $test_file
You can also do this in one cell, but then you won't have code highlighting:
from tempfile import mktemp

test_code = """
def test_something():
    assert True
"""

test_file = mktemp('.py', 'test_')
open(test_file, 'w').write(test_code)
!pytest $test_file
The simple answer is that you don't really want to run py.test interactively from Python. Most people set up some integration with their text editor or IDE to run py.test and parse its output, but really it's a command-line tool and that is how it should be used.
As a side note, you may want to check out the built-in tmpdir fixture: http://pytest.org/latest/tmpdir.html It seems like you're re-inventing it.
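A minimal sketch of the same test using the built-in tmpdir fixture; pytest creates the directory and cleans it up for you, so the request/finalizer plumbing disappears:

def test_something(tmpdir):
    tmpdir.join('test.txt').write('test')  # tmpdir is a py.path.local object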
